An Algorithmist's Toolkit, September 15, 2009: Lecture 2
18.409 An Algorithmist's Toolkit    September 15, 2009
Lecture 2
Lecturer: Jonathan Kelner    Scribe: Mergen Nachin

1 Administrative Details

Sign up online for scribing.

2 Review of Lecture 1

All of the following are covered in detail in the notes for Lecture 1:

- The definition of L_G, specifically that L_G = D_G - A_G, where D_G is the diagonal matrix of degrees and A_G is the adjacency matrix of the graph G.
- The action of L_G on a vector x, namely that [L_G x]_i = deg(i) * (x_i - average of x on the neighbors of i).
- The eigenvalues of L_G are λ_1 ≤ λ_2 ≤ ... ≤ λ_n, with corresponding eigenvectors v_1, ..., v_n. The first and most trivial eigenvector is v_1 = 1, with eigenvalue λ_1 = 0. We are mostly interested in v_2 and λ_2.

3 Properties of the Laplacian

3.1 Some simple properties

Lemma 1 (Edge Union)  If G and H are two graphs on the same vertex set with disjoint edge sets, then

    L_{G ∪ H} = L_G + L_H    (additivity)    (1)

Lemma 2 (Isolated Vertices)  If a vertex i ∈ G is isolated, then the corresponding row and column of the Laplacian are zero, i.e. [L_G]_{i,j} = [L_G]_{j,i} = 0 for all j.

Lemma 3 (Disjoint Union)  These together imply that the Laplacian of the disjoint union of G and H is the direct sum of L_G and L_H, i.e.:

    L_{G ∪ H} = L_G ⊕ L_H = ( L_G   0  )    (2)
                            (  0   L_H )

Proof  Consider the graph G ∪ v(H) = (V_G ∪ V_H, E_G), namely the graph consisting of G together with the vertex set of H as isolated vertices. Define v(G) ∪ H similarly. By the second lemma,

    L_{G ∪ v(H)} = ( L_G  0 )    and    L_{v(G) ∪ H} = ( 0   0  )
                   (  0   0 )                          ( 0  L_H )

By definition, G ∪ H = (G ∪ v(H)) ∪ (v(G) ∪ H), and so by the first lemma:

    L_{G ∪ H} = L_G ⊕ L_H = ( L_G   0  ).
                            (  0   L_H )

This implies that the Laplacian of any graph is the direct sum of the Laplacians of its connected components.
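The lemmas above are easy to check numerically. Below is a minimal sketch (not part of the original notes; the helper names are mine) that builds L_G = D_G - A_G as a dense list of lists and verifies additivity (Lemma 1) and the block structure of a disjoint union (Lemma 3):

```python
# Sketch: Laplacians as dense lists of lists, illustrating L_G = D_G - A_G
# and Lemmas 1 and 3. Vertices are 0..n-1; edges are unordered pairs.

def laplacian(n, edges):
    """Return L = D - A for an undirected graph on n vertices."""
    L = [[0] * n for _ in range(n)]
    for (u, v) in edges:
        L[u][u] += 1          # degree contributions (D_G)
        L[v][v] += 1
        L[u][v] -= 1          # adjacency contributions (-A_G)
        L[v][u] -= 1
    return L

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Lemma 1 (edge union): two edge-disjoint graphs on the same 4 vertices.
G_edges = [(0, 1), (1, 2)]
H_edges = [(2, 3)]
assert laplacian(4, G_edges + H_edges) == mat_add(laplacian(4, G_edges),
                                                  laplacian(4, H_edges))

# Lemma 3 (disjoint union): an edge on {0,1} plus an edge on {2,3} gives a
# block-diagonal Laplacian, with zeros between the two components.
L = laplacian(4, [(0, 1), (2, 3)])
assert L[0][2] == L[0][3] == L[1][2] == L[1][3] == 0
```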
Theorem 4 (Disjoint Union Spectrum)  If L_G has eigenvectors v_1, ..., v_n with eigenvalues λ_1, ..., λ_n, and L_H has eigenvectors w_1, ..., w_n with eigenvalues μ_1, ..., μ_n, then L_{G ∪ H} has eigenvectors

    v_1 ⊕ 0, ..., v_n ⊕ 0, 0 ⊕ w_1, ..., 0 ⊕ w_n

with corresponding eigenvalues

    λ_1, ..., λ_n, μ_1, ..., μ_n.

Proof  By the previous lemma,

    L_{G ∪ H} (v_1 ⊕ 0) = ( L_G   0  ) ( v_1 ) = ( λ_1 v_1 )
                          (  0   L_H ) (  0  )   (    0    )

Thus v_1 ⊕ 0 is an eigenvector of L_{G ∪ H} with eigenvalue λ_1. The rest follow by symmetry.

3.2 The Laplacian of an edge

Definition 5  Let L_e be the Laplacian of the graph on n vertices consisting of just the edge e.

Example 1  If e is the edge (v_1, v_2), then L_e is the n × n matrix with the block

    (  1  -1 )
    ( -1   1 )

in its top-left corner and zeros everywhere else. By additivity, this lets us write:

    L_G = Σ_{e ∈ E} L_e    (3)

This will allow us to prove a number of facts about the Laplacian by proving them for one edge and adding them up. The more general technique, which we'll use more later, is to bound Laplacians by adding matrices of substructures. So for an edge e,

    L_e = (  1  -1 ) ⊕ [zeros].
          ( -1   1 )

Note that

    (  1  -1 ) = (  1 ) ( 1  -1 ),    and    (  1  -1 ) (  1 ) = 2 (  1 ),
    ( -1   1 )   ( -1 )                      ( -1   1 ) ( -1 )     ( -1 )

and so (1, -1)^T is an eigenvector with eigenvalue 2. This decomposition implies:

    x^T L_e x = ( x_1  x_2 ) (  1 ) ( 1  -1 ) ( x_1 ) = (x_1 - x_2)^2.    (4)
                             ( -1 )           ( x_2 )

Remark  The Laplacian is a quadratic form, specifically:

    x^T L_G x = x^T ( Σ_{e ∈ E} L_e ) x = Σ_{e ∈ E} x^T L_e x = Σ_{(i,j) ∈ E} (x_i - x_j)^2    (5)

This implies that L is positive semidefinite.
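The quadratic-form identity (5) is also easy to confirm numerically. A small check (my own illustration, not from the notes; helper names are mine):

```python
# Numeric check of identity (5): x^T L_G x = sum over edges of (x_i - x_j)^2.

def laplacian(n, edges):
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    return L

def quad_form(L, x):
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # a 4-cycle
L = laplacian(4, edges)
x = [3.0, -1.0, 4.0, 1.5]
# The quadratic form agrees with the sum of squared differences over edges,
# which in particular is nonnegative: L is PSD.
assert abs(quad_form(L, x) - sum((x[i] - x[j]) ** 2 for i, j in edges)) < 1e-9
```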
3.3 Review of Positive Semidefiniteness

Definition 6  A symmetric matrix M is positive semidefinite (PSD) if, for all x ∈ R^n, x^T M x ≥ 0. M is positive definite (PD) if the inequality is strict for all x ≠ 0.

Lemma 7  M is PSD iff all eigenvalues λ_i ≥ 0. Similarly, M is PD iff all eigenvalues λ_i > 0.

Proof  Consider the matrix M in its eigenbasis, that is, M = Q^T Λ Q. Writing y = Qx, we have x^T M x = y^T Λ y = Σ_i λ_i y_i^2, and clearly this is ≥ 0 for all y ∈ R^n iff λ_i ≥ 0 for all i. The PD case is similar.

Lemma 8 (PSD Matrix Decomposition)  M is PSD iff there exists a matrix A such that

    M = A^T A.    (6)

Note that A can be k × n for any k, and that it need not be square. Importantly, note that A is not unique.

Proof  (⇒) If M is positive semidefinite, recall that M can be diagonalized as M = Q^T Λ Q, thus

    M = Q^T Λ^{1/2} Λ^{1/2} Q = (Λ^{1/2} Q)^T (Λ^{1/2} Q),

where Λ^{1/2} has √λ_i on the diagonal. (⇐) If M = A^T A, then letting y = Ax ∈ R^k, we see that:

    x^T M x = x^T A^T A x = (Ax)^T (Ax) = y^T y = ||y||^2 ≥ 0.

3.4 Factoring the Laplacian

We know from the previous section that we can factor L as A^T A using eigenvectors, but there also exists a much nicer factorization, which we will show here.

Definition 9  Let m be the number of edges and n be the number of vertices. Then the incidence matrix Δ = Δ_G is the m × n matrix given by:

    Δ_{e,v} =  1   if e = (v, w) and v < w
              -1   if e = (v, w) and v > w    (7)
               0   otherwise.
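The (⇐) direction of Lemma 8 can be spot-checked numerically: any matrix of the form M = A^T A has x^T M x = ||Ax||^2 ≥ 0. A quick sanity check (my own example, with a rectangular 2 × 3 matrix A):

```python
# Check that M = A^T A is PSD: x^T M x equals ||Ax||^2 for every test vector.

A = [[1.0, -2.0, 0.5],
     [0.0,  3.0, -1.0]]          # k x n with k = 2, n = 3
k, n = 2, 3

# M = A^T A, an n x n symmetric matrix.
M = [[sum(A[e][i] * A[e][j] for e in range(k)) for j in range(n)]
     for i in range(n)]

def quad_form(M, x):
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

for x in [[1, 0, 0], [1, -1, 2], [-3, 0.5, 7]]:
    Ax = [sum(A[e][i] * x[i] for i in range(n)) for e in range(k)]
    assert abs(quad_form(M, x) - sum(v * v for v in Ax)) < 1e-9
    assert quad_form(M, x) >= 0   # PSD
```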
Example 2  [Figure: a small four-vertex graph with edges labeled a, b, c, shown together with its Laplacian L_G and incidence matrix Δ_G; the matrices did not survive transcription.]

Lemma 10  L_G = Δ^T Δ.

Proof  Observe that

    [Δ^T Δ]_{ij} = (i-th column of Δ) · (j-th column of Δ) = Σ_e Δ_{e,v_i} Δ_{e,v_j}.

This gives three cases:

- When i = j,

      [Δ^T Δ]_{ii} = Σ_e Δ_{e,v_i}^2 = Σ_{e incident to v_i} 1 = deg(i).

- When i ≠ j and there exists an edge e between v_i and v_j,

      [Δ^T Δ]_{ij} = Σ_e Δ_{e,v_i} Δ_{e,v_j} = Δ_{e,v_i} Δ_{e,v_j} = -1.

- When i ≠ j and no edge exists between v_i and v_j,

      [Δ^T Δ]_{ij} = Σ_e Δ_{e,v_i} Δ_{e,v_j} = 0,

  as every edge is non-incident to at least one of v_i, v_j.

Corollary 11  Note that

    x^T L_G x = ||Δx||^2 = Σ_{(i,j) ∈ E} (x_i - x_j)^2.

This gives another proof that L is PSD.

3.5 Dimension of the Null Space

Theorem 12  If G is connected, the null space of L_G is 1-dimensional and is spanned by the vector 1.

Proof  Let x ∈ null(L), i.e. L_G x = 0. This implies

    x^T L_G x = Σ_{(i,j) ∈ E} (x_i - x_j)^2 = 0.

Thus x_i = x_j for every (i, j) ∈ E. As G is connected, this means that all the x_i are equal. Thus every member of the null space is a multiple of 1.

Corollary 13  If G is connected, λ_2 > 0.

Corollary 14  The dimension of the null space of L_G is exactly the number of connected components of G.
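Lemma 10 can be checked directly on a small graph. A sketch (helper names mine): build the incidence matrix Δ of Definition 9 and confirm that Δ^T Δ reproduces L_G = D_G - A_G.

```python
# Verify L_G = Δ^T Δ on a triangle with a pendant vertex.

def laplacian(n, edges):
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    return L

def incidence(n, edges):
    """m x n matrix: row e has +1 at the smaller endpoint, -1 at the larger."""
    D = [[0] * n for _ in edges]
    for row, (u, v) in enumerate(edges):
        lo, hi = min(u, v), max(u, v)
        D[row][lo] = 1
        D[row][hi] = -1
    return D

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n, m = 4, len(edges)
Delta = incidence(n, edges)
DtD = [[sum(Delta[e][i] * Delta[e][j] for e in range(m))
        for j in range(n)] for i in range(n)]
assert DtD == laplacian(n, edges)
```

Note that the proof never uses the sign convention: flipping the signs of any row of Δ leaves Δ^T Δ unchanged, which is one way to see that the factorization is not unique.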
4 Spectra of Some Common Graphs

We compute the spectra of some graphs.

Lemma 15 (Complete graph)  The Laplacian of the complete graph K_n on n vertices has eigenvalue 0 with multiplicity 1 and eigenvalue n with multiplicity n - 1, with associated eigenspace {x : x · 1 = 0}.

Proof  By Corollary 13, we conclude that eigenvalue 0 has multiplicity 1. Now take any vector v which is orthogonal to 1 and consider [L_{K_n} v]_i. Note that this value is equal to

    (n - 1) v_i - Σ_{j ≠ i} v_j = n v_i - Σ_j v_j = n v_i.

Hence any vector v which is orthogonal to 1 is an eigenvector with eigenvalue n.

Lemma 16 (Ring graph)  The Laplacian of the ring graph R_n on n vertices has eigenvectors

    x_k(u) = sin(2πku/n)    and    y_k(u) = cos(2πku/n)    for 0 ≤ k ≤ n/2.

Both x_k and y_k have eigenvalue 2 - 2cos(2πk/n). Note that x_0 = 0 should be ignored and y_0 is 1, and when n is even x_{n/2} = 0 should be ignored and we only have y_{n/2}.

Proof  The best way to see this is to plot the graph on the circle using these vectors as coordinates. [Figure: the plot for k = 3 did not survive transcription.] By symmetry it suffices to consider vertex 1. Keep in mind that sin(2x) = 2 sin(x) cos(x). Then,

    [L x_k]_1 = 2 x_k(1) - x_k(0) - x_k(2)
              = 2 sin(2πk/n) - 0 - sin(4πk/n)
              = 2 sin(2πk/n) - 2 sin(2πk/n) cos(2πk/n)
              = (2 - 2cos(2πk/n)) sin(2πk/n)
              = (2 - 2cos(2πk/n)) x_k(1).

Note that this shows that x(u) = Re(e^{2πi(ku+c)/n}) is an eigenvector for any k ∈ Z, c ∈ C.

Lemma 17 (Path graph)  The Laplacian of the path graph P_n on n vertices has the same eigenvalues as R_{2n}, namely 2 - 2cos(πk/n), with eigenvectors

    v_k(u) = cos(πk(u + 1/2)/n)    for 0 ≤ k < n.
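Both claims are easy to verify numerically. A spot-check (my own, with a small helper; not from the notes) that K_n sends any v ⊥ 1 to n·v, and that the ring vectors x_k are eigenvectors with eigenvalue 2 - 2cos(2πk/n):

```python
# Numeric spot-check of Lemmas 15 and 16.
import math

def laplacian(n, edges):
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    return L

def apply(L, x):
    return [sum(L[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

n = 7

# Complete graph: v = e_0 - e_1 is orthogonal to the all-ones vector.
K = laplacian(n, [(i, j) for i in range(n) for j in range(i + 1, n)])
v = [1, -1] + [0] * (n - 2)
assert apply(K, v) == [n * vi for vi in v]      # L_{K_n} v = n v

# Ring graph: check x_k(u) = sin(2πku/n) for k = 2.
R = laplacian(n, [(u, (u + 1) % n) for u in range(n)])
k = 2
x = [math.sin(2 * math.pi * k * u / n) for u in range(n)]
lam = 2 - 2 * math.cos(2 * math.pi * k / n)
assert all(abs(a - lam * b) < 1e-9 for a, b in zip(apply(R, x), x))
```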
Proof  We will realize P_n as a quotient of R_{2n}. Suppose z is an eigenvector of L_{R_{2n}}, with eigenvalue λ, in which z_i = z_{2n-1-i} for 0 ≤ i < 2n. Take the first n components of z and call this vector v. Note that for 0 < i < n - 1, vertex i has the same neighbors in P_n as in R_{2n}, so:

    [L_{P_n} v]_i = 2v_i - (sum of v over neighbors of i in P_n)
                  = 2z_i - (sum of z over neighbors of i in R_{2n})
                  = [L_{R_{2n}} z]_i
                  = λ z_i = λ v_i.

Now consider the case i = 0 (the case i = n - 1 is symmetric). Using the symmetry z_0 = z_{2n-1}:

    [L_{P_n} v]_0 = v_0 - v_1
                  = 2v_0 - v_1 - v_0
                  = 2z_0 - z_1 - z_0
                  = 2z_0 - z_1 - z_{2n-1}
                  = [L_{R_{2n}} z]_0 = λ z_0 = λ v_0.

Hence v is an eigenvector of L_{P_n}. Now we show that such a v exists, that is, that there exists an eigenvector z of L_{R_{2n}} with z_i = z_{2n-1-i} for 0 ≤ i < 2n. Take

    z_k(u) = cos(πk(u + 1/2)/n)
           = cos(πku/n) cos(πk/2n) - sin(πku/n) sin(πk/2n)
           = y_k(u) cos(πk/2n) - x_k(u) sin(πk/2n),

where x_k and y_k are the eigenvectors of L_{R_{2n}} from Lemma 16. We see that z_k is in the span of x_k and y_k, and hence z_k is an eigenvector of L_{R_{2n}} with eigenvalue 2 - 2cos(πk/n) by Lemma 16. One can check that z_k satisfies z_k(i) = z_k(2n - 1 - i).

4.1 Graph Products

The next natural example is the grid graph, which will follow from general theory about product graphs.
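The closed form in Lemma 17 can be checked numerically as well. A sketch (mine, reusing the dense-Laplacian helper): for every k, the vector v_k(u) = cos(πk(u + 1/2)/n) should satisfy L_{P_n} v_k = (2 - 2cos(πk/n)) v_k, including at the endpoints of the path.

```python
# Numeric check of Lemma 17 on P_9.
import math

def laplacian(n, edges):
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    return L

n = 9
P = laplacian(n, [(u, u + 1) for u in range(n - 1)])

for k in range(n):
    v = [math.cos(math.pi * k * (u + 0.5) / n) for u in range(n)]
    lam = 2 - 2 * math.cos(math.pi * k / n)
    Lv = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert all(abs(a - lam * b) < 1e-9 for a, b in zip(Lv, v))
```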
Definition 18  Let G = (V, E) and H = (W, F). The product graph G × H has vertex set V × W and edge set

    ((v_1, w), (v_2, w))    for all (v_1, v_2) ∈ E, w ∈ W

and

    ((v, w_1), (v, w_2))    for all (w_1, w_2) ∈ F, v ∈ V.

Example 3  P_n × P_m = G_{n,m}. We see that the vertices of P_n × P_m are:

    (v_1, w_1)  (v_1, w_2)  ...  (v_1, w_m)
    (v_2, w_1)  (v_2, w_2)  ...  (v_2, w_m)
      ...
    (v_n, w_1)  (v_n, w_2)  ...  (v_n, w_m)

The vertices are written in the above layout because it makes the edges intuitive. The edges are:

- For a fixed w, i.e. a column in the above layout, a copy of P_n.
- For a fixed v, i.e. a row in the above layout, a copy of P_m.

Thus P_n × P_m = G_{n,m}, which is to say the product of two path graphs is the grid graph.

Theorem 19 (Graph Products)  If L_G has eigenvectors x_1, ..., x_n with eigenvalues λ_1, ..., λ_n, and L_H has eigenvectors y_1, ..., y_k with eigenvalues μ_1, ..., μ_k, then L_{G × H} has, for all 1 ≤ i ≤ n, 1 ≤ j ≤ k, an eigenvector

    z_ij(v, w) = x_i(v) y_j(w)

with eigenvalue λ_i + μ_j. Note importantly that the eigenvalues add here; they do not multiply.

Proof  Let A_m be the graph on m isolated vertices. We can then decompose the product as:

    G × H = (G × A_k) ∪ (A_n × H),

i.e. the edge union of k disjoint copies of G and n disjoint copies of H, exactly as in the definition. By Lemmas 1 and 3 we have

    L_{G × H} = L_{G × A_k} + L_{A_n × H} = L_G ⊗ I_k + I_n ⊗ L_H.

Considering z_ij = x_i ⊗ y_j as above for fixed i and j, we see that:

    L_{G × H} z_ij = (L_G ⊗ I_k)(x_i ⊗ y_j) + (I_n ⊗ L_H)(x_i ⊗ y_j)
                   = (λ_i x_i) ⊗ y_j + x_i ⊗ (μ_j y_j)
                   = (λ_i + μ_j)(x_i ⊗ y_j) = (λ_i + μ_j) z_ij.

Corollary 20  G_{n,m} has eigenvectors and eigenvalues completely determined by those of P_n and P_m as above.

5 Why is this called the Laplacian?

It turns out that the graph Laplacian is very naturally related to the continuous Laplacian.

- In 1 dimension, the continuous Laplacian is d²/dx².
- In 2 dimensions, the continuous Laplacian is Δf = d²f/dx² + d²f/dy².
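Theorem 19 and Corollary 20 can be exercised together on a small grid. A sketch (mine; helper names are assumptions) that builds L_{G_{n,m}} directly from the grid's edges and checks that every outer product of path eigenvectors is an eigenvector with the summed eigenvalue:

```python
# Check Theorem 19 on the 3x4 grid P_3 × P_4: the grid Laplacian applied to
# the outer product x ⊗ y gives (λ + μ)(x ⊗ y).
import math

def grid_laplacian(n, m):
    """Laplacian of G_{n,m}, vertex (i, j) flattened to index i*m + j."""
    N = n * m
    L = [[0] * N for _ in range(N)]
    def add_edge(a, b):
        L[a][a] += 1; L[b][b] += 1
        L[a][b] -= 1; L[b][a] -= 1
    for i in range(n):
        for j in range(m):
            if i + 1 < n: add_edge(i * m + j, (i + 1) * m + j)  # column: copy of P_n
            if j + 1 < m: add_edge(i * m + j, i * m + j + 1)    # row: copy of P_m
    return L

n, m = 3, 4
G = grid_laplacian(n, m)
for ki in range(n):
    for kj in range(m):
        x = [math.cos(math.pi * ki * (u + 0.5) / n) for u in range(n)]  # P_n eigvec
        y = [math.cos(math.pi * kj * (u + 0.5) / m) for u in range(m)]  # P_m eigvec
        z = [x[i] * y[j] for i in range(n) for j in range(m)]           # x ⊗ y
        lam = (2 - 2 * math.cos(math.pi * ki / n)) \
            + (2 - 2 * math.cos(math.pi * kj / m))                      # λ_i + μ_j
        Gz = [sum(G[a][b] * z[b] for b in range(n * m)) for a in range(n * m)]
        assert all(abs(p - lam * q) < 1e-9 for p, q in zip(Gz, z))
```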
5.1 Discretizing Derivatives, 1d case

Consider a 1d function f : R → R, which we wish to discretize at the points ..., k-2, k-1, k, k+1, k+2, ....

We approximate the first derivative df/dx at the midpoints between the sample points, up to scaling, by taking the differences between the values at adjacent points. The discrete first derivative of f is thus a function on edges, and is, up to scaling, Δ_{P_n} f, where Δ_{P_n} is the incidence matrix of P_n defined earlier.

In order to compute the second derivative d²f/dx² at the original points, again up to scaling, we take the differences of the adjacent midpoint values at each vertex. The discrete second derivative of f is thus, up to scaling, L_{P_n} f.

5.2 Discretizing Derivatives, 2d case

Here we discretize f : R² → R on a grid:

    f(k-1,k+1)  f(k,k+1)  f(k+1,k+1)
    f(k-1,k)    f(k,k)    f(k+1,k)
    f(k-1,k-1)  f(k,k-1)  f(k+1,k-1)

To compute the discrete derivatives in the x and y directions, we'll just look at a little piece. On the horizontal edges we approximate df/dx up to scaling, and we do likewise on the vertical edges with df/dy:
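The 1d claim is concrete enough to test (my own illustration, not from the notes): applying L_{P_n} to samples of f(x) = x² gives, at interior vertices, 2f(x) - f(x-1) - f(x+1) = -f''(x), i.e. the discrete second derivative up to sign (and up to scaling when the grid spacing h ≠ 1).

```python
# Apply L_{P_n} to samples of f(x) = x^2 at integer points.

n = 8
f = [float(x * x) for x in range(n)]          # samples of f(x) = x^2

def path_laplacian_apply(f):
    n = len(f)
    out = []
    for i in range(n):
        acc = 0.0
        if i > 0:     acc += f[i] - f[i - 1]  # one term per incident edge
        if i < n - 1: acc += f[i] - f[i + 1]
        out.append(acc)
    return out

Lf = path_laplacian_apply(f)
# Interior entries equal -f''(x) = -2 exactly, since f is a quadratic.
assert all(abs(v + 2.0) < 1e-12 for v in Lf[1:-1])
```

The boundary entries differ, which mirrors the fact that P_n has no neighbor past its endpoints; the eigenvectors of Lemma 17 are exactly those whose "reflection" across the endpoints is consistent with this.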
    f(k+1,k) - f(k,k),  f(k,k) - f(k-1,k)    (horizontal edges)
    f(k,k+1) - f(k,k),  f(k,k) - f(k,k-1)    (vertical edges)

Again, the discrete derivative of f is a function on edges. When we consider the concatenation of the two discretizations of the directional derivatives, we see that the discretization of the gradient, up to scaling, is Δ_{G_{n,m}} f. Finally, we use this to compute the discretized Laplacian, up to scaling, and get:

    f(k,k+1) + f(k,k-1) + f(k+1,k) + f(k-1,k) - 4 f(k,k)

Thus the discretized Laplacian in two dimensions of f is L_{G_{n,m}} f.

5.3 A Note on Fourier Coefficients

We observed that paths and rings had eigenvectors that looked like Fourier coefficients. In the continuous case:

    d²/dx² sin(kx + c) = -k² sin(kx + c)
    d²/dx² cos(kx + c) = -k² cos(kx + c)

Thus sin(kx + c) and cos(kx + c) are eigenfunctions of the d²/dx² operator, i.e. the 1d Laplacian, both with eigenvalue -k².

6 Bounding Laplacian Eigenvalues

Lemma 21 (Sum of the eigenvalues)  Given an n-vertex graph G with degrees d_i, where d_max = max_i d_i, and Laplacian L_G with eigenvalues λ_i,

    Σ_i λ_i = Σ_i d_i ≤ d_max · n    (8)

Proof  The first two expressions are both the trace of L_G; the upper bound is trivial.

Lemma 22 (Bounds on λ_2 and λ_n)  Given λ_i and d_i as above,

    λ_2 ≤ (Σ_i d_i) / (n - 1)    (9)
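The five-point stencil can be tested the same way as the 1d case (my own illustration): applying L_{G_{n,m}} to samples of f(x, y) = x² + y² gives, at interior vertices, 4f - (sum of the four neighbors) = -∇²f = -4.

```python
# Apply the grid Laplacian to samples of f(x, y) = x^2 + y^2.

n = m = 6
f = [[float(x * x + y * y) for y in range(m)] for x in range(n)]

def grid_laplacian_apply(f):
    n, m = len(f), len(f[0])
    out = [[0.0] * m for _ in range(n)]
    for x in range(n):
        for y in range(m):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < n and 0 <= y + dy < m:
                    out[x][y] += f[x][y] - f[x + dx][y + dy]  # one term per edge
            # boundary vertices simply have fewer incident edges
    return out

Lf = grid_laplacian_apply(f)
# Interior entries equal -∇²f = -4 exactly, since f is a quadratic.
assert all(abs(Lf[x][y] + 4.0) < 1e-12
           for x in range(1, n - 1) for y in range(1, m - 1))
```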
    λ_n ≥ (Σ_i d_i) / (n - 1)    (10)

Proof  By the previous lemma and the fact that λ_1 = 0, we get Σ_{i=2}^n λ_i = Σ_i d_i. As λ_2 ≤ ... ≤ λ_n, λ_2 is at most the average of these n - 1 eigenvalues and λ_n is at least it, so the bounds follow immediately.

7 Bounding λ_2 and λ_max

Theorem 23 (Courant-Fischer Formula)  For any n × n symmetric matrix A,

    λ_1 = min_{||x|| = 1} x^T A x = min_{x ≠ 0} (x^T A x) / (x^T x)

    λ_2 = min_{||x|| = 1, x ⊥ v_1} x^T A x = min_{x ≠ 0, x ⊥ v_1} (x^T A x) / (x^T x)    (11)

    λ_max = max_{||x|| = 1} x^T A x = max_{x ≠ 0} (x^T A x) / (x^T x)

Proof  We consider the diagonalization A = Q^T Λ Q. As seen earlier, x^T A x = (Qx)^T Λ (Qx). As Q is orthogonal, we also have ||Qx|| = ||x||. Thus it suffices to consider diagonal matrices. Moreover, all of the equalities on the right follow immediately from linearity. Thus we need to consider, for ||x|| = 1:

    x^T Λ x = Σ_i λ_i x_i².

Computing the gradient of the quotient, we find, for ||x|| = 1:

    ∂/∂x_i [ (Σ_j λ_j x_j²) / (Σ_j x_j²) ] = 2 x_i (λ_i - x^T Λ x),

so at an extremum every coordinate satisfies either x_i = 0 or λ_i = x^T Λ x. Thus all extremal values occur when one x_i = ±1 and the rest are 0. The identities follow immediately.

Corollary 24 (Rayleigh Quotient)  Letting G = (V, E) be a graph with Laplacian L_G,

    λ_1 = 0,    v_1 = 1

    λ_2 = min_{x ⊥ 1} (x^T L_G x) / (x^T x) = min_{x ⊥ 1} ( Σ_{(i,j) ∈ E} (x_i - x_j)² ) / ( Σ_{i ∈ V} x_i² )    (12)

    λ_max = max_{x ≠ 0} (x^T L_G x) / (x^T x) = max_{x ≠ 0} ( Σ_{(i,j) ∈ E} (x_i - x_j)² ) / ( Σ_{i ∈ V} x_i² )

The Rayleigh quotient is a useful tool for bounding graph spectra. Whereas before we had to consider all possible vectors x, now in order to get an upper bound on λ_2 we need only produce a single vector with a small Rayleigh quotient. Likewise, to get a lower bound on λ_max we need only find a vector with a large Rayleigh quotient. Examples next lecture!
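As a small preview of how such bounds are used (my own sketch, not one of the lecture's examples): any single test vector x ⊥ 1 certifies an upper bound on λ_2. On P_n we can compare the bound against the exact value λ_2 = 2 - 2cos(π/n) from Lemma 17.

```python
# Upper-bounding λ_2 of P_n with a linear test vector via the Rayleigh quotient.
import math

def rayleigh_quotient(edges, x):
    num = sum((x[i] - x[j]) ** 2 for i, j in edges)   # x^T L x
    den = sum(v * v for v in x)                       # x^T x
    return num / den

n = 10
path_edges = [(u, u + 1) for u in range(n - 1)]

# A linear test vector, centered so that it is orthogonal to the all-ones vector.
x = [u - (n - 1) / 2 for u in range(n)]
assert abs(sum(x)) < 1e-12                            # x ⊥ 1

bound = rayleigh_quotient(path_edges, x)
lam2 = 2 - 2 * math.cos(math.pi / n)                  # exact λ_2 of P_n
assert bound >= lam2 - 1e-12                          # a valid upper bound on λ_2
```

For this choice the bound is about 0.109 against the true λ_2 of about 0.098, so even a crude test vector is within a constant factor here.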
MIT OpenCourseWare
Topics in Theoretical Computer Science: An Algorithmist's Toolkit, Fall 2009
For information about citing these materials or our Terms of Use, visit the MIT OpenCourseWare website.
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationNote on deleting a vertex and weak interlacing of the Laplacian spectrum
Electronic Journal of Linear Algebra Volume 16 Article 6 2007 Note on deleting a vertex and weak interlacing of the Laplacian spectrum Zvi Lotker zvilo@cse.bgu.ac.il Follow this and additional works at:
More informationRON AHARONI, NOGA ALON, AND ELI BERGER
EIGENVALUES OF K 1,k -FREE GRAPHS AND THE CONNECTIVITY OF THEIR INDEPENDENCE COMPLEXES RON AHARONI, NOGA ALON, AND ELI BERGER Abstract. Let G be a graph on n vertices, with maximal degree d, and not containing
More informationAn Algorithmist s Toolkit September 24, Lecture 5
8.49 An Algorithmist s Toolkit September 24, 29 Lecture 5 Lecturer: Jonathan Kelner Scribe: Shaunak Kishore Administrivia Two additional resources on approximating the permanent Jerrum and Sinclair s original
More informationLecture 14: Random Walks, Local Graph Clustering, Linear Programming
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 14: Random Walks, Local Graph Clustering, Linear Programming Lecturer: Shayan Oveis Gharan 3/01/17 Scribe: Laura Vonessen Disclaimer: These
More informationFinding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October
Finding normalized and modularity cuts by spectral clustering Marianna Bolla Institute of Mathematics Budapest University of Technology and Economics marib@math.bme.hu Ljubjana 2010, October Outline Find
More informationStrongly Regular Graphs, part 1
Spectral Graph Theory Lecture 23 Strongly Regular Graphs, part 1 Daniel A. Spielman November 18, 2009 23.1 Introduction In this and the next lecture, I will discuss strongly regular graphs. Strongly regular
More informationICS 6N Computational Linear Algebra Symmetric Matrices and Orthogonal Diagonalization
ICS 6N Computational Linear Algebra Symmetric Matrices and Orthogonal Diagonalization Xiaohui Xie University of California, Irvine xhx@uci.edu Xiaohui Xie (UCI) ICS 6N 1 / 21 Symmetric matrices An n n
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationGraph fundamentals. Matrices associated with a graph
Graph fundamentals Matrices associated with a graph Drawing a picture of a graph is one way to represent it. Another type of representation is via a matrix. Let G be a graph with V (G) ={v 1,v,...,v n
More informationMath Matrix Algebra
Math 44 - Matrix Algebra Review notes - (Alberto Bressan, Spring 7) sec: Orthogonal diagonalization of symmetric matrices When we seek to diagonalize a general n n matrix A, two difficulties may arise:
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationLinear Algebra Review
January 29, 2013 Table of contents Metrics Metric Given a space X, then d : X X R + 0 and z in X if: d(x, y) = 0 is equivalent to x = y d(x, y) = d(y, x) d(x, y) d(x, z) + d(z, y) is a metric is for all
More informationReview of Linear Algebra
Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=
More informationLecture 12 : Graph Laplacians and Cheeger s Inequality
CPS290: Algorithmic Foundations of Data Science March 7, 2017 Lecture 12 : Graph Laplacians and Cheeger s Inequality Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Graph Laplacian Maybe the most beautiful
More informationLecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε,
2. REVIEW OF LINEAR ALGEBRA 1 Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, where Y n 1 response vector and X n p is the model matrix (or design matrix ) with one row for
More informationChapter 7: Symmetric Matrices and Quadratic Forms
Chapter 7: Symmetric Matrices and Quadratic Forms (Last Updated: December, 06) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved
More informationLinear Algebra Primer
Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................
More informationPaul Schrimpf. October 18, UBC Economics 526. Unconstrained optimization. Paul Schrimpf. Notation and definitions. First order conditions
Unconstrained UBC Economics 526 October 18, 2013 .1.2.3.4.5 Section 1 Unconstrained problem x U R n F : U R. max F (x) x U Definition F = max x U F (x) is the maximum of F on U if F (x) F for all x U and
More informationProperties of Ramanujan Graphs
Properties of Ramanujan Graphs Andrew Droll 1, 1 Department of Mathematics and Statistics, Jeffery Hall, Queen s University Kingston, Ontario, Canada Student Number: 5638101 Defense: 27 August, 2008, Wed.
More informationLecture 10. Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis
CS-621 Theory Gems October 18, 2012 Lecture 10 Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis 1 Introduction In this lecture, we will see how one can use random walks to
More informationIntroduction to Matrix Algebra
Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p
More informationLinear Algebra. Session 12
Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)
More informationLecture Notes: Eigenvalues and Eigenvectors. 1 Definitions. 2 Finding All Eigenvalues
Lecture Notes: Eigenvalues and Eigenvectors Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk 1 Definitions Let A be an n n matrix. If there
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationMTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate
More informationQuadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.
Quadratic forms 1. Symmetric matrices An n n matrix (a ij ) n ij=1 with entries on R is called symmetric if A T, that is, if a ij = a ji for all 1 i, j n. We denote by S n (R) the set of all n n symmetric
More information