
Lecture 18 and 19, Spring EE 194, Advanced Control (Prof. Khan), Mar. 27 (Wed.) and Apr. 01 (Mon.), 2013

I. GRAPH THEORY

A graph, G, is defined to be a collection of two sets: (i) a vertex-set, V = {1, ..., N}, that is a collection of nodes (vertices); and (ii) an edge-set, E ⊆ V × V, that is a collection of edges. The edge-set, E, is defined as a set of ordered pairs (i, j) with i, j ∈ V such that j is connected to i, to be interpreted as "j can send information to i". Formally,

E = {(i, j) | j → i},    (1)

and a graph is denoted by G = (V, E).

A graph is said to be undirected if (i, j) ∈ E ⇔ (j, i) ∈ E for all i and j. A graph that does not satisfy this property is called a directed graph, or a digraph. Unless otherwise stated, we deal exclusively with undirected graphs in the following.

The neighborhood of a node i is defined as

N_i = {j | (i, j) ∈ E}.    (2)

The degree of a node i is defined as the number of nodes that can send information to node i, i.e., |N_i|. For directed graphs, there are two different notions of degree: in-degree and out-degree.

A. Graph theory and linear algebra

Analysis of graphs is typically carried out via matrix theory. For this purpose, we define matrices that can describe a graph (as opposed to the set notation above). The adjacency matrix, A = {a_ij}, of a graph is defined as

a_ij = 1 if j → i, and 0 otherwise.    (3)

Sometimes it is assumed that (i, i) ∈ E; with this assumption, the adjacency matrix has all 1's on the main diagonal.

Remark 1. The adjacency matrix of an undirected graph is symmetric.
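As a minimal sketch of the definitions above, the following builds the adjacency matrix of a small undirected graph from its edge-set and reads off neighborhoods and degrees. The 4-node path graph used here is a hypothetical example, not one from the lecture.

```python
import numpy as np

# Hypothetical 4-node undirected path graph: 0 - 1 - 2 - 3.
N = 4
edges = [(0, 1), (1, 2), (2, 3)]  # undirected: store each pair once

A = np.zeros((N, N), dtype=int)
for i, j in edges:
    A[i, j] = 1   # j can send information to i
    A[j, i] = 1   # undirected, so the reverse edge is present too

# Neighborhood of node i: all j with (i, j) in the edge-set.
neighbors = {i: set(np.nonzero(A[i])[0]) for i in range(N)}
degrees = A.sum(axis=1)           # |N_i| for each node

assert np.array_equal(A, A.T)     # Remark 1: undirected => A symmetric
```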

The incidence matrix, C = {c_ij}, of a graph is defined as an N × M matrix (where M is the total number of edges) such that for the m-th edge (i, j) ∈ E, the m-th column of C has a 1 at the i-th location, a -1 at the j-th location, and zeros everywhere else.

The degree matrix, D, is defined as a diagonal matrix that has |N_i| as the i-th element on the main diagonal. The following definitions of a graph Laplacian, L = {l_ij}, are equivalent:

(i) L = D - A.    (4)

(ii) l_ij = |N_i| if j = i; -1 if i ≠ j and j → i; 0 otherwise.    (5)

(iii) L = C C^T.    (6)

Remark 2. The Laplacian, L, is symmetric and positive-semidefinite.
Proof: Obvious from definition (iii).

The eigenvalues of L are denoted by λ_1, λ_2, ..., λ_N; the following convention is typically employed:

0 = λ_1 ≤ λ_2 ≤ ... ≤ λ_N.

Remark 3. The Laplacian, L, is singular (rank-deficient), i.e., it has at least one 0 eigenvalue.
Proof: Each row-sum is 0, so L1 = 0.

A path between node i_1 ∈ V and node i_{K+1} ∈ V of length K is defined as a sequence of edges (i_1, i_2), (i_2, i_3), ..., (i_K, i_{K+1}) in E for distinct i_2, ..., i_K. An undirected graph is said to be connected if there exists a path from each i ∈ V to each j ∈ V. A graph is said to be complete, or all-to-all, if (i, j) ∈ E for all i and j. If a graph is not connected, then it can be partitioned into connected components.
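A quick numerical check of the three equivalent Laplacian definitions, on a hypothetical 3-node triangle graph (any small undirected graph would do):

```python
import numpy as np

# Hypothetical triangle graph on 3 nodes.
N = 3
edges = [(0, 1), (1, 2), (0, 2)]

A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1
D = np.diag(A.sum(axis=1))            # degree matrix
L = D - A                             # definition (i)

# Incidence matrix: column m has +1 at i and -1 at j for the m-th edge.
C = np.zeros((N, len(edges)))
for m, (i, j) in enumerate(edges):
    C[i, m], C[j, m] = 1, -1

assert np.allclose(L, C @ C.T)        # definition (iii) agrees with (i)
eigvals = np.sort(np.linalg.eigvalsh(L))
assert eigvals[0] > -1e-10            # Remark 2: L is PSD
assert abs(eigvals[0]) < 1e-10        # Remark 3: L is singular
```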

II. WELL-KNOWN RESULTS

A diagonally-dominant matrix, A, is such that |a_ii| ≥ Σ_{j≠i} |a_ij| for all i. A strictly diagonally-dominant matrix, A, is such that |a_ii| > Σ_{j≠i} |a_ij| for all i.

Lemma 1 (Gershgorin circle theorem). Let A = {a_ij} ∈ C^{N×N}. Let D_i be the closed disc centered at a_ii with radius Σ_{j≠i} |a_ij|. Then every eigenvalue of A lies in ∪_i D_i.

Corollary 1. A symmetric diagonally-dominant matrix with non-negative diagonal elements is PSD.
Proof: Follows from the Gershgorin circle theorem.

Corollary 2. A Laplacian matrix is PSD.
Proof: Laplacian matrices are symmetric and diagonally-dominant with non-negative elements on the main diagonal.

Lemma 2. Let G be connected and let λ_1 ≤ λ_2 ≤ ... ≤ λ_N be the Laplacian eigenvalues. Then λ_2 > 0.
Proof: Let u = [u_1, u_2, ..., u_N]^T be an eigenvector of L with eigenvalue 0. Since Lu = 0 and u^T L u = u^T C C^T u, we have C^T u = 0. Now

C^T u = 0  ⇒  u_i - u_j = 0, for all (i, j) ∈ E.    (7)

This implies that u_i = u_j for all (i, j) ∈ E. As the graph is connected, we have u_i = u_j for all i, j ∈ V, and the only normalized eigenvector that satisfies Lu = 0 is

u = (1/√N) [1, 1, ..., 1]^T (N elements).    (8)

Hence, there is only one 0 eigenvalue, and λ_2 > 0 since L is PSD.

Lemma 3. The number of connected components of a graph equals the multiplicity of the 0 eigenvalue of its Laplacian.
Proof: A disconnected graph is a union of some number of connected components. Each such component is a connected graph on its own and has exactly one 0 eigenvalue.
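Lemmas 2 and 3 can be checked numerically. The sketch below uses a hypothetical 4-node graph: first two disjoint edges (two components, so two zero eigenvalues), then a bridging edge is added to connect the graph (so λ_2 becomes positive):

```python
import numpy as np

# Hypothetical graph: two disjoint edges, 0-1 and 2-3.
N = 4
edges = [(0, 1), (2, 3)]

A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

eigvals = np.sort(np.linalg.eigvalsh(L))
num_zero = int(np.sum(np.abs(eigvals) < 1e-10))
assert num_zero == 2              # Lemma 3: two components, two 0 eigenvalues

# Add a bridging edge 1-2: the graph becomes connected.
A[1, 2] = A[2, 1] = 1
L = np.diag(A.sum(axis=1)) - A
eigvals = np.sort(np.linalg.eigvalsh(L))
assert eigvals[1] > 1e-10         # Lemma 2: connected => lambda_2 > 0
```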

Example 1. Consider a network with N nodes and no edges. There are N connected components (one per node). From the above lemma, the Laplacian should have N 0-eigenvalues. This can be verified, as the Laplacian in this case is the zero matrix.

An irreducible matrix is such that it cannot be transformed into a block upper-triangular matrix by any row-column permutation. A block upper-triangular matrix is one that can be decomposed into the form
[ X  Y ]
[ 0  Z ],
with square diagonal blocks.

Remark 4. A matrix is irreducible if and only if its associated graph is strongly-connected. A symmetric matrix is irreducible if and only if its associated graph is connected.

A primitive matrix is such that it is non-negative, square, and its p-th integer power, for some p > 0, has all positive elements.

Remark 5. A primitive matrix is irreducible.
Proof: Exercise.

Remark 6. An irreducible matrix is not necessarily primitive unless it has a strictly positive diagonal.
Proof: Exercise.

The following statements can be proved.
(i) A graph is connected if and only if its Laplacian is irreducible.
(ii) For a complete graph, λ_2 = ... = λ_N = N.

The algebraic connectivity of the graph is defined as the second-smallest eigenvalue of its Laplacian, i.e., λ_2. For connected graphs, this measures the strength of connectivity.

Remark 7. In a connected graph, adding an edge does not decrease λ_2.
Proof: Exercise.
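Statement (ii) above can be verified directly: for a complete graph on N nodes, the Laplacian spectrum is {0, N, ..., N}. A sketch with the arbitrary choice N = 5:

```python
import numpy as np

# Complete (all-to-all) graph on N nodes; N = 5 is arbitrary.
N = 5
A = np.ones((N, N)) - np.eye(N)       # every pair connected, no self-loops
L = np.diag(A.sum(axis=1)) - A        # equivalently L = N*I - ones(N, N)

eigvals = np.sort(np.linalg.eigvalsh(L))
assert abs(eigvals[0]) < 1e-10        # lambda_1 = 0
assert np.allclose(eigvals[1:], N)    # lambda_2 = ... = lambda_N = N
```

In particular, the algebraic connectivity λ_2 = N is the largest possible among graphs on N nodes, consistent with λ_2 measuring the strength of connectivity.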

A. Types of graphs

A k-regular graph is such that each node is connected to exactly k other nodes. A nearest-neighbor graph is such that each node is connected to all the nodes within a certain communication radius. An m-circulant graph is such that each node is connected to its m forward and m backward neighbors.

Remark 8. The adjacency and Laplacian matrices of a circulant graph are circulant matrices. The eigenvalues and eigenvectors of a circulant matrix are known in closed form; the eigenvectors of a circulant matrix, for instance, are given by the well-known DFT (Vandermonde) matrix.

The above graphs are referred to as structured graphs. Typically, such graphs are highly clustered (clustering measures how many of a node's neighbors are neighbors of each other), but have a large mean shortest path. This can also be related to the graph diameter (the largest shortest path).

A random graph with 0 ≤ p ≤ 1 is such that every two nodes, i and j, are connected with probability p. Random graphs have a smaller average shortest path but suffer from weak clustering.

Example 2. The above is one of the Erdős–Rényi graph-generating models. An alternative is to randomly pick (with uniform probability) one graph out of all possible graphs with N nodes and K edges.

Consider the following graph generation (the Watts–Strogatz model): Take a structured graph and a number 0 ≤ p ≤ 1. For each edge in the graph, rewire it to a randomly chosen (uniform probability) node with probability p. When p is small and the starting graph is circulant, the resulting graph is shown to exhibit the small-world principle, i.e., a small average shortest path and large clustering.

Example 3. Transportation networks, the electric power grid, networks of brain neurons, social networks, six degrees of separation, the letter-sending experiment, the author-collaboration network, the famous Erdős number. My Erdős number is 5, from three paths (maybe 4, cannot prove); the mean is 4.6: Khan, U. → Moura, J. → Püschel, M. → Beth, T. → Mullin, R. → Erdős, P.
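Remark 8 can be checked numerically: the adjacency matrix of an m-circulant graph is circulant, so its eigenvalues are the DFT of its first row. A sketch for the hypothetical choice N = 8, m = 1 (a ring):

```python
import numpy as np

# 1-circulant graph (ring) on 8 nodes; N and m are arbitrary choices.
N, m = 8, 1
first_row = np.zeros(N)
for k in range(1, m + 1):
    first_row[k] = first_row[-k] = 1  # m forward and m backward neighbors

# Build the circulant adjacency matrix by cyclically shifting the first row.
A = np.array([np.roll(first_row, i) for i in range(N)])

eig_numeric = np.sort(np.linalg.eigvalsh(A))
# Closed form: eigenvalues of a circulant matrix = DFT of its first row
# (real here, since the row is symmetric).
eig_dft = np.sort(np.fft.fft(first_row).real)
assert np.allclose(eig_numeric, eig_dft)
```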

Lec 19: Monday, Apr. 01, 2013

III. MORE ON MATRICES

Given an N × N matrix, A = {a_ij}, its associated graph is defined as G_A = (V_A, E_A) such that V_A = {1, ..., N} and E_A = {(i, j) | a_ij ≠ 0}. The notions of a graph and a matrix are thus related in both directions:

Graph → Adjacency matrix;  Matrix → Associated graph.

The notions of irreducibility and strong-connectivity are also related:

Strongly-connected graph → Irreducible adjacency matrix;  Irreducible matrix → Strongly-connected associated graph.

The following statements are equivalent:
(i) A cannot be arranged into a block upper-triangular matrix by any row-column permutation (i.e., A is irreducible);
(ii) the associated graph of A is strongly-connected.
Furthermore,
(iii) if a matrix, A, is irreducible, then each of its columns and each of its rows has at least one non-zero element.

A primitive matrix is such that it is non-negative, square, and its p-th integer power, for some p > 0, has all positive elements. A non-negative matrix, A = {a_ij} ∈ R^{N×N}, is such that all of its elements are non-negative, i.e., a_ij ≥ 0 for all i, j. We denote this by A ≥ 0 or A ∈ R^{N×N}_{≥0}. Furthermore, A ≥ B ⇔ A - B ≥ 0, where B ∈ R^{N×N}.

A. Examples

Example 4. The matrix A = ... is irreducible; its associated graph is strongly-connected. However, this matrix is not primitive.

Example 5. A non-negative, square, irreducible matrix with all positive diagonal elements is primitive.

B. Results

Lemma 4. Let A ∈ R^{N×N}_{≥0} be irreducible and let x ≥ 0. If Ax = 0, then x = 0.
Proof: From (iii) above, each column of A has at least one non-zero element, so there exists b > 0 such that every column-sum satisfies Σ_i a_ij ≥ b > 0 for all j. Assume, on the contrary, that x ≠ 0 and Ax = 0. Then,

0 = Σ_i (Ax)_i = Σ_i Σ_j a_ij x_j = Σ_j (Σ_i a_ij) x_j ≥ b Σ_j x_j.

Since b > 0, we have Σ_j x_j ≤ 0, and since x ≥ 0, we must have x = 0, which is a contradiction.

Lemma 5. A non-negative matrix, A ∈ R^{N×N}, is irreducible if and only if (I + A)^{N-1} > 0.
Proof (sketch): A is non-negative, so (I + A) is non-negative with strictly positive diagonal elements. An irreducible non-negative matrix with all positive diagonal elements is primitive with index of primitivity at most N - 1.
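Lemma 5 gives a direct computational test for irreducibility. A sketch, using two hypothetical 3 × 3 examples (a cyclic permutation matrix, which is irreducible, and a triangular matrix, which is not):

```python
import numpy as np

def is_irreducible(A):
    """Lemma 5: a non-negative A is irreducible iff (I + A)^(N-1) > 0."""
    N = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(N) + A, N - 1) > 0))

# Cyclic permutation matrix: its graph is a directed 3-cycle (strongly
# connected), so the matrix is irreducible.
A_cycle = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0]])
assert is_irreducible(A_cycle)

# Upper-triangular matrix: already in block upper-triangular form, so its
# associated graph is not strongly connected and the matrix is reducible.
A_reducible = np.array([[1, 1, 1],
                        [0, 1, 1],
                        [0, 0, 1]])
assert not is_irreducible(A_reducible)
```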

C. Topology

Theorem 1 (Brouwer Fixed Point). Let B^n be the closed unit ball in R^n, i.e., B^n = {x ∈ R^n | x_1^2 + ... + x_n^2 ≤ 1}. Every continuous function, f: B^n → B^n, has at least one fixed point, i.e., there exists x ∈ B^n such that f(x) = x.

(A remarkable result from topology: equivalently, every map that encloses your current location has a "You are here" point.)

The closed unit ball in R is the line segment [-1, 1]. The closed unit ball in R^2 is the disk of unit radius centered at (0, 0).

Corollary 3. Let S be the unit simplex:

S = {x ∈ R^n | x ≥ 0 and Σ_i x_i = 1}.

If f: S → S is a continuous function, then there exists a w ∈ S such that f(w) = w.
Proof: Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms (topological equivalence), the fixed-point theorem holds for every set that is homeomorphic to a closed ball, and the simplex S is such a set. In the language of topology, a coffee cup = a donut.

Example 6. Every closed interval [a, b] ⊂ R is homeomorphic to the closed unit ball in R. Let f: [a, b] → [a, b] be any continuous function. [Figure: the graph of a continuous f: [a, b] → [a, b] must intersect the straight line f(x) = x, illustrating the Brouwer fixed-point theorem.]
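Example 6 suggests a constructive sketch in one dimension: for a continuous f: [a, b] → [a, b], the function g(x) = f(x) - x satisfies g(a) ≥ 0 and g(b) ≤ 0, so bisection locates a fixed point. The map f(x) = cos(x) on [0, 1] is a hypothetical example (it maps [0, 1] into itself):

```python
import numpy as np

def fixed_point(f, a, b, tol=1e-10):
    """Locate a fixed point of a continuous f: [a, b] -> [a, b] by
    bisection on g(x) = f(x) - x (g(a) >= 0, g(b) <= 0)."""
    g = lambda x: f(x) - x
    while b - a > tol:
        mid = 0.5 * (a + b)
        if g(mid) >= 0:
            a = mid          # the fixed point lies in [mid, b]
        else:
            b = mid          # the fixed point lies in [a, mid]
    return 0.5 * (a + b)

x_star = fixed_point(np.cos, 0.0, 1.0)
assert abs(np.cos(x_star) - x_star) < 1e-8   # f(x*) = x*
```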


D. Vector and matrix norms

The max-norm, ||x||_∞, of a vector, x, is defined as its maximum absolute value, i.e., ||x||_∞ = max_i |x_i|. Given a vector, w > 0, the weighted max-norm, ||x||_∞^w, of a vector, x, is defined as max_i |x_i| / w_i. The Euclidean norm, or 2-norm, of a vector, x, is defined as

||x||_2 = sqrt(x_1^2 + ... + x_n^2).

Example 7. Notice the difference between the unit balls of the max-norm and the 2-norm in R^2: ||x||_∞ < α is a square with side 2α, whereas ||x||_2 < α is a disk with radius α centered at (0, 0).

The p-norm of a vector, x, is defined as

||x||_p = (|x_1|^p + ... + |x_n|^p)^{1/p}.

Let A ∈ R^{m×n}. Given vector norms, ||·||, on R^n and R^m, we can define an induced matrix-norm as follows:

||A|| = max{ ||Ax|| : x ∈ R^n and ||x|| = 1 } = max{ ||Ax|| / ||x|| : x ∈ R^n and x ≠ 0 }.

Example 8. Given the weighted max-norm and A ∈ R^{N×N}, the induced matrix-norm is

||A||_∞^w = max_{x ≠ 0} ||Ax||_∞^w / ||x||_∞^w.

The Frobenius norm of a matrix is defined as

||A||_F = sqrt( Σ_i Σ_j |a_ij|^2 ) = sqrt( trace(A A^T) ).

The spectral radius, ρ(A), of a matrix, A, is defined as max_i |λ_i|, where λ_i are the eigenvalues of A. The spectral radius can also be obtained from Gelfand's formula:

ρ(A) = lim_{k→∞} ||A^k||^{1/k},

where ||·|| is a consistent matrix norm. (All induced norms are consistent.)

Lemma 6. Any induced norm, ||·||, satisfies ρ(A) ≤ ||A||.
Proof: Can be proved via Gelfand's formula.

Lemma 7. ρ(A) ≤ ||A||_F.
Proof: ||A||_2 ≤ ||A||_F.
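Lemmas 6 and 7 are easy to spot-check numerically. A sketch on a hypothetical random 4 × 4 matrix, comparing the spectral radius against two induced norms and the Frobenius norm:

```python
import numpy as np

# Hypothetical random test matrix (fixed seed for reproducibility).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

rho = max(abs(np.linalg.eigvals(A)))     # spectral radius
norm_2 = np.linalg.norm(A, 2)            # induced 2-norm (largest singular value)
norm_inf = np.linalg.norm(A, np.inf)     # induced max-norm (max abs row sum)
norm_fro = np.linalg.norm(A, 'fro')      # Frobenius norm

assert rho <= norm_2 + 1e-12             # Lemma 6, induced 2-norm
assert rho <= norm_inf + 1e-12           # Lemma 6, induced max-norm
assert norm_2 <= norm_fro + 1e-12        # hence rho <= ||A||_F (Lemma 7)
```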

IV. PERRON-FROBENIUS

Theorem 2 (Perron-Frobenius). Let A be an N × N non-negative matrix with eigenvalues, λ_i, ordered as |λ_1| ≤ ... ≤ |λ_N|. (Arguably, the most important theorem in distributed algorithms.) If A is irreducible, then:
(a) There exists w > 0 such that Aw = ρ(A) w.
(b) The eigenvector w is unique up to scalar multiplication.

Proof: The case N = 1 is trivial, so it will be assumed that N ≥ 2.

(a) This is an existence statement, so define the element first. Consider the following set:

S = {x ∈ R^N | x ≥ 0 and Σ_i x_i = 1}.

It can be shown that Ax ≠ 0 for any x ∈ S (footnote 1). Define a function (footnote 2), f: S → S,

f(x) = Ax / (1^T Ax).

From the Brouwer fixed-point theorem, there exists some w ∈ S such that f(w) = w. This can be written as

f(w) = Aw / (1^T Aw) = w  ⇒  Aw = (1^T Aw) w,

i.e., w is an eigenvector of A with eigenvalue λ ≜ 1^T Aw > 0. Now

(I + A) w = (1 + λ) w  ⇒  (I + A)^{N-1} w = (1 + λ)^{N-1} w.

Since A ≥ 0 is irreducible, (I + A) is non-negative and irreducible, so

(I + A)^{N-1} > 0  ⇒  (I + A)^{N-1} w > 0 (since w ≥ 0, w ≠ 0)  ⇒  (1 + λ)^{N-1} w > 0  ⇒  w > 0.

Now we show that ρ(A) = λ. Firstly, λ ≤ ρ(A), by definition. On the other hand, using the weighted max-norm with weight vector w (so that ||w||_∞^w = 1),

ρ(A) ≤ ||A||_∞^w = ||Aw||_∞^w (Exercise) = ||λw||_∞^w = λ ||w||_∞^w = λ.

We conclude that λ ≤ ρ(A) ≤ λ, so ρ(A) = λ.

(b) Exercise: prove uniqueness.

Footnote 1: Suppose the elements of x sum to 1 but Ax = 0; then, by Lemma 4, x = 0, but x = 0 ∉ S, a contradiction.
Footnote 2: Since Ax ≠ 0, the denominator is never zero and f is well-defined.
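The proof's fixed-point map f(x) = Ax / (1^T Ax) can be iterated numerically (for a primitive matrix this is just power iteration on the simplex) to recover the Perron vector. A sketch on a hypothetical irreducible non-negative 3 × 3 matrix:

```python
import numpy as np

# Hypothetical irreducible (in fact primitive) non-negative matrix.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

w = np.ones(3) / 3                 # start on the unit simplex S
for _ in range(200):
    w = A @ w
    w = w / w.sum()                # the map f(x) = Ax / (1^T Ax)

lam = (A @ w).sum()                # eigenvalue lambda = 1^T A w
assert np.all(w > 0)                           # Perron vector is strictly positive
assert np.allclose(A @ w, lam * w, atol=1e-8)  # A w = lambda w
assert np.isclose(lam, max(abs(np.linalg.eigvals(A))))  # lambda = rho(A)
```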

Lec 4: Wednesday, Feb. 01, 2012

Remarks: Recap of Perron-Frobenius.

The largest eigenvalue of a non-negative, irreducible matrix is positive and real, i.e., λ_N ∈ R_{>0}.

The eigenvector corresponding to λ_N of a non-negative, irreducible matrix is strictly positive and is unique up to scalar multiplication.

For non-negative irreducible matrices, |λ_N| > |λ_{N-1}| is not necessarily true. See the next comment.

A matrix that is non-negative and irreducible but not primitive can have |λ_{N-1}| = |λ_N|. An example is A = ..., with eigenvalues λ_{1,2} = -0.5 ± j0.866, λ_3 = 1; note |λ_{1,2}| = 1. However, only the eigenvector corresponding to λ_3 = 1 has to be strictly positive; for the other eigenvalues of modulus 1, the eigenvectors may not be strictly positive.

Perron-Frobenius for primitive matrices: the theorem's statement holds, plus |λ_N| > |λ_i| for all i ≠ N.

A. Eigenspace

Let A ∈ R^{n×n}. Then any v that satisfies Av = λv is called a right eigenvector of A. Similarly, any w that satisfies w^T A = λ w^T is called a left eigenvector of A. By definition, the left eigenvectors of A are the right eigenvectors of A^T; this can be seen from

w^T A = λ w^T  ⇔  A^T w = λ w.

We call the collection {v, λ} the eigenspace of A and the collection {w, λ} the eigenspace of A^T. For a symmetric matrix, A = A^T, the left eigenvectors are the same as the right eigenvectors, and thus A and A^T have the same eigenspace.

When we decompose a matrix as A = V D V^{-1}, the matrix V consists of the right eigenvectors of A, and the matrix V^{-1} consists of the left eigenvectors of A (as rows of V^{-1}). This can be shown as follows:

A = V D V^{-1}  ⇒  A^T = (V^{-1})^T D V^T ≜ W D W^{-1}.

Since A^T = W D W^{-1}, each column of W is a right eigenvector of A^T, i.e., a left eigenvector of A. Since W = (V^{-1})^T, each column of W is a row of V^{-1}.

A normal matrix is one that can be diagonalized by a unitary matrix (V V^T = I for real V), i.e., A = V D V^T with V unitary. A symmetric matrix is a normal matrix. Do A and A^T have the same eigenspace? Not unless A is normal, i.e., A A^T = A^T A. As we have shown above, the relationship between the left and right eigenvectors is given by W = (V^{-1})^T. If A is normal, then V^{-1} = V^T and W = V.
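The relation W = (V^{-1})^T between left and right eigenvectors is easy to verify numerically. A sketch on a hypothetical 2 × 2 non-symmetric (hence non-normal) matrix:

```python
import numpy as np

# Hypothetical non-symmetric, diagonalizable matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, V = np.linalg.eig(A)    # columns of V: right eigenvectors
W = np.linalg.inv(V).T           # columns of W: left eigenvectors (rows of V^-1)

for k in range(2):
    v, w, lam = V[:, k], W[:, k], eigvals[k]
    assert np.allclose(A @ v, lam * v)   # right: A v = lambda v
    assert np.allclose(w @ A, lam * w)   # left: w^T A = lambda w^T
```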

All of the above can be re-written for complex-valued matrices if we replace the transpose with the Hermitian (complex-conjugate transpose).

B. Stochastic matrices

A row(column)-stochastic matrix is a non-negative matrix whose row(column)-sums are 1.

Lemma 8. The eigenvalues of a row-stochastic matrix lie in the closed unit disk.
Proof: Gershgorin's circle theorem.

Lemma 9. The spectral radius of a row-stochastic matrix is 1.
Proof: Note that 1 is an eigenvalue (with eigenvector 1), and by the above lemma no eigenvalue exceeds 1 in modulus.

Lemma 10. The eigenvalues of an irreducible row-stochastic matrix, W, follow |λ_1| ≤ ... ≤ |λ_{N-1}| ≤ λ_N = 1. The right eigenvector, v_N, corresponding to λ_N = 1 is a vector of all (positive) constants, i.e., v_N = (1/√N) [1, ..., 1]^T after normalization. In addition, if W is primitive (which can be ensured by adding a strictly positive diagonal), then |λ_{N-1}| < λ_N = 1.
Proof: Perron-Frobenius; W with a strictly positive diagonal is primitive.

A doubly-stochastic matrix is one that is both row-stochastic and column-stochastic (i.e., both A and A^T are row-stochastic).

C. Average-consensus algorithm

Consider a strongly-connected graph, G = (V, E), with N nodes. Let the ith node possess a real number, x_i(0). Each node implements the following algorithm:

x_i(k+1) = Σ_{j ∈ {i} ∪ N_i} w_ij x_j(k),

where w_ij > 0 for j = i and for (i, j) ∈ E, such that Σ_j w_ij = 1. The network-level algorithm can be summarized as

x_{k+1} = W x_k,    (9)

where W = {w_ij} is a weight matrix that collects the w_ij.

Remark 9. The weight matrix, W, is row-stochastic and irreducible. With w_ii > 0 for all i, it is further primitive. From the PF theorem, the eigenvalues, λ_i, of W are such that |λ_1| ≤ ... ≤ |λ_{N-1}| < λ_N = 1. The right eigenvector, v_N, corresponding to λ_N = 1 is a strictly positive vector of all constants, i.e., v_N = (1/√N) [1, ..., 1]^T.

Let v_i be the right eigenvector corresponding to λ_i and let ṽ_i^T be the corresponding left eigenvector (the rows of V^{-1}). Then W = V D V^{-1}, where V = [v_N, ..., v_1] and D is a diagonal matrix with λ_N, ..., λ_1 on the main diagonal. Consider the asymptotic behavior of (9):

x_{k+1} = W^{k+1} x_0 = V D^{k+1} V^{-1} x_0 = v_N ṽ_N^T x_0 + Σ_{i=1}^{N-1} λ_i^{k+1} v_i ṽ_i^T x_0,

so that, since |λ_i| < 1 for all i ≠ N,

x_∞ ≜ lim_{k→∞} x_{k+1} = v_N ṽ_N^T x_0.

If, in addition, W is symmetric, then ṽ_N = v_N and

x_∞ = lim_{k→∞} x_{k+1} = v_N v_N^T x_0 = (1/N) 1 1^T x_0,    (10)

where it can be verified that 1^T x_0 / N is the average of the initial conditions.

Summary:

Agreement: If G is strongly-connected and the weights are such that: (i) w_ij > 0 for all (i, j) ∈ E and (i, i), i ∈ V; and (ii) Σ_j w_ij = 1; then the update in (9) converges to an agreement over all of the nodes in the network.

Average-consensus: If G is connected and the weights are such that: (i) w_ij > 0 for all (i, j) ∈ E and (i, i), i ∈ V; (ii) Σ_j w_ij = 1; and (iii) w_ij = w_ji; then the update in (9) converges to the average of the nodal initial conditions.
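The average-consensus conditions above can be sketched end-to-end: a symmetric, row-stochastic weight matrix with positive diagonal on a connected graph drives x_k to the average of the initial values. The 4-node ring and the weight value 0.25 are hypothetical choices:

```python
import numpy as np

# Hypothetical 4-node ring graph.
N = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

W = np.zeros((N, N))
for i, j in edges:
    W[i, j] = W[j, i] = 0.25            # symmetric weights on the edges
np.fill_diagonal(W, 1 - W.sum(axis=1))  # w_ii > 0, making each row sum to 1

x = np.array([1.0, 5.0, 3.0, 7.0])      # initial conditions x_i(0); average 4
for _ in range(200):
    x = W @ x                           # the update (9): x_{k+1} = W x_k

assert np.allclose(x, x.mean())         # agreement across all nodes
assert np.isclose(x[0], 4.0)            # ... on the average of x(0)
```

Because W here is symmetric (hence doubly-stochastic), the mean of x is preserved at every step, so the agreement value must be the initial average, exactly as in (10).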


More information

Fiedler s Theorems on Nodal Domains

Fiedler s Theorems on Nodal Domains Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A Spielman September 9, 202 7 About these notes These notes are not necessarily an accurate representation of what happened in

More information

HONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013

HONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013 HONORS LINEAR ALGEBRA (MATH V 2020) SPRING 2013 PROFESSOR HENRY C. PINKHAM 1. Prerequisites The only prerequisite is Calculus III (Math 1201) or the equivalent: the first semester of multivariable calculus.

More information

Lecture 8 : Eigenvalues and Eigenvectors

Lecture 8 : Eigenvalues and Eigenvectors CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with

More information

Algebraic Representation of Networks

Algebraic Representation of Networks Algebraic Representation of Networks 0 1 2 1 1 0 0 1 2 0 0 1 1 1 1 1 Hiroki Sayama sayama@binghamton.edu Describing networks with matrices (1) Adjacency matrix A matrix with rows and columns labeled by

More information

Perron Frobenius Theory

Perron Frobenius Theory Perron Frobenius Theory Oskar Perron Georg Frobenius (1880 1975) (1849 1917) Stefan Güttel Perron Frobenius Theory 1 / 10 Positive and Nonnegative Matrices Let A, B R m n. A B if a ij b ij i, j, A > B

More information

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

arxiv:quant-ph/ v1 22 Aug 2005

arxiv:quant-ph/ v1 22 Aug 2005 Conditions for separability in generalized Laplacian matrices and nonnegative matrices as density matrices arxiv:quant-ph/58163v1 22 Aug 25 Abstract Chai Wah Wu IBM Research Division, Thomas J. Watson

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Fiedler s Theorems on Nodal Domains

Fiedler s Theorems on Nodal Domains Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A. Spielman September 19, 2018 7.1 Overview In today s lecture we will justify some of the behavior we observed when using eigenvectors

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

A proof of the Jordan normal form theorem

A proof of the Jordan normal form theorem A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it

More information

Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies

Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies IEEE TRANSACTIONS ON AUTOMATIC CONTROL, SUBMITTED FOR PUBLICATION AS A TECHNICAL NOTE. 1 Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies Wei Ren, Student Member,

More information

Applications to network analysis: Eigenvector centrality indices Lecture notes

Applications to network analysis: Eigenvector centrality indices Lecture notes Applications to network analysis: Eigenvector centrality indices Lecture notes Dario Fasino, University of Udine (Italy) Lecture notes for the second part of the course Nonnegative and spectral matrix

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Clustering compiled by Alvin Wan from Professor Benjamin Recht s lecture, Samaneh s discussion

Clustering compiled by Alvin Wan from Professor Benjamin Recht s lecture, Samaneh s discussion Clustering compiled by Alvin Wan from Professor Benjamin Recht s lecture, Samaneh s discussion 1 Overview With clustering, we have several key motivations: archetypes (factor analysis) segmentation hierarchy

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Data Mining and Analysis: Fundamental Concepts and Algorithms

Data Mining and Analysis: Fundamental Concepts and Algorithms : Fundamental Concepts and Algorithms dataminingbook.info Mohammed J. Zaki 1 Wagner Meira Jr. 2 1 Department of Computer Science Rensselaer Polytechnic Institute, Troy, NY, USA 2 Department of Computer

More information

This section is an introduction to the basic themes of the course.

This section is an introduction to the basic themes of the course. Chapter 1 Matrices and Graphs 1.1 The Adjacency Matrix This section is an introduction to the basic themes of the course. Definition 1.1.1. A simple undirected graph G = (V, E) consists of a non-empty

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of

More information

On Distributed Coordination of Mobile Agents with Changing Nearest Neighbors

On Distributed Coordination of Mobile Agents with Changing Nearest Neighbors On Distributed Coordination of Mobile Agents with Changing Nearest Neighbors Ali Jadbabaie Department of Electrical and Systems Engineering University of Pennsylvania Philadelphia, PA 19104 jadbabai@seas.upenn.edu

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

arxiv: v3 [math.ra] 10 Jun 2016

arxiv: v3 [math.ra] 10 Jun 2016 To appear in Linear and Multilinear Algebra Vol. 00, No. 00, Month 0XX, 1 10 The critical exponent for generalized doubly nonnegative matrices arxiv:1407.7059v3 [math.ra] 10 Jun 016 Xuchen Han a, Charles

More information

Spectral Theorem for Self-adjoint Linear Operators

Spectral Theorem for Self-adjoint Linear Operators Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

More information

Homework 1 Elena Davidson (B) (C) (D) (E) (F) (G) (H) (I)

Homework 1 Elena Davidson (B) (C) (D) (E) (F) (G) (H) (I) CS 106 Spring 2004 Homework 1 Elena Davidson 8 April 2004 Problem 1.1 Let B be a 4 4 matrix to which we apply the following operations: 1. double column 1, 2. halve row 3, 3. add row 3 to row 1, 4. interchange

More information

On the convergence of weighted-average consensus

On the convergence of weighted-average consensus On the convergence of weighted-average consensus Francisco Pedroche Miguel Rebollo Carlos Carrascosa Alberto Palomares arxiv:307.7562v [math.oc] 29 Jul 203 Abstract In this note we give sufficient conditions

More information

Functional Analysis Review

Functional Analysis Review Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank David Glickenstein November 3, 4 Representing graphs as matrices It will sometimes be useful to represent graphs

More information

Markov Chains and Spectral Clustering

Markov Chains and Spectral Clustering Markov Chains and Spectral Clustering Ning Liu 1,2 and William J. Stewart 1,3 1 Department of Computer Science North Carolina State University, Raleigh, NC 27695-8206, USA. 2 nliu@ncsu.edu, 3 billy@ncsu.edu

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

On the distance and distance signless Laplacian eigenvalues of graphs and the smallest Gersgorin disc

On the distance and distance signless Laplacian eigenvalues of graphs and the smallest Gersgorin disc Electronic Journal of Linear Algebra Volume 34 Volume 34 (018) Article 14 018 On the distance and distance signless Laplacian eigenvalues of graphs and the smallest Gersgorin disc Fouzul Atik Indian Institute

More information

Math 775 Homework 1. Austin Mohr. February 9, 2011

Math 775 Homework 1. Austin Mohr. February 9, 2011 Math 775 Homework 1 Austin Mohr February 9, 2011 Problem 1 Suppose sets S 1, S 2,..., S n contain, respectively, 2, 3,..., n 1 elements. Proposition 1. The number of SDR s is at least 2 n, and this bound

More information

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors Real symmetric matrices 1 Eigenvalues and eigenvectors We use the convention that vectors are row vectors and matrices act on the right. Let A be a square matrix with entries in a field F; suppose that

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Spectra of Adjacency and Laplacian Matrices

Spectra of Adjacency and Laplacian Matrices Spectra of Adjacency and Laplacian Matrices Definition: University of Alicante (Spain) Matrix Computing (subject 3168 Degree in Maths) 30 hours (theory)) + 15 hours (practical assignment) Contents 1. Spectra

More information

Spectral Graph Theory and its Applications. Daniel A. Spielman Dept. of Computer Science Program in Applied Mathematics Yale Unviersity

Spectral Graph Theory and its Applications. Daniel A. Spielman Dept. of Computer Science Program in Applied Mathematics Yale Unviersity Spectral Graph Theory and its Applications Daniel A. Spielman Dept. of Computer Science Program in Applied Mathematics Yale Unviersity Outline Adjacency matrix and Laplacian Intuition, spectral graph drawing

More information

Announcements Monday, October 29

Announcements Monday, October 29 Announcements Monday, October 29 WeBWorK on determinents due on Wednesday at :59pm. The quiz on Friday covers 5., 5.2, 5.3. My office is Skiles 244 and Rabinoffice hours are: Mondays, 2 pm; Wednesdays,

More information

1.10 Matrix Representation of Graphs

1.10 Matrix Representation of Graphs 42 Basic Concepts of Graphs 1.10 Matrix Representation of Graphs Definitions: In this section, we introduce two kinds of matrix representations of a graph, that is, the adjacency matrix and incidence matrix

More information

7.5 Bipartite Matching

7.5 Bipartite Matching 7. Bipartite Matching Matching Matching. Input: undirected graph G = (V, E). M E is a matching if each node appears in at most edge in M. Max matching: find a max cardinality matching. Bipartite Matching

More information

Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies

Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies IEEE TRANSACTIONS ON AUTOMATIC CONTROL, SUBMITTED FOR PUBLICATION AS A TECHNICAL NOTE. Consensus Seeking in Multi-agent Systems Under Dynamically Changing Interaction Topologies Wei Ren, Student Member,

More information

Complex Laplacians and Applications in Multi-Agent Systems

Complex Laplacians and Applications in Multi-Agent Systems 1 Complex Laplacians and Applications in Multi-Agent Systems Jiu-Gang Dong, and Li Qiu, Fellow, IEEE arxiv:1406.186v [math.oc] 14 Apr 015 Abstract Complex-valued Laplacians have been shown to be powerful

More information

5 Flows and cuts in digraphs

5 Flows and cuts in digraphs 5 Flows and cuts in digraphs Recall that a digraph or network is a pair G = (V, E) where V is a set and E is a multiset of ordered pairs of elements of V, which we refer to as arcs. Note that two vertices

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

First, we review some important facts on the location of eigenvalues of matrices.

First, we review some important facts on the location of eigenvalues of matrices. BLOCK NORMAL MATRICES AND GERSHGORIN-TYPE DISCS JAKUB KIERZKOWSKI AND ALICJA SMOKTUNOWICZ Abstract The block analogues of the theorems on inclusion regions for the eigenvalues of normal matrices are given

More information

Lecture 15 Perron-Frobenius Theory

Lecture 15 Perron-Frobenius Theory EE363 Winter 2005-06 Lecture 15 Perron-Frobenius Theory Positive and nonnegative matrices and vectors Perron-Frobenius theorems Markov chains Economic growth Population dynamics Max-min and min-max characterization

More information

On the mathematical background of Google PageRank algorithm

On the mathematical background of Google PageRank algorithm Working Paper Series Department of Economics University of Verona On the mathematical background of Google PageRank algorithm Alberto Peretti, Alberto Roveda WP Number: 25 December 2014 ISSN: 2036-2919

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

New feasibility conditions for directed strongly regular graphs

New feasibility conditions for directed strongly regular graphs New feasibility conditions for directed strongly regular graphs Sylvia A. Hobart Jason Williford Department of Mathematics University of Wyoming Laramie, Wyoming, U.S.A sahobart@uwyo.edu, jwillif1@uwyo.edu

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

T.8. Perron-Frobenius theory of positive matrices From: H.R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003

T.8. Perron-Frobenius theory of positive matrices From: H.R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003 T.8. Perron-Frobenius theory of positive matrices From: H.R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003 A vector x R n is called positive, symbolically x > 0,

More information

642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004

642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004 642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004 Introduction Square matrices whose entries are all nonnegative have special properties. This was mentioned briefly in Section

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Reaching a Consensus in a Dynamically Changing Environment A Graphical Approach

Reaching a Consensus in a Dynamically Changing Environment A Graphical Approach Reaching a Consensus in a Dynamically Changing Environment A Graphical Approach M. Cao Yale Univesity A. S. Morse Yale University B. D. O. Anderson Australia National University and National ICT Australia

More information

SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices)

SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Chapter 14 SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Today we continue the topic of low-dimensional approximation to datasets and matrices. Last time we saw the singular

More information

Math 240 Calculus III

Math 240 Calculus III Generalized Calculus III Summer 2015, Session II Thursday, July 23, 2015 Agenda 1. 2. 3. 4. Motivation Defective matrices cannot be diagonalized because they do not possess enough eigenvectors to make

More information