# Majorizations for the Eigenvectors of Graph-Adjacency Matrices: A Tool for Complex Network Design


In Section 2, we consider symmetric weight increments in an undirected graph, wherein the edge weight between a pair of vertices is incremented. In Section 3, we consider balanced weight increments in a reversible graph, wherein the weights of the directed edges between a pair of distinct vertices are increased in a specific way. In Section 4, we consider a more general case where edge weights in an arbitrary directed graph are increased in complex ways, but focus on majorizations for the dominant eigenvector only.

## 1.1 Problem Formulation

We consider a weighted and directed graph with $n$ vertices, $V_1, V_2, \ldots, V_n$. For a given pair of distinct vertices $V_j$ and $V_k$, there may or may not be a directed edge from $V_j$ to $V_k$; if there is, we denote the weight of the edge as $e_{jk} > 0$. In addition, we permit a directed edge from a vertex $V_j$ to itself, with edge weight $e_{jj} > 0$. The topology of the graph can be represented using an $n \times n$ adjacency matrix $P$ defined as follows:

$$P(j,k) = \begin{cases} e_{jk}, & \text{if there is a directed edge from } V_j \text{ to } V_k \\ 0, & \text{otherwise.} \end{cases}$$

We denote the eigenvalues of $P$ as $\lambda_1, \lambda_2, \ldots, \lambda_n$, where without loss of generality (WLOG) we assume that $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$. From classical results on non-negative matrices, we immediately see that $P$ has a real eigenvalue whose magnitude is at least as large as that of any other eigenvalue; we label this eigenvalue $\lambda_1$. In the case that the graph is strongly connected, this real eigenvalue in fact has algebraic multiplicity 1 and is strictly dominant [9]. It is worth pointing out that the remaining eigenvalues of $P$ may be complex, may have algebraic or geometric multiplicities greater than one, and may have associated Jordan blocks of size greater than 1. We use the notation $v_i$ for any right eigenvector associated with the eigenvalue $\lambda_i$. In this paper, we study the effect of graph edge-weight increases on the adjacency matrix's eigenvectors, for three different classes of graphs.
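The spectral facts just cited can be illustrated numerically. The following is a minimal sketch (using NumPy; the 3-vertex digraph is an arbitrary example, not taken from the paper) that builds the adjacency matrix of a small strongly connected weighted digraph and confirms that the dominant eigenvalue is real, positive, and at least as large in magnitude as every other eigenvalue.

```python
import numpy as np

# Adjacency matrix of a hypothetical strongly connected weighted digraph:
# entry P[j, k] is the weight e_jk of the directed edge from V_j to V_k,
# and 0 when no edge exists. Self-loops sit on the diagonal.
P = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.2, 2.0],
              [1.5, 0.0, 0.3]])

eigvals = np.linalg.eigvals(P)
# Label lambda_1 as the eigenvalue of largest magnitude.
lam1 = eigvals[np.argmax(np.abs(eigvals))]

# Perron-Frobenius: for this irreducible non-negative matrix,
# lambda_1 is real and positive...
assert abs(lam1.imag) < 1e-10 and lam1.real > 0
# ...and it dominates every other eigenvalue in magnitude.
assert all(abs(mu) <= abs(lam1) + 1e-10 for mu in eigvals)
```

Because the example has positive self-loops, the matrix is primitive and the dominance is in fact strict.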
For each graph class, we consider particular structured topology changes (edge-weight increments). The graph classes and topology modifications that we focus on are representative of engineered modifications of networks, and also permit substantial analysis of eigenvectors (as detailed below). Let us enumerate the particular cases considered here, in order of increasingly general graph classes. We here point out that in these descriptions and the subsequent development of results, we denote the $p$th component of a vector $x$ as $x^p$.

**1) Symmetric Weight Increments to Undirected (Symmetric) Graphs.** As a first case, we focus on the class of graphs such that $e_{jk} = e_{kj}$ for any pair of distinct vertices $V_j$ and $V_k$. We refer to these graphs as undirected or symmetric graphs, and note that the adjacency matrix is symmetric. For undirected graphs, the eigenvalues of $P$ are real and all are in Jordan blocks of size one. Let the eigenvalues of $P$ in decreasing order of magnitude be $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$, with associated eigenvectors $v_1, v_2, \ldots, v_n$. The topology change that we examine for the undirected-graph case is as follows: for a given pair of distinct vertices $V_j$ and $V_k$, the edge weights $e_{jk}$ and $e_{kj}$ are identically incremented by a certain nonnegative quantity, say $\alpha$, where

$$\alpha \le \min(e_{jj}, e_{kk}). \quad (1)$$

The self-loops on the vertices $V_j$ and $V_k$, i.e. $e_{jj}$ and $e_{kk}$, are reduced by the same amount $\alpha$. That is, we consider modifications that increase the interaction strength between two vertices at the cost of self-influences or weights. The graph resulting from such a topology change is also an undirected graph. Let us denote the adjacency matrix of the resulting graph by $\hat{P}$. It is easy to see that $\hat{P} = P + \Delta P$, where $\Delta P$ is the following symmetric matrix:

$$\Delta P(p,q) = \begin{cases} -\alpha & \text{for } p = j \text{ and } q = j \\ \alpha & \text{for } p = j \text{ and } q = k \\ \alpha & \text{for } p = k \text{ and } q = j \\ -\alpha & \text{for } p = k \text{ and } q = k \\ 0 & \text{otherwise.} \end{cases} \quad (2)$$

Let the eigenvalues of $\hat{P}$ be $\hat{\lambda}_1 \ge \hat{\lambda}_2 \ge \cdots \ge \hat{\lambda}_n$, with associated eigenvectors $\hat{v}_1, \hat{v}_2, \ldots, \hat{v}_n$.
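This perturbation is easy to form programmatically. The sketch below (NumPy; the 4-vertex graph and the choice of $j$, $k$, $\alpha$ are illustrative assumptions) builds the increment matrix of Equation 2 and checks two properties it confers: the perturbed matrix stays symmetric and non-negative when $\alpha \le \min(e_{jj}, e_{kk})$, and the quadratic form satisfies $y^T \Delta P\, y = -\alpha (y^k - y^j)^2$ for any $y$.

```python
import numpy as np

def symmetric_increment(n, j, k, alpha):
    """DeltaP for a symmetric weight increment between vertices
    j and k (0-indexed), per Equation 2."""
    dP = np.zeros((n, n))
    dP[j, k] = dP[k, j] = alpha   # strengthen the j-k interaction
    dP[j, j] = dP[k, k] = -alpha  # pay for it from the self-loops
    return dP

# Hypothetical undirected 4-vertex graph with self-loops.
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.4, 0.3],
              [0.1, 0.1, 0.3, 0.5]])
j, k, alpha = 0, 1, 0.3           # alpha <= min(e_jj, e_kk) = 0.5
dP = symmetric_increment(4, j, k, alpha)
Phat = P + dP

# The perturbed graph is still undirected with non-negative weights.
assert np.allclose(Phat, Phat.T) and (Phat >= 0).all()

# Quadratic-form identity: y^T DeltaP y = -alpha * (y_k - y_j)^2.
y = np.random.default_rng(0).standard_normal(4)
assert np.isclose(y @ dP @ y, -alpha * (y[k] - y[j]) ** 2)
```

The quadratic-form identity is exactly the quantity that drives the Courant-Fischer argument used later for this case.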
We refer to this case as a Symmetric Weight Increment to an Undirected Graph. Our aim is to characterize the eigenvalues and eigenvectors of $\hat{P}$ as compared to those of $P$. We present the results for this case in Section 2. Symmetric increments to undirected graphs of the described form arise in numerous application domains, including Markov chain analysis, mitigation of disease spread, and characterization of linear consensus protocols [2, 1]; these structural modifications often arise when resources are re-distributed to improve the response properties of the network, see e.g. [2].

**2) Balanced Weight Increments to Reversible Graphs.** We consider the class of graphs for which 1) the weights of the edges leaving each vertex sum to 1 (i.e., $\sum_k e_{jk} = 1$ for each $j$), and 2) there exists a unique vector $\pi$ such that $\|\pi\|_1 = 1$ and $\pi_j e_{jk} = \pi_k e_{kj}$ for each graph edge. The graphs associated with homogeneous reversible Markov chains have these properties, and so we refer to these graphs as reversible ones. We note that such reversible graphs also arise in, e.g., consensus algorithms for sensor networks [1]. The adjacency matrix of a reversible graph is a stochastic matrix, for which $\pi$ is a left eigenvector associated with the dominant eigenvalue at unity. In addition to these properties, the adjacency matrix $P$ is also symmetrizable through a diagonal similarity transformation; specifically, $A = DPD^{-1}$ is symmetric if $D = \mathrm{diag}(\pi^{0.5})$. (Here the $\mathrm{diag}(x)$ operator on a $q$-component vector $x$ returns a $q \times q$ diagonal matrix that has the components of $x$ along the main diagonal.) For this case, we note that the eigenvalues of $P$ are real and non-defective. We denote them as $1 = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$, with $v_i$ denoting the right eigenvector associated with eigenvalue $\lambda_i$. Let us now discuss the topology change we consider in our development.
Motivated by both Markov chain and sensor network consensus applications, we consider topology changes that maintain the reversibility structure and leave the left eigenvector associated with
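As a numerical aside, the symmetrizability of reversible adjacency matrices described in Case 2 can be checked directly. The sketch below (NumPy; the 3-state reversible chain is an illustrative assumption) builds a reversible stochastic matrix from a detailed-balance construction, forms $A = DPD^{-1}$ with $D = \mathrm{diag}(\pi^{0.5})$, and confirms that $A$ is symmetric, so the eigenvalues of $P$ are real.

```python
import numpy as np

# Build a reversible stochastic matrix from a symmetric positive
# weight matrix W (hypothetical 3-vertex example): P = D_w^{-1} W is
# stochastic and reversible, with pi proportional to W's row sums.
W = np.array([[1.0, 2.0, 0.5],
              [2.0, 1.5, 1.0],
              [0.5, 1.0, 3.0]])
row_sums = W.sum(axis=1)
P = W / row_sums[:, None]            # rows sum to 1
pi = row_sums / row_sums.sum()       # detailed-balance vector

# Detailed balance pi_j e_jk = pi_k e_kj holds for every edge...
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)
# ...and pi is a left eigenvector of P at the unity eigenvalue.
assert np.allclose(pi @ P, pi)

# Symmetrization: A = D P D^{-1} with D = diag(pi ** 0.5).
D = np.diag(pi ** 0.5)
A = D @ P @ np.linalg.inv(D)
assert np.allclose(A, A.T)
# Hence the eigenvalues of P are real (P and A are similar).
assert np.allclose(np.imag(np.linalg.eigvals(P)), 0)
```

The construction $P = D_w^{-1} W$ from a symmetric $W$ is a standard way to generate reversible chains, so it serves as a convenient test harness for the results of Section 3.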

**Theorem 2.** Consider a symmetric weight increment between vertices $V_j$ and $V_k$ in an undirected graph, as described in Section 1.1, Case 1. Then $\hat{\lambda}_1 \le \lambda_1$, with equality only if $\hat{v}_1^j = \hat{v}_1^k$. Moreover, $|\hat{v}_1^k - \hat{v}_1^j| \le |v_1^k - v_1^j|$. Furthermore, if $\hat{\lambda}_q = \lambda_q$ and $\hat{v}_q = v_q$ for all $q \le p$, where $p < n$ is some positive integer, then 1) $\hat{\lambda}_{p+1} \le \lambda_{p+1}$, with equality only if $\hat{v}_{p+1}^j = \hat{v}_{p+1}^k$; and 2) $|\hat{v}_{p+1}^k - \hat{v}_{p+1}^j| \le |v_{p+1}^k - v_{p+1}^j|$.

*Proof:* The majorization of eigenvalues specified in the theorem is akin to existing eigenvalue-majorization results [7], but is included here for the sake of completeness and because the eigenvector majorization follows from it. The majorization of eigenvector components is then proved by using the Courant-Fischer Theorem to arrive at a contradiction. The topology change is structured in such a way that the adjacency matrices $P$ and $\hat{P}$ are both symmetric. Hence, the eigenvalues (as well as the corresponding eigenvectors) of both matrices can be characterized using the Courant-Fischer Theorem. In particular, $\lambda_1$ and $\hat{\lambda}_1$ can be written as

$$\lambda_1 = \max_{\|x\|_2 = 1} x^T P x \quad (5)$$

and

$$\hat{\lambda}_1 = \max_{\|y\|_2 = 1} y^T \hat{P} y = \max_{\|y\|_2 = 1} y^T (P + \Delta P)\, y = \max_{\|y\|_2 = 1} \left[\, y^T P y - \alpha (y^k - y^j)^2 \right], \quad (6)$$

where the maxima are taken over vectors of unit length, i.e., $\|x\|_2 = 1$ and $\|y\|_2 = 1$. The vectors that achieve the maxima are the eigenvectors associated with the eigenvalues $\lambda_1$ and $\hat{\lambda}_1$. Notice that $-\alpha (y^k - y^j)^2 \le 0$ for any vector $y$, with equality only if $y^j = y^k$. It is therefore straightforward to see that $\hat{\lambda}_1 \le \lambda_1$, with equality only if $\hat{v}_1^j = \hat{v}_1^k$. To prove the result on the change in eigenvector entries, we again use the Courant-Fischer Theorem (Equations 5 and 6) to arrive at a contradiction. Let us denote the vectors that achieve the maxima by $x^*$ for Equation 5 and $y^*$ for Equation 6, and assume that $|x^{*k} - x^{*j}| < |y^{*k} - y^{*j}|$. Then, from Equation 6, we see that

$$x^{*T} P x^* - \alpha (x^{*k} - x^{*j})^2 \le y^{*T} P y^* - \alpha (y^{*k} - y^{*j})^2. \quad (7)$$

Noting that $x^{*T} P x^* \ge y^{*T} P y^*$, let $x^{*T} P x^* = y^{*T} P y^* + C$ for some $C \ge 0$.
Thus we find that

$$\alpha (x^{*k} - x^{*j})^2 \ge C + \alpha (y^{*k} - y^{*j})^2. \quad (8)$$

Equation 8 implies that $|x^{*k} - x^{*j}| \ge |y^{*k} - y^{*j}|$, which contradicts our assumption. The result on the eigenvectors $v_1$ and $\hat{v}_1$ thus follows directly from this contradiction. The Courant-Fischer Theorem also permits computation of the eigenvalues $\lambda_2, \ldots, \lambda_n$: these eigenvalues can be found by optimizing the same objectives as in Equations 5 and 6, but with the additional constraint that the vectors achieving the maxima be orthogonal to the subspace spanned by the eigenvectors associated with eigenvalues larger in magnitude. The result on $\hat{v}_{p+1}$ and $\hat{\lambda}_{p+1}$, when $\hat{\lambda}_q = \lambda_q$ and $\hat{v}_q = v_q$ for all $q \le p$, follows directly from this fact.

We remark that results similar to Theorem 2 can be developed for the smallest (most negative) eigenvalue $\lambda_n$ and the associated eigenvector $v_n$. However, we have focused our attention on characterizing the dominant and sub-dominant eigenvalues and their eigenvectors, since these spectral properties most often inform characterizations of complex-network dynamics and algorithms.

## 3. BALANCED TOPOLOGY CHANGES IN REVERSIBLE GRAPHS

In Section 2 we limited ourselves to the case where the original matrix and the perturbation are symmetric, as described in Section 1.1 (Case 1). These eigenvector-majorization results can be extended to balanced topology changes in reversible graphs, as described in Section 1.1 (Case 2). This extension is made possible by the observation that a diagonal similarity transformation can be used to relate a reversible graph's adjacency matrix to a symmetric matrix. The following theorem presents the result for reversible topologies formally.

**Theorem 3.** Consider a reversible graph topology, and assume that a balanced weight increment is performed between vertices $V_j$ and $V_k$, as described in Section 1.1, Case 2.
Then the eigenvalues/eigenvectors before and after the perturbation have the following properties: 1) If $v_i^j = v_i^k$ for some $i = 2, \ldots, n$, then $\hat{\lambda}_i = \lambda_i$ and $\hat{v}_i = v_i$. 2) If $v_2^j \ne v_2^k$, then $\hat{\lambda}_2 < \lambda_2$ and $|\hat{v}_2^j - \hat{v}_2^k| \le |v_2^j - v_2^k|$. Moreover, if $\hat{\lambda}_q = \lambda_q$ and $\hat{v}_q = v_q$ for all $q \le p$, where $p < n$ is some positive integer, then $\hat{\lambda}_{p+1} \le \lambda_{p+1}$ and $|\hat{v}_{p+1}^j - \hat{v}_{p+1}^k| \le |v_{p+1}^j - v_{p+1}^k|$.

*Proof:* From the discussion in Section 1.1, Case 2, we know that $\hat{P}$ represents a reversible graph. Further, $\pi$ is a left eigenvector associated with the unity eigenvalue of $\hat{P}$. Therefore, there exists a similarity transformation matrix $D = \mathrm{diag}(\pi^{0.5})$ such that $A = DPD^{-1}$ and $\hat{A} = D\hat{P}D^{-1}$ are symmetric matrices. Moreover, $\Delta A = \hat{A} - A$ is also symmetric, with the following structure:

$$\Delta A(p,q) = \begin{cases} -\alpha & \text{for } p = j \text{ and } q = j \\ \sqrt{\alpha\beta} & \text{for } p = j \text{ and } q = k \\ \sqrt{\alpha\beta} & \text{for } p = k \text{ and } q = j \\ -\beta & \text{for } p = k \text{ and } q = k \\ 0 & \text{otherwise.} \end{cases} \quad (9)$$

To prove the multiple statements in the theorem, we follow arguments similar to those used in the proofs of Theorem 1
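The eigenvalue side of these claims can be spot-checked numerically. In the sketch below (NumPy), the balanced increment is assumed to take the form $e_{jk} \leftarrow e_{jk} + \alpha$, $e_{kj} \leftarrow e_{kj} + \beta$ with $\beta = (\pi_j / \pi_k)\alpha$, and self-loop reductions of $\alpha$ and $\beta$; this form, which is an assumption consistent with the structure of Equation 9, preserves both stochasticity and detailed balance with respect to the same $\pi$. The 3-state chain is illustrative. Because $\Delta A$ in Equation 9 is negative semidefinite, every eigenvalue weakly decreases (by Weyl's inequality after symmetrization), which the code confirms.

```python
import numpy as np

# Reversible stochastic matrix from a symmetric weight matrix
# (illustrative 3-state construction).
W = np.array([[1.0, 2.0, 0.5],
              [2.0, 1.5, 1.0],
              [0.5, 1.0, 3.0]])
P = W / W.sum(axis=1)[:, None]
pi = W.sum(axis=1) / W.sum()

# Assumed form of a balanced weight increment between V_j and V_k:
# beta satisfies pi_j * alpha = pi_k * beta, so detailed balance is
# preserved; the self-loops absorb the increments.
j, k, alpha = 0, 1, 0.2
beta = pi[j] * alpha / pi[k]
Phat = P.copy()
Phat[j, k] += alpha; Phat[j, j] -= alpha
Phat[k, j] += beta;  Phat[k, k] -= beta

assert (Phat >= 0).all() and np.allclose(Phat.sum(axis=1), 1)
# pi is untouched: still a left eigenvector at the unity eigenvalue.
assert np.allclose(pi @ Phat, pi)

# All eigenvalues weakly decrease (Delta A negative semidefinite).
D, Dinv = np.diag(pi ** 0.5), np.diag(pi ** -0.5)
ev = np.sort(np.linalg.eigvalsh(D @ P @ Dinv))
evh = np.sort(np.linalg.eigvalsh(D @ Phat @ Dinv))
assert (evh <= ev + 1e-12).all()
```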

Then there exists $i \in \{1, 2, \ldots, m\}$ such that $\hat{v}^i / \hat{v}^j > v^i / v^j$ for $j = m+1, \ldots, n$. We thus recover that, when the eigenvectors are normalized to unit length, $\hat{v}^i > v^i$.

*Proof:* Incrementing rows strictly increases the dominant eigenvalue of a non-negative matrix associated with a connected graph [7]; thus $\hat{\lambda} = \lambda + \Delta$ for some $\Delta > 0$. To prove the majorization of the eigenvector components, we consider a particular scaling of the dominant eigenvectors. Specifically, let the dominant eigenvector $\hat{v} = \begin{bmatrix} v_{1:m} + q_1 \\ v_{m+1:n} + q_2 \end{bmatrix}$ be scaled so that $q_1$ is an $m$-component non-positive vector with $q_1^i = 0$ for some $i \in \{1, \ldots, m\}$. Notice that this scaling can be achieved by scaling down the perturbed eigenvector $\hat{v}$ by the largest among the ratios $\hat{v}^i / v^i$ for $i = 1, \ldots, m$. We shall prove that $q_2$ is element-wise strictly negative for the assumed scaling. To do so, let us consider the last $n - m$ equations of the eigenvector equation $\hat{P} \hat{v} = \hat{\lambda} \hat{v}$. For notational convenience, we partition the matrix $\hat{P}$ (and similarly, $P$) as

$$\hat{P} = \begin{bmatrix} \hat{P}_{11} & \hat{P}_{12} \\ \hat{P}_{21} & \hat{P}_{22} \end{bmatrix},$$

such that $\hat{P}_{11}$ is an $m \times m$ matrix and the dimensions of the other partitions are consistent. Substituting $\hat{\lambda}$ and $\hat{v}$ into the eigenvector equation $\hat{P} \hat{v} = \hat{\lambda} \hat{v}$, and noting the assumed form for $\hat{v}$, we obtain

$$(\hat{\lambda} I - \hat{P}_{22})\, q_2 = \hat{P}_{21} q_1 - \Delta\, v_{(m+1):n}.$$

It is easy to see that $\hat{\lambda} I - \hat{P}_{22}$ is an M-matrix. Using the facts that $q_1$ is entry-wise non-positive and that the inverse of an M-matrix is a non-negative matrix, we see that, for the assumed scaling of $\hat{v}$, the vector $q_2$ is entry-wise strictly negative. The majorization result in the theorem statement follows immediately.

The result presented in Theorem 4 gives us insight into how changes to the topology of an arbitrary directed graph affect the eigenstructure of the adjacency matrix.
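A small numerical experiment illustrates this behavior. The sketch below (NumPy; the 4-vertex digraph and the increment values are illustrative assumptions, with $m = 1$, i.e., only the first row of $P$ incremented) verifies that the dominant eigenvalue strictly increases and that the first eigenvector component gains relative to every non-incremented component.

```python
import numpy as np

def perron(P):
    """Dominant eigenpair of an irreducible non-negative matrix,
    with the eigenvector scaled entry-wise positive."""
    w, V = np.linalg.eig(P)
    i = np.argmax(w.real)            # Perron root is real and largest
    v = V[:, i].real
    return w[i].real, v * np.sign(v.sum())

# Illustrative strongly connected 4-vertex digraph.
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.4, 0.2, 0.3, 0.1]])

# Increment edge weights *out of* vertex 1 (row 0), i.e. m = 1.
Phat = P.copy()
Phat[0, :] += np.array([0.2, 0.1, 0.0, 0.3])

lam, v = perron(P)
lamh, vh = perron(Phat)

# The dominant eigenvalue strictly increases...
assert lamh > lam
# ...and component 1 is majorized: vhat_1/vhat_j > v_1/v_j for
# every non-incremented vertex j.
assert all(vh[0] / vh[j] > v[0] / v[j] for j in range(1, 4))
# Under unit-length normalization, component 1 itself grows.
assert vh[0] / np.linalg.norm(vh) > v[0] / np.linalg.norm(v)
```

With $m = 1$ the theorem's existence claim pins down $i = 1$, which is why the check can test vertex 1 directly.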
We now turn our attention to the case where the weights of edges *to* a given vertex (rather than out of a given vertex) are arbitrarily increased, in which case the entries in a given column of the adjacency matrix are increased arbitrarily. We find that, under certain conditions, arbitrarily incrementing the edge weights to the $i$th vertex also increases the $i$th component of the dominant eigenvector relative to the other components of the eigenvector. We present this result in the following theorem.

**Theorem 5.** Consider arbitrarily incrementing edge weights incoming to the first vertex (WLOG). Let $E = \hat{P} - P$ and $\Delta = \hat{\lambda} - \lambda$. If the last $n-1$ entries of $(\Delta I - E)v$ are non-negative, then $\hat{v}^1 / \hat{v}^j > v^1 / v^j$ for $j = 2, \ldots, n$. We thus recover that, when the eigenvectors are normalized to unit length, $\hat{v}^1 > v^1$.

*Proof:* It is easy to see that $\Delta > 0$ [7]. Moreover, from the Perron-Frobenius Theorem for non-negative matrices, the dominant eigenvectors are entry-wise non-negative [10]. We consider the scaling of the eigenvector $\hat{v}$ for which $\hat{v} = \begin{bmatrix} v^1 \\ v_{2:n} + q \end{bmatrix}$, where $q$ is an $(n-1)$-length vector. We wish to find conditions for which $q$ is element-wise strictly negative, using perturbation results. To find such conditions, we study the eigenvector equation $\hat{P} \hat{v} = \hat{\lambda} \hat{v}$. Substituting for $\hat{\lambda}$ and $\hat{v}$, we arrive at the following equation:

$$(\hat{\lambda} I - P) \begin{bmatrix} 0 \\ q \end{bmatrix} = -(\Delta I - E)\, v.$$

If the last $n-1$ entries of $(\Delta I - E)v$ are non-negative, then it can be seen that $q$ has strictly negative entries. Conditions under which this non-negativity holds can be obtained using the various perturbation results for general matrices, such as the Ostrowski-Elsner Theorem [7].

We would also like to point out that our results, presented in Theorems 4 and 5, allow for the design of new edges that modify the eigenvectors in a specified way. As we pointed out earlier, eigenvectors of graph-adjacency matrices play an important role in characterizing various aspects of complex networked systems and associated algorithms.
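The simplest setting in which the hypothesis holds automatically is an increment confined to the self-loop of vertex 1, so that $E$ is zero in its last $n-1$ rows and the non-negativity condition reduces to $\Delta > 0$. The sketch below (NumPy; the matrix and increment size are illustrative assumptions) checks the conclusion in that setting.

```python
import numpy as np

def perron(P):
    """Dominant eigenpair with an entry-wise positive eigenvector."""
    w, V = np.linalg.eig(P)
    i = np.argmax(w.real)
    v = V[:, i].real
    return w[i].real, v * np.sign(v.sum())

# Illustrative strongly connected digraph.
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.4, 0.2, 0.3, 0.1]])

# Increment an edge weight *into* vertex 1: here only the
# self-loop e_11, so E is non-zero in column 1 alone.
E = np.zeros((4, 4))
E[0, 0] = 0.5
Phat = P + E

lam, v = perron(P)
lamh, vh = perron(Phat)
Delta = lamh - lam
assert Delta > 0

# Hypothesis: last n-1 components of (Delta*I - E) v non-negative
# (trivially so here, since rows 2..n of E vanish).
assert ((Delta * v - E @ v)[1:] >= 0).all()

# Conclusion: vertex 1's eigenvector component gains on the others.
assert all(vh[0] / vh[j] > v[0] / v[j] for j in range(1, 4))
assert vh[0] / np.linalg.norm(vh) > v[0] / np.linalg.norm(v)
```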
The results we have developed show how these eigenvectors change when the underlying topology of the networked system, and hence the behavior of the associated algorithms, is modified. Using our results, one can develop frameworks for designing and/or changing existing topologies so as to shape various aspects of these systems.

## 5. REFERENCES

[1] Y. Wan, K. Namuduri, S. Akula, and M. Varanasi. Consensus building in distributed sensor networks with bipartite graph structures. In AIAA Guidance, Navigation, and Control Conference, Oregon.
[2] Y. Wan, S. Roy, and A. Saberi. Designing spatially heterogeneous strategies for control of virus spread. IET Systems Biology, 2(4).
[3] D. Cvetković and S. Simić. Graph spectra in computer science. Linear Algebra and its Applications, 434(6).
[4] B. Ruhnau. Eigenvector centrality: a node centrality? Social Networks, 22(4).
[5] A. Langville and C. Meyer. A survey of eigenvector methods for web information retrieval. SIAM Review, 47(1).
[6] A. Berman and R. Plemmons. Nonnegative Matrices in the Mathematical Sciences.
[7] G. Stewart and J. Sun. Matrix Perturbation Theory, volume 175. Academic Press, New York.
[8] S. Roy, A. Saberi, and Y. Wan. Majorizations for the dominant eigenvector of a nonnegative matrix. In American Control Conference, Seattle, Washington, USA.
[9] F. Chung. Spectral Graph Theory, volume 92. American Mathematical Society.
[10] R. Gallager. Discrete Stochastic Processes, volume 101. Kluwer Academic Publishers, 1996.


### Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems

Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems Rani M. R, Mohith Jagalmohanan, R. Subashini Binary matrices having simultaneous consecutive

### Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the

### Systems of Algebraic Equations and Systems of Differential Equations

Systems of Algebraic Equations and Systems of Differential Equations Topics: 2 by 2 systems of linear equations Matrix expression; Ax = b Solving 2 by 2 homogeneous systems Functions defined on matrices

### Computational Aspects of Aggregation in Biological Systems

Computational Aspects of Aggregation in Biological Systems Vladik Kreinovich and Max Shpak University of Texas at El Paso, El Paso, TX 79968, USA vladik@utep.edu, mshpak@utep.edu Summary. Many biologically

### Math 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name:

Math 34/84 - Exam Blue Exam Solutions December 4, 8 Instructor: Dr. S. Cooper Name: Read each question carefully. Be sure to show all of your work and not just your final conclusion. You may not use your

### NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

### BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

### An indicator for the number of clusters using a linear map to simplex structure

An indicator for the number of clusters using a linear map to simplex structure Marcus Weber, Wasinee Rungsarityotin, and Alexander Schliep Zuse Institute Berlin ZIB Takustraße 7, D-495 Berlin, Germany

### CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works

CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The

### 4. Ergodicity and mixing

4. Ergodicity and mixing 4. Introduction In the previous lecture we defined what is meant by an invariant measure. In this lecture, we define what is meant by an ergodic measure. The primary motivation

### On the adjacency matrix of a block graph

On the adjacency matrix of a block graph R. B. Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India. email: rbb@isid.ac.in Souvik Roy Economics and Planning Unit

### Lecture 2: September 8

CS294 Markov Chain Monte Carlo: Foundations & Applications Fall 2009 Lecture 2: September 8 Lecturer: Prof. Alistair Sinclair Scribes: Anand Bhaskar and Anindya De Disclaimer: These notes have not been

### Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

### Web Structure Mining Nodes, Links and Influence

Web Structure Mining Nodes, Links and Influence 1 Outline 1. Importance of nodes 1. Centrality 2. Prestige 3. Page Rank 4. Hubs and Authority 5. Metrics comparison 2. Link analysis 3. Influence model 1.

### Complex Laplacians and Applications in Multi-Agent Systems

1 Complex Laplacians and Applications in Multi-Agent Systems Jiu-Gang Dong, and Li Qiu, Fellow, IEEE arxiv:1406.186v [math.oc] 14 Apr 015 Abstract Complex-valued Laplacians have been shown to be powerful

### System theory and system identification of compartmental systems Hof, Jacoba Marchiena van den

University of Groningen System theory and system identification of compartmental systems Hof, Jacoba Marchiena van den IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF)

### Chapter 2 Spectra of Finite Graphs

Chapter 2 Spectra of Finite Graphs 2.1 Characteristic Polynomials Let G = (V, E) be a finite graph on n = V vertices. Numbering the vertices, we write down its adjacency matrix in an explicit form of n

### Spectral Graph Theory and its Applications. Daniel A. Spielman Dept. of Computer Science Program in Applied Mathematics Yale Unviersity

Spectral Graph Theory and its Applications Daniel A. Spielman Dept. of Computer Science Program in Applied Mathematics Yale Unviersity Outline Adjacency matrix and Laplacian Intuition, spectral graph drawing

### AN APPLICATION OF LINEAR ALGEBRA TO NETWORKS

AN APPLICATION OF LINEAR ALGEBRA TO NETWORKS K. N. RAGHAVAN 1. Statement of the problem Imagine that between two nodes there is a network of electrical connections, as for example in the following picture

### and let s calculate the image of some vectors under the transformation T.

Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

### Data Analysis and Manifold Learning Lecture 7: Spectral Clustering

Data Analysis and Manifold Learning Lecture 7: Spectral Clustering Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline of Lecture 7 What is spectral

### Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

### Spectral Graph Theory

Spectral Graph Theory Aaron Mishtal April 27, 2016 1 / 36 Outline Overview Linear Algebra Primer History Theory Applications Open Problems Homework Problems References 2 / 36 Outline Overview Linear Algebra

### Consensus, Flocking and Opinion Dynamics

Consensus, Flocking and Opinion Dynamics Antoine Girard Laboratoire Jean Kuntzmann, Université de Grenoble antoine.girard@imag.fr International Summer School of Automatic Control GIPSA Lab, Grenoble, France,

### Lecture 1 and 2: Introduction and Graph theory basics. Spring EE 194, Networked estimation and control (Prof. Khan) January 23, 2012

Lecture 1 and 2: Introduction and Graph theory basics Spring 2012 - EE 194, Networked estimation and control (Prof. Khan) January 23, 2012 Spring 2012: EE-194-02 Networked estimation and control Schedule

### Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

### Comparison of perturbation bounds for the stationary distribution of a Markov chain

Linear Algebra and its Applications 335 (00) 37 50 www.elsevier.com/locate/laa Comparison of perturbation bounds for the stationary distribution of a Markov chain Grace E. Cho a, Carl D. Meyer b,, a Mathematics

### Google PageRank. Francesco Ricci Faculty of Computer Science Free University of Bozen-Bolzano

Google PageRank Francesco Ricci Faculty of Computer Science Free University of Bozen-Bolzano fricci@unibz.it 1 Content p Linear Algebra p Matrices p Eigenvalues and eigenvectors p Markov chains p Google

### Very few Moore Graphs

Very few Moore Graphs Anurag Bishnoi June 7, 0 Abstract We prove here a well known result in graph theory, originally proved by Hoffman and Singleton, that any non-trivial Moore graph of diameter is regular

### UMIACS-TR July CS-TR 2721 Revised March Perturbation Theory for. Rectangular Matrix Pencils. G. W. Stewart.

UMIAS-TR-9-5 July 99 S-TR 272 Revised March 993 Perturbation Theory for Rectangular Matrix Pencils G. W. Stewart abstract The theory of eigenvalues and eigenvectors of rectangular matrix pencils is complicated

### Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

### Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations