Majorizations for the Eigenvectors of Graph-Adjacency Matrices: A Tool for Complex Network Design


Rahul Dhal, Electrical Engineering and Computer Science, Washington State University, Pullman, WA, rdhal@eecs.wsu.edu
Sandip Roy, Electrical Engineering and Computer Science, Washington State University, Pullman, WA, sroy@eecs.wsu.edu
Yan Wan, Department of Electrical Engineering, University of North Texas, Denton, TX, yan.wan@unt.edu

ABSTRACT
We develop majorization results that characterize changes in eigenvector components of a graph's adjacency matrix when its topology is changed. Specifically, for general (weighted, directed) graphs, we characterize changes in dominant-eigenvector components for single-row and multi-row incrementations. We also show that topology changes can be tailored to set ratios between the components of the dominant eigenvector. For more limited graph classes (specifically, undirected and reversibly-structured ones), majorizations for components of the subdominant and other eigenvectors upon graph modifications are also obtained.

Keywords: Algebraic Graph Theory, Network Design, Dynamical Networks, Adjacency Matrix, Eigenvectors, Majorization

1. INTRODUCTION
Designers and operators of complex networks are often concerned with modifying the network's structure to shape its dynamics in certain ways. For instance, in sensor networking applications, designers may adjust the sensor-to-sensor communication topology to permit fast decision-making and therefore reduce energy consumption [1]. Similarly, in epidemic control, changing the population structure (e.g., through restrictions on travel and quarantine policies) can allow faster elimination of virus spread [2]. In general, these modifications effect complex changes in the dynamics of the network (beyond achieving the design goal of interest) that are not well understood and require new methods for characterization.
In this paper, we investigate the impact of topology changes on the spectral properties of the adjacency matrix of a network's graph, which is in many cases a crucial indicator of the network dynamics. (This work was supported by the National Science Foundation, Grant Number: .) Specifically, we focus on developing majorizations for the eigenvector components of the adjacency matrices of both directed and undirected graphs, upon structured modification of the graph topology. We consider several types of topology changes for symmetric, reversible, and arbitrary directed graph topologies, and develop majorizations for the dominant and/or sub-dominant eigenvalues and eigenvectors in these cases. Graph spectra have been used extensively in various fields of computer science [3]. Among these fields, eigenvectors of adjacency matrices (or matrices related to adjacency matrices) are particularly important in the areas of web information retrieval and social-network analysis. In these domains, eigenvectors of adjacency matrices are used to define the notion of eigenvector centrality, which is a measure of a node's importance in the network's dynamics [4]. Google's PageRank algorithm and its precursor, Hyperlink-Induced Topic Search (HITS), are both based on variants of this spectral notion of nodal importance. For a brief review of these and other similar algorithms based on eigenvectors of graph matrices, see [5]. For these computing applications, our results show how structural changes to the network topology will affect a node's centrality measure; as such, they provide insight into algorithm and topology modification to change centrality scores. The eigenvector majorization results that we develop rely critically on 1) classical results on the eigenvalues of non-negative matrices [6], and 2) matrix perturbation concepts.
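As a concrete illustration of eigenvector centrality (not part of the original paper): the sketch below computes the dominant eigenvector of a small adjacency matrix by power iteration. The example graph (a star with self-loops, chosen so the iteration converges) is an assumption for illustration only.

```python
import numpy as np

def dominant_eigenvector(P, tol=1e-12, max_iter=10_000):
    """Power iteration for the dominant eigenvector of a non-negative matrix P.

    For a strongly connected graph, the Perron-Frobenius theory guarantees a
    unique positive dominant eigenvector (up to scaling)."""
    n = P.shape[0]
    v = np.ones(n) / n
    for _ in range(max_iter):
        w = P @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return w

# Undirected "star" graph with self-loops: vertex 0 connected to vertices 1..3.
P = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1]], dtype=float)
v = dominant_eigenvector(P)
print(v)  # the hub (vertex 0) has the largest centrality score
```

Here the hub's component exceeds each leaf's, which matches the intuition behind eigenvector centrality: a vertex is important when it is strongly connected to other (important) vertices.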
From one viewpoint, our results extend existing eigenvalue-majorization analyses of non-negative matrices [7] toward comparative characterizations of eigenvectors (see also [8]). From another viewpoint, our results can be viewed as providing tighter bounds on eigenvector components than would be obtained from perturbation arguments (see [7]), for the special class of non-negative matrices. The remainder of the article is organized as follows. In section 1.1, we motivate and formulate the study of eigenvector properties for adjacency matrices of graphs of three types: symmetric (or undirected), reversible (in the sense of Markov-chain reversibility), and arbitrary directed graphs. We also define the graph modifications of interest for these graph classes. In section 2, we provide eigenvector majorization results for the case of symmetric weight increments in an undirected graph, wherein the edge weight between a pair of vertices is incremented. In section 3, we consider balanced weight increments in a reversible graph, wherein the weights of directed edges between a pair of distinct vertices are increased in a specific way. In section 4, we consider a more general case where edge weights in an arbitrary directed graph are increased in complex ways, but we focus on majorizations for the dominant eigenvector only.

1.1 Problem Formulation
We consider a weighted and directed graph with n vertices, V_1, V_2, ..., V_n. For a given pair of distinct vertices, V_j and V_k, there may or may not be a directed edge from V_j to V_k. If there is a directed edge from V_j to V_k, we denote the weight of the edge as e_jk > 0. In addition, we permit a directed edge from a vertex V_j to itself, with an edge weight e_jj > 0. The topology of the graph can be represented using an n × n adjacency matrix P defined as follows:

$$P(j,k) = \begin{cases} e_{jk}, & \text{if there is a directed edge from } V_j \text{ to } V_k \\ 0, & \text{otherwise.} \end{cases} \qquad (1)$$

We denote the eigenvalues of P as λ_1, λ_2, ..., λ_n, where without loss of generality (WLOG) we assume that |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_n|. From classical results on non-negative matrices, we immediately see that P will have a real eigenvalue whose magnitude is at least as large as that of any other eigenvalue; let us label this eigenvalue λ_1. In the case that the graph is strongly connected, this real eigenvalue in fact has algebraic multiplicity 1 and is strictly dominant [9]. It is worth pointing out that the remaining eigenvalues of P may be complex, may have algebraic or geometric multiplicities greater than one, and may have associated Jordan blocks of size greater than 1. We use the notation v_i for any right eigenvector associated with the eigenvalue λ_i. In this paper, we study the effect of graph edge-weight increases on the adjacency matrix's eigenvectors, for three different classes of graphs.
For each graph class, we consider particular structured topology changes (edge-weight increments). The graph classes and topology modifications that we focus on are representative of engineered modifications of networks, and also permit substantial analysis of eigenvectors (as detailed below). Let us enumerate the particular cases considered here, in order of increasingly general graph classes. We point out that, in these descriptions and the subsequent development of results, we denote the p-th component of a vector x as x^p.

1) Symmetric Weight Increments to Undirected (Symmetric) Graphs. As a first case, we focus on the class of graphs such that e_jk = e_kj for any pair of distinct vertices V_j and V_k. We refer to these graphs as undirected or symmetric graphs, and note that the adjacency matrix is symmetric. For undirected graphs, the eigenvalues of P are real, and all are in Jordan blocks of size one. Let the eigenvalues of P in decreasing order be λ_1 ≥ λ_2 ≥ ... ≥ λ_n, with associated eigenvectors v_1, v_2, ..., v_n. The topology change that we examine for the undirected-graph case is as follows: for a given pair of distinct vertices V_j and V_k, the edge weights e_jk and e_kj are identically incremented by a certain nonnegative quantity, say α, where α ≤ min(e_jj, e_kk). The self-loops on the vertices V_j and V_k, i.e., e_jj and e_kk, are reduced by the same amount α. That is, we consider modifications that increase the interaction strength between two vertices at the cost of self-influences or weights. The graph resulting from such a topology change is also an undirected graph. Let us denote the adjacency matrix of the resulting graph by P̄. It is easy to see that P̄ = P + ΔP, where ΔP is the following symmetric matrix:

$$\Delta P(p,q) = \begin{cases} -\alpha & \text{for } p = j,\ q = j \\ \alpha & \text{for } p = j,\ q = k \\ \alpha & \text{for } p = k,\ q = j \\ -\alpha & \text{for } p = k,\ q = k \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$$

Let the eigenvalues of P̄ be λ̄_1 ≥ λ̄_2 ≥ ... ≥ λ̄_n, with associated eigenvectors v̄_1, v̄_2, ..., v̄_n.
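The Case-1 modification can be sketched numerically as follows (an illustrative example, not from the paper; the matrix values are arbitrary). Note that ΔP has eigenvalues 0 and −2α, so it is negative semidefinite: the total edge weight is merely moved from the self-loops onto the new edge.

```python
import numpy as np

def symmetric_increment(P, j, k, alpha):
    """Case 1: symmetric weight increment between vertices j and k of an
    undirected graph; the self-loop weights absorb the change (Eq. (2))."""
    assert np.allclose(P, P.T), "adjacency matrix must be symmetric"
    assert 0 <= alpha <= min(P[j, j], P[k, k]), "alpha cannot exceed self-loop weights"
    dP = np.zeros_like(P)
    dP[j, j] = dP[k, k] = -alpha
    dP[j, k] = dP[k, j] = alpha
    return P + dP

# Small undirected graph with self-loops (illustrative weights).
P = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.5, 0.3],
              [0.2, 0.3, 0.5]])
Pbar = symmetric_increment(P, 0, 1, 0.1)
assert np.allclose(Pbar, Pbar.T)          # the modified graph is still undirected
assert np.allclose(Pbar.sum(), P.sum())   # total weight is preserved
```

Since this particular P happens to be stochastic, its dominant eigenvector has equal j-th and k-th components, so (anticipating the invariance result below) the dominant eigenvalue is unchanged by the increment.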
We refer to this case as a Symmetric Weight Increment to an Undirected Graph. Our aim is to characterize the eigenvalues and eigenvectors of P̄ as compared to those of P. We present the results for this case in section 2. Symmetric increments to undirected graphs of the described form arise in numerous application domains, including Markov chain analysis, mitigation of disease spread, and characterization of linear consensus protocols [2, 1]; these structural modifications often arise when resources are re-distributed to improve the response properties of the network, see e.g. [2].

2) Balanced Weight Increments to Reversible Graphs. We consider the class of graphs for which 1) the weights of the edges leaving each vertex sum to 1 (i.e., Σ_k e_jk = 1 for each j), and 2) there exists a unique vector π such that ‖π‖_1 = 1 and π^j e_jk = π^k e_kj for each graph edge. The graphs associated with homogeneous reversible Markov chains have these properties, and so we refer to these graphs as reversible ones. We note that such reversible graphs also arise in, e.g., consensus algorithms for sensor networks [1]. The adjacency matrix of a reversible graph is a stochastic matrix, for which π is a left eigenvector associated with the dominant eigenvalue at unity. In addition to these properties, the adjacency matrix P is also symmetrizable through a diagonal similarity transformation; specifically, A = D P D^{-1} is symmetric if D = diag(π^{0.5}). (Here, the diag(x) operator on a q-component vector x returns a q × q diagonal matrix which has the components of x along the main diagonal.) For this case, we note that the eigenvalues of P are real and nondefective. We denote them as 1 = λ_1 ≥ λ_2 ≥ ... ≥ λ_n, with v_i denoting the right eigenvector associated with eigenvalue λ_i. Let us now discuss the topology change we consider in our development.
Motivated by both Markov chain and sensor network consensus applications, we consider topology changes that maintain the reversibility structure and leave the left eigenvector associated with

the adjacency matrix unchanged. Specifically, the edge weight between two vertices V_j and V_k, or e_jk, is incremented by an amount α > 0, while the self-loop edge weight e_jj is reduced by the same amount. Similarly, the edge weight e_kj is increased by an amount β > 0, while the edge weight e_kk is reduced by the same amount. Further, the quantities α and β are assumed to satisfy α/β = π^k/π^j. Let the adjacency matrix of the resulting graph be P̄. It is easy to see that P̄ = P + ΔP, where ΔP is the following matrix:

$$\Delta P(p,q) = \begin{cases} -\alpha & \text{for } p = j,\ q = j \\ \alpha & \text{for } p = j,\ q = k \\ \beta & \text{for } p = k,\ q = j \\ -\beta & \text{for } p = k,\ q = k \\ 0 & \text{otherwise.} \end{cases} \qquad (3)$$

The vector π is the left eigenvector of P̄ associated with the dominant eigenvalue at unity, since π^j α = π^k β and hence π^T ΔP = 0. Further,

$$\pi^j \bar{P}(j,k) = \pi^j P(j,k) + \pi^j \alpha = \pi^k P(k,j) + \pi^k \beta = \pi^k \bar{P}(k,j). \qquad (4)$$

Therefore, P̄ is also the adjacency matrix of a reversible graph. Let the eigenvalues of P̄ be 1 = λ̄_1 ≥ λ̄_2 ≥ ... ≥ λ̄_n, with v̄_i denoting the right eigenvector associated with eigenvalue λ̄_i. We refer to this type of topology change as a Balanced Weight Increment to a Reversible Graph. Our aim is to characterize the sub-dominant eigenvalues and eigenvectors of P̄ as compared to those of P. We present the results for this case in section 3. Let us briefly discuss the relevance of the above problem to the analysis of Markov chains. For a Markov chain, the adjacency matrix serves as the transition matrix of the chain, and the entries of the transition matrix represent conditional probabilities of state transition [10]; i.e., P(p,q) is the probability of the chain transitioning to state q given that the current state is p. The vector π specifies the steady-state state-occupancy probabilities. Thus the balanced topology changes described above can be seen as changing transition probabilities in a way that maintains the steady-state distribution as well as reversibility.
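A numerical sketch of the balanced-increment construction (the example weights are assumptions, not from the paper). It checks the symmetrization property A = D P D^{-1} and verifies that the increment preserves both π and reversibility; a lazy random walk is used so that the self-loops are large enough to absorb the increments.

```python
import numpy as np

# Reversible chain: lazy random walk on a weighted undirected graph.
# Its stationary distribution pi is proportional to the vertex degrees.
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
deg = W.sum(axis=1)
P = 0.5 * np.eye(3) + 0.5 * W / deg[:, None]   # self-loops keep increments feasible
pi = deg / deg.sum()
assert np.allclose(pi @ P, pi)                                  # left eigenvector at 1
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)        # detailed balance

# Symmetrization A = D P D^{-1} with D = diag(pi^0.5).
D = np.diag(np.sqrt(pi))
A = D @ P @ np.linalg.inv(D)
assert np.allclose(A, A.T)

# Balanced weight increment between j = 0 and k = 1, with alpha/beta = pi_k/pi_j.
j, k, beta = 0, 1, 0.05
alpha = beta * pi[k] / pi[j]
dP = np.zeros_like(P)
dP[j, j], dP[j, k] = -alpha, alpha
dP[k, j], dP[k, k] = beta, -beta
Pbar = P + dP
assert np.allclose(pi @ Pbar, pi)                               # pi is preserved
assert np.allclose(pi[:, None] * Pbar, (pi[:, None] * Pbar).T)  # still reversible
```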
Thus, the analysis of this case provides insight into how transition probabilities in a reversible Markov chain can be modified so as to maintain the steady state while shaping dynamical performance.

3) Arbitrary Increments to Weighted Graphs. The previous cases are limited to special graph classes (undirected, reversible), and also consider only very specially structured changes that can be made to the topologies. In this article, we also consider a class of quite general modifications to arbitrary directed graphs. For this case, we focus exclusively on characterizing the dominant eigenvector of the modified topology. Specifically, in section 4, we present two results: one concerned with incrementing the weights of the edges departing from multiple vertices, and one concerned with weight changes on the edges entering into a given vertex. We also present results on designing the graph topology to achieve certain ratios among the dominant-eigenvector components. To avoid confusion with cases 1 and 2 above, we defer the mathematical description of arbitrary topology changes to section 4.

2. SYMMETRIC TOPOLOGY CHANGES IN AN UNDIRECTED GRAPH
In this section, we develop majorization results for eigenvector components of adjacency matrices, upon application of symmetric topology changes to undirected graphs, as described in section 1.1, Case 1. First we present a preliminary theorem which indicates that, when the eigenvectors of the adjacency matrices have a certain special structure, the associated eigenvalues and the eigenvectors themselves are invariant to the defined symmetric changes in the topology (Theorem 1). This straightforward result serves as a starting point for the more general spectral analyses given in Theorem 2.

Theorem 1. Consider a symmetric weight increment between vertices V_j and V_k in an undirected graph, as described in Section 1.1-1).
If v_i^j = v_i^k for some eigenvector v_i associated with the eigenvalue λ_i of P, then v_i is also an eigenvector of P̄, with eigenvalue λ_i. In other words, the eigenvalue λ_i and its eigenvector v_i are invariant to the topology change.

Proof: The proof follows immediately from the fact that P̄ v_i = (P + ΔP) v_i = λ_i v_i + ΔP v_i. However, ΔP v_i = 0 whenever v_i^j = v_i^k, and so the result follows.

Theorem 1 provides us with a mechanism for changing an undirected graph's structure in such a manner that particular eigenvectors of its adjacency matrix remain unchanged. The above invariance result may prove applicable for algorithm design in sensor networks, in cases where changing the underlying network topology is a primary tool for improving performance; the result shows how changes can be made that do not affect critical mode shapes, while the remaining dynamics are changed. However, the result is restrictive, in the sense that the invariance holds only when edges are added between vertices whose eigenvector components are identical. The simple result of Theorem 1 serves as a stepping stone toward a more general characterization. Specifically, for symmetric topology changes between arbitrary vertices in an undirected graph, we find that the components of the subdominant eigenvector associated with the two ends of the added edge come closer to each other; similar results hold for eigenvectors associated with other eigenvalues, in a certain sequential sense. This result is formalized in the following theorem.

Theorem 2. Consider a symmetric weight increment between vertices V_j and V_k in an undirected graph, as described in Section 1.1, Case 1. Then λ̄_1 ≤ λ_1, with equality only if v_1^j = v_1^k. Moreover, |v̄_1^k − v̄_1^j| ≤ |v_1^k − v_1^j|. Furthermore, if λ̄_q = λ_q and v̄_q = v_q for all q ≤ p, where p < n is some positive integer, then 1) λ̄_{p+1} ≤ λ_{p+1}, with equality only if v_{p+1}^j = v_{p+1}^k; and 2) |v̄_{p+1}^k − v̄_{p+1}^j| ≤ |v_{p+1}^k − v_{p+1}^j|.

Proof: The majorization of eigenvalues specified in the theorem is akin to existing eigenvalue-majorization results [7], but is included here for the sake of completeness and because the eigenvector majorization follows from it. The majorization of eigenvector components is then proved by using the Courant-Fischer Theorem to arrive at a contradiction. The topology change is structured in such a way that the adjacency matrices P and P̄ are both symmetric. Hence, the eigenvalues (as well as the corresponding eigenvectors) of both matrices can be found using the Courant-Fischer Theorem. In particular, λ_1 and λ̄_1 can be written as

$$\lambda_1 = \max_x\, x^T P x \qquad (5)$$

and

$$\bar{\lambda}_1 = \max_y\, y^T \bar{P} y = \max_y\, y^T (P + \Delta P)\, y = \max_y \left( y^T P y - \alpha (y^k - y^j)^2 \right), \qquad (6)$$

where the maxima are taken over vectors of unit length, i.e., ‖x‖_2 = 1 and ‖y‖_2 = 1. The vectors that achieve the maxima are the eigenvectors associated with the eigenvalues λ_1 and λ̄_1. Notice that α(y^k − y^j)^2 ≥ 0 for any vector y, with equality only if y^j = y^k. It is therefore straightforward to see that λ̄_1 ≤ λ_1, with equality only if v_1^j = v_1^k. To prove the result on the change in the eigenvector entries, we again use the Courant-Fischer Theorem (Equations 5 and 6) to arrive at a contradiction. Let us denote the vectors that achieve the maxima as x for Equation 5 and y for Equation 6. Also, assume that |x^k − x^j| < |y^k − y^j|. Then, according to Equation 6, we see that

$$x^T P x - \alpha (x^k - x^j)^2 \le y^T P y - \alpha (y^k - y^j)^2. \qquad (7)$$

Noting that x^T P x ≥ y^T P y, let x^T P x = y^T P y + C for some C ≥ 0.
Thus we find that

$$\alpha (x^k - x^j)^2 \ge C + \alpha (y^k - y^j)^2. \qquad (8)$$

Equation 8 implies that |x^k − x^j| ≥ |y^k − y^j|, which contradicts our assumption. The result on the eigenvectors v_1 and v̄_1 thus follows directly from this contradiction. The Courant-Fischer Theorem also permits computation of the eigenvalues λ_2, ..., λ_n: these can be found by optimizing the same objectives as in Equations 5 and 6, but with the additional constraint that the maximizing vectors be orthogonal to the subspace spanned by the eigenvectors associated with the eigenvalues larger in magnitude. The result on v̄_{p+1} and λ̄_{p+1}, when λ̄_q = λ_q and v̄_q = v_q for all q ≤ p, follows directly from this fact. We remark that results similar to Theorem 2 can be developed for the smallest (most negative) eigenvalue λ_n and the associated eigenvector v_n. However, we have focused our attention on characterizing the dominant and sub-dominant eigenvalues and their eigenvectors, since these spectral properties most often inform characterizations of complex-network dynamics and algorithms.

3. BALANCED TOPOLOGY CHANGES IN REVERSIBLE GRAPHS
In section 2 we limited ourselves to the case where the original matrix and the perturbation are symmetric, as described in Section 1.1, Case 1. These eigenvector majorization results can be extended to balanced topology changes to reversible graphs, as described in Section 1.1, Case 2. This extension is made possible by the observation that a diagonal similarity transformation can be used to equivalence a reversible graph's adjacency matrix with a symmetric matrix. The following theorem presents the result for reversible topologies formally.

Theorem 3. Consider a reversible graph topology, and assume that a balanced weight increment is performed between vertices V_j and V_k, as described in Section 1.1-2).
Then the eigenvalues/eigenvectors before and after the perturbation have the following properties: 1) If v_i^j = v_i^k for some i = 2, ..., n, then λ̄_i = λ_i and v̄_i = v_i. 2) If v_2^j ≠ v_2^k, then λ̄_2 < λ_2 and |v̄_2^j − v̄_2^k| ≤ |v_2^j − v_2^k|. Moreover, if λ̄_q = λ_q and v̄_q = v_q for all q ≤ p, where p < n is some positive integer, then λ̄_{p+1} ≤ λ_{p+1} and |v̄_{p+1}^j − v̄_{p+1}^k| ≤ |v_{p+1}^j − v_{p+1}^k|.

Proof: From the discussion in section 1.1-2), we know that P̄ represents a reversible graph. Further, π is a left eigenvector associated with the unity eigenvalue of P̄. Therefore, there exists a similarity transformation matrix D = diag(π^{0.5}) such that A = D P D^{-1} and Â = D P̄ D^{-1} are symmetric matrices. Moreover, ΔA = Â − A is also symmetric, with the following structure:

$$\Delta A(p,q) = \begin{cases} -\alpha & \text{for } p = j,\ q = j \\ \sqrt{\alpha\beta} & \text{for } p = j,\ q = k \\ \sqrt{\alpha\beta} & \text{for } p = k,\ q = j \\ -\beta & \text{for } p = k,\ q = k \\ 0 & \text{otherwise.} \end{cases} \qquad (9)$$

To prove the multiple statements in the theorem, we follow arguments similar to those used in the proofs of Theorems 1

and 2, while using the fact that if v_i is an eigenvector associated with an eigenvalue λ of P (or P̄), then D v_i is an eigenvector associated with the eigenvalue λ of A (or Â). The proofs are organized into different cases for ease of presentation.

1) If v_i^j = v_i^k, then ΔP v_i = 0. Therefore, P̄ v_i = P v_i = λ_i v_i. The result follows immediately.

2) The eigenvalues of P (respectively, P̄) are identical to the eigenvalues of the symmetric matrix A (respectively, Â), since the matrices are related by a similarity transform. We can thus use the Courant-Fischer Theorem to characterize the eigenvalues and eigenvectors of P̄ with respect to those of P, in analogy with the proof of Theorem 2. Using the fact that P and P̄ are stochastic matrices and noting the form of the similarity transform, we see that the sub-dominant eigenvalues λ_2 and λ̄_2 can be written as

$$\lambda_2 = \max_{\|x\|_2 = 1,\ x \perp \pi^{0.5}} x^T A x \qquad (10)$$

and

$$\bar{\lambda}_2 = \max_{\|y\|_2 = 1,\ y \perp \pi^{0.5}} y^T \hat{A} y = \max\, y^T (A + \Delta A)\, y = \max \left( y^T A y - (\sqrt{\alpha}\, y^j - \sqrt{\beta}\, y^k)^2 \right), \qquad (11)$$

where the notation x ⊥ π^{0.5} indicates that the maximum is taken over all vectors of unit length that are orthogonal to the vector π^{0.5}, the dominant eigenvector of A (which has unit length since ‖π‖_1 = 1). The vectors x and y that achieve the maxima, subject to the constraint, are the eigenvectors associated with the eigenvalues λ_2 and λ̄_2, respectively. We notice that (√α y^j − √β y^k)^2 ≥ 0 for any vector y, with equality only if √β y^k = √α y^j (or, equivalently, v_2^j = v_2^k). It therefore follows that λ̄_2 < λ_2 if v_2^j ≠ v_2^k. To prove the result on the components of the sub-dominant eigenvectors, we again use the Courant-Fischer Theorem (Equations 10 and 11) to arrive at a contradiction. Let us denote the vectors that achieve the maxima (subject to the constraints) as x for Equation 10 and y for Equation 11. Also, let us assume (to obtain a contradiction) that |√α x^j − √β x^k| < |√α y^j − √β y^k|. Then, using x in Equation 11, we see that

$$x^T A x - (\sqrt{\alpha}\, x^j - \sqrt{\beta}\, x^k)^2 \le y^T A y - (\sqrt{\alpha}\, y^j - \sqrt{\beta}\, y^k)^2. \qquad (12)$$

Noting that x^T A x ≥ y^T A y, let x^T A x = y^T A y + C for some C ≥ 0.
Thus we find that

$$\lambda_2 - (\sqrt{\alpha}\, x^j - \sqrt{\beta}\, x^k)^2 \le \lambda_2 - C - (\sqrt{\alpha}\, y^j - \sqrt{\beta}\, y^k)^2. \qquad (13)$$

Equation 13 implies that |√α x^j − √β x^k| ≥ |√α y^j − √β y^k|, which contradicts our assumption. The result on the eigenvectors v_2 and v̄_2 follows from this contradiction, from the fact that v_2 = D^{-1} x and v̄_2 = D^{-1} y, and from the assumed relationship between ΔP(j,k) and ΔP(k,j). The results on v_{p+1} and v̄_{p+1}, under the condition that v̄_q = v_q and λ̄_q = λ_q for all q ≤ p < n, can be proved using a very similar argument.

4. ARBITRARY WEIGHT INCREMENTS IN A DIRECTED GRAPH
The results developed in sections 2 and 3 consider very specific changes to undirected or reversible graph topologies. A characterization of the change in the eigenvalues and/or eigenvectors for general topology modifications in arbitrary graphs has wider applications. In this section, we present some characterizations of the dominant eigenvector for more general topology changes in arbitrary directed and weighted graphs. Please note that all digraphs we consider are strongly connected, and hence the adjacency matrices are non-negative and irreducible. To avoid confusion between these results and those developed in sections 2 and 3, we use slightly different (and simplified) notation here. For the remainder of the paper, we consider a directed and weighted graph of n vertices labeled V_1, V_2, ..., V_n, with adjacency matrix denoted P. We use P̄ to denote the adjacency matrix of a graph resulting from a change in the topology. We assume that both P and P̄ capture strongly connected graphs, in which case they both have positive real eigenvalues of algebraic multiplicity 1 that are larger in magnitude than all other eigenvalues, i.e., that are dominant. The dominant eigenvectors of P and P̄ are denoted v and v̄ respectively, while the dominant eigenvalues are denoted λ and λ̄ respectively.
For a vector x, the notation x_{p:q} specifies a (q − p + 1)-component subvector, which contains the correspondingly indexed components of x (i.e., the components between the p-th and q-th components, inclusive). The outline of this section is as follows. In Theorem 4 we present a characterization of the dominant eigenvector when the weights of edges emerging from m distinct vertices (say V_1, ..., V_m) are arbitrarily increased. In terms of the adjacency matrix, this change in graph topology corresponds to the case where we arbitrarily increment the first m < n rows of P. In Theorem 5 we present a similar result on arbitrary weight increments to the edges directed into a given vertex. This corresponds to arbitrarily incrementing a single column, say the first column WLOG, of the graph adjacency matrix. We next present a characterization of the dominant eigenvector of the adjacency matrix when the weights of the edges emerging from m given vertices are arbitrarily increased. We stress here that the eigen-structure of the adjacency matrix is invariant to within a permutation, i.e., the order of the m vertices is not important; thus we focus on incrementing the first m rows WLOG.

Theorem 4. Consider arbitrarily increasing the weights of the edges emerging from vertices V_1, V_2, ..., V_m.

Then there exists i ∈ {1, 2, ..., m} such that v̄^i / v̄^j > v^i / v^j for j = m + 1, ..., n. We thus recover that, when the eigenvectors are normalized to unit length, v̄^i > v^i.

Proof: Incrementing rows strictly increases the dominant eigenvalue of a non-negative matrix associated with a connected graph [7]; thus λ̄ = λ + Δ for some Δ > 0. To prove the majorization of the eigenvector components, we consider a particular scaling of the dominant eigenvectors. Specifically, let the dominant eigenvectors be scaled so that

$$\bar{v} = \begin{bmatrix} v_{1:m} + q_1 \\ v_{m+1:n} + q_2 \end{bmatrix},$$

where q_1 is an m-component non-positive vector with q_1^i = 0 for some i ∈ {1, ..., m}. Notice that this scaling can be achieved by scaling down the perturbed eigenvector v̄ by the largest among the ratios v̄^i / v^i for i = 1, ..., m. We shall prove that q_2 is element-wise strictly negative for the assumed scaling. To do so, let us consider the last n − m equations of the eigenvector equation P̄ v̄ = λ̄ v̄. For notational convenience, we partition the matrix P (and similarly, P̄) as

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix},$$

such that P_{11} is an m × m matrix and the dimensions of the other partitions are consistent. Substituting λ̄ = λ + Δ and the assumed form for v̄ into the eigenvector equation P̄ v̄ = λ̄ v̄, and noting that the last n − m rows of P̄ equal those of P, we obtain

$$(\bar{\lambda} I - P_{22})\, q_2 = P_{21} q_1 - \Delta\, v_{m+1:n}.$$

It is easy to see that (λ̄ I − P_{22}) is an M-matrix. Using the fact that q_1 is entry-wise non-positive and that the inverse of an M-matrix is a non-negative matrix, we see that, for the assumed scaling of v̄, the vector q_2 is entry-wise strictly negative. The majorization result in the theorem statement follows immediately. The result presented in Theorem 4 gives us insight into how changes to the topology of an arbitrary directed graph affect the eigen-structure of the adjacency matrix.
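Theorem 4 (with m = 1) can be checked numerically on a small example; the weights below are illustrative assumptions, not from the paper. Starting from a row-stochastic P (so v is uniform), incrementing the first row raises both the dominant eigenvalue and vertex 1's relative eigenvector component.

```python
import numpy as np

def dominant(P):
    """Dominant eigenvalue and (positive) dominant eigenvector of an
    irreducible non-negative matrix."""
    w, V = np.linalg.eig(P)
    i = np.argmax(w.real)
    v = np.abs(V[:, i].real)     # the Perron vector can be taken positive
    return w[i].real, v / v.sum()

# Row-stochastic P, so v = (1, 1, 1)/3 before the change; increment row 0 (m = 1).
P = np.array([[0.1, 0.5, 0.4],
              [0.3, 0.2, 0.5],
              [0.6, 0.3, 0.1]])
Pbar = P.copy()
Pbar[0] += np.array([0.2, 0.1, 0.0])   # arbitrary increments to edges leaving V_1

lam, v = dominant(P)
lam_bar, v_bar = dominant(Pbar)
assert lam_bar > lam                   # row increments raise the dominant eigenvalue
# Theorem 4 (m = 1): v_bar[0]/v_bar[j] > v[0]/v[j] for the non-incremented vertices.
assert all(v_bar[0] / v_bar[j] > v[0] / v[j] for j in (1, 2))
```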
We now turn our attention to the case where the weights of edges into a given vertex (rather than out of a given vertex) are arbitrarily increased, in which case the entries in a given column of the adjacency matrix are increased arbitrarily. We find that, under certain conditions, arbitrarily incrementing the edge weights into the i-th vertex also increases the i-th component of the dominant eigenvector relative to the other components of the eigenvector. We present this result in the following theorem.

Theorem 5. Consider arbitrarily incrementing the edge weights incoming to the first vertex (WLOG). Let E = P̄ − P and Δ = λ̄ − λ. If (ΔI − E) is non-negative, then v̄^1 / v̄^j > v^1 / v^j for j = 2, ..., n. We thus recover that, when the eigenvectors are normalized to unit length, v̄^1 > v^1.

Proof: It is easy to see that Δ > 0 [7]. Moreover, from the Perron-Frobenius Theorem for non-negative matrices, the dominant eigenvectors are entry-wise non-negative [10]. We consider the scaling of the eigenvector v̄ for which

$$\bar{v} = \begin{bmatrix} v^1 \\ v_{2:n} + q \end{bmatrix},$$

where q is an (n − 1)-component vector. We wish to find conditions under which q is element-wise strictly negative, using perturbation results. In order to find such conditions, we study the eigenvector equation P̄ v̄ = λ̄ v̄. Substituting for λ̄ and v̄, we arrive at the following equation:

$$(\bar{\lambda} I - \bar{P}) \begin{bmatrix} 0 \\ q \end{bmatrix} = (E - \Delta I)\, v.$$

If (ΔI − E) is a non-negative matrix, then it can be seen that q has strictly negative entries. In order to find conditions for which (ΔI − E) is a non-negative matrix, we can use the various perturbation results for general matrices, such as the Ostrowski-Elsner Theorem [7]. We would also like to point out that our results, presented in Theorems 4 and 5, allow for the design of new edges that modify the eigenvectors in a specified way. As we pointed out earlier, eigenvectors of graph-adjacency matrices play an important role in characterizing various aspects of complex networked systems and associated algorithms.
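As a toy illustration of eigenvector design (this rank-one construction is our own sketch, not the paper's deferred method): a rank-one non-negative matrix realizes any prescribed positive dominant-eigenvector direction exactly, since P = v wᵀ maps every vector into span{v}, with dominant eigenvalue wᵀv.

```python
import numpy as np

# Goal (illustrative): choose edge weights so that the dominant eigenvector has
# prescribed component ratios, here v ∝ (4, 2, 1).
v_target = np.array([4.0, 2.0, 1.0])
w = np.array([0.3, 0.3, 0.4])       # any positive weighting vector works
P = np.outer(v_target, w)           # non-negative, rank one

lam, V = np.linalg.eig(P)
i = np.argmax(lam.real)
v = np.abs(V[:, i].real)
v *= v_target[0] / v[0]             # rescale for comparison
assert np.allclose(v, v_target)     # ratios 4 : 2 : 1 are achieved
assert np.isclose(lam[i].real, w @ v_target)
```

A rank-one graph is of course fully connected and rarely practical; the point is only that dominant-eigenvector ratios are a designable quantity, which is what the section's results exploit for more realistic edge-increment modifications.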
The results we developed inform on how these eigenvectors change when the underlying topology of the networked systems, and so the behavior of the algorithms, is modified. Using our results, one can develop frameworks for designing and/or changing existing topologies so as to shape various aspects of these systems.

5. REFERENCES
[1] Y. Wan, K. Namuduri, S. Akula, and M. Varanasi. Consensus building in distributed sensor networks with bipartite graph structures. In AIAA Guidance, Navigation, and Control Conference, Oregon.
[2] Y. Wan, S. Roy, and A. Saberi. Designing spatially heterogeneous strategies for control of virus spread. Systems Biology, IET, 2(4).
[3] D. Cvetković and S. Simić. Graph spectra in computer science. Linear Algebra and its Applications, 434(6).
[4] B. Ruhnau. Eigenvector-centrality - a node-centrality? Social Networks, 22(4).
[5] A. Langville and C. Meyer. A survey of eigenvector methods for web information retrieval. SIAM Review, 47(1).
[6] A. Berman and R. Plemmons. Nonnegative Matrices in the Mathematical Sciences.
[7] G. Stewart and J. Sun. Matrix Perturbation Theory, volume 175. Academic Press, New York.
[8] S. Roy, A. Saberi, and Y. Wan. Majorizations for the dominant eigenvector of a nonnegative matrix. In American Control Conference, Seattle, Washington, USA.
[9] F. Chung. Spectral Graph Theory, volume 92. American Mathematical Society.
[10] R. Gallager. Discrete Stochastic Processes, volume 101. Kluwer Academic Publishers, 1996.


More information

Introduction to Search Engine Technology Introduction to Link Structure Analysis. Ronny Lempel Yahoo Labs, Haifa

Introduction to Search Engine Technology Introduction to Link Structure Analysis. Ronny Lempel Yahoo Labs, Haifa Introduction to Search Engine Technology Introduction to Link Structure Analysis Ronny Lempel Yahoo Labs, Haifa Outline Anchor-text indexing Mathematical Background Motivation for link structure analysis

More information

A Control-Theoretic Perspective on the Design of Distributed Agreement Protocols, Part

A Control-Theoretic Perspective on the Design of Distributed Agreement Protocols, Part 9. A Control-Theoretic Perspective on the Design of Distributed Agreement Protocols, Part Sandip Roy Ali Saberi Kristin Herlugson Abstract This is the second of a two-part paper describing a control-theoretic

More information

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman

Kernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu

More information

Eigenvectors Via Graph Theory

Eigenvectors Via Graph Theory Eigenvectors Via Graph Theory Jennifer Harris Advisor: Dr. David Garth October 3, 2009 Introduction There is no problem in all mathematics that cannot be solved by direct counting. -Ernst Mach The goal

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

The local equivalence of two distances between clusterings: the Misclassification Error metric and the χ 2 distance

The local equivalence of two distances between clusterings: the Misclassification Error metric and the χ 2 distance The local equivalence of two distances between clusterings: the Misclassification Error metric and the χ 2 distance Marina Meilă University of Washington Department of Statistics Box 354322 Seattle, WA

More information

On the eigenvalues of specially low-rank perturbed matrices

On the eigenvalues of specially low-rank perturbed matrices On the eigenvalues of specially low-rank perturbed matrices Yunkai Zhou April 12, 2011 Abstract We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is

More information

Krylov Subspace Methods to Calculate PageRank

Krylov Subspace Methods to Calculate PageRank Krylov Subspace Methods to Calculate PageRank B. Vadala-Roth REU Final Presentation August 1st, 2013 How does Google Rank Web Pages? The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector

More information

Markov Chains, Random Walks on Graphs, and the Laplacian

Markov Chains, Random Walks on Graphs, and the Laplacian Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer

More information

Lecture 10. Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis

Lecture 10. Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis CS-621 Theory Gems October 18, 2012 Lecture 10 Lecturer: Aleksander Mądry Scribes: Mani Bastani Parizi and Christos Kalaitzis 1 Introduction In this lecture, we will see how one can use random walks to

More information

Spectra of Adjacency and Laplacian Matrices

Spectra of Adjacency and Laplacian Matrices Spectra of Adjacency and Laplacian Matrices Definition: University of Alicante (Spain) Matrix Computing (subject 3168 Degree in Maths) 30 hours (theory)) + 15 hours (practical assignment) Contents 1. Spectra

More information

Agreement algorithms for synchronization of clocks in nodes of stochastic networks

Agreement algorithms for synchronization of clocks in nodes of stochastic networks UDC 519.248: 62 192 Agreement algorithms for synchronization of clocks in nodes of stochastic networks L. Manita, A. Manita National Research University Higher School of Economics, Moscow Institute of

More information

CSI 445/660 Part 6 (Centrality Measures for Networks) 6 1 / 68

CSI 445/660 Part 6 (Centrality Measures for Networks) 6 1 / 68 CSI 445/660 Part 6 (Centrality Measures for Networks) 6 1 / 68 References 1 L. Freeman, Centrality in Social Networks: Conceptual Clarification, Social Networks, Vol. 1, 1978/1979, pp. 215 239. 2 S. Wasserman

More information

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1.

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Stationary Distributions Monday, September 28, 2015 2:02 PM No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Homework 1 due Friday, October 2 at 5 PM strongly

More information

Spectral Graph Theory Tools. Analysis of Complex Networks

Spectral Graph Theory Tools. Analysis of Complex Networks Spectral Graph Theory Tools for the Department of Mathematics and Computer Science Emory University Atlanta, GA 30322, USA Acknowledgments Christine Klymko (Emory) Ernesto Estrada (Strathclyde, UK) Support:

More information

On the simultaneous diagonal stability of a pair of positive linear systems

On the simultaneous diagonal stability of a pair of positive linear systems On the simultaneous diagonal stability of a pair of positive linear systems Oliver Mason Hamilton Institute NUI Maynooth Ireland Robert Shorten Hamilton Institute NUI Maynooth Ireland Abstract In this

More information

Lecture: Local Spectral Methods (1 of 4)

Lecture: Local Spectral Methods (1 of 4) Stat260/CS294: Spectral Graph Methods Lecture 18-03/31/2015 Lecture: Local Spectral Methods (1 of 4) Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning: these notes are still very rough. They provide

More information

Z-Pencils. November 20, Abstract

Z-Pencils. November 20, Abstract Z-Pencils J. J. McDonald D. D. Olesky H. Schneider M. J. Tsatsomeros P. van den Driessche November 20, 2006 Abstract The matrix pencil (A, B) = {tb A t C} is considered under the assumptions that A is

More information

The spectra of super line multigraphs

The spectra of super line multigraphs The spectra of super line multigraphs Jay Bagga Department of Computer Science Ball State University Muncie, IN jbagga@bsuedu Robert B Ellis Department of Applied Mathematics Illinois Institute of Technology

More information

Random Walks on Graphs. One Concrete Example of a random walk Motivation applications

Random Walks on Graphs. One Concrete Example of a random walk Motivation applications Random Walks on Graphs Outline One Concrete Example of a random walk Motivation applications shuffling cards universal traverse sequence self stabilizing token management scheme random sampling enumeration

More information

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs

Ma/CS 6b Class 23: Eigenvalues in Regular Graphs Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues

More information

NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES

NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES MIKE BOYLE. Introduction By a nonnegative matrix we mean a matrix whose entries are nonnegative real numbers. By positive matrix we mean a matrix

More information

On the mathematical background of Google PageRank algorithm

On the mathematical background of Google PageRank algorithm Working Paper Series Department of Economics University of Verona On the mathematical background of Google PageRank algorithm Alberto Peretti, Alberto Roveda WP Number: 25 December 2014 ISSN: 2036-2919

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004

642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004 642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004 Introduction Square matrices whose entries are all nonnegative have special properties. This was mentioned briefly in Section

More information

Calculating Web Page Authority Using the PageRank Algorithm

Calculating Web Page Authority Using the PageRank Algorithm Jacob Miles Prystowsky and Levi Gill Math 45, Fall 2005 1 Introduction 1.1 Abstract In this document, we examine how the Google Internet search engine uses the PageRank algorithm to assign quantitatively

More information

Geometric Mapping Properties of Semipositive Matrices

Geometric Mapping Properties of Semipositive Matrices Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices

More information

Review of Some Concepts from Linear Algebra: Part 2

Review of Some Concepts from Linear Algebra: Part 2 Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set

More information

The Kemeny Constant For Finite Homogeneous Ergodic Markov Chains

The Kemeny Constant For Finite Homogeneous Ergodic Markov Chains The Kemeny Constant For Finite Homogeneous Ergodic Markov Chains M. Catral Department of Mathematics and Statistics University of Victoria Victoria, BC Canada V8W 3R4 S. J. Kirkland Hamilton Institute

More information

A Control-Theoretic Approach to Distributed Discrete-Valued Decision-Making in Networks of Sensing Agents

A Control-Theoretic Approach to Distributed Discrete-Valued Decision-Making in Networks of Sensing Agents A Control-Theoretic Approach to Distributed Discrete-Valued Decision-Making in Networks of Sensing Agents Sandip Roy Kristin Herlugson Ali Saberi April 11, 2005 Abstract We address the problem of global

More information

Nonnegative Matrices I

Nonnegative Matrices I Nonnegative Matrices I Daisuke Oyama Topics in Economic Theory September 26, 2017 References J. L. Stuart, Digraphs and Matrices, in Handbook of Linear Algebra, Chapter 29, 2006. R. A. Brualdi and H. J.

More information

Markov Chains. Andreas Klappenecker by Andreas Klappenecker. All rights reserved. Texas A&M University

Markov Chains. Andreas Klappenecker by Andreas Klappenecker. All rights reserved. Texas A&M University Markov Chains Andreas Klappenecker Texas A&M University 208 by Andreas Klappenecker. All rights reserved. / 58 Stochastic Processes A stochastic process X tx ptq: t P T u is a collection of random variables.

More information

Spectral Properties of Matrix Polynomials in the Max Algebra

Spectral Properties of Matrix Polynomials in the Max Algebra Spectral Properties of Matrix Polynomials in the Max Algebra Buket Benek Gursoy 1,1, Oliver Mason a,1, a Hamilton Institute, National University of Ireland, Maynooth Maynooth, Co Kildare, Ireland Abstract

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 9, SEPTEMBER 2010 1987 Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE Abstract

More information

Spectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics

Spectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics Spectral Graph Theory and You: and Centrality Metrics Jonathan Gootenberg March 11, 2013 1 / 19 Outline of Topics 1 Motivation Basics of Spectral Graph Theory Understanding the characteristic polynomial

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Lecture Note 12: The Eigenvalue Problem

Lecture Note 12: The Eigenvalue Problem MATH 5330: Computational Methods of Linear Algebra Lecture Note 12: The Eigenvalue Problem 1 Theoretical Background Xianyi Zeng Department of Mathematical Sciences, UTEP The eigenvalue problem is a classical

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Eigenvalues and Eigenvectors: An Introduction

Eigenvalues and Eigenvectors: An Introduction Eigenvalues and Eigenvectors: An Introduction The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Link Analysis. Leonid E. Zhukov

Link Analysis. Leonid E. Zhukov Link Analysis Leonid E. Zhukov School of Data Analysis and Artificial Intelligence Department of Computer Science National Research University Higher School of Economics Structural Analysis and Visualization

More information

MATH 5524 MATRIX THEORY Pledged Problem Set 2

MATH 5524 MATRIX THEORY Pledged Problem Set 2 MATH 5524 MATRIX THEORY Pledged Problem Set 2 Posted Tuesday 25 April 207 Due by 5pm on Wednesday 3 May 207 Complete any four problems, 25 points each You are welcome to complete more problems if you like,

More information

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Frank Curtis, John Drew, Chi-Kwong Li, and Daniel Pragel September 25, 2003 Abstract We study central groupoids, central

More information

Markov Decision Processes

Markov Decision Processes Markov Decision Processes Lecture notes for the course Games on Graphs B. Srivathsan Chennai Mathematical Institute, India 1 Markov Chains We will define Markov chains in a manner that will be useful to

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Invertibility and stability. Irreducibly diagonally dominant. Invertibility and stability, stronger result. Reducible matrices

Invertibility and stability. Irreducibly diagonally dominant. Invertibility and stability, stronger result. Reducible matrices Geršgorin circles Lecture 8: Outline Chapter 6 + Appendix D: Location and perturbation of eigenvalues Some other results on perturbed eigenvalue problems Chapter 8: Nonnegative matrices Geršgorin s Thm:

More information

Spectral radius, symmetric and positive matrices

Spectral radius, symmetric and positive matrices Spectral radius, symmetric and positive matrices Zdeněk Dvořák April 28, 2016 1 Spectral radius Definition 1. The spectral radius of a square matrix A is ρ(a) = max{ λ : λ is an eigenvalue of A}. For an

More information

MAA704, Perron-Frobenius theory and Markov chains.

MAA704, Perron-Frobenius theory and Markov chains. November 19, 2013 Lecture overview Today we will look at: Permutation and graphs. Perron frobenius for non-negative. Stochastic, and their relation to theory. Hitting and hitting probabilities of chain.

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

Uncertainty and Randomization

Uncertainty and Randomization Uncertainty and Randomization The PageRank Computation in Google Roberto Tempo IEIIT-CNR Politecnico di Torino tempo@polito.it 1993: Robustness of Linear Systems 1993: Robustness of Linear Systems 16 Years

More information

series. Utilize the methods of calculus to solve applied problems that require computational or algebraic techniques..

series. Utilize the methods of calculus to solve applied problems that require computational or algebraic techniques.. 1 Use computational techniques and algebraic skills essential for success in an academic, personal, or workplace setting. (Computational and Algebraic Skills) MAT 203 MAT 204 MAT 205 MAT 206 Calculus I

More information

Discrete quantum random walks

Discrete quantum random walks Quantum Information and Computation: Report Edin Husić edin.husic@ens-lyon.fr Discrete quantum random walks Abstract In this report, we present the ideas behind the notion of quantum random walks. We further

More information

Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes

Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes Elena Virnik, TU BERLIN Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov

More information

Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings

Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline

More information

Spectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering. Some Slides

Spectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering. Some Slides Spectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering Some Slides Jean Gallier Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104,

More information

Strongly Regular Decompositions of the Complete Graph

Strongly Regular Decompositions of the Complete Graph Journal of Algebraic Combinatorics, 17, 181 201, 2003 c 2003 Kluwer Academic Publishers. Manufactured in The Netherlands. Strongly Regular Decompositions of the Complete Graph EDWIN R. VAN DAM Edwin.vanDam@uvt.nl

More information

ECEN 689 Special Topics in Data Science for Communications Networks

ECEN 689 Special Topics in Data Science for Communications Networks ECEN 689 Special Topics in Data Science for Communications Networks Nick Duffield Department of Electrical & Computer Engineering Texas A&M University Lecture 8 Random Walks, Matrices and PageRank Graphs

More information

Math 1553, Introduction to Linear Algebra

Math 1553, Introduction to Linear Algebra Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level

More information

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992.

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992. Perturbation results for nearly uncoupled Markov chains with applications to iterative methods Jesse L. Barlow December 9, 992 Abstract The standard perturbation theory for linear equations states that

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES

ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES SANTOSH N. KABADI AND ABRAHAM P. PUNNEN Abstract. Polynomially testable characterization of cost matrices associated

More information

Google Page Rank Project Linear Algebra Summer 2012

Google Page Rank Project Linear Algebra Summer 2012 Google Page Rank Project Linear Algebra Summer 2012 How does an internet search engine, like Google, work? In this project you will discover how the Page Rank algorithm works to give the most relevant

More information

CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory

CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory Tim Roughgarden & Gregory Valiant May 2, 2016 Spectral graph theory is the powerful and beautiful theory that arises from

More information

Spectral Analysis of k-balanced Signed Graphs

Spectral Analysis of k-balanced Signed Graphs Spectral Analysis of k-balanced Signed Graphs Leting Wu 1, Xiaowei Ying 1, Xintao Wu 1, Aidong Lu 1 and Zhi-Hua Zhou 2 1 University of North Carolina at Charlotte, USA, {lwu8,xying, xwu,alu1}@uncc.edu

More information

Ma/CS 6b Class 20: Spectral Graph Theory

Ma/CS 6b Class 20: Spectral Graph Theory Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Eigenvalues and Eigenvectors A an n n matrix of real numbers. The eigenvalues of A are the numbers λ such that Ax = λx for some nonzero vector x

More information

Perron Frobenius Theory

Perron Frobenius Theory Perron Frobenius Theory Oskar Perron Georg Frobenius (1880 1975) (1849 1917) Stefan Güttel Perron Frobenius Theory 1 / 10 Positive and Nonnegative Matrices Let A, B R m n. A B if a ij b ij i, j, A > B

More information

Lecture 3: graph theory

Lecture 3: graph theory CONTENTS 1 BASIC NOTIONS Lecture 3: graph theory Sonia Martínez October 15, 2014 Abstract The notion of graph is at the core of cooperative control. Essentially, it allows us to model the interaction topology

More information

Archive of past papers, solutions and homeworks for. MATH 224, Linear Algebra 2, Spring 2013, Laurence Barker

Archive of past papers, solutions and homeworks for. MATH 224, Linear Algebra 2, Spring 2013, Laurence Barker Archive of past papers, solutions and homeworks for MATH 224, Linear Algebra 2, Spring 213, Laurence Barker version: 4 June 213 Source file: archfall99.tex page 2: Homeworks. page 3: Quizzes. page 4: Midterm

More information

Consensus Problems on Small World Graphs: A Structural Study

Consensus Problems on Small World Graphs: A Structural Study Consensus Problems on Small World Graphs: A Structural Study Pedram Hovareshti and John S. Baras 1 Department of Electrical and Computer Engineering and the Institute for Systems Research, University of

More information

Ma/CS 6b Class 20: Spectral Graph Theory

Ma/CS 6b Class 20: Spectral Graph Theory Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Recall: Parity of a Permutation S n the set of permutations of 1,2,, n. A permutation σ S n is even if it can be written as a composition of an

More information

22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes

22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes Numerical Determination of Eigenvalues and Eigenvectors 22.4 Introduction In Section 22. it was shown how to obtain eigenvalues and eigenvectors for low order matrices, 2 2 and. This involved firstly solving

More information

NCS Lecture 8 A Primer on Graph Theory. Cooperative Control Applications

NCS Lecture 8 A Primer on Graph Theory. Cooperative Control Applications NCS Lecture 8 A Primer on Graph Theory Richard M. Murray Control and Dynamical Systems California Institute of Technology Goals Introduce some motivating cooperative control problems Describe basic concepts

More information

Lecture 15, 16: Diagonalization

Lecture 15, 16: Diagonalization Lecture 15, 16: Diagonalization Motivation: Eigenvalues and Eigenvectors are easy to compute for diagonal matrices. Hence, we would like (if possible) to convert matrix A into a diagonal matrix. Suppose

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

Fiedler s Theorems on Nodal Domains

Fiedler s Theorems on Nodal Domains Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A. Spielman September 19, 2018 7.1 Overview In today s lecture we will justify some of the behavior we observed when using eigenvectors

More information

Complex Networks CSYS/MATH 303, Spring, Prof. Peter Dodds
