Majorizations for the Eigenvectors of Graph Adjacency Matrices: A Tool for Complex Network Design


Rahul Dhal, Electrical Engineering and Computer Science, Washington State University, Pullman, WA
Sandip Roy, Electrical Engineering and Computer Science, Washington State University, Pullman, WA
Yan Wan, Department of Electrical Engineering, University of North Texas, Denton, TX

ABSTRACT
We develop majorization results that characterize changes in the eigenvector components of a graph's adjacency matrix when its topology is changed. Specifically, for general (weighted, directed) graphs, we characterize changes in dominant-eigenvector components for single-row and multi-row incrementations. We also show that topology changes can be tailored to set ratios between the components of the dominant eigenvector. For more limited graph classes (specifically, undirected and reversibly-structured ones), majorizations for components of the subdominant and other eigenvectors upon graph modification are also obtained.

Keywords
Algebraic Graph Theory, Network Design, Dynamical Networks, Adjacency Matrix, Eigenvectors, Majorization

1. INTRODUCTION
Designers and operators of complex networks are often concerned with modifying the network's structure to shape its dynamics in certain ways. For instance, in sensor-networking applications, designers may adjust the sensor-to-sensor communication topology to permit fast decision-making and thereby reduce energy consumption [1]. Similarly, in epidemic control, changing the population structure (e.g., through restrictions on travel and quarantine policies) can allow faster elimination of virus spread [2]. In general, these modifications effect complex changes in the dynamics of the network (beyond achieving the design goal of interest) that are not well understood and require new methods for characterization.
In this paper, we investigate the impact of topology changes on the spectral properties of the adjacency matrix of a network's graph, which is in many cases a crucial indicator of the network dynamics. (This work was supported by the National Science Foundation, Grant Number: . Corresponding Author.) Specifically, we focus on developing majorizations for the eigenvector components of the adjacency matrices of both directed and undirected graphs, upon structured modification of the graph topology. We consider several types of topology changes for symmetric, reversible, and arbitrary directed graph topologies, and develop majorizations for the dominant and/or subdominant eigenvalues and eigenvectors in these cases.

Graph spectra have been used extensively in various fields of computer science [3]. Among these fields, eigenvectors of adjacency matrices (or matrices related to adjacency matrices) are particularly important in the areas of web information retrieval and social-network analysis. In these domains, eigenvectors of adjacency matrices are used to define the notion of eigenvector centrality, a measure of a node's importance in the network's dynamics [4]. Google's PageRank algorithm and its precursor, Hyperlink-Induced Topic Search (HITS), are both based on variants of this spectral notion of nodal importance. For a brief review of these and other similar algorithms based on eigenvectors of graph matrices, see [5]. For these computing applications, our results show how structural changes to the network topology will affect a node's centrality measure; as such, they provide insight into algorithm and topology modification to change centrality scores. The eigenvector majorization results that we develop rely critically on 1) classical results on the eigenvalues of nonnegative matrices [6], and 2) matrix perturbation concepts.
From one viewpoint, our results extend existing eigenvalue-majorization analyses of nonnegative matrices [7] toward comparative characterizations of eigenvectors (see also [8]). From another viewpoint, our results can be viewed as providing tighter bounds on eigenvector components than would be obtained from perturbation arguments (see [7]), for the special class of nonnegative matrices.

The remainder of the article is organized as follows. In Section 1.1, we motivate and formulate the study of eigenvector properties for adjacency matrices of graphs of three types: symmetric (or undirected), reversible (in the sense of Markov-chain reversibility), and arbitrary directed graphs. We also define the graph modifications of interest for these graph classes. In Section 2, we provide eigenvector majorization results for the case of symmetric weight increments in an undirected graph, wherein the edge weight between a pair of vertices is incremented. In Section 3, we consider balanced weight increments in a reversible graph, wherein the weights of the directed edges between a pair of distinct vertices are increased in a specific way. In Section 4, we consider a more general case where edge weights in an arbitrary directed graph are increased in complex ways, but focus on majorizations for the dominant eigenvector only.

1.1 Problem Formulation
We consider a weighted and directed graph with n vertices, V_1, V_2, ..., V_n. For a given pair of distinct vertices V_j and V_k, there may or may not be a directed edge from V_j to V_k. If there is a directed edge from V_j to V_k, we denote the weight of the edge as e_jk > 0. In addition, we permit a directed edge from a vertex V_j to itself, with edge weight e_jj > 0. The topology of the graph can be represented using an n x n adjacency matrix P defined as follows:

P(j,k) = e_jk, if there is a directed edge from V_j to V_k,     (1)
       = 0, otherwise.

We denote the eigenvalues of P as λ_1, λ_2, ..., λ_n, where without loss of generality (WLOG) we assume that |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_n|. From classical results on nonnegative matrices, we immediately see that P has a real eigenvalue whose magnitude is at least as large as that of any other eigenvalue; we label this eigenvalue λ_1. In the case that the graph is strongly connected, this real eigenvalue in fact has algebraic multiplicity 1 and is strictly dominant [9]. It is worth pointing out that the remaining eigenvalues of P may be complex, may have algebraic or geometric multiplicities greater than one, and may have associated Jordan blocks of size greater than 1. We use the notation v_i for any right eigenvector associated with the eigenvalue λ_i. In this paper, we study the effect of graph edge-weight increases on the adjacency matrix's eigenvectors, for three different classes of graphs.
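As a concrete illustration of these definitions, the following sketch builds the adjacency matrix of a hypothetical 4-vertex strongly connected digraph (the weights are illustrative, not from the paper) and extracts the dominant eigenvalue λ_1 and an associated right eigenvector; because this example graph is also aperiodic, λ_1 is strictly dominant and the eigenvector can be taken entrywise positive:

```python
import numpy as np

# Hypothetical 4-vertex strongly connected digraph; the self-loop on V_1
# makes it aperiodic, so the Perron root is strictly dominant.
P = np.array([
    [1.0, 2.0, 0.0, 1.0],
    [1.0, 0.0, 3.0, 0.0],
    [0.0, 1.0, 0.0, 2.0],
    [2.0, 0.0, 1.0, 0.0],
])

eigvals, eigvecs = np.linalg.eig(P)
i = np.argmax(eigvals.real)          # the Perron root has the largest real part
lam1 = eigvals[i].real
v1 = eigvecs[:, i].real
v1 = v1 / np.linalg.norm(v1)
if v1.sum() < 0:                     # fix the sign ambiguity of eig
    v1 = -v1

assert abs(eigvals[i].imag) < 1e-9               # lambda_1 is real
assert np.all(v1 > 0)                            # Perron vector is entrywise positive
assert np.all(lam1 >= np.abs(eigvals) - 1e-9)    # dominant in magnitude
```

The Perron root necessarily lies between the smallest and largest row sums of P (here between 3 and 4), which provides a quick sanity check on the computed λ_1.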
For each graph class, we consider particular structured topology changes (edge-weight increments). The graph classes and topology modifications that we focus on are representative of engineered modifications of networks, and also permit substantial analysis of eigenvectors (as detailed below). Let us enumerate the particular cases considered here, in order of increasingly general graph classes. We point out that in these descriptions and the subsequent development of results, we denote the p-th component of a vector x as x^p.

1) Symmetric Weight Increments to Undirected (Symmetric) Graphs. As a first case, we focus on the class of graphs such that e_jk = e_kj for every pair of distinct vertices V_j and V_k. We refer to these graphs as undirected or symmetric graphs, and note that the adjacency matrix is symmetric. For undirected graphs, the eigenvalues of P are real and all are in Jordan blocks of size one. Let the eigenvalues of P in decreasing order of magnitude be |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_n|, with associated eigenvectors v_1, v_2, ..., v_n. The topology change that we examine for the undirected-graph case is as follows: for a given pair of distinct vertices V_j and V_k, the edge weights e_jk and e_kj are identically incremented by a nonnegative quantity α, where α ≤ min(e_jj, e_kk). The self-loops on the vertices V_j and V_k, i.e. the weights e_jj and e_kk, are reduced by the same amount α. That is, we consider modifications that increase the interaction strength between two vertices at the cost of self-influences or weights. The graph resulting from such a topology change is also an undirected graph. Let us denote the adjacency matrix of the resulting graph by P̂. It is easy to see that P̂ = P + ΔP, where ΔP is the following symmetric matrix:

ΔP(p,q) = -α, for p = j and q = j,
           α, for p = j and q = k,
           α, for p = k and q = j,     (2)
          -α, for p = k and q = k,
           0, otherwise.

Let the eigenvalues of P̂ be |λ̂_1| ≥ |λ̂_2| ≥ ... ≥ |λ̂_n|, with associated eigenvectors v̂_1, v̂_2, ..., v̂_n.
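The increment structure of Eq. (2) is simple to realize numerically. The sketch below uses a hypothetical 3-vertex undirected graph with self-loops (all weights illustrative, not from the paper), forms P̂ = P + ΔP, confirms that symmetry and the total weight are preserved, and numerically checks the majorizations established later in Theorem 2 of Section 2: the dominant eigenvalue does not increase, and the j-th and k-th components of the dominant eigenvector are pulled together.

```python
import numpy as np

# Hypothetical 3-vertex undirected graph with self-loops (weights illustrative).
P = np.array([
    [2.0, 1.0, 0.5],
    [1.0, 3.0, 1.0],
    [0.5, 1.0, 2.0],
])
j, k, alpha = 0, 1, 0.5          # increment with alpha <= min(e_jj, e_kk)

dP = np.zeros_like(P)            # the matrix of Eq. (2)
dP[j, j] -= alpha; dP[k, k] -= alpha
dP[j, k] += alpha; dP[k, j] += alpha
P_hat = P + dP
assert np.allclose(P_hat, P_hat.T)         # still an undirected graph
assert np.allclose(P_hat.sum(), P.sum())   # total weight preserved

def dom_eig(S):
    """Unit-length dominant eigenvector of a symmetric matrix."""
    w, V = np.linalg.eigh(S)               # eigh: eigenvalues in ascending order
    return w[-1], V[:, -1]

lam1, v1 = dom_eig(P)
lam1_hat, v1_hat = dom_eig(P_hat)
assert lam1_hat <= lam1 + 1e-12                           # eigenvalue does not rise
assert abs(v1_hat[k] - v1_hat[j]) <= abs(v1[k] - v1[j])   # components come closer
```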
We refer to this case as a Symmetric Weight Increment to an Undirected Graph. Our aim is to characterize the eigenvalues and eigenvectors of P̂ as compared to those of P. We present the results for this case in Section 2. Symmetric increments to undirected graphs of the described form arise in numerous application domains, including Markov-chain analysis, mitigation of disease spread, and characterization of linear consensus protocols [2, 1]; these structural modifications often arise when resources are redistributed to improve the response properties of the network, see e.g. [2].

2) Balanced Weight Increments to Reversible Graphs. We consider the class of graphs for which 1) the weights of the edges leaving each vertex sum to 1 (i.e., Σ_k e_jk = 1 for each j), and 2) there exists a unique vector π such that ||π||_1 = 1 and π_j e_jk = π_k e_kj for each graph edge. The graphs associated with homogeneous reversible Markov chains have these properties, and so we refer to these graphs as reversible ones. We note that such reversible graphs also arise in e.g. consensus algorithms for sensor networks [1]. The adjacency matrix of a reversible graph is a stochastic matrix, for which π is a left eigenvector associated with the dominant eigenvalue at unity. In addition to these properties, the adjacency matrix P is also symmetrizable through a diagonal similarity transformation; specifically, A = DPD^(-1) is symmetric if D = diag(π^0.5). (Here the diag(x) operator on a q-component vector x returns a q x q diagonal matrix that has the components of x along its main diagonal.) For this case, we note that the eigenvalues of P are real and nondefective. We denote them as 1 = λ_1 ≥ λ_2 ≥ ... ≥ λ_n, with v_i denoting the right eigenvector associated with eigenvalue λ_i. Let us now discuss the topology change we consider in our development.
Motivated by both Markov chain and sensor network consensus applications, we consider topology changes that maintain the reversibility structure and leave the left eigenvector associated with
the adjacency matrix unchanged. Specifically, the edge weight between two vertices V_j and V_k, i.e. e_jk, is incremented by an amount α > 0, while the self-loop edge weight e_jj is reduced by the same amount. Similarly, the edge weight e_kj is increased by an amount β > 0, while the edge weight e_kk is reduced by the same amount. Further, the quantities α and β are assumed to satisfy α/β = π_k/π_j. Let the adjacency matrix of the resulting graph be P̂. It is easy to see that P̂ = P + ΔP, where ΔP is the following matrix:

ΔP(p,q) = -α, for p = j and q = j,
           α, for p = j and q = k,
           β, for p = k and q = j,     (3)
          -β, for p = k and q = k,
           0, otherwise.

The vector π remains the left eigenvector of P̂ associated with the dominant eigenvalue at unity, since β/α = π_j/π_k and hence π ΔP = 0. Further,

π_j P̂(j,k) = π_j P(j,k) + π_j α
            = π_k P(k,j) + π_k β     (4)
            = π_k P̂(k,j).

Therefore, P̂ is also the adjacency matrix of a reversible graph. Let the eigenvalues of P̂ be 1 = λ̂_1 ≥ λ̂_2 ≥ ... ≥ λ̂_n, with v̂_i denoting the right eigenvector associated with eigenvalue λ̂_i. We refer to this type of topology change as a Balanced Weight Increment to a Reversible Graph. Our aim is to characterize the subdominant eigenvalues and eigenvectors of P̂ as compared to those of P. We present the results for this case in Section 3.

Let us briefly discuss the relevance of the above problem to the analysis of Markov chains. For a Markov chain, the adjacency matrix serves as the transition matrix of the chain, and the entries of the transition matrix represent conditional probabilities of state transitions [10]; i.e., P(p,q) is the probability of the chain transitioning to state q given that the current state is p. The vector π specifies the steady-state state-occupancy probabilities. Thus the balanced topology changes described above can be seen as changing transition probabilities in a way that maintains the steady-state distribution as well as reversibility.
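To make the balanced increment of Eq. (3) concrete, the following sketch uses a hypothetical 3-state birth-death (hence reversible) chain with stationary vector π = (0.25, 0.5, 0.25); all numbers are illustrative, not from the paper. It applies the increment with α/β = π_k/π_j and verifies that P̂ remains stochastic, that π is unchanged, that detailed balance (reversibility) still holds, and that D = diag(π^0.5) symmetrizes P̂:

```python
import numpy as np

# Hypothetical 3-state reversible (birth-death) chain; pi is its stationary
# distribution and satisfies detailed balance pi_p P(p,q) = pi_q P(q,p).
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])
pi = np.array([0.25, 0.50, 0.25])

j, k = 0, 1
beta = 0.1
alpha = beta * pi[k] / pi[j]          # balance condition alpha/beta = pi_k/pi_j

dP = np.zeros_like(P)                 # the matrix of Eq. (3)
dP[j, j] -= alpha; dP[j, k] += alpha
dP[k, k] -= beta;  dP[k, j] += beta
P_hat = P + dP

assert np.allclose(P_hat.sum(axis=1), 1.0)     # still stochastic
assert np.allclose(pi @ P_hat, pi)             # pi is still the left eigenvector
balance = pi[:, None] * P_hat
assert np.allclose(balance, balance.T)         # detailed balance: still reversible

D = np.diag(pi ** 0.5)                         # symmetrizing transformation
A_hat = D @ P_hat @ np.linalg.inv(D)
assert np.allclose(A_hat, A_hat.T)             # reversible => symmetrizable
```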
Thus, the analysis of this case provides insight into how transition probabilities in a reversible Markov chain can be modified so as to maintain the steady state while shaping dynamical performance.

3) Arbitrary Increments to Weighted Graphs. The previous cases are limited to special graph classes (undirected, reversible), and also consider only very specially structured changes to the topologies. In this article, we also consider a class of quite general modifications to arbitrary directed graphs. For this case, we focus exclusively on characterizing the dominant eigenvector of the modified topology. Specifically, in Section 4, we present two results: one concerned with incrementing the weights of the edges departing from multiple vertices, and one concerned with weight changes on the edges entering a given vertex. We also present results on designing the graph topology to achieve certain ratios among the dominant-eigenvector components. To avoid confusion with cases 1 and 2 above, we defer the mathematical description of the arbitrary topology changes to Section 4.

2. SYMMETRIC TOPOLOGY CHANGES IN AN UNDIRECTED GRAPH
In this section, we develop majorization results for eigenvector components of adjacency matrices, upon application of symmetric topology changes to undirected graphs, as described in Section 1.1, Case 1. First we present a preliminary theorem which indicates that, when the eigenvectors of the adjacency matrices have a certain special structure, the associated eigenvalues and the eigenvectors themselves are invariant to the defined symmetric changes in the topology (Theorem 1). This straightforward result serves as a starting point for more general spectral analyses, as given in Theorem 2.

Theorem 1. Consider a symmetric weight increment between vertices V_j and V_k in an undirected graph, as described in Section 1.1, Case 1.
If v_i^j = v_i^k for some eigenvector v_i associated with the eigenvalue λ_i of P, then v_i is also an eigenvector of P̂, with eigenvalue λ_i. In other words, the eigenvalue λ_i and its eigenvector v_i are invariant to the topology change.

Proof: The proof follows immediately from the fact that P̂v_i = (P + ΔP)v_i = λ_i v_i + ΔP v_i. However, since v_i^j = v_i^k, we automatically find that ΔP v_i = 0, and so the result follows.

Theorem 1 provides us with a mechanism for changing an undirected graph's structure in such a manner that particular eigenvectors of its adjacency matrix remain unchanged. The above invariance result may prove applicable for algorithm design in sensor networks, in cases where changing the underlying network topology is a primary tool for improving performance; the result shows how changes can be made that do not affect critical mode shapes, while the remaining dynamics are changed. However, the result is restrictive, in the sense that the invariance holds only when edges are added between vertices whose eigenvector components are identical. The simple result of Theorem 1 serves as a stepping stone towards a more general characterization. Specifically, for symmetric topology changes between arbitrary vertices in an undirected graph, we find that the components of the subdominant eigenvector associated with the two ends of the added edge come closer to each other; similar results hold for eigenvectors associated with other eigenvalues, in a certain sequential sense. This result is formalized in the following theorem.
Theorem 2. Consider a symmetric weight increment between vertices V_j and V_k in an undirected graph, as described in Section 1.1, Case 1. Then λ̂_1 ≤ λ_1, with equality only if v_1^j = v_1^k. Moreover, |v̂_1^k - v̂_1^j| ≤ |v_1^k - v_1^j|. Furthermore, if λ̂_q = λ_q and v̂_q = v_q for all q ≤ p, where p < n is some positive integer, then 1) λ̂_(p+1) ≤ λ_(p+1), with equality only if v_(p+1)^j = v_(p+1)^k; and 2) |v̂_(p+1)^k - v̂_(p+1)^j| ≤ |v_(p+1)^k - v_(p+1)^j|.

Proof: The majorization of eigenvalues specified in the theorem is akin to existing eigenvalue-majorization results [7], but is included here for the sake of completeness and because the eigenvector majorization follows therefrom. The majorization of eigenvector components is then proved by using the Courant-Fischer Theorem to arrive at a contradiction. The topology change is structured in such a way that the adjacency matrices P and P̂ are both symmetric. Hence, the eigenvalues (as well as the corresponding eigenvectors) of both matrices can be found using the Courant-Fischer Theorem. In particular, λ_1 and λ̂_1 can be written as

λ_1 = max_x (x^T P x)     (5)

and

λ̂_1 = max_y (y^T P̂ y) = max_y (y^T (P + ΔP) y) = max_y (y^T P y - α(y^k - y^j)^2),     (6)

where the maxima are taken over vectors of unit length, i.e. ||x||_2 = 1 and ||y||_2 = 1. The vectors that achieve the maxima are the eigenvectors associated with the eigenvalues λ_1 and λ̂_1. Notice that α(y^k - y^j)^2 ≥ 0 for any vector y, with equality only if y^j = y^k. It is therefore straightforward to see that λ̂_1 ≤ λ_1, with equality only if v_1^j = v_1^k.

To prove the result on the change in eigenvector entries, we again use the Courant-Fischer Theorem (Equations 5 and 6) to arrive at a contradiction. Let us denote the vectors that achieve the maxima as x* for Equation 5 and y* for Equation 6. Also, assume that |x*^k - x*^j| < |y*^k - y*^j|. Then, since y* achieves the maximum in Equation 6, we see that

x*^T P x* - α(x*^k - x*^j)^2 ≤ y*^T P y* - α(y*^k - y*^j)^2.     (7)

Noting that x*^T P x* ≥ y*^T P y*, let x*^T P x* = y*^T P y* + C for some C ≥ 0.
Thus we find that

α(x*^k - x*^j)^2 ≥ C + α(y*^k - y*^j)^2.     (8)

Equation 8 implies that |x*^k - x*^j| ≥ |y*^k - y*^j|, which is a contradiction to our assumption. The result on the eigenvectors v_1 and v̂_1 thus follows directly from this contradiction. The Courant-Fischer Theorem also permits computation of the eigenvalues λ_2, ..., λ_n. These eigenvalues can be found by optimizing the same objectives as in Equations 5 and 6, but with the additional constraint that the vectors achieving the maxima be orthogonal to the subspace spanned by the eigenvectors associated with eigenvalues larger in magnitude. The result on v̂_(p+1) and λ̂_(p+1), when λ̂_q = λ_q and v̂_q = v_q for all q ≤ p, follows directly from this fact.

We remark that results similar to Theorem 2 can be developed for the smallest (most negative) eigenvalue λ_n and the associated eigenvector v_n. However, we have focused our attention on characterizing the dominant and subdominant eigenvalues and their eigenvectors, since these spectral properties most often inform characterizations of complex-network dynamics and algorithms.

3. BALANCED TOPOLOGY CHANGES IN REVERSIBLE GRAPHS
In Section 2 we limited ourselves to the case where the original matrix and the perturbation are symmetric, as described in Section 1.1, Case 1. These eigenvector majorization results can be extended to balanced topology changes to reversible graphs, as described in Section 1.1, Case 2. This extension is made possible by the observation that a diagonal similarity transformation can be used to equivalence a reversible graph's adjacency matrix with a symmetric matrix. The following theorem presents the result for reversible topologies formally.

Theorem 3. Consider a reversible graph topology, and assume that a balanced weight increment is performed between vertices V_j and V_k, as described in Section 1.1, Case 2.
Then the eigenvalues/eigenvectors before and after the perturbation have the following properties: 1) If v_i^j = v_i^k for some i = 2, ..., n, then λ̂_i = λ_i and v̂_i = v_i. 2) If v_2^j ≠ v_2^k, then λ̂_2 < λ_2 and |v̂_2^j - v̂_2^k| ≤ |v_2^j - v_2^k|. Moreover, if λ̂_q = λ_q and v̂_q = v_q for all q ≤ p, where p < n is some positive integer, then λ̂_(p+1) ≤ λ_(p+1) and |v̂_(p+1)^j - v̂_(p+1)^k| ≤ |v_(p+1)^j - v_(p+1)^k|.

Proof: From the discussion in Section 1.1, Case 2, we know that P̂ represents a reversible graph. Further, π is a left eigenvector associated with the unity eigenvalue of P̂. Therefore, there exists a similarity transformation matrix D = diag(π^0.5) such that A = DPD^(-1) and Â = DP̂D^(-1) are symmetric matrices. Moreover, ΔA = Â - A is also symmetric, with the following structure:

ΔA(p,q) = -α, for p = j and q = j,
           √(αβ), for p = j and q = k,
           √(αβ), for p = k and q = j,     (9)
          -β, for p = k and q = k,
           0, otherwise.

To prove the multiple statements in the theorem, we follow arguments similar to those used in the proofs of Theorems 1
and 2, while using the fact that if v_i is an eigenvector associated with an eigenvalue λ of P (or P̂), then Dv_i is an eigenvector associated with the eigenvalue λ of A (or Â). The proofs are organized into different cases for ease of presentation.

1) If v_i^j = v_i^k, then ΔP v_i = 0. Therefore, P̂v_i = Pv_i = λ_i v_i. The result follows immediately.

2) The eigenvalues of P (respectively, P̂) are identical to the eigenvalues of the symmetric matrix A (respectively, Â), since the matrices are related by a similarity transform. We can thus use the Courant-Fischer Theorem to characterize the eigenvalues and eigenvectors of P̂ with respect to those of P, in analogy with the proof of Theorem 2. Using the fact that P and P̂ are stochastic matrices and noting the form of the similarity transform, we see that the subdominant eigenvalues λ_2 and λ̂_2 can be written as

λ_2 = max_(x ⊥ 1_n) (x^T A x)     (10)

and

λ̂_2 = max_(y ⊥ 1_n) (y^T Â y) = max_(y ⊥ 1_n) (y^T (A + ΔA) y) = max_(y ⊥ 1_n) (y^T A y - (√α y^j - √β y^k)^2),     (11)

where we have used the notation x ⊥ 1_n to indicate that the maximum is taken over all vectors of unit length that are orthogonal to the n-component vector whose entries all equal 1/√n. The vectors x* and y* that achieve the maxima, subject to the constraint, are the eigenvectors associated with the eigenvalues λ_2 and λ̂_2, respectively. We notice that (√α y^j - √β y^k)^2 ≥ 0 for any vector y, with equality only if √β y^k = √α y^j (or, equivalently, v_2^j = v_2^k). It therefore follows that λ̂_2 < λ_2 if v_2^j ≠ v_2^k.

To prove the result on the components of the subdominant eigenvectors, we again use the Courant-Fischer Theorem (Equations 10 and 11) to arrive at a contradiction. Let us denote the vectors that achieve the maxima (subject to the constraints) as x* for Equation 10 and y* for Equation 11. Also, let us assume (to obtain a contradiction) that |√α x*^j - √β x*^k| < |√α y*^j - √β y*^k|. Then, using x* in Equation 11, we see that

x*^T A x* - (√α x*^j - √β x*^k)^2 ≤ y*^T A y* - (√α y*^j - √β y*^k)^2.     (12)

Noting that x*^T A x* ≥ y*^T A y*, let x*^T A x* = y*^T A y* + C for some C ≥ 0.
Thus we find that

λ_2 - (√α x*^j - √β x*^k)^2 ≤ λ_2 - C - (√α y*^j - √β y*^k)^2.     (13)

Equation 13 implies that |√α x*^j - √β x*^k| ≥ |√α y*^j - √β y*^k|, which is a contradiction to our assumption. The result on the eigenvectors v_2 and v̂_2 follows from this contradiction, from the fact that v_2 = D^(-1) x* and v̂_2 = D^(-1) y*, and from the assumed relationship between ΔP(j,k) and ΔP(k,j). The results on v_(p+1) and v̂_(p+1), under the condition that v̂_q = v_q and λ̂_q = λ_q for all q ≤ p < n, can be proved using a very similar argument.

4. ARBITRARY WEIGHT INCREMENTS IN A DIRECTED GRAPH
The results developed in Sections 2 and 3 consider very specific changes to undirected or reversible graph topologies. A characterization of the change in the eigenvalues and/or eigenvectors for general topology modifications in arbitrary graphs has wider applications. In this section, we present some characterizations of the dominant eigenvector, for more general topology changes in arbitrary directed and weighted graphs. Please note that all the digraphs we consider are strongly connected, and hence the adjacency matrices are nonnegative and irreducible. To avoid confusion between these results and those developed in Sections 2 and 3, we use slightly different (and simplified) notation here. For the remainder of the paper, we consider a directed and weighted graph of n vertices labeled V_1, V_2, ..., V_n, with an adjacency matrix denoted as P. We use P̂ to denote the adjacency matrix of the graph resulting from a change in the topology. We assume that both P and P̂ capture strongly connected graphs, in which case both have positive real eigenvalues of algebraic multiplicity 1 that are larger in magnitude than all other eigenvalues, i.e. that are dominant. The dominant eigenvectors of P and P̂ are denoted as v and v̂ respectively, while the dominant eigenvalues are denoted as λ and λ̂ respectively.
For a vector x, the notation x_(p:q) specifies the (q - p + 1)-component subvector that contains the correspondingly indexed components of x (i.e., the components between the p-th and q-th components, inclusive). The outline of this section is as follows. In Theorem 4 we present a characterization of the dominant eigenvector when the weights of edges emerging from m distinct vertices (say V_1, ..., V_m) are arbitrarily increased. In terms of the adjacency matrix, this change in graph topology corresponds to the case where we arbitrarily increment the first m < n rows of P. In Theorem 5 we present a similar result on arbitrary weight increments to the edges directed into a given vertex. This corresponds to arbitrarily incrementing a single column of the graph adjacency matrix, say the first column (WLOG).

We first present the characterization of the dominant eigenvector of the adjacency matrix when the weights of the edges emerging from m given vertices are arbitrarily increased. We stress here that the eigenstructure of the adjacency matrix is invariant to within a permutation of the vertex labels, i.e. the order of the m vertices is not important; thus we focus on incrementing the first m rows WLOG.

Theorem 4. Consider arbitrarily increasing the weights of the edges emerging from vertices V_1, V_2, ..., V_m.
Then there exists i ∈ {1, 2, ..., m} such that v̂^i / v̂^j > v^i / v^j for all j = m + 1, ..., n. We thus recover that, when the eigenvectors are normalized to unit length, v̂^i > v^i.

Proof: Incrementing rows strictly increases the dominant eigenvalue of a nonnegative matrix associated with a connected graph [7]; thus λ̂ = λ + Δ for some Δ > 0. To prove the majorization of the eigenvector components, we consider a particular scaling of the dominant eigenvectors. Specifically, let the dominant eigenvector after modification be scaled so that

v̂ = [ v_(1:m) + q_1 ; v_((m+1):n) + q_2 ],

where q_1 is an m-component nonpositive vector with q_1^i = 0 for some i ∈ {1, ..., m}. Notice that this scaling can be achieved by scaling down the perturbed eigenvector by the largest among the ratios v̂^i / v^i for i = 1, ..., m. We shall prove that q_2 is elementwise strictly negative for the assumed scaling. To do so, let us consider the last n - m equations of the eigenvector equation P̂v̂ = λ̂v̂. For notational convenience, we partition the matrix P (and similarly, P̂) as

P = [ P_11  P_12 ; P_21  P_22 ],

such that P_11 is an m x m matrix and the dimensions of the other partitions are consistent. Substituting λ̂ = λ + Δ and v̂ into the eigenvector equation P̂v̂ = λ̂v̂, and also noting the assumed form for v̂, we obtain

(λ̂I - P_22) q_2 = P_21 q_1 - Δ v_((m+1):n).

It is easy to see that (λ̂I - P_22) is an M-matrix. Using the facts that q_1 is entrywise nonpositive, that Δ > 0 and v is entrywise positive, and that the inverse of an M-matrix is a nonnegative matrix, we see that for the assumed scaling of v̂ the vector q_2 is entrywise strictly negative. The majorization result in the theorem statement follows immediately.

The result presented in Theorem 4 gives us insight into how changes to the topology of an arbitrary directed graph affect the eigenstructure of the adjacency matrix.
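A quick numerical sanity check of Theorem 4, with m = 1 and a hypothetical 4-vertex strongly connected digraph (weights illustrative, not from the paper):

```python
import numpy as np

def perron(P):
    """Dominant (Perron) eigenpair of an irreducible nonnegative matrix."""
    w, V = np.linalg.eig(P)
    i = np.argmax(w.real)                # the Perron root has largest real part
    v = V[:, i].real
    return w[i].real, v / v.sum()        # scale so components sum to one

# Theorem 4 with m = 1: arbitrarily increment weights of edges leaving V_1
# (the first row of P). The digraph below is hypothetical and aperiodic.
P = np.array([
    [1.0, 2.0, 0.0, 1.0],
    [1.0, 0.0, 3.0, 0.0],
    [0.0, 1.0, 0.0, 2.0],
    [2.0, 0.0, 1.0, 0.0],
])
P_hat = P.copy()
P_hat[0, 1] += 1.5                       # arbitrary increment within row 1

lam, v = perron(P)
lam_hat, v_hat = perron(P_hat)

assert lam_hat > lam                     # row increments raise the Perron root
# V_1's dominant-eigenvector component rises relative to every other vertex:
assert all(v_hat[0] / v_hat[j] > v[0] / v[j] for j in range(1, 4))
```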
We now turn our attention to the case where the weights of edges into a given vertex (rather than out of a given vertex) are arbitrarily increased, in which case the entries in a given column of the adjacency matrix are increased arbitrarily. We find that, under certain conditions, arbitrarily incrementing the edge weights into the i-th vertex also increases the i-th component of the dominant eigenvector relative to the other components of the eigenvector. We present this result in the following theorem.

Theorem 5. Consider arbitrarily incrementing the edge weights incoming to the first vertex (WLOG). Let E = P̂ - P and Δ = λ̂ - λ. If (ΔI - E) is nonnegative, then v̂^1 / v̂^j > v^1 / v^j for j = 2, ..., n. We thus recover that, when the eigenvectors are normalized to unit length, v̂^1 > v^1.

Proof: It is easy to see that Δ > 0 [7]. Moreover, from the Perron-Frobenius Theorem for nonnegative matrices, the dominant eigenvectors are entrywise nonnegative [10]. We consider the scaling of the eigenvector v̂ for which

v̂ = [ v^1 ; v_(2:n) + q ],

where q is an (n - 1)-component vector. We wish to find conditions under which q is elementwise strictly negative, using perturbation results. To find such conditions, we study the eigenvector equation P̂v̂ = λ̂v̂. Substituting for λ̂ and v̂, we arrive at the following equation:

(λ̂I - P̂) [ 0 ; q ] = -(ΔI - E) v.

If (ΔI - E) is a nonnegative matrix, then it can be seen that q has strictly negative entries. Conditions under which (ΔI - E) is nonnegative can be found using the various perturbation results for general matrices, such as the Ostrowski-Elsner Theorem [7].

We would also like to point out that our results, presented in Theorems 4 and 5, allow for the design of new edges that modify the eigenvectors in a specified way. As we pointed out earlier, eigenvectors of graph-adjacency matrices play an important role in characterizing various aspects of complex networked systems and associated algorithms.
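The quantities appearing in Theorem 5 can be examined numerically. In the sketch below (hypothetical digraph, illustrative weights, not from the paper), we increment the weights of edges incoming to V_1 (the first column of P), confirm that Δ = λ̂ - λ > 0, and evaluate the theorem's hypothesis; for this particular increment the hypothesis fails, in which case the theorem draws no conclusion:

```python
import numpy as np

def perron(P):
    """Dominant (Perron) eigenpair of an irreducible nonnegative matrix."""
    w, V = np.linalg.eig(P)
    i = np.argmax(w.real)
    v = V[:, i].real
    return w[i].real, v / np.linalg.norm(v) * np.sign(v.sum())

# Hypothetical 4-vertex strongly connected, aperiodic digraph; increment the
# weights of edges incoming to V_1 (the first column of P).
P = np.array([
    [1.0, 2.0, 0.0, 1.0],
    [1.0, 0.0, 3.0, 0.0],
    [0.0, 1.0, 0.0, 2.0],
    [2.0, 0.0, 1.0, 0.0],
])
P_hat = P.copy()
P_hat[1, 0] += 0.5; P_hat[3, 0] += 0.5   # heavier edges V_2 -> V_1, V_4 -> V_1

E = P_hat - P
lam, v = perron(P)
lam_hat, v_hat = perron(P_hat)
Delta = lam_hat - lam
assert Delta > 0                          # any increment raises the Perron root

# Theorem 5's hypothesis is entrywise nonnegativity of (Delta*I - E); it can
# fail for a given increment, and then the theorem asserts nothing:
hyp = bool(np.all(Delta * np.eye(4) - E >= 0))
if hyp:
    assert all(v_hat[0] / v_hat[j] > v[0] / v[j] for j in range(1, 4))
```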
The results we have developed inform how these eigenvectors change when the underlying topology of the networked system, and thereby the behavior of the associated algorithms, is modified. Using our results, one can develop frameworks for designing and/or changing existing topologies so as to shape various aspects of these systems.

5. REFERENCES
[1] Y. Wan, K. Namuduri, S. Akula, and M. Varanasi. Consensus building in distributed sensor networks with bipartite graph structures. In AIAA Guidance, Navigation, and Control Conference, Oregon.
[2] Y. Wan, S. Roy, and A. Saberi. Designing spatially heterogeneous strategies for control of virus spread. IET Systems Biology, 2(4).
[3] D. Cvetković and S. Simić. Graph spectra in computer science. Linear Algebra and its Applications, 434(6).
[4] B. Ruhnau. Eigenvector-centrality: a node-centrality? Social Networks, 22(4).
[5] A. Langville and C. Meyer. A survey of eigenvector methods for web information retrieval. SIAM Review, 47(1).
[6] A. Berman and R. Plemmons. Nonnegative Matrices in the Mathematical Sciences.
[7] G. Stewart and J. Sun. Matrix Perturbation Theory. Academic Press, New York.
[8] S. Roy, A. Saberi, and Y. Wan. Majorizations for the dominant eigenvector of a nonnegative matrix. In Proceedings of the American Control Conference, Seattle, Washington, USA.
[9] F. Chung. Spectral Graph Theory, volume 92. American Mathematical Society.
[10] R. Gallager. Discrete Stochastic Processes, volume 101. Kluwer Academic Publishers, 1996.
More informationMarkov Chains, Stochastic Processes, and Matrix Decompositions
Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction PerronFrobenius Matrix Decompositions and Markov Chains Spectral
More informationSpectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics
Spectral Graph Theory and You: and Centrality Metrics Jonathan Gootenberg March 11, 2013 1 / 19 Outline of Topics 1 Motivation Basics of Spectral Graph Theory Understanding the characteristic polynomial
More informationInvertibility and stability. Irreducibly diagonally dominant. Invertibility and stability, stronger result. Reducible matrices
Geršgorin circles Lecture 8: Outline Chapter 6 + Appendix D: Location and perturbation of eigenvalues Some other results on perturbed eigenvalue problems Chapter 8: Nonnegative matrices Geršgorin s Thm:
More informationSpectral radius, symmetric and positive matrices
Spectral radius, symmetric and positive matrices Zdeněk Dvořák April 28, 2016 1 Spectral radius Definition 1. The spectral radius of a square matrix A is ρ(a) = max{ λ : λ is an eigenvalue of A}. For an
More informationAlgebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes
Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes Elena Virnik, TU BERLIN Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov
More informationStrongly Regular Decompositions of the Complete Graph
Journal of Algebraic Combinatorics, 17, 181 201, 2003 c 2003 Kluwer Academic Publishers. Manufactured in The Netherlands. Strongly Regular Decompositions of the Complete Graph EDWIN R. VAN DAM Edwin.vanDam@uvt.nl
More informationNCS Lecture 8 A Primer on Graph Theory. Cooperative Control Applications
NCS Lecture 8 A Primer on Graph Theory Richard M. Murray Control and Dynamical Systems California Institute of Technology Goals Introduce some motivating cooperative control problems Describe basic concepts
More information22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes
Numerical Determination of Eigenvalues and Eigenvectors 22.4 Introduction In Section 22. it was shown how to obtain eigenvalues and eigenvectors for low order matrices, 2 2 and. This involved firstly solving
More informationCONSENSUS BUILDING IN SENSOR NETWORKS AND LONG TERM PLANNING FOR THE NATIONAL AIRSPACE SYSTEM. Naga Venkata Swathik Akula
CONSENSUS BUILDING IN SENSOR NETWORKS AND LONG TERM PLANNING FOR THE NATIONAL AIRSPACE SYSTEM Naga Venkata Swathik Akula Thesis Prepared for the Degree of MASTER OF SCIENCE UNIVERSITY OF NORTH TEXAS May
More informationA Note on Eigenvalues of Perturbed Hermitian Matrices
A Note on Eigenvalues of Perturbed Hermitian Matrices ChiKwong Li RenCang Li July 2004 Let ( H1 E A = E H 2 Abstract and Ã = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.
More informationPseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from ThomsonReuters Journal Citations Reports
Pseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from ThomsonReuters Journal Citations Reports Jevin West and Carl T. Bergstrom November 25, 2008 1 Overview There
More informationLinear Algebra Methods for Data Mining
Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 2. Basic Linear Algebra continued Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki
More informationData Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings
Data Analysis and Manifold Learning Lecture 3: Graphs, Graph Matrices, and Graph Embeddings Radu Horaud INRIA Grenoble RhoneAlpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline
More informationRandomized Gossip Algorithms
2508 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 52, NO 6, JUNE 2006 Randomized Gossip Algorithms Stephen Boyd, Fellow, IEEE, Arpita Ghosh, Student Member, IEEE, Balaji Prabhakar, Member, IEEE, and Devavrat
More informationCS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory
CS168: The Modern Algorithmic Toolbox Lectures #11 and #12: Spectral Graph Theory Tim Roughgarden & Gregory Valiant May 2, 2016 Spectral graph theory is the powerful and beautiful theory that arises from
More information1 Searching the World Wide Web
Hubs and Authorities in a Hyperlinked Environment 1 Searching the World Wide Web Because diverse users each modify the link structure of the WWW within a relatively small scope by creating webpages on
More informationGoogle Page Rank Project Linear Algebra Summer 2012
Google Page Rank Project Linear Algebra Summer 2012 How does an internet search engine, like Google, work? In this project you will discover how the Page Rank algorithm works to give the most relevant
More informationA Note on Google s PageRank
A Note on Google s PageRank According to Google, googlesearch on a given topic results in a listing of most relevant web pages related to the topic. Google ranks the importance of webpages according to
More informationChapter 4 & 5: Vector Spaces & Linear Transformations
Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think
More informationLecture 4: Introduction to Graph Theory and Consensus. Cooperative Control Applications
Lecture 4: Introduction to Graph Theory and Consensus Richard M. Murray Caltech Control and Dynamical Systems 16 March 2009 Goals Introduce some motivating cooperative control problems Describe basic concepts
More informationGraph Models The PageRank Algorithm
Graph Models The PageRank Algorithm AnnaKarin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 The PageRank Algorithm I Invented by Larry Page and Sergey Brin around 1998 and
More informationThe Google Markov Chain: convergence speed and eigenvalues
U.U.D.M. Project Report 2012:14 The Google Markov Chain: convergence speed and eigenvalues Fredrik Backåker Examensarbete i matematik, 15 hp Handledare och examinator: Jakob Björnberg Juni 2012 Department
More information1.10 Matrix Representation of Graphs
42 Basic Concepts of Graphs 1.10 Matrix Representation of Graphs Definitions: In this section, we introduce two kinds of matrix representations of a graph, that is, the adjacency matrix and incidence matrix
More informationMath 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank
Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank David Glickenstein November 3, 4 Representing graphs as matrices It will sometimes be useful to represent graphs
More informationLinear algebra and applications to graphs Part 1
Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces
More informationSENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN*
SIAM J Matrix Anal Appl c 1994 Society for Industrial and Applied Mathematics Vol 15, No 3, pp 715728, July, 1994 001 SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN* CARL D MEYER Abstract
More informationInteger programming: an introduction. Alessandro Astolfi
Integer programming: an introduction Alessandro Astolfi Outline Introduction Examples Methods for solving ILP Optimization on graphs LP problems with integer solutions Summary Introduction Integer programming
More informationLink Analysis Information Retrieval and Data Mining. Prof. Matteo Matteucci
Link Analysis Information Retrieval and Data Mining Prof. Matteo Matteucci Hyperlinks for Indexing and Ranking 2 Page A Hyperlink Page B Intuitions The anchor text might describe the target page B Anchor
More information17.1 Directed Graphs, Undirected Graphs, Incidence Matrices, Adjacency Matrices, Weighted Graphs
Chapter 17 Graphs and Graph Laplacians 17.1 Directed Graphs, Undirected Graphs, Incidence Matrices, Adjacency Matrices, Weighted Graphs Definition 17.1. A directed graph isa pairg =(V,E), where V = {v
More informationBoolean InnerProduct Spaces and Boolean Matrices
Boolean InnerProduct Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver
More informationMATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)
MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m
More information1.3 Convergence of Regular Markov Chains
Markov Chains and Random Walks on Graphs 3 Applying the same argument to A T, which has the same λ 0 as A, yields the row sum bounds Corollary 0 Let P 0 be the transition matrix of a regular Markov chain
More informationCompletely positive matrices of order 5 with ĈP graph
Note di Matematica ISSN 11232536, eissn 15900932 Note Mat. 36 (2016) no. 1, 123 132. doi:10.1285/i15900932v36n1p123 Completely positive matrices of order 5 with ĈP graph Micol Cedolin Castello 2153,
More informationRefined Inertia of Matrix Patterns
Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 24 2017 Refined Inertia of Matrix Patterns Kevin N. Vander Meulen Redeemer University College, kvanderm@redeemer.ca Jonathan Earl
More informationFundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved
Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural
More informationOnline Social Networks and Media. Link Analysis and Web Search
Online Social Networks and Media Link Analysis and Web Search How to Organize the Web First try: Human curated Web directories Yahoo, DMOZ, LookSmart How to organize the web Second try: Web Search Information
More informationAffine iterations on nonnegative vectors
Affine iterations on nonnegative vectors V. Blondel L. Ninove P. Van Dooren CESAME Université catholique de Louvain Av. G. Lemaître 4 B348 LouvainlaNeuve Belgium Introduction In this paper we consider
More informationHow works. or How linear algebra powers the search engine. M. Ram Murty, FRSC Queen s Research Chair Queen s University
How works or How linear algebra powers the search engine M. Ram Murty, FRSC Queen s Research Chair Queen s University From: gomath.com/geometry/ellipse.php Metric mishap causes loss of Mars orbiter
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize
More information5 Quiver Representations
5 Quiver Representations 5. Problems Problem 5.. Field embeddings. Recall that k(y,..., y m ) denotes the field of rational functions of y,..., y m over a field k. Let f : k[x,..., x n ] k(y,..., y m )
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationPowerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.
Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.
More information1 T 1 = where 1 is the allones vector. For the upper bound, let v 1 be the eigenvector corresponding. u:(u,v) E v 1(u)
CME 305: Discrete Mathematics and Algorithms Instructor: Reza Zadeh (rezab@stanford.edu) Final Review Session 03/20/17 1. Let G = (V, E) be an unweighted, undirected graph. Let λ 1 be the maximum eigenvalue
More informationCS 277: Data Mining. Mining Web Link Structure. CS 277: Data Mining Lectures Analyzing Web Link Structure Padhraic Smyth, UC Irvine
CS 277: Data Mining Mining Web Link Structure Class Presentations Inclass, Tuesday and Thursday next week 2person teams: 6 minutes, up to 6 slides, 3 minutes/slides each person 1person teams 4 minutes,
More informationThe Eigenvalue Problem: Perturbation Theory
Jim Lambers MAT 610 Summer Session 200910 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just
More informationCS 664 Segmentation (2) Daniel Huttenlocher
CS 664 Segmentation (2) Daniel Huttenlocher Recap Last time covered perceptual organization more broadly, focused in on pixelwise segmentation Covered local graphbased methods such as MST and FelzenszwalbHuttenlocher
More informationChapter 2. Error Correcting Codes. 2.1 Basic Notions
Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.
More informationAn Application of the permanentdeterminant method: computing the Zindex of trees
Department of Mathematics wtrnumber20132 An Application of the permanentdeterminant method: computing the Zindex of trees Daryl Deford March 2013 Postal address: Department of Mathematics, Washington
More informationALMOST SURE CONVERGENCE OF RANDOM GOSSIP ALGORITHMS
ALMOST SURE CONVERGENCE OF RANDOM GOSSIP ALGORITHMS Giorgio Picci with T. Taylor, ASU Tempe AZ. Wofgang Runggaldier s Birthday, Brixen July 2007 1 CONSENSUS FOR RANDOM GOSSIP ALGORITHMS Consider a finite
More informationMath Linear Algebra Final Exam Review Sheet
Math 151 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a componentwise operation. Two vectors v and w may be added together as long as they contain the same number n of
More informationOn Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs
On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs Saeed Ahmadizadeh a, Iman Shames a, Samuel Martin b, Dragan Nešić a a Department of Electrical and Electronic Engineering, Melbourne
More information1 Principal component analysis and dimensional reduction
Linear Algebra Working Group :: Day 3 Note: All vector spaces will be finitedimensional vector spaces over the field R. 1 Principal component analysis and dimensional reduction Definition 1.1. Given an
More informationLecture 5: Random Walks and Markov Chain
Spectral Graph Theory and Applications WS 20/202 Lecture 5: Random Walks and Markov Chain Lecturer: Thomas Sauerwald & He Sun Introduction to Markov Chains Definition 5.. A sequence of random variables
More informationLecture 20 : Markov Chains
CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called
More informationeigenvalues, markov matrices, and the power method
eigenvalues, markov matrices, and the power method Slides by Olson. Some taken loosely from Jeff Jauregui, Some from Semeraro L. Olson Department of Computer Science University of Illinois at UrbanaChampaign
More informationClassical Complexity and FixedParameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems
Classical Complexity and FixedParameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems Rani M. R, Mohith Jagalmohanan, R. Subashini Binary matrices having simultaneous consecutive
More informationConvex Optimization of Graph Laplacian Eigenvalues
Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the
More informationSystems of Algebraic Equations and Systems of Differential Equations
Systems of Algebraic Equations and Systems of Differential Equations Topics: 2 by 2 systems of linear equations Matrix expression; Ax = b Solving 2 by 2 homogeneous systems Functions defined on matrices
More informationComputational Aspects of Aggregation in Biological Systems
Computational Aspects of Aggregation in Biological Systems Vladik Kreinovich and Max Shpak University of Texas at El Paso, El Paso, TX 79968, USA vladik@utep.edu, mshpak@utep.edu Summary. Many biologically
More informationMath 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name:
Math 34/84  Exam Blue Exam Solutions December 4, 8 Instructor: Dr. S. Cooper Name: Read each question carefully. Be sure to show all of your work and not just your final conclusion. You may not use your
More informationNONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction
NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques
More informationBASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x
BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,
More informationAn indicator for the number of clusters using a linear map to simplex structure
An indicator for the number of clusters using a linear map to simplex structure Marcus Weber, Wasinee Rungsarityotin, and Alexander Schliep Zuse Institute Berlin ZIB Takustraße 7, D495 Berlin, Germany
More informationCS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works
CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The
More information4. Ergodicity and mixing
4. Ergodicity and mixing 4. Introduction In the previous lecture we defined what is meant by an invariant measure. In this lecture, we define what is meant by an ergodic measure. The primary motivation
More informationOn the adjacency matrix of a block graph
On the adjacency matrix of a block graph R. B. Bapat StatMath Unit Indian Statistical Institute, Delhi 7SJSS Marg, New Delhi 110 016, India. email: rbb@isid.ac.in Souvik Roy Economics and Planning Unit
More informationLecture 2: September 8
CS294 Markov Chain Monte Carlo: Foundations & Applications Fall 2009 Lecture 2: September 8 Lecturer: Prof. Alistair Sinclair Scribes: Anand Bhaskar and Anindya De Disclaimer: These notes have not been
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationWeb Structure Mining Nodes, Links and Influence
Web Structure Mining Nodes, Links and Influence 1 Outline 1. Importance of nodes 1. Centrality 2. Prestige 3. Page Rank 4. Hubs and Authority 5. Metrics comparison 2. Link analysis 3. Influence model 1.
More informationComplex Laplacians and Applications in MultiAgent Systems
1 Complex Laplacians and Applications in MultiAgent Systems JiuGang Dong, and Li Qiu, Fellow, IEEE arxiv:1406.186v [math.oc] 14 Apr 015 Abstract Complexvalued Laplacians have been shown to be powerful
More informationSystem theory and system identification of compartmental systems Hof, Jacoba Marchiena van den
University of Groningen System theory and system identification of compartmental systems Hof, Jacoba Marchiena van den IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF)
More informationChapter 2 Spectra of Finite Graphs
Chapter 2 Spectra of Finite Graphs 2.1 Characteristic Polynomials Let G = (V, E) be a finite graph on n = V vertices. Numbering the vertices, we write down its adjacency matrix in an explicit form of n
More informationSpectral Graph Theory and its Applications. Daniel A. Spielman Dept. of Computer Science Program in Applied Mathematics Yale Unviersity
Spectral Graph Theory and its Applications Daniel A. Spielman Dept. of Computer Science Program in Applied Mathematics Yale Unviersity Outline Adjacency matrix and Laplacian Intuition, spectral graph drawing
More informationAN APPLICATION OF LINEAR ALGEBRA TO NETWORKS
AN APPLICATION OF LINEAR ALGEBRA TO NETWORKS K. N. RAGHAVAN 1. Statement of the problem Imagine that between two nodes there is a network of electrical connections, as for example in the following picture
More informationand let s calculate the image of some vectors under the transformation T.
Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =
More informationData Analysis and Manifold Learning Lecture 7: Spectral Clustering
Data Analysis and Manifold Learning Lecture 7: Spectral Clustering Radu Horaud INRIA Grenoble RhoneAlpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline of Lecture 7 What is spectral
More informationReview of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem
Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as
More informationSpectral Graph Theory
Spectral Graph Theory Aaron Mishtal April 27, 2016 1 / 36 Outline Overview Linear Algebra Primer History Theory Applications Open Problems Homework Problems References 2 / 36 Outline Overview Linear Algebra
More informationConsensus, Flocking and Opinion Dynamics
Consensus, Flocking and Opinion Dynamics Antoine Girard Laboratoire Jean Kuntzmann, Université de Grenoble antoine.girard@imag.fr International Summer School of Automatic Control GIPSA Lab, Grenoble, France,
More informationLecture 1 and 2: Introduction and Graph theory basics. Spring EE 194, Networked estimation and control (Prof. Khan) January 23, 2012
Lecture 1 and 2: Introduction and Graph theory basics Spring 2012  EE 194, Networked estimation and control (Prof. Khan) January 23, 2012 Spring 2012: EE19402 Networked estimation and control Schedule
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATHGA 2011.003 / CSCIGA 2945.003, Fall 2014 September 18th,
More informationComparison of perturbation bounds for the stationary distribution of a Markov chain
Linear Algebra and its Applications 335 (00) 37 50 www.elsevier.com/locate/laa Comparison of perturbation bounds for the stationary distribution of a Markov chain Grace E. Cho a, Carl D. Meyer b,, a Mathematics
More informationGoogle PageRank. Francesco Ricci Faculty of Computer Science Free University of BozenBolzano
Google PageRank Francesco Ricci Faculty of Computer Science Free University of BozenBolzano fricci@unibz.it 1 Content p Linear Algebra p Matrices p Eigenvalues and eigenvectors p Markov chains p Google
More informationVery few Moore Graphs
Very few Moore Graphs Anurag Bishnoi June 7, 0 Abstract We prove here a well known result in graph theory, originally proved by Hoffman and Singleton, that any nontrivial Moore graph of diameter is regular
More informationUMIACSTR July CSTR 2721 Revised March Perturbation Theory for. Rectangular Matrix Pencils. G. W. Stewart.
UMIASTR95 July 99 STR 272 Revised March 993 Perturbation Theory for Rectangular Matrix Pencils G. W. Stewart abstract The theory of eigenvalues and eigenvectors of rectangular matrix pencils is complicated
More informationMarkov Chains Handout for Stat 110
Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a ToolBox Linear Equation Systems Discretization of differential equations: solving linear equations
More informationRandomization and Gossiping in TechnoSocial Networks
Randomization and Gossiping in TechnoSocial Networks Roberto Tempo CNRIEIIT Consiglio Nazionale delle Ricerche Politecnico ditorino roberto.tempo@polito.it CPSN Social Network Layer humans Physical Layer
More informationProperties of Matrices and Operations on Matrices
Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,
More information