Network observability and localization of the source of diffusion based on a subset of nodes
Sabina Zejnilović, João Gomes, Bruno Sinopoli

Carnegie Mellon University, Department of Electrical and Computer Engineering, Pittsburgh, PA
sabinaz@cmu.edu, brunos@ece.cmu.edu

Institute for Systems and Robotics, Instituto Superior Técnico (ISR/IST), Technical University of Lisbon (UTL), Portugal
jpg@isr.ist.utl.pt

Abstract

Identifying the patient zero of an epidemic outbreak, locating the person who started a rumor in a social network, finding the computer that initiated the spreading of a computer virus: these are all applications of localizing the source of diffusion in a network. Since most networks of interest are very large, we are usually able to observe only a part of the network. In this paper, we first present a model for the dynamics of network diffusion similar to the state update of a linear time-varying system. Based on this model, we provide a sufficient condition for observability of the network, i.e., we establish when the partial information available to us is sufficient to uniquely localize the source. We also connect the problem of finding the minimum number of observed nodes to the problem of the metric dimension of a graph. We then present different methods to perform source localization, depending on network observability.

I. INTRODUCTION

In today's world, we are part of many different networks in which diffusion of various phenomena takes place. Infectious diseases spread over contact networks, information and trends propagate over social networks, and viruses are disseminated over computer networks. Whether the purpose is identifying the culprit, controlling and preventing further infection, or spotting trendsetters, localizing the source of diffusion is an important task.
Recently, there has been a surge of research addressing this challenge for different diffusion scenarios. In [1], the goal is to identify the source of a rumor knowing which nodes have been infected by a certain time. A source estimator based on a metric called rumor centrality is proposed, and it corresponds to the maximum likelihood estimator for regular trees. In [2], the observations include not only the states of the nodes (susceptible/infected), but also the times of infection. However, only a subset of nodes can be observed, as in most real-world networks it is infeasible to have access to all the nodes. An estimator that is optimal for tree networks, and suboptimal for general networks, is presented, and several strategies for the best choice of observed nodes are compared experimentally. An algorithm for identifying multiple sources of epidemics, based on the Minimum Description Length principle, is proposed in [3]. The single best source is computed using the eigenvector associated with the smallest eigenvalue of the Laplacian sub-matrix. Once the best source is found, a new source is computed by removing the previously chosen source and solving again on the smaller infected graph. Multiple sources are also localized in [4], by identifying the nodes whose removal most reduces the largest eigenvalue of the adjacency matrix. Experimental results show that the proposed technique is able to identify the source nodes if the graph approximates a tree sufficiently well. A greedy algorithm is presented in [5] to identify the sources of a rumor from the observations of a preselected number of nodes. Several strategies for choosing the most informative nodes to observe, based on different centrality measures, are evaluated experimentally.

(Support for this research was provided by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through the Carnegie Mellon Portugal Program.)
In this paper, we likewise assume a propagation model in which nodes, once infected/informed, remain in that state, as in [1]-[5]. This assumption corresponds to trend and rumor propagation, or to the spreading of certain diseases and viruses where recovery happens on a larger time scale than infection spreading. We address the problem of localizing the source of diffusion based only on observations of a part of the network, as having access to the states of all nodes in large-scale networks is not a practical assumption. The choice of observed nodes influences the performance of source estimators, as shown in [2], [5], where different selection strategies are proposed and evaluated through simulation. We present a new model for the dynamics of network diffusion which allows us not only to perform source localization, but also to analyze which choices of observers can lead to correct source identification. Diffusion through the network is modeled similarly to state evolution in linear time-varying systems. The ability to correctly identify the source of diffusion, based only on partial knowledge of the network, corresponds to the concept of system observability. Just as system noise is not considered in the analysis of system observability, we also consider only a deterministic, noiseless propagation model. This contrasts with [1], [2], where propagation times between nodes are modeled as random variables. We derive a sufficient condition which, when satisfied, guarantees that the choice of observers is such that a source will be localized,
regardless of its placement within the network. Since the resources allocated for observing nodes are usually limited, we are interested in finding the minimum number of nodes that will allow us to localize the source. This corresponds to finding the minimum number of communities to monitor for a disease outbreak, or which individuals to observe in a social network. Under our propagation assumptions, this is a known problem: finding the metric dimension of a graph [6], or determining the minimum cardinality of a resolving set [7]. For an arbitrary graph, this problem is NP-hard [6], but explicit results exist for some families of graphs [8]. Here we review some of these results in the context of the minimum number of needed observers, and illustrate the performance of an available approximation algorithm. We also address the problem of identifying the source suspects when the sufficient observability condition is not satisfied. Our formulation of source localization can be viewed as an $\ell_0$-norm optimization problem, and using results from compressive sensing [9], we resort to an $\ell_1$-norm relaxation. Due to the structure of our problem, this relaxation still recovers the original optimal solution. Additionally, when the observations are such that multiple nodes are equally likely suspects for the source, the solution of the relaxed optimization problem correctly identifies all of them. The proposed model for network diffusion is presented in the next section, while network observability and the minimum number of observed nodes are discussed in Section III. Source localization approaches for different cases of network observability are described in Section IV, and we conclude in Section V.

II. MODELING DIFFUSION IN A NETWORK

A network of $N$ nodes is represented by a graph $G = \{V, E\}$, where $V = \{1, 2, \ldots, N\}$ is the set of vertices representing the nodes and $E \subseteq V \times V$ is the set of edges.
The nodes can correspond, for example, to people in a social network or computers in a communication network. There is an edge between nodes $i$ and $j$, $(i,j) \in E$, if nodes $i$ and $j$ can communicate directly. If $(i,j) \in E$ implies $(j,i) \in E$, the graph is called undirected. We will assume $G$ to be undirected, as infections and rumors spread through contacts and ties which are typically bidirectional. Additionally, we consider a connected graph, meaning that there is a path between any two vertices; otherwise some parts of the network would be isolated and irrelevant to the diffusion process. We take the network topology to be known. The distance between two vertices in a graph is the number of edges in the shortest path connecting them. A walk is a sequence of vertices (possibly repeated) in which each vertex is adjacent to the preceding one; the length of a walk is the number of edges it uses. The adjacency matrix $A$ of the graph $G$ is an $N \times N$ symmetric matrix with elements $a_{ij} = 1$ if $(i,j) \in E$ and $a_{ij} = 0$ otherwise. The propagation model we analyze is referred to in the literature as the Susceptible-Infected (SI) model: once a node is infected or informed, it remains in that state. Initially, a single infected node is present in the network, denoted the source node $s$; it becomes active at time 0. The source node corresponds, for example, to the patient zero in an epidemic outbreak or a trendsetter in a social network. We assume deterministic propagation: a node infected at time $t-1$ infects all of its neighbors at the following time instant $t$, where $t$ is a discrete time index, with probability 1. The time it takes for a node to become infected therefore equals its distance from the source node $s$. The nodes whose states can be observed, and whose infection times are known, are denoted as observer nodes $o_1, o_2, \ldots, o_K$.
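As a concrete illustration of the deterministic SI dynamics just described (our own sketch, not part of the original paper), the following snippet propagates an infection over an assumed adjacency-list graph. Since every infected node infects all of its neighbors at the next step, each node's infection time comes out equal to its distance from the source:

```python
from collections import deque

def si_diffusion(adj, source):
    """Deterministic SI propagation: at each discrete time step every
    infected node infects all of its neighbors with probability 1.
    Returns the infection time t_i of every node, which equals its
    distance from the source."""
    times = {source: 0}
    frontier = [source]
    t = 0
    while frontier:
        t += 1
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in times:   # susceptible -> infected at time t
                    times[v] = t
                    nxt.append(v)
        frontier = nxt
    return times

# small illustrative graph: path 1-2-3-4 with a branch 2-5
adj = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
print(si_diffusion(adj, 1))  # {1: 0, 2: 1, 3: 2, 5: 2, 4: 3}
```

The level-by-level loop is just a breadth-first traversal, which is exactly why infection times and graph distances coincide under this model.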
Typically, due to limited resources, the number of observer nodes $K$ is much smaller than the total number of nodes $N$, as not all individuals report hearing a rumor, nor are the infection times of all patients available. The state of node $i$ at time $t$ is denoted by a binary variable $x_i(t)$, equal to 1 only if node $i$ has been infected by time $t$. The vector $x(t) \in \mathbb{R}^N$ describes the states of all nodes at time $t$. The initial state $x(0)$ equals $e_s$, a column vector with all entries equal to 0 except the $s$-th, which equals 1 and corresponds to the index of the source node $s$. The infection time of node $i$ is denoted $t_i$; it is the time of the state change, so that $x_i(t_i - 1) = 0$ and $x_i(t_i) = 1$. With $y(t) \in \mathbb{R}^K$ we denote the states of the $K$ observed nodes at time $t$. The states of observed nodes are obtained from the full state vector through multiplication by the $K \times N$ matrix $C = [e_{o_1}, e_{o_2}, \ldots, e_{o_K}]^T$, where $M^T$ denotes the transpose of matrix $M$. Below, we denote by $\bar{M}$ a binary matrix with the same sparsity structure as matrix $M$, but with all nonzero elements replaced by 1:
$$\bar{M}_{ij} = \begin{cases} 0, & M_{ij} = 0 \\ 1, & M_{ij} \neq 0. \end{cases}$$
Finally, we can state a theorem that characterizes the evolution of network diffusion under the above assumptions.

Theorem 1: The dynamics of diffusion in a network, under the deterministic SI propagation model, can be characterized as
$$x(t) = \Phi(t,0)\,x(0), \qquad y(t) = C\,x(t), \qquad (1)$$
where $\Phi(t,0) = \overline{A^t + A^{t-1}}$.

Proof: The state equation of (1) for node $i$ can be rewritten as $x_i(t) = \sum_j \Phi_{ij}(t,0)\,x_j(0) = \Phi_{is}(t,0)$, where the last equality holds since $x(0)$ has a single nonzero entry, at the source node. We now invoke a specific property of powers of adjacency matrices [10]: the $ij$-th entry of $A^t$ equals the number of walks of length $t$ between nodes $i$ and $j$. Based on this property, if the distance of node $i$ to the source is $t_i$, then $\Phi_{is}(t,0) = 0$ for all $t < t_i$,
which consequently gives $x_i(t) = 0$ for all $t < t_i$. At $t = t_i$, both $\Phi_{is}(t,0)$ and $x_i(t)$ assume the value 1. If there exists a walk of length $t_i$, then there also exists at least one walk of length $t_i + 2l$, for $l = 0, 1, \ldots$, as any edge of the walk can be traversed once forward and once backward to add a cycle, increasing the length by 2. Hence $(A^{t_i + 2l})_{is} > 0$, and subsequently $\Phi_{is}(t_i + 2l, 0) = 1$. At times $t = t_i + (2l + 1)$, $\Phi_{is}(t,0)$ equals 1 at least due to the term $A^{t-1}$. Therefore, for all $t \geq t_i$, $\Phi_{is}(t,0) = 1$ and the state of the node is 1, reflecting the fact that node $i$ became infected at time $t_i$. The second equation of (1) models that at each time $t$, only the states of the observer nodes can be seen. Thus, equations (1) model the state evolution and the available observations for the diffusion process in a general network.

III. NETWORK OBSERVABILITY

In the previous section, we presented a model for the dynamics of network diffusion, similar to the state-space representation of a linear time-varying system with a constant observation matrix. Our goal is the identification of the source node based on the infection times of observer nodes only. We now ask when the choice of observer nodes guarantees correct source identification, and we treat this as a network observability problem. Stacking equations (1) for times $0, \ldots, N-1$, we get the matrix equation
$$\begin{bmatrix} y(0) \\ y(1) \\ \vdots \\ y(N-1) \end{bmatrix} = \begin{bmatrix} C \\ C\Phi(1,0) \\ \vdots \\ C\Phi(N-1,0) \end{bmatrix} x(0),$$
or equivalently
$$Y_{N-1} = O_{N-1}\,x(0). \qquad (2)$$
We refer to the $NK \times N$ matrix $O_{N-1}$ as the network observability matrix. The following theorem states necessary and sufficient conditions for correct identification of the source based on the infection times of observer nodes.

Theorem 2: If the rank of the observability matrix $O_{N-1}$ is equal to $N$, then the infection times of the particular choice of observers are sufficient to correctly identify the source, regardless of its position in the network.
Then, the initial state can be recovered as
$$x(0) = \left(O_{N-1}^T O_{N-1}\right)^{-1} O_{N-1}^T\, Y_{N-1}. \qquad (3)$$
The necessary condition for correct source identification, for any possible source, is that the observability matrix $O_{N-1}$ has $N$ unique columns.

Proof: In a network of $N$ nodes, the largest distance between any two nodes is at most $N-1$, so the states of all observer nodes will be 1 by time $N-1$ at the latest, and will remain so for all $t \geq N-1$. The product $C\Phi(t,0)$ has an interesting structure: its $ij$-th entry equals 1 only if the distance between observer $o_i$ and node $j$ is at most $t$, and is 0 otherwise. Hence, all entries of $C\Phi(t,0)$ are equal to 1 for $t \geq N-1$, and stacking $C\Phi(t,0)$ for $t > N-1$ will not increase the rank of the observability matrix. This parallels the property of linear time-varying systems that if the initial state can be recovered at all, it can be recovered from the observations $y(0), \ldots, y(N-1)$. From (2), if the observability matrix has full column rank, we obtain (3) as a standard result from linear algebra. In order to uniquely identify the source node from the distances of the observer nodes to it, it is necessary that no two nodes have the same distances to all the observers. Given the structure of the product $C\Phi(t,0)$, the condition that the observability matrix has $N$ unique columns is exactly equivalent to this requirement, which is what is needed for correct localization.

[Fig. 1. Example network.]

If the observability matrix $O_{N-1}$ of a network, with adjacency matrix $A$ and a choice of observers characterized by $C$, has $N$ unique columns, then we refer to the network as observable. Verifying the observability of a network using Theorem 2 does not require knowledge of the infection times of the observers.
Thus, it is a task that can be efficiently performed offline, before the actual source localization takes place. This allows timely selection of observer nodes that guarantee correct source localization, irrespective of the source position in the network. The observability of a network does not depend on the source node: in an observable network, regardless of which node is the source, the information from the observers is sufficient to identify it. However, this condition is not necessary for a particular choice of source node, as illustrated by the following example.

Example 1: A simple tree network of $N = 6$ nodes is shown in Fig. 1. Assuming that node 3 is the only observer, the observability matrix has only 4 unique columns and rank 4, and therefore the network is not observable with this choice of observers. An example of the observers' inability to identify the source is the case where the infection time of node 3 is $t_3 = 3$; this information is insufficient to distinguish whether the source was node 4 or node 5. However, if $t_3 = 2$, then we can correctly identify the source as node 2. This illustrates that a network might be generally unobservable given a particular choice of observers, but this
does not imply that the information provided by the observers is insufficient to identify the source node in all cases. In the following section, we present a method to recover the initial state in these special cases, when either the sufficient or the necessary condition does not hold but correct localization is still possible, as well as for the case when there are multiple source suspects and we are interested in identifying all of them. The necessary condition for network observability additionally provides insight into the problem of selecting how many and which observer nodes are needed to achieve network observability, as the next subsection shows.

A. Minimum number and location of observers needed for network observability

The necessary condition for correct source localization implicitly states that the choice of observers is such that all nodes in the network have different distances to them. Let us denote by $S = \{o_1, o_2, \ldots, o_K\}$ the set of observer nodes, by $d(i, o_k)$ the distance between nodes $i$ and $o_k$, and by $d(i, S) = [d(i, o_1), \ldots, d(i, o_K)]$ the $K$-vector of distances from node $i$ to the set of observer nodes. Then the requirement that the set of observer nodes satisfies the necessary condition for correct source localization can be stated as $d(i, S) \neq d(j, S)$ for all pairs of distinct nodes $i, j$. Stated this way, finding a set of observers with this property corresponds to finding a resolving set $S$ of the graph [7]. Determining a resolving set of minimum cardinality is a well-known problem in graph theory, called finding the metric basis of the graph; the cardinality of this basis is called the metric dimension [6]. The motivation for this problem came from the placement of detecting devices in a network, such that every vertex can be described in terms of its distances to them, and, independently, from describing the structure of chemical compounds in pharmaceutical chemistry [11].
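The unique-columns test of Theorem 2, and its equivalent reading as distinct distance vectors $d(i,S)$, can be checked numerically. The sketch below is our own illustration (function names are ours); it builds $O_{N-1}$ as in (2) for a small path graph and compares a leaf observer with an interior one:

```python
import numpy as np

def observability_matrix(A, observers):
    """Stack C*Phi(t,0) for t = 0..N-1, with Phi(0,0) = I and Phi(t,0)
    the binary pattern of A^t + A^(t-1), as in Theorem 1."""
    N = A.shape[0]
    C = np.eye(N, dtype=int)[observers]
    blocks = [C]  # t = 0
    for t in range(1, N):
        M = np.linalg.matrix_power(A, t) + np.linalg.matrix_power(A, t - 1)
        blocks.append(C @ (M > 0).astype(int))
    return np.vstack(blocks)

def is_observable(A, observers):
    """Necessary condition of Theorem 2: O_{N-1} has N distinct columns,
    i.e. no two nodes share the same distance vector to the observers."""
    O = observability_matrix(A, observers)
    return len({tuple(col) for col in O.T}) == A.shape[0]

# path 0-1-2-3: a leaf observer resolves every node, an interior one does not
A = np.diag(np.ones(3, dtype=int), 1) + np.diag(np.ones(3, dtype=int), -1)
print(is_observable(A, [0]), is_observable(A, [1]))  # True False
```

Column $j$ of $O_{N-1}$ here simply encodes, for each observer, the first time $t$ at which node $j$ lies within distance $t$, which is why distinct columns and distinct distance vectors are the same condition.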
Although computing the metric dimension of an arbitrary graph is an NP-hard problem [6], exact values can easily be determined for some families of graphs [8]. Applying these results, we have, for example, that in path networks the minimal number of observers is one, if the observer is an end (leaf) node, while $N-1$ observers are needed in complete networks. Explicit results also exist, among others, for tree networks, d-dimensional grids [6], and random networks [12]. For general networks, an $O(\log N)$-factor approximation algorithm can be used to approximate the metric dimension of a graph in polynomial time [6]. The problem of choosing the minimum number of observers can be cast as a set cover problem in which the elements correspond to all pairs of nodes. The approximation algorithm selects, one by one, the node that distinguishes the highest number of node pairs; its performance is illustrated by the following example.

Example 2: The Erdős-Rényi model is a random graph model in which each pair of nodes is connected with equal probability, independently of other pairs. We generated 50 Erdős-Rényi graphs with 20 nodes, each with a different edge probability. For each graph, we found the minimal number of observers by checking all possible combinations, and compared it to the performance of the approximation algorithm; the results are shown in Fig. 2.

[Fig. 2. Performance of the approximation algorithm for choosing the minimum number of observers: minimum number of observers (optimal vs. approximation) versus edge probability.]

The example illustrates that as the edge probability increases and the graph becomes denser, more observers are needed to correctly distinguish between the nodes. For very sparse graphs, the number of observers is small, but as the graph approaches a complete graph, the number of observers tends to $N-1$.
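The set-cover greedy just described can be sketched as follows (our own illustration; node pairs play the role of elements, and each candidate observer "covers" the pairs to which it assigns different distances):

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, src):
    """Single-source shortest-path distances by breadth-first search."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def greedy_observers(adj):
    """Greedy approximation of a minimum resolving set: repeatedly add
    the node that distinguishes the most still-unresolved node pairs."""
    dist = {v: bfs_distances(adj, v) for v in adj}
    unresolved = set(combinations(sorted(adj), 2))
    observers = []
    while unresolved:
        best = max(adj, key=lambda o: sum(dist[o][i] != dist[o][j]
                                          for i, j in unresolved))
        observers.append(best)
        unresolved = {(i, j) for i, j in unresolved
                      if dist[best][i] == dist[best][j]}
    return observers

# a path needs a single (leaf) observer; a complete graph needs N-1
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
print(len(greedy_observers(path)), len(greedy_observers(K4)))  # 1 3
```

The two small checks reproduce the explicit results quoted above: one leaf observer resolves a path, while a complete graph on 4 nodes needs 3 observers.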
The approximation algorithm chooses at most one observer more than the minimum needed, and in around 70% of the cases its performance coincides with the optimal.

IV. SOURCE LOCALIZATION

In the previous section, a sufficient condition for network observability was given, along with a method for recovering the initial state of the network when that condition is satisfied. As the condition is only sufficient, there are cases where the source can be correctly identified even if the observability matrix does not have full column rank. This may correspond to the case when the network is actually observable or, as shown in Example 1, when the network is unobservable but it is still possible to identify certain sources. We now present a method for source localization in these cases. Given that the initial state is a binary vector with only one nonzero entry, corresponding to the index of the source node, we can cast source localization as an $\ell_0$-norm optimization problem:
$$\min_{x(0)} \|x(0)\|_0 \quad \text{subject to} \quad O_{N-1}\,x(0) = Y_{N-1}. \qquad (4)$$
Problem (4) seeks the sparsest vector $x(0)$, i.e., the one with the fewest nonzero entries, that satisfies the observation model (2), which is exactly the desired initial state. However, problem (4) is non-convex and hard to solve. Typically, $\ell_0$ optimization problems are relaxed to $\ell_1$ problems, which are easier to handle through linear programming [9]. The $\ell_1$-relaxed version of problem (4) can
be stated as
$$\min_{x(0)} \|x(0)\|_1 \quad \text{subject to} \quad O_{N-1}\,x(0) = Y_{N-1}. \qquad (5)$$
In general, the solutions of the original $\ell_0$ and the relaxed $\ell_1$ optimization problems differ. However, given the structure of our problem, a solution of the relaxed problem coincides with the optimal solution of the original one. Given the described structure of the observability matrix, the constraint $O_{N-1}\,x(0) = Y_{N-1}$ contains $N$ blocks of constraints of the form $C\Phi(t,0)\,x(0) = y(t)$. Each row $i$ of these constraints, after simplification, corresponds to the equation
$$\sum_{j \in N_{o_i}^l} x_j(0) = I\{s \in N_{o_i}^l\}, \qquad (6)$$
where $I\{\cdot\}$ is the indicator function and $N_{o_i}^l$ is the $l$-hop neighborhood of observer $o_i$, for $l = 0, \ldots, N-1$. Hence, for each observer $i = 1, \ldots, K$, for the neighborhood $l = d(s, o_i)$ that includes the source node, equation (6) takes the form
$$\sum_{j \in N_{o_i}^{d(s,o_i)}} x_j(0) = 1. \qquad (7)$$
For all other neighborhoods $l \neq d(s, o_i)$, the right-hand sides of equations (6) are equal to 0. Under the above assumption that the source node can be resolved by the observers, the source node is the only node that appears only in equations of the form (7). If there is some other node $r$ also at distance $d(s, o_i)$ from observer $o_i$, then there exists at least one other observer $o_k$ from which node $r$ is at a distance different from $d(s, o_k)$, since otherwise the source node and node $r$ could not be distinguished. This means that such a node also appears in equations whose right-hand side equals 0. Since problem (5) minimizes the $\ell_1$ norm of $x(0)$, the states of all nodes except the source node are set to 0, while the state of the source node is set to 1. Should the state of any other node, for example node $r$, be different from 0, then, since that node also appears in an equation that equals zero, there would have to be another node with nonzero state to satisfy that constraint, which in turn would increase the $\ell_1$ norm of the vector $x(0)$.
Therefore, in this case $\ell_1$ minimization yields a solution with cardinality exactly 1, the sparsest solution possible and a solution of the original $\ell_0$ minimization. This allows us to recover the initial state, i.e., to identify the source node correctly. In other scenarios, with a choice of observers that leaves the network unobservable, even if we cannot uniquely identify the source, narrowing down the list of suspect source nodes can still be very useful. This is a likely scenario when there are not enough resources to deploy the number of observers required for observability, and yet we would like to obtain as much information as possible with the existing resources. Hence, we would like to recover all possible vectors $x(0)$ with a single nonzero entry that satisfy the observation model (2). We denote by $x^i(0)$, for $i = 1, \ldots, p$, all $p$ possible solutions of the original problem (4). Again, instead of solving the combinatorial problem (4) and searching for multiple solutions, we resort to solving the relaxed $\ell_1$ optimization problem (5). The constraint still consists of equations of the form (6). However, now there are $p$ nodes that appear only in equations of the form (7). One of these is the source node itself, but it cannot be distinguished from the other suspect nodes based on the infection times of the available observers. Again, $\ell_1$ minimization sets the states of all non-suspect nodes to 0, for the same reason as before. Setting the state of one suspect node to 1 and all others to 0 gives each of the $p$ possible solutions, all with the same value of the cost function. The convex combination of these solutions, of the form $\frac{1}{p}\left(x^1(0) + x^2(0) + \cdots + x^p(0)\right)$, is also a solution, from which the individual solutions are easily recovered. Hence, $\ell_1$ minimization allows us to correctly obtain the list of all possible suspect source nodes.
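A minimal sketch of the relaxation (5) follows (our own code, not the paper's; it assumes SciPy's `linprog` is available). Since the states are naturally nonnegative, the $\ell_1$ objective reduces to a plain sum, so (5) becomes an ordinary linear program:

```python
import numpy as np
from scipy.optimize import linprog

def phi(A, t):
    """Phi(t,0) from Theorem 1; Phi(0,0) is the identity."""
    if t == 0:
        return np.eye(A.shape[0], dtype=int)
    M = np.linalg.matrix_power(A, t) + np.linalg.matrix_power(A, t - 1)
    return (M > 0).astype(int)

def l1_localize(A, observers, source):
    """Solve the relaxation (5). Observations Y_{N-1} are generated from
    the noiseless model itself; with x >= 0 the l1 norm is sum(x)."""
    N = A.shape[0]
    C = np.eye(N, dtype=int)[observers]
    O = np.vstack([C @ phi(A, t) for t in range(N)])
    Y = O @ np.eye(N, dtype=int)[source]          # stacked y(0)..y(N-1)
    res = linprog(np.ones(N), A_eq=O, b_eq=Y, bounds=[(0, None)] * N)
    return res.x

# path 0-1-2-3-4
A = np.diag(np.ones(4, dtype=int), 1) + np.diag(np.ones(4, dtype=int), -1)
x = l1_localize(A, observers=[0], source=3)   # leaf observer: observable
print(np.flatnonzero(x > 1e-6))               # [3]
x = l1_localize(A, observers=[2], source=0)   # nodes 0 and 4 indistinguishable
print(np.flatnonzero(x > 1e-6))               # support lies within {0, 4}
```

In the observable case the feasible set is a single point and the source is recovered exactly; in the ambiguous case any convex combination of the suspect indicators is optimal, so the LP solver returns a point supported on the suspect set.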
Note: Let us denote by $D \in \mathbb{R}^{K \times N}$ the distance matrix, whose element $D_{ij}$ is the distance between observer $o_i$ and node $j$, and let $t \in \mathbb{R}^K$ be the vector of infection times of the observers. Then the source localization problem can be stated as finding the column $s$ of the matrix $D$ that is equal to the vector $t$; if there are multiple source suspects, there are multiple columns of $D$ equal to $t$. Although this is a much simpler way to treat the source identification problem, we presented the $\ell_1$ approach because, together with the diffusion model (1), it provides a way to deal with source localization under more realistic assumptions: noisy observations, unknown activation time of the source, and uncertainty in the network topology, which will be our future work.

V. CONCLUSION

We presented a new model for the dynamics of network diffusion, in order to identify the source of diffusion when infection times are available from only a subset of nodes. We introduced the concept of network observability, which reflects whether the choice of observed nodes is such that correct source localization is possible. Based on the presented model, we gave necessary and sufficient conditions for network observability, and we provided a method for source localization when the sufficient condition holds. We also showed that, under our assumptions, the problem of selecting the minimum number of nodes that makes a network observable is equivalent to finding the metric dimension of a graph, and we reviewed some of the available results in this area. Finally, for the case when the sufficient condition does not hold, we formulated source localization as an $\ell_1$-minimization problem. Solving the source localization problem under more complex scenarios, such as unknown source activation time and uncertain network topology, remains future work.

REFERENCES

[1] D. Shah and T.
Zaman, "Rumors in a network: Who's the culprit?," IEEE Transactions on Information Theory, vol. 57, no. 8, 2011.
[2] P. Pinto, P. Thiran, and M. Vetterli, "Locating the source of diffusion in large-scale networks," Physical Review Letters, August 2012.
[3] B. A. Prakash, J. Vreeken, and C. Faloutsos, "Spotting culprits in epidemics: How many and which ones?," in IEEE ICDM, 2012.
[4] V. Fioriti and M. Chinnici, "Predicting the sources of an outbreak with a spectral technique," arXiv preprint [math-ph].
[5] E. Seo, P. Mohapatra, and T. F. Abdelzaher, "Identifying rumors and their sources in social networks," in SPIE Defense, Security, and Sensing, April 2012.
[6] S. Khuller, B. Raghavachari, and A. Rosenfeld, "Landmarks in graphs," Discrete Applied Mathematics, vol. 70, no. 3, 1996.
[7] G. Chartrand, L. Eroh, M. A. Johnson, and O. R. Oellermann, "Resolvability in graphs and the metric dimension of a graph," Discrete Applied Mathematics, vol. 105, 2000.
[8] C. Hernando, M. Mora, I. M. Pelayo, C. Seara, J. Cáceres, and M. L. Puertas, "On the metric dimension of some families of graphs," Electronic Notes in Discrete Mathematics, vol. 22, 2005.
[9] D. Donoho, "For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution," Communications on Pure and Applied Mathematics, vol. 59, no. 6, June 2006.
[10] N. Biggs, Algebraic Graph Theory, Cambridge University Press.
[11] W. Goddard and O. R. Oellermann, "Distance in graphs," in Structural Analysis of Complex Networks, Birkhäuser, 2011.
[12] B. Bollobás, D. Mitsche, and P. Pralat, "Metric dimension for random graphs," arXiv preprint, 2012.
Reconstruction of graph signals: percolation from a single seeding node Santiago Segarra, Antonio G. Marques, Geert Leus, and Alejandro Ribeiro Abstract Schemes to reconstruct signals defined in the nodes
More informationDiscrete Signal Processing on Graphs: Sampling Theory
IEEE TRANS. SIGNAL PROCESS. TO APPEAR. 1 Discrete Signal Processing on Graphs: Sampling Theory Siheng Chen, Rohan Varma, Aliaksei Sandryhaila, Jelena Kovačević arxiv:153.543v [cs.it] 8 Aug 15 Abstract
More information1 Regression with High Dimensional Data
6.883 Learning with Combinatorial Structure ote for Lecture 11 Instructor: Prof. Stefanie Jegelka Scribe: Xuhong Zhang 1 Regression with High Dimensional Data Consider the following regression problem:
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationA Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases
2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary
More informationSocial Influence in Online Social Networks. Epidemiological Models. Epidemic Process
Social Influence in Online Social Networks Toward Understanding Spatial Dependence on Epidemic Thresholds in Networks Dr. Zesheng Chen Viral marketing ( word-of-mouth ) Blog information cascading Rumor
More informationSpectral Graph Theory for. Dynamic Processes on Networks
Spectral Graph Theory for Dynamic Processes on etworks Piet Van Mieghem in collaboration with Huijuan Wang, Dragan Stevanovic, Fernando Kuipers, Stojan Trajanovski, Dajie Liu, Cong Li, Javier Martin-Hernandez,
More informationEPIDEMICS have been the object of many modeling
Source Detection for Large-Scale Epidemics Brunella M. Spinelli LCA3, I&C, EPFL Abstract Epidemic modeling is a well-studied problem. We review the two main approaches: random mixing models and contact
More informationCertifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering
Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering Shuyang Ling Courant Institute of Mathematical Sciences, NYU Aug 13, 2018 Joint
More informationDie-out Probability in SIS Epidemic Processes on Networks
Die-out Probability in SIS Epidemic Processes on etworks Qiang Liu and Piet Van Mieghem Abstract An accurate approximate formula of the die-out probability in a SIS epidemic process on a network is proposed.
More informationarxiv: v1 [cs.na] 6 Jan 2017
SPECTRAL STATISTICS OF LATTICE GRAPH STRUCTURED, O-UIFORM PERCOLATIOS Stephen Kruzick and José M. F. Moura 2 Carnegie Mellon University, Department of Electrical Engineering 5000 Forbes Avenue, Pittsburgh,
More informationMa/CS 6b Class 23: Eigenvalues in Regular Graphs
Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues
More informationBounds for the Zero Forcing Number of Graphs with Large Girth
Theory and Applications of Graphs Volume 2 Issue 2 Article 1 2015 Bounds for the Zero Forcing Number of Graphs with Large Girth Randy Davila Rice University, rrd32@txstate.edu Franklin Kenter Rice University,
More informationAn Optimal Control Problem Over Infected Networks
Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 214 Paper No. 125 An Optimal Control Problem Over Infected Networks Ali Khanafer,
More informationFast Linear Iterations for Distributed Averaging 1
Fast Linear Iterations for Distributed Averaging 1 Lin Xiao Stephen Boyd Information Systems Laboratory, Stanford University Stanford, CA 943-91 lxiao@stanford.edu, boyd@stanford.edu Abstract We consider
More informationModeling, Analysis, and Control of Information Propagation in Multi-layer and Multiplex Networks. Osman Yağan
Modeling, Analysis, and Control of Information Propagation in Multi-layer and Multiplex Networks Osman Yağan Department of ECE Carnegie Mellon University Joint work with Y. Zhuang and V. Gligor (CMU) Alex
More informationNode seniority ranking
Node seniority ranking Vincenzo Fioriti 1 * and Marta Chinnici 1 1 ENEA, Casaccia Laboratories, via Anguillarese 301, S. Maria in Galeria, 00123, Roma, Italy *Correspondence to: vincenzo.fioriti@enea.it
More informationTOPOLOGY FOR GLOBAL AVERAGE CONSENSUS. Soummya Kar and José M. F. Moura
TOPOLOGY FOR GLOBAL AVERAGE CONSENSUS Soummya Kar and José M. F. Moura Department of Electrical and Computer Engineering Carnegie Mellon University, Pittsburgh, PA 15213 USA (e-mail:{moura}@ece.cmu.edu)
More informationPHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN
PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION A Thesis by MELTEM APAYDIN Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the
More informationLecture 9: Laplacian Eigenmaps
Lecture 9: Radu Balan Department of Mathematics, AMSC, CSCAMM and NWC University of Maryland, College Park, MD April 18, 2017 Optimization Criteria Assume G = (V, W ) is a undirected weighted graph with
More informationSPARSE signal representations have gained popularity in recent
6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying
More informationLecture 13 Spectral Graph Algorithms
COMS 995-3: Advanced Algorithms March 6, 7 Lecture 3 Spectral Graph Algorithms Instructor: Alex Andoni Scribe: Srikar Varadaraj Introduction Today s topics: Finish proof from last lecture Example of random
More informationLecture 4: An FPTAS for Knapsack, and K-Center
Comp 260: Advanced Algorithms Tufts University, Spring 2016 Prof. Lenore Cowen Scribe: Eric Bailey Lecture 4: An FPTAS for Knapsack, and K-Center 1 Introduction Definition 1.0.1. The Knapsack problem (restated)
More informationCSI 445/660 Part 6 (Centrality Measures for Networks) 6 1 / 68
CSI 445/660 Part 6 (Centrality Measures for Networks) 6 1 / 68 References 1 L. Freeman, Centrality in Social Networks: Conceptual Clarification, Social Networks, Vol. 1, 1978/1979, pp. 215 239. 2 S. Wasserman
More informationarxiv: v1 [cs.it] 26 Sep 2018
SAPLING THEORY FOR GRAPH SIGNALS ON PRODUCT GRAPHS Rohan A. Varma, Carnegie ellon University rohanv@andrew.cmu.edu Jelena Kovačević, NYU Tandon School of Engineering jelenak@nyu.edu arxiv:809.009v [cs.it]
More informationUncertainty Principle and Sampling of Signals Defined on Graphs
Uncertainty Principle and Sampling of Signals Defined on Graphs Mikhail Tsitsvero, Sergio Barbarossa, and Paolo Di Lorenzo 2 Department of Information Eng., Electronics and Telecommunications, Sapienza
More informationToward Understanding Spatial Dependence on Epidemic Thresholds in Networks
Toward Understanding Spatial Dependence on Epidemic Thresholds in Networks Zesheng Chen Department of Computer Science Indiana University - Purdue University Fort Wayne, Indiana 4685 Email: chenz@ipfw.edu
More informationControl and synchronization in systems coupled via a complex network
Control and synchronization in systems coupled via a complex network Chai Wah Wu May 29, 2009 2009 IBM Corporation Synchronization in nonlinear dynamical systems Synchronization in groups of nonlinear
More information1 Adjacency matrix and eigenvalues
CSC 5170: Theory of Computational Complexity Lecture 7 The Chinese University of Hong Kong 1 March 2010 Our objective of study today is the random walk algorithm for deciding if two vertices in an undirected
More informationTutorial: Sparse Signal Recovery
Tutorial: Sparse Signal Recovery Anna C. Gilbert Department of Mathematics University of Michigan (Sparse) Signal recovery problem signal or population length N k important Φ x = y measurements or tests:
More informationEquivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,
More informationDesigning Information Devices and Systems I Spring 2018 Homework 13
EECS 16A Designing Information Devices and Systems I Spring 2018 Homework 13 This homework is due April 30, 2018, at 23:59. Self-grades are due May 3, 2018, at 23:59. Submission Format Your homework submission
More informationSpectral densest subgraph and independence number of a graph 1
Spectral densest subgraph and independence number of a graph 1 Reid Andersen (Microsoft Research, One Microsoft Way,Redmond, WA 98052 E-mail: reidan@microsoft.com) Sebastian M. Cioabă 2 (Department of
More informationRecursive Distributed Detection for Composite Hypothesis Testing: Nonlinear Observation Models in Additive Gaussian Noise
Recursive Distributed Detection for Composite Hypothesis Testing: Nonlinear Observation Models in Additive Gaussian Noise Anit Kumar Sahu, Student Member, IEEE and Soummya Kar, Member, IEEE Abstract This
More informationThis section is an introduction to the basic themes of the course.
Chapter 1 Matrices and Graphs 1.1 The Adjacency Matrix This section is an introduction to the basic themes of the course. Definition 1.1.1. A simple undirected graph G = (V, E) consists of a non-empty
More informationExact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations
Exact Topology Identification of arge-scale Interconnected Dynamical Systems from Compressive Observations Borhan M Sanandaji, Tyrone Vincent, and Michael B Wakin Abstract In this paper, we consider the
More informationCombining geometry and combinatorics
Combining geometry and combinatorics A unified approach to sparse signal recovery Anna C. Gilbert University of Michigan joint work with R. Berinde (MIT), P. Indyk (MIT), H. Karloff (AT&T), M. Strauss
More information6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities
6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov
More informationLearning Graphs from Data: A Signal Representation Perspective
1 Learning Graphs from Data: A Signal Representation Perspective Xiaowen Dong*, Dorina Thanou*, Michael Rabbat, and Pascal Frossard arxiv:1806.00848v1 [cs.lg] 3 Jun 2018 The construction of a meaningful
More informationNetworks and Their Spectra
Networks and Their Spectra Victor Amelkin University of California, Santa Barbara Department of Computer Science victor@cs.ucsb.edu December 4, 2017 1 / 18 Introduction Networks (= graphs) are everywhere.
More informationLecture 1: From Data to Graphs, Weighted Graphs and Graph Laplacian
Lecture 1: From Data to Graphs, Weighted Graphs and Graph Laplacian Radu Balan February 5, 2018 Datasets diversity: Social Networks: Set of individuals ( agents, actors ) interacting with each other (e.g.,
More informationarxiv: v1 [q-bio.qm] 9 Sep 2016
Optimal Disease Outbreak Detection in a Community Using Network Observability Atiye Alaeddini 1 and Kristi A. Morgansen 2 arxiv:1609.02654v1 [q-bio.qm] 9 Sep 2016 Abstract Given a network, we would like
More informationLocating the Source of Diffusion in Large-Scale Networks
Locating the Source of Diffusion in Large-Scale Networks Supplemental Material Pedro C. Pinto, Patrick Thiran, Martin Vetterli Contents S1. Detailed Proof of Proposition 1..................................
More informationA Polynomial-Time Algorithm for Pliable Index Coding
1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n
More informationADMM and Fast Gradient Methods for Distributed Optimization
ADMM and Fast Gradient Methods for Distributed Optimization João Xavier Instituto Sistemas e Robótica (ISR), Instituto Superior Técnico (IST) European Control Conference, ECC 13 July 16, 013 Joint work
More informationarxiv: v3 [cs.si] 26 Dec 2016
Observer Placement for Source Localization: The Effect of Budgets and Transmission Variance Brunella Spinelli, L. Elisa Celis, Patrick Thiran arxiv:608.04567v3 [cs.si] 26 Dec 206 Abstract When an epidemic
More informationSparsity of Matrix Canonical Forms. Xingzhi Zhan East China Normal University
Sparsity of Matrix Canonical Forms Xingzhi Zhan zhan@math.ecnu.edu.cn East China Normal University I. Extremal sparsity of the companion matrix of a polynomial Joint work with Chao Ma The companion matrix
More informationLecture 1 and 2: Introduction and Graph theory basics. Spring EE 194, Networked estimation and control (Prof. Khan) January 23, 2012
Lecture 1 and 2: Introduction and Graph theory basics Spring 2012 - EE 194, Networked estimation and control (Prof. Khan) January 23, 2012 Spring 2012: EE-194-02 Networked estimation and control Schedule
More informationQuick Tour of Linear Algebra and Graph Theory
Quick Tour of Linear Algebra and Graph Theory CS224W: Social and Information Network Analysis Fall 2014 David Hallac Based on Peter Lofgren, Yu Wayne Wu, and Borja Pelato s previous versions Matrices and
More informationLecture 22: More On Compressed Sensing
Lecture 22: More On Compressed Sensing Scribed by Eric Lee, Chengrun Yang, and Sebastian Ament Nov. 2, 207 Recap and Introduction Basis pursuit was the method of recovering the sparsest solution to an
More informationOn Symmetry and Controllability of Multi-Agent Systems
53rd IEEE Conference on Decision and Control December 15-17, 2014. Los Angeles, California, USA On Symmetry and Controllability of Multi-Agent Systems Airlie Chapman and Mehran Mesbahi Abstract This paper
More informationSparse Solutions of an Undetermined Linear System
1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research
More informationNetwork Topology Inference from Non-stationary Graph Signals
Network Topology Inference from Non-stationary Graph Signals Rasoul Shafipour Dept. of Electrical and Computer Engineering University of Rochester rshafipo@ece.rochester.edu http://www.ece.rochester.edu/~rshafipo/
More informationOnline Dictionary Learning with Group Structure Inducing Norms
Online Dictionary Learning with Group Structure Inducing Norms Zoltán Szabó 1, Barnabás Póczos 2, András Lőrincz 1 1 Eötvös Loránd University, Budapest, Hungary 2 Carnegie Mellon University, Pittsburgh,
More information8.1 Concentration inequality for Gaussian random matrix (cont d)
MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration
More informationSpectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics
Spectral Graph Theory and You: and Centrality Metrics Jonathan Gootenberg March 11, 2013 1 / 19 Outline of Topics 1 Motivation Basics of Spectral Graph Theory Understanding the characteristic polynomial
More information6.207/14.15: Networks Lecture 7: Search on Networks: Navigation and Web Search
6.207/14.15: Networks Lecture 7: Search on Networks: Navigation and Web Search Daron Acemoglu and Asu Ozdaglar MIT September 30, 2009 1 Networks: Lecture 7 Outline Navigation (or decentralized search)
More informationSparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery
Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery Anna C. Gilbert Department of Mathematics University of Michigan Sparse signal recovery measurements:
More informationTractable Upper Bounds on the Restricted Isometry Constant
Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.
More informationOptimization of Quadratic Forms: NP Hard Problems : Neural Networks
1 Optimization of Quadratic Forms: NP Hard Problems : Neural Networks Garimella Rama Murthy, Associate Professor, International Institute of Information Technology, Gachibowli, HYDERABAD, AP, INDIA ABSTRACT
More informationRobust Principal Component Analysis
ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M
More informationSpectral Clustering. Guokun Lai 2016/10
Spectral Clustering Guokun Lai 2016/10 1 / 37 Organization Graph Cut Fundamental Limitations of Spectral Clustering Ng 2002 paper (if we have time) 2 / 37 Notation We define a undirected weighted graph
More informationApplications of Eigenvalues in Extremal Graph Theory
Applications of Eigenvalues in Extremal Graph Theory Olivia Simpson March 14, 201 Abstract In a 2007 paper, Vladimir Nikiforov extends the results of an earlier spectral condition on triangles in graphs.
More informationData Mining and Analysis: Fundamental Concepts and Algorithms
: Fundamental Concepts and Algorithms dataminingbook.info Mohammed J. Zaki 1 Wagner Meira Jr. 2 1 Department of Computer Science Rensselaer Polytechnic Institute, Troy, NY, USA 2 Department of Computer
More informationMachine Learning for Data Science (CS4786) Lecture 11
Machine Learning for Data Science (CS4786) Lecture 11 Spectral clustering Course Webpage : http://www.cs.cornell.edu/courses/cs4786/2016sp/ ANNOUNCEMENT 1 Assignment P1 the Diagnostic assignment 1 will
More informationECS 289 F / MAE 298, Lecture 15 May 20, Diffusion, Cascades and Influence
ECS 289 F / MAE 298, Lecture 15 May 20, 2014 Diffusion, Cascades and Influence Diffusion and cascades in networks (Nodes in one of two states) Viruses (human and computer) contact processes epidemic thresholds
More informationAn algebraic perspective on integer sparse recovery
An algebraic perspective on integer sparse recovery Lenny Fukshansky Claremont McKenna College (joint work with Deanna Needell and Benny Sudakov) Combinatorics Seminar USC October 31, 2018 From Wikipedia:
More informationDistributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 9, SEPTEMBER 2010 1987 Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE Abstract
More informationMetrics: Growth, dimension, expansion
Metrics: Growth, dimension, expansion Social and Technological Networks Rik Sarkar University of Edinburgh, 2017. Metric A distance measure d is a metric if: d(u,v) 0 d(u,v) = 0 iff u=v d(u,v) = d(u,v)
More informationLAPLACIAN MATRIX AND APPLICATIONS
LAPLACIAN MATRIX AND APPLICATIONS Alice Nanyanzi Supervisors: Dr. Franck Kalala Mutombo & Dr. Simukai Utete alicenanyanzi@aims.ac.za August 24, 2017 1 Complex systems & Complex Networks 2 Networks Overview
More informationData Mining and Matrices
Data Mining and Matrices 08 Boolean Matrix Factorization Rainer Gemulla, Pauli Miettinen June 13, 2013 Outline 1 Warm-Up 2 What is BMF 3 BMF vs. other three-letter abbreviations 4 Binary matrices, tiles,
More informationChapter 7 Network Flow Problems, I
Chapter 7 Network Flow Problems, I Network flow problems are the most frequently solved linear programming problems. They include as special cases, the assignment, transportation, maximum flow, and shortest
More informationRecovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies
July 12, 212 Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies Morteza Mardani Dept. of ECE, University of Minnesota, Minneapolis, MN 55455 Acknowledgments:
More informationSource Locating of Spreading Dynamics in Temporal Networks
Source Locating of Spreading Dynamics in Temporal Networks Qiangjuan Huang School of Science National University of Defense Technology Changsha, Hunan, China qiangjuanhuang@foxmail.com «Supervised by Professor
More informationSampling of graph signals with successive local aggregations
Sampling of graph signals with successive local aggregations Antonio G. Marques, Santiago Segarra, Geert Leus, and Alejandro Ribeiro Abstract A new scheme to sample signals defined in the nodes of a graph
More informationSpectral Clustering. Spectral Clustering? Two Moons Data. Spectral Clustering Algorithm: Bipartioning. Spectral methods
Spectral Clustering Seungjin Choi Department of Computer Science POSTECH, Korea seungjin@postech.ac.kr 1 Spectral methods Spectral Clustering? Methods using eigenvectors of some matrices Involve eigen-decomposition
More informationSparse Subspace Clustering
Sparse Subspace Clustering Based on Sparse Subspace Clustering: Algorithm, Theory, and Applications by Elhamifar and Vidal (2013) Alex Gutierrez CSCI 8314 March 2, 2017 Outline 1 Motivation and Background
More informationMinimum Sparsity of Unobservable. Power Network Attacks
Minimum Sparsity of Unobservable 1 Power Network Attacks Yue Zhao, Andrea Goldsmith, H. Vincent Poor Abstract Physical security of power networks under power injection attacks that alter generation and
More informationLink Operations for Slowing the Spread of Disease in Complex Networks. Abstract
PACS: 89.75.Hc; 88.80.Cd; 89.65.Ef Revision 1: Major Areas with Changes are Highlighted in Red Link Operations for Slowing the Spread of Disease in Complex Networks Adrian N. Bishop and Iman Shames NICTA,
More informationConditions for Robust Principal Component Analysis
Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and
More informationLab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018
Lab 8: Measuring Graph Centrality - PageRank Monday, November 5 CompSci 531, Fall 2018 Outline Measuring Graph Centrality: Motivation Random Walks, Markov Chains, and Stationarity Distributions Google
More informationOn the metric dimension of the total graph of a graph
Notes on Number Theory and Discrete Mathematics Print ISSN 1310 5132, Online ISSN 2367 8275 Vol. 22, 2016, No. 4, 82 95 On the metric dimension of the total graph of a graph B. Sooryanarayana 1, Shreedhar
More informationJordan normal form notes (version date: 11/21/07)
Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let
More informationA New Space for Comparing Graphs
A New Space for Comparing Graphs Anshumali Shrivastava and Ping Li Cornell University and Rutgers University August 18th 2014 Anshumali Shrivastava and Ping Li ASONAM 2014 August 18th 2014 1 / 38 Main
More information