UPDATING MARKOV CHAINS

AMY N. LANGVILLE AND CARL D. MEYER

1. Introduction. Suppose that the stationary distribution vector $\phi^T = (\phi_1, \phi_2, \ldots, \phi_m)$ for an $m$-state homogeneous irreducible Markov chain with transition probability matrix $Q_{m \times m}$ is known, but the chain requires updating by altering some of its transition probabilities or by adding or deleting some states. Suppose that the updated transition probability matrix $P_{n \times n}$ is also irreducible. The updating problem is to compute the updated stationary distribution $\pi^T = (\pi_1, \pi_2, \ldots, \pi_n)$ for $P$ by somehow using the components in $\phi^T$ to produce $\pi^T$ with less effort than that required by starting from scratch.

2. The Power Method. For the simple case in which the updating process calls for perturbing transition probabilities in $Q$ to produce the updated matrix $P$ without creating or destroying states, restarting the power method is a possible updating technique. In other words, simply iterate with the new transition matrix but use the old stationary distribution as the initial vector:

(1)   $x_{j+1}^T = x_j^T P$   with   $x_0^T = \phi^T$.

Will this produce an acceptably accurate approximation to $\pi^T$ in fewer iterations than are required when an arbitrary initial vector is used?
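The restarted iteration (1) is easy to try out. The sketch below, using NumPy, compares restarting from the old stationary vector against starting from scratch on a small chain; the 3-state matrices, tolerance, and perturbation are invented for illustration only.

```python
import numpy as np

def power_method(P, x0, tol=1e-12, max_iter=100_000):
    """Iterate x_{j+1}^T = x_j^T P until successive iterates agree to tol."""
    x = x0.copy()
    for j in range(1, max_iter + 1):
        x_new = x @ P
        if np.abs(x_new - x).sum() < tol:
            return x_new, j
        x = x_new
    return x, max_iter

# A small 3-state chain Q and a slightly perturbed update P (invented values).
Q = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.25, 0.25, 0.50]])
P = np.array([[0.78, 0.17, 0.05],
              [0.10, 0.68, 0.22],
              [0.25, 0.27, 0.48]])

phi, _ = power_method(Q, np.ones(3) / 3)                  # old stationary vector
pi_restart, it_restart = power_method(P, phi)             # restart from phi
pi_scratch, it_scratch = power_method(P, np.ones(3) / 3)  # start from scratch
```

The restart does converge in fewer iterations, but the savings are modest, in line with the analysis that follows.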
To some degree this is true, but intuition generally overestimates the extent, even when updating produces a $P$ that is close to $Q$. For example, if the entries of $P - Q$ are small enough to ensure that each component $\pi_i$ agrees with $\phi_i$ in the first significant digit, and if the goal is to compute the update $\pi^T$ to twelve significant places by using (1), then about $11/R$ iterations are required, whereas starting from scratch with an initial vector containing no significant digits of accuracy requires about $12/R$ iterations, where $R = -\log_{10}|\lambda_2|$ is the asymptotic rate of convergence, $\lambda_2$ being the subdominant eigenvalue of $P$. In other words, the effort is reduced by only about 8% for each correct significant digit that is built into $x_0^T$ [22]. In general, the restarted power method is not particularly effective as an updating technique, even when the updates represent small perturbations.

(Authors' affiliations: Department of Mathematics, College of Charleston, Charleston, SC 29424, langvillea@cofc.edu; Department of Mathematics, North Carolina State University, Raleigh, NC, meyer@ncsu.edu.)

3. Rank-One Updating. When no states are added or deleted, the updating problem can be formulated in terms of updating $Q$ one row at a time by adapting the Sherman–Morrison rank-one updating formula [29] to the singular matrix $A = I - Q$. The mechanism for doing this is the group inverse $A^{\#}$ of $A$, which is often involved in questions concerning Markov chains [6, 9, 11, 14, 23, 25, 27, 28, 31, 32, 38]; $A^{\#}$ is the unique matrix satisfying $AA^{\#}A = A$, $A^{\#}AA^{\#} = A^{\#}$, and $AA^{\#} = A^{\#}A$.

Theorem 3.1 [31]. If the $i$th row $q^T$ of $Q$ is updated to produce $p^T = q^T - \delta^T$, the $i$th row of $P$, and if $\phi^T$ and $\pi^T$ are the respective stationary probability distributions of $Q$ and $P$, then $\pi^T = \phi^T - \epsilon^T$, where

(2)   $\epsilon^T = \dfrac{\phi_i}{1 + \delta^T A^{\#}_{\ast i}}\, \delta^T A^{\#}$   ($A^{\#}_{\ast i}$ = the $i$th column of $A^{\#}$).

Multiple row updates to $Q$ are accomplished by sequentially applying this formula one row at a time, which means that the group inverse must be sequentially updated. The formula for updating $(I - Q)^{\#}$ to $(I - P)^{\#}$ is

(3)   $(I - P)^{\#} = A^{\#} + e\,\epsilon^T\!\left[A^{\#} - \gamma I\right] - \dfrac{A^{\#}_{\ast i}\,\epsilon^T}{\phi_i}$,   where $\gamma = \dfrac{\epsilon^T A^{\#}_{\ast i}}{\phi_i}$.

If more than just one or two rows are involved, then Theorem 3.1 is not computationally efficient. If every row needs to be touched, then using (2) together with (3) requires $O(n^3)$ floating point operations, which is comparable to the cost of starting from scratch. Other updating formulas exist [9, 12, 16, 19, 36], but all are variations of a Sherman–Morrison type of formula, and all are $O(n^3)$ algorithms for a general update. Moreover, rank-one updating techniques are not easily adapted to handle the creation or destruction of states.

4. Aggregation. Consider an irreducible $n$-state Markov chain whose state space has been partitioned into $k$ disjoint groups $S = G_1 \cup G_2 \cup \cdots \cup G_k$ with associated transition probability matrix

(4)   $P_{n \times n} = \begin{pmatrix} P_{11} & P_{12} & \cdots & P_{1k} \\ P_{21} & P_{22} & \cdots & P_{2k} \\ \vdots & \vdots & & \vdots \\ P_{k1} & P_{k2} & \cdots & P_{kk} \end{pmatrix}$   (rows and columns blocked by $G_1, \ldots, G_k$; square diagonal blocks).

This parent chain induces $k$ smaller Markov chains called censored chains. The censored chain associated with $G_i$ is defined to be the Markov process that records the location of the parent chain only when the parent chain visits states in $G_i$; visits to states outside of $G_i$ are ignored. The transition probability matrix for the $i$th censored chain is the $i$th stochastic complement [26], defined by

(5)   $S_i = P_{ii} + P_{i\ast}(I - P_i)^{-1}P_{\ast i}$,

in which $P_{i\ast}$ and $P_{\ast i}$ are, respectively, the $i$th row and the $i$th column of blocks with $P_{ii}$ removed, and $P_i$ is the principal submatrix of $P$ obtained by deleting the $i$th row and $i$th column of blocks. For example, if $S = G_1 \cup G_2$, then the respective transition matrices for the two
censored chains are the two stochastic complements $S_1 = P_{11} + P_{12}(I - P_{22})^{-1}P_{21}$ and $S_2 = P_{22} + P_{21}(I - P_{11})^{-1}P_{12}$. In general, if the stationary distribution for $P$ is $\pi^T = (\pi_1^T\ \pi_2^T\ \cdots\ \pi_k^T)$ (partitioned conformably with $P$), then the $i$th censored distribution (the stationary distribution for $S_i$) is

(6)   $s_i^T = \dfrac{\pi_i^T}{\pi_i^T e}$,

where $e$ is an appropriately sized column of ones [26].
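These definitions can be checked numerically. The following sketch builds both stochastic complements of a 4-state chain from (5) and verifies the identity (6); the transition probabilities and the partition $G_1 = \{1,2\}$, $G_2 = \{3,4\}$ are invented for illustration.

```python
import numpy as np

# A 4-state irreducible chain partitioned as G1 = {1,2}, G2 = {3,4}.
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.6, 0.1, 0.2],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.2, 0.1, 0.5]])
P11, P12 = P[:2, :2], P[:2, 2:]
P21, P22 = P[2:, :2], P[2:, 2:]
I = np.eye(2)

# Stochastic complements (5): transition matrices of the two censored chains.
S1 = P11 + P12 @ np.linalg.inv(I - P22) @ P21
S2 = P22 + P21 @ np.linalg.inv(I - P11) @ P12

def stationary(M):
    """Left eigenvector of M for eigenvalue 1, normalized to sum to 1."""
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmax(np.real(w))])
    return v / v.sum()

pi = stationary(P)
s1, s2 = stationary(S1), stationary(S2)
# Identity (6): each censored distribution is the corresponding piece of pi,
# renormalized to sum to 1.
```

Both complements come out stochastic, as the theory guarantees, and `s1`, `s2` match `pi[:2]` and `pi[2:]` after normalization.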

For aperiodic chains, the $j$th component of $s_i^T$ is the limiting conditional probability of being in the $j$th state of group $G_i$ given that the process is somewhere in $G_i$. Each group $G_i$ is compressed into a single state in a smaller $k$-state aggregated chain by squeezing the original transition matrix $P$ down to an aggregated transition matrix

(7)   $A_{k \times k} = \begin{pmatrix} s_1^T P_{11} e & \cdots & s_1^T P_{1k} e \\ \vdots & & \vdots \\ s_k^T P_{k1} e & \cdots & s_k^T P_{kk} e \end{pmatrix}$,

which is stochastic and irreducible whenever $P$ is [26]. For aperiodic chains, transitions between states in the aggregated chain defined by $A$ correspond to transitions between groups $G_i$ in the parent chain when the parent chain is in equilibrium. The remarkable feature of aggregation is that it allows the parent chain to be decomposed into $k$ small censored chains that can be independently solved, and the resulting censored distributions $s_i^T$ can be combined with the stationary distribution of $A$ to construct the parent stationary distribution $\pi^T$. This is the content of the aggregation theorem.

Theorem 4.1 (the aggregation theorem [26]). If $P$ is the block-partitioned transition probability matrix (4) for an irreducible $n$-state Markov chain whose stationary probability distribution is $\pi^T = (\pi_1^T\ \pi_2^T\ \cdots\ \pi_k^T)$ (partitioned conformably with $P$), and if $\alpha^T = (\alpha_1, \alpha_2, \ldots, \alpha_k)$ is the stationary distribution for the aggregated chain defined by the matrix $A_{k \times k}$ in (7), then $\alpha_i = \pi_i^T e$, and the stationary distribution for $P$ is

$\pi^T = (\alpha_1 s_1^T\ \ \alpha_2 s_2^T\ \ \cdots\ \ \alpha_k s_k^T)$,

where $s_i^T$ is the censored distribution for the stochastic complement $S_i$ in (5).

5. Approximate Updating by Aggregation. Aggregation as presented in Theorem 4.1 is mathematically elegant but numerically inefficient, because costly inversions are embedded in the stochastic complements (5) that are required to produce the censored distributions $s_i^T$. Consequently, the approach here is to derive computationally cheap estimates of the censored distributions, as described below. In many large-scale problems the
effects of updating are localized. That is, not all stationary probabilities are equally affected; some changes may be significant while others are hardly perceptible. For example, this is generally true in applications such as Google's PageRank problem [21], in which the stationary probabilities obey a power-law distribution (discussed in section 7.2). Partition the state space of the updated chain as $S = G \cup \overline{G}$, where $G$ contains the states that are most affected by updating along with any new states created by updating; techniques for determining these states are discussed in section 7. Some nearest neighbors of newly created states might also go into $G$. Partition the updated (and reordered) transition matrix $P$ as

(8)   $P = \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix}$   (rows and columns blocked by $G$ and $\overline{G}$),

where $P_{11} = \begin{pmatrix} p_{11} & \cdots & p_{1g} \\ \vdots & & \vdots \\ p_{g1} & \cdots & p_{gg} \end{pmatrix}$ holds the transition probabilities among the $g$ states of $G$, the $i$th row of $P_{12}$ is the vector $P_i$ of transition probabilities from the $i$th state of $G$ into $\overline{G}$, and $P_{21} = (P_{\ast 1}\ \cdots\ P_{\ast g})$, whose $i$th column holds the transition probabilities from $\overline{G}$ back into the $i$th state of $G$.

Let $\phi^T$ and $\pi^T$ be the respective stationary distributions of the pre- and post-updated chains, and let $\overline{\phi}^T$ and $\overline{\pi}^T$ contain the respective stationary probabilities from $\phi^T$ and $\pi^T$ that correspond to the states in $\overline{G}$. The stipulation that $\overline{G}$ contains the nearly unaffected states translates to saying that $\overline{\pi}^T \approx \overline{\phi}^T$. When viewed as a partitioned matrix with $g + 1$ diagonal blocks, the first $g$ diagonal blocks in $P$ are $1 \times 1$, and the lower right-hand block is the $(n - g) \times (n - g)$ matrix $P_{22}$ that is associated with the states in $\overline{G}$. The stochastic complements in $P$ are $S_1 = \cdots = S_g = 1$ and $S_{g+1} = P_{22} + P_{21}(I - P_{11})^{-1}P_{12}$. Consequently, the aggregated transition matrix (7) becomes

(9)   $A = \begin{pmatrix} p_{11} & \cdots & p_{1g} & P_1 e \\ \vdots & & \vdots & \vdots \\ p_{g1} & \cdots & p_{gg} & P_g e \\ s^T P_{\ast 1} & \cdots & s^T P_{\ast g} & s^T P_{22} e \end{pmatrix}_{(g+1) \times (g+1)} = \begin{pmatrix} P_{11} & P_{12} e \\ s^T P_{21} & s^T P_{22} e \end{pmatrix} = \begin{pmatrix} P_{11} & P_{12} e \\ s^T P_{21} & 1 - s^T P_{21} e \end{pmatrix}$,

where $s^T$ is the censored distribution derived from the only significant stochastic complement $S = S_{g+1}$. If the stationary distribution for $A$ is $\alpha^T = (\alpha_1, \ldots, \alpha_g, \alpha_{g+1})$, then Theorem 4.1 says that the stationary distribution for $P$ is

(10)   $\pi^T = (\pi_1, \ldots, \pi_g \mid \pi_{g+1}, \ldots, \pi_n) = (\pi_1, \ldots, \pi_g \mid \overline{\pi}^T) = (\alpha_1, \ldots, \alpha_g \mid \alpha_{g+1} s^T)$.

Since $s^T = \overline{\pi}^T / \overline{\pi}^T e$, and since $\overline{\pi}^T \approx \overline{\phi}^T$, it follows that

(11)   $\tilde{s}^T = \dfrac{\overline{\phi}^T}{\overline{\phi}^T e}$

is a good approximation to $s^T$ that is available from the pre-updated distribution. Using this in (9) produces an approximate aggregated transition matrix

(12)   $\tilde{A} = \begin{pmatrix} P_{11} & P_{12} e \\ \tilde{s}^T P_{21} & 1 - \tilde{s}^T P_{21} e \end{pmatrix}$.
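The contrast between the exact aggregated matrix (9) and the cheap approximation (12) can be sketched numerically. In the sketch below, the 5-state updated chain, the choice $g = 2$, and the prior probabilities `phi_bar` for the states of $\overline{G}$ are all invented for illustration; the exact matrix uses the censored distribution of the stochastic complement, while the approximate one substitutes the normalized prior probabilities.

```python
import numpy as np

def stationary(M):
    """Left eigenvector of M for eigenvalue 1, normalized to sum to 1."""
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmax(np.real(w))])
    return v / v.sum()

# Updated 5-state chain, reordered so G = {1,2} (g = 2) comes first.
P = np.array([[0.40, 0.30, 0.10, 0.10, 0.10],
              [0.20, 0.50, 0.10, 0.10, 0.10],
              [0.10, 0.10, 0.50, 0.20, 0.10],
              [0.10, 0.10, 0.20, 0.40, 0.20],
              [0.05, 0.05, 0.20, 0.20, 0.50]])
g = 2
P11, P12 = P[:g, :g], P[:g, g:]
P21, P22 = P[g:, :g], P[g:, g:]
e = np.ones(P.shape[0] - g)

# Exact censored distribution from the one significant stochastic complement.
S = P22 + P21 @ np.linalg.inv(np.eye(g) - P11) @ P12
s = stationary(S)

# Approximate censored distribution from invented prior probabilities for G-bar.
phi_bar = np.array([0.30, 0.25, 0.20])
s_tilde = phi_bar / phi_bar.sum()

def aggregated(cens):
    """Assemble the (g+1) x (g+1) aggregated matrix from a censored vector."""
    top = np.hstack([P11, (P12 @ e).reshape(g, 1)])
    bottom = np.append(cens @ P21, 1 - (cens @ P21).sum())
    return np.vstack([top, bottom])

A = aggregated(s)              # exact aggregated matrix (9)
A_tilde = aggregated(s_tilde)  # approximate aggregated matrix (12)
```

Both matrices are stochastic and agree in every row except the last, which is the point of the construction.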

Notice that

$A - \tilde{A} = \begin{pmatrix} 0 & 0 \\ \delta^T P_{21} & -\delta^T P_{21} e \end{pmatrix}$,   where $\delta^T = s^T - \tilde{s}^T$;

that is, the last row of $A - \tilde{A}$ is $\delta^T P_{21}\,(I \mid -e)$, and all other rows are zero. Consequently, $A - \tilde{A}$ and $s^T - \tilde{s}^T$ are of the same order of magnitude, so the stationary distribution $\tilde{\alpha}^T$ of $\tilde{A}$ can provide a good approximation to $\alpha^T$, the stationary distribution of $A$. That is,

$\tilde{\alpha}^T = (\tilde{\alpha}_1, \tilde{\alpha}_2, \ldots, \tilde{\alpha}_g, \tilde{\alpha}_{g+1}) \approx (\alpha_1, \ldots, \alpha_g, \alpha_{g+1}) = \alpha^T$.

Use $\tilde{\alpha}_i \approx \alpha_i$ for $1 \le i \le g$ in (10), along with $\overline{\pi}^T \approx \overline{\phi}^T$, to obtain the approximation

(13)   $\pi^T \approx \tilde{\pi}^T = (\tilde{\alpha}_1, \tilde{\alpha}_2, \ldots, \tilde{\alpha}_g \mid \overline{\phi}^T)$.

Thus an approximate updated distribution is obtained. The degree to which this approximation is accurate clearly depends on the degree to which $\overline{\pi}^T \approx \overline{\phi}^T$. If (13) does not provide the desired accuracy, it can be viewed as the first step in an iterative aggregation scheme, described below, that performs remarkably well.

6. Updating by Iterative Aggregation. Iterative aggregation as described in [38] is not a general-purpose technique, because it usually does not work for chains that are not nearly uncoupled. However, iterative aggregation can be adapted to the updating problem, and these variations work extremely well, even for chains that are not nearly uncoupled. This is in part due to the fact that the approximate aggregation matrix (12) differs from the exact aggregation matrix (9) in only one row. Our iterative aggregation updating algorithm is described below.

Assume that the stationary distribution $\phi^T = (\phi_1, \phi_2, \ldots, \phi_m)$ for some irreducible Markov chain $C$ is already known, perhaps from prior computations, and suppose that $C$ needs to be updated. As in earlier sections, let the transition probability matrix and stationary distribution for the updated chain be denoted by $P$ and $\pi^T = (\pi_1, \pi_2, \ldots, \pi_n)$, respectively. The updated matrix $P$ is assumed to be irreducible. It is important to note that $m$ is not necessarily equal to $n$, because the updating process allows for the creation or destruction of states as well as the alteration of transition probabilities.

The Iterative Aggregation Updating Algorithm

Initialization:
i. Partition the states of the updated chain as $S = G \cup \overline{G}$ and reorder $P$ as described in (8).
ii. $\overline{\phi}^T \leftarrow$ the components from $\phi^T$ that correspond to the states in $\overline{G}$.
iii. $\tilde{s}^T \leftarrow \overline{\phi}^T / (\overline{\phi}^T e)$ (an initial approximate censored distribution).

Iterate until convergence:
1. $A \leftarrow \begin{pmatrix} P_{11} & P_{12} e \\ \tilde{s}^T P_{21} & 1 - \tilde{s}^T P_{21} e \end{pmatrix}_{(g+1) \times (g+1)}$   ($g = |G|$)
2. $\alpha^T \leftarrow (\alpha_1, \alpha_2, \ldots, \alpha_g, \alpha_{g+1})$, the stationary distribution of $A$
3. $\chi^T \leftarrow (\alpha_1, \alpha_2, \ldots, \alpha_g \mid \alpha_{g+1} \tilde{s}^T)$
4. $\psi^T \leftarrow \chi^T P$ (see the note following the algorithm)
5. If $\|\psi^T - \chi^T\| < \tau$ for a given tolerance $\tau$, then quit; else $\tilde{s}^T \leftarrow \overline{\psi}^T / (\overline{\psi}^T e)$, where $\overline{\psi}^T$ holds the components of $\psi^T$ corresponding to $\overline{G}$, and go to step 1.

Note concerning step 4. Step 4 is necessary because the vector $\chi^T$ generated in step 3 is a fixed point in the sense that if step 4 is omitted and the process is restarted using $\chi^T$ instead of $\psi^T$, then the same $\chi^T$ is simply reproduced at step 3 on each subsequent iteration. Step 4 has two purposes: it moves the iterate off the fixed point while simultaneously contributing to the convergence process. That is, the $\psi^T$ resulting from step 4 can be used to restart the algorithm as well as to produce a better approximation, because applying a power step makes small progress toward the stationary solution. In the past, some authors [38] have used Gauss–Seidel in place of the power method at step 4.

While precise rates of convergence for general iterative aggregation algorithms are difficult to articulate, the specialized nature of our iterative aggregation updating algorithm allows us to easily establish its rate of convergence. The following theorem shows that this rate is directly dependent on how fast the powers of the one significant stochastic complement $S = P_{22} + P_{21}(I - P_{11})^{-1}P_{12}$ converge. In other words, since $S$ is an irreducible stochastic matrix, the rate of convergence is completely dictated by the magnitude and Jordan structure of the largest subdominant eigenvalue of $S$.

Theorem 6.1 [22]. The iterative aggregation updating algorithm defined above converges to the stationary distribution $\pi^T$ of $P$ for all partitions $S = G \cup \overline{G}$. The rate at which the iterates converge to $\pi^T$ is exactly the rate at which the powers $S^n$ converge, which is governed by the magnitude and Jordan structure of the largest subdominant eigenvalue $\lambda_2(S)$ of $S$. If $\lambda_2(S)$ is real and simple, then the asymptotic rate of convergence is $R = -\log_{10}|\lambda_2(S)|$.

7. Determining the Partition. The iterative aggregation updating algorithm is globally convergent, and it never requires more iterations than the power method to attain a given level of convergence [17]. However, iterative aggregation clearly requires more work per iteration than the power method: one iteration of iterative aggregation requires forming the aggregation matrix, solving for its stationary vector, and executing one power iteration. The key to realizing an improvement of iterative aggregation over the power method rests in properly choosing the partition $S = G \cup \overline{G}$. As Theorem 6.1 shows, good partitions are precisely those that yield a stochastic complement $S = P_{22} + P_{21}(I - P_{11})^{-1}P_{12}$ whose subdominant eigenvalue $\lambda_2(S)$ is small in magnitude. Experience indicates that as $g = |G|$ (the size of $P_{11}$) becomes larger, iterative aggregation tends to converge in fewer iterations. But as $g$ becomes larger, each iteration requires more work, so the trick is to strike an acceptable balance. A small $g$ that significantly reduces $|\lambda_2(S)|$ is the ideal situation.
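The complete procedure of section 6 can be sketched end to end. In the sketch below, the 5-state updated chain, the prior probabilities for the states of $\overline{G}$, and the choice $g = 2$ are all invented for illustration; the stationary vector of the small aggregated matrix is obtained by a dense eigensolve, which is affordable because the matrix is only $(g+1) \times (g+1)$.

```python
import numpy as np

def stationary(M):
    """Left eigenvector of M for eigenvalue 1, normalized to sum to 1."""
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmax(np.real(w))])
    return v / v.sum()

def iterative_aggregation_update(P, phi_bar, g, tol=1e-12, max_iter=1000):
    """Iterative aggregation updating.

    P       : updated transition matrix, reordered so the first g states form G
    phi_bar : prior stationary probabilities for the states of G-bar
    """
    n = P.shape[0]
    P11, P12, P21 = P[:g, :g], P[:g, g:], P[g:, :g]
    s = phi_bar / phi_bar.sum()              # step iii: initial censored estimate
    for _ in range(max_iter):
        # Step 1: form the (g+1) x (g+1) aggregated matrix.
        top = np.hstack([P11, (P12 @ np.ones(n - g)).reshape(g, 1)])
        bottom = np.append(s @ P21, 1 - (s @ P21).sum())
        A = np.vstack([top, bottom])
        # Step 2: its stationary distribution.
        alpha = stationary(A)
        # Step 3: disaggregate.
        chi = np.concatenate([alpha[:g], alpha[g] * s])
        # Step 4: one power step.
        psi = chi @ P
        # Step 5: test convergence; otherwise re-estimate s and repeat.
        if np.abs(psi - chi).sum() < tol:
            return psi
        s = psi[g:] / psi[g:].sum()
    return psi

# Invented example: updated 5-state chain and prior probabilities for G-bar.
P = np.array([[0.40, 0.30, 0.10, 0.10, 0.10],
              [0.20, 0.50, 0.10, 0.10, 0.10],
              [0.10, 0.10, 0.50, 0.20, 0.10],
              [0.10, 0.10, 0.20, 0.40, 0.20],
              [0.05, 0.05, 0.20, 0.20, 0.50]])
pi = iterative_aggregation_update(P, np.array([0.30, 0.25, 0.20]), g=2)
```

Per Theorem 6.1, the iterates converge for any partition; the power step in step 4 is what moves the iterate off the fixed point produced by disaggregation.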

Even for moderately sized problems there is an extremely large number of possible partitions, but there are some useful heuristics that can help guide a choice of $G$ that will produce reasonably good results. For example, a relatively simple approach is to take $G$ to be the set of all states near the updates, where "near" might be measured in a graph-theoretic sense or else by transient flow [7] (i.e., using the magnitude of entries of $x_{j+1}^T = x_j^T P$ after $j$ iterations, where $j$ is small, say 5 or 10). In the absence of any other information, this naive strategy is at least a good place to start. However, there are usually additional options that lead to even better $G$-sets, and some of these are described below.

7.1. Partitioning by differing time scales. In most aperiodic chains, evolution is not at a uniform rate, and consequently most iterative techniques, including the power method, often spend the majority of the time resolving a small number of components, the slow-evolving states. The slow states can be isolated either by monitoring the process for a few iterations or by theoretical means [22]. If the slow states are placed in $G$ while the faster-converging states are lumped into $\overline{G}$, then the iterative aggregation algorithm concentrates its effort on resolving the smaller number of slow-converging states. In loose terms, the effect of steps 1–3 in the iterative aggregation algorithm is essentially to make progress toward achieving a steady state for a smaller chain consisting of just the slow states in $G$ together with one additional lumped state that accounts for all fast states in $\overline{G}$. The power iteration in step 4 moves the entire process ahead on a global basis, so if the slow states in $G$ are substantially resolved by the relatively cheaper steps 1–3, then not many of the more costly global power steps are required to push the entire chain toward its global equilibrium. This is the essence of the original Simon–Ando idea, first proposed in 1961 and explained and analyzed in [26, 37]. As $g = |G|$ becomes smaller relative to $n$, steps 1–3 become significantly cheaper to execute, and the process converges rapidly in both iteration count and wall-clock time. Examples and reports on experiments are given in [22].

In some applications the slow states are particularly easy to identify because they are the ones having the larger stationary probabilities. This is a particularly nice state of affairs for the updating problem, because we have the stationary probabilities from the prior period at our disposal; thus all we have to do to construct a good $G$-set is to include the states with large prior stationary probabilities and throw in the states that were added or updated, along with a few of their nearest neighbors. Clearly, this is an advantage only when there are just a few large states. However, it turns out that this is a characteristic feature of the scale-free networks with power-law distributions [1, 2, 5, 10] discussed below.

7.2. Scale-free networks. A scale-free network with a power-law distribution is a network in which the number of nodes $n(l)$ having $l$ edges (possibly directed) is proportional to $l^{-k}$, where $k$ is a constant that does not change as the network expands (hence the term "scale-free"). In other words, the distribution of nodal degrees obeys a power law in the sense that $P[\deg(n) = d] \propto 1/d^k$ for some $k > 1$ ($\propto$ means "proportional to"). A Markov chain with a power-law distribution is a chain in which there are relatively very few states that have a significant stationary probability, while the overwhelming majority of states have nearly negligible stationary probabilities. Google's

PageRank application [21, 22] is an important example. Consequently, when the stationary probabilities are plotted in order of decreasing magnitude, the resulting graph has a pronounced L-shape with an extremely sharp bend. It is this characteristic L-shape that reveals a near-optimal partition $S = G \cup \overline{G}$ for the iterative aggregation updating algorithm presented in section 6. Experiments indicate that the size of the $G$-set used in our iterative aggregation updating algorithm is nearly optimal around a point that is just to the right-hand side of the pronounced bend in the L-curve. In other words, an apparent method for constructing a reasonably good partition $S = G \cup \overline{G}$ for the iterative aggregation updating algorithm is as follows.

1. First put all new states and states with newly created or destroyed connections (perhaps along with some of their nearest neighbors) into $G$.
2. Add other states that remain after the update in order of the magnitude of their prior stationary probabilities, up to the point where these stationary probabilities level off.

Of course, there is some subjectiveness to this strategy. However, the leveling-off point is relatively easy to discern in distributions having a sharply defined bend in the L-curve, and only distributions that gradually die away or do not conform to a power law are problematic. If, when ordered by magnitude, the stationary probabilities $\pi(1) \ge \pi(2) \ge \cdots \ge \pi(n)$ for an irreducible chain conform to a power-law distribution, so that there are constants $\alpha > 0$ and $k > 0$ such that $\pi(i) \approx \alpha i^{-k}$, then the leveling-off point $i_{\text{level}}$ can be taken to be the smallest value for which $|d\pi(i)/di| \le \epsilon$ for some user-defined tolerance $\epsilon$; that is,

$i_{\text{level}} \approx \left(\dfrac{k\alpha}{\epsilon}\right)^{1/(k+1)}$.

This provides a rough estimate of $g_{\text{opt}}$, the optimal size of $G$, but empirical evidence suggests that better estimates require a scaling factor $\sigma(n)$ that accounts for the size of the chain; i.e.,

$g_{\text{opt}} \approx \sigma(n)\left(\dfrac{k\alpha}{\epsilon}\right)^{1/(k+1)} \approx \sigma(n)\left(\dfrac{k\,\pi(1)}{\epsilon}\right)^{1/(k+1)}$.

More research and testing is
needed to resolve some of these issues.

REFERENCES

[1] Albert-Laszlo Barabasi. Linked: The New Science of Networks. Plume, 2003.
[2] Albert-Laszlo Barabasi, Reka Albert, and Hawoong Jeong. Scale-free characteristics of random networks: the topology of the world-wide web. Physica A, 281:69-77, 2000.
[3] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 33, 1998.
[4] Sergey Brin, Lawrence Page, R. Motwami, and Terry Winograd. The PageRank citation ranking: bringing order to the web. Technical report, Computer Science Department, Stanford University, 1998.
[5] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener. Graph structure in the web. In The Ninth International WWW Conference, May.
[6] Steven Campbell and Carl D. Meyer. Generalized Inverses of Linear Transformations. Pitman, San Francisco, 1979.
[7] Steve Chien, Cynthia Dwork, Ravi Kumar, and D. Sivakumar. Towards exploiting link evolution. In Workshop on Algorithms and Models for the Web Graph, 2001.
[8] Grace E. Cho and Carl D. Meyer. Markov chain sensitivity measured by mean first passage times. Linear Algebra and its Applications, 313:21-28, 2000.

[9] Grace E. Cho and Carl D. Meyer. Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra and its Applications, 335(1-3), 2001.
[10] D. Donato, L. Laura, S. Leonardi, and S. Millozzi. Large scale properties of the webgraph. The European Physical Journal B, 38, 2004.
[11] Robert E. Funderlic and Carl D. Meyer. Sensitivity of the stationary distribution vector for an ergodic Markov chain. Linear Algebra and its Applications, 76:1-17, 1986.
[12] Robert E. Funderlic and Robert J. Plemmons. Updating LU factorizations for computing stationary distributions. SIAM Journal on Algebraic and Discrete Methods, 7(1):30-42, 1986.
[13] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1996.
[14] Gene H. Golub and Carl D. Meyer. Using the QR factorization and group inverse to compute, differentiate and estimate the sensitivity of stationary probabilities for Markov chains. SIAM Journal on Algebraic and Discrete Methods, 17, 1986.
[15] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[16] Jeffrey J. Hunter. Stationary distributions of perturbed Markov chains. Linear Algebra and its Applications, 82, 1986.
[17] Ilse C. F. Ipsen and Steve Kirkland. Convergence analysis of an improved PageRank algorithm. Technical Report CRSC-TR04-02, North Carolina State University, 2004.
[18] Ilse C. F. Ipsen and Carl D. Meyer. Uniform stability of Markov chains. SIAM Journal on Matrix Analysis and Applications, 15(4), 1994.
[19] John G. Kemeny and Laurie J. Snell. Finite Markov Chains. D. Van Nostrand, New York, 1960.
[20] Amy N. Langville and Carl D. Meyer. A survey of eigenvector methods of web information retrieval. SIAM Review, 47(1), 2005.
[21] Amy N. Langville and Carl D. Meyer. Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press, 2006.
[22] Amy N. Langville and Carl D. Meyer. Updating the stationary vector of an irreducible Markov chain with an
eye on Google's PageRank. SIAM Journal on Matrix Analysis and Applications, 27, 2006.
[23] Carl D. Meyer. The role of the group generalized inverse in the theory of finite Markov chains. SIAM Review, 17, 1975.
[24] Carl D. Meyer. The condition of a finite Markov chain and perturbation bounds for the limiting probabilities. SIAM Journal on Algebraic and Discrete Methods, 1, 1980.
[25] Carl D. Meyer. Analysis of finite Markov chains by group inversion techniques. In S. L. Campbell, editor, Recent Applications of Generalized Inverses, Research Notes in Mathematics 66, pages 50-81. Pitman, 1982.
[26] Carl D. Meyer. Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems. SIAM Review, 31(2), 1989.
[27] Carl D. Meyer. The character of a finite Markov chain. In Carl D. Meyer and Robert J. Plemmons, editors, Linear Algebra, Markov Chains, and Queueing Models, IMA Volumes in Mathematics and its Applications 48, pages 47-58. Springer-Verlag, 1993.
[28] Carl D. Meyer. Sensitivity of the stationary distribution of a Markov chain. SIAM Journal on Matrix Analysis and Applications, 15(3), 1994.
[29] Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. SIAM, Philadelphia, 2000.
[30] Carl D. Meyer and Robert J. Plemmons. Linear Algebra, Markov Chains, and Queueing Models. Springer-Verlag, 1993.
[31] Carl D. Meyer and James M. Shoaf. Updating finite Markov chains by using techniques of group matrix inversion. Journal of Statistical Computation and Simulation, 11, 1980.
[32] Carl D. Meyer and G. W. Stewart. Derivatives and perturbations of eigenvectors. SIAM Journal on Numerical Analysis, 25, 1988.
[33] Cleve Moler. The world's largest matrix computation. Matlab News and Notes, pages 12-13, October 2002.
[34] Gopal Pandurangan, Prabhakar Raghavan, and Eli Upfal. Using PageRank to characterize web structure. In 8th Annual International Computing and Combinatorics Conference (COCOON), 2002.
[35] Eugene Seneta. Non-negative Matrices and Markov Chains. Springer-Verlag, 1981.
[36] Eugene Seneta. Sensitivity analysis, ergodicity
coefficients, and rank-one updates for finite Markov chains. In William J. Stewart, editor, Numerical Solutions of Markov Chains, 1991.

[37] Herbert A. Simon and Albert Ando. Aggregation of variables in dynamic systems. Econometrica, 29, 1961.
[38] William J. Stewart. Introduction to the Numerical Solution of Markov Chains. Princeton University Press, 1994.
[39] Extrapolation methods for accelerating PageRank computations. In Twelfth International World Wide Web Conference, 2003.


More information

How works. or How linear algebra powers the search engine. M. Ram Murty, FRSC Queen s Research Chair Queen s University

How works. or How linear algebra powers the search engine. M. Ram Murty, FRSC Queen s Research Chair Queen s University How works or How linear algebra powers the search engine M. Ram Murty, FRSC Queen s Research Chair Queen s University From: gomath.com/geometry/ellipse.php Metric mishap causes loss of Mars orbiter

More information

A Two-Stage Algorithm for Computing PageRank and Multistage Generalizations

A Two-Stage Algorithm for Computing PageRank and Multistage Generalizations Internet Mathematics Vol. 4, No. 4: 299 327 A Two-Stage Algorithm for Computing PageRank and Multistage Generalizations ChrisP.Lee,GeneH.Golub,andStefanosA.Zenios Abstract. The PageRank model pioneered

More information

Data Mining and Matrices

Data Mining and Matrices Data Mining and Matrices 10 Graphs II Rainer Gemulla, Pauli Miettinen Jul 4, 2013 Link analysis The web as a directed graph Set of web pages with associated textual content Hyperlinks between webpages

More information

STOCHASTIC COMPLEMENTATION, UNCOUPLING MARKOV CHAINS, AND THE THEORY OF NEARLY REDUCIBLE SYSTEMS

STOCHASTIC COMPLEMENTATION, UNCOUPLING MARKOV CHAINS, AND THE THEORY OF NEARLY REDUCIBLE SYSTEMS STOCHASTIC COMLEMENTATION, UNCOULING MARKOV CHAINS, AND THE THEORY OF NEARLY REDUCIBLE SYSTEMS C D MEYER Abstract A concept called stochastic complementation is an idea which occurs naturally, although

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J MATRIX ANAL APPL Vol 29, No 4, pp 1281 1296 c 2007 Society for Industrial and Applied Mathematics PAGERANK COMPUTATION, WITH SPECIAL ATTENTION TO DANGLING NODES ILSE C F IPSEN AND TERESA M SELEE

More information

PAGERANK COMPUTATION, WITH SPECIAL ATTENTION TO DANGLING NODES

PAGERANK COMPUTATION, WITH SPECIAL ATTENTION TO DANGLING NODES PAGERANK COMPUTATION, WITH SPECIAL ATTENTION TO DANGLING NODES ILSE CF IPSEN AND TERESA M SELEE Abstract We present a simple algorithm for computing the PageRank (stationary distribution) of the stochastic

More information

ECEN 689 Special Topics in Data Science for Communications Networks

ECEN 689 Special Topics in Data Science for Communications Networks ECEN 689 Special Topics in Data Science for Communications Networks Nick Duffield Department of Electrical & Computer Engineering Texas A&M University Lecture 8 Random Walks, Matrices and PageRank Graphs

More information

Fast PageRank Computation Via a Sparse Linear System (Extended Abstract)

Fast PageRank Computation Via a Sparse Linear System (Extended Abstract) Fast PageRank Computation Via a Sparse Linear System (Extended Abstract) Gianna M. Del Corso 1 Antonio Gullí 1,2 Francesco Romani 1 1 Dipartimento di Informatica, University of Pisa, Italy 2 IIT-CNR, Pisa

More information

1 Searching the World Wide Web

1 Searching the World Wide Web Hubs and Authorities in a Hyperlinked Environment 1 Searching the World Wide Web Because diverse users each modify the link structure of the WWW within a relatively small scope by creating web-pages on

More information

CS 277: Data Mining. Mining Web Link Structure. CS 277: Data Mining Lectures Analyzing Web Link Structure Padhraic Smyth, UC Irvine

CS 277: Data Mining. Mining Web Link Structure. CS 277: Data Mining Lectures Analyzing Web Link Structure Padhraic Smyth, UC Irvine CS 277: Data Mining Mining Web Link Structure Class Presentations In-class, Tuesday and Thursday next week 2-person teams: 6 minutes, up to 6 slides, 3 minutes/slides each person 1-person teams 4 minutes,

More information

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1.

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Stationary Distributions Monday, September 28, 2015 2:02 PM No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Homework 1 due Friday, October 2 at 5 PM strongly

More information

Eigenvalue Problems Computation and Applications

Eigenvalue Problems Computation and Applications Eigenvalue ProblemsComputation and Applications p. 1/36 Eigenvalue Problems Computation and Applications Che-Rung Lee cherung@gmail.com National Tsing Hua University Eigenvalue ProblemsComputation and

More information

The Push Algorithm for Spectral Ranking

The Push Algorithm for Spectral Ranking The Push Algorithm for Spectral Ranking Paolo Boldi Sebastiano Vigna March 8, 204 Abstract The push algorithm was proposed first by Jeh and Widom [6] in the context of personalized PageRank computations

More information

On the mathematical background of Google PageRank algorithm

On the mathematical background of Google PageRank algorithm Working Paper Series Department of Economics University of Verona On the mathematical background of Google PageRank algorithm Alberto Peretti, Alberto Roveda WP Number: 25 December 2014 ISSN: 2036-2919

More information

Online Social Networks and Media. Link Analysis and Web Search

Online Social Networks and Media. Link Analysis and Web Search Online Social Networks and Media Link Analysis and Web Search How to Organize the Web First try: Human curated Web directories Yahoo, DMOZ, LookSmart How to organize the web Second try: Web Search Information

More information

Page rank computation HPC course project a.y

Page rank computation HPC course project a.y Page rank computation HPC course project a.y. 2015-16 Compute efficient and scalable Pagerank MPI, Multithreading, SSE 1 PageRank PageRank is a link analysis algorithm, named after Brin & Page [1], and

More information

Fast PageRank Computation via a Sparse Linear System

Fast PageRank Computation via a Sparse Linear System Internet Mathematics Vol. 2, No. 3: 251-273 Fast PageRank Computation via a Sparse Linear System Gianna M. Del Corso, Antonio Gullí, and Francesco Romani Abstract. Recently, the research community has

More information

Link Analysis Ranking

Link Analysis Ranking Link Analysis Ranking How do search engines decide how to rank your query results? Guess why Google ranks the query results the way it does How would you do it? Naïve ranking of query results Given query

More information

On The Inverse Mean First Passage Matrix Problem And The Inverse M Matrix Problem

On The Inverse Mean First Passage Matrix Problem And The Inverse M Matrix Problem On The Inverse Mean First Passage Matrix Problem And The Inverse M Matrix Problem Michael Neumann Nung Sing Sze Abstract The inverse mean first passage time problem is given a positive matrix M R n,n,

More information

Lecture 7 Mathematics behind Internet Search

Lecture 7 Mathematics behind Internet Search CCST907 Hidden Order in Daily Life: A Mathematical Perspective Lecture 7 Mathematics behind Internet Search Dr. S. P. Yung (907A) Dr. Z. Hua (907B) Department of Mathematics, HKU Outline Google is the

More information

DATA MINING LECTURE 13. Link Analysis Ranking PageRank -- Random walks HITS

DATA MINING LECTURE 13. Link Analysis Ranking PageRank -- Random walks HITS DATA MINING LECTURE 3 Link Analysis Ranking PageRank -- Random walks HITS How to organize the web First try: Manually curated Web Directories How to organize the web Second try: Web Search Information

More information

PageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper)

PageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper) PageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper) In class, we saw this graph, with each node representing people who are following each other on Twitter: Our

More information

Inf 2B: Ranking Queries on the WWW

Inf 2B: Ranking Queries on the WWW Inf B: Ranking Queries on the WWW Kyriakos Kalorkoti School of Informatics University of Edinburgh Queries Suppose we have an Inverted Index for a set of webpages. Disclaimer Not really the scenario of

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

On the eigenvalues of specially low-rank perturbed matrices

On the eigenvalues of specially low-rank perturbed matrices On the eigenvalues of specially low-rank perturbed matrices Yunkai Zhou April 12, 2011 Abstract We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is

More information

Uncertainty and Randomization

Uncertainty and Randomization Uncertainty and Randomization The PageRank Computation in Google Roberto Tempo IEIIT-CNR Politecnico di Torino tempo@polito.it 1993: Robustness of Linear Systems 1993: Robustness of Linear Systems 16 Years

More information

Introduction to Search Engine Technology Introduction to Link Structure Analysis. Ronny Lempel Yahoo Labs, Haifa

Introduction to Search Engine Technology Introduction to Link Structure Analysis. Ronny Lempel Yahoo Labs, Haifa Introduction to Search Engine Technology Introduction to Link Structure Analysis Ronny Lempel Yahoo Labs, Haifa Outline Anchor-text indexing Mathematical Background Motivation for link structure analysis

More information

SUPPLEMENTARY MATERIALS TO THE PAPER: ON THE LIMITING BEHAVIOR OF PARAMETER-DEPENDENT NETWORK CENTRALITY MEASURES

SUPPLEMENTARY MATERIALS TO THE PAPER: ON THE LIMITING BEHAVIOR OF PARAMETER-DEPENDENT NETWORK CENTRALITY MEASURES SUPPLEMENTARY MATERIALS TO THE PAPER: ON THE LIMITING BEHAVIOR OF PARAMETER-DEPENDENT NETWORK CENTRALITY MEASURES MICHELE BENZI AND CHRISTINE KLYMKO Abstract This document contains details of numerical

More information

LAPACK-Style Codes for Pivoted Cholesky and QR Updating

LAPACK-Style Codes for Pivoted Cholesky and QR Updating LAPACK-Style Codes for Pivoted Cholesky and QR Updating Sven Hammarling 1, Nicholas J. Higham 2, and Craig Lucas 3 1 NAG Ltd.,Wilkinson House, Jordan Hill Road, Oxford, OX2 8DR, England, sven@nag.co.uk,

More information

Utilizing Network Structure to Accelerate Markov Chain Monte Carlo Algorithms

Utilizing Network Structure to Accelerate Markov Chain Monte Carlo Algorithms algorithms Article Utilizing Network Structure to Accelerate Markov Chain Monte Carlo Algorithms Ahmad Askarian, Rupei Xu and András Faragó * Department of Computer Science, The University of Texas at

More information

0.1 Naive formulation of PageRank

0.1 Naive formulation of PageRank PageRank is a ranking system designed to find the best pages on the web. A webpage is considered good if it is endorsed (i.e. linked to) by other good webpages. The more webpages link to it, and the more

More information

Affine iterations on nonnegative vectors

Affine iterations on nonnegative vectors Affine iterations on nonnegative vectors V. Blondel L. Ninove P. Van Dooren CESAME Université catholique de Louvain Av. G. Lemaître 4 B-348 Louvain-la-Neuve Belgium Introduction In this paper we consider

More information

Majorizations for the Eigenvectors of Graph-Adjacency Matrices: A Tool for Complex Network Design

Majorizations for the Eigenvectors of Graph-Adjacency Matrices: A Tool for Complex Network Design Majorizations for the Eigenvectors of Graph-Adjacency Matrices: A Tool for Complex Network Design Rahul Dhal Electrical Engineering and Computer Science Washington State University Pullman, WA rdhal@eecs.wsu.edu

More information

Two Characterizations of Matrices with the Perron-Frobenius Property

Two Characterizations of Matrices with the Perron-Frobenius Property NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2009;??:1 6 [Version: 2002/09/18 v1.02] Two Characterizations of Matrices with the Perron-Frobenius Property Abed Elhashash and Daniel

More information

A Note on Google s PageRank

A Note on Google s PageRank A Note on Google s PageRank According to Google, google-search on a given topic results in a listing of most relevant web pages related to the topic. Google ranks the importance of webpages according to

More information

Censoring Technique in Studying Block-Structured Markov Chains

Censoring Technique in Studying Block-Structured Markov Chains Censoring Technique in Studying Block-Structured Markov Chains Yiqiang Q. Zhao 1 Abstract: Markov chains with block-structured transition matrices find many applications in various areas. Such Markov chains

More information

Conditioning of the Entries in the Stationary Vector of a Google-Type Matrix. Steve Kirkland University of Regina

Conditioning of the Entries in the Stationary Vector of a Google-Type Matrix. Steve Kirkland University of Regina Conditioning of the Entries in the Stationary Vector of a Google-Type Matrix Steve Kirkland University of Regina June 5, 2006 Motivation: Google s PageRank algorithm finds the stationary vector of a stochastic

More information

Node and Link Analysis

Node and Link Analysis Node and Link Analysis Leonid E. Zhukov School of Applied Mathematics and Information Science National Research University Higher School of Economics 10.02.2014 Leonid E. Zhukov (HSE) Lecture 5 10.02.2014

More information

Link Analysis. Stony Brook University CSE545, Fall 2016

Link Analysis. Stony Brook University CSE545, Fall 2016 Link Analysis Stony Brook University CSE545, Fall 2016 The Web, circa 1998 The Web, circa 1998 The Web, circa 1998 Match keywords, language (information retrieval) Explore directory The Web, circa 1998

More information

CS224W: Social and Information Network Analysis Jure Leskovec, Stanford University

CS224W: Social and Information Network Analysis Jure Leskovec, Stanford University CS224W: Social and Information Network Analysis Jure Leskovec, Stanford University http://cs224w.stanford.edu How to organize/navigate it? First try: Human curated Web directories Yahoo, DMOZ, LookSmart

More information

Google Page Rank Project Linear Algebra Summer 2012

Google Page Rank Project Linear Algebra Summer 2012 Google Page Rank Project Linear Algebra Summer 2012 How does an internet search engine, like Google, work? In this project you will discover how the Page Rank algorithm works to give the most relevant

More information

Computational Economics and Finance

Computational Economics and Finance Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:

More information

Discrete Markov Chain. Theory and use

Discrete Markov Chain. Theory and use Discrete Markov Chain. Theory and use Andres Vallone PhD Student andres.vallone@predoc.uam.es 2016 Contents 1 Introduction 2 Concept and definition Examples Transitions Matrix Chains Classification 3 Empirical

More information

A Residual Inverse Power Method

A Residual Inverse Power Method University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR 2007 09 TR 4854 A Residual Inverse Power Method G. W. Stewart February 2007 ABSTRACT The inverse

More information

Some relationships between Kleinberg s hubs and authorities, correspondence analysis, and the Salsa algorithm

Some relationships between Kleinberg s hubs and authorities, correspondence analysis, and the Salsa algorithm Some relationships between Kleinberg s hubs and authorities, correspondence analysis, and the Salsa algorithm François Fouss 1, Jean-Michel Renders 2 & Marco Saerens 1 {saerens,fouss}@isys.ucl.ac.be, jean-michel.renders@xrce.xerox.com

More information

MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors

MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors Michael K. Ng Centre for Mathematical Imaging and Vision and Department of Mathematics

More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information

Online solution of the average cost Kullback-Leibler optimization problem

Online solution of the average cost Kullback-Leibler optimization problem Online solution of the average cost Kullback-Leibler optimization problem Joris Bierkens Radboud University Nijmegen j.bierkens@science.ru.nl Bert Kappen Radboud University Nijmegen b.kappen@science.ru.nl

More information

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Frank Curtis, John Drew, Chi-Kwong Li, and Daniel Pragel September 25, 2003 Abstract We study central groupoids, central

More information

Markov chains (week 6) Solutions

Markov chains (week 6) Solutions Markov chains (week 6) Solutions 1 Ranking of nodes in graphs. A Markov chain model. The stochastic process of agent visits A N is a Markov chain (MC). Explain. The stochastic process of agent visits A

More information

LINK ANALYSIS. Dr. Gjergji Kasneci Introduction to Information Retrieval WS

LINK ANALYSIS. Dr. Gjergji Kasneci Introduction to Information Retrieval WS LINK ANALYSIS Dr. Gjergji Kasneci Introduction to Information Retrieval WS 2012-13 1 Outline Intro Basics of probability and information theory Retrieval models Retrieval evaluation Link analysis Models

More information

arxiv: v1 [physics.soc-ph] 15 Dec 2009

arxiv: v1 [physics.soc-ph] 15 Dec 2009 Power laws of the in-degree and out-degree distributions of complex networks arxiv:0912.2793v1 [physics.soc-ph] 15 Dec 2009 Shinji Tanimoto Department of Mathematics, Kochi Joshi University, Kochi 780-8515,

More information

Extrapolation Methods for Accelerating PageRank Computations

Extrapolation Methods for Accelerating PageRank Computations Extrapolation Methods for Accelerating PageRank Computations Sepandar D. Kamvar Stanford University sdkamvar@stanford.edu Taher H. Haveliwala Stanford University taherh@cs.stanford.edu Christopher D. Manning

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

A Note on Eigenvalues of Perturbed Hermitian Matrices

A Note on Eigenvalues of Perturbed Hermitian Matrices A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.

More information

Discrete quantum random walks

Discrete quantum random walks Quantum Information and Computation: Report Edin Husić edin.husic@ens-lyon.fr Discrete quantum random walks Abstract In this report, we present the ideas behind the notion of quantum random walks. We further

More information

Pseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from Thomson-Reuters Journal Citations Reports

Pseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from Thomson-Reuters Journal Citations Reports Pseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from Thomson-Reuters Journal Citations Reports Jevin West and Carl T. Bergstrom November 25, 2008 1 Overview There

More information

LAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006.

LAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006. LAPACK-Style Codes for Pivoted Cholesky and QR Updating Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig 2007 MIMS EPrint: 2006.385 Manchester Institute for Mathematical Sciences School of Mathematics

More information

1998: enter Link Analysis

1998: enter Link Analysis 1998: enter Link Analysis uses hyperlink structure to focus the relevant set combine traditional IR score with popularity score Page and Brin 1998 Kleinberg Web Information Retrieval IR before the Web

More information

Math 304 Handout: Linear algebra, graphs, and networks.

Math 304 Handout: Linear algebra, graphs, and networks. Math 30 Handout: Linear algebra, graphs, and networks. December, 006. GRAPHS AND ADJACENCY MATRICES. Definition. A graph is a collection of vertices connected by edges. A directed graph is a graph all

More information

Online Social Networks and Media. Link Analysis and Web Search

Online Social Networks and Media. Link Analysis and Web Search Online Social Networks and Media Link Analysis and Web Search How to Organize the Web First try: Human curated Web directories Yahoo, DMOZ, LookSmart How to organize the web Second try: Web Search Information

More information

Some relationships between Kleinberg s hubs and authorities, correspondence analysis, and the Salsa algorithm

Some relationships between Kleinberg s hubs and authorities, correspondence analysis, and the Salsa algorithm Some relationships between Kleinberg s hubs and authorities, correspondence analysis, and the Salsa algorithm François Fouss 1, Jean-Michel Renders 2, Marco Saerens 1 1 ISYS Unit, IAG Université catholique

More information

Links between Kleinberg s hubs and authorities, correspondence analysis, and Markov chains

Links between Kleinberg s hubs and authorities, correspondence analysis, and Markov chains Links between Kleinberg s hubs and authorities, correspondence analysis, and Markov chains Francois Fouss, Jean-Michel Renders & Marco Saerens Université Catholique de Louvain and Xerox Research Center

More information

Institute for Advanced Computer Studies. Department of Computer Science. On Markov Chains with Sluggish Transients. G. W. Stewart y.

Institute for Advanced Computer Studies. Department of Computer Science. On Markov Chains with Sluggish Transients. G. W. Stewart y. University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{94{77 TR{3306 On Markov Chains with Sluggish Transients G. W. Stewart y June, 994 ABSTRACT

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu 2/7/2012 Jure Leskovec, Stanford C246: Mining Massive Datasets 2 Web pages are not equally important www.joe-schmoe.com

More information

Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the

More information

The Google Markov Chain: convergence speed and eigenvalues

The Google Markov Chain: convergence speed and eigenvalues U.U.D.M. Project Report 2012:14 The Google Markov Chain: convergence speed and eigenvalues Fredrik Backåker Examensarbete i matematik, 15 hp Handledare och examinator: Jakob Björnberg Juni 2012 Department

More information

Node Centrality and Ranking on Networks

Node Centrality and Ranking on Networks Node Centrality and Ranking on Networks Leonid E. Zhukov School of Data Analysis and Artificial Intelligence Department of Computer Science National Research University Higher School of Economics Social

More information

eigenvalues, markov matrices, and the power method

eigenvalues, markov matrices, and the power method eigenvalues, markov matrices, and the power method Slides by Olson. Some taken loosely from Jeff Jauregui, Some from Semeraro L. Olson Department of Computer Science University of Illinois at Urbana-Champaign

More information

A Cycle-Based Bound for Subdominant Eigenvalues of Stochastic Matrices

A Cycle-Based Bound for Subdominant Eigenvalues of Stochastic Matrices A Cycle-Based Bound for Subdominant Eigenvalues of Stochastic Matrices Steve Kirkland Department of Mathematics and Statistics University of Regina Regina, Saskatchewan, Canada, S4S 0A2 August 3, 2007

More information

Fast Linear Iterations for Distributed Averaging 1

Fast Linear Iterations for Distributed Averaging 1 Fast Linear Iterations for Distributed Averaging 1 Lin Xiao Stephen Boyd Information Systems Laboratory, Stanford University Stanford, CA 943-91 lxiao@stanford.edu, boyd@stanford.edu Abstract We consider

More information

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity

More information

Agreement algorithms for synchronization of clocks in nodes of stochastic networks

Agreement algorithms for synchronization of clocks in nodes of stochastic networks UDC 519.248: 62 192 Agreement algorithms for synchronization of clocks in nodes of stochastic networks L. Manita, A. Manita National Research University Higher School of Economics, Moscow Institute of

More information

Methods in Clustering

Methods in Clustering Methods in Clustering Katelyn Gao Heather Hardeman Edward Lim Cristian Potter Carl Meyer Ralph Abbey August 5, 2011 Abstract Cluster Analytics helps to analyze the massive amounts of data which have accrued

More information

Lab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018

Lab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018 Lab 8: Measuring Graph Centrality - PageRank Monday, November 5 CompSci 531, Fall 2018 Outline Measuring Graph Centrality: Motivation Random Walks, Markov Chains, and Stationarity Distributions Google

More information

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992.

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992. Perturbation results for nearly uncoupled Markov chains with applications to iterative methods Jesse L. Barlow December 9, 992 Abstract The standard perturbation theory for linear equations states that

More information