Updating Markov Chains
Carl Meyer, Amy Langville


Updating Markov Chains
Carl Meyer and Amy Langville
Department of Mathematics, North Carolina State University, Raleigh, NC
A. A. Markov Anniversary Meeting, June 13, 2006

Intro

Assumptions
  Very large irreducible chain: m = O(10^9) states
  Q (m x m): old transition matrix
  φ^T = (φ_1, φ_2, ..., φ_m): old stationary distribution

Updates
  Change some transition probabilities
  Add or delete some states
  P (n x n): new transition matrix (irreducible)
  π^T = (π_1, π_2, ..., π_n): new stationary distribution (unknown)

Aim
  Use φ^T to compute π^T

Exact (Theoretical) Updating

Perturbation formula
  P = Q − E  ⟹  π^T = φ^T − ε^T
  ε^T = φ^T E Z (I + E Z)^{-1}, where Z = (I − Q)^# is the fundamental matrix (group inverse)
  Requires m = n (no states added or deleted)

One row at a time (Sherman–Morrison)
  p_i^T = q_i^T − δ_i^T  ⟹  π^T = φ^T − ε^T
  ε^T = ( φ_i / (1 + δ_i^T A^#_{*i}) ) δ_i^T A^#, where A^# = (I − Q)^#
  (I − P)^# = A^# + e ε^T [A^# − γ I] − A^#_{*i} ε^T / φ_i, with γ = ε^T A^#_{*i} / φ_i

Not practical for large problems.

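The exact perturbation formula can be checked numerically on a tiny chain. The sketch below (illustrative, not part of the talk) builds a random 4-state chain Q, perturbs one row to get P, forms Z = (I − Q)^# via the identity (I − Q + eφ^T)^{-1} = Z + eφ^T, and verifies that φ^T − φ^T E Z (I + E Z)^{-1} reproduces the new stationary distribution:

```python
import numpy as np

def stationary(T):
    """Left eigenvector of T for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

rng = np.random.default_rng(0)

# Old chain Q (strictly positive entries => irreducible) and its stationary phi.
Q = rng.random((4, 4)) + 0.1
Q /= Q.sum(axis=1, keepdims=True)
phi = stationary(Q)

# Perturb one row and renormalize to get the new chain P; then E = Q - P.
P = Q.copy()
P[2] = rng.random(4) + 0.1
P[2] /= P[2].sum()
E = Q - P

# Group inverse Z = (I - Q)^# via (I - Q + e phi^T)^{-1} = Z + e phi^T.
n = Q.shape[0]
I = np.eye(n)
Z = np.linalg.inv(I - Q + np.outer(np.ones(n), phi)) - np.outer(np.ones(n), phi)

# Perturbation formula: pi^T = phi^T - phi^T E Z (I + E Z)^{-1}.
eps = phi @ E @ Z @ np.linalg.inv(I + E @ Z)
pi_formula = phi - eps

print(np.allclose(pi_formula, stationary(P)))  # True
```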
Restarted Power Method

  x_{j+1}^T = x_j^T P with x_0^T = φ^T (assume aperiodic)
  Requires m = n (no states added or deleted)
  Requires φ^T ≈ π^T (generally means P ≈ Q)

Asymptotic rate of convergence: R = −log_10 |λ_2|
  Need about 1/R iterations to eventually gain one additional significant digit of accuracy.

Example
  Suppose digit_1(φ^T) = digit_1(π^T) and 12 digits of accuracy are wanted.
  Restarted power method: about 11/R iterations
  Starting from scratch: about 12/R iterations
  Only an 8.3% reduction — a little better, but not great.

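A minimal sketch of the restarted power method (illustrative; the random dense chain and all sizes are assumptions). Warm-starting from φ^T typically needs no more iterations than a cold start from the uniform vector, but for a well-mixed chain the savings are modest, as the 8.3% example suggests:

```python
import numpy as np

def power_iters(P, x0, tol=1e-12, max_iter=100000):
    """Iterate x_{j+1}^T = x_j^T P; return (estimate, iteration count)."""
    x = x0.copy()
    for j in range(1, max_iter + 1):
        x_new = x @ P
        if np.abs(x_new - x).sum() < tol:
            return x_new, j
        x = x_new
    return x, max_iter

rng = np.random.default_rng(1)
n = 50
Q = rng.random((n, n)) + 0.01
Q /= Q.sum(axis=1, keepdims=True)
phi, _ = power_iters(Q, np.full(n, 1.0 / n))    # old stationary distribution

# Small update: replace one row of Q.
P = Q.copy()
P[0] = rng.random(n) + 0.01
P[0] /= P[0].sum()

pi_warm, iters_restart = power_iters(P, phi)                  # restart from phi
pi_cold, iters_scratch = power_iters(P, np.full(n, 1.0 / n))  # from scratch

print(iters_restart, iters_scratch)
```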
Censoring

Partition (not necessarily NCD)
  P (n x n) partitioned by groups G_1, G_2, ..., G_k into blocks P_ij, i, j = 1, ..., k.

Censored chains
  A censored chain records the state of the process only when the chain visits states in G_i.
  Visits to states outside of G_i are ignored.

Censored transition matrices (stochastic complements)
  C_i = P_ii + P_{i*} (I − P_i)^{-1} P_{*i}
  where P_i is P with the i-th row and column of blocks deleted, and P_{i*}, P_{*i} are the
  i-th row and column of blocks with P_ii deleted.

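The stochastic complement can be formed directly. This sketch (illustrative) checks two standard facts of stochastic complementation: C_1 is itself stochastic, and its stationary vector is the censored distribution, i.e. the restriction of π^T to G_1, renormalized:

```python
import numpy as np

def stationary(T):
    """Left eigenvector of T for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

rng = np.random.default_rng(2)
n = 6
G1 = [0, 1]                                # censored group G_1
rest = [i for i in range(n) if i not in G1]

P = rng.random((n, n)) + 0.1               # dense positive => irreducible
P /= P.sum(axis=1, keepdims=True)

# Stochastic complement C_1 = P_11 + P_12 (I - P_22)^{-1} P_21,
# where P_22 is the block over the states outside G_1.
P11 = P[np.ix_(G1, G1)]
P12 = P[np.ix_(G1, rest)]
P21 = P[np.ix_(rest, G1)]
P22 = P[np.ix_(rest, rest)]
C1 = P11 + P12 @ np.linalg.inv(np.eye(len(rest)) - P22) @ P21

pi = stationary(P)
s1 = pi[G1] / pi[G1].sum()                 # censored distribution over G_1
print(np.allclose(C1.sum(axis=1), 1.0))    # True: C_1 is stochastic
print(np.allclose(stationary(C1), s1))     # True: s_1^T C_1 = s_1^T
```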
Censored Distributions
  s_i^T C_i = s_i^T (stationary distribution of the censored chain)

Aggregation

Aggregation matrix (k x k)
  A = [ s_i^T P_ij e ], i.e., entry (i, j) is s_i^T P_ij e.

Aggregated distribution
  α^T A = α^T,  α^T = (α_1, α_2, ..., α_k)

Aggregation theorem
  π^T = (π_1^T π_2^T ... π_k^T) = (α_1 s_1^T, α_2 s_2^T, ..., α_k s_k^T)

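The aggregation theorem can be exercised end to end on a small chain (illustrative sketch; the group sizes are arbitrary): compute each censored distribution s_i^T from its stochastic complement, form the k x k aggregation matrix A, solve α^T A = α^T, and reassemble π^T:

```python
import numpy as np

def stationary(T):
    """Left eigenvector of T for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

rng = np.random.default_rng(3)
n = 7
groups = [[0, 1, 2], [3, 4], [5, 6]]       # contiguous groups G_1, G_2, G_3
P = rng.random((n, n)) + 0.1
P /= P.sum(axis=1, keepdims=True)

k = len(groups)
s = []
for Gi in groups:
    rest = [x for x in range(n) if x not in Gi]
    # Stochastic complement of G_i, then its stationary vector s_i.
    Ci = (P[np.ix_(Gi, Gi)]
          + P[np.ix_(Gi, rest)]
          @ np.linalg.inv(np.eye(len(rest)) - P[np.ix_(rest, rest)])
          @ P[np.ix_(rest, Gi)])
    s.append(stationary(Ci))

# Aggregation matrix: A_{ij} = s_i^T P_{ij} e.
A = np.array([[s[i] @ P[np.ix_(groups[i], groups[j])].sum(axis=1)
               for j in range(k)] for i in range(k)])
alpha = stationary(A)                       # aggregated distribution

# Aggregation theorem: pi^T = (alpha_1 s_1^T, ..., alpha_k s_k^T).
pi_built = np.concatenate([alpha[i] * s[i] for i in range(k)])
print(np.allclose(pi_built, stationary(P)))  # True
```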
Partitioning

Intuition
  Updating a relatively small number of states in a large sparse chain has effects that are
  primarily local: most stationary probabilities are not significantly affected.

State space partition
  S = G ∪ Ḡ, where G = {states likely to be most affected} ∪ {newly added states}
  g = |G| ≪ |Ḡ|

Induced matrix partition
  P (n x n) = [ P_11  P_12 ]
              [ P_21  P_22 ]
  where the g x g block P_11 = [p_ij] has one row and column per state of G, and P_22 covers Ḡ.

Specialized Aggregation

Censored transition matrices
  C_1 = ... = C_g = [1] (1 x 1)
  C_{g+1} = P_22 + P_21 (I − P_11)^{-1} P_12

Censored distributions
  s_1^T = ... = s_g^T = 1
  s_{g+1}^T = s_{g+1}^T C_{g+1}

Aggregation matrix ((g+1) x (g+1))
  A = [ P_11             P_12 e               ]
      [ s_{g+1}^T P_21   1 − s_{g+1}^T P_21 e ]

Aggregated distribution
  α^T A = α^T,  α^T = (α_1, ..., α_g, α_{g+1})

Aggregation theorem
  π^T = (π_1, ..., π_g | π̄^T) = (α_1, ..., α_g, α_{g+1} s_{g+1}^T)

Old distribution (reordered)
  φ^T = (φ_1, φ_2, ..., φ_g | φ̄^T)

The assumption
  φ̄^T ≈ π̄^T  ⟹  α_{g+1} s_{g+1}^T ≈ φ̄^T  ⟹  s_{g+1}^T ≈ φ̄^T / (φ̄^T e)

Summary

Reorder & partition the updated state space
  S = G ∪ Ḡ,  G = {new + most affected},  Ḡ = {less affected}

Use less-affected components from the old distribution
  φ̄^T = components of φ^T corresponding to states in Ḡ

Approximate censored distribution
  s^T ≈ φ̄^T / (φ̄^T e)

Form the approximate aggregation matrix ((g+1) x (g+1))
  A ≈ [ P_11       P_12 e         ]
      [ s^T P_21   1 − s^T P_21 e ]

Compute the approximate aggregated distribution
  α^T ≈ (α_1, ..., α_g, α_{g+1})

Approximate updated distribution
  π^T ≈ (α_1, ..., α_g, α_{g+1} s^T)

Iterate? No! At a fixed point.

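The one-shot approximate update can be sketched as follows (illustrative; the choice G = first g states, the chain, and all sizes are assumptions). Only row 0 of Q changes, so G is chosen to contain that state, and s^T is taken from the old φ^T:

```python
import numpy as np

def stationary(T):
    """Left eigenvector of T for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

rng = np.random.default_rng(4)
n, g = 30, 3                       # G = states {0, ..., g-1}
Q = rng.random((n, n)) + 0.05
Q /= Q.sum(axis=1, keepdims=True)
phi = stationary(Q)

# Update: perturb row 0 only, so G covers the changed state.
P = Q.copy()
P[0] = rng.random(n) + 0.05
P[0] /= P[0].sum()

P11, P12 = P[:g, :g], P[:g, g:]
P21 = P[g:, :g]

# Approximate censored distribution from the old chain: s^T ~ phi_bar^T / (phi_bar^T e).
s = phi[g:] / phi[g:].sum()

# Approximate (g+1) x (g+1) aggregation matrix.
A = np.zeros((g + 1, g + 1))
A[:g, :g] = P11
A[:g, g] = P12.sum(axis=1)
A[g, :g] = s @ P21
A[g, g] = 1.0 - (s @ P21).sum()    # rows of A sum to 1

alpha = stationary(A)
pi_approx = np.concatenate([alpha[:g], alpha[g] * s])

err = np.abs(pi_approx - stationary(P)).sum()
print(err)                          # 1-norm error of the one-shot approximation
```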
Iterative Aggregation

Move off of the fixed point with a power step:
  1. s^T ← φ̄^T / (φ̄^T e)
  2. A ← [ P_11       P_12 e         ]   ((g+1) x (g+1))
         [ s^T P_21   1 − s^T P_21 e ]
  3. α^T ← (α_1, ..., α_g, α_{g+1}), the stationary distribution of A
  4. π^T ← (α_1, ..., α_g, α_{g+1} s^T)
  5. ψ^T ← π^T P (the power step; also makes progress toward convergence when aperiodic)
  6. If ‖ψ^T − π^T‖ < τ, then quit; else s^T ← ψ̄^T / (ψ̄^T e) and return to step 2.

Theorem
  If C = P_22 + P_21 (I − P_11)^{-1} P_12 is aperiodic, then the method is convergent
  for all partitions S = G ∪ Ḡ.
  λ_2(C) determines the rate of convergence.

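The full iterative aggregation loop can be sketched as below (illustrative; the partition and sizes are assumptions). Each pass solves the small aggregated problem, then applies one power step to move off the fixed point:

```python
import numpy as np

def stationary(T):
    """Left eigenvector of T for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def iad_update(P, phi, g, tol=1e-10, max_iter=500):
    """Iterative aggregation: states 0..g-1 form G, the rest form G-bar."""
    P11, P12, P21 = P[:g, :g], P[:g, g:], P[g:, :g]
    s = phi[g:] / phi[g:].sum()            # initial censored estimate from old phi
    for _ in range(max_iter):
        A = np.zeros((g + 1, g + 1))
        A[:g, :g] = P11
        A[:g, g] = P12.sum(axis=1)
        A[g, :g] = s @ P21
        A[g, g] = 1.0 - A[g, :g].sum()     # rows of A sum to 1
        alpha = stationary(A)
        pi = np.concatenate([alpha[:g], alpha[g] * s])
        psi = pi @ P                        # power step: move off the fixed point
        if np.abs(psi - pi).sum() < tol:
            return psi
        s = psi[g:] / psi[g:].sum()
    return psi

rng = np.random.default_rng(5)
n, g = 40, 4
Q = rng.random((n, n)) + 0.05
Q /= Q.sum(axis=1, keepdims=True)
phi = stationary(Q)

P = Q.copy()
P[1] = rng.random(n) + 0.05                # update one row inside G
P[1] /= P[1].sum()

pi = iad_update(P, phi, g)
print(np.allclose(pi, stationary(P), atol=1e-6))  # True
```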
Google's PageRank

Random walk on WWW link structure
  H_ij = 1/(total # of outlinks from page P_i) if P_i links to P_j, and 0 otherwise.

Google matrix
  P = α(H + E) + (1 − α)F
  (H + E) and F are stochastic;  rank(E) = rank(F) = 1;  0 < α < 1
  PageRank = π^T

Power-law distribution
  If ordered by magnitude, π(1) ≥ π(2) ≥ ... ≥ π(n), then π(i) ≈ α i^{−k} for some k
  [Donato, Laura, Leonardi, 2002], [Pandurangan, Raghavan, Upfal, 2004]
  Relatively few large states (i.e., important sites)

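A minimal PageRank computation from a toy link structure (illustrative; the 4-page web and α = 0.85 are assumptions). Every page here has outlinks, so the dangling-node correction E is zero and F is taken as the uniform rank-one teleportation matrix e v^T:

```python
import numpy as np

# Tiny 4-page web; H gives uniform probability over each page's outlinks.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4
H = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        H[i, j] = 1.0 / len(outs)

alpha = 0.85
e = np.ones(n)
v = e / n                              # uniform teleportation vector
# No dangling pages here, so E = 0 and the Google matrix is:
G = alpha * H + (1 - alpha) * np.outer(e, v)

# PageRank = stationary vector pi^T of G, via the power method.
pi = v.copy()
for _ in range(200):
    pi = pi @ G
pi /= pi.sum()
print(pi.round(4))
```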
L-Curves
  [Figure: L-curves of the ordered PageRank values for six datasets — movies, mathworks, censorship, abortion, genetics, CA]

Experiments

The updates
  # nodes added = 3,  # nodes removed = 50
  # links added = 10,  # links removed = 20
  (Different values have little effect on results.)

Stopping criterion
  1-norm of residual < 10^{-10}

Results: Power Method vs. Iterative Aggregation
(For each dataset, iterations and time for the power method and for iterative aggregation as |G| varies; the numeric table entries were not recovered in transcription.)

  Movies:     nodes = 451,   links = 713
  Censorship: nodes = 562,   links = 736
  MathWorks:  nodes = 517,   links = 13,531
  Abortion:   nodes = 1,693, links = 4,325
  Genetics:   nodes = 2,952, links = 6,485
  California: nodes = 9,664, links = 16,150

L-Curves (revisited)
  [Figure: L-curves for the six datasets — movies, mathworks, censorship, abortion, genetics, CA]

Quadratic Extrapolation [Kamvar, Haveliwala, Manning, Golub, 2003]
  nodes = 10,000, links = 101,118
  [Figure: log_10 residual vs. iteration for the power method, the power method with
  quadratic extrapolation, IAD, and IAD with quadratic extrapolation]

Timings
  Iterations and time (sec) for varying |G|: power method, power method + quadratic
  extrapolation, IAD, and IAD + quadratic extrapolation [table entries not recovered]
  nodes = 10,000, links = 101,118

Conclusion
  Iterative aggregation shows promise for updating Markov chains, especially those
  having power-law distributions.

Leveling-Off Point
  π(i) ≈ α i^{−k}
  |dπ(i)/di| ≤ ε for some user-defined tolerance ε
  ⟹ i_level ≈ (kα/ε)^{1/(k+1)}
  Perhaps better: i_level ≈ f(n) (kα/ε)^{1/(k+1)}
  For the WWW: g_opt ≈ f(n) [2.109 α / ε]^{1/3.109}
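Plugging representative numbers into the leveling-off formula (illustrative; the values of α and ε are assumptions, and k = 2.109 matches the WWW exponent on the slide):

```python
# With pi(i) ~ alpha * i**(-k), the slope |d pi / d i| = k * alpha * i**(-(k+1))
# drops below eps at i_level = (k * alpha / eps) ** (1 / (k + 1)).
alpha = 0.85       # assumed scale constant of the power law
k = 2.109          # web exponent from the slide's g_opt formula
eps = 1e-6         # assumed user-defined tolerance

i_level = (k * alpha / eps) ** (1.0 / (k + 1))
print(round(i_level))
```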


More information

Lab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018

Lab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018 Lab 8: Measuring Graph Centrality - PageRank Monday, November 5 CompSci 531, Fall 2018 Outline Measuring Graph Centrality: Motivation Random Walks, Markov Chains, and Stationarity Distributions Google

More information

CS224W: Social and Information Network Analysis Jure Leskovec, Stanford University

CS224W: Social and Information Network Analysis Jure Leskovec, Stanford University CS224W: Social and Information Network Analysis Jure Leskovec, Stanford University http://cs224w.stanford.edu How to organize/navigate it? First try: Human curated Web directories Yahoo, DMOZ, LookSmart

More information

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 14: Random Walks, Local Graph Clustering, Linear Programming Lecturer: Shayan Oveis Gharan 3/01/17 Scribe: Laura Vonessen Disclaimer: These

More information

Fast PageRank Computation Via a Sparse Linear System (Extended Abstract)

Fast PageRank Computation Via a Sparse Linear System (Extended Abstract) Fast PageRank Computation Via a Sparse Linear System (Extended Abstract) Gianna M. Del Corso 1 Antonio Gullí 1,2 Francesco Romani 1 1 Dipartimento di Informatica, University of Pisa, Italy 2 IIT-CNR, Pisa

More information

LINK ANALYSIS. Dr. Gjergji Kasneci Introduction to Information Retrieval WS

LINK ANALYSIS. Dr. Gjergji Kasneci Introduction to Information Retrieval WS LINK ANALYSIS Dr. Gjergji Kasneci Introduction to Information Retrieval WS 2012-13 1 Outline Intro Basics of probability and information theory Retrieval models Retrieval evaluation Link analysis Models

More information

Link Mining PageRank. From Stanford C246

Link Mining PageRank. From Stanford C246 Link Mining PageRank From Stanford C246 Broad Question: How to organize the Web? First try: Human curated Web dictionaries Yahoo, DMOZ LookSmart Second try: Web Search Information Retrieval investigates

More information

A Note on Google s PageRank

A Note on Google s PageRank A Note on Google s PageRank According to Google, google-search on a given topic results in a listing of most relevant web pages related to the topic. Google ranks the importance of webpages according to

More information

CS6220: DATA MINING TECHNIQUES

CS6220: DATA MINING TECHNIQUES CS6220: DATA MINING TECHNIQUES Mining Graph/Network Data Instructor: Yizhou Sun yzsun@ccs.neu.edu November 16, 2015 Methods to Learn Classification Clustering Frequent Pattern Mining Matrix Data Decision

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University.

CS246: Mining Massive Datasets Jure Leskovec, Stanford University. CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu What is the structure of the Web? How is it organized? 2/7/2011 Jure Leskovec, Stanford C246: Mining Massive

More information

How works. or How linear algebra powers the search engine. M. Ram Murty, FRSC Queen s Research Chair Queen s University

How works. or How linear algebra powers the search engine. M. Ram Murty, FRSC Queen s Research Chair Queen s University How works or How linear algebra powers the search engine M. Ram Murty, FRSC Queen s Research Chair Queen s University From: gomath.com/geometry/ellipse.php Metric mishap causes loss of Mars orbiter

More information

CS249: ADVANCED DATA MINING

CS249: ADVANCED DATA MINING CS249: ADVANCED DATA MINING Graph and Network Instructor: Yizhou Sun yzsun@cs.ucla.edu May 31, 2017 Methods Learnt Classification Clustering Vector Data Text Data Recommender System Decision Tree; Naïve

More information

Link Analysis. Reference: Introduction to Information Retrieval by C. Manning, P. Raghavan, H. Schutze

Link Analysis. Reference: Introduction to Information Retrieval by C. Manning, P. Raghavan, H. Schutze Link Analysis Reference: Introduction to Information Retrieval by C. Manning, P. Raghavan, H. Schutze 1 The Web as a Directed Graph Page A Anchor hyperlink Page B Assumption 1: A hyperlink between pages

More information

Manipulability of PageRank under Sybil Strategies

Manipulability of PageRank under Sybil Strategies Manipulability of PageRank under Sybil Strategies Alice Cheng Eric Friedman Abstract The sybil attack is one of the easiest and most common methods of manipulating reputation systems. In this paper, we

More information

Data Mining and Matrices

Data Mining and Matrices Data Mining and Matrices 10 Graphs II Rainer Gemulla, Pauli Miettinen Jul 4, 2013 Link analysis The web as a directed graph Set of web pages with associated textual content Hyperlinks between webpages

More information

Uncertainty and Randomization

Uncertainty and Randomization Uncertainty and Randomization The PageRank Computation in Google Roberto Tempo IEIIT-CNR Politecnico di Torino tempo@polito.it 1993: Robustness of Linear Systems 1993: Robustness of Linear Systems 16 Years

More information

Robust PageRank: Stationary Distribution on a Growing Network Structure

Robust PageRank: Stationary Distribution on a Growing Network Structure oname manuscript o. will be inserted by the editor Robust PageRank: Stationary Distribution on a Growing etwork Structure Anna Timonina-Farkas Received: date / Accepted: date Abstract PageRank PR is a

More information

Comparison of perturbation bounds for the stationary distribution of a Markov chain

Comparison of perturbation bounds for the stationary distribution of a Markov chain Linear Algebra and its Applications 335 (00) 37 50 www.elsevier.com/locate/laa Comparison of perturbation bounds for the stationary distribution of a Markov chain Grace E. Cho a, Carl D. Meyer b,, a Mathematics

More information

CS6220: DATA MINING TECHNIQUES

CS6220: DATA MINING TECHNIQUES CS6220: DATA MINING TECHNIQUES Mining Graph/Network Data Instructor: Yizhou Sun yzsun@ccs.neu.edu March 16, 2016 Methods to Learn Classification Clustering Frequent Pattern Mining Matrix Data Decision

More information

Complex Social System, Elections. Introduction to Network Analysis 1

Complex Social System, Elections. Introduction to Network Analysis 1 Complex Social System, Elections Introduction to Network Analysis 1 Complex Social System, Network I person A voted for B A is more central than B if more people voted for A In-degree centrality index

More information

MPageRank: The Stability of Web Graph

MPageRank: The Stability of Web Graph Vietnam Journal of Mathematics 37:4(2009) 475-489 VAST 2009 MPageRank: The Stability of Web Graph Le Trung Kien 1, Le Trung Hieu 2, Tran Loc Hung 1, and Le Anh Vu 3 1 Department of Mathematics, College

More information

Transience: Whereas a finite closed communication class must be recurrent, an infinite closed communication class can be transient:

Transience: Whereas a finite closed communication class must be recurrent, an infinite closed communication class can be transient: Stochastic2010 Page 1 Long-Time Properties of Countable-State Markov Chains Tuesday, March 23, 2010 2:14 PM Homework 2: if you turn it in by 5 PM on 03/25, I'll grade it by 03/26, but you can turn it in

More information

0.1 Naive formulation of PageRank

0.1 Naive formulation of PageRank PageRank is a ranking system designed to find the best pages on the web. A webpage is considered good if it is endorsed (i.e. linked to) by other good webpages. The more webpages link to it, and the more

More information

Traps and Pitfalls of Topic-Biased PageRank

Traps and Pitfalls of Topic-Biased PageRank Traps and Pitfalls of Topic-Biased PageRank Paolo Boldi Roberto Posenato Massimo Santini Sebastiano Vigna Dipartimento di Scienze dell Informazione, Università degli Studi di Milano, Italy and Dipartimento

More information

Markov Chains and Web Ranking: a Multilevel Adaptive Aggregation Method

Markov Chains and Web Ranking: a Multilevel Adaptive Aggregation Method Markov Chains and Web Ranking: a Multilevel Adaptive Aggregation Method Hans De Sterck Department of Applied Mathematics, University of Waterloo Quoc Nguyen; Steve McCormick, John Ruge, Tom Manteuffel

More information

Coding the Matrix! Divya Padmanabhan. Intelligent Systems Lab, Dept. of CSA, Indian Institute of Science, Bangalore. June 28, 2013

Coding the Matrix! Divya Padmanabhan. Intelligent Systems Lab, Dept. of CSA, Indian Institute of Science, Bangalore. June 28, 2013 Coding the Matrix! Divya Padmanabhan Intelligent Systems Lab, Dept. of CSA, Indian Institute of Science, Bangalore June 28, 203 Divya Padmanabhan Coding the Matrix! June 28, 203 How does the most popular

More information

PageRank algorithm Hubs and Authorities. Data mining. Web Data Mining PageRank, Hubs and Authorities. University of Szeged.

PageRank algorithm Hubs and Authorities. Data mining. Web Data Mining PageRank, Hubs and Authorities. University of Szeged. Web Data Mining PageRank, University of Szeged Why ranking web pages is useful? We are starving for knowledge It earns Google a bunch of money. How? How does the Web looks like? Big strongly connected

More information

MULTILEVEL ADAPTIVE AGGREGATION FOR MARKOV CHAINS, WITH APPLICATION TO WEB RANKING

MULTILEVEL ADAPTIVE AGGREGATION FOR MARKOV CHAINS, WITH APPLICATION TO WEB RANKING MULTILEVEL ADAPTIVE AGGREGATION FOR MARKOV CHAINS, WITH APPLICATION TO WEB RANKING H. DE STERCK, THOMAS A. MANTEUFFEL, STEPHEN F. MCCORMICK, QUOC NGUYEN, AND JOHN RUGE Abstract. A multilevel adaptive aggregation

More information

Markov chains (week 6) Solutions

Markov chains (week 6) Solutions Markov chains (week 6) Solutions 1 Ranking of nodes in graphs. A Markov chain model. The stochastic process of agent visits A N is a Markov chain (MC). Explain. The stochastic process of agent visits A

More information

April 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning

April 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning for for Advanced Topics in California Institute of Technology April 20th, 2017 1 / 50 Table of Contents for 1 2 3 4 2 / 50 History of methods for Enrico Fermi used to calculate incredibly accurate predictions

More information

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1 Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The

More information

Convergence Rate of Markov Chains

Convergence Rate of Markov Chains Convergence Rate of Markov Chains Will Perkins April 16, 2013 Convergence Last class we saw that if X n is an irreducible, aperiodic, positive recurrent Markov chain, then there exists a stationary distribution

More information

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1.

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Stationary Distributions Monday, September 28, 2015 2:02 PM No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Homework 1 due Friday, October 2 at 5 PM strongly

More information

SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN*

SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN* SIAM J Matrix Anal Appl c 1994 Society for Industrial and Applied Mathematics Vol 15, No 3, pp 715-728, July, 1994 001 SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN* CARL D MEYER Abstract

More information

MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors

MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors Michael K. Ng Centre for Mathematical Imaging and Vision and Department of Mathematics

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov

More information

Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 110 Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

More information

Computing approximate PageRank vectors by Basis Pursuit Denoising

Computing approximate PageRank vectors by Basis Pursuit Denoising Computing approximate PageRank vectors by Basis Pursuit Denoising Michael Saunders Systems Optimization Laboratory, Stanford University Joint work with Holly Jin, LinkedIn Corp SIAM Annual Meeting San

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ),

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ), MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 4: Steady-State Theory Contents 4.1 The Concept of Stochastic Equilibrium.......................... 1 4.2

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

5. THE JUMP START POWER METHOD

5. THE JUMP START POWER METHOD 5. THE JUMP START POWER METHOD This chapter is based on [32, 35]. This chapter presents a new numerical method for approximately computing the ergodic projector of a finite aperiodic Markov multi-chain.

More information

Lecture 12: Link Analysis for Web Retrieval

Lecture 12: Link Analysis for Web Retrieval Lecture 12: Link Analysis for Web Retrieval Trevor Cohn COMP90042, 2015, Semester 1 What we ll learn in this lecture The web as a graph Page-rank method for deriving the importance of pages Hubs and authorities

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

Algebraic Representation of Networks

Algebraic Representation of Networks Algebraic Representation of Networks 0 1 2 1 1 0 0 1 2 0 0 1 1 1 1 1 Hiroki Sayama sayama@binghamton.edu Describing networks with matrices (1) Adjacency matrix A matrix with rows and columns labeled by

More information

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities

6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 6.207/14.15: Networks Lectures 4, 5 & 6: Linear Dynamics, Markov Chains, Centralities 1 Outline Outline Dynamical systems. Linear and Non-linear. Convergence. Linear algebra and Lyapunov functions. Markov

More information

Social network analysis: social learning

Social network analysis: social learning Social network analysis: social learning Donglei Du (ddu@unb.edu) Faculty of Business Administration, University of New Brunswick, NB Canada Fredericton E3B 9Y2 October 20, 2016 Donglei Du (UNB) AlgoTrading

More information

The Google Markov Chain: convergence speed and eigenvalues

The Google Markov Chain: convergence speed and eigenvalues U.U.D.M. Project Report 2012:14 The Google Markov Chain: convergence speed and eigenvalues Fredrik Backåker Examensarbete i matematik, 15 hp Handledare och examinator: Jakob Björnberg Juni 2012 Department

More information

Ranking using Matrices.

Ranking using Matrices. Anjela Govan, Amy Langville, Carl D. Meyer Northrop Grumman, College of Charleston, North Carolina State University June 2009 Outline Introduction Offense-Defense Model Generalized Markov Model Data Game

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

The PageRank Problem, Multi-Agent. Consensus and Web Aggregation

The PageRank Problem, Multi-Agent. Consensus and Web Aggregation The PageRank Problem, Multi-Agent Consensus and Web Aggregation A Systems and Control Viewpoint arxiv:32.904v [cs.sy] 6 Dec 203 Hideaki Ishii and Roberto Tempo PageRank is an algorithm introduced in 998

More information

Computational Economics and Finance

Computational Economics and Finance Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning Mark Schmidt University of British Columbia Winter 2019 Last Time: Monte Carlo Methods If we want to approximate expectations of random functions, E[g(x)] = g(x)p(x) or E[g(x)]

More information

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University)

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) The authors explain how the NCut algorithm for graph bisection

More information