Probability & Computing


1 Probability & Computing

2 Stochastic Process: a collection of random variables {X_t | t ∈ T} indexed by time t, with state space Ω; at each time the state X_t ∈ Ω takes some value x ∈ Ω. Discrete time: T is countable, e.g. T = {0, 1, 2, ...}. Discrete space: Ω is finite or countably infinite. The process is then a sequence X_0, X_1, X_2, ...

3 Markov Property: the dependency structure of X_0, X_1, X_2, ... The Markov property (memorylessness) says that X_{t+1} depends only on X_t: ∀t = 0, 1, 2, ... and ∀x_0, x_1, ..., x_{t-1}, x, y ∈ Ω,
Pr[X_{t+1} = y | X_0 = x_0, ..., X_{t-1} = x_{t-1}, X_t = x] = Pr[X_{t+1} = y | X_t = x].
A Markov chain is a discrete-time, discrete-space stochastic process with the Markov property.

4 Transition Matrix. For a Markov chain X_0, X_1, X_2, ...,
Pr[X_{t+1} = y | X_0 = x_0, ..., X_{t-1} = x_{t-1}, X_t = x] = Pr[X_{t+1} = y | X_t = x] = P^(t)_{xy} = P_{xy} (time-homogeneous).
P is a stochastic matrix: all entries P_{xy} ≥ 0 and every row sums to 1, i.e. Σ_{y∈Ω} P_{xy} = 1 for every x ∈ Ω.

5 The chain X_0, X_1, X_2, ... induces a sequence of distributions π^(0), π^(1), π^(2), ... ∈ [0, 1]^Ω, where π^(t)_x = Pr[X_t = x] and Σ_{x∈Ω} π^(t)_x = 1. They evolve by π^(t+1) = π^(t)P:
π^(t+1)_y = Pr[X_{t+1} = y] = Σ_{x∈Ω} Pr[X_t = x]·Pr[X_{t+1} = y | X_t = x] = Σ_{x∈Ω} π^(t)_x P_{xy} = (π^(t)P)_y.

6 π^(0) →(P) π^(1) →(P) ··· →(P) π^(t) →(P) π^(t+1) →(P) ··· Given the initial distribution π^(0) and the transition matrix P, we have π^(t) = π^(0)P^t. A Markov chain is thus specified by the pair M = (Ω, P).

7 Example: a chain on three states {1, 2, 3}, with transitions 1 → 2 (prob. 1), 2 → 1 (prob. 1/3), 2 → 3 (prob. 2/3), and 3 → 1, 2, 3 each with prob. 1/3:
P =
( 0    1    0  )
( 1/3  0   2/3 )
( 1/3 1/3  1/3 )

8 Convergence. For this chain the powers of P converge: every row of P^20 is approximately (1/4, 3/8, 3/8), so π^(0)P^20 ≈ (1/4, 3/8, 3/8) whatever the initial distribution π^(0).
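
This convergence is easy to check numerically; a minimal sketch with numpy (the matrix is the example chain above):

```python
import numpy as np

# Transition matrix of the example 3-state chain.
P = np.array([[0,   1,   0  ],
              [1/3, 0,   2/3],
              [1/3, 1/3, 1/3]])

# pi^(t) = pi^(0) P^t for an arbitrary starting distribution.
pi0 = np.array([1.0, 0.0, 0.0])
print(pi0 @ np.linalg.matrix_power(P, 20))   # ~ (0.25, 0.375, 0.375)

# Every row of P^20 is already close to the same limit.
print(np.linalg.matrix_power(P, 20))
```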

9 Stationary Distribution. For a Markov chain M = (Ω, P), a stationary distribution is a π with πP = π (a fixed point of the chain). Existence follows from the Perron-Frobenius Theorem: a stochastic matrix P satisfies P·1 = 1, so 1 is an eigenvalue of P; it is therefore also a left eigenvalue of P (an eigenvalue of P^T), and the corresponding left eigenvector π, with πP = π, is nonnegative. Hence a stationary distribution always exists.

10 Perron-Frobenius Theorem: let A be a nonnegative n×n matrix with spectral radius ρ(A). Then: ρ(A) is an eigenvalue of A; there is a nonnegative (left and right) eigenvector associated with ρ(A); if further A is irreducible, then there is a positive (left and right) eigenvector associated with ρ(A), and this eigenvalue has multiplicity 1. For a stochastic matrix A, the spectral radius is ρ(A) = 1.

11 Stationary Distribution (repeated). For M = (Ω, P), a stationary distribution π satisfies πP = π (a fixed point). By Perron-Frobenius, a stochastic matrix P has 1 as an eigenvalue (since P·1 = 1) and hence as a left eigenvalue (an eigenvalue of P^T), with a nonnegative left eigenvector π satisfying πP = π: a stationary distribution always exists.
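
Since a stationary distribution is a nonnegative left eigenvector of P for eigenvalue 1, it can be computed directly; a minimal sketch with numpy, assuming the chain has a unique stationary distribution:

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for the eigenvalue closest to 1, normalized."""
    w, V = np.linalg.eig(P.T)        # right eigenvectors of P^T = left of P
    i = np.argmin(np.abs(w - 1.0))
    pi = np.real(V[:, i])
    return pi / pi.sum()

P = np.array([[0, 1, 0], [1/3, 0, 2/3], [1/3, 1/3, 1/3]])
print(stationary_distribution(P))    # ~ (0.25, 0.375, 0.375)
```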

12 Convergence. The example chain from above, with every row of P^20 approximately (1/4, 3/8, 3/8), is ergodic: it converges to the stationary distribution from every initial distribution.

13 Example (reducible): the diagram shows a chain whose state space splits into parts that cannot reach each other; the powers of P may still converge, but the limit depends on the initial distribution (here one part converges to the distribution (1/4, 3/4)).

14 Example (periodic): the two-state chain that deterministically swaps states,
P = ( 0 1 ; 1 0 ),
has P^2 = ( 1 0 ; 0 1 ) = I, and in general P^{2k} = I and P^{2k+1} = P: the powers of P oscillate and never converge.

15 Two diagrams: an example of a reducible chain and an example of a periodic chain.

16 Fundamental Theorem of Markov Chains: if a finite Markov chain M = (Ω, P) is irreducible and aperiodic, then for any initial distribution π^(0) (ergodicity),
lim_{t→∞} π^(0)P^t = π,
where π is the unique stationary distribution, satisfying πP = π.

17 Irreducibility. y is accessible from x if ∃t, P^t(x, y) > 0. x communicates with y if x is accessible from y and y is accessible from x. A Markov chain is irreducible if all pairs of states communicate; in general, this relation partitions the states into communicating classes.

18 Reducible Chains. If the state space splits into two non-communicating components A and B, the transition matrix is block-diagonal:
P = ( P_A 0 ; 0 P_B ),
and every mixture π = α·(π_A, 0) + (1 − α)·(0, π_B) of the components' stationary distributions is stationary: uniqueness fails. If instead component A can reach component B but not vice versa, every state of A is transient and the stationary distribution is π = (0, π_B).

19 Aperiodicity. The period of a state x is d_x = gcd{ t | P^t(x, x) > 0 }. A chain is aperiodic if all states have period 1. Equivalently, the period of x is the gcd of the lengths of all cycles through x; in the diagram, every cycle through x has length divisible by 3, so d_x = 3.
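
The definition suggests a direct, if naive, computation of the period; a sketch that scans powers of P up to a fixed horizon (an assumption made for simplicity):

```python
import numpy as np
from math import gcd

def period(P, x, horizon=100):
    """gcd of all t <= horizon with P^t(x, x) > 0; 1 means aperiodic at x."""
    d, Pt = 0, np.eye(len(P))
    for t in range(1, horizon + 1):
        Pt = Pt @ P
        if Pt[x, x] > 1e-12:
            d = gcd(d, t)
    return d

swap = np.array([[0.0, 1.0], [1.0, 0.0]])   # the two-state swapping chain
print(period(swap, 0))                      # 2: periodic
```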

20 Lemma (period is a class property): if x and y communicate, then d_x = d_y. Proof: take a path x → y of length n_1 and a path y → x of length n_2. For any cycle through y of length n, concatenation gives closed walks at x of lengths n_1 + n_2 and n_1 + n_2 + n, so d_x | (n_1 + n_2) and d_x | (n_1 + n_2 + n), hence d_x | n. Thus d_x divides every cycle length of y, so d_x ≤ d_y; by symmetry, d_x = d_y.

21 Fundamental Theorem of Markov Chains (annotated): if a finite Markov chain M = (Ω, P) is irreducible and aperiodic, then for any initial distribution π^(0), lim_{t→∞} π^(0)P^t = π, the unique stationary distribution satisfying πP = π. Finiteness and irreducibility give existence and uniqueness (via Perron-Frobenius); ergodicity gives convergence.

22 Coupling of Markov Chains. Given a Markov chain M = (Ω, P), run two copies side by side: X_0, X_1, X_2, ... started from the initial distribution π^(0), and X'_0, X'_1, X'_2, ... started from the stationary distribution π. Define X''_t to follow the first copy until the two copies meet, and the second copy from then on; X'' is still a faithful run of the chain started from π^(0). Goal: lim_{t→∞} Pr[X''_t = X'_t] = 1.

23 Coupling of Markov Chains (continued). With the same two copies, condition on X_0 = x and X'_0 = y; it suffices to show that ∃n, Pr[X_n = X'_n = y] > 0.

24 For the Markov chain M = (Ω, P), condition on X_0 = x and X'_0 = y and run the two copies independently.
Irreducibility: ∃n_1, P^{n_1}(x, y) > 0.
Aperiodicity: P^t(y, y) > 0 for all large enough t, so ∃n_2 such that, setting n = n_1 + n_2, both P^{n_2}(y, y) > 0 and P^{n_1+n_2}(y, y) > 0.
Then Pr[X_n = X'_n = y] ≥ Pr[X_n = y]·Pr[X'_n = y] ≥ P^{n_1}(x, y)·P^{n_2}(y, y)·P^{n_1+n_2}(y, y) > 0.

25 Summary of the coupling argument for M = (Ω, P): conditioned on any X_0 = x and X'_0 = y, there is an n with Pr[X_n = X'_n] > 0; repeating the attempt over successive blocks of n steps, the two copies meet eventually with probability 1, and therefore lim_{t→∞} Pr[X''_t = X'_t] = 1.

26 Fundamental Theorem of Markov Chains (recap): if a finite Markov chain M = (Ω, P) is irreducible and aperiodic, then for any initial distribution π^(0) (ergodicity), lim_{t→∞} π^(0)P^t = π, the unique stationary distribution satisfying πP = π. Finiteness and irreducibility give existence and uniqueness (Perron-Frobenius); ergodicity gives convergence.

27 Fundamental Theorem of Markov Chains (general form): if a Markov chain M = (Ω, P) is irreducible and ergodic, then for any initial distribution π^(0), lim_{t→∞} π^(0)P^t = π, the unique stationary distribution satisfying πP = π. Finite chains: ergodic = aperiodic. Infinite chains: ergodic = aperiodic + non-null persistent.

28 Infinite Markov Chains. Let f_{xy} be the probability of ever reaching y starting from x, and h_{xy} the expected time of first reaching y starting from x:
f_{xy}(t) ≜ Pr[ y ∉ {X_1, ..., X_{t-1}} ∧ X_t = y | X_0 = x ],
f_{xy} ≜ Σ_{t≥1} f_{xy}(t),  h_{xy} ≜ Σ_{t≥1} t·f_{xy}(t).
x is transient if f_{xx} < 1; x is recurrent (persistent) if f_{xx} = 1; x is null-persistent if x is persistent and h_{xx} = ∞; x is non-null-persistent if x is persistent and h_{xx} < ∞. A chain is non-null-persistent if all its states are.

29 Fundamental Theorem of Markov Chains (general form, continued): if M = (Ω, P) is irreducible and ergodic, then lim_{t→∞} π^(0)P^t = π for every initial distribution π^(0), where π is the unique stationary distribution with πP = π. Finite: ergodic = aperiodic; infinite: ergodic = aperiodic + non-null persistent. Irreducible + non-null persistent ⇒ a unique stationary distribution; aperiodic + non-null persistent ⇒ ergodic.

30 PageRank. Rank measures the importance of a page. A page has higher rank if it is pointed to by more high-rank pages; high-rank pages have greater influence; and pages pointing to few others give more influence to each page they point to.

31 PageRank (simplified). On the web graph G(V, E), the rank of a page v is
r(v) = Σ_{u:(u,v)∈E} r(u)/d⁺(u),
where d⁺(u) is the out-degree of u. Equivalently, r is the stationary distribution of the random walk with P(u, v) = 1/d⁺(u) if (u, v) ∈ E and 0 otherwise: rP = r, the long-run behavior of a tireless random surfer.
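
The fixed point rP = r can be found by power iteration; a minimal sketch assuming every page has at least one out-link (dangling pages and the usual damping factor are ignored here):

```python
import numpy as np

def simple_pagerank(out_links, n, iters=100):
    """Power iteration for rP = r with P(u, v) = 1/d+(u) on edges (u, v)."""
    P = np.zeros((n, n))
    for u, vs in out_links.items():
        for v in vs:
            P[u, v] = 1.0 / len(vs)   # the surfer follows a uniform out-link
    r = np.full(n, 1.0 / n)           # start from the uniform distribution
    for _ in range(iters):
        r = r @ P
    return r

# A toy 3-page web: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}.
print(simple_pagerank({0: [1, 2], 1: [2], 2: [0]}, 3))
```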

32 Random Walk on a Graph. Given an undirected graph G(V, E), a walk is a sequence v_1, v_2, ... ∈ V with v_{i+1} adjacent to v_i. In a random walk, v_{i+1} is chosen uniformly at random from N(v_i):
P(u, v) = 1/d(u) if u ~ v, and 0 otherwise.
In matrix form, P = D⁻¹A, where A is the adjacency matrix and D is the diagonal degree matrix with D(u, v) = d(u) if u = v and 0 otherwise.

33 Questions about a random walk on a graph: does it converge (ergodicity), and what is the stationary distribution? Hitting time: time to reach a vertex. Cover time: time to reach all vertices. Mixing time: time to converge.

34 For the random walk on a finite graph G(V, E), with P(u, v) = 1/d(u) if u ~ v and 0 otherwise, the walk converges iff the chain is irreducible and aperiodic.
Irreducible ⇔ G is connected: P^t(u, v) > 0 iff A^t(u, v) > 0.
Aperiodic ⇔ G is non-bipartite: a bipartite graph has no odd cycle, so the period is 2; a non-bipartite graph contains a (2k+1)-cycle, and every undirected edge is a 2-cycle, so the gcd of cycle lengths is 1 and the walk is aperiodic.

35 Lazy Random Walk. On an undirected graph G(V, E), the lazy random walk flips a fair coin at each step to decide whether to stay:
P(u, v) = 1/2 if u = v; 1/(2d(u)) if u ~ v; 0 otherwise.
In matrix form, P = (1/2)(I + D⁻¹A), with adjacency matrix A and degree matrix D(u, v) = d(u) if u = v and 0 otherwise. The lazy walk is always aperiodic!
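
Both walk matrices are one line each from the adjacency matrix; the sketch below also shows laziness curing periodicity on the smallest bipartite graph, a single edge:

```python
import numpy as np

def walk_matrices(A):
    """Simple walk P = D^{-1} A and lazy walk (I + D^{-1} A) / 2."""
    d = A.sum(axis=1)                     # degree of each vertex
    P = A / d[:, None]                    # divide row u by d(u)
    return P, (np.eye(len(A)) + P) / 2

A = np.array([[0, 1], [1, 0]], dtype=float)   # one edge: bipartite, period 2
P, lazyP = walk_matrices(A)
print(np.linalg.matrix_power(P, 99))          # still oscillating
print(np.linalg.matrix_power(lazyP, 99))      # converged to (1/2, 1/2)
```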

36 Stationary distribution of the random walk on G(V, E) (P(u, v) = 1/d(u) if u ~ v, 0 otherwise): π_v = d(v)/2m for every v ∈ V. It is a distribution:
Σ_{v∈V} π_v = Σ_{v∈V} d(v)/2m = 2m/2m = 1,
and it is stationary:
(πP)_v = Σ_{u∈V} π_u P(u, v) = Σ_{u∈N(v)} (d(u)/2m)·(1/d(u)) = d(v)/2m = π_v.
On a regular graph this is the uniform distribution.

37 Stationary distribution of the lazy random walk on G(V, E) (P(u, v) = 1/2 if u = v, 1/(2d(u)) if u ~ v, 0 otherwise): again π_v = d(v)/2m for every v ∈ V:
(πP)_v = Σ_{u∈V} π_u P(u, v) = (1/2)·d(v)/2m + (1/2)·Σ_{u∈N(v)} (d(u)/2m)·(1/d(u)) = d(v)/2m = π_v.
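
A quick numerical confirmation that π_v = d(v)/2m is stationary for both walks on a small non-regular graph (sketch):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],            # adjacency matrix of a small graph
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
d = A.sum(axis=1)
pi = d / d.sum()                       # pi_v = d(v) / 2m

P = A / d[:, None]                     # simple random walk
lazyP = (np.eye(len(A)) + P) / 2       # lazy random walk
print(np.allclose(pi @ P, pi))         # True
print(np.allclose(pi @ lazyP, pi))     # True
```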

38 Reversibility. The detailed balance equation equates ergodic flows: π(x)P(x, y) = π(y)P(y, x). A Markov chain is time-reversible if ∃π such that ∀x, y ∈ Ω, π(x)P(x, y) = π(y)P(y, x). Such a π is automatically a stationary distribution:
(πP)_y = Σ_x π(x)P(x, y) = Σ_x π(y)P(y, x) = π(y).
Time-reversible means that, when the chain is started from π,
Pr[X_0 = x_0 ∧ X_1 = x_1 ∧ ... ∧ X_n = x_n] = Pr[X_0 = x_n ∧ X_1 = x_{n-1} ∧ ... ∧ X_n = x_0].

39 Reversibility (continued). With detailed balance π(x)P(x, y) = π(y)P(y, x) and the chain started from π, the trajectories (X_0, X_1, ..., X_n) and (X_n, X_{n-1}, ..., X_0) are identically distributed.

40 Reversibility (continued). A time-reversible Markov chain can be uniquely represented as an edge-weighted undirected graph G(V, E) whose edge weights are the ergodic flows π(x)P(x, y) = π(y)P(y, x).
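
Detailed balance says exactly that the matrix of ergodic flows F(x, y) = π(x)P(x, y) is symmetric, which gives a one-line reversibility test (sketch):

```python
import numpy as np

def is_reversible(P, pi):
    """Detailed balance check: diag(pi) @ P must be a symmetric matrix."""
    F = pi[:, None] * P                # F[x, y] = pi(x) P(x, y), ergodic flow
    return np.allclose(F, F.T)

# The random walk on a triangle is reversible...
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
d = A.sum(axis=1)
print(is_reversible(A / d[:, None], d / d.sum()))    # True

# ...but the 3-state example chain from earlier is not.
P = np.array([[0, 1, 0], [1/3, 0, 2/3], [1/3, 1/3, 1/3]])
print(is_reversible(P, np.array([1/4, 3/8, 3/8])))   # False
```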

41 Both walks on G(V, E), the simple random walk (P(u, v) = 1/d(u) if u ~ v, 0 otherwise) and the lazy walk, satisfy the detailed balance equation π(u)P(u, v) = π(v)P(v, u): for u = v it holds trivially; for u ≁ v both sides are 0; for u ~ v, π(u)P(u, v) = (d(u)/2m)·(1/d(u)) = 1/2m, which is symmetric in u and v. Random walks on undirected graphs are therefore time-reversible.

42 On an undirected graph G(V, E), not necessarily regular, can we design a (lazy) random walk whose stationary distribution π is uniform? More generally, one whose stationary distribution is an arbitrary prescribed π?

43 Random walk questions (recap): convergence (ergodicity); the stationary distribution; hitting time: time to reach a vertex; cover time: time to reach all vertices; mixing time: time to converge.

44 Hitting and Covering. Consider a random walk on G(V, E).
Hitting time, the expected time to reach v from u: τ_{u,v} = E[ min{n > 0 | X_n = v} | X_0 = u ].
Cover time, the expected time to visit all vertices: C_u = E[ min{n | {X_0, ..., X_n} = V} | X_0 = u ], with C(G) = max_{u∈V} C_u.
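
Both quantities are straightforward to estimate by direct simulation; a Monte Carlo sketch for the cover time (trials controls the accuracy):

```python
import random

def cover_time_from(adj, u, trials=1000):
    """Average steps for a random walk from u to visit every vertex."""
    total = 0
    for _ in range(trials):
        x, seen, steps = u, {u}, 0
        while len(seen) < len(adj):
            x = random.choice(adj[x])   # move to a uniform random neighbor
            seen.add(x)
            steps += 1
        total += steps
    return total / trials

cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # the 6-cycle
print(cover_time_from(cycle, 0))        # Theta(n^2) for cycles
```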

45 Examples. G = K_n: C(G) = Θ(n log n). G a path or a cycle: C(G) = Θ(n²). G a lollipop, a clique K_{n/2} joined to a path on n/2 vertices: C(G) = Θ(n³).

46 Hitting Time. τ_{u,v} = E[ min{n > 0 | X_n = v} | X_0 = u ]; the stationary distribution is π_v = d(v)/2m. Renewal Theorem: for an irreducible chain, τ_{v,v} = 1/π_v, which for the random walk gives τ_{v,v} = 2m/d(v).
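
The identity τ_{v,v} = 2m/d(v) can be checked by simulating return times (a sketch; the path 0-1-2 has m = 2, so returns to the endpoint 0 should take 4 steps on average):

```python
import random

def mean_return_time(adj, v, trials=20000):
    """Average first return time of the random walk to its start v."""
    total = 0
    for _ in range(trials):
        x, steps = v, 0
        while True:
            x = random.choice(adj[x])
            steps += 1
            if x == v:
                break
        total += steps
    return total / trials

path = {0: [1], 1: [0, 2], 2: [1]}      # path on 3 vertices, m = 2
print(mean_return_time(path, 0))        # ~ 2m / d(0) = 4
```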

47 Proof of the Renewal Theorem (τ_{x,x} = 1/π_x for irreducible chains). Start the chain X_0, X_1, X_2, ... from the stationary distribution, π(x) = Pr[X_0 = x], and let T_x be the first t > 0 with X_t = x. For a nonnegative integer random variable X, E[X] = Σ_{n≥0} Pr[X > n], so
τ_{x,x} = E[T_x | X_0 = x] = Σ_{n≥1} Pr[T_x ≥ n | X_0 = x].
Multiplying by π_x:
τ_{x,x}·π_x = Σ_{n≥1} Pr[T_x ≥ n | X_0 = x]·Pr[X_0 = x] = Σ_{n≥1} Pr[T_x ≥ n ∧ X_0 = x]
= Pr[X_0 = x] + Σ_{n≥2} ( Pr[∀1 ≤ t ≤ n−1: X_t ≠ x] − Pr[∀0 ≤ t ≤ n−1: X_t ≠ x] )
= Pr[X_0 = x] + Σ_{n≥2} ( Pr[∀0 ≤ t ≤ n−2: X_t ≠ x] − Pr[∀0 ≤ t ≤ n−1: X_t ≠ x] )  (stationarity)
= Pr[X_0 = x] + Pr[X_0 ≠ x] − lim_{n→∞} a_n,  where a_n = Pr[∀0 ≤ t ≤ n: X_t ≠ x],
and the telescoping limit lim_{n→∞} a_n = 0 by irreducibility. Hence τ_{x,x}·π_x = 1.

48 For the random walk on G(V, E): τ_{v,v} = 1/π_v = 2m/d(v). Lemma: for every edge uv ∈ E, τ_{u,v} < 2m. Proof: conditioning on the first step out of v,
τ_{v,v} = Σ_{w: wv∈E} (1/d(v))·(1 + τ_{w,v}),
so 2m = d(v)·τ_{v,v} = Σ_{w: wv∈E} (1 + τ_{w,v}) ≥ 1 + τ_{u,v} for any neighbor u of v, hence τ_{u,v} < 2m.

49 Cover Time. C_u = E[ min{n | {X_0, ..., X_n} = V} | X_0 = u ] and C(G) = max_{u∈V} C_u. Theorem: C(G) ≤ 4nm. Proof: pick a spanning tree T of G; doubling its edges yields an Eulerian tour v_1 → v_2 → ... → v_{2(n−1)} → v_{2n−1} = v_1 that visits every vertex. Following the tour,
C(G) ≤ Σ_{i=1}^{2(n−1)} τ_{v_i, v_{i+1}} < 2(n−1)·2m < 4nm,
since every consecutive pair v_i v_{i+1} is an edge of G and the hitting time across an edge is less than 2m.

50 USTCON (undirected s-t connectivity). Instance: an undirected graph G(V, E) and vertices s, t. Question: are s and t connected? Deterministically, traversal (BFS/DFS) answers this in linear space. Can it be done in log-space?

51 Theorem (Aleliunas-Karp-Lipton-Lovász-Rackoff 1979): USTCON can be solved by a poly-time Monte Carlo randomized algorithm with bounded one-sided error, which uses O(log n) extra space. Algorithm: start a random walk at s; if t is reached within 4n³ steps, return "yes", else return "no". Space: O(log n) (the current vertex and a step counter). If s and t are unconnected, the output is always "no". If they are connected, the cover time of their component is C(G) ≤ 4nm < 2n³, so by Markov's inequality Pr["no"] < 1/2.
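
A sketch of the random-walk algorithm (the 4n³ step budget is the one from the slide; for simplicity the sketch assumes every vertex has at least one neighbor):

```python
import random

def ustcon(adj, s, t):
    """One-sided error: 'True' is always correct, 'False' errs w.p. < 1/2."""
    n = len(adj)
    x = s
    for _ in range(4 * n ** 3):         # cover time < 2n^3, Markov bound
        if x == t:
            return True
        x = random.choice(adj[x])
    return x == t

square = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # a 4-cycle
print(ustcon(square, 0, 3))             # correct w.p. > 1/2; here almost always True
```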

52 Electric Network. Each edge uv has a resistance R_{uv}; each vertex v has a potential φ_v; along an edge orientation u → v flows a current C_{u→v}; the potential difference is φ_{u,v} = φ_u − φ_v. Kirchhoff's Law: at every vertex v, flow in = flow out. Ohm's Law: on every edge uv, C_{u→v} = φ_{u,v}/R_{uv}.

53 Effective Resistance. In an electrical network, the effective resistance R(u, v) is the potential difference between u and v required to send 1 unit of current flow from u to v.

54 Give each edge e resistance R_e = 1. Theorem (Chandra-Raghavan-Ruzzo-Smolensky-Tiwari 1989): ∀u, v ∈ V, τ_{u,v} + τ_{v,u} = 2m·R(u, v); the commute time between u and v is 2m times their effective resistance.

55 Given a graph G(V, E), construct an electrical network: every edge e gets resistance R_e = 1; fix a special vertex v; inject d(u) units of current into every vertex u, and remove all 2m units of current at v. Lemma: ∀u ∈ V, φ_{u,v} = τ_{u,v}.

56 Proof of the Lemma (∀u ∈ V, φ_{u,v} = τ_{u,v}), electrical side: for u ≠ v,
d(u) = Σ_{uw∈E} C_{u→w}  (Kirchhoff)
= Σ_{uw∈E} φ_{u,w}  (Ohm, R_e = 1)
= Σ_{uw∈E} (φ_{u,v} − φ_{w,v}) = d(u)·φ_{u,v} − Σ_{uw∈E} φ_{w,v},
hence φ_{u,v} = 1 + (1/d(u))·Σ_{uw∈E} φ_{w,v}.

57 Proof of the Lemma, probabilistic side: τ_{u,v} = E[ min{n > 0 | X_n = v} | X_0 = u ], and conditioning on the first step, for u ≠ v,
τ_{u,v} = Σ_{wu∈E} (1/d(u))·(1 + τ_{w,v}) = 1 + (1/d(u))·Σ_{wu∈E} τ_{w,v}.

58 So φ_{·,v} and τ_{·,v} satisfy the same system of equations,
x_u = 1 + (1/d(u))·Σ_{uw∈E} x_w for all u ≠ v, with x_v = 0,
which has a unique solution; hence φ_{u,v} = τ_{u,v} for all u ∈ V.
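
Since this system is linear, the hitting times τ_{·,v} can be obtained by a single linear solve; a sketch with numpy (compare the path example: τ_{0,2} = 4, τ_{1,2} = 3):

```python
import numpy as np

def hitting_times_to(A, v):
    """Solve x_u = 1 + (1/d(u)) sum_{w ~ u} x_w for u != v, with x_v = 0."""
    n = len(A)
    P = A / A.sum(axis=1)[:, None]      # random walk matrix D^{-1} A
    others = [u for u in range(n) if u != v]
    # Restricted to u != v (x_v = 0 drops column v): (I - P') x = 1.
    M = np.eye(n - 1) - P[np.ix_(others, others)]
    x = np.linalg.solve(M, np.ones(n - 1))
    tau = np.zeros(n)
    tau[others] = x
    return tau

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path 0-1-2
print(hitting_times_to(A, 2))           # [4, 3, 0]
```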

59 Effective Resistance (recap). In the electrical network where each edge e has resistance R_e = 1, the effective resistance R(u, v) is the potential difference between u and v required to send 1 unit of current flow from u to v.

60 Setup for the proof of the theorem (each edge e has resistance R_e = 1). Scenario A: inject d(w) units of current at every vertex w and remove 2m units at v. Scenario B: inject d(w) units at every vertex w and remove 2m units at u.

61 (Recall) φ_{·,v} and τ_{·,v} satisfy the same equations x_u = 1 + (1/d(u))·Σ_{uw∈E} x_w (u ≠ v, x_v = 0), which have a unique solution, so φ_{u,v} = τ_{u,v}.

62 Toward the theorem τ_{u,v} + τ_{v,u} = 2m·R(u, v): by the Lemma applied to each scenario, in Scenario A, φ^A_{u,v} = τ_{u,v}; in Scenario B, φ^B_{v,u} = τ_{v,u}.

63 Scenario C: reverse all currents of Scenario B, i.e. inject 2m units at u and remove d(w) units at every vertex w; all potentials negate, so φ^C_{u,v} = φ^B_{v,u} = τ_{v,u}. Scenario D = A + C (superposition): the per-vertex injections cancel, leaving 2m units injected at u and removed at v, and
φ^D_{u,v} = φ^A_{u,v} + φ^C_{u,v} = τ_{u,v} + τ_{v,u}.
So φ^D_{u,v} is the potential difference between u and v needed to send 2m units of current flow from u to v.

64 Sending 2m units of current from u to v requires potential difference 2m·R(u, v), so
τ_{u,v} + τ_{v,u} = φ^D_{u,v} = 2m·R(u, v),
i.e. R(u, v) = φ^D_{u,v}/2m, which proves the theorem.

65 Application of the theorem τ_{u,v} + τ_{v,u} = 2m·R(u, v). Theorem: C(G) ≤ 2nm. Proof: pick a spanning tree T of G and follow the Eulerian tour v_1 → v_2 → ... → v_{2n−1} = v_1 that traverses every tree edge once in each direction. Then
C(G) ≤ Σ_{i=1}^{2(n−1)} τ_{v_i, v_{i+1}} = Σ_{uv∈T} (τ_{u,v} + τ_{v,u}) = 2m·Σ_{uv∈T} R(u, v) ≤ 2m(n−1) < 2nm,
since each edge has effective resistance R(u, v) ≤ 1.
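
Effective resistances, and hence exact commute times, can be computed from the graph Laplacian L = D − A via its pseudoinverse, using the standard identity R(u, v) = (e_u − e_v)ᵀ L⁺ (e_u − e_v); a sketch:

```python
import numpy as np

def commute_time(A, u, v):
    """tau_{u,v} + tau_{v,u} = 2m R(u, v), with R from the Laplacian."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                  # graph Laplacian
    Lp = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse
    e = np.zeros(len(A))
    e[u], e[v] = 1.0, -1.0
    return d.sum() * (e @ Lp @ e)       # 2m * R(u, v)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path 0-1-2
print(commute_time(A, 0, 2))            # 8.0 = tau_{0,2} + tau_{2,0}
```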

66 Example: G a path with endpoints u and v. Here m = n − 1 and R(u, v) = n − 1, so the commute time is τ_{u,v} + τ_{v,u} = 2m·R(u, v) = 2(n−1)² = Θ(n²). The upper bound gives C(G) ≤ 2nm = O(n²), and C(G) ≥ max(τ_{u,v}, τ_{v,u}) = Ω(n²); hence C(G) = Θ(n²).

67 Example: G the lollipop, a clique K_{n/2} attached at u to a path on n/2 vertices ending at v. Here m = Θ(n²), and τ_{u,v} + τ_{v,u} = 2m·R(u, v) = Θ(n³) while τ_{v,u} = O(n²), so τ_{u,v} = Θ(n³). The upper bound gives C(G) ≤ 2nm = O(n³), and C(G) ≥ τ_{u,v} = Ω(n³); hence C(G) = Θ(n³).
