Characterization of cutoff for reversible Markov chains

Characterization of cutoff for reversible Markov chains
Yuval Peres
Joint work with Riddhi Basu and Jonathan Hermon
23 Feb 2015

Notation
Transition matrix: P (reversible). Stationary distribution: π.
Reversibility: π(x)P(x, y) = π(y)P(y, x) for all x, y ∈ Ω.
Laziness: P(x, x) ≥ 1/2 for all x ∈ Ω.

TV distance
The total-variation distance between two distributions µ and ν on Ω is
‖µ − ν‖_TV := max_{A ⊆ Ω} |µ(A) − ν(A)|.
Writing P^t_x(y) := P^t(x, y), set d(t) := max_x ‖P^t_x − π‖_TV.
The ɛ-mixing-time (0 < ɛ < 1) is t_mix(ɛ) := min{t : d(t) ≤ ɛ}, and t_mix := t_mix(1/4).
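As a concrete illustration (my own sketch, not part of the talk), d(t) can be computed for a small chain by matrix powers, using the standard identity ‖µ − ν‖_TV = (1/2) Σ_x |µ(x) − ν(x)|; the 4-state lazy birth-and-death matrix P below is a hypothetical example.

```python
import numpy as np

def tv_distance(mu, nu):
    # ||mu - nu||_TV = max_A |mu(A) - nu(A)| = (1/2) * sum_x |mu(x) - nu(x)|
    return 0.5 * np.abs(mu - nu).sum()

def worst_case_distance(P, pi, t):
    # d(t) = max_x ||P^t(x, .) - pi||_TV
    Pt = np.linalg.matrix_power(P, t)
    return max(tv_distance(Pt[x], pi) for x in range(P.shape[0]))

# Hypothetical example: a lazy birth-and-death chain on {0, 1, 2, 3}.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.2, 0.8]])

# Stationary distribution = left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

for t in [1, 5, 10, 20, 40]:
    print(t, worst_case_distance(P, pi, t))
```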

Cutoff - definition
Def: a sequence of MCs (X^(n)_t) exhibits cutoff if
t^(n)_mix(ɛ) − t^(n)_mix(1 − ɛ) = o(t^(n)_mix) for every 0 < ɛ < 1/4.  (1)
(w_n) is called a cutoff window for (X^(n)_t) if w_n = o(t^(n)_mix) and
t^(n)_mix(ɛ) − t^(n)_mix(1 − ɛ) ≤ c_ɛ w_n for all n ≥ 1 and ɛ ∈ (0, 1/4).

Cutoff
Figure: cutoff.

Examples
Cutoff was first identified for random transpositions (Diaconis & Shahshahani 81): t_mix = (1/2) n log n and w_n = n. Many other card-shuffling schemes have cutoff.
Lazy simple random walk on the hypercube {0, 1}^n exhibits cutoff with t_mix = (1/2) n log n (half the coupon-collector time) and w_n = n (Aldous 83).
Biased nearest-neighbor random walk on {1, 2, ..., n} with a bias toward n has cutoff around E_1[T_n].
The Glauber dynamics of the Ising model at high temperature has cutoff for many underlying graphs (LS).
Lazy simple random walk on the n-cycle does not exhibit cutoff (t_mix = Θ(n^2)); the same holds for higher-dimensional tori of side length n.
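To see the hypercube example numerically (an illustration of mine, not from the talk), one can iterate the lazy walk on {0, 1}^n exactly for a small n and watch d(t) fall near (1/2) n log n; by vertex-transitivity, d(t) equals the distance from any fixed starting vertex. For n this small the drop is not yet very sharp; it sharpens as n grows.

```python
import numpy as np

def lazy_hypercube_walk(n):
    """Lazy SRW on {0,1}^n: stay put with prob. 1/2, else flip a uniform coordinate."""
    N = 2 ** n
    P = np.zeros((N, N))
    for x in range(N):
        P[x, x] = 0.5
        for i in range(n):
            P[x, x ^ (1 << i)] = 0.5 / n
    return P

n = 8
P = lazy_hypercube_walk(n)
pi = np.full(2 ** n, 2.0 ** (-n))        # uniform stationary distribution
mu = np.zeros(2 ** n)
mu[0] = 1.0                              # start at the all-zeros vertex

cutoff_time = 0.5 * n * np.log(n)        # predicted cutoff location, (1/2) n log n
for t in range(1, 4 * n + 1):
    mu = mu @ P
    d_t = 0.5 * np.abs(mu - pi).sum()    # d(t) = ||P^t_0 - pi||_TV by transitivity
    marker = "  <- ~ (1/2) n log n" if abs(t - cutoff_time) < 0.5 else ""
    print(f"t = {t:2d}  d(t) = {d_t:.3f}{marker}")
```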

Background
Many chains are believed to exhibit cutoff. Verifying the occurrence of cutoff rigorously is usually hard.
The name "cutoff" was coined by Aldous and Diaconis in their seminal '86 paper, where finding verifiable conditions for cutoff is called "the most interesting open problem" (AD 86).
So far cutoff has mostly been studied by understanding examples.

Spectral gap & relaxation-time
Let λ_2 be the largest non-trivial eigenvalue of P.
Definition: gap := 1 − λ_2, the spectral gap.
Definition: t_rel := 1/gap, the relaxation-time.
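A hedged numerical sketch (not from the talk): since reversibility makes D^{1/2} P D^{−1/2} symmetric for D = diag(π), gap and t_rel can be computed with a symmetric eigensolver. The 4-state matrix is the same hypothetical example as above.

```python
import numpy as np

def relaxation_time(P, pi):
    """Return (gap, t_rel) for a reversible transition matrix P with stationary dist. pi."""
    d = np.sqrt(pi)
    S = (P * d[:, None]) / d[None, :]      # S = D^{1/2} P D^{-1/2} is symmetric by reversibility
    eigs = np.sort(np.linalg.eigvalsh(S))  # real eigenvalues, ascending; eigs[-1] == 1
    gap = 1.0 - eigs[-2]                   # 1 - (largest non-trivial eigenvalue)
    return gap, 1.0 / gap

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
print(relaxation_time(P, pi))              # (gap, t_rel)
```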

The product condition (Prod. Cond.)
In a 2004 AIM workshop I proposed that the product condition (Prod. Cond.), gap^(n) · t^(n)_mix → ∞ (equivalently, t^(n)_rel = o(t^(n)_mix)), should imply cutoff for nice reversible chains. (It is a necessary condition for cutoff.)
It is not always sufficient (Aldous and Pak).
Problem: find families of MCs such that the Prod. Cond. ⇒ cutoff.

Aldous' example
Figure: fixed bias to the right conditioned on a non-lazy step; different holding probabilities along 2 parallel branches of equal length. t^(n)_rel = O(1).

Aldous' example
Figure: d(t) is, up to negligible terms, the probability that the rightmost state was not hit by time t (starting from the leftmost state).

Hitting and Mixing
Def: the hitting time of a set A ⊆ Ω is T_A := min{t : X_t ∈ A}.
Hitting times of worst sets are related to mixing.
Fact (Random Target): under laziness and reversibility,
t_mix ≤ C max_x Σ_y π(y) E_x[T_y] ≤ C max_{x,y} E_x[T_y].
Generalization (Aldous 82, LW 95): under reversibility and laziness,
t_mix ≤ C max_{x,A} π(A) E_x[T_A].

Hitting and Mixing
Oliveira (2011) and independently P.-Sousi (2011) (with a related result by Griffiths et al. (2012)): for any irreducible reversible lazy MC,
t_mix = Θ(t_H), where t_H := max_{x, A : π(A) ≥ 1/2} E_x[T_A].  (2)
The threshold 1/2 cannot be improved: for two n-cliques connected by a single edge, sets of π-measure 1/2 + ɛ are hit in a constant number of steps.
d(t) ≈ 1/2 − P(the other clique was hit by time t) ⇒ maybe we should look at tails rather than at expectations.
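For a fixed set A, the expected hitting times E_x[T_A] solve the linear system E_x[T_A] = 1 + Σ_y P(x, y) E_y[T_A] for x ∉ A (and vanish on A), so t_H can be estimated on small chains by enumerating candidate sets. A sketch of the linear-solve step (my own illustration; A = {3} is a hypothetical target set):

```python
import numpy as np

def expected_hitting_times(P, A):
    """E_x[T_A]: solve (I - Q) h = 1 on A^c, where Q is P restricted to A^c."""
    n = P.shape[0]
    outside = [x for x in range(n) if x not in A]
    Q = P[np.ix_(outside, outside)]
    h = np.linalg.solve(np.eye(len(outside)) - Q, np.ones(len(outside)))
    ET = np.zeros(n)                      # E_a[T_A] = 0 for a in A
    ET[outside] = h
    return ET

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
print(expected_hitting_times(P, A={3}))   # E_x[T_A] for each starting state x
```

Computing t_H itself would additionally require maximizing over all sets A with π(A) ≥ 1/2, which is only feasible by brute-force enumeration on very small state spaces.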

Hitting and Cutoff
Diaconis & Saloff-Coste (06) (separation cutoff) and Ding-Lubetzky-P. (10) (TV cutoff): a sequence of birth and death chains exhibits cutoff iff the Prod. Cond. holds.
Both proofs reduce the problem to establishing concentration of hitting times. We extend their results to weighted nearest-neighbor RWs on trees.

Cutoff for trees
Theorem. Let (V, P, π) be a lazy Markov chain on a tree T = (V, E). Then
t_mix(ɛ) − t_mix(1 − ɛ) ≤ C |log ɛ| √(t_rel t_mix) for any 0 < ɛ ≤ 1/4.
In particular, the Prod. Cond. implies cutoff with cutoff window w_n = √(t^(n)_rel t^(n)_mix).
DLP (10): for lazy BD chains, t_mix(ɛ) − t_mix(1 − ɛ) ≤ O(ɛ^{-1} √(t_rel t_mix)).
In many cases any cutoff window must be Ω(√(t^(n)_rel t^(n)_mix)).

A mixing parameter in terms of hitting times
Let 0 < ɛ < 1. Define
hit(ɛ) := min{t : P_x[T_A > t] ≤ ɛ for all x and all A ⊆ Ω such that π(A) ≥ 1/2}.
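The tail probabilities P_x[T_A > t] appearing in hit(ɛ) can be obtained from the chain killed upon hitting A: if Q is P restricted to A^c, then P_x[T_A > t] = Σ_y Q^t(x, y). A rough sketch for one hypothetical target set (the true hit(ɛ) takes the worst case over all A with π(A) ≥ 1/2):

```python
import numpy as np

def escape_tails(P, A, t_max):
    """tails[t][i] = P_x[T_A > t] for the i-th state x outside A, 0 <= t <= t_max."""
    outside = [x for x in range(P.shape[0]) if x not in A]
    Q = P[np.ix_(outside, outside)]       # chain killed upon hitting A (substochastic)
    v = np.ones(len(outside))             # P_x[T_A > 0] = 1 for x outside A
    tails = []
    for _ in range(t_max + 1):
        tails.append(v.copy())
        v = Q @ v                         # P_x[T_A > t+1] = sum_y Q(x,y) P_y[T_A > t]
    return np.array(tails)

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
tails = escape_tails(P, A={3}, t_max=100)
eps = 0.25
print(next(t for t in range(101) if tails[t].max() <= eps))   # hit(eps) restricted to this A
```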

Main abstract result
Definition: a sequence has hit-cutoff if hit^(n)(ɛ) − hit^(n)(1 − ɛ) = o(hit^(n)(1/4)) for all 0 < ɛ < 1/4.
Main abstract result:
Theorem. Let (Ω_n, P_n, π_n) be a sequence of finite reversible lazy MCs. Then the following are equivalent:
(i) cutoff;
(ii) hit-cutoff.
We show that (ii) implies the Prod. Cond. (for (i) this is known). Hence (i) ⇔ (ii) follows from the following quantitative relation between hit and t_mix, using hit = Θ(t_mix):

To mix - escape and then relax
For any reversible irreducible finite lazy chain and any 0 < ɛ ≤ 1/4,
hit(3ɛ) − t_rel |log(2ɛ)| ≤ t_mix(2ɛ) ≤ hit(ɛ) + t_rel |log(2ɛ)|.
The terms involving t_rel are negligible under the Prod. Cond.

Hitting times when X_0 ∼ π
Easy direction: to mix, the chain must first escape from small sets ⇒ first stage of mixing.
Fact: let A ⊆ Ω be such that π(A) ≥ 1/2. Then (under reversibility)
P_π[T_A > 2s t_rel] ≤ e^{−s}/2 for all s ≥ 0.
This follows from the spectral decomposition of P_{A^c} (the chain killed upon hitting A).
By a coupling argument,
P_x[T_A > t + 2s t_rel] ≤ d(t) + P_π[T_A > 2s t_rel].

The hard direction
We say the chain is ɛ-mixed w.r.t. A ⊆ Ω at time t + s if for every x,
|P^{t+s}_x(A) − π(A)| ≤ ɛ.
We say that the event {T_G ≤ t} serves as a certificate of being ɛ-mixed w.r.t. A ⊆ Ω at time t + s if
|P_x[X_{t+s} ∈ A | T_G ≤ t] − π(A)| ≤ ɛ.

Application of the max ineq.
Claim: denote t := hit(ɛ) and s := t_rel log(2/ɛ). Then t_mix(2ɛ) ≤ t + s.
Proof: we want, for every A ⊆ Ω, a set G = G_s(A) ⊆ Ω such that π(G) ≥ 1/2 and {T_G ≤ t} serves as a certificate of being ɛ-mixed w.r.t. A at time t + s.
Consider
G = G_s(A) := {g : |P^{s'}_g(A) − π(A)| ≤ 2e^{−s/t_rel} = ɛ for all s' ≥ s}.
A consequence of Starr's maximal inequality is that π(G) ≥ 1/2.
⇒ P_x[T_G > t] ≤ ɛ (by the definition of t = hit(ɛ)).
For any x, A:
|P^{t+s}_x(A) − π(A)| ≤ P_x[T_G > t] + max_{g ∈ G, s' ≥ s} |P^{s'}_g(A) − π(A)| ≤ 2ɛ.

Remark
The threshold 1/2 in the definition of hit can be replaced by any 0 < α < 1; i.e., at the price of an additive error of t_rel |log(1 − α)| the analysis extends to
hit_{α,x}(ɛ) := min{t : P_x[T_A > t] ≤ ɛ for all A ⊆ Ω such that π(A) ≥ α},
hit_α(ɛ) := max_{x ∈ Ω} hit_{α,x}(ɛ).
If α > 1/2 one needs to assume that the Prod. Cond. holds to maintain the equivalence of the 2 notions of cutoff.
⇒ When t_rel is known, it suffices to consider escape probabilities from small sets in order to bound the mixing-time.

Tools
Def: for f ∈ R^Ω and t ≥ 0, define P^t f ∈ R^Ω by
P^t f(x) := E_x[f(X_t)] = Σ_y P^t(x, y) f(y).
For f ∈ R^Ω define E_π[f] := Σ_{x ∈ Ω} π(x) f(x) and ‖f‖_2^2 := E_π[f^2].
For g ∈ R^Ω denote Var_π g := ‖g − E_π[g]‖_2^2.
The following is well known and follows from elementary linear algebra.
Lemma (Contraction Lemma). Let (Ω, P, π) be a finite reversible irreducible lazy MC, let A ⊆ Ω and let t ≥ 0. Then
Var_π(P^t 1_A) ≤ e^{−2t/t_rel}/4.  (3)
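A quick numerical sanity check of the Contraction Lemma (3) on the hypothetical 4-state chain used in the earlier sketches, comparing Var_π(P^t 1_A) with the bound e^{−2t/t_rel}/4:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

d = np.sqrt(pi)                                    # t_rel via the symmetrized matrix
lambda2 = np.sort(np.linalg.eigvalsh((P * d[:, None]) / d[None, :]))[-2]
t_rel = 1.0 / (1.0 - lambda2)

f = np.array([0.0, 0.0, 1.0, 1.0])                 # indicator of the hypothetical set A = {2, 3}
for t in range(0, 21, 5):
    Ptf = np.linalg.matrix_power(P, t) @ f
    var = pi @ (Ptf - pi @ Ptf) ** 2               # Var_pi(P^t 1_A)
    print(t, var, np.exp(-2 * t / t_rel) / 4)      # the lemma says var <= the bound
```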

Maximal Inequality
The main ingredient in our approach is Starr's maximal inequality (1966), which refines Stein's maximal inequality (1961).
Theorem (Maximal inequality). Let (Ω, P, π) be a lazy irreducible reversible Markov chain and let f ∈ R^Ω. Define the corresponding maximal function f* ∈ R^Ω by
f*(x) := sup_{0 ≤ k < ∞} P^k f(x) = sup_{0 ≤ k < ∞} E_x[f(X_k)].
Then for 1 < p < ∞,
‖f*‖_p ≤ q ‖f‖_p, where 1/p + 1/q = 1.  (4)
Applying this (with p = 2) to f_s := P^s(1_A − π(A)), in conjunction with the Contraction Lemma, yields the desired bound on the size of G^c:
G^c ⊆ {(f*_s)^2 > 8 ‖f_s‖_2^2} ⊆ {(f*_s)^2 > 2 ‖f*_s‖_2^2}, and the latter set has π-measure at most 1/2 by Markov's inequality.
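Similarly, (4) with p = q = 2 can be checked numerically by approximating the maximal function over a finite horizon (an illustration only; the true f* takes a supremum over all k ≥ 0):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

indicator = np.array([0.0, 0.0, 1.0, 1.0])         # hypothetical set A = {2, 3}
f = indicator - pi @ indicator                     # f = 1_A - pi(A)

f_star, g = f.copy(), f.copy()
for _ in range(200):                               # finite-horizon stand-in for sup_k P^k f
    g = P @ g
    f_star = np.maximum(f_star, g)

l2 = lambda h: np.sqrt(pi @ h ** 2)                # L^2(pi) norm
print(l2(f_star), "<=", 2 * l2(f))                 # Starr's inequality with p = q = 2
```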

Trees
Let T := (V, E) be a finite tree and let (V, P, π) be a lazy MC corresponding to some (lazy) weighted nearest-neighbor walk on T (i.e., P(x, y) > 0 iff {x, y} ∈ E or y = x).
Fact (Kolmogorov's cycle condition): every MC on a tree is reversible.

Trees
Can the tree structure be used to determine the identity of the worst sets?
Easier question: which set of π-measure ≥ 1/2 is the hardest to hit in a birth & death chain with state space [n] := {1, 2, ..., n}?
Answer: take a state m with π([m]) ≥ 1/2 and π([m − 1]) < 1/2. Then the worst set is either [m] or [n] \ [m − 1].

Trees
How to generalize this to trees?

Central vertex
Figure: a vertex o ∈ V is called a central vertex if each connected component of T \ {o} (here C1, C2, C3, C4) has stationary probability at most 1/2.
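A central vertex can be located by a direct check: remove each candidate o in turn and verify that every remaining component has π-mass at most 1/2. A minimal sketch (my own illustration; the tree, its weights, and the helper names are hypothetical):

```python
from collections import defaultdict

def central_vertex(edges, pi):
    """Return a vertex o of the tree such that every component of T \\ {o} has pi-mass <= 1/2."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def component_mass(start, removed):
        # pi-mass of the component of T \ {removed} that contains `start`
        seen, stack, mass = {start}, [start], 0.0
        while stack:
            x = stack.pop()
            mass += pi[x]
            for y in adj[x]:
                if y != removed and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return mass

    for o in pi:
        if all(component_mass(nbr, o) <= 0.5 for nbr in adj[o]):
            return o

# Hypothetical example: a small star with one longer arm.
edges = [("o", "a"), ("o", "b"), ("o", "c"), ("c", "d")]
pi = {"o": 0.2, "a": 0.2, "b": 0.2, "c": 0.2, "d": 0.2}
print(central_vertex(edges, pi))   # -> "o"
```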

Trees
There is always a central vertex (and there are at most 2). We fix one, denote it by o, and call it the root.
There is a simple reduction: T_o is concentrated (from the worst leaf) ⇒ hit-cutoff.

Trees
It suffices to show that E_x[T_o] = Ω(t_mix) ⇒ T_o is concentrated when X_0 = x.

Trees
Figure: let v_0 = x, v_1, ..., v_k = o be the vertices along the path from x to o.

Trees
Proof of concentration (Var_x[T_o] ≤ C t_rel t_mix): it suffices to show that Var_x[T_o] ≤ 4 t_rel E_x[T_o].
If X_0 = x then T_o is the sum of the crossing times of the edges along the path from x to o:
τ_i := T_{v_i} − T_{v_{i−1}}, which has the law of T_{v_i} under X_0 = v_{i−1}.
τ_1, ..., τ_k are independent ⇒ it suffices to bound the sum of their variances: Var_x[T_o] = Σ_i Var_x[τ_i].
The law of each τ_i is a mixture of geometrics with parameters ≥ 1/(2 t_rel)
⇒ Var_x[τ_i] ≤ 4 t_rel E_x[τ_i] ⇒ Var_x[T_o] ≤ 4 t_rel E_x[T_o].
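For completeness, a hedged sketch of the last variance step, assuming each τ_i is a mixture (over an index J) of Geometric(p_J) laws with p_J ≥ 1/(2 t_rel):

```latex
% Conditioning on the mixture index J, with p_J >= 1/(2 t_rel):
\[
\operatorname{Var}[\tau_i \mid J] = \frac{1-p_J}{p_J^{2}}
   \le \frac{1}{p_J}\cdot\frac{1}{p_J}
   \le 2\,t_{\mathrm{rel}}\,\mathbb{E}[\tau_i \mid J],
\qquad
\operatorname{Var}\bigl(\mathbb{E}[\tau_i \mid J]\bigr)
   \le \mathbb{E}\!\left[\frac{1}{p_J^{2}}\right]
   \le 2\,t_{\mathrm{rel}}\,\mathbb{E}[\tau_i].
\]
% Hence, by the law of total variance,
\[
\operatorname{Var}_x[\tau_i]
   = \mathbb{E}\bigl[\operatorname{Var}[\tau_i \mid J]\bigr]
   + \operatorname{Var}\bigl(\mathbb{E}[\tau_i \mid J]\bigr)
   \le 4\,t_{\mathrm{rel}}\,\mathbb{E}_x[\tau_i].
\]
```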

Beyond trees
The tree assumption can be relaxed. We can treat jumps to vertices within bounded distance on a tree (i.e., if the length of the path from u to v in the tree, which is now just an auxiliary structure, is greater than r, then P(u, v) = 0), under some mild necessary assumption.
Previously the BD assumption could not be relaxed, mainly because it was exploited via a representation-of-hitting-times result for BD chains.
In particular, if P(u, v) ≥ δ > 0 for all u, v such that d_T(u, v) ≤ r (and otherwise P(u, v) = 0), then cutoff ⇔ the Prod. Cond. holds.

Open problems
What can be said about the geometry of the worst sets in some interesting particular cases (say, under transitivity or monotonicity)?
When can the worst sets be described as {f_2 ≤ C}, where P f_2 = λ_2 f_2?
Is it true that for transitive graphs the mixing time is bounded by the expected time it takes the walk to escape a ball containing at least half the vertices?
Does the product condition imply cutoff for transitive graphs under certain mild conditions?
Does contraction imply cutoff (under some mild conditions)?
