Mixing time for a random walk on a ring


Stephen Connor (joint work with Michael Bate)
Paris, September 2013

Introduction

Let X be a discrete time Markov chain on a finite state space S, with transition matrix $P = (p_{ij})$, where $p_{ij} = P(X_{t+1} = j \mid X_t = i)$. The vector $\pi$ is a stationary distribution for X if $\pi = \pi P$. If X is ergodic then we know that $P(X_t = j \mid X_0 = i) \to \pi_j$ as $t \to \infty$, for all $i, j \in S$. But how fast does this convergence occur?
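To make the setup concrete, here is a minimal sketch (not from the talk; the three-state chain is purely illustrative) that computes a stationary distribution and watches a row of $P^t$ converge to it:

```python
import numpy as np

# A small ergodic chain on S = {0, 1, 2} (illustrative example).
P = np.array([[0.5,  0.5,  0.0 ],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5 ]])

# pi = pi P: pi is the left eigenvector of P with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()                            # normalise to a probability vector

print(pi)                                 # [0.25 0.5 0.25]
print(np.linalg.matrix_power(P, 50)[0])   # row 0 of P^50: close to pi
```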

First choose a metric:

Definition. The total variation distance between two probability measures $\mu$ and $\mu'$ on S is given by
$$\|\mu - \mu'\|_{TV} = \sup_{A \subseteq S} \big( \mu(A) - \mu'(A) \big) = \frac{1}{2} \sum_{s \in S} |\mu(s) - \mu'(s)|.$$

Suppose we start the chain X at position $X_0 = x$. Write
$$d_x(t) = \| \delta_x P^t - \pi \|_{TV}.$$
This measures the distance between the distribution of $X_t$ and $\pi$.
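In code, the second expression for $\|\cdot\|_{TV}$ is a one-liner; a sketch for finite chains (the names tv_distance and d_x are mine, not from the talk):

```python
import numpy as np

def tv_distance(mu, nu):
    # ||mu - nu||_TV = (1/2) * sum_s |mu(s) - nu(s)|
    return 0.5 * np.abs(np.asarray(mu) - np.asarray(nu)).sum()

def d_x(P, pi, x, t):
    # d_x(t) = || delta_x P^t - pi ||_TV: row x of P^t against pi
    return tv_distance(np.linalg.matrix_power(P, t)[x], pi)
```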

Now introduce a parameter to measure the time until $d_x(t)$ is small.

Definition. The mixing time of X, when $X_0 = x$, is
$$\tau_x(\varepsilon) = \min \{ t : d_x(t) \le \varepsilon \},$$
and (by convention) we define $\tau_x = \tau_x(1/4)$.

It is easy to show that $\tau_x(\varepsilon) \le \lceil \log_2 \varepsilon^{-1} \rceil \, \tau_x$.
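A brute-force reading of the definition, reusing d_x from the sketch above (fine for small chains; real analyses use bounds rather than exact search):

```python
def mixing_time(P, pi, x, eps=0.25, t_max=100_000):
    # tau_x(eps) = min { t : d_x(t) <= eps }; d_x(t) is non-increasing
    # in t, so the first t that works is the mixing time.
    for t in range(1, t_max + 1):
        if d_x(P, pi, x, t) <= eps:
            return t
    raise RuntimeError("no mixing within t_max steps")
```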

Random walks on groups

Suppose now that our state space is a finite group G, and that X has transitions governed by
$$P(X_{t+1} = hg \mid X_t = g) = Q(h) \quad \text{for all } g, h \in G,$$
for some probability distribution Q on G. Then X is a random walk on a group, and (as long as Q isn't concentrated on a subgroup) its stationary distribution is uniform: $\pi(g) = 1/|G|$ for all $g \in G$.

The distribution after repeated steps is given by convolution:
$$P(X_2 = g \mid X_0 = \mathrm{id}) = Q * Q(g) = \sum_h Q(gh^{-1}) Q(h).$$
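For the additive group $\mathbb{Z}_n$, where $gh^{-1}$ reads $g - h$, the convolution is cyclic; a sketch, with the increment distribution Q chosen only for illustration:

```python
import numpy as np

def convolve_Zn(Q1, Q2):
    # (Q1 * Q2)(g) = sum_h Q1(g - h) Q2(h), indices taken mod n
    n = len(Q1)
    return np.array([sum(Q1[(g - h) % n] * Q2[h] for h in range(n))
                     for g in range(n)])

n = 5
Q = np.zeros(n); Q[1] = Q[n - 1] = 0.5    # steps +1 / -1 on Z_5

law = np.zeros(n); law[0] = 1.0           # X_0 = 0, the identity
for _ in range(30):
    law = convolve_Zn(law, Q)             # law of X_t: t-fold convolution
print(law)                                # approaching uniform, 1/5 each
```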

We're often interested in a natural sequence of processes $X^{(n)}$ on groups $G^{(n)}$ of increasing size: how does the mixing time $\tau_x^{(n)}$ scale with n?

In lots of nice examples a cutoff phenomenon is exhibited: for all $\varepsilon > 0$,
$$\lim_{n \to \infty} \frac{\tau_x^{(n)}(\varepsilon)}{\tau_x^{(n)}(1 - \varepsilon)} = 1.$$

Random walk on a ring

What if we add some additional structure, and try to move from a walk on a group to a walk on a ring? For example: random walk on $\mathbb{Z}_n$ (n odd) with
$$x \mapsto \begin{cases} x + 1 & \text{w.p. } 1 - p_n \\ 2x & \text{w.p. } p_n \end{cases}$$

[Figure: the ring $\mathbb{Z}_7$ (states 0 to 6), with each step edge $x \to x+1$ labelled $1-p$ and each jump edge $x \to 2x$ labelled $p$.]
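A minimal simulation of this walk (a sketch, assuming the parameter choice $p_n = 1/2n^\alpha$ used later in the talk):

```python
import random

def step(x, n, p):
    # one move: doubling jump w.p. p, +1 step w.p. 1 - p
    return (2 * x) % n if random.random() < p else (x + 1) % n

n, alpha = 101, 0.5                 # n odd
p = 1 / (2 * n ** alpha)
x = 0
for _ in range(10_000):
    x = step(x, n, p)
print(x)
```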

Related results

Chung, Diaconis and Graham (1987) study the process (used in random number generation)
$$x \mapsto \begin{cases} ax - 1 & \text{w.p. } 1/3 \\ ax & \text{w.p. } 1/3 \\ ax + 1 & \text{w.p. } 1/3 \end{cases}$$

When $a = 1$, there exist constants $C$ and $C'$ such that
$$e^{-Ct/n^2} < \|P_t^{(n)} - \pi^{(n)}\|_{TV} < e^{-C' t/n^2}.$$

When $a = 2$ and $n = 2^m - 1$, there exist constants $c$ and $c'$ such that:
- for $t_n \ge c \log n \log\log n$, $\|P_{t_n}^{(n)} - \pi^{(n)}\|_{TV} \to 0$ as $n \to \infty$;
- for $t_n \le c' \log n \log\log n$, $\|P_{t_n}^{(n)} - \pi^{(n)}\|_{TV} > \varepsilon$ as $n \to \infty$.

Back to our process...

General problem: the distribution of $X_t$ isn't given by convolution:
$$X_t = 2 I_t X_{t-1} + (1 - I_t)(X_{t-1} + 1), \quad \text{where } I_t \sim \mathrm{Bern}(p_n).$$

But for this relatively simple walk, we can get around this by looking at the process subsampled at jump times. (Here we call a "+1" move a step and a "×2" move a jump.)

So consider (with $X_0 = Y_0 = 0$)
$$Y_k = \sum_{j=1}^{k} 2^{k+1-j} B_j \pmod{n}, \qquad B_j \text{ i.i.d. } \mathrm{Geom}(p_n),$$
i.e. $Y_k$ is the position of X immediately following the k-th jump.
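A sketch of $Y_k$ by direct simulation, assuming the convention that $\mathrm{Geom}(p_n)$ counts the +1 steps between consecutive jumps, supported on $\{0, 1, 2, \dots\}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def Y(k, n, p):
    # Y_k = sum_{j=1}^k 2^(k+1-j) B_j (mod n): position of X just after
    # the k-th jump; B_j = number of +1 steps before the j-th jump.
    total = 0
    for j in range(1, k + 1):
        Bj = rng.geometric(p) - 1    # numpy's geometric lives on {1,2,...}
        total = (total + pow(2, k + 1 - j, n) * Bj) % n
    return total
```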

Plan

1. Find a lower bound for the mixing time of Y;
2. Find an upper bound for the mixing time of Y;
3. Try to relate these back to the mixing time for X.

Restrict attention to $p_n = \frac{1}{2n^\alpha}$, $\alpha \in (0, 1)$. We expect things to happen (for Y) sometime around
$$T_n := \log_2(n p_n) \approx (1 - \alpha) \log_2 n.$$

Theorem. Y exhibits a cutoff at time $T_n$, with window size O(1).
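For scale, a quick evaluation of $T_n$ (illustrative numbers only):

```python
import math

def T(n, alpha):
    # T_n = log2(n p_n) with p_n = 1/(2 n^alpha),
    # i.e. (1 - alpha) log2(n) - 1
    return math.log2(n / (2 * n ** alpha))

for n in (10**3, 10**6, 10**9):
    print(n, round(T(n, alpha=0.5), 2))   # ~ 3.98, 8.97, 13.95
```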

Lower bound

For a lower bound, we simply find a set $A_n(\theta)$ (of considerable size) that Y has very little chance of hitting before time $T_n - \theta$, for some $\theta \in \mathbb{N}$. (Recall the definition of total variation distance!)

[Figure: the ring, marking $Y_0$, the set $A_n(\theta)$, and $E[Y_{T_n - \theta}] \approx (3/8 - \gamma)n$.]

If $A_n(\theta)$ is chosen as above then $\pi(A_n(\theta)) = 1/4 + 2\gamma$. Using Chebyshev's inequality,
$$P(Y_{T_n - \theta} \in A_n(\theta)) \le \frac{4^{1-\theta}}{3(3/8 - \gamma)^2}.$$

Now choose $\gamma$ and $\theta$ to make the difference between these large. We get a lower bound: Y has not mixed at time $T_n - 4$.
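Combining the two displays, the total variation distance at time $T_n - \theta$ is at least $\pi(A_n(\theta)) - P(Y_{T_n-\theta} \in A_n(\theta))$; a sketch evaluating this gap, assuming my reconstruction of the Chebyshev bound above:

```python
def tv_lower_bound(theta, gamma):
    # pi(A_n(theta)) - P(Y in A_n(theta))
    #   >= 1/4 + 2*gamma - 4^(1 - theta) / (3 * (3/8 - gamma)^2)
    return 0.25 + 2 * gamma - 4 ** (1 - theta) / (3 * (3 / 8 - gamma) ** 2)

for theta in (2, 4, 6, 8):
    print(theta, round(tv_lower_bound(theta, gamma=0.1), 3))
```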

Furthermore, we can choose $\gamma$ depending on $\theta$ to maximise the discrepancy. The lower bound on total variation distance tends to one quickly as $\theta$ increases:

[Plot: lower bound on the total variation distance as a function of $\theta$.]

Upper bound

Let P be a probability on a group G. For a representation $\rho$, write
$$\hat{P}(\rho) = \sum_{g \in G} P(g) \rho(g)$$
for the Fourier transform of P at $\rho$. Then a basic but extremely useful result is the following:

Lemma (Diaconis and Shahshahani, 1981).
$$\|P - \pi\|_{TV}^2 \le \frac{1}{4} \sum_{\text{non-trivial irreducible } \rho} d_\rho \, \mathrm{Tr}\big(\hat{P}(\rho) \hat{P}(\rho)^*\big).$$

Since $\widehat{P * P}(\rho) = \hat{P}(\rho) \hat{P}(\rho)$, this lemma gives a bound on total variation distance after any number of steps.

Writing $Y_k = \sum_{j=1}^{k} 2^{k+1-j} B_j \pmod n$, with $B_j$ i.i.d. $\mathrm{Geom}(p_n)$, as before, we see that the distribution of $Y_k$ is just the convolution of measures $\mu_j$ given by
$$\mu_j(2^{k+1-j} s) = \frac{(1 - p_n)^s \, p_n}{1 - (1 - p_n)^n}, \qquad 1 \le j \le k.$$

Irreducible representations of $\mathbb{Z}_n$ are 1-dimensional, and we can write (for $s = 0, \dots, n-1$)
$$\hat{\mu}_j(s) = \frac{p_n}{1 - z_{js}(1 - p_n)}, \qquad \text{where } z_{js} = e^{i \frac{2\pi}{n} 2^{k+1-j} s}.$$
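A numerical sanity check of the closed form for $\hat{\mu}_j$ (a sketch; it uses the fact that $z^n = 1$ kills the $(1-p_n)^n z^n$ term in the geometric sum):

```python
import numpy as np

n, p = 11, 0.3
m = np.arange(n)
mu = (1 - p) ** m * p / (1 - (1 - p) ** n)   # geometric law reduced mod n

for s in range(n):
    z = np.exp(2j * np.pi * s / n)           # an n-th root of unity
    direct = (mu * z ** m).sum()             # sum_m mu(m) z^m
    closed = p / (1 - z * (1 - p))
    assert np.isclose(direct, closed)
print("closed form matches the direct transform")
```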

Putting this into the Lemma we obtain an upper bound:
$$\|\delta_0 P^t - \pi\|_{TV}^2 \le \frac{1}{4} \sum_{s=1}^{n-1} \prod_{k=1}^{t} \frac{p_n^2}{1 - 2(1 - p_n)\cos\!\left(\frac{2\pi}{n} 2^k s\right) + (1 - p_n)^2}$$

Theorem. For any fixed $\theta \in \mathbb{N}$,
$$\limsup_{n \to \infty} \|\delta_0 P^{(n)}_{T_n + \theta} - \pi^{(n)}\|_{TV} \le h(\theta),$$
where h tends to zero exponentially fast as $\theta \to \infty$. This completes our proof of a cutoff.
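The bound is explicit enough to evaluate numerically; a direct transcription (a sketch, O(nt) work, so only for moderate n), using $2^k \bmod n$ inside the cosine:

```python
import numpy as np

def tv_upper_bound_sq(n, p, t):
    # (1/4) sum_{s=1}^{n-1} prod_{k=1}^{t}
    #     p^2 / (1 - 2(1-p) cos(2 pi 2^k s / n) + (1-p)^2)
    total = 0.0
    for s in range(1, n):
        prod = 1.0
        for k in range(1, t + 1):
            c = np.cos(2 * np.pi * pow(2, k, n) * s / n)
            prod *= p**2 / (1 - 2 * (1 - p) * c + (1 - p)**2)
        total += prod
    return total / 4

n, alpha = 101, 0.5
p = 1 / (2 * n**alpha)
Tn = int(np.log2(n * p))
for t in (Tn, Tn + 3, Tn + 6):
    print(t, tv_upper_bound_sq(n, p, t))
```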

Moving from Y to X

We've seen that Y mixes in an interval of length O(1) around $T_n = \log_2(n p_n)$: what does this tell us about the mixing time for X?

Corollary. For $p_n = 1/2n^\alpha$ with $\alpha \in (0, 1)$, X exhibits a cutoff at time $T^X_n = T_n / p_n$, with window size $O\big(\xi_n \sqrt{T^X_n / p_n}\big)$, where $\xi_n$ is any sequence tending to infinity.
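A Monte Carlo sanity check of the corollary (a sketch; the empirical TV estimate carries sampling noise of order $\sqrt{n/\text{trials}}$, so it only shows the trend):

```python
import numpy as np

rng = np.random.default_rng(1)

def run_walk(n, p, t):
    # simulate X for t moves from X_0 = 0
    x = 0
    for jump in rng.random(t) < p:
        x = (2 * x) % n if jump else (x + 1) % n
    return x

n, alpha, trials = 101, 0.5, 20_000
p = 1 / (2 * n**alpha)
TX = int(np.log2(n * p) / p)             # predicted cutoff time for X
for t in (TX // 2, TX, 2 * TX):
    hits = np.bincount([run_walk(n, p, t) for _ in range(trials)],
                       minlength=n)
    tv = 0.5 * np.abs(hits / trials - 1 / n).sum()
    print(t, round(tv, 3))
```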

And finally: open problems

1. We can deal with more interesting steps in our walk, but not yet with more interesting jumps; e.g. consider
$$x \mapsto \begin{cases} x + 1 & \text{w.p. } 1 - p_n \\ 2x & \text{w.p. } p_n/2 \\ \left(\frac{n+1}{2}\right) x & \text{w.p. } p_n/2 \end{cases}$$
(note that $(n+1)/2 = 2^{-1}$ in $\mathbb{Z}_n$ for odd n), or more general rules, such as $x \mapsto x^2$.

2. Or how about this process?
$$x \mapsto \begin{cases} x + 1 & \text{w.p. } 1 - p_n \\ ax & \text{w.p. } p_n \end{cases}$$
where multiplication by a is not invertible? (The stationary distribution won't be uniform...)
