MARKOV CHAINS AND HIDDEN MARKOV MODELS


MERYL SEAH

Date: DECEMBER 5.

Abstract. This is an expository paper outlining the basics of Markov chains. We start the paper by explaining what a finite Markov chain is. Then we describe what a stationary distribution is and show that every irreducible Markov chain has a unique stationary distribution. Next we talk about mixing. Then we briefly talk about an application of Markov chains, which is the use of hidden Markov models.

Contents
1. Finite Markov Chains
2. Stationary Distributions
3. Mixing
4. Ergodic Theorem
Acknowledgments
References

1. Finite Markov Chains

We will start with a definition, then dive into an example that will make the definition easier to picture. This expository paper follows Levin, Peres, and Wilmer's book on Markov chains [1].

Definition 1.1. A sequence of random variables (X_t) is a Markov chain with state space Ω and transition matrix P if for all x, y ∈ Ω, all t ≥ 1, and all events H_{t−1} = ⋂_{s=0}^{t−1} {X_s = x_s} satisfying P(H_{t−1} ∩ {X_t = x}) > 0, we have

(1.2)    P{X_{t+1} = y | H_{t−1} ∩ {X_t = x}} = P{X_{t+1} = y | X_t = x} = P(x, y).

The above equation means that the probability of moving to state y, given that we are currently in state x, does not depend on the sequence of states preceding x.

A finite Markov chain is best explained through an example. We will parameterize the space of all two-state Markov chains using the classic example of a frog jumping between two lily pads; we denote one lily pad l for left and the other r for right. Suppose that every morning the frog either stays on the lily pad it is on or jumps to the other lily pad. If the frog is on the right lily pad, then it jumps to the left lily pad with probability p; if it is on the left lily pad, then it jumps to the right lily pad with probability q. Then Ω = {l, r}. Let (X_0, X_1, X_2, ...) be the sequence of lily pads that the frog sat on on day 0, day 1, day 2, .... Based on the probabilities set up in the problem, the sequence (X_0, X_1, ...) is a Markov chain with transition matrix

(1.3)    P = [ P(r, r)  P(r, l) ] = [ 1 − p    p   ]
             [ P(l, r)  P(l, l) ]   [   q    1 − q ].

Suppose the frog starts day 0 on the right lily pad. We can store our distribution information in a row vector

(1.4)    µ_t = (P{X_t = r | X_0 = r}, P{X_t = l | X_0 = r}).

Then µ_1 = µ_0 P and µ_{t+1} = µ_t P. Continuing to multiply by P gives us

(1.5)    µ_t = µ_0 P^t    for all t ≥ 0.
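To make (1.3)–(1.5) concrete, here is a minimal numerical sketch. It assumes NumPy is available, and the values p = 0.3 and q = 0.6 are illustrative choices, not taken from the paper: it computes µ_10 by the matrix-power formula and compares it with the empirical distribution of many simulated frogs.

```python
import numpy as np

p, q = 0.3, 0.6  # hypothetical jump probabilities
P = np.array([[1 - p, p],    # row r: stay on r, jump to l
              [q, 1 - q]])   # row l: jump to r, stay on l

mu0 = np.array([1.0, 0.0])   # frog starts on the right lily pad (state 0)

# Distribution at time t = 10 via (1.5): mu_t = mu_0 P^t.
mu10 = mu0 @ np.linalg.matrix_power(P, 10)

# Empirical check: simulate many independent frogs for 10 days.
rng = np.random.default_rng(0)
n_frogs, t = 100_000, 10
state = np.zeros(n_frogs, dtype=int)  # 0 = r, 1 = l
for _ in range(t):
    u = rng.random(n_frogs)
    # From r, jump to l with prob p; from l, jump to r with prob q.
    state = np.where(state == 0, (u < p).astype(int), (u >= q).astype(int))
freq = np.bincount(state, minlength=2) / n_frogs

print(mu10, freq)  # the two vectors should agree to ~2 decimal places
```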

2. Stationary Distributions

When we raise a transition matrix to a high power, the 100th power for example, we notice that all of its rows approach one and the same row vector. (The sketch at the end of this discussion, after Lemma 2.5, illustrates this numerically.) We use this observation to begin our discussion of stationary distributions.

Definition 2.1. A stationary distribution of a Markov chain is a probability distribution π satisfying

(2.2)    π = πP,

where P is the transition matrix of the Markov chain.

Definition 2.3. For x ∈ Ω, the hitting time for x is

τ_x = min{t ≥ 0 : X_t = x},

the time at which the chain first visits state x. When the chain starts at x itself, we instead take the minimum over t ≥ 1, so that τ_x is the first return time to x.

We will now demonstrate that stationary distributions exist. First, we begin with a lemma about irreducible chains and expected hitting times.

Definition 2.4. A chain P is irreducible if for any two states x, y ∈ Ω there exists an integer t such that P^t(x, y) > 0. In other words, starting from any state, it is possible to get to any other state using transitions of positive probability.

Lemma 2.5. For any states x and y of an irreducible chain, E_x(τ_y) < ∞.

Proof. By the definition of irreducibility, there exist an integer r > 0 and a real ε > 0 such that for any states z, w ∈ Ω there is a j ≤ r with P^j(z, w) > ε. Hence, whatever the state of the chain at time kr, the chain visits y within the next r steps with probability at least ε, so P_x{τ_y > kr} ≤ (1 − ε)^k for all k ≥ 0. Summing the tail probabilities,

E_x(τ_y) = Σ_{t≥0} P_x{τ_y > t} ≤ r Σ_{k≥0} (1 − ε)^k < ∞.
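As promised at the start of this section, the high-power observation and Definition 2.1 can be checked numerically. A minimal sketch, assuming NumPy; the matrix is the illustrative frog chain from the previous sketch (p = 0.3, q = 0.6), so the specific numbers are assumptions rather than values from the paper:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])

# Rows of a high power of P approach a common row vector.
print(np.linalg.matrix_power(P, 100))  # both rows ~ (2/3, 1/3)

# That common row is the stationary distribution: pi = pi P means pi is
# a left eigenvector of P with eigenvalue 1, normalized to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])  # eigenvalue 1 is the largest
pi = pi / pi.sum()
print(pi, pi @ P)  # pi and pi P coincide
```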

Proposition 2.6 (Existence of a Stationary Distribution). Let P be the transition matrix of an irreducible Markov chain. Then

(1) there exists a probability distribution π on Ω such that π = πP and π(x) > 0 for all x ∈ Ω, and
(2) π(x) = 1/E_x(τ_x).

Proof. Let z ∈ Ω be an arbitrary state of the Markov chain. This proof looks at the average time the chain spends at each state before returning to z. We define

π̃(y) := E_z(number of visits to y before returning to z) = Σ_{t=0}^∞ P_z{X_t = y, τ_z > t}.

Then π̃(y) ≤ E_z(τ_z), so by the lemma, π̃(y) < ∞ for all y ∈ Ω. From how we defined π̃(y), we know that

(2.7)    Σ_{x∈Ω} π̃(x)P(x, y) = Σ_{x∈Ω} Σ_{t=0}^∞ P_z{X_t = x, τ_z > t} P(x, y).

Since the event {τ_z ≥ t + 1} = {τ_z > t} is determined by what X_0, ..., X_t are, we know that

(2.8)    P_z{X_t = x, X_{t+1} = y, τ_z ≥ t + 1} = P_z{X_t = x, τ_z ≥ t + 1} P(x, y).

Therefore, from the two equations, we know that

Σ_{x∈Ω} π̃(x)P(x, y) = Σ_{t=0}^∞ P_z{X_{t+1} = y, τ_z ≥ t + 1} = Σ_{t=1}^∞ P_z{X_t = y, τ_z ≥ t}.

We know that

Σ_{t=1}^∞ P_z{X_t = y, τ_z ≥ t} = π̃(y) − P_z{X_0 = y, τ_z > 0} + Σ_{t=1}^∞ P_z{X_t = y, τ_z = t}
                                = π̃(y) − P_z{X_0 = y} + P_z{X_{τ_z} = y}.

Suppose y = z. Then, since X_0 = z and X_{τ_z} = z, the probabilities P_z{X_0 = y} and P_z{X_{τ_z} = y} are both 1, so they cancel each other out. Now suppose y ≠ z. Then both P_z{X_0 = y} and P_z{X_{τ_z} = y} are 0. In either case,

(2.9)    Σ_{t=1}^∞ P_z{X_t = y, τ_z ≥ t} = π̃(y).

Therefore, it follows that π̃ = π̃P. We then normalize by Σ_x π̃(x) = E_z(τ_z), so

(2.10)    π(x) = π̃(x)/E_z(τ_z),

which satisfies π = πP, showing that π is a stationary distribution and proving the first part of the proposition. Finally, take z = x: before returning to x, the chain visits x exactly once, at time 0, so π̃(x) = 1, and hence for any x ∈ Ω a stationary probability distribution is given by

(2.11)    π(x) = 1/E_x(τ_x).
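Part (2) of Proposition 2.6 invites a Monte Carlo sanity check: simulate return times and compare 1/E_x(τ_x) with π(x). A sketch under the same illustrative setup as before (NumPy, frog chain with p = 0.3, q = 0.6); agreement is only approximate because the mean return time is estimated by simulation.

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
pi = np.array([2 / 3, 1 / 3])  # stationary distribution of this P
rng = np.random.default_rng(1)

def mean_return_time(x, trials=20_000):
    """Estimate E_x(tau_x): expected steps until first return to x."""
    total = 0
    for _ in range(trials):
        state, steps = x, 0
        while True:
            state = rng.choice(2, p=P[state])
            steps += 1
            if state == x:
                break
        total += steps
    return total / trials

for x in range(2):
    print(pi[x], 1 / mean_return_time(x))  # columns should nearly agree
```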

Now we will demonstrate the uniqueness of the stationary distribution. First, we begin with some definitions.

Definition 2.12. A function h : Ω → ℝ is harmonic at x if

h(x) = Σ_{y∈Ω} P(x, y)h(y).

Definition 2.13. A function is harmonic on D ⊆ Ω if it is harmonic at every state x ∈ D.

Remark 2.14. If h is regarded as a column vector, then a function which is harmonic on Ω satisfies Ph = h.

Lemma 2.15. Suppose that P is irreducible. A function h which is harmonic at every point of Ω is constant.

Proof. Since Ω is finite, there exists a state x_0 such that h(x_0) = M is maximal. Suppose there exists some state z with P(x_0, z) > 0 for which h(z) < M. Then

(2.16)    h(x_0) = P(x_0, z)h(z) + Σ_{y≠z} P(x_0, y)h(y) < M.

However, since h(x_0) = M, this is a contradiction. So we know that h(z) = M for all states z such that P(x_0, z) > 0. Let y ∈ Ω. Since the chain is irreducible, there exists a sequence x_0, x_1, x_2, ..., x_n = y such that P(x_i, x_{i+1}) > 0. Following the same logic that we used to show that h(z) = M, it follows that h(y) = h(x_{n−1}) = ··· = h(x_0) = M. Therefore, h is constant.

Corollary 2.17. Let P be the transition matrix of an irreducible Markov chain. Then there exists a unique probability distribution π satisfying π = πP.

Proof. We already know that there exists a probability distribution satisfying π = πP, because we proved the existence of a stationary distribution. By the lemma, the kernel of P − I, where I is the identity matrix, consists exactly of the constant column vectors and therefore has dimension 1. By the rank–nullity theorem, the column rank of P − I is |Ω| − 1. Since the row rank of a square matrix equals its column rank, the equation v = vP, where v is a row vector, also has a solution space of dimension 1, meaning there is exactly one solution vector whose entries sum to 1.
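The linear algebra in Corollary 2.17 can be mirrored numerically: P − I has rank |Ω| − 1, and appending the normalization Σ_x v(x) = 1 pins down the unique stationary row vector. A sketch, again assuming NumPy and the illustrative two-state chain:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
n = P.shape[0]

A = P - np.eye(n)
print(np.linalg.matrix_rank(A))  # n - 1 = 1, so the kernel is 1-dimensional

# Solve v(P - I) = 0 together with sum(v) = 1 as one linear system.
M = np.vstack([A.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print(pi)  # the unique stationary distribution, ~ (2/3, 1/3)
```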

3. Mixing

Definition 3.1. The total variation distance between two probability distributions µ and ν on Ω is defined by

(3.2)    ‖µ − ν‖_TV = max_{A⊆Ω} |µ(A) − ν(A)|.

In other words, the total variation distance between two probability distributions is the maximum difference between the probabilities they assign to a single event.

Definition 3.3. The variance of a random variable X is defined to be

(3.4)    Var(X) = E((X − E(X))^2).

Definition 3.5. The period of state x is defined to be the greatest common divisor of T(x) := {t ≥ 1 : P^t(x, x) > 0}, the set of times at which it is possible for the chain to return to the starting position x.

Remark 3.6. The chain is called aperiodic if all states have period 1. The chain is periodic if it is not aperiodic.

Now we state a lemma from number theory that we will not prove, because it is not in the scope of the paper.

Lemma 3.7. Any set of non-negative integers which is closed under addition and has greatest common divisor 1 must contain all but finitely many of the non-negative integers.

Proposition 3.8. If P is aperiodic and irreducible, then there exists an integer r such that P^t(x, y) > 0 for all x, y ∈ Ω and all t ≥ r.

Proof. Let s, t ∈ T(x). Then P^{s+t}(x, x) ≥ P^s(x, x)P^t(x, x) > 0, so s + t ∈ T(x); therefore the set T(x) is closed under addition. Since the chain is aperiodic, the greatest common divisor of T(x) is 1, so by the lemma there exists t(x) such that t ≥ t(x) implies t ∈ T(x). By the definition of irreducibility, we know that for all y ∈ Ω there exists r = r(x, y) such that P^r(x, y) > 0. Therefore, for t ≥ t(x) + r,

(3.9)    P^t(x, y) ≥ P^{t−r}(x, x)P^r(x, y) > 0.

So for t ≥ t′(x) := t(x) + max_y r(x, y), we have P^t(x, y) > 0 for all y ∈ Ω. If t ≥ max_x t′(x), then P^t(x, y) > 0 for all x, y ∈ Ω.

Definition 3.10. A matrix P is stochastic if its entries are all non-negative and

(3.11)    Σ_{y∈Ω} P(x, y) = 1    for all x ∈ Ω.

Proposition 3.12. Let µ and ν be two probability distributions on Ω. Then

(3.13)    ‖µ − ν‖_TV = (1/2) Σ_{x∈Ω} |µ(x) − ν(x)|.

Proof. Let B = {x : µ(x) ≥ ν(x)} and let A ⊆ Ω. Since any x ∈ A ∩ B^c satisfies µ(x) − ν(x) < 0, removing the elements of A ∩ B^c does not decrease the difference µ(A) − ν(A); likewise, adding the remaining elements of B does not decrease the difference. So

(3.14)    µ(A) − ν(A) ≤ µ(A ∩ B) − ν(A ∩ B) ≤ µ(B) − ν(B).

Using the same logic, it follows that

(3.15)    ν(A) − µ(A) ≤ ν(B^c) − µ(B^c).

Since ν(B^c) − µ(B^c) = µ(B) − ν(B), taking A = B makes |µ(A) − ν(A)| equal to this common upper bound. Thus,

(3.16)    ‖µ − ν‖_TV = (1/2)[µ(B) − ν(B) + ν(B^c) − µ(B^c)] = (1/2) Σ_{x∈Ω} |µ(x) − ν(x)|.
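Proposition 3.12 is easy to verify by brute force on a small state space, since the maximum in (3.2) runs over finitely many events. A sketch assuming NumPy; the distributions µ and ν below are arbitrary illustrative choices:

```python
import numpy as np
from itertools import chain, combinations

mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.25, 0.25, 0.5])

# Definition (3.2): maximize |mu(A) - nu(A)| over all events A.
idx = range(len(mu))
events = chain.from_iterable(combinations(idx, r) for r in range(len(mu) + 1))
tv_max = max(abs(mu[list(A)].sum() - nu[list(A)].sum()) for A in events)

# Proposition 3.12: half the pointwise L1 distance.
tv_l1 = 0.5 * np.abs(mu - nu).sum()

print(tv_max, tv_l1)  # equal, up to floating point
```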

Remark 3.17. It follows from this proposition and the triangle inequality for real numbers that total variation distance itself satisfies the triangle inequality:

(3.18)    ‖µ − ν‖_TV ≤ ‖µ − η‖_TV + ‖η − ν‖_TV.

Theorem 3.19 (Convergence Theorem). Suppose that P is irreducible and aperiodic, with stationary distribution π. Then there exist constants α ∈ (0, 1) and C > 0 such that

(3.20)    max_{x∈Ω} ‖P^t(x, ·) − π‖_TV ≤ Cα^t.

Proof. Since P is irreducible and aperiodic, there exists an r such that P^r has strictly positive entries. Let Π be the matrix with |Ω| rows, each of them the row vector π. For small enough δ > 0 we have, for all x, y ∈ Ω,

(3.21)    P^r(x, y) ≥ δπ(y).

Set θ = 1 − δ. Then a stochastic matrix Q is defined by the equation

(3.22)    P^r = (1 − θ)Π + θQ.

Since Π is made up of the row vector π, we know that ΠM = Π for any matrix M such that πM = π. We also know that MΠ = Π if M is a stochastic matrix, because its rows sum to 1. Now we will show by induction that for k ≥ 1,

(3.23)    P^{rk} = (1 − θ^k)Π + θ^k Q^k.

If k = 1, this equation holds because of how we defined Q. Suppose that our claim is true for k = n, so

(3.24)    P^{rn} = (1 − θ^n)Π + θ^n Q^n.

Then

    P^{r(n+1)} = P^{rn} P^r
               = [(1 − θ^n)Π + θ^n Q^n] P^r
               = (1 − θ^n)ΠP^r + θ^n Q^n [(1 − θ)Π + θQ]
               = (1 − θ^n)ΠP^r + (1 − θ)θ^n Q^n Π + θ^{n+1} Q^{n+1}.

Since πP^r = π and Q^n is stochastic, we know that ΠP^r = Π and Q^n Π = Π, so

(3.25)    P^{r(n+1)} = [1 − θ^{n+1}]Π + θ^{n+1} Q^{n+1}.

Therefore, for all k ≥ 1, P^{rk} = (1 − θ^k)Π + θ^k Q^k. Now we multiply by P^j to get

(3.26)    P^{rk+j} − Π = θ^k (Q^k P^j − Π).

Now we add the absolute values of the entries in row x_0 for the matrix on each side of the equation and divide by 2; by Proposition 3.12, each side becomes a total variation distance. Since 1 is the largest possible total variation distance, ‖(Q^k P^j)(x_0, ·) − π‖_TV ≤ 1, so

(3.27)    ‖P^{rk+j}(x_0, ·) − π‖_TV ≤ θ^k.

Writing t = rk + j with 0 ≤ j < r, we have θ^k ≤ θ^{t/r − 1}, so the theorem holds with C = 1/θ and α = θ^{1/r}.
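The geometric decay promised by the Convergence Theorem is visible numerically: max_x ‖P^t(x, ·) − π‖_TV shrinks by a roughly constant factor per step. A sketch assuming NumPy; the 3-state chain below is an arbitrary irreducible, aperiodic example, not one taken from the paper.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Stationary distribution: left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.isclose(w, 1)][:, 0])
pi /= pi.sum()

Pt = np.eye(3)
for t in range(1, 11):
    Pt = Pt @ P
    d_t = 0.5 * np.abs(Pt - pi).sum(axis=1).max()  # via Prop. 3.12
    print(t, d_t)  # successive values shrink by roughly a constant alpha
```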

Definition 3.28. The maximum distance over x_0 ∈ Ω between P^t(x_0, ·) and π is denoted

(3.29)    d(t) := max_{x∈Ω} ‖P^t(x, ·) − π‖_TV.

We also define

(3.30)    d̄(t) := max_{x,y∈Ω} ‖P^t(x, ·) − P^t(y, ·)‖_TV.

Lemma 3.31. If d(t) and d̄(t) are defined as above, then

(3.32)    d(t) ≤ d̄(t) ≤ 2d(t).

Proof. The bound d̄(t) ≤ 2d(t) follows from the triangle inequality. For d(t) ≤ d̄(t): since π is stationary, we know that

(3.33)    π(A) = Σ_{y∈Ω} π(y)P^t(y, A)

for any set A. So it follows that

    ‖P^t(x, ·) − π‖_TV = max_{A⊆Ω} |P^t(x, A) − π(A)|
                       = max_{A⊆Ω} |Σ_{y∈Ω} π(y)[P^t(x, A) − P^t(y, A)]|.

So, by the triangle inequality, this is at most

    Σ_{y∈Ω} π(y) max_{A⊆Ω} |P^t(x, A) − P^t(y, A)| = Σ_{y∈Ω} π(y) ‖P^t(x, ·) − P^t(y, ·)‖_TV.

Since the average of a set of numbers cannot be greater than the maximum of that set, Σ_y π(y)‖P^t(x, ·) − P^t(y, ·)‖_TV is bounded by max_y ‖P^t(x, ·) − P^t(y, ·)‖_TV ≤ d̄(t).

Mixing time is a way to measure the time it takes for the Markov chain to get close to the stationary distribution.

Definition 3.34. The mixing time of a Markov chain is defined by

(3.35)    t_mix(ε) := min{t : d(t) ≤ ε},

and

(3.36)    t_mix := t_mix(1/4).

The 1/4 is an arbitrary number, chosen because it is less than 1/2.
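Given a way to evaluate d(t), the mixing time can be computed directly from Definition 3.34 by scanning for the first t with d(t) ≤ ε. A sketch reusing the illustrative 3-state chain from the previous sketch:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.isclose(w, 1)][:, 0])
pi /= pi.sum()

def d(t):
    """d(t) = max_x || P^t(x, .) - pi ||_TV, computed via Prop. 3.12."""
    Pt = np.linalg.matrix_power(P, t)
    return 0.5 * np.abs(Pt - pi).sum(axis=1).max()

def t_mix(eps=0.25, t_max=1000):
    return next(t for t in range(1, t_max) if d(t) <= eps)

print(t_mix())           # t_mix = t_mix(1/4)
print(t_mix(eps=1 / 8))  # at most ceil(log2 8) = 3 times t_mix, by (3.43)
```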

Lemma 3.37. Let P be the transition matrix of a Markov chain with state space Ω. Let µ and ν be two distributions on Ω. Then

(3.38)    ‖µP − νP‖_TV ≤ ‖µ − ν‖_TV.

Proof.

    ‖µP − νP‖_TV = (1/2) Σ_x |µP(x) − νP(x)|
                 = (1/2) Σ_x |Σ_y µ(y)P(y, x) − Σ_y ν(y)P(y, x)|
                 = (1/2) Σ_x |Σ_y P(y, x)[µ(y) − ν(y)]|
                 ≤ (1/2) Σ_x Σ_y P(y, x)|µ(y) − ν(y)|
                 = (1/2) Σ_y |µ(y) − ν(y)| Σ_x P(y, x)
                 = (1/2) Σ_y |µ(y) − ν(y)|
                 = ‖µ − ν‖_TV.

Remark 3.39. By this lemma, when c and t are non-negative integers,

(3.40)    d(ct) ≤ d̄(ct) ≤ d̄(t)^c.

Thus, from the above remark and Lemma 3.31, it follows that

(3.41)    d(ℓ t_mix(ε)) ≤ d̄(ℓ t_mix(ε)) ≤ d̄(t_mix(ε))^ℓ ≤ (2ε)^ℓ.

Plugging in ε = 1/4, we get

(3.42)    d(ℓ t_mix) ≤ 2^{−ℓ},

and from the definition of mixing time, we get that

(3.43)    t_mix(ε) ≤ ⌈log_2 ε^{−1}⌉ t_mix.
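To close the section, here is a quick numerical check of Lemma 3.37, the contraction property underlying Remark 3.39. Assumptions as before (NumPy, the illustrative 3-state chain); µ and ν are random probability vectors:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
rng = np.random.default_rng(2)

def tv(a, b):
    """Total variation distance via Proposition 3.12."""
    return 0.5 * np.abs(a - b).sum()

for _ in range(5):
    mu = rng.dirichlet(np.ones(3))  # random distribution on 3 states
    nu = rng.dirichlet(np.ones(3))
    # One step of the chain never increases total variation distance.
    print(tv(mu @ P, nu @ P) <= tv(mu, nu) + 1e-12)  # always True
```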

4. Ergodic Theorem

Theorem 4.1 (Strong Law of Large Numbers). Let Z_1, Z_2, ... be a sequence of random variables with E(Z_s) = 0 for all s and

(4.2)    Var(Z_{s+1} + ··· + Z_{s+k}) ≤ Ck

for all s and k. Then

(4.3)    P{ lim_{t→∞} (1/t) Σ_{s=1}^{t} Z_s = 0 } = 1.

We will not be proving the Strong Law of Large Numbers, as it is outside the scope of this paper.

Lemma 4.4. Let (a_n) be a bounded sequence. If, for a sequence of integers (n_k) satisfying lim_{k→∞} n_k/n_{k+1} = 1, we have

(4.5)    lim_{k→∞} (a_1 + ··· + a_{n_k})/n_k = a,

then

(4.6)    lim_{n→∞} (a_1 + ··· + a_n)/n = a.

Proof. We begin by defining A_n := (1/n) Σ_{k=1}^{n} a_k. Let n_k ≤ m < n_{k+1}. Then

(4.7)    A_m = (n_k/m) A_{n_k} + (1/m) Σ_{j=n_k+1}^{m} a_j.

The fraction n_k/m tends to 1, since n_k/n_{k+1} ≤ n_k/m ≤ 1. So if each |a_j| is bounded by B, then |(1/m) Σ_{j=n_k+1}^{m} a_j| is bounded by

(4.8)    B (n_{k+1} − n_k)/n_k,

which tends to 0, so A_m → a.

Theorem 4.9 (Ergodic Theorem). Let f be a real-valued function defined on Ω. If (X_t) is an irreducible, aperiodic Markov chain, then for any starting distribution µ,

(4.10)    P_µ{ lim_{t→∞} (1/t) Σ_{s=0}^{t−1} f(X_s) = E_π(f) } = 1.

Proof. Suppose that the chain starts at state x. Set τ_{x,0} := 0 and define

(4.11)    τ_{x,k} := min{t > τ_{x,(k−1)} : X_t = x}.

Every time the chain visits state x, it starts over in a sense, so the excursions (X_{τ_{x,(k−1)}}, X_{τ_{x,(k−1)}+1}, ..., X_{τ_{x,k}−1}) are independent of each other. So, if

(4.12)    Y_k := Σ_{s=τ_{x,(k−1)}}^{τ_{x,k}−1} f(X_s),

then the sequence (Y_k) is independent and identically distributed. If S_t = Σ_{s=0}^{t−1} f(X_s), then S_{τ_{x,n}} = Σ_{k=1}^{n} Y_k, and by the Strong Law of Large Numbers,

(4.13)    P_x{ lim_{n→∞} S_{τ_{x,n}}/n = E_x(Y_1) } = 1.

Using the Strong Law of Large Numbers again,

(4.14)    P_x{ lim_{n→∞} τ_{x,n}/n = E_x(τ_x) } = 1.

So, by division,

(4.15)    P_x{ lim_{n→∞} S_{τ_{x,n}}/τ_{x,n} = E_x(Y_1)/E_x(τ_x) } = 1.

Since

    E_x(Y_1) = E_x( Σ_{s=0}^{τ_x−1} f(X_s) )
             = E_x( Σ_{y∈Ω} f(y) Σ_{s=0}^{τ_x−1} 1{X_s = y} )
             = Σ_{y∈Ω} f(y) E_x( Σ_{s=0}^{τ_x−1} 1{X_s = y} ),

and from the proof of Proposition 2.6 we know that π(y) = E_x(Σ_{s=0}^{τ_x−1} 1{X_s = y})/E_x(τ_x), it follows that

(4.16)    E_x(Y_1) = E_π(f) E_x(τ_x).

So,

(4.17)    P_x{ lim_{n→∞} S_{τ_{x,n}}/τ_{x,n} = E_π(f) } = 1.

By Lemma 4.4, applied along the times n_k = τ_{x,k}, the theorem holds when µ equals the probability distribution with unit mass at x. The proof is then completed by averaging over the starting state.
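The Ergodic Theorem also lends itself to direct simulation: the time average of f along a single trajectory should approach E_π(f) regardless of the starting state. A sketch under the same illustrative setup; the function f below is an arbitrary test function, not one from the paper.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.isclose(w, 1)][:, 0])
pi /= pi.sum()

f = np.array([1.0, -2.0, 5.0])  # arbitrary real-valued f on Omega
rng = np.random.default_rng(3)

T, state, total = 200_000, 0, 0.0
for _ in range(T):
    total += f[state]
    state = rng.choice(3, p=P[state])

print(total / T, pi @ f)  # time average vs. E_pi(f); should be close
```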

Acknowledgments. It is a pleasure to thank my mentor, Jonathan DeWitt, for guiding me through this process. I would also like to thank the professors who taught me in this program, as well as the authors of the papers I referenced throughout the writing process.

References

[1] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times.
[2] L. R. Rabiner and B. H. Juang. An Introduction to Hidden Markov Models. rvetro/vetrobiocomp/hmm/rabiner1986
[3] Persi Diaconis. The Markov Chain Monte Carlo Revolution. shmuel/network-course-readings/MCMCRev.pdf
[4] J. Chang. Stochastic Processes. pollard/courses/51.spring09/handouts/changnotes.pdf
