Introduction to Probability: Example Sheet 3 - Michaelmas 2006. Michael Tehranchi.

Problem 1. Let $(X_n)_{n \ge 0}$ be a homogeneous Markov chain on $S$ with transition matrix $P$. Given a $k \in \mathbb{N}$, let $Z_n = X_{kn}$. Prove that $(Z_n)_{n \ge 0}$ is a Markov chain with transition matrix $P^k$.

Solution 1. First we prove a general result: if $(X_n)_{n \ge 0}$ is a Markov chain then for any collection of states $i_1, \ldots, i_k$ and any collection of indices $0 \le n_1 < \cdots < n_k$ we have
\[ P(X_{n_k} = i_k \mid X_{n_{k-1}} = i_{k-1}, \ldots, X_{n_1} = i_1) = P(X_{n_k} = i_k \mid X_{n_{k-1}} = i_{k-1}). \]
Equivalently, we will show that
\[ P(X_{n_1} = i_1, \ldots, X_{n_k} = i_k) = P(X_{n_k} = i_k \mid X_{n_{k-1}} = i_{k-1}) \cdots P(X_{n_2} = i_2 \mid X_{n_1} = i_1)\, P(X_{n_1} = i_1). \]
Note that
\[ P(X_{n_1} = i_1, \ldots, X_{n_k} = i_k) = \sum_{j \in A^{n_1, \ldots, n_k}_{i_1, \ldots, i_k}} P(X_0 = j_0, \ldots, X_{n_k} = j_{n_k}) \]
where
\[ A^{n_1, \ldots, n_k}_{i_1, \ldots, i_k} = \{ (j_0, \ldots, j_{n_k}) \in S^{n_k + 1} : j_{n_l} = i_l \text{ for } l = 1, \ldots, k \}. \]
But by induction we have
\[ P(X_0 = j_0, \ldots, X_n = j_n) = P(X_n = j_n \mid X_{n-1} = j_{n-1}) \cdots P(X_1 = j_1 \mid X_0 = j_0)\, P(X_0 = j_0). \]
Hence for $n \ge m$ and $i_m, i_n \in S$ we have
\begin{align*}
P(X_n = i_n \mid X_m = i_m) &= \frac{P(X_n = i_n, X_m = i_m)}{P(X_m = i_m)} \\
&= \frac{\sum_{j \in A^{m,n}_{i_m, i_n}} P(X_n = j_n \mid X_{n-1} = j_{n-1}) \cdots P(X_1 = j_1 \mid X_0 = j_0)\, P(X_0 = j_0)}{\sum_{j \in A^{m}_{i_m}} P(X_m = j_m \mid X_{m-1} = j_{m-1}) \cdots P(X_1 = j_1 \mid X_0 = j_0)\, P(X_0 = j_0)} \\
&= \sum_{j \in B^{m,n}_{i_m, i_n}} P(X_n = j_n \mid X_{n-1} = j_{n-1}) \cdots P(X_{m+1} = j_{m+1} \mid X_m = j_m)
\end{align*}
where $B^{m,n}_{i_m, i_n} = \{ (j_m, \ldots, j_n) \in S^{n-m+1} : j_m = i_m, j_n = i_n \}$. The claim now follows.

Returning to the specific problem, we have
\begin{align*}
P(Z_n = i_n \mid Z_{n-1} = i_{n-1}, \ldots, Z_0 = i_0) &= P(X_{kn} = i_n \mid X_{k(n-1)} = i_{n-1}, \ldots, X_0 = i_0) \\
&= P(X_{kn} = i_n \mid X_{k(n-1)} = i_{n-1}) \\
&= (P^k)_{i_{n-1}, i_n} \\
&= P(Z_n = i_n \mid Z_{n-1} = i_{n-1}),
\end{align*}
where the third equality comes from summing products of one-step transition probabilities over all paths of length $k$ from $i_{n-1}$ to $i_n$. Hence $(Z_n)_{n \ge 0}$ is a Markov chain with transition matrix $P^k$.
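As a quick numerical sanity check (not part of the original sheet), the following sketch simulates a chain with an arbitrary 3-state matrix and compares the empirical transition frequencies of the skeleton $Z_n = X_{kn}$ with $P^k$. The matrix, seed, skip $k$, and trajectory length are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary stochastic matrix on S = {0, 1, 2}, for illustration only.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.25, 0.25]])
k = 3

# Simulate a long trajectory of the chain (X_n).
N = 200_000
X = np.empty(N, dtype=int)
X[0] = 0
for n in range(1, N):
    X[n] = rng.choice(3, p=P[X[n - 1]])

# The k-skeleton Z_n = X_{kn}; estimate its one-step transition frequencies.
Z = X[::k]
counts = np.zeros((3, 3))
for a, b in zip(Z[:-1], Z[1:]):
    counts[a, b] += 1
empirical = counts / counts.sum(axis=1, keepdims=True)

print("P^k:\n", np.linalg.matrix_power(P, k).round(3))
print("empirical:\n", empirical.round(3))  # should be close to P^k
```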

Problem 2. Let $U_1, U_2, \ldots$ be a sequence of independent random variables uniformly distributed on $[0,1]$. Given a function $G : S \times [0,1] \to S$, let $(X_n)_{n \ge 0}$ be defined recursively by $X_{n+1} = G(X_n, U_{n+1})$. Show that $(X_n)_{n \ge 0}$ is a Markov chain on $S$. Prove that all Markov chains can be realized in this fashion with a suitable choice of the function $G$.

Solution 2. Let $i_0, \ldots, i_{n+1}$ be any collection of points in $S$. Since $U_{n+1}$ is independent of $X_0, \ldots, X_n$ we have
\begin{align*}
P(X_{n+1} = i_{n+1} \mid X_0 = i_0, \ldots, X_n = i_n) &= P(G(X_n, U_{n+1}) = i_{n+1} \mid X_0 = i_0, \ldots, X_n = i_n) \\
&= P(G(i_n, U_{n+1}) = i_{n+1}) \\
&= P(G(X_n, U_{n+1}) = i_{n+1} \mid X_n = i_n) \\
&= P(X_{n+1} = i_{n+1} \mid X_n = i_n)
\end{align*}
and hence $(X_n)_{n \ge 0}$ is a Markov chain. The transition probabilities are then given by $p_{ij} = P(G(i, U_1) = j)$.

Conversely, suppose $P = (p_{ij})_{i,j}$ is an arbitrary stochastic matrix. Let $G : S \times [0,1] \to S$ be defined by
\[ G(i, u) = j \quad \text{if} \quad \sum_{k=1}^{j-1} p_{ik} \le u < \sum_{k=1}^{j} p_{ik}. \]
If $U$ is a random variable uniformly distributed on $[0,1]$ then $P(G(i, U) = j) = p_{ij}$ by construction. Hence the Markov chain with transition matrix $P$ can be realized by the recurrence $X_{n+1} = G(X_n, U_{n+1})$, where $U_1, U_2, \ldots$ is a sequence of independent uniform $[0,1]$ random variables.
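The function $G$ built here is exactly inverse-CDF sampling of each row of $P$. A minimal sketch of the construction, with an arbitrary two-state matrix chosen only for illustration (state labels are 0-based in the code):

```python
import numpy as np

def make_G(P):
    """Build the update function G(i, u) from the cumulative row sums
    of the stochastic matrix P, as in Solution 2."""
    C = np.cumsum(P, axis=1)  # C[i, j] = p_{i0} + ... + p_{ij}
    def G(i, u):
        # the smallest j with u < C[i, j], i.e. the interval containing u
        return int(np.searchsorted(C[i], u, side="right"))
    return G

# Example: a two-state chain driven by iid uniforms.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
G = make_G(P)
rng = np.random.default_rng(1)

X = [0]
for _ in range(10):
    X.append(G(X[-1], rng.uniform()))
print(X)  # one sample path of the chain with transition matrix P
```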

Problem 3. Let $X_1, X_2, \ldots$ be a sequence of independent random variables with
\[ P(X_n = 1) = P(X_n = -1) = \tfrac{1}{2} \]
for all $n$. Let $S_n = X_1 + \cdots + X_n$. Prove that $(S_n)_{n \ge 0}$ is a recurrent Markov chain on the set of integers $\mathbb{Z} = \{0, \pm 1, \pm 2, \ldots\}$. [You might find Stirling's formula useful: $n! \sim \sqrt{2\pi}\, n^{n + 1/2} e^{-n}$.]

Solution 3. The stochastic process $(S_n)_{n \ge 0}$ is a Markov chain since for all integers $i_1, \ldots, i_{n+1}$ we have
\begin{align*}
P(S_{n+1} = i_{n+1} \mid S_1 = i_1, \ldots, S_n = i_n) &= P(S_n + X_{n+1} = i_{n+1} \mid S_1 = i_1, \ldots, S_n = i_n) \\
&= P(X_{n+1} = i_{n+1} - i_n) \\
&= P(S_{n+1} = i_{n+1} \mid S_n = i_n),
\end{align*}
where we have used the fact that $X_{n+1}$ is independent of $S_1, \ldots, S_n$.

We can prove recurrence in two ways. First, by counting the left and right turns of the random walk we have
\[ P(S_n = 0) = \begin{cases} 0 & \text{if } n \text{ is odd} \\ 2^{-n} \binom{n}{n/2} & \text{if } n \text{ is even.} \end{cases} \]
By Stirling's formula, we have the approximation
\[ \binom{2k}{k} \sim \frac{2^{2k}}{\sqrt{\pi k}}, \]
which implies
\[ \sum_{n=1}^{\infty} p_{00}(n) = \sum_{n=1}^{\infty} P(S_n = 0) = \infty. \]
The state zero is thus a recurrent state. Since the Markov chain is irreducible, it is recurrent.

Alternatively, let $X_n = S_n + X_0$ be the walk started from $X_0$, and define the stopping time $T = \inf\{n \ge 0 : X_n = 0\}$. Let $h_i = P(T < \infty \mid X_0 = i)$. By conditioning on the first step of the walk we have
\[ h_i = \tfrac{1}{2} h_{i-1} + \tfrac{1}{2} h_{i+1}. \]
All solutions to this linear difference equation are of the form $h_i = a + bi$ for constants $a, b \in \mathbb{R}$. Since we have the bound $0 \le h_i \le 1$, then $b = 0$. Also, we have the boundary condition $h_0 = 1$, so $a = 1$. In particular,
\[ P(S_n = 0 \text{ for some } n \ge 1) = \tfrac{1}{2} h_{-1} + \tfrac{1}{2} h_{1} = 1 \]
and the random walk is recurrent.
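A small script can illustrate both the Stirling approximation and the divergence of $\sum_n p_{00}(n)$. The recurrence $p_{00}(2k) = p_{00}(2k-2) \cdot \frac{2k-1}{2k}$ used below follows directly from the binomial formula above, so the exact values are cheap to generate.

```python
import math

# p00(2k) = 2^{-2k} C(2k, k) obeys p00(2k) = p00(2k-2) * (2k-1)/(2k).
p, total = 1.0, 0.0
for k in range(1, 100_001):
    p *= (2 * k - 1) / (2 * k)
    total += p
    if k in (1, 10, 100, 1_000, 10_000, 100_000):
        print(f"k={k:6d}  p00(2k)={p:.6f}  "
              f"1/sqrt(pi k)={1 / math.sqrt(math.pi * k):.6f}  "
              f"partial sum={total:.1f}")
# The two middle columns agree ever more closely (Stirling), and the
# partial sums grow without bound, consistent with recurrence.
```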

Problem 4. Fix a natural number $d$, and let $X_1, X_2, \ldots$ be a sequence of independent vector-valued random variables with
\[ P(X_n = e_i) = P(X_n = -e_i) = \frac{1}{2d} \]
for all $1 \le i \le d$, where
\[ e_i = (\underbrace{0, \ldots, 0}_{i-1}, 1, \underbrace{0, \ldots, 0}_{d-i}). \]
Let $S_n = X_1 + \cdots + X_n$. Prove that $(S_n)_{n \ge 0}$ is a transient Markov chain on $\mathbb{Z}^d$ if and only if $d \ge 3$. [You may find the following inequality useful: if $i_1 + \cdots + i_d = dn$ then $i_1! \cdots i_d! \ge (n!)^d$.]

Solution 4. As before, by counting the left, right, up, down, etc. turns of the random walk we have
\[ P(S_n = 0) = \begin{cases} 0 & \text{if } n \text{ is odd} \\ (2d)^{-n} \displaystyle\sum_{i_1 + \cdots + i_d = n/2} \frac{n!}{(i_1!)^2 \cdots (i_d!)^2} & \text{if } n \text{ is even.} \end{cases} \]
For $d = 2$, a miracle occurs and $P(S_{2k} = 0) = \left[ 2^{-2k} \binom{2k}{k} \right]^2$ exactly. By Stirling's formula we have $P(S_{2k} = 0) \sim (\pi k)^{-1}$, and since $\sum_n p_{00}(n) = \infty$ the walk is recurrent.

We can bound the probability of $\{S_{2k} = 0\}$ by
\begin{align*}
(2d)^{-2k} \sum_{i_1 + \cdots + i_d = k} \frac{(2k)!}{(i_1!)^2 \cdots (i_d!)^2}
&\le (2d)^{-2k} \, \frac{(2k)!}{k!} \cdot \frac{1}{\min_{i_1 + \cdots + i_d = k} i_1! \cdots i_d!} \sum_{i_1 + \cdots + i_d = k} \frac{k!}{i_1! \cdots i_d!} \\
&= (4d)^{-k} \, \frac{(2k)!}{k! \, \min_{i_1 + \cdots + i_d = k} i_1! \cdots i_d!} \\
&= (4d)^{-k} \, \frac{(2k)!}{k! \, [(k/d)!]^d} \\
&\le C k^{-d/2}
\end{align*}
for some constant $C > 0$, at least when $k$ is divisible by $d$, where we have used the fact
\[ \sum_{i_1 + \cdots + i_d = k} \frac{k!}{i_1! \cdots i_d!} = d^k. \]
(If $k$ is not divisible by $d$ we have
\[ \min_{i_1 + \cdots + i_d = k} i_1! \cdots i_d! = \big( \lfloor k/d \rfloor ! \big)^{d(1 - \{k/d\})} \big( \lfloor 1 + k/d \rfloor ! \big)^{d \{k/d\}}, \]
where $\lfloor x \rfloor$ is the greatest integer less than or equal to $x$ and $\{x\} = x - \lfloor x \rfloor$ is the fractional part of $x$, in which case the asymptotics still hold.)

If $d \ge 3$ the sum $\sum_n p_{00}(n)$ converges and $0$ is a transient state, and hence the chain is transient. Conversely, for $d = 1$ the walk is recurrent by Problem 3, and for $d = 2$ it is recurrent by the computation above.
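The multinomial formula for $P(S_{2k} = 0)$ can be evaluated exactly for small $k$, making the contrast between $d \le 2$ and $d \ge 3$ visible. A sketch, where the cutoff $k \le 12$ is an arbitrary choice:

```python
import math
from fractions import Fraction

def compositions(k, d):
    """All d-tuples of nonnegative integers summing to k."""
    if d == 1:
        yield (k,)
        return
    for first in range(k + 1):
        for rest in compositions(k - first, d - 1):
            yield (first,) + rest

def p00(k, d):
    """Exact P(S_{2k} = 0) for the simple random walk on Z^d,
    via the multinomial formula from Solution 4."""
    s = Fraction(0)
    for comp in compositions(k, d):
        denom = 1
        for i in comp:
            denom *= math.factorial(i) ** 2
        s += Fraction(math.factorial(2 * k), denom)
    return s / (2 * d) ** (2 * k)

for d in (1, 2, 3):
    probs = [float(p00(k, d)) for k in range(1, 13)]
    print(f"d={d}: p00(2)={probs[0]:.4f}  p00(24)={probs[-1]:.4f}  "
          f"partial sum={sum(probs):.3f}")
# The partial sums keep growing for d = 1, 2 but settle down quickly
# for d = 3, in line with recurrence vs. transience.
```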

Problem 5. Let $T_1$ and $T_2$ be stopping times for a Markov chain $(X_n)_{n \ge 0}$ on $S$. Prove that each of the following is also a stopping time:
1. $T = \min\{n : X_n = i\}$ for some fixed $i \in S$.
2. $T(\omega) = N$ for all $\omega \in \Omega$, for a fixed $N \in \mathbb{N}$.
3. $T = \min\{T_1, T_2\}$.
4. $T = \max\{T_1, T_2\}$.
5. $T = T_1 + T_2$.

Solution 5. By definition, a random variable $T : \Omega \to \mathbb{N} \cup \{\infty\}$ is a stopping time if and only if for every $n \in \mathbb{N}$ there is a function $f_n : S^{n+1} \to \{0,1\}$ such that $\mathbf{1}_{\{T = n\}} = f_n(X_0, \ldots, X_n)$.

Notice that if $T$ is a stopping time then there exist functions $g_n$ such that $\mathbf{1}_{\{T \ge n+1\}} = g_n(X_0, \ldots, X_n)$, since
\[ \mathbf{1}_{\{T \ge n+1\}} = 1 - \mathbf{1}_{\{T \le n\}} = 1 - \sum_{k=0}^{n} f_k(X_0, \ldots, X_k). \]
Conversely, if a random variable $T$ has the property that for every $n \in \mathbb{N}$ there is a function $g_n$ such that $\mathbf{1}_{\{T \ge n+1\}} = g_n(X_0, \ldots, X_n)$, then $T$ is a stopping time, since
\[ \mathbf{1}_{\{T = n\}} = \mathbf{1}_{\{T \ge n\}} - \mathbf{1}_{\{T \ge n+1\}} = g_{n-1}(X_0, \ldots, X_{n-1}) - g_n(X_0, \ldots, X_n). \]
Similarly, $T$ is a stopping time if and only if there exist functions $h_n$ such that $\mathbf{1}_{\{T \le n\}} = h_n(X_0, \ldots, X_n)$.

With these characterizations (see the sketch after this list):
1. $\mathbf{1}_{\{T = n\}} = \mathbf{1}_{\{X_0 \ne i, \ldots, X_{n-1} \ne i, X_n = i\}}$.
2. $\mathbf{1}_{\{T = n\}} = \begin{cases} 1 & \text{if } n = N \\ 0 & \text{if } n \ne N. \end{cases}$
3. $\mathbf{1}_{\{T \ge n+1\}} = \mathbf{1}_{\{T_1 \ge n+1,\ T_2 \ge n+1\}} = \mathbf{1}_{\{T_1 \ge n+1\}} \mathbf{1}_{\{T_2 \ge n+1\}}$.
4. $\mathbf{1}_{\{T \le n\}} = \mathbf{1}_{\{T_1 \le n,\ T_2 \le n\}} = \mathbf{1}_{\{T_1 \le n\}} \mathbf{1}_{\{T_2 \le n\}}$.
5. $\mathbf{1}_{\{T = n\}} = \sum_{k=0}^{n} \mathbf{1}_{\{T_1 = k\}} \mathbf{1}_{\{T_2 = n-k\}}$.
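These identities are simple enough to check mechanically along a simulated path. The sketch below (arbitrary 3-state matrix, illustrative only) verifies the set identities behind items 3 and 4 for a pair of hitting times.

```python
import numpy as np

rng = np.random.default_rng(2)

# One sample path of a 3-state chain; the matrix is arbitrary.
P = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4],
              [0.5, 0.2, 0.3]])
X = [0]
for _ in range(50):
    X.append(int(rng.choice(3, p=P[X[-1]])))

def hitting_time(path, i):
    """T = min{n : X_n = i}, item 1 (inf if i is never visited)."""
    for n, x in enumerate(path):
        if x == i:
            return n
    return float("inf")

T1, T2 = hitting_time(X, 1), hitting_time(X, 2)

# Items 3 and 4: whether min/max of the two times has occurred by time n
# is decided by whether each individual time has, i.e. by the path so far.
for n in range(len(X)):
    assert (min(T1, T2) <= n) == (T1 <= n or T2 <= n)
    assert (max(T1, T2) <= n) == (T1 <= n and T2 <= n)
print("T1 =", T1, " T2 =", T2)
```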

Problem 6. A flea hops randomly on the vertices of a triangle with vertices labelled 1, 2, and 3, hopping to each of the other vertices with equal probability. If the flea starts at vertex 1, find the probability that after $n$ hops the flea is back to vertex 1. A second flea also starts at vertex 1 and hops about on the vertices of a triangle, but this flea is twice as likely to jump clockwise as anticlockwise. What is the probability that after $n$ hops this second flea is back to vertex 1?

Solution 6. At any time, the first flea is either on vertex 1 (state $A$) or on one of the two other vertices (state $B$). If it is in state $A$, it hops to state $B$ with probability 1. On the other hand, if it is in state $B$, it hops to state $A$ with probability $1/2$ and stays in state $B$ with probability $1/2$. Hence the transition matrix is
\[ P = \begin{pmatrix} 0 & 1 \\ 1/2 & 1/2 \end{pmatrix}. \]
The eigenvalues of $P$ are $1$ and $-1/2$. We know on general principles that $(P^n)_{11} = a + b(-1/2)^n$ for some constants $a, b$. But $(P^0)_{11} = 1$ and $(P^1)_{11} = 0$, so the desired probability is
\[ (P^n)_{11} = \frac{1}{3} + \frac{2}{3} \left( -\frac{1}{2} \right)^n. \]

Now label the vertices of the triangle states 1, 2, and 3, with $1 \to 2 \to 3 \to 1$ clockwise. The transition matrix of the second flea is
\[ P = \begin{pmatrix} 0 & 2/3 & 1/3 \\ 1/3 & 0 & 2/3 \\ 2/3 & 1/3 & 0 \end{pmatrix} \]
with eigenvalues $1$ and $\frac{-1 \pm i/\sqrt{3}}{2} = 3^{-1/2} e^{\pm 5i\pi/6}$, where $i = \sqrt{-1}$. We know that the entries of the matrix $P^n$ are of the form
\[ (P^n)_{ij} = a + 3^{-n/2} \big( b \cos(5\pi n/6) + c \sin(5\pi n/6) \big). \]
Since $(P^0)_{11} = 1$, $(P^1)_{11} = 0$, and $(P^2)_{11} = 4/9$, we have
\[ (P^n)_{11} = \frac{1}{3} + \frac{2}{3} \, 3^{-n/2} \cos(5n\pi/6). \]
In both cases, it saves time to notice that the invariant distribution puts probability $1/3$ on vertex 1, so that the constant term is $a = 1/3$.
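One can check the closed form against a direct matrix power (a sanity check, not part of the original solutions):

```python
import numpy as np

# Compare (P^n)_{11} from Solution 6 with a direct matrix power,
# for the biased flea.
P = np.array([[0, 2/3, 1/3],
              [1/3, 0, 2/3],
              [2/3, 1/3, 0]])

for n in range(8):
    direct = np.linalg.matrix_power(P, n)[0, 0]
    closed = 1/3 + (2/3) * 3 ** (-n / 2) * np.cos(5 * np.pi * n / 6)
    print(f"n={n}: matrix power={direct:.6f}  closed form={closed:.6f}")
```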

Problem 7. Consider the second flea of Problem 6. What is the expected number of hops the flea makes before it is first back to vertex 1? What is the expected number of times the flea visits vertex 3 before first reaching vertex 2? (Assume that the vertices are labelled so that $1 \to 2 \to 3 \to 1$ is clockwise.)

Solution 7. Consider the Markov chain $(X_n)_{n \ge 0}$ on $\{1, 2, 3\}$ with transition matrix
\[ P = \begin{pmatrix} 0 & 2/3 & 1/3 \\ 1/3 & 0 & 2/3 \\ 2/3 & 1/3 & 0 \end{pmatrix}. \]
For each vertex $i$ let $T_i = \inf\{n \ge 1 : X_n = i\}$. Since the chain is irreducible and positive recurrent, and has unique invariant distribution $\pi = (1/3, 1/3, 1/3)$, we have
\[ E_1(T_1) = \frac{1}{\pi_1} = 3. \]
Now we are asked to compute $E_1\big( \sum_{n=1}^{T_2} \mathbf{1}_{\{X_n = 3\}} \big)$. Here are two approaches.

First, let $k_i = E_i\big( \sum_{n=1}^{T_2} \mathbf{1}_{\{X_n = 3\}} \big)$. Then by conditioning on $X_1$ we have
\begin{align*}
k_1 &= E\Big( \sum_{n=1}^{T_2} \mathbf{1}_{\{X_n = 3\}} \,\Big|\, X_0 = 1, X_1 = 2 \Big) P(X_1 = 2 \mid X_0 = 1) \\
&\quad + E\Big( \sum_{n=1}^{T_2} \mathbf{1}_{\{X_n = 3\}} \,\Big|\, X_0 = 1, X_1 = 3 \Big) P(X_1 = 3 \mid X_0 = 1) \\
&= (0)(2/3) + (1 + k_3)(1/3),
\end{align*}
where we have used the time homogeneity of the chain. Similarly,
\[ k_3 = (0)(1/3) + (k_1)(2/3), \]
so that $k_1 = 3/7$.

Here is another approach. Let $N = \sum_{n=1}^{T_2} \mathbf{1}_{\{X_n = 3\}}$. Note that
\[ P_1(N = 0) = P(X_1 = 2 \mid X_0 = 1) = 2/3 \]
and for $k \ge 1$,
\begin{align*}
P_1(N = k) &= P_1(X_1 = 3, X_2 = 1, \ldots, X_{2k-2} = 1, X_{2k-1} = 3, X_{2k} = 2) \\
&\quad + P_1(X_1 = 3, X_2 = 1, \ldots, X_{2k-1} = 3, X_{2k} = 1, X_{2k+1} = 2) \\
&= (1/3)^{k+1} (2/3)^{k-1} + (1/3)^k (2/3)^{k+1} = (7/6)(2/9)^k,
\end{align*}
so that
\[ E_1(N) = \sum_{k=1}^{\infty} k \, (7/6)(2/9)^k = 3/7 \]
as before.

[Thanks to those who contributed to these solutions during the examples class.]

Although irrelevant to the question, note that
\[ k_2 = E_2(N) = \gamma^{(2)}_3 = \pi_3 / \pi_2 = 1 \]
is the expected number of times the flea visits vertex 3 before returning to vertex 2, assuming the flea began at vertex 2.
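Both answers are easy to confirm by Monte Carlo; a sketch follows (vertex $v$ of the triangle is state $v - 1$ in the code, and the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
# Vertex v of the triangle is state v - 1 here.
P = np.array([[0, 2/3, 1/3],
              [1/3, 0, 2/3],
              [2/3, 1/3, 0]])

def step(x):
    return int(rng.choice(3, p=P[x]))

def return_time():
    """Hops until the flea, started at vertex 1, is first back there."""
    x, t = step(0), 1
    while x != 0:
        x, t = step(x), t + 1
    return t

def visits_to_3_before_2():
    """Visits to vertex 3 before first reaching vertex 2, started at vertex 1."""
    x, visits = step(0), 0
    while x != 1:
        if x == 2:
            visits += 1
        x = step(x)
    return visits

M = 20_000
print("E_1(T_1) estimate:", sum(return_time() for _ in range(M)) / M)            # about 3
print("E_1(N)   estimate:", sum(visits_to_3_before_2() for _ in range(M)) / M)   # about 3/7
```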