Introduction to Probability Example Sheet 3 - Michaelmas 2006 Michael Tehranchi

Problem 1. Let $(X_n)_{n \ge 0}$ be a homogeneous Markov chain on $S$ with transition matrix $P$. Given a $k \in \mathbb{N}$, let $Z_n = X_{kn}$. Prove that $(Z_n)_{n \ge 0}$ is a Markov chain with transition matrix $P^k$.

Solution 1. First we prove a general result: if $(X_n)_{n \ge 0}$ is a Markov chain then for any collection of states $i_1, \dots, i_k$ and any collection of indices $0 \le n_1 < \dots < n_k$ we have
$$P(X_{n_k} = i_k \mid X_{n_{k-1}} = i_{k-1}, \dots, X_{n_1} = i_1) = P(X_{n_k} = i_k \mid X_{n_{k-1}} = i_{k-1}).$$
Equivalently, we will show that
$$P(X_{n_1} = i_1, \dots, X_{n_k} = i_k) = P(X_{n_k} = i_k \mid X_{n_{k-1}} = i_{k-1}) \cdots P(X_{n_2} = i_2 \mid X_{n_1} = i_1)\, P(X_{n_1} = i_1).$$
Note that
$$P(X_{n_1} = i_1, \dots, X_{n_k} = i_k) = \sum_{j \in A^{n_1, \dots, n_k}_{i_1, \dots, i_k}} P(X_0 = j_0, \dots, X_{n_k} = j_{n_k})$$
where
$$A^{n_1, \dots, n_k}_{i_1, \dots, i_k} = \{(j_0, \dots, j_{n_k}) \in S^{n_k + 1} : j_{n_l} = i_l \text{ for } l = 1, \dots, k\}.$$
But by induction we have
$$P(X_0 = j_0, \dots, X_n = j_n) = P(X_n = j_n \mid X_{n-1} = j_{n-1}) \cdots P(X_1 = j_1 \mid X_0 = j_0)\, P(X_0 = j_0).$$
Hence for $n \ge m$ and $i_m, i_n \in S$ we have
$$P(X_n = i_n \mid X_m = i_m) = \frac{P(X_n = i_n, X_m = i_m)}{P(X_m = i_m)}
= \frac{\sum_{j \in A^{m,n}_{i_m,i_n}} P(X_n = j_n \mid X_{n-1} = j_{n-1}) \cdots P(X_1 = j_1 \mid X_0 = j_0)\, P(X_0 = j_0)}{\sum_{j \in A^{m}_{i_m}} P(X_m = j_m \mid X_{m-1} = j_{m-1}) \cdots P(X_1 = j_1 \mid X_0 = j_0)\, P(X_0 = j_0)}
= \sum_{j \in B^{m,n}_{i_m,i_n}} P(X_n = j_n \mid X_{n-1} = j_{n-1}) \cdots P(X_{m+1} = j_{m+1} \mid X_m = j_m)$$
where $B^{m,n}_{i_m,i_n} = \{(j_m, \dots, j_n) \in S^{n-m+1} : j_m = i_m,\ j_n = i_n\}$. The claim now follows.

Returning to the specific problem, we have
$$P(Z_n = i_n \mid Z_{n-1} = i_{n-1}, \dots, Z_0 = i_0) = P(X_{kn} = i_n \mid X_{k(n-1)} = i_{n-1}, \dots, X_0 = i_0) = P(X_{kn} = i_n \mid X_{k(n-1)} = i_{n-1}) = (P^k)_{i_{n-1}, i_n} = P(Z_n = i_n \mid Z_{n-1} = i_{n-1}),$$
and $(Z_n)_{n \ge 0}$ is a Markov chain with transition matrix $P^k$.
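The identity $(P^k)_{ij} = P(X_k = j \mid X_0 = i)$ used above can be sanity-checked numerically: the $k$-step transition probability, computed by summing the product of one-step probabilities over all intermediate paths, agrees with the matrix power $P^k$. The 3-state matrix and helper names below are my own illustration, not part of the sheet.

```python
# Check that the k-step path sum equals the matrix power P^k
# (hypothetical 3-state chain chosen for illustration).

from itertools import product

P = [[0.0, 0.5, 0.5],
     [0.3, 0.2, 0.5],
     [0.6, 0.4, 0.0]]
S = range(3)

def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    """Compute A^k by repeated multiplication (k >= 1)."""
    out = A
    for _ in range(k - 1):
        out = mat_mult(out, A)
    return out

def k_step_prob(i, j, k):
    """P(X_k = j | X_0 = i): sum over all intermediate states."""
    total = 0.0
    for path in product(S, repeat=k - 1):
        states = (i,) + path + (j,)
        prob = 1.0
        for a, b in zip(states, states[1:]):
            prob *= P[a][b]
        total += prob
    return total

k = 3
Pk = mat_pow(P, k)
assert all(abs(Pk[i][j] - k_step_prob(i, j, k)) < 1e-12 for i in S for j in S)
```

The brute-force path sum is exponential in $k$, so this is only a check on tiny examples; the matrix power is the efficient computation.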
Problem 2. Let $U_1, U_2, \dots$ be a sequence of independent random variables uniformly distributed on $[0,1]$. Given a function $G : S \times [0,1] \to S$, let $(X_n)_{n \ge 0}$ be defined recursively by $X_{n+1} = G(X_n, U_{n+1})$. Show that $(X_n)_{n \ge 0}$ is a Markov chain on $S$. Prove that all Markov chains can be realized in this fashion with a suitable choice of the function $G$.

Solution 2. Let $i_0, \dots, i_{n+1}$ be any collection of points in $S$. Since $U_{n+1}$ is independent of $X_0, \dots, X_n$ we have
$$P(X_{n+1} = i_{n+1} \mid X_0 = i_0, \dots, X_n = i_n) = P(G(X_n, U_{n+1}) = i_{n+1} \mid X_0 = i_0, \dots, X_n = i_n) = P(G(i_n, U_{n+1}) = i_{n+1}) = P(G(X_n, U_{n+1}) = i_{n+1} \mid X_n = i_n) = P(X_{n+1} = i_{n+1} \mid X_n = i_n),$$
and hence $(X_n)_{n \ge 0}$ is a Markov chain. The transition probabilities are then given by $p_{ij} = P(G(i, U_1) = j)$.

Conversely, suppose $P = (p_{ij})_{i,j}$ is an arbitrary stochastic matrix. Let $G : S \times [0,1] \to S$ be defined by
$$G(i, u) = j \quad \text{if } \sum_{k=1}^{j-1} p_{ik} \le u < \sum_{k=1}^{j} p_{ik}.$$
If $U$ is a random variable uniformly distributed on $[0,1]$ then $P(G(i, U) = j) = p_{ij}$ by construction. Hence the Markov chain with transition matrix $P$ can be realized by the recurrence $X_{n+1} = G(X_n, U_{n+1})$, where $U_1, U_2, \dots$ is a sequence of independent uniform $[0,1]$ random variables.

Problem 3. Let $X_1, X_2, \dots$ be a sequence of independent random variables with
$$P(X_n = 1) = P(X_n = -1) = \frac{1}{2}$$
for all $n$. Let $S_n = X_1 + \dots + X_n$. Prove that $(S_n)_{n \ge 0}$ is a recurrent Markov chain on the set of integers $\mathbb{Z} = \{0, \pm 1, \pm 2, \dots\}$. [You might find Stirling's formula useful: $n! \sim \sqrt{2\pi}\, n^{n+1/2} e^{-n}$.]

Solution 3. The stochastic process $(S_n)_{n \ge 0}$ is a Markov chain since for all integers $i_1, \dots, i_{n+1}$ we have
$$P(S_{n+1} = i_{n+1} \mid S_1 = i_1, \dots, S_n = i_n) = P(S_n + X_{n+1} = i_{n+1} \mid S_1 = i_1, \dots, S_n = i_n) = P(X_{n+1} = i_{n+1} - i_n) = P(S_{n+1} = i_{n+1} \mid S_n = i_n)$$
where we have used the fact that $X_{n+1}$ is independent of $S_1, \dots, S_n$.

We can prove recurrence in two ways. First, by counting the left and right turns of the random walk we have
$$P(S_n = 0) = \begin{cases} 0 & \text{if } n \text{ is odd} \\ 2^{-n} \binom{n}{n/2} & \text{if } n \text{ is even.} \end{cases}$$
By Stirling's formula, we have the approximation
$$\binom{2k}{k} \sim \frac{2^{2k}}{\sqrt{\pi k}},$$
which implies
$$\sum_{n=1}^{\infty} p_{00}(n) = \sum_{n=1}^{\infty} P(S_n = 0) = \infty.$$
The state zero is thus a recurrent state. Since the Markov chain is irreducible, it is recurrent.

Alternatively, let $Y_n = S_n + Y_0$ (a walk started at an arbitrary integer $Y_0$) and define the stopping time $T = \inf\{n \ge 0 : Y_n = 0\}$. Let $h_i = P(T < \infty \mid Y_0 = i)$. By conditioning on the first step of the walk we have
$$h_i = \tfrac{1}{2} h_{i-1} + \tfrac{1}{2} h_{i+1}.$$
All solutions to this linear difference equation are of the form $h_i = a + bi$ for constants $a, b \in \mathbb{R}$. Since we have the bound $0 \le h_i \le 1$, then $b = 0$. Also, we have the boundary condition $h_0 = 1$, so $a = 1$. In particular,
$$P(S_n = 0 \text{ for some } n \ge 1) = \tfrac{1}{2} h_{-1} + \tfrac{1}{2} h_{1} = 1,$$
and the random walk is recurrent.

Problem 4. Fix a natural number $d$, and let $X_1, X_2, \dots$ be a sequence of independent vector-valued random variables with
$$P(X_n = e_i) = P(X_n = -e_i) = \frac{1}{2d}$$
for all $1 \le i \le d$, where
$$e_i = (\underbrace{0, \dots, 0}_{i-1}, 1, \underbrace{0, \dots, 0}_{d-i}).$$
Let $S_n = X_1 + \dots + X_n$. Prove that $(S_n)_{n \ge 0}$ is a transient Markov chain on $\mathbb{Z}^d$ if and only if $d \ge 3$. [You may find the following inequality useful: if $i_1 + \dots + i_d = dn$ then $i_1! \cdots i_d! \ge (n!)^d$.]

Solution 4. As before, by counting the left, right, up, down, etc. turns of the random walk we have
$$P(S_n = 0) = \begin{cases} 0 & \text{if } n \text{ is odd} \\ (2d)^{-n} \displaystyle\sum_{i_1 + \dots + i_d = n/2} \frac{n!}{(i_1!)^2 \cdots (i_d!)^2} & \text{if } n \text{ is even.} \end{cases}$$
For $d = 2$, a miracle occurs and $P(S_{2k} = 0) = \left[2^{-2k} \binom{2k}{k}\right]^2$ exactly. By Stirling's formula we have $P(S_{2k} = 0) \sim (\pi k)^{-1}$, and since $\sum_n p_{00}(n) = \infty$ the walk is recurrent.
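The exact return probabilities for $d = 1$ and $d = 2$ and their Stirling asymptotics can be checked numerically; the helper name below is my own illustration. The partial sums of $p_{00}(n)$ grow without bound, consistent with recurrence in these two dimensions.

```python
# Numerical check: P(S_{2k}=0) = 2^{-2k} C(2k,k) ~ 1/sqrt(pi k) for d=1,
# and its square ~ 1/(pi k) for d=2; partial sums diverge like 2 sqrt(K/pi).

import math

def p1(k):
    """Exact P(S_{2k} = 0) for the simple symmetric walk on Z."""
    return math.comb(2 * k, k) / 4 ** k

# Stirling asymptotics: sqrt(pi k) * p1(k) -> 1 as k grows.
assert abs(p1(500) * math.sqrt(math.pi * 500) - 1) < 1e-3

# d = 2: P(S_{2k} = 0) = p1(k)^2 ~ 1/(pi k).
assert abs(p1(500) ** 2 * math.pi * 500 - 1) < 5e-3

# Partial sums of p00 keep growing, roughly like 2 sqrt(K / pi).
sums = [sum(p1(k) for k in range(1, K + 1)) for K in (100, 400, 1600)]
assert sums[0] < sums[1] < sums[2] and sums[2] > 1.8 * sums[1]
```

Quadrupling the horizon roughly doubles the partial sum, the signature of a $k^{-1/2}$ tail, whereas a convergent series would plateau.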
We can bound the probability of $\{S_{2k} = 0\}$ by
$$(2d)^{-2k} \sum_{i_1 + \dots + i_d = k} \frac{(2k)!}{(i_1!)^2 \cdots (i_d!)^2} \le (2d)^{-2k} \frac{(2k)!}{k!} \cdot \frac{1}{\min_{i_1 + \dots + i_d = k} i_1! \cdots i_d!} \sum_{i_1 + \dots + i_d = k} \frac{k!}{i_1! \cdots i_d!} = (4d)^{-k} \frac{(2k)!}{k! \min_{i_1 + \dots + i_d = k} i_1! \cdots i_d!} = (4d)^{-k} \frac{(2k)!}{k!\, [(k/d)!]^d} \le C k^{-d/2}$$
for some constant $C > 0$, at least when $k$ is divisible by $d$, where we have used the fact
$$\sum_{i_1 + \dots + i_d = k} \frac{k!}{i_1! \cdots i_d!} = d^k.$$
(If $k$ is not divisible by $d$ we have
$$\min_{i_1 + \dots + i_d = k} i_1! \cdots i_d! = (\lfloor k/d \rfloor !)^{d(1 - \{k/d\})} \left((1 + \lfloor k/d \rfloor)!\right)^{d\{k/d\}},$$
where $\lfloor x \rfloor$ is the greatest integer less than or equal to $x$ and $\{x\} = x - \lfloor x \rfloor$ is the fractional part of $x$, in which case the asymptotics still hold.) If $d \ge 3$ the sum $\sum_n p_{00}(n)$ converges and $0$ is a transient state, and hence the chain is transient.

Problem 5. Let $T_1$ and $T_2$ be stopping times for a Markov chain $(X_n)_{n \ge 0}$ on $S$. Prove that each of the following are also stopping times:
1. $T = \min\{n : X_n = i\}$ for some fixed $i \in S$.
2. $T(\omega) = N$ for all $\omega \in \Omega$, for a fixed $N \in \mathbb{N}$.
3. $T = \min\{T_1, T_2\}$.
4. $T = \max\{T_1, T_2\}$.
5. $T = T_1 + T_2$.

Solution 5. By definition, a random variable $T : \Omega \to \mathbb{N} \cup \{\infty\}$ is a stopping time if and only if for every $n \in \mathbb{N}$ there is a function $f_n : S^{n+1} \to \{0,1\}$ such that $1_{\{T = n\}} = f_n(X_0, \dots, X_n)$. Notice that if $T$ is a stopping time then there exist functions $g_n$ such that $1_{\{T \ge n+1\}} = g_n(X_0, \dots, X_n)$, since
$$1_{\{T \ge n+1\}} = 1 - 1_{\{T \le n\}} = 1 - \sum_{k=0}^{n} f_k(X_0, \dots, X_k).$$
Conversely, if a random variable $T$ has the property that for every $n \in \mathbb{N}$ there is a function $g_n$ such that $1_{\{T \ge n+1\}} = g_n(X_0, \dots, X_n)$, then $T$ is a stopping time, since
$$1_{\{T = n\}} = 1_{\{T \ge n\}} - 1_{\{T \ge n+1\}} = g_{n-1}(X_0, \dots, X_{n-1}) - g_n(X_0, \dots, X_n).$$
Similarly, $T$ is a stopping time if and only if there exist functions $h_n$ such that $1_{\{T \le n\}} = h_n(X_0, \dots, X_n)$.
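This characterization can be tested by brute force on a toy example: over a finite horizon, $T$ is a stopping time precisely when each event $\{T = n\}$ is determined by $X_0, \dots, X_n$, i.e. any two paths that agree up to time $n$ give the indicator the same value. The state space, horizon, and helper names below are my own illustration, not part of the sheet.

```python
# Brute-force check that first-hitting times, constants, and the min,
# max and sum of two stopping times are again stopping times on a
# toy two-state path space, while a "last visit" time is not.

from itertools import product

S = (0, 1)
N_STEPS = 4
paths = list(product(S, repeat=N_STEPS + 1))  # all paths (X_0, ..., X_4)

def first_hit(path, i):
    """T = min{n : X_n = i}, with a large sentinel if i is never hit."""
    for n, x in enumerate(path):
        if x == i:
            return n
    return N_STEPS + 1  # stands in for "infinity" on this finite horizon

def is_stopping_time(T):
    """Check {T = n} depends only on X_0..X_n for every n in horizon."""
    for n in range(N_STEPS + 1):
        seen = {}
        for p in paths:
            key = p[:n + 1]
            val = (T(p) == n)
            if key in seen and seen[key] != val:
                return False
            seen[key] = val
    return True

T1 = lambda p: first_hit(p, 1)
T2 = lambda p: 2  # the constant time N = 2
assert is_stopping_time(T1) and is_stopping_time(T2)
assert is_stopping_time(lambda p: min(T1(p), T2(p)))
assert is_stopping_time(lambda p: max(T1(p), T2(p)))
assert is_stopping_time(lambda p: T1(p) + T2(p))

# A non-example: the *last* visit to state 1 is not a stopping time,
# since deciding it requires looking at the future of the path.
def last_hit(p):
    idx = [n for n, x in enumerate(p) if x == 1]
    return idx[-1] if idx else N_STEPS + 1

assert not is_stopping_time(last_hit)
```

The failing non-example makes the definition concrete: two paths agreeing up to time $n$ can disagree about whether the last visit has already happened.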
1. $1_{\{T = n\}} = 1_{\{X_0 \ne i, \dots, X_{n-1} \ne i,\ X_n = i\}}$.
2. $1_{\{T = n\}} = \begin{cases} 1 & \text{if } n = N \\ 0 & \text{if } n \ne N. \end{cases}$
3. $1_{\{T \ge n+1\}} = 1_{\{T_1 \ge n+1,\ T_2 \ge n+1\}} = 1_{\{T_1 \ge n+1\}}\, 1_{\{T_2 \ge n+1\}}$.
4. $1_{\{T \le n\}} = 1_{\{T_1 \le n,\ T_2 \le n\}} = 1_{\{T_1 \le n\}}\, 1_{\{T_2 \le n\}}$.
5. $1_{\{T = n\}} = \sum_{k=0}^{n} 1_{\{T_1 = k\}}\, 1_{\{T_2 = n-k\}}$.

Problem 6. A flea hops randomly on the vertices of a triangle with vertices labelled 1, 2, and 3, hopping to each of the other vertices with equal probability. If the flea starts at vertex 1, find the probability that after $n$ hops the flea is back to vertex 1. A second flea also starts at vertex 1 and hops about on the vertices of a triangle, but this flea is twice as likely to jump clockwise as anticlockwise. What is the probability that after $n$ hops this second flea is back to vertex 1?

Solution 6. At any time, the flea is either on vertex 1 (state A), or it is on one of the two other vertices (state B). If it is in state A, it hops to state B with probability 1. On the other hand, if it is in state B, it hops to state A with probability $1/2$ and stays in state B with probability $1/2$. Hence the transition matrix is
$$P = \begin{pmatrix} 0 & 1 \\ 1/2 & 1/2 \end{pmatrix}.$$
The eigenvalues of $P$ are $1$ and $-1/2$. We know from general principles that $(P^n)_{11} = a + b(-1/2)^n$ for some constants $a, b$. But $(P^0)_{11} = 1$ and $(P^1)_{11} = 0$, so the desired probability is
$$(P^n)_{11} = \frac{1}{3} + \frac{2}{3}\left(-\frac{1}{2}\right)^n.$$

Now label the vertices of the triangle states 1, 2, and 3. The transition matrix is
$$P = \begin{pmatrix} 0 & 2/3 & 1/3 \\ 1/3 & 0 & 2/3 \\ 2/3 & 1/3 & 0 \end{pmatrix}$$
with eigenvalues $1$ and $\frac{-1 \pm i\, 3^{-1/2}}{2} = 3^{-1/2} e^{\pm 5i\pi/6}$, where $i = \sqrt{-1}$. We know that the entries of the matrix $P^n$ are of the form
$$(P^n)_{ij} = a + \left(3^{-1/2}\right)^n \left(b \cos(5\pi n/6) + c \sin(5\pi n/6)\right).$$
Since $(P^0)_{11} = 1$, $(P^1)_{11} = 0$, and $(P^2)_{11} = \frac{4}{9}$, we have
$$(P^n)_{11} = \frac{1}{3} + \frac{2}{3}\, 3^{-n/2} \cos(5n\pi/6).$$
In both cases, it saves time to notice that the invariant distribution puts probability $1/3$ on vertex 1, so that the constant term is $a = 1/3$.

Problem 7. Consider the second flea of Problem 6. What is the expected number of hops the flea makes before it is first back to vertex 1? What is the expected number of times the flea visits vertex 3 before first reaching vertex 2? (Assume that the vertices are labelled so that $1 \to 2 \to 3 \to 1 \to \cdots$ is clockwise.)

Solution 7. Consider the Markov chain $(X_n)_{n \ge 0}$ on $\{1, 2, 3\}$ with transition matrix
$$P = \begin{pmatrix} 0 & 2/3 & 1/3 \\ 1/3 & 0 & 2/3 \\ 2/3 & 1/3 & 0 \end{pmatrix}.$$
For each vertex $i$ let $T_i = \inf\{n \ge 1 : X_n = i\}$. Since the chain is irreducible and positive recurrent, and has unique invariant distribution $\pi = (1/3, 1/3, 1/3)$, we have $E_1(T_1) = 1/\pi_1 = 3$.

Now we are asked to compute $E_1\left(\sum_{n=1}^{T_2} 1_{\{X_n = 3\}}\right)$. Here are two approaches.

First, let $k_i = E_i\left(\sum_{n=1}^{T_2} 1_{\{X_n = 3\}}\right)$. Then by conditioning on $X_1$ we have
$$k_1 = E\left(\sum_{n=1}^{T_2} 1_{\{X_n = 3\}} \,\Big|\, X_0 = 1\right) = E\left(\sum_{n=1}^{T_2} 1_{\{X_n = 3\}} \,\Big|\, X_0 = 1, X_1 = 2\right) P(X_1 = 2 \mid X_0 = 1) + E\left(\sum_{n=1}^{T_2} 1_{\{X_n = 3\}} \,\Big|\, X_0 = 1, X_1 = 3\right) P(X_1 = 3 \mid X_0 = 1) = (0)(2/3) + (1 + k_3)(1/3),$$
where we have used the time homogeneity of the chain. Similarly,
$$k_3 = (0)(1/3) + (k_1)(2/3),$$
so that $k_1 = 3/7$.

Here is another approach. Let $N = \sum_{n=1}^{T_2} 1_{\{X_n = 3\}}$. Note that
$$P_1(N = 0) = P(X_1 = 2 \mid X_0 = 1) = 2/3,$$
and for $k \ge 1$,
$$P_1(N = k) = P_1(X_1 = 3, X_2 = 1, \dots, X_{2k-2} = 1, X_{2k-1} = 3, X_{2k} = 2) + P_1(X_1 = 3, X_2 = 1, \dots, X_{2k-1} = 3, X_{2k} = 1, X_{2k+1} = 2) = (1/3)^{k+1}(2/3)^{k-1} + (1/3)^{k}(2/3)^{k+1} = (7/6)(2/9)^k,$$
so that
$$E_1(N) = \sum_{k=1}^{\infty} k\, (7/6)(2/9)^k = 3/7,$$
as before.

[Thanks to those who contributed to these solutions during the examples class.]

Although irrelevant to the question, note that $k_2 = E_2(N) = \gamma_3^{(2)} = \pi_3/\pi_2 = 1$ is the expected number of times the flea visits vertex 3 before returning to vertex 2, assuming the flea began at vertex 2.
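As a closing check, the claims of Solutions 6 and 7 can be verified exactly or to machine precision: the closed forms for $(P^n)_{11}$ against matrix powers, the invariance of $\pi = (1/3, 1/3, 1/3)$, the value $k_1 = 3/7$, and the fact that the distribution of $N$ has total mass 1 and mean $3/7$. The helper names are my own sketch, not from the sheet.

```python
# Verify the flea closed forms and the expectations from Solution 7.

import math
from fractions import Fraction

def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

# First flea, lumped 2-state chain: (P^n)_{11} = 1/3 + (2/3)(-1/2)^n.
P2 = [[0.0, 1.0], [0.5, 0.5]]
M = [[1.0, 0.0], [0.0, 1.0]]
for n in range(13):
    assert abs(M[0][0] - (1/3 + (2/3) * (-0.5) ** n)) < 1e-12
    M = mat_mult(M, P2)

# Second flea: eigenvalues 3^{-1/2} e^{+-5i pi/6} give
# (P^n)_{11} = 1/3 + (2/3) 3^{-n/2} cos(5 pi n / 6).
P3 = [[0, 2/3, 1/3], [1/3, 0, 2/3], [2/3, 1/3, 0]]
M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for n in range(13):
    closed = 1/3 + (2/3) * 3 ** (-n / 2) * math.cos(5 * math.pi * n / 6)
    assert abs(M[0][0] - closed) < 1e-12
    M = mat_mult(M, P3)

# Solution 7: pi = (1/3, 1/3, 1/3) is invariant, so E_1(T_1) = 3, and
# the first-step equations k1 = (1/3)(1 + k3), k3 = (2/3) k1 give 3/7.
Q = [[Fraction(0), Fraction(2, 3), Fraction(1, 3)],
     [Fraction(1, 3), Fraction(0), Fraction(2, 3)],
     [Fraction(2, 3), Fraction(1, 3), Fraction(0)]]
pi = [Fraction(1, 3)] * 3
assert all(sum(pi[i] * Q[i][j] for i in range(3)) == pi[j] for j in range(3))
k1 = Fraction(1, 3) / (1 - Fraction(2, 9))
assert k1 == Fraction(3, 7)
assert k1 == Fraction(1, 3) * (1 + Fraction(2, 3) * k1)

# Distribution of N: P(N=0) = 2/3 and P(N=k) = (7/6)(2/9)^k sum to 1,
# with mean 3/7 (closed geometric-series forms).
r = Fraction(2, 9)
assert Fraction(2, 3) + Fraction(7, 6) * r / (1 - r) == 1
assert Fraction(7, 6) * r / (1 - r) ** 2 == Fraction(3, 7)
```

Using `Fraction` keeps the Solution 7 computations exact, so the identities $k_1 = 3/7$ and $E_1(N) = 3/7$ hold with no floating-point tolerance.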