LTCC Exercises

1. Markov chain
Let X_0, X_1, X_2, ... be a Markov chain with state space {1, 2, 3, 4} and transition matrix

         1/2  1/2   0    0
P  =      0   1/2  1/3  1/6
          …    …    …    …
          0    0    0    1

(a) What happens if the chain starts in state 4? If the chain starts in state 1, can it ever reach state 4?
(b) If X_0 = 3, describe the distribution of the length of time (that is, the number of steps) that the chain spends in state 3, and then the distribution of its next destination.

2. Weather forecasting
Assume:
(1) There are two possible weather conditions on any day: {rainy, sunny}.
(2) Tomorrow's weather depends only on today's weather.
Define

    Y_n := 1 if it is sunny on the nth day, 0 if it rains on the nth day, so that Y_n has state space S = {0, 1},

and X_n := Y_{n-1} + 2Y_n, with X_0 = 0. Define α = P(Y_{n+1} = 0 | Y_n = 0) and β = P(Y_{n+1} = 1 | Y_n = 1); see also the slides. Compute the transition matrix of X and draw the state transition diagram.

3. Gambler's Ruin
Program a simulation of the Gambler's Ruin as defined on the slides; that is, with p = 1/2, a = b = 10 and i = a. Try to make a similar graph for 10 simulated trajectories. Increase the number of simulated trajectories and use them to estimate θ_a and E_a. Compare the estimates with the theoretical derivations.
Hint: You can choose your own software. One way to simulate the steps in the process is to draw a value from a Bernoulli distribution Y ~ Bern(p) and then, given the current X_n, define X_{n+1} = X_n - 1 + 2Y. Use a built-in function to draw from the Bernoulli distribution.

4. Difference equations
Consider the Gambler's Ruin as a symmetric random walk with absorbing states; that is, p = q = 1/2. Say gambler A starts with k of the N chips in play and goes bankrupt if he has no more chips. Define p_k as P(A goes bankrupt | A starts in state k). Note that we have

    p_k = (1/2)(p_{k+1} + p_{k-1}),
with boundaries p_0 = 1 and p_N = 0. Derive p_k for k = 1, 2, ..., N - 1.
Hint: There is a quick solution by noting that the difference p_k - p_{k-1} is constant in k. You can also use the generic solution for difference equations, p_n = c_1 θ^n + c_2 n θ^n, where, in this case, θ is the root (with multiplicity 2) of (1/2)θ^2 - θ + 1/2 = 0.

5. First passage time
Let X_0, X_1, X_2, ... be a Markov chain with state space {1, 2, 3, 4, 5} and transition matrix

         1/2  1/2   0    0    0
          0   1/3   0   2/3   0
P  =      …    …    …    …    …
          0    0    0   1/5  4/5
          0    0    0    1    0

Define the random variable T_i = min{n ≥ 0 : X_n = 5}, given X_0 = i. Compute E(T_1); that is, the expected first passage time to state 5 given state 1 at time 0.
Hint: Use the law of total expectation and E(T_i) for i = 2, 3, 4, 5.

6. Markov or not Markov
A fair die is thrown repeatedly and independently. Denote by X_n the score obtained in the nth throw (i.e., X_n takes the values 1, 2, ..., 6 with equal probabilities). Define the stochastic processes
S_n: for n = 1, 2, 3, ..., let S_n be the sum of the scores obtained in the first n throws, i.e., S_n = X_1 + X_2 + ... + X_n;
Z_n: for n = 2, 3, 4, ..., let Z_n be the larger of the two most recent scores obtained, i.e., Z_n = max(X_n, X_{n-1}).
For each of the stochastic processes S_n and Z_n, state whether or not the stochastic process is a Markov chain and carefully justify your answer (if the process is not a Markov chain, then give an example where the Markov property breaks down). If the stochastic process is a Markov chain, then write down its transition matrix.

7. Three-state continuous-time Markov chain
Program a simulation of the three-state chain as defined on slide 99. Try to make a similar graph for 10 simulated trajectories. Increase the number of simulated trajectories and use them to estimate E[T_0] and E[T_1].
Hint: You can choose your own software. Slide 97 will help you to get started.
For example, for leaving state 0, simulate a transition time by drawing a value from the exponential distribution T ~ Exp(q_01 + q_02), and then draw from a Bernoulli distribution R ~ Bern(p = q_01/(q_01 + q_02)) to determine to which state the chain goes.

8. Illness-death model
Consider a three-state continuous-time Markov chain with q_01 = λ_01 and q_02 = q_12 = λ_D. Interpret this as an illness-death model, with state 2 representing death.
(a) Let T_0 be the event time of leaving state 0, which is exponentially distributed with rate λ_01 + λ_D. Show that T_0 has the same distribution function as min{T_A, T_B}, where T_A and T_B are independent exponentially distributed random variables with rates λ_01 and λ_D, respectively. (Note that the event time for the transition 0 → 1 is not independent of the time for 0 → 2.)
(b) For an individual in state 0, define T as the time of death. Show that this individual's mean survival time in state 1 is given by

    E(T) - E(T_0) = λ_01 / (λ_D λ_01 + λ_D^2).

9. Matrix exponential
For a continuous-time Markov process with time-homogeneous D × D generator matrix Q, the transition probability matrix P(t) for time t > 0 is the solution to the Kolmogorov backward equation P'(t) = QP(t) subject to P(0) = I_D, where I_D is the identity matrix. The solution is the matrix exponential

    P(t) = e^{tQ} = Σ_{k=0}^∞ (tQ)^k / k!.

(a) Show that if the D × D matrix Q has D linearly independent eigenvectors, then

    P(t) = A diag(e^{b_1 t}, ..., e^{b_D t}) A^{-1},

where b_1, ..., b_D are the eigenvalues of Q, A is the matrix with the corresponding eigenvectors as columns, and diag(x_1, ..., x_D) denotes the D × D diagonal matrix with diagonal entries x_1, ..., x_D.
(b) Why is this rewrite useful? Explain in words.

10. Matrix exponential
Consider the following generator matrix for a three-state progressive survival model:

         -(q_12 + q_13)   q_12   q_13          -1   1/2   1/2
Q  =           0         -q_23   q_23    =      0    -1    1
               0            0      0            0     0    0

(a) Use software to compute the eigenvalues and eigenvectors. Show that the matrix exponential cannot be expressed using the eigenvalue decomposition.
(b) Can you think of a way to approximate P(t)? Choose t = 1 as an illustration.
(c) (Optional) The R package expm includes functions to compute/approximate P(t). Load the package into R and have a look at the relevant function:
> library(expm)
> help(expm)
Use expm to explore the issue in (a) and the question in (b).
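The issue in exercise 10 can be explored numerically. Below is a minimal sketch in Python (the exercises allow any software; NumPy/SciPy stand in here for R's expm package): it shows that the eigenvector matrix of this Q is singular, so the decomposition in exercise 9(a) is unavailable, and it approximates P(1) by truncating the power series instead.

```python
import numpy as np
from scipy.linalg import expm

# Generator from exercise 10: states 0 -> 1 -> 2, with state 2 absorbing.
Q = np.array([[-1.0,  0.5,  0.5],
              [ 0.0, -1.0,  1.0],
              [ 0.0,  0.0,  0.0]])

# (a) The eigenvalues are {-1, -1, 0}, but the repeated eigenvalue -1 has
# only one independent eigenvector: Q is defective, so the eigenvector
# matrix A is singular and A diag(e^{b t}) A^{-1} cannot be formed.
vals, A = np.linalg.eig(Q)
print(np.linalg.svd(A, compute_uv=False))  # smallest singular value ~ 0

# (b) Approximate P(1) by truncating the series sum_k (t Q)^k / k!.
def expm_series(M, nterms=30):
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, nterms):
        term = term @ M / k
        out = out + term
    return out

P1_series = expm_series(Q)   # truncated power series at t = 1
P1_scipy = expm(Q)           # SciPy's Pade-based implementation
print(np.allclose(P1_series, P1_scipy))  # True
print(P1_series.sum(axis=1))             # rows sum to 1, as required
```

For this small Q the truncated series already agrees with SciPy's scaling-and-squaring/Padé routine to machine precision; for stiff generators the series alone can be numerically fragile, which is why dedicated routines exist.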
11. Matrix exponential
Show that a solution of the Kolmogorov forward equation P'(t) = P(t)Q is the matrix exponential

    P(t) = Σ_{k=0}^∞ (tQ)^k / k!.

Hint: start with d/dt Σ_{k=0}^∞ (tQ)^k / k! and rewrite the resulting sum using Σ_{k=1}^∞ ...

12. Poisson process
(a) Men and women arrive at a shop, forming independent Poisson processes of rates α and β per hour, respectively. Let M_1 be the time until the first male customer arrives and let W_1 be the time until the first female customer arrives. Show that p = P(M_1 < W_1) = α/(α + β).
(b) Let N be the number of male customers that arrive before the first female customer. Using p and the lack-of-memory property of the exponential distribution, or by conditioning on the arrival time W_1 of the first female customer, find the probability distribution of N.
(c) Find the distribution of the time Z = min(M_1, W_1) until the first customer arrives. Hint: first calculate P(Z > z).
(d) Exactly one female customer arrived in the first hour (note: nothing is said here about how many male customers arrived). Let T be the time at which the first customer arrived. Find P(T > t) and hence evaluate E(T). Hint: use the lemma about arrival times being (conditionally) uniformly distributed.

13. Discrete-time process: equilibrium distribution
Find the irreducible classes of states for the Markov chains with the following transition matrices (all state spaces begin at 0). State whether they are closed or not and classify them in terms of transience, recurrence (positive or null), period and ergodicity. State whether or not an equilibrium distribution exists and, if it does, find it.

(a) P =
1/3  2/3   0    0
1/2   0   1/2   0
 0   1/2   0   1/2
 0    1    0    0

(b) P =
1/2   0   1/2   0
 0   1/2   0   1/2
 …    …    …    …
 …    …    …    …

(c) P =
1/6   0    0   1/6  2/3
 0   1/3  2/3   0    0
 0    0    0    0    1
 0   1/4  1/4  1/2   0
 0   1/4  1/4   0   1/2
(d) P =
 0    0    0    1    0    0
 0   1/3   0   1/3   0   1/3
 …    …    …    …    …    …
1/3   0   1/3  1/3   0    0
 …    …    …    …    …    …
 …    …    …    …    …    …

14. Continuous-time process: equilibrium distribution
Consider the process X(t) with S = {0, 1}. Let the generator matrix be given by

         -μ    μ
Q  =      λ   -λ

The corresponding transition probability matrix is

         p_00(t)  p_01(t)          (λ + μ e^{-(λ+μ)t})/(λ+μ)    1 - p_00(t)
P(t) =                       =
         p_10(t)  p_11(t)          1 - p_11(t)                  (μ + λ e^{-(λ+μ)t})/(λ+μ)

(This matrix can be derived by eigenvalue decomposition of Q and matrix exponentiation, or by solving the forward equations; e.g., p'_00(t) = -p_00(t)μ + p_01(t)λ.)
(a) Use P(t) to derive the equilibrium distribution.
(b) Use Q to derive the equilibrium distribution. Hint: there is no need to derive P(t) first.
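As a numerical check on both parts of exercise 14, here is a short Python sketch (the rates μ = 2 and λ = 3 are illustrative choices, not taken from the exercise). It finds the equilibrium distribution π from Q by solving πQ = 0 as a null-space problem, and confirms that the rows of P(t) = e^{tQ} approach π for large t.

```python
import numpy as np
from scipy.linalg import expm, null_space

# Illustrative rates (not from the exercise sheet): mu = 2, lam = 3.
mu, lam = 2.0, 3.0
Q = np.array([[-mu,   mu],
              [ lam, -lam]])

# (b) Equilibrium distribution: pi solves pi Q = 0 with entries summing
# to 1, i.e. pi spans the null space of Q transposed.
v = null_space(Q.T)[:, 0]
pi = v / v.sum()
# Closed form: pi = (lam, mu) / (lam + mu)
print(pi)  # ~ [0.6, 0.4]

# (a) Each row of P(t) = e^{tQ} converges to pi as t grows, since the
# transient term e^{-(lam+mu)t} vanishes.
P_large_t = expm(10.0 * Q)
print(P_large_t)
```

Normalising the null-space vector by its sum (rather than its norm) is what turns an arbitrary solution of πQ = 0 into a probability distribution.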