LTCC: Exercise solutions

1. Markov chain

(a) Draw a state space diagram with the loops for the possible steps. If the chain starts in state 4, it must stay there. If the chain starts in state 1, it will remain in {1, 2}.

(b) For X_0 = 3, define K as the time (number of steps) in state 3 from the start; then

    P(K = k) = (1/3)^k (2/3),

and the distribution of K is Geom(2/3). In describing the next destination, we are conditioning on the fact that we don't go to state 3. Thus

    P(next destination is 2) = (1/2) / (1/2 + 1/6) = 3/4,

and, similarly, for destination 4 the probability is 1/4.

2. Weather forecasting

The state space of X is {0, 1, 2, 3}, where X_n codes the pair (Y_{n-1}, Y_n):

    X_n   (Y_{n-1}, Y_n)
     0        (0, 0)
     1        (1, 0)
     2        (0, 1)
     3        (1, 1)

The transition probabilities of X follow from the conditional probabilities for Y:

    X_n   (Y_{n-1}, Y_n)   P(Y_{n+1} = 0 | Y_n, Y_{n-1})              P(Y_{n+1} = 1 | Y_n, Y_{n-1})
     0        (0, 0)       p_{00} = P(X_{n+1} = 0 | X_n = 0) = α      p_{02} = P(X_{n+1} = 2 | X_n = 0) = 1 - α
     1        (1, 0)       p_{10} = P(X_{n+1} = 0 | X_n = 1) = α      p_{12} = P(X_{n+1} = 2 | X_n = 1) = 1 - α
     2        (0, 1)       p_{21} = P(X_{n+1} = 1 | X_n = 2) = 1 - β  p_{23} = P(X_{n+1} = 3 | X_n = 2) = β
     3        (1, 1)       p_{31} = P(X_{n+1} = 1 | X_n = 3) = 1 - β  p_{33} = P(X_{n+1} = 3 | X_n = 3) = β

State transition diagram: see the figure (not reproduced here).
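As a quick numerical check (my own addition, not part of the original solutions), the transition matrix of X can be typed into R directly from the table above; the values of alpha and beta are illustrative placeholders, not given by the exercise.

# Illustrative parameter values (placeholders, not specified by the exercise):
alpha <- 0.7
beta  <- 0.4

# Transition matrix of X on states 0, 1, 2, 3 (rows and columns in that order):
P <- matrix(c(alpha, 0,        1 - alpha, 0,
              alpha, 0,        1 - alpha, 0,
              0,     1 - beta, 0,         beta,
              0,     1 - beta, 0,         beta),
            nrow = 4, byrow = TRUE)

# Each row should sum to 1:
rowSums(P)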
3. Gambler's Ruin

As an example, part of my R code:

# Loop over the S iterations:
for(s in 1:S){
  # Start with X = i:
  n <- 0
  X <- i
  sim[n+1, s] <- X
  # Simulate process:
  while(X > 0 & X < (a+b)){
    # Draw direction:
    direction <- -1 + 2*rbinom(1, 1, p)
    # Next step:
    X <- X + direction
    # Save step:
    n <- n + 1
    sim[n+1, s] <- X
  }
}

Here sim is a matrix with S columns to store the simulated trajectories. Tricky: typically you index the rows in a matrix 1, 2, 3, ..., but the X_n are indexed by n = 0, 1, 2, .... When you do the summary stats to compute θ_a and E_a this needs some attention. Send me an email if you want the full R code.

4. Difference equations

See the handwritten solutions.

5. First passage time

See the handwritten solutions.

6. Markov or not

S_n: Note that S_{n+1} = S_n + X_{n+1}. Thus, given S_n, the distribution of S_{n+1} depends only on X_{n+1} and is independent of S_1, ..., S_{n-1}. Hence (S_n) is a Markov chain. The state space is {1, 2, ...} and the transition matrix P is given by

    P =
    ( 0  1/6  1/6  1/6  1/6  1/6  1/6   0    0    0   ... )
    ( 0   0   1/6  1/6  1/6  1/6  1/6  1/6   0    0   ... )
    ( 0   0    0   1/6  1/6  1/6  1/6  1/6  1/6   0   ... )
    ( 0   0    0    0   1/6  1/6  1/6  1/6  1/6  1/6  ... )
    etc.
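To see the band structure of P concretely, here is a small R sketch (my own addition, not from the solutions) that builds a finite truncation of this infinite matrix; the cut-off nmax is arbitrary.

# Finite truncation of the transition matrix of S_n on states 1, ..., nmax:
nmax <- 12
P <- matrix(0, nmax, nmax)
for(i in 1:(nmax - 1)){
  # From state i the chain jumps to i+1, ..., i+6, each with probability 1/6:
  js <- (i + 1):(i + 6)
  js <- js[js <= nmax]
  P[i, js] <- 1/6
}
round(P, 2)
# Rows near the cut-off are incomplete because those jumps leave the truncated state space.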
Z_n: This is not a Markov chain. For example, while

    P(Z_{n+1} = 1 | Z_n = 6, Z_{n-1} = 1) = P(Z_{n+1} = 1 | X_n = 6, X_{n-1} = 1) = 0,

we have

    P(Z_{n+1} = 1 | Z_n = 6, Z_{n-1} = 6, Z_{n-2} = 1) = P(Z_{n+1} = 1 | Z_n = 6, X_{n-1} = 6, X_{n-2} = 1) > 0.

To find the latter probability, note that

    P(Z_{n+1} = 1 | Z_n = 6, X_{n-1} = 6, X_{n-2} = 1)
      = P(Z_{n+1} = 1 and Z_n = 6 | X_{n-1} = 6, X_{n-2} = 1) / P(Z_n = 6 | X_{n-1} = 6, X_{n-2} = 1)
      = P(X_{n+1} = 1 and X_n = 1 | X_{n-1} = 6, X_{n-2} = 1) / P(Z_n = 6 | X_{n-1} = 6, X_{n-2} = 1)
      = (1/6)^2 / 1.

7. Three-state continuous-time Markov chain

As an example, part of my R code:

# Loop over the S iterations:
for(s in 1:S){
  # Simulate leaving state 0:
  t0 <- rexp(1, rate = q01 + q02)
  # Determine the next state:
  DRAW <- rbinom(1, 1, prob = q01/(q01 + q02))
  if(DRAW){X <- 1}else{X <- 2}
  # Update trajectory:
  sim[2, s] <- X
  sim.times[2, s] <- t0
  # Simulate leaving state 1 if applicable:
  if(X == 1){
    t1 <- rexp(1, rate = q12)
    sim[3, s] <- 2
    sim.times[3, s] <- t0 + t1
  }
}

Here sim is a matrix with S columns to store the simulated states, and sim.times is a matrix to store the simulated transition times. Do the summary stats using sim.times. For example, the holding time in state 0: T0 <- mean(sim.times[2,]). Send me an email if you want the full R code.
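For completeness, a minimal self-contained version of this simulation (my own sketch, not the full code offered above; the number of iterations S and the rates q01, q02, q12 are illustrative values):

set.seed(1)
S <- 1000                              # number of simulated trajectories
q01 <- 0.3; q02 <- 0.1; q12 <- 0.2     # illustrative transition rates

# States and transition times; at most 3 entries per trajectory (0 -> 1 -> 2 or 0 -> 2):
sim       <- matrix(NA, nrow = 3, ncol = S)
sim.times <- matrix(NA, nrow = 3, ncol = S)
sim[1, ]       <- 0
sim.times[1, ] <- 0

for(s in 1:S){
  # Leave state 0 after an Exp(q01 + q02) holding time:
  t0 <- rexp(1, rate = q01 + q02)
  # Go to state 1 with probability q01/(q01 + q02), otherwise to state 2:
  X <- ifelse(rbinom(1, 1, prob = q01/(q01 + q02)) == 1, 1, 2)
  sim[2, s]       <- X
  sim.times[2, s] <- t0
  if(X == 1){
    t1 <- rexp(1, rate = q12)
    sim[3, s]       <- 2
    sim.times[3, s] <- t0 + t1
  }
}

# Estimated mean holding time in state 0 vs. the theoretical value 1/(q01 + q02):
c(mean(sim.times[2, ]), 1/(q01 + q02))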
8. Illness-death model

(a) For the holding time in state 0: T_0 ~ Exp(λ_{01} + λ_D). Because of independence,

    P(T_A > t, T_B > t) = P(T_A > t) P(T_B > t).

Both variables are exponentially distributed, so

    P(T_A > t) P(T_B > t) = exp(-(λ_{01} + λ_D) t).

Note also that 1 - P(T_A > t, T_B > t) = P(min{T_A, T_B} < t). So min{T_A, T_B} ~ Exp(λ_{01} + λ_D) as well.

(b) The hazard of death is λ_D in both states, so the overall survival time T of an individual starting in state 0 has E(T) = 1/λ_D. From (a) we get E(T_0) = 1/(λ_{01} + λ_D). So the time that an individual who is currently in state 0 is expected to spend in state 1 (i.e., mean survival in state 1) is the difference:

    E(T) - E(T_0) = λ_{01} / (λ_D λ_{01} + λ_D^2).

9. Matrix exponential

(a) Note that with Q = A B A^{-1}, we have Q^k = A B^k A^{-1} with B^k a diagonal matrix. Use the rewrite A B^k A^{-1} in the summation series for the matrix exponential, and note that you can write the summation of matrices as summations of scalars within a diagonal matrix.

(b) Because of the decomposition of Q, the matrix exponential for P(t) has been reduced to a series of scalar exponentials, which simplifies the computation of P(t) considerably.

10. Matrix exponential

(a) You can compute eigenvectors in R using the function eigen, but the matrix with eigenvectors as columns cannot be inverted; that is, the eigenvectors are not linearly independent:

Q <- matrix(c(-1, 1/2, 1/2,
               0,  -1,   1,
               0,   0,   0), 3, 3, byrow = TRUE)
decomp <- eigen(Q)
A <- decomp$vectors
det(A)

(b) We can use a finite summation to approximate the infinite series:

# Time interval:
t <- 1
Rep <- 100
# Approximating the P matrix by a finite summation:
summation <- function(t, R){
  # k = 0:
  P <- diag(3)
  # k = 1:
  P <- P + (Q*t)/factorial(1)
  # k >= 2:
  for(r in 2:R){
    # Compute Q^r by repeated matrix multiplication:
    Q.r <- Q
    for(i in 2:r){
      Q.r <- Q.r %*% Q
    }
    P <- P + (Q.r * t^r)/factorial(r)
  }
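  # At this point P holds the partial sum of (Q t)^k / k! for k = 0, ..., R,
  # i.e. the truncated-series approximation to the matrix exponential.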
  return(P)
}

summation(t, Rep)

The quality of the approximation will depend on Rep and Q.

(c) (Optional)

library(expm)
# This will work:
t <- 1
expm(t*Q)
# Note that expm gives an error when using the eigenvalue decomposition:
expm(t*Q, method = "R_Eigen")

11. Matrix exponential

    d/dt P(t) = d/dt Σ_{n=0}^∞ (t^n Q^n / n!)
              = Σ_{n=0}^∞ (n t^{n-1} Q^n / n!)
              = Q ( Σ_{n=1}^∞ t^{n-1} Q^{n-1} / (n-1)! )
              = Q ( Σ_{m=0}^∞ t^m Q^m / m! )
              = Q P(t) = P(t) Q.

12. Poisson process

See the handwritten solutions.

13. Discrete-time process: equilibrium distribution

Classification of states is important here for deciding whether an equilibrium distribution exists or not. Note that an invariant distribution is not necessarily an equilibrium one.

(a) {0, 1, 2, 3} is finite and irreducible (so closed), hence positive recurrent. There is a self-loop, so the period is 1 and the chain is therefore ergodic. For an irreducible, ergodic Markov chain the equilibrium distribution exists (and is the invariant distribution) by the Main Limit Theorem. Solve π = πP to give π = (9/23, 8/23, 4/23, 2/23).

(b) {0, 1, 2, 3} is finite and irreducible (so closed), hence positive recurrent. The period is 2, so there is no equilibrium distribution.

(c) {0} and {3} are both not closed, hence transient and, although aperiodic, not ergodic. {1, 2, 4} is closed and finite, so positive recurrent. Its period is 1, so it is ergodic. An equilibrium distribution exists. Solve π = πP (transient states must have invariant probability 0) to give π = (0, 3/15, 4/15, 0, 8/15).

(d) {1} and {4} are both not closed, hence transient and, although aperiodic, not ergodic. {0, 2, 3} is closed and finite, so positive recurrent; its period is 3, so it is not ergodic. {5} is closed and finite, so positive recurrent; its period is 1, so it is ergodic. There are 2 closed classes, so there is no equilibrium distribution (the long-run behaviour depends upon the initial state).
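The hand computation π = πP can also be checked numerically. Below is a generic R sketch (my own addition; the transition matrices from the exercise sheet are not reproduced here, so the matrix P and the helper name invariant are illustrative placeholders): the invariant distribution is the normalised left eigenvector of P for eigenvalue 1.

# Invariant distribution as the normalised left eigenvector of P for eigenvalue 1:
invariant <- function(P){
  e <- eigen(t(P))
  v <- Re(e$vectors[, which.min(abs(e$values - 1))])
  v / sum(v)
}

# Illustrative 2-state example (not the matrix from the exercise sheet):
P <- matrix(c(0.9, 0.1,
              0.2, 0.8), 2, 2, byrow = TRUE)
invariant(P)   # should return (2/3, 1/3)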
14. Continuous-time process: equilibrium distribution

(a) Note that

    lim_{t→∞} p_{00}(t) = lim_{t→∞} p_{10}(t) = λ/(λ + µ),

and

    lim_{t→∞} p_{11}(t) = lim_{t→∞} p_{01}(t) = µ/(λ + µ).

By definition, π = (λ/(λ + µ), µ/(λ + µ)) is the equilibrium distribution.

(b) Solve πQ = 0. It follows that -µπ_1 + λ(1 - π_1) = 0, so π_1 = λ/(µ + λ), and thus π_2 = 1 - π_1 = µ/(µ + λ). Because π is an invariant distribution and X(t) is irreducible, π is the equilibrium distribution.
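As a numerical check (my own sketch, not part of the solutions): the generator below is labelled so that the first state has exit rate µ and the second exit rate λ, which is the labelling implied by the equation -µπ_1 + λ(1 - π_1) = 0 in (b); the values of λ and µ are illustrative, and the expm package is the one already used in exercise 10(c).

library(expm)

lambda <- 2; mu <- 3   # illustrative rates

# Generator of the two-state chain, states ordered as (0, 1) in part (a):
Q <- matrix(c(-mu,     mu,
              lambda, -lambda), 2, 2, byrow = TRUE)

# For large t both rows of P(t) = exp(Qt) should be close to the equilibrium distribution:
expm(50 * Q)
c(lambda, mu) / (lambda + mu)   # (λ/(λ + µ), µ/(λ + µ))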