LTCC. Exercises solutions

1. Markov chain

(a) Draw a state space diagram with the loops for the possible steps. If the chain starts in state 4, it must stay there. If the chain starts in state 1, it will remain in {1, 2}.

(b) For X_0 = 3, define K as the time (number of steps) spent in state 3 from the start. Then

P(K = k) = (1/3)^k (2/3),

and the distribution of K is Geom(2/3). In describing the next destination, we are conditioning on the fact that we do not go to state 3. Thus

P(next destination is 2) = (1/2) / (1/2 + 1/6) = 3/4,

and, similarly, the probability for destination 4 is 1/4.

2. Weather forecasting

The state space of X is {0, 1, 2, 3}, where the states code the pairs (Y_{n-1}, Y_n):

X_n   Y_{n-1}   Y_n
 0       0       0
 1       1       0
 2       0       1
 3       1       1

Translating P(Y_{n+1} = 0 | Y_n, Y_{n-1}) and P(Y_{n+1} = 1 | Y_n, Y_{n-1}) into transition probabilities for X:

X_n = 0:  p_00 = P(X_{n+1} = 0 | X_n = 0) = α        p_02 = P(X_{n+1} = 2 | X_n = 0) = 1 - α
X_n = 1:  p_10 = P(X_{n+1} = 0 | X_n = 1) = α        p_12 = P(X_{n+1} = 2 | X_n = 1) = 1 - α
X_n = 2:  p_21 = P(X_{n+1} = 1 | X_n = 2) = 1 - β    p_23 = P(X_{n+1} = 3 | X_n = 2) = β
X_n = 3:  p_31 = P(X_{n+1} = 1 | X_n = 3) = 1 - β    p_33 = P(X_{n+1} = 3 | X_n = 3) = β

State transition diagram: [diagram not reproduced in this transcription]
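As an illustration (not part of the original solutions; the values of α and β below are arbitrary), the transition matrix of X can be written down and checked in R:

# Transition matrix of the 4-state chain X_n, with rows/columns ordered 0, 1, 2, 3:
alpha <- 0.7; beta <- 0.6      # arbitrary example values
P <- matrix(c(alpha, 0, 1-alpha, 0,
              alpha, 0, 1-alpha, 0,
              0, 1-beta, 0, beta,
              0, 1-beta, 0, beta), 4, 4, byrow = TRUE)
rowSums(P)    # each row should sum to 1
P %*% P       # two-step transition probabilities P(X_{n+2} = j | X_n = i)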

3. Gambler's Ruin

As an example, part of my R code:

# Loop over the S iterations:
for(s in 1:S){
  # Start with X = i:
  n <- 0
  X <- i
  sim[n+1, s] <- X
  # Simulate the process:
  while(X > 0 & X < (a+b)){
    # Draw direction:
    direction <- -1 + 2*rbinom(1, 1, p)
    # Next step:
    X <- X + direction
    # Save step:
    n <- n + 1
    sim[n+1, s] <- X
  }
}

Here sim is a matrix with S columns to store the simulated trajectories. Tricky: typically you index the rows of a matrix 1, 2, 3, ..., but the X's are indexed by n = 0, 1, 2, ..., so this offset needs some attention when you do the summary statistics to compute θ_a and E_a. Send me an email if you want the full R code.
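As a sketch of how those summary statistics could be obtained (this is not the original code; the matrix dimensions, parameter values and helper expressions below are illustrative assumptions):

# Illustrative setup (values are assumptions, not taken from the exercise):
S <- 1000; a <- 5; b <- 5; i <- a; p <- 0.5
sim <- matrix(NA, nrow = 5001, ncol = S)   # rows 1..5001 hold steps n = 0, ..., 5000

# ... run the simulation loop shown above ...

# Absorbing state and duration of each trajectory:
final.state <- apply(sim, 2, function(x) x[max(which(!is.na(x)))])
duration    <- apply(sim, 2, function(x) sum(!is.na(x)) - 1)   # subtract 1 because n starts at 0
theta.a <- mean(final.state == 0)   # Monte Carlo estimate of the ruin probability
E.a     <- mean(duration)           # Monte Carlo estimate of the expected duration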

4. Difference equations

See the handwritten solutions.

5. First passage time

See the handwritten solutions.

6. Markov or not Markov

S_n: Note that S_{n+1} = S_n + X_{n+1}. Thus, given S_n, the distribution of S_{n+1} depends only on X_{n+1} and is independent of S_1, ..., S_{n-1}. Hence (S_n) is a Markov chain. The state space is {1, 2, ...} and the transition matrix P is given by

P =
  [ 0  1/6  1/6  1/6  1/6  1/6  1/6   0    0    0   ... ]
  [ 0   0   1/6  1/6  1/6  1/6  1/6  1/6   0    0   ... ]
  [ 0   0    0   1/6  1/6  1/6  1/6  1/6  1/6   0   ... ]
  [ 0   0    0    0   1/6  1/6  1/6  1/6  1/6  1/6  ... ]
  etc.

Z_n: This is not a Markov chain. For example, while

P(Z_{n+1} = 1 | Z_n = 6, Z_{n-1} = 1) = P(Z_{n+1} = 1 | X_n = 6, X_{n-1} = 1) = 0,

we have

P(Z_{n+1} = 1 | Z_n = 6, Z_{n-1} = 6, Z_{n-2} = 1) = P(Z_{n+1} = 1 | Z_n = 6, X_{n-1} = 6, X_{n-2} = 1) > 0.

To find the latter probability, note that

P(Z_{n+1} = 1 | Z_n = 6, X_{n-1} = 6, X_{n-2} = 1)
  = P(Z_{n+1} = 1 and Z_n = 6 | X_{n-1} = 6, X_{n-2} = 1) / P(Z_n = 6 | X_{n-1} = 6, X_{n-2} = 1)
  = P(X_{n+1} = 1 and X_n = 1 | X_{n-1} = 6, X_{n-2} = 1) / P(Z_n = 6 | X_{n-1} = 6, X_{n-2} = 1)
  = (1/6)^2 / 1.

7. Three-state continuous-time Markov chain

As an example, part of my R code:

# Loop over the S iterations:
for(s in 1:S){
  # Simulate leaving state 0:
  t0 <- rexp(1, rate = q01 + q02)
  # Determine which state is entered:
  DRAW <- rbinom(1, 1, prob = q01/(q01 + q02))
  if(DRAW){X <- 1}else{X <- 2}
  # Update trajectory:
  sim[2, s] <- X
  sim.times[2, s] <- t0
  # Simulate leaving state 1 if applicable:
  if(X == 1){
    t1 <- rexp(1, rate = q12)
    sim[3, s] <- 2
    sim.times[3, s] <- t0 + t1
  }
}

Here sim is a matrix with S columns to store the simulated states, and sim.times is a matrix to store the simulated transition times. Do the summary statistics using sim.times. For example, the holding time in state 0: T0 <- mean(sim.times[2,]). Send me an email if you want the full R code.
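As a quick check (a sketch under the assumption that q01, q02 and sim.times are defined as above), the Monte Carlo estimate can be compared with the theoretical mean holding time in state 0, which is 1/(q01 + q02) because the holding time is Exp(q01 + q02):

# Monte Carlo estimate of the mean holding time in state 0:
T0.hat <- mean(sim.times[2, ])
# Theoretical value:
T0.theory <- 1/(q01 + q02)
# The two should agree up to Monte Carlo error for large S:
c(estimate = T0.hat, theory = T0.theory)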

8. Illness-death model

(a) For the holding time in state 0: T_0 ~ Exp(λ_01 + λ_D). Because of independence,

P(T_A > t, T_B > t) = P(T_A > t) P(T_B > t).

Both variables are exponentially distributed, so

P(T_A > t) P(T_B > t) = exp(-(λ_01 + λ_D) t).

Note also that 1 - P(T_A > t, T_B > t) = P(min{T_A, T_B} < t). So min{T_A, T_B} ~ Exp(λ_01 + λ_D) as well.

(b) The hazard of death is λ_D in both states, so the overall mean survival time is E(T) = 1/λ_D. From (a) we get E(T_0) = 1/(λ_01 + λ_D). So the time that an individual who is currently in state 0 is expected to spend in state 1 (i.e., mean survival in state 1) is the difference:

E(T) - E(T_0) = 1/λ_D - 1/(λ_01 + λ_D) = λ_01 / (λ_D λ_01 + λ_D^2).

9. Matrix exponential

(a) Note that with Q = ABA^{-1} we have Q^k = A B^k A^{-1}, with B^k a diagonal matrix. Use the rewrite A B^k A^{-1} in the summation series for the matrix exponential, and note that you can write the summation of diagonal matrices as summations of scalars within a diagonal matrix.

(b) Because of the decomposition of Q, the matrix exponential for P(t) has been reduced to a set of scalar exponentials, which simplifies the computation of P(t) considerably.
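A minimal R sketch of this computation (assuming a generator Q whose eigenvector matrix A is invertible; exercise 10 below gives a Q for which this fails, and the expm package offers a general alternative):

# Eigendecomposition Q = A B A^{-1}:
decomp <- eigen(Q)
A <- decomp$vectors
b <- decomp$values                       # diagonal entries of B
# P(t) = exp(Qt) = A exp(Bt) A^{-1}, where exp(Bt) is diagonal with entries exp(b_i * t):
t <- 1
P.t <- A %*% diag(exp(b * t)) %*% solve(A)
rowSums(Re(P.t))                         # each row should sum to 1 (up to rounding)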

10. Matrix exponential

(a) You can compute the eigenvectors in R using the function eigen, but the matrix with the eigenvectors as columns cannot be inverted; that is, the eigenvectors are not independent:

Q <- matrix(c(-1, 1/2, 1/2, 0, -1, 1, 0, 0, 0), 3, 3, byrow = TRUE)
decomp <- eigen(Q)
A <- decomp$vectors
det(A)

(b) One can do a finite summation to approximate the infinite series:

# Time interval:
t <- 1
Rep <- 100
# Approximating the P matrix by a truncated summation:
summation <- function(t, R){
  # k = 0:
  P <- diag(3)
  # k = 1:
  P <- P + (Q*t)/factorial(1)
  # k >= 2:
  for(r in 2:R){
    Q.r <- Q
    for(i in 2:r){
      Q.r <- Q.r %*% Q
    }
    P <- P + (Q.r * t^r)/factorial(r)
  }
  return(P)
}
summation(t, Rep)

The quality of the approximation will depend on Rep and Q.

(c) (Optional)

# Needs the expm package:
library(expm)
t <- 1
# This will work:
expm(t*Q)
# Note that expm gives an error when using the eigenvalue decomposition:
expm(t*Q, method = "R_Eigen")

11. Matrix exponential

d/dt P(t) = d/dt Σ_{n=0}^∞ t^n Q^n / n!
          = Σ_{n=0}^∞ n t^{n-1} Q^n / n!
          = Q ( Σ_{n=1}^∞ t^{n-1} Q^{n-1} / (n-1)! )
          = Q ( Σ_{m=0}^∞ t^m Q^m / m! )
          = Q P(t) = P(t) Q.

12. Poisson process

See the handwritten solutions.

13. Discrete-time process: equilibrium distribution

Classification of the states is important here for deciding whether an equilibrium distribution exists or not. Note that an invariant distribution is not necessarily an equilibrium one.

(a) {0, 1, 2, 3} is finite and irreducible (so closed), hence positive recurrent. There is a loop, so the period is 1 and the chain is therefore ergodic. Irreducible, ergodic MC, so an equilibrium distribution exists (and equals the invariant distribution) by the Main Limit Theorem. Solve π = πP to give π = (9/23, 8/23, 4/23, 2/23).

(b) {0, 1, 2, 3} is finite and irreducible (so closed), hence positive recurrent. The period is 2, so there is no equilibrium distribution.

(c) {0} and {3} are both not closed, hence transient; aperiodic, not ergodic. {1, 2, 4} is closed and finite, hence positive recurrent. The period is 1, so it is ergodic and an equilibrium distribution exists. Solve π = πP (transient states must have invariant probability 0) to give π = (0, 3/15, 4/15, 0, 8/15).

(d) {1} and {4} are both not closed, hence transient; aperiodic, not ergodic. {0, 2, 3} is closed and finite, hence positive recurrent; the period is 3, so it is not ergodic. {5} is closed and finite, hence positive recurrent; the period is 1, so it is ergodic. There are 2 closed classes, so there is no equilibrium distribution (the long-run behaviour depends upon the initial state).
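The transition matrices of exercise 13 are not reproduced in these solutions, but as an illustration of solving π = πP numerically (the matrix P below is a placeholder, not one of the matrices from the exercise):

# Generic helper: stationary distribution of a stochastic matrix P,
# assuming a unique invariant distribution exists.
stationary <- function(P){
  e <- eigen(t(P))                                 # left eigenvectors of P
  v <- Re(e$vectors[, which.max(Re(e$values))])    # eigenvector for eigenvalue 1
  v / sum(v)                                       # normalise to a probability vector
}

# Placeholder example (not a matrix from the exercise):
P <- matrix(c(0.5, 0.5, 0.0,
              0.2, 0.3, 0.5,
              0.0, 0.4, 0.6), 3, 3, byrow = TRUE)
stationary(P)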

14. Continuous-time process: equilibrium distribution

(a) Note that

lim_{t→∞} p_00(t) = lim_{t→∞} p_10(t) = λ/(λ + µ),

and

lim_{t→∞} p_11(t) = lim_{t→∞} p_01(t) = µ/(λ + µ).

By definition, π = ( λ/(λ + µ), µ/(λ + µ) ) is the equilibrium distribution.

(b) Solve πQ = 0. It follows that -µπ_1 + λ(1 - π_1) = 0, so π_1 = λ/(µ + λ), and thus π_2 = 1 - π_1 = µ/(µ + λ). Because π is an invariant distribution and X(t) is irreducible, π is the equilibrium distribution.
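As a numerical sanity check (a sketch only: the generator Q below is inferred from the equation -µπ_1 + λ(1 - π_1) = 0 in (b), and the values of λ and µ are arbitrary):

# Two-state generator with rate mu out of state 1 and rate lambda out of state 2:
library(expm)
lambda <- 2; mu <- 3                        # arbitrary example rates
Q <- matrix(c(-mu,     mu,
               lambda, -lambda), 2, 2, byrow = TRUE)
pi.eq <- c(lambda, mu)/(lambda + mu)        # claimed equilibrium distribution
pi.eq %*% Q                                 # should be (numerically) zero
expm(100*Q)                                 # both rows should be close to pi.eq for large t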