LTCC. Exercises


1. Markov chain
Let X_0, X_1, X_2, ... be a Markov chain with state space {1, 2, 3, 4} and transition matrix

    P =
      1/2  1/2   0    0
       0   1/2  1/3  1/6
       ?    ?    ?    ?
       0    0    0    1

(the row for state 3 did not survive the transcription).
(a) What happens if the chain starts in state 4? If the chain starts in state 1, can it ever reach state 4?
(b) If X_0 = 3, describe the distribution of the length of time (that is, the number of steps) that the chain spends in state 3, and then the distribution of its next destination.

2. Weather forecasting
Assume:
(1) two possible weather conditions on any day: {rainy, sunny};
(2) tomorrow's weather depends only on today's weather.
Define

    Y_n := 1 if it is sunny on the nth day and 0 if it rains on the nth day, with state space S = {0, 1};
    X_n := Y_{n-1} + 2 Y_n, with X_0 = 0.

Define α = P(Y_{n+1} = 0 | Y_n = 0) and β = P(Y_{n+1} = 1 | Y_n = 1); see also the slides. Compute the transition matrix of X and draw the state transition diagram.

3. Gambler's Ruin
Program a simulation of the Gambler's Ruin as defined on the slides; that is, with p = 1/2, a = b = 10 and i = a. Try to make a similar graph for 10 simulation trajectories. Increase the number of simulated trajectories and use them to estimate θ_a and E_a. Compare the estimates with the theoretical derivations.
Hint: You can choose your own software. One way to simulate the steps in the process is to draw a value from a Bernoulli distribution, Y ~ Bern(p), and then, given the current value X_n, define X_{n+1} = X_n - 1 + 2Y. Use a built-in function to draw from the Bernoulli distribution. A sketch in R is given below.
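A minimal sketch in R of the simulation in Exercise 3 (any software will do). Here θ_a is read as the probability of ruin, i.e. absorption at 0, and E_a as the expected duration of the game; this reading of the slide notation is an assumption. With p = 1/2 and a = b = 10, the classical results give θ_a = b/(a + b) = 1/2 and E_a = ab = 100.

    # Simulate one Gambler's Ruin trajectory: start at i, stop at 0 or a + b
    simulate_ruin <- function(i = 10, a = 10, b = 10, p = 1/2) {
      x <- i
      path <- x
      while (x > 0 && x < a + b) {
        y <- rbinom(1, 1, p)      # Bernoulli(p) draw with a built-in function
        x <- x - 1 + 2 * y
        path <- c(path, x)
      }
      path
    }

    set.seed(1)
    paths <- lapply(1:10, function(k) simulate_ruin())   # 10 trajectories
    plot(NULL, xlim = c(0, max(lengths(paths)) - 1), ylim = c(0, 20),
         xlab = "step n", ylab = "X_n")
    for (pth in paths) lines(seq_along(pth) - 1, pth)

    # Monte Carlo estimates from many trajectories
    out <- replicate(10000, {
      pth <- simulate_ruin()
      c(ruin = tail(pth, 1) == 0, steps = length(pth) - 1)
    })
    rowMeans(out)   # compare with theta_a = 1/2 and E_a = 100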

4. Difference equations
Consider the Gambler's Ruin as a symmetric random walk with absorbing states; that is, p = q = 1/2. Say gambler A starts with k chips out of N chips in total, and goes bankrupt if he has no more chips. Define p_k as P(A goes bankrupt | A starts in state k). Note that we have

    p_k = (1/2)(p_{k+1} + p_{k-1}),

with boundaries p_0 = 1 and p_N = 0. Derive p_k for k = 1, 2, ..., N-1.
Hint: There is a quick solution, by noting that the difference p_k - p_{k-1} is constant in k. You can also use the generic solution for difference equations, p_n = c_1 θ_1^n + c_2 n θ_2^n, where, in this case, θ_1 = θ_2 = θ is the root (with multiplicity 2) of (1/2)θ^2 - θ + 1/2 = 0.

5. First passage time
Let X_0, X_1, X_2, ... be a Markov chain with state space {1, 2, 3, 4, 5} and transition matrix

    P =
      1/2  1/2   0    0    0
       0   1/3   0   2/3   0
       0    ?    ?    ?    ?
       0    0    0   1/5  4/5
       0    0    0    1    0

(most of the row for state 3 did not survive the transcription). Define the random variable T_i = min{n ≥ 0 : X_n = 5}, given X_0 = i. Compute E(T_1); that is, the expected first passage time for state 5 given state 1 at time 0.
Hint: Use the law of total expectation and E(T_i) for i = 2, 3, 4, 5.

6. Markov or not Markov
A fair die is thrown repeatedly and independently. Denote by X_n the score obtained in the nth throw (i.e. X_n takes the values 1, 2, ..., 6 with equal probabilities). Define the stochastic processes
S_n: for n = 1, 2, 3, ..., let S_n be the sum of the scores obtained in the first n throws, i.e. S_n = Σ_{i=1}^n X_i;
Z_n: for n = 2, 3, 4, ..., let Z_n be the larger of the two most recent scores obtained, i.e. Z_n = max(X_n, X_{n-1}).
For each of the stochastic processes S_n and Z_n, state whether or not the stochastic process is a Markov chain and carefully justify your answer (if the process is not a Markov chain, then give an example where the Markov property breaks down). If the stochastic process is a Markov chain, then write down its transition matrix.

7. Three-state continuous-time Markov chain
Program a simulation of the three-state chain as defined on slide 99. Try to make a similar graph for 10 simulation trajectories. Increase the number of simulated trajectories and use them to estimate E[T_0] and E[T_1].
Hint: You can choose your own software. Slide 97 will help you to get started. For example, for leaving state 0, simulate a transition time by drawing a value from the exponential distribution, T ~ Exp(q_01 + q_02), and then draw from a Bernoulli distribution, R ~ Bern(p = q_01/(q_01 + q_02)), to determine to which state the chain goes. A sketch in R is given below.
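A minimal sketch in R for Exercise 7, under assumptions: the rates on slide 99 are not reproduced on this sheet, so the q values below are placeholders; the chain is taken to be progressive (0 → 1, 0 → 2, 1 → 2, as in Exercise 8); and T_0 and T_1 are read as the times at which the chain leaves states 0 and 1. Substitute the definitions and rates from the slides.

    q01 <- 1; q02 <- 0.5; q12 <- 2    # placeholder rates, replace by slide 99

    simulate_ctmc <- function() {
      # leave state 0: exponential holding time, Bernoulli choice of destination
      t1 <- rexp(1, q01 + q02)
      dest <- if (rbinom(1, 1, q01 / (q01 + q02)) == 1) 1 else 2
      times <- c(0, t1); states <- c(0, dest)
      if (dest == 1) {                # leave state 1: the only destination is 2
        times <- c(times, t1 + rexp(1, q12))
        states <- c(states, 2)
      }
      data.frame(time = times, state = states)
    }

    set.seed(2)
    sims <- lapply(1:10000, function(k) simulate_ctmc())
    plot(NULL, xlim = c(0, 3), ylim = c(0, 2), xlab = "time", ylab = "state")
    for (k in 1:10) lines(sims[[k]]$time, sims[[k]]$state, type = "s")

    T0 <- sapply(sims, function(s) s$time[2])    # time of leaving state 0
    mean(T0)                                     # compare with 1/(q01 + q02)
    T1 <- sapply(sims, function(s) if (nrow(s) == 3) s$time[3] else NA)
    mean(T1, na.rm = TRUE)                       # time of leaving state 1, when visited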

8. Illness-death model
Consider a three-state continuous-time Markov chain with q_01 = λ_01 and q_02 = q_12 = λ_D. Interpret this as an illness-death model, with state 2 representing death.
(a) Let T_0 be the event time of leaving state 0, which is exponentially distributed with rate λ_01 + λ_D. Show that T_0 has the same distribution function as min{T_A, T_B}, where T_A and T_B are independent exponentially distributed random variables with rates λ_01 and λ_D, respectively. (Note that the event time for 0 → 1 is not independent of the time for 0 → 2.)
(b) For an individual in state 0, define T as the time of death. Show that this individual's mean survival in state 1 is given by E(T) - E(T_0) = λ_01/(λ_D λ_01 + λ_D^2).

9. Matrix exponential
For a continuous-time Markov process with time-homogeneous D × D generator matrix Q, the transition probability matrix P(t) for time t > 0 is the solution to the Kolmogorov backward equation P'(t) = Q P(t), subject to P(0) = I_D, where I_D is the identity matrix. The solution is the matrix exponential

    P(t) = e^{tQ} = Σ_{k=0}^∞ (tQ)^k / k!

(a) Show that if the D × D matrix Q has D linearly independent eigenvectors, then P(t) = A diag(e^{b_1 t}, ..., e^{b_D t}) A^{-1}, where b_1, ..., b_D are the eigenvalues of Q, A is the matrix with the corresponding eigenvectors as columns, and diag(x_1, ..., x_D) denotes the D × D diagonal matrix with diagonal entries x_1, ..., x_D.
(b) Why is this rewrite useful? Explain in words.

10. Matrix exponential
Consider the following generator matrix for a three-state progressive survival model:

    Q =
      -(q_12 + q_13)   q_12   q_13
             0        -q_23   q_23
             0           0      0

      =

      -1  1/2  1/2
       0   -1    1
       0    0    0

(a) Use software to compute the eigenvalues and eigenvectors. Show that the matrix exponential cannot be expressed using the eigenvalue decomposition.
(b) Can you think of a way to approximate P(t)? Choose t = 1 as illustration.
(c) (Optional) The R package expm includes functions to compute/approximate P(t). Load the package into R and have a look at the relevant function:

    > library(expm)
    > help(expm)

Use expm to explore the issue in (a) and the question in (b).
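For reference, (a)-(c) can be explored in R along the following lines; the truncated series at the end is one possible answer to (b), offered as a sketch rather than as the intended solution.

    library(expm)

    Q <- matrix(c(-1, 1/2, 1/2,
                   0,  -1,   1,
                   0,   0,   0), nrow = 3, byrow = TRUE)

    # (a) the eigenvalues are -1, -1, 0; the repeated eigenvalue -1 has only
    # one eigendirection, so Q has no basis of eigenvectors
    e <- eigen(Q)
    e$values
    e$vectors            # two columns coincide up to scale
    # solve(e$vectors)   # fails: the eigenvector matrix A is singular

    # (c) the package computes P(1) directly
    P1 <- expm(Q)
    P1
    rowSums(P1)          # each row sums to 1, as it should for P(t)

    # (b) one simple alternative: truncate the series for e^{tQ} after K terms
    K <- 20; t_val <- 1
    P_ser <- diag(3); term <- diag(3)
    for (k in 1:K) { term <- term %*% (t_val * Q) / k; P_ser <- P_ser + term }
    max(abs(P_ser - P1)) # the truncation agrees with expm() to high accuracy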

11. Matrix exponential
Show that a solution of the Kolmogorov forward equation P'(t) = P(t) Q is the matrix exponential

    P(t) = Σ_{k=0}^∞ (tQ)^k / k!

Hint: start with d/dt Σ_{k=0}^∞ (tQ)^k / k! and rewrite the resulting sum using Σ_{k=1}^∞ ...

12. Poisson process
(a) Men and women arrive at a shop, forming independent Poisson processes of rates α and β per hour, respectively. Let M_1 be the time until the first male customer arrives and let W_1 be the time until the first female customer arrives. Show that p = P(M_1 < W_1) = α/(α + β).
(b) Let N be the number of male customers that arrive before the first female customer. Using p and the lack-of-memory property of the exponential distribution, or by conditioning on the arrival time W_1 of the first female customer, find the probability distribution of N.
(c) Find the distribution of the time Z = min(M_1, W_1) until the first customer arrives. Hint: first calculate P(Z > z).
(d) Exactly one female customer arrived in the first hour (note: nothing is said here about how many male customers arrived). Let T be the time at which the first customer arrived. Find P(T > t) and hence evaluate E(T). Hint: use the lemma about arrival times being (conditionally) uniformly distributed.

13. Discrete-time process: equilibrium distribution
Find the irreducible classes of states for the Markov chains with the following transition matrices (all state spaces begin at 0). State whether they are closed or not and classify them in terms of transience, recurrence (positive or null), period and ergodicity. State whether or not an equilibrium distribution exists and, if it does, find it.

(a) P =
      1/3  2/3   0    0
      1/2   0   1/2   0
       0   1/2   0   1/2
       0    1    0    0

(b) P =
      1/2   0   1/2   0
       0   1/2   0   1/2
       ?    ?    ?    ?
       ?    ?    ?    ?

(the last two rows of (b) did not survive the transcription)

(c) P =
      1/6   0    0   1/6  2/3
       0   1/3  2/3   0    0
       0    0    0    0    1
       0   1/4  1/4  1/2   0
       0   1/4  1/4   0   1/2

(d) P =
       0    0    0    1    0    0
       0   1/3   0   1/3   0   1/3
       ?    ?    ?    ?    ?    ?
      1/3   0   1/3  1/3   0    0
       ?    ?    ?    ?    ?    ?
       ?    ?    ?    ?    ?    ?

(rows 3, 5 and 6 of (d) did not survive the transcription intact)
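Answers to Exercise 13 can be checked numerically: when an equilibrium distribution exists, it is the normalised left eigenvector of P at eigenvalue 1. A minimal sketch in R for matrix (a) as printed above:

    P <- matrix(c(1/3, 2/3,   0,   0,
                  1/2,   0, 1/2,   0,
                    0, 1/2,   0, 1/2,
                    0,   1,   0,   0), nrow = 4, byrow = TRUE)

    e <- eigen(t(P))     # left eigenvectors of P are eigenvectors of t(P)
    v <- Re(e$vectors[, which.min(abs(e$values - 1))])
    pi_hat <- v / sum(v) # normalise to a probability distribution
    pi_hat
    pi_hat %*% P         # reproduces pi_hat, confirming pi P = pi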

14. Continuous-time process: equilibrium distribution
Consider the process X(t) with S = {0, 1}. Let the generator matrix be given by

    Q =
      -µ   µ
       λ  -λ

The corresponding transition probability matrix is

    P(t) =
      p_00(t)  p_01(t)
      p_10(t)  p_11(t)

with

    p_00(t) = (λ + µ e^{-(λ+µ)t}) / (λ + µ),    p_01(t) = 1 - p_00(t),
    p_11(t) = (µ + λ e^{-(λ+µ)t}) / (λ + µ),    p_10(t) = 1 - p_11(t).

(This matrix can be derived by eigenvalue decomposition of Q and matrix exponentiation, or by solving the forward equations; e.g., p'_00(t) = -p_00(t) µ + p_01(t) λ.)
(a) Use P(t) to derive the equilibrium distribution.
(b) Use Q to derive the equilibrium distribution. Hint: there is no need to derive P(t) first.
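As a numerical companion to (b): the equilibrium distribution solves πQ = 0 together with π_0 + π_1 = 1, which gives π = (λ, µ)/(λ + µ). A sketch in R with placeholder rates (µ = 1 and λ = 2 are chosen only for illustration):

    mu <- 1; lambda <- 2
    Q <- matrix(c(-mu,      mu,
                  lambda, -lambda), nrow = 2, byrow = TRUE)

    A <- rbind(t(Q), rep(1, 2))   # stack the equations pi Q = 0 and sum(pi) = 1
    b <- c(0, 0, 1)
    pi_hat <- qr.solve(A, b)      # least-squares solution of the stacked system
    pi_hat                        # equals c(lambda, mu) / (lambda + mu) = (2/3, 1/3)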