Markov Chains (Part 4)
Steady-State Probabilities and First Passage Times
Markov Chains - 1
Steady-State Probabilities
Remember, for the inventory example we had

P^(8) = [ 0.286  0.285  0.263  0.166
          0.286  0.285  0.263  0.166
          0.286  0.285  0.263  0.166
          0.286  0.285  0.263  0.166 ]

For an irreducible ergodic Markov chain,

lim_{n→∞} p_ij^(n) = π_j

where π_j = steady-state probability of being in state j.
Markov Chains - 2
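A quick numerical check of this limit (a minimal NumPy sketch, not part of the original slides; P is the inventory example's one-step transition matrix, which appears again later in the deck):

```python
import numpy as np

# One-step transition matrix for the inventory example (states 0-3).
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

# Raise P to the 8th power; every row has (nearly) converged to the
# same vector (pi_0, ..., pi_3), so the starting state no longer matters.
print(np.round(np.linalg.matrix_power(P, 8), 3))
# each row is approximately [0.286 0.285 0.263 0.166]
```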
Some Observations About the Limit
The behavior of this important limit depends on properties of states i and j and of the Markov chain as a whole.
If i and j are recurrent and belong to different classes, then p_ij^(n) = 0 for all n.
If j is transient, then lim_{n→∞} p_ij^(n) = 0 for all i. Intuitively, the probability that the Markov chain is in a transient state after a large number of transitions tends to zero.
In some cases, the limit does not exist. Consider the two-state Markov chain that alternates deterministically between its states (p_01 = p_10 = 1): if the chain starts out in state 0, it will be back in state 0 at times 2, 4, 6, … and in state 1 at times 1, 3, 5, …. Thus p_00^(n) = 1 if n is even and p_00^(n) = 0 if n is odd. Hence lim_{n→∞} p_00^(n) does not exist.
Markov Chains - 3
Steady-State Probabilities
How can we find these probabilities without calculating P^(n) for very large n? The following are the steady-state equations:

π_j = Σ_{i=0}^{M} π_i p_ij   for all j = 0, …, M
Σ_{j=0}^{M} π_j = 1
π_j ≥ 0   for all j = 0, …, M

In matrix notation we have π^T P = π^T.
Solve a system of linear equations. Note: there are M+2 equations and only M+1 variables (π_0, π_1, …, π_M), so one of the equations is redundant and can be dropped; just don't drop the normalization equation Σ_{j=0}^{M} π_j = 1.
Markov Chains - 4
Solving for the Steady-State Probabilities
π^T P = π^T and Σ_{i=0}^{M} π_i = 1:

(π_0, π_1, …, π_M) [ p_00  p_01  …  p_0M
                     p_10  p_11  …  p_1M
                      ⋮     ⋮         ⋮
                     p_M0  p_M1  …  p_MM ] = (π_0, π_1, …, π_M)

Written out:
π_0 p_00 + π_1 p_10 + … + π_M p_M0 = π_0
π_0 p_01 + π_1 p_11 + … + π_M p_M1 = π_1
⋮
π_0 p_0M + π_1 p_1M + … + π_M p_MM = π_M
π_0 + π_1 + … + π_M = 1

The idea is to go from steady state to steady state: if the chain is in state j with probability π_j at time t, then one more transition leaves it in state j with probability π_j at time t+1.
[Figure: sample path of X_t over time; in the long run the chain spends fraction π_j of its time in each state j.]
Markov Chains - 5
Steady-State Probabilities - Examples
Find the steady-state probabilities for

P = [ 0.3  0.7
      0.6  0.4 ]

P = [ 1/3  2/3   0
      1/2   0   1/2
       0   1/4  3/4 ]

Inventory example:

P = [ 0.080  0.184  0.368  0.368
      0.632  0.368  0      0
      0.264  0.368  0.368  0
      0.080  0.184  0.368  0.368 ]
Markov Chains - 6
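One way to solve these systems numerically (a minimal NumPy sketch, not part of the original slides; the helper name steady_state is ours): replace one redundant balance equation with the normalization Σ π_j = 1 and solve the resulting square system.

```python
import numpy as np

def steady_state(P):
    # Balance equations (P^T - I) pi = 0, with the last one replaced
    # by the normalization sum(pi) = 1 (one balance equation is redundant).
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

print(steady_state(np.array([[0.3, 0.7],
                             [0.6, 0.4]])))        # [0.4615 0.5385] = (6/13, 7/13)
print(steady_state(np.array([[1/3, 2/3, 0],
                             [1/2, 0, 1/2],
                             [0, 1/4, 3/4]])))     # [0.2 0.2667 0.5333] = (3/15, 4/15, 8/15)
print(steady_state(np.array([[0.080, 0.184, 0.368, 0.368],
                             [0.632, 0.368, 0.000, 0.000],
                             [0.264, 0.368, 0.368, 0.000],
                             [0.080, 0.184, 0.368, 0.368]])))  # ~[0.286 0.285 0.263 0.166]
```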
Other Applications of Steady-State Probabilities
Expected recurrence time: we are often interested in the expected number of steps between consecutive visits to a particular (recurrent) state.
- What is the expected number of sunny days between rainy days?
- What is the expected number of weeks between ordering cameras?
Long-run expected average cost per unit time: in many applications, we incur a cost or gain a reward every time a Markov chain visits a specific state.
- If we incur costs for carrying inventory and costs for not meeting demand, what is the long-run expected average cost per unit time?
Markov Chains - 7
Expected Recurrence Times
The expected recurrence time, denoted µ_jj, is the expected number of transitions between two consecutive visits to state j.
The steady-state probabilities π_j are related to the expected recurrence times µ_jj by

µ_jj = 1/π_j   for all j = 0, 1, …, M
Markov Chains - 8
Weather Example
What is the expected number of days between rainy days? First, calculate π_j.

States: 0 = Sun, 1 = Rain

P = [ 0.8  0.2
      0.6  0.4 ]

π_0 (0.8) + π_1 (0.6) = π_0
π_0 (0.2) + π_1 (0.4) = π_1
π_0 + π_1 = 1

Substituting π_0 = 1 − π_1 into the second equation:
(1 − π_1)(0.2) + π_1 (0.4) = π_1, so 0.2 = 0.8 π_1
giving π_1 = 1/4 and π_0 = 3/4.

Now, µ_11 = 1/π_1 = 4. For this example, rainy days are on average 4 days apart; that is, we expect about three sunny days between consecutive rainy days.
Markov Chains - 9
Inventory Example
What is the expected number of weeks between orders? First, the steady-state probabilities are

π_0 = 0.286, π_1 = 0.285, π_2 = 0.263, π_3 = 0.166

Now, µ_00 = 1/π_0 = 3.5. For this example, on average, we order cameras every three and a half weeks.
Markov Chains - 10
Expected Recurrence Times - Examples

P = [ 0.3  0.7
      0.6  0.4 ]
π_0 = 6/13, π_1 = 7/13
µ_00 = 13/6 = 2.1667, µ_11 = 13/7 = 1.857
[Transition diagram: states 0 and 1 with p_00 = 0.3, p_01 = 0.7, p_10 = 0.6, p_11 = 0.4.]

P = [ 1/3  2/3   0
      1/2   0   1/2
       0   1/4  3/4 ]
π_0 = 3/15, π_1 = 4/15, π_2 = 8/15
µ_00 = 15/3 = 5, µ_11 = 15/4 = 3.75, µ_22 = 15/8 = 1.875
[Transition diagram: states 0, 1, 2 with p_00 = 1/3, p_01 = 2/3, p_10 = 1/2, p_12 = 1/2, p_21 = 1/4, p_22 = 3/4.]
Markov Chains - 11
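Since µ_jj = 1/π_j, the recurrence times above are just reciprocals of the steady-state probabilities; a one-line check (NumPy sketch, values taken from the two examples above):

```python
import numpy as np

# mu_jj = 1 / pi_j for each state j.
print(1 / np.array([6/13, 7/13]))        # [2.1667 1.8571] -> mu_00, mu_11
print(1 / np.array([3/15, 4/15, 8/15]))  # [5.    3.75  1.875] -> mu_00, mu_11, mu_22
```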
Steady-State Cost Analysis
Once we know the steady-state probabilities, we can do some long-run analyses.
Assume we have a finite-state, irreducible Markov chain.
Let C(X_t) be a cost incurred at time t, that is, C(j) = expected cost of being in state j, for j = 0, 1, …, M.
The expected average cost over the first n time steps is

E[ (1/n) Σ_{t=1}^{n} C(X_t) ]

The long-run expected average cost per unit time is a function of the steady-state probabilities:

lim_{n→∞} E[ (1/n) Σ_{t=1}^{n} C(X_t) ] = Σ_{j=0}^{M} π_j C(j)
Markov Chains - 12
Steady-State Cost Analysis - Inventory Example
Suppose there is a storage cost for having cameras on hand:

C(i) = 0 if i = 0;  2 if i = 1;  8 if i = 2;  18 if i = 3

The long-run expected average cost per unit time is

π_0 C(0) + π_1 C(1) + π_2 C(2) + π_3 C(3)
= 0.286(0) + 0.285(2) + 0.263(8) + 0.166(18)
= 5.662
Markov Chains - 13
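The same calculation as a dot product (a minimal NumPy sketch, not part of the slides):

```python
import numpy as np

pi = np.array([0.286, 0.285, 0.263, 0.166])  # steady-state probabilities
C = np.array([0.0, 2.0, 8.0, 18.0])          # storage cost C(i) per state
print(pi @ C)                                # ~5.662, long-run average cost per week
```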
First Passage Times - Motivation
In many applications, we are interested in the time at which the Markov chain visits a particular state for the first time.
If I start out with a dollar, what is the probability that I will go broke (for the first time) after n gambles?
If I start out with three cameras in my inventory, what is the expected number of weeks after which I will have none for the first time?
The answers to these questions are related to an important concept called first passage times.
Markov Chains - 14
First Passage Times
The first passage time from state i to state j is the number of transitions made by the process in going from state i to state j for the first time.
When i = j, this first passage time is called the recurrence time for state i.
Let f_ij^(n) = probability that the first passage time from state i to state j is equal to n.
What is the difference between f_ij^(n) and p_ij^(n)? p_ij^(n) includes paths that visit j before time n; f_ij^(n) does not.
[Figure: sample path of X_t starting at state i at time t and reaching state j at time t+n.]
Markov Chains - 15
Some Observations about First Passage Times
First passage times are random variables and have probability distributions associated with them:
f_ij^(n) = probability that the first passage time from state i to state j is equal to n.
These probability distributions can be computed using a simple idea: condition on where the Markov chain goes after the first transition.
For the first passage time from i to j to be n > 1, the Markov chain has to transition from i to some k (different from j) in one step, and then the first passage time from k to j must be n − 1.
This idea can be used to derive recursive equations for f_ij^(n).
Markov Chains - 16
First Passage Times
The first passage time probabilities satisfy a recursive relationship:

f_ij^(1) = p_ij^(1) = p_ij
f_ij^(2) = Σ_{k≠j} p_ik f_kj^(1)
⋮
f_ij^(n) = Σ_{k≠j} p_ik f_kj^(n−1)

[Figure: in one transition from i the chain goes to state 0, …, or M with probabilities p_i0, …, p_iM; from each k ≠ j the first passage to j must then take the remaining n−1 steps, with probability f_kj^(n−1).]
Markov Chains - 17
First Passage Times - Inventory Example
Suppose we are interested in the number of weeks until the first order (start in state 3, X_0 = 3).
What is the probability that the first order is submitted in

Week 1?  f_30^(1) = p_30 = 0.080
Week 2?  f_30^(2) = Σ_{k≠0} p_3k f_k0^(1) = p_31 f_10^(1) + p_32 f_20^(1) + p_33 f_30^(1)
         = p_31 p_10 + p_32 p_20 + p_33 p_30
         = 0.184(0.632) + 0.368(0.264) + 0.368(0.080) = 0.243
Week 3?  f_30^(3) = Σ_{k≠0} p_3k f_k0^(2) = p_31 f_10^(2) + p_32 f_20^(2) + p_33 f_30^(2)
Markov Chains - 18
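The recursion can be implemented by propagating the whole vector (f_kj^(n) for every start state k) one step at a time. Below is a sketch (NumPy; the helper name first_passage_probs is ours, not from the slides) that reproduces the Week 1 and Week 2 numbers and also gives Week 3:

```python
import numpy as np

def first_passage_probs(P, i, j, N):
    # Returns [f_ij^(1), ..., f_ij^(N)] using the recursion
    # f_kj^(n) = sum over k' != j of p_kk' * f_k'j^(n-1).
    f = P[:, j].copy()          # f_kj^(1) = p_kj for every start state k
    probs = [f[i]]
    for _ in range(2, N + 1):
        g = f.copy()
        g[j] = 0.0              # drop paths that already reached j
        f = P @ g
        probs.append(f[i])
    return probs

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])
print(np.round(first_passage_probs(P, i=3, j=0, N=3), 3))  # [0.08  0.243 0.254]
```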
Probability of Ever Reaching j from i
If the chain starts out in state i, what is the probability that it visits state j at some future time? This probability is denoted f_ij, and

f_ij = Σ_{n=1}^{∞} f_ij^(n)

If f_ij = 1, then the chain starting at i definitely reaches j at some future time, in which case f_ij^(n) is a genuine probability distribution for the first passage time. On the other hand, if f_ij < 1, the chain starting at i may never reach j. In fact, the probability that this happens is 1 − f_ij.
Markov Chains - 19
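For the inventory chain, which is finite and irreducible, f_30 should equal 1; summing the terms of the recursion shows the partial sums approaching 1 (a sketch under the same assumptions as the earlier code):

```python
import numpy as np

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])
f = P[:, 0].copy()            # f_k0^(1) for every start state k
total = f[3]                  # running sum of f_30^(n)
for n in range(2, 101):
    g = f.copy()
    g[0] = 0.0                # exclude paths that already reached state 0
    f = P @ g
    total += f[3]
print(round(float(total), 6))  # very close to 1.0, i.e. f_30 = 1
```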
Expected First Passage Times
The expected first passage time from state i to state j is

µ_ij = ∞                              if f_ij < 1
µ_ij = Σ_{n=1}^{∞} n f_ij^(n)         if f_ij = 1

If f_ij = 1, we can also calculate µ_ij by conditioning on where the chain goes after one transition:

µ_ij = 1·p_ij + Σ_{k≠j} p_ik (1 + µ_kj) = Σ_k p_ik + Σ_{k≠j} p_ik µ_kj = 1 + Σ_{k≠j} p_ik µ_kj

that is,

µ_ij = 1 + Σ_{k=0, k≠j}^{M} p_ik µ_kj
Markov Chains - 20
Expected First Passage Times - Inventory Example
Find the expected time until the first order is submitted:

µ_30 = 1 + p_31 µ_10 + p_32 µ_20 + p_33 µ_30
µ_20 = 1 + p_21 µ_10 + p_22 µ_20 + p_23 µ_30
µ_10 = 1 + p_11 µ_10 + p_12 µ_20 + p_13 µ_30

Solving simultaneously gives
µ_10 = 1.58 weeks, µ_20 = 2.51 weeks, µ_30 = 3.50 weeks

Find the expected time between orders:
µ_00 = 1/π_0 = 3.50 weeks
Markov Chains - 21
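Equivalently, the three equations can be solved in one shot: deleting row and column 0 of P leaves a substochastic matrix Q over states 1 to 3, and the system becomes (I − Q)µ = 1 (a minimal NumPy sketch, not part of the slides):

```python
import numpy as np

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])
Q = P[1:, 1:]                                    # transitions among states 1-3 only
mu = np.linalg.solve(np.eye(3) - Q, np.ones(3))  # (I - Q) mu = 1
print(np.round(mu, 2))                           # [1.58 2.5  3.5 ] = mu_10, mu_20, mu_30
```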