Module 8. Lecture 3: Markov chain

1 Lecture 3: Markov chain

2 A Markov chain is a stochastic process having the property that the value of the process X_t at time t depends only on its value at time t-1, X_{t-1}, and not on the sequence X_{t-2}, X_{t-3}, ..., X_0 that the process passed through to arrive at X_{t-1}. For a 1st-order (single-step) Markov chain:

Prob(X_t = a_j | X_{t-1} = a_i, X_{t-2} = a_k, ..., X_0 = a_q) = Prob(X_t = a_j | X_{t-1} = a_i)

3 Diagrammatically, the process may be represented as the sequence X_0, ..., X_{t-2}, X_{t-1}, X_t along the time axis. Between time periods t-1 and t, the process moves from X_{t-1} = a_i to X_t = a_j, i.e., state i transits to state j. We write this as

p_ij(t) = P(X_t = a_j | X_{t-1} = a_i)

where p_ij(t) is the probability that the process, starting in state i, goes into state j at time t.

4 Transition probability: p_ij(t) is the probability that state i at time t-1 will transit to state j at time t. If p_ij(t) = p_ij(τ) for all t and τ, then the series is called a homogeneous Markov chain [i.e., the transition probabilities remain the same across time], and we write simply p_ij.

5 Here the analysis is done only for the single-step (1st-order) homogeneous Markov chain. (If t is a month, p_ij will not be homogeneous, because of seasonal change.)

p_ij = transition probability from i to j, for i = 1, 2, ..., m and j = 1, 2, ..., m,

where m is the no. of possible states that the process can occupy. These probabilities are arranged in the TPM (transition probability matrix):

P = [ p_11  p_12  ...  p_1m ]
    [ p_21  p_22  ...  p_2m ]
    [  ...              ...  ]
    [ p_m1  p_m2  ...  p_mm ]

The sum of each row = 1.

6 Since p_ij(t) = p_ij(t+τ) for all τ, each row of P must add to 1:

sum over j = 1, ..., m of p_ij = 1, for each i

Matrices whose individual rows add up to 1 are called stochastic matrices. Because each row sums to 1, the total no. of probability values that need to be estimated is m(m-1), not m^2. From historical data the estimate is

p_ij = n_ij / n_i

where n_ij is the no. of observed transitions from state i to state j, and n_i = n_i1 + n_i2 + ... + n_im is the total no. of transitions made out of state i.
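As a quick sketch, the row-sum property and the m(m-1) parameter count can be checked in a few lines of Python; the 3-state TPM below is assumed purely for illustration (NumPy is also assumed to be available):

```python
import numpy as np

# Illustrative 3-state transition probability matrix (values assumed
# purely for demonstration; any matrix with non-negative entries and
# unit row sums would do).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Stochastic matrix: every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# Because each row sums to 1, the last entry of every row is determined
# by the others, so only m*(m-1) probabilities are free parameters.
m = P.shape[0]
free_parameters = m * (m - 1)
print(free_parameters)  # 6 for a 3-state chain
```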

7 Historic data: over the observed time periods, count the transitions made out of each state. E.g., out of the times the process was in state 1:

no. of times it went into state 1 = 20
no. of times it transited to state 2 = 20
no. of times it transited to state 3 = 15

Then, with n_1 = 20 + 20 + 15 = 55,

p_11 = 20/55,  p_12 = 20/55,  p_13 = 15/55

Typical two-state applications: deficit / non-deficit, drought / non-drought.
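The counting estimate p_ij = n_ij / n_i can be sketched as a small Python function; the state sequence below is made-up illustrative data (0 = dry, 1 = wet, say):

```python
import numpy as np

def estimate_tpm(states, m):
    """Estimate p_ij = n_ij / n_i from an observed state sequence.

    n_ij counts the transitions from state i to state j, and n_i is
    the total number of transitions made out of state i.
    """
    counts = np.zeros((m, m))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1
    # Divide each row by its total transition count.
    return counts / counts.sum(axis=1, keepdims=True)

# Toy two-state record (0 = dry, 1 = wet); illustrative data only.
seq = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
P_hat = estimate_tpm(seq, 2)
print(P_hat)   # each row of the estimated TPM sums to 1
```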

8 p_j^(n): probability that the process will be in state j after n time steps. p_j^(0): initial probability of being in state j. These are collected in a 1 x m row vector

p^(n) = ( p_1^(n), p_2^(n), ..., p_m^(n) )

where p_1^(n) is the probability of being in state 1 at time step n.

9 The probability vector at time 1 is obtained by post-multiplying the initial probability vector by P:

p^(1) = p^(0) P

where p^(0) = ( p_1^(0), p_2^(0), ..., p_m^(0) ). In the product, for example, p_2^(0) is the probability that the process will start from state 2, and p_21 is the probability of transition from 2 to 1.

10 Here p^(1) = ( p_1^(1), p_2^(1), ..., p_m^(1) ), where p_1^(1) is the probability that the state is 1 in time period 1, p_2^(1) is the probability that the state is 2 in time period 1, ..., and p_m^(1) is the probability that the state is m in time period 1. Similarly,

p^(2) = p^(1) P = p^(0) P P = p^(0) P^2

and for any time period n,

p^(n) = p^(0) P^n

i.e., the probability of being in a particular state j after time n is obtained from the initial probability vector and the TPM.
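The relation p^(n) = p^(0) P^n is easy to verify numerically. The sketch below uses an assumed, illustrative 3-state TPM and initial vector, and checks that stepping twice through P agrees with multiplying once by P^2:

```python
import numpy as np

# Assumed, illustrative TPM and initial probability vector.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])
p0 = np.array([0.2, 0.5, 0.3])

# Step-by-step propagation: p^(1) = p^(0) P, then p^(2) = p^(1) P.
p1 = p0 @ P
p2 = p1 @ P

# Direct jump: p^(n) = p^(0) P^n with n = 2.
p2_direct = p0 @ np.linalg.matrix_power(P, 2)

print(np.allclose(p2, p2_direct))  # True: both routes agree
```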

11 If p^(n+m) = p^(n) for large n (and any m), then the steady-state condition is achieved. Once the steady state is reached, p^(n+m) = p^(n) = p, so

p = p P

Example: a 2-state Markov chain for a sequence of wet and dry spells, with i = 1 dry and i = 2 wet:

         d    w
P = d [ 0.9  0.1 ]
    w [ 0.5  0.5 ]

(i) P[day 1 is wet | day 0 is dry] = P[X_1 = 2 | X_0 = 1] = p_12 = p_2^(1) = 0.1 (Ans)

12 Example problem (ii): P[day 2 is wet | day 0 is dry], i.e., p_2^(2).

p^(2) = p^(1) P = p^(0) P^2

With p^(0) = (1, 0), because day 0 is dry:

p_2^(2) = p_11 p_12 + p_12 p_22 = 0.9 x 0.1 + 0.1 x 0.5 = 0.14

So the probability that day 2 will be wet is p_2^(2) = 0.14.
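Parts (i) and (ii) can be reproduced numerically. The sketch below uses a TPM consistent with the example's numbers (p_12 = 0.1 is given; p_22 = 0.5 is the value that agrees with the two-step answer of 0.14):

```python
import numpy as np

# Wet/dry TPM consistent with the example (state order: dry, wet).
P = np.array([
    [0.9, 0.1],   # dry -> dry, dry -> wet
    [0.5, 0.5],   # wet -> dry, wet -> wet
])

p0 = np.array([1.0, 0.0])   # day 0 is dry with certainty
p1 = p0 @ P                 # state distribution for day 1
p2 = p1 @ P                 # state distribution for day 2

print(p1[1])            # P[day 1 wet | day 0 dry] = 0.1
print(round(p2[1], 2))  # P[day 2 wet | day 0 dry] = 0.14
```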

13 Example problem (iii): Prob[day 100 is wet | day 1 is dry], i.e., p_2^(100) with p^(1) = (1, 0). Here the fact that day 1 was dry would not significantly affect the probability of rain on day 100. So n can be assumed to be large, and the problem is solved using the steady-state probabilities.

14 Example problem: to determine the steady state, solve p = p P together with p_1 + p_2 = 1:

p_1 = 0.9 p_1 + 0.5 p_2
p_2 = 0.1 p_1 + 0.5 p_2

From the first equation, 0.1 p_1 = 0.5 p_2, i.e., p_1 = 5 p_2; combined with p_1 + p_2 = 1 this gives

p = (0.8333, 0.1667) for (dry, wet)

Equivalently, successive powers P^2, P^4, ... converge, and for large n all the rows of P^n become the same steady-state vector (0.8333, 0.1667).
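Both routes to the steady state, raising P to a large power and solving p = p P with p_1 + p_2 = 1, can be checked with a short sketch (the closed form p_1 = p_21 / (p_12 + p_21) used below is the standard two-state result that follows from the balance condition p_1 p_12 = p_2 p_21):

```python
import numpy as np

P = np.array([
    [0.9, 0.1],   # dry -> dry, dry -> wet
    [0.5, 0.5],   # wet -> dry, wet -> wet
])

# Route 1: for large n, every row of P^n equals the steady-state vector.
Pn = np.linalg.matrix_power(P, 100)
pi_power = Pn[0]

# Route 2: solve p = p P with p_1 + p_2 = 1. For a 2-state chain the
# balance condition p_1 * p_12 = p_2 * p_21 gives the closed form:
pi_dry = P[1, 0] / (P[0, 1] + P[1, 0])   # 0.5 / 0.6
pi_wet = 1.0 - pi_dry

print(np.round(pi_power, 4))               # [0.8333 0.1667]
print(round(pi_dry, 4), round(pi_wet, 4))  # 0.8333 0.1667
```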