Lecture 3: Markov chain
A Markov chain is a stochastic process having the property that the value of the process X_t at time t depends only on its value at time t-1, X_{t-1}, and not on the sequence X_{t-2}, X_{t-3}, ..., X_0 through which the process passed to arrive at X_{t-1}. For a 1st-order (single-step) Markov chain,

Prob(X_t = a_j | X_{t-1} = a_i, X_{t-2} = a_k, X_{t-3} = a_l, ..., X_0 = a_q) = Prob(X_t = a_j | X_{t-1} = a_i)
Diagrammatically, it may be represented as a chain of states X_0 -> ... -> X_{t-2} -> X_{t-1} -> X_t along the time axis. If X_{t-1} = a_i and X_t = a_j, state i has transited to state j between time periods t-1 and t. We write this as

p_ij(t) = Prob(X_t = a_j | X_{t-1} = a_i)

where p_ij is the probability that the process goes into state j, starting from state i.
Transition probability: p_ij is the probability that state i will transit to state j. If p_ij(t) = p_ij(t + τ) for all τ, then the series is called a homogeneous Markov chain [i.e., the transition probabilities remain the same across time].
Here the analysis is done only for the single-step (1st-order) homogeneous M.C. (If t is a month, the chain will not be homogeneous, because of seasonal change.) Let p_ij = transition probability from i to j, for i = 1, 2, ..., m and j = 1, 2, ..., m, where m is the number of possible states that the process can occupy. The TPM (transition probability matrix) is

            | p_11  p_12  p_13  ...  p_1m |
            | p_21  p_22  p_23  ...  p_2m |
P = [p_ij] =| ...                         |
            | p_m1  p_m2  p_m3  ...  p_mm |

Row i gives the probabilities of moving from state i into each state j; the sum of each row = 1.
Each row must add up to 1:

Σ_{j=1}^{m} p_ij = 1, for each i

Matrices whose individual rows add up to 1 are called stochastic matrices. Since each row of the m × m matrix has one redundant entry, m(m-1) is the total number of probability values that need to be estimated. Estimate

p_ij = n_ij / n_i

from historical data, where n_ij is the number of observed transitions from state i to state j and n_i = Σ_j n_ij is the number of times the process occupied state i.
Historical data: a record of the state occupied in each of, say, 100 time periods (states 1, 2, 3, ...). E.g., suppose the process was in state 1 for 50 of these periods. Out of these 50 times, the number of times it stayed in state 1 = 20, the number of times it transited to state 2 = 15, and the number of times it transited to state 3 = 15. Then

p_11 = 20/50;  p_12 = 15/50;  p_13 = 15/50.

Typical two-state examples: deficit / non-deficit, drought / non-drought.
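The counting rule p_ij = n_ij / n_i can be sketched in a few lines of code. The state sequence below is a hypothetical record, constructed so that state 1 is left 50 times with the same transition counts as in the example above.

```python
from collections import Counter

def estimate_tpm(seq, states):
    """Estimate p_ij = n_ij / n_i by counting observed transitions."""
    n_ij = Counter(zip(seq, seq[1:]))   # transition counts i -> j
    n_i = Counter(seq[:-1])             # number of times state i was left
    return [[n_ij[(i, j)] / n_i[i] if n_i[i] else 0.0 for j in states]
            for i in states]

# Hypothetical record: state 1 is left 50 times -- staying put 20 times,
# moving to state 2 fifteen times and to state 3 fifteen times.
seq = []
for target in [1] * 20 + [2] * 15 + [3] * 15:
    seq += [1] if target == 1 else [1, target]

tpm = estimate_tpm(seq, [1, 2, 3])
print(tpm[0])   # row for state 1 -> [0.4, 0.3, 0.3]
```

Each estimated row sums to 1 by construction, so the result is a valid stochastic matrix.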
p_j(n): probability that the process will be in state j after n time steps. p_j(0): initial probability of being in state j. These are collected in the row vector

p(n) = ( p_1(n), p_2(n), ..., p_m(n) ),  of size 1 × m

whose first component is the probability of being in state 1 at time step n, and so on.
The probability vector at time 1 is given by

p(1) = p(0) P

i.e.,

                                        | p_11  p_12  ...  p_1m |
( p_1(1), ..., p_m(1) ) = ( p_1(0), p_2(0), ..., p_m(0) ) | p_21  p_22  ...  p_2m |
                                        | ...                   |
                                        | p_m1  p_m2  ...  p_mm |

so that, for example, p_1(1) = p_1(0) p_11 + p_2(0) p_21 + ... + p_m(0) p_m1, where p_2(0) is the probability that the process starts from state 2 and p_21 is the probability of transition from 2 to 1.
Here p(1) = ( p_1(1), p_2(1), ..., p_m(1) ), where p_1(1) is the probability that the state is 1 in time period 1, p_2(1) that it is 2 in time period 1, and so on up to p_m(1) for state m. Similarly,

p(2) = p(1) P = p(0) P^2

and for any time period n,

p(n) = p(0) P^n

i.e., the probability of being in a particular state j after time n is obtained from the initial probability vector p(0) and the TPM P.
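The relation p(n) = p(0) P^n can be sketched as repeated vector-matrix multiplication. The 2 × 2 TPM below is a made-up illustration, not from the notes.

```python
def step(p, P):
    """One time step: p(n+1) = p(n) P, a row vector times the TPM."""
    m = len(P)
    return [sum(p[i] * P[i][j] for i in range(m)) for j in range(m)]

def state_probs(p0, P, n):
    """p(n) = p(0) P^n, by applying the one-step relation n times."""
    p = list(p0)
    for _ in range(n):
        p = step(p, P)
    return p

# Hypothetical 2-state TPM; the process starts in state 1 with certainty.
P = [[0.7, 0.3],
     [0.4, 0.6]]
p2 = state_probs([1.0, 0.0], P, 2)
print(p2)   # ~ [0.61, 0.39]
```

Applying `step` n times is equivalent to forming P^n first and then multiplying; for a single initial vector, the repeated vector-matrix products are cheaper.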
If p(n + m) = p(n) for all m after a large n, then the steady-state condition is achieved. Once the steady state π is reached,

π = π P

Example: a 2-state Markov chain for a sequence of wet and dry spells; i = 1 dry (d), i = 2 wet (w).

      d    w
P = | 0.9  0.1 |  d
    | 0.5  0.5 |  w

(i) P[day 1 is wet | day 0 is dry] = P[X_1 = 2 | X_0 = 1] = p_12 = 0.1 (Ans)
Example problem (ii): P[day 2 is wet | day 0 is dry] = p_2(2).

Because day 0 is dry, p(0) = (1, 0) and p(1) = (0.9, 0.1). Then

p(2) = p(1) P = (0.9, 0.1) | 0.9  0.1 | = (0.86, 0.14)
                           | 0.5  0.5 |

so the probability that day 2 will be wet is p_2(2) = 0.14.
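The arithmetic in part (ii) can be checked directly by propagating the probability vector two steps:

```python
P = [[0.9, 0.1],    # dry -> (dry, wet)
     [0.5, 0.5]]    # wet -> (dry, wet)
p = [1.0, 0.0]      # day 0 is dry with certainty
for _ in range(2):  # two daily transitions
    p = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]
print(p)            # ~ [0.86, 0.14]: P[day 2 is wet] = 0.14
```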
Example problem (iii): P[day 100 is wet | day 1 is dry], i.e., p_2(100), with the same P. Here the fact that day 1 was dry would not significantly affect the probability of rain on day 100, so n can be assumed to be large and the problem solved using the steady-state probabilities.
Example problem: to determine the steady state, compute successive powers P^2 = P·P, P^4 = P^2·P^2, etc.:

P^2 = | 0.86    0.14   |    P^4  = | 0.8376  0.1624 |
      | 0.70    0.30   |           | 0.8120  0.1880 |

P^8 = | 0.8334  0.1666 |    P^16 = | 0.8333  0.1667 |
      | 0.8328  0.1672 |           | 0.8333  0.1667 |

All the rows become the same, so the steady-state probabilities are π = (0.8333, 0.1667) for (dry, wet).
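The repeated-squaring sequence P^2, P^4, P^8, P^16 can be reproduced in code; the final comment notes the analytic 2-state answer, π_1 = p_21 / (p_12 + p_21), for comparison.

```python
def matmul(A, B):
    """Multiply two square matrices stored as plain Python lists."""
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

P = [[0.9, 0.1],
     [0.5, 0.5]]
Pn = P
for _ in range(4):      # successive squares: P^2, P^4, P^8, P^16
    Pn = matmul(Pn, Pn)

for row in Pn:
    print([round(x, 4) for x in row])   # both rows -> [0.8333, 0.1667]

# Analytically, for a 2-state chain: pi_1 = p21/(p12 + p21) = 0.5/0.6 = 5/6.
```

Squaring converges quickly here because the chain's second eigenvalue (0.4) shrinks as 0.4^n; by P^16 the rows agree to four decimals.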