Markov Chains

X(t) is a Markov process if, for arbitrary times t_1 < t_2 < ... < t_k < t_{k+1}:

If X(t) is discrete-valued:
P[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, ..., X(t_1) = x_1] = P[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

If X(t) is continuous-valued:
P[a < X(t_{k+1}) ≤ b | X(t_k) = x_k, ..., X(t_1) = x_1] = P[a < X(t_{k+1}) ≤ b | X(t_k) = x_k]

i.e. the future of the process depends only on the present and not on the past.
Markov Chains: Integer-valued Markov processes are called Markov chains. Examples: sum process, counting process, random walk, Poisson process. Markov chains can be discrete-time or continuous-time.
Discrete-Time Markov Chains

Initial PMF: p_j(0) ≜ P(X_0 = j); j = 0, 1, ...

Transition Probability Matrix (TPM): P = [p_ij], where p_ij ≜ P(X_{n+1} = j | X_n = i).
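The initial PMF and TPM together are all that is needed to generate sample paths of the chain. A minimal sketch in Python; the 2-state "weather" chain here is a made-up illustration, not from the text:

```python
import random

def simulate_dtmc(p0, P, n_steps, seed=0):
    """Sample a path X_0, ..., X_{n_steps} of a discrete-time Markov chain.

    p0 : initial PMF, p0[j] = P(X_0 = j)
    P  : transition probability matrix, P[i][j] = P(X_{n+1} = j | X_n = i)
    """
    rng = random.Random(seed)
    states = range(len(p0))
    x = rng.choices(states, weights=p0)[0]        # draw X_0 from the initial PMF
    path = [x]
    for _ in range(n_steps):
        x = rng.choices(states, weights=P[x])[0]  # draw X_{n+1} from row X_n of P
        path.append(x)
    return path

# Hypothetical 2-state chain (0 = "sunny", 1 = "rainy"):
P = [[0.9, 0.1],
     [0.5, 0.5]]
path = simulate_dtmc([1.0, 0.0], P, 1000)
```

Note that only the current state is needed to draw the next one, which is exactly the Markov property.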
e.g. Binomial counting process: S_n = S_{n-1} + X_n, with X_n ~ Bernoulli(p).

[State diagram: states 0, 1, 2, ..., k, k+1, ...; each state has a self-loop with probability 1-p and a transition to the next state with probability p.]
n-step Transition Probabilities: p_ij(n) ≜ P(X_n = j | X_0 = i). Conditioning on the intermediate state k (e.g. i at time 0, k at time 1, j at time 2) gives p_ij(2) = Σ_k p_ik p_kj, and in general the n-step TPM is P(n) = P^n.
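The n-step TPM can be computed by repeated matrix multiplication. A small sketch with plain Python lists (the 2-state matrix is an arbitrary example):

```python
def mat_mul(A, B):
    """Product of two square matrices stored as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step_tpm(P, n):
    """P(n) = P^n: entry (i, j) is the n-step probability p_ij(n)."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]  # P^0 = I
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.9, 0.1],
     [0.5, 0.5]]
P2 = n_step_tpm(P, 2)
# Two-step sum over the intermediate state:
# p_01(2) = p_00*p_01 + p_01*p_11 = 0.09 + 0.05 = 0.14
```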
State Probabilities: the PMF at any time n can be obtained from the initial PMF and the TPM: p(n) = p(0) P^n, i.e. p_j(n) = Σ_i p_i(0) p_ij(n).
Steady-State Probabilities: In some cases the probabilities p_j(n) approach a fixed point as n → ∞. Not all Markov chains settle into a steady state, e.g. the binomial counting process, whose state grows without bound.
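Convergence to the fixed point can be seen by iterating p(n+1) = p(n) P. A sketch, reusing the same arbitrary 2-state matrix as above (for that matrix the fixed point works out to (5/6, 1/6)):

```python
def step_pmf(p, P):
    """One step of the recursion p(n+1) = p(n) P."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.5, 0.5]]
p = [1.0, 0.0]            # start surely in state 0
for _ in range(200):
    p = step_pmf(p, P)
# p has converged to the fixed point pi satisfying pi = pi P, here (5/6, 1/6)
```

Starting from [0.0, 1.0] instead gives the same limit, illustrating that the steady state forgets the initial PMF.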
Classification of States (Discrete-Time Markov Chains)

* State j is accessible from state i if p_ij(n) > 0 for some n ≥ 0.
* States i and j communicate if i is accessible from j and j from i. This is denoted i ↔ j.
* i ↔ i (reflexivity).
* i ↔ j and j ↔ k ⇒ i ↔ k (transitivity).

Class: States i and j belong to the same class if i ↔ j. If S is the set of states, then for any Markov chain S is the union of disjoint classes. If a Markov chain has only one class, it is called irreducible.
Recurrence Properties: Let f_i ≜ P(X_n ever returns to i | X_0 = i). If f_i = 1, i is termed recurrent. If f_i < 1, i is termed transient. If i is recurrent, X_0 = i ⇒ an infinite number of returns to i. If i is transient, X_0 = i ⇒ a finite number of returns to i.
If i is recurrent and i ∈ Class k, then all j ∈ Class k are recurrent. If i is transient, all j in its class are transient, i.e. recurrence and transience are class properties. States of an irreducible Markov chain are therefore either all transient or all recurrent. If the number of states is finite, all states cannot be transient ⇒ all states in a finite-state irreducible Markov chain are recurrent.

Periodicity: If for state i, p_ii(n) = 0 except when n is a multiple of d, where d is the largest such integer, i is said to have period d. Period is also a class property. An irreducible Markov chain is aperiodic if all of its states have period 1.
[Diagrams: a non-irreducible chain on states 1-5 split into Class 1 (transient) and Class 2 (recurrent); an irreducible chain on states 1-5; the binomial counting chain on states 0, 1, 2, ..., k, k+1, ... as a non-irreducible Markov chain.]
[Diagram: a typical periodic Markov chain on states 0, 1, 2, 3: 0 → 1 with probability 1; 1 → 0 and 1 → 2 with probability 1/2 each; 2 → 3 and 3 → 0 with probability 1.]

Recurrence times for states 0, 1 = {2, 4, 6, 8, ...}; recurrence times for states 2, 3 = {4, 6, 8, ...} ⇒ period = 2.
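The period of a state can be computed as the gcd of all n with p_ii(n) > 0. A sketch checking this for a four-state chain consistent with the recurrence times above (0 → 1 surely; 1 → 0 or 1 → 2 with probability 1/2 each; 2 → 3 and 3 → 0 surely):

```python
from math import gcd

def period(P, i, max_n=50):
    """Period of state i: gcd of all n <= max_n with p_ii(n) > 0."""
    n_states = len(P)
    Pn = [row[:] for row in P]                     # Pn holds P^n, starting at P^1
    d = 0
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)                          # gcd(0, n) = n handles the first hit
        Pn = [[sum(Pn[r][k] * P[k][c] for k in range(n_states))
               for c in range(n_states)] for r in range(n_states)]
    return d

P = [[0,   1,   0,   0],
     [0.5, 0,   0.5, 0],
     [0,   0,   0,   1],
     [1,   0,   0,   0]]
# Both states have period 2, even though their return-time sets differ.
```

Truncating at max_n only works because period is determined by the small return times here; it is a numerical check, not a proof.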
Let X_0 = i where i is a recurrent state. Define T_i(k) ≜ interval between the (k-1)th and kth returns to i. By the law of large numbers, π_i = 1/E(T_i), where π_i is the long-term fraction of time spent in state i.

i positive recurrent: E(T_i) < ∞, π_i > 0.
i null recurrent: E(T_i) = ∞, π_i = 0 (e.g. all states in a random walk with p = 0.5).
i is ergodic if it is positive recurrent and aperiodic.
Ergodic Markov chain: an irreducible, aperiodic, positive recurrent MC.
Limiting Probabilities: the π_j's satisfy the rule for the stationary state PMF:

π_j = Σ_i π_i p_ij, with Σ_j π_j = 1      (A)

This is because the long-term proportion of time in which j follows i = (long-term proportion of time in i) · P(i → j) = π_i p_ij, and the long-term proportion of time in j = Σ_i (long-term proportion of time in which j follows i) = Σ_i π_i p_ij = π_j.
Theorem: For an irreducible, aperiodic and positive recurrent Markov chain, lim_{n→∞} p_ij(n) = π_j for all i, where π_j is the unique non-negative solution of (A). i.e. steady-state probability of j = stationary state PMF = long-term fraction of time in j ⇒ ergodicity.
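Ergodicity can be checked empirically: the fraction of time a single long sample path spends in each state should match the stationary PMF. A sketch using the same arbitrary 2-state chain as earlier, whose stationary PMF solves (A) as π = (5/6, 1/6):

```python
import random

def long_run_fractions(P, n_steps, seed=0):
    """Fraction of time a simulated path spends in each state."""
    rng = random.Random(seed)
    n = len(P)
    state = 0
    counts = [0] * n
    for _ in range(n_steps):
        counts[state] += 1
        state = rng.choices(range(n), weights=P[state])[0]
    return [c / n_steps for c in counts]

P = [[0.9, 0.1],
     [0.5, 0.5]]
frac = long_run_fractions(P, 100_000)
# frac is close to the stationary PMF (5/6, 1/6)
```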
Continuous-Time Markov Chains

Transition Probabilities: P(X(s+t) = j | X(s) = i) = P(X(t) = j | X(0) = i) ≜ p_ij(t), t ≥ 0, i.e. the transition probabilities depend only on t, not on s (time-invariant transition probabilities ⇒ homogeneous). P(t) = TPM = matrix of p_ij(t) for all i, j. Clearly P(0) = I (identity matrix).
Ex 8.12: Poisson process. For a small increment δ, p_{j,j+1}(δ) ≈ λδ and p_{jj}(δ) ≈ 1 − λδ: the process can only transition from j to j+1 or remain in j, because δ is too small for 2 or more transitions.
State Occupancy Times: the time T_i spent in state i before a transition is exponentially distributed (by the memoryless property of the Markov process), T_i ~ exponential(ν_i), so E(T_i) = 1/ν_i.
Embedded Markov Chains: Consider a continuous-time Markov chain with state occupancy times T_i. The corresponding embedded Markov chain is a discrete-time MC with the same states as the original MC. Each time state i is entered, a T_i ~ exponential(ν_i) is chosen. After T_i has elapsed, a new state j is transitioned to with probability q_ij, which depends on the original MC (q_ij = γ_ij / ν_i in terms of the transition rates below). This is very useful in generating Markov chains in simulations.
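The simulation recipe just described can be sketched directly: alternate between drawing an exponential occupancy time and drawing the next state from the embedded chain. The 2-state on/off process at the bottom is a made-up example:

```python
import random

def simulate_ctmc(q, nu, x0, t_end, seed=0):
    """Simulate a continuous-time MC via its embedded chain.

    q  : embedded-chain TPM, q[i][j] = P(next state = j | leaving i)
    nu : nu[i] = rate of the exponential occupancy time of state i
    Returns the jump times and the states entered at those times.
    """
    rng = random.Random(seed)
    t, state = 0.0, x0
    times, states = [0.0], [x0]
    while t < t_end:
        t += rng.expovariate(nu[state])                     # hold for T_i ~ exp(nu_i)
        state = rng.choices(range(len(q)), weights=q[state])[0]  # jump per q_ij
        times.append(t)
        states.append(state)
    return times, states

# Hypothetical 2-state on/off process: always switch, rates nu = (1.0, 2.0).
q = [[0, 1],
     [1, 0]]
times, states = simulate_ctmc(q, [1.0, 2.0], 0, 100.0)
```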
Transition Rates: ν_i ≜ rate at which the process leaves state i (so T_i ~ exponential(ν_i)); γ_ij ≜ ν_i q_ij = rate at which the process moves from state i to state j, with ν_i = Σ_{j≠i} γ_ij.
State Probabilities: let p_j(t) ≜ P(X(t) = j). In terms of the transition rates,

p_j'(t) = Σ_{i≠j} γ_ij p_i(t) − ν_j p_j(t)
This is a system of Chapman-Kolmogorov equations. These are solved for each p_j(t) using the initial PMF p(0) = [p_0(0) p_1(0) p_2(0) ...]. Note: if we start with p_i(0) = 1 and p_j(0) = 0 for all j ≠ i, then p_j(t) ≡ p_ij(t) ⇒ the C-K equations can be used to find the TPM P(t).
Steady-State Probabilities: If p_j(t) → p_j for each j as t → ∞, the system reaches equilibrium (steady state). Then p_j'(t) = 0, giving the Global Balance Equations (GBE):

ν_j p_j = Σ_{i≠j} γ_ij p_i

Solve these equations for all j (together with Σ_j p_j = 1) to obtain the p_j's, the equilibrium PMF. The GBE states that, at equilibrium, the rate of probability flow out of j (LHS) = the rate of probability flow into j (RHS).
Example: M/M/1 queue (Poisson arrivals / exponential service times / 1 server), arrival rate = λ, service rate = µ.

γ_{i,i+1} = λ, i = 0, 1, 2, ... (i customers → i+1 customers)
γ_{i,i-1} = µ, i = 1, 2, 3, ... (i customers → i-1 customers)

[State diagram: states 0, 1, 2, ..., j, j+1, ...; arrows j → j+1 labeled λ and j+1 → j labeled µ.]
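For the M/M/1 queue the GBE between adjacent states reduces to λ p_j = µ p_{j+1}, which gives the well-known geometric solution p_j = (1 − ρ) ρ^j with ρ = λ/µ < 1. A sketch (the rates λ = 2, µ = 4 are arbitrary example values):

```python
def mm1_pmf(lam, mu, n_max):
    """Steady-state PMF of the M/M/1 queue, truncated at n_max customers.

    Balance across the boundary between j and j+1: lam * p_j = mu * p_{j+1},
    so p_{j+1} = rho * p_j with rho = lam/mu, giving p_j = (1 - rho) * rho**j.
    """
    rho = lam / mu
    assert rho < 1, "stable only if lam < mu"
    return [(1 - rho) * rho ** j for j in range(n_max + 1)]

p = mm1_pmf(lam=2.0, mu=4.0, n_max=30)
# Mean number in system approaches rho / (1 - rho) = 1 as n_max grows
```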
Ex: Birth-death processes

[State diagram: states 0, 1, 2, 3, ..., j, j+1, ...; arrows j → j+1 labeled λ_j and j+1 → j labeled µ_{j+1}.]

λ_j = birth rate at state j; µ_j = death rate at state j.
Theorem: Given a CT MC X(t) with associated embedded MC [q_ij] with steady-state PMF π_j: if [q_ij] is irreducible and positive recurrent, the long-term fraction of time spent by X(t) in state i is

p_i = (π_i / ν_i) / Σ_j (π_j / ν_j)

which is also the unique solution to the GBEs.
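The formula weights each embedded-chain visit probability π_i by the mean occupancy time 1/ν_i and renormalizes. A sketch, applied to the hypothetical 2-state on/off process used earlier (always switch, so π = (1/2, 1/2); rates ν = (1.0, 2.0)):

```python
def ct_fractions(pi, nu):
    """Long-term fraction of time in each state of a CT MC:
    p_i = (pi_i / nu_i) / sum_j (pi_j / nu_j)."""
    w = [pi_i / nu_i for pi_i, nu_i in zip(pi, nu)]  # visits weighted by mean holding time
    total = sum(w)
    return [wi / total for wi in w]

# Embedded chain alternates -> pi = (1/2, 1/2); mean occupancy times 1 and 1/2.
p = ct_fractions([0.5, 0.5], [1.0, 2.0])
# p = [2/3, 1/3]: state 0 is held twice as long per visit, so it gets twice the time
```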