Markov Chains (2)
Outline
- Discrete Time Markov Chain (DTMC)
- Continuous Time Markov Chain (CTMC)
Let p_j(n) denote the pmf of the random variable X_n: p_j(n) = P(X_n = j). We will only be concerned with homogeneous Markov chains. For such chains, we use the following notation to denote the n-step transition probabilities:

p_jk(n) = P(X_{m+n} = k | X_m = j)

The one-step transition probabilities p_jk(1) are simply written as p_jk, thus:

p_jk = p_jk(1) = P(X_n = k | X_{n-1} = j)
The pmf of the random variable X_0, often called the initial probability vector, is specified as

p(0) = [p_0(0), p_1(0), ...]

The one-step transition probabilities are compactly specified in the form of a transition probability matrix P = [p_ij]:

P = [ p_00  p_01  p_02  ... ]
    [ p_10  p_11  p_12  ... ]
    [  ...   ...   ...      ]

The entries of the matrix P satisfy the following two properties:

0 <= p_ij <= 1, for all i, j in I; and Σ_j p_ij = 1, for all i in I.
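As a quick sanity check, the two properties can be verified numerically. The 3-state matrix below is an illustrative example, not one from the text:

```python
import numpy as np

# Hypothetical one-step transition matrix P for a 3-state chain
# (entries chosen for illustration only).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
])

# Property 1: every entry p_ij lies in [0, 1].
assert np.all((P >= 0) & (P <= 1))

# Property 2: every row sums to 1 (from each state i, the chain
# must move to some state j with total probability one).
assert np.allclose(P.sum(axis=1), 1.0)
```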
An equivalent description of the one-step transition probabilities can be given by a directed graph called the state transition diagram (state diagram for short) of the Markov chain. A node labeled i of the state diagram represents state i of the Markov chain, and a branch labeled p_ij from node i to node j implies that the conditional probability is

P[X_n = j | X_{n-1} = i] = p_ij
Example: Two states

Suppose a person can be in one of two states, "healthy" or "sick". Let X(n), n = 0, 1, 2, ..., denote the state ("healthy" or "sick") at time n.
The corresponding DTMC can be shown by a state diagram or, equivalently, by a transition probability matrix (given on the slide).
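A minimal sketch of the two-state chain in Python, assuming illustrative transition probabilities (the slide's actual values are not given in the text); it samples a random healthy/sick trajectory:

```python
import numpy as np

# Two-state DTMC for the healthy/sick example.
# State 0 = "healthy", state 1 = "sick".
# The probabilities below are assumed for illustration only.
P = np.array([
    [0.9, 0.1],   # healthy -> healthy, healthy -> sick
    [0.6, 0.4],   # sick    -> healthy, sick    -> sick
])

rng = np.random.default_rng(0)

def simulate(P, start, steps, rng):
    """Sample a path of the chain from `start` for `steps` transitions."""
    path = [start]
    for _ in range(steps):
        # Draw the next state using the current state's row of P.
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

path = simulate(P, start=0, steps=10, rng=rng)
print(path)  # a random trajectory of 0s (healthy) and 1s (sick)
```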
We are interested in obtaining an expression for evaluating the n-step transition probabilities from the one-step probabilities. If we let P(n) be the matrix whose (i, j) entry is p_ij(n), that is, let P(n) be the matrix of n-step transition probabilities, then we can write

P(n) = P · P(n-1) = P^n

Thus the matrix of n-step transition probabilities is obtained by multiplying the matrix of one-step transition probabilities by itself n-1 times.
We can obtain the pmf of the random variable X_n from the n-step transition probabilities and the initial probability vector as follows:

p(n) = p(0) P(n) = p(0) P^n

This implies that the step-dependent probability vectors of a homogeneous Markov chain are completely determined by the one-step transition probability matrix P and the initial probability vector p(0).
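A short numerical check of p(n) = p(0) P^n, again using assumed healthy/sick probabilities (not values from the text):

```python
import numpy as np

# Assumed one-step matrix (illustrative healthy/sick values).
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])

p0 = np.array([1.0, 0.0])          # start "healthy" with probability 1

# p(n) = p(0) P^n : the state pmf after n steps.
n = 5
pn = p0 @ np.linalg.matrix_power(P, n)

assert np.isclose(pn.sum(), 1.0)   # p(n) is still a pmf
print(pn)
```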
Example: Stock Exchange
A state i is said to be transient iff there is a positive probability that the process will not return to this state. A state i is said to be recurrent iff, starting from i, the process eventually returns to state i with probability one. For a recurrent state i with p_ii(n) > 0 for some n >= 1, define the period of state i, denoted d_i, as the greatest common divisor (gcd) of the set of positive integers n such that p_ii(n) > 0.
A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the period of a state is defined as

k = gcd{ n > 0 : Pr(X_n = i | X_0 = i) > 0 }

A recurrent state i is said to be aperiodic if its period d_i = 1, and periodic if d_i > 1. A state i is said to be an absorbing state iff p_ii = 1.
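The period can be computed numerically as the gcd of the return times found from powers of P. A sketch, using a deterministic 3-cycle and an assumed aperiodic variant as test cases:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_n=50):
    """Period of state i: gcd of all n <= max_n with p_ii(n) > 0.

    A finite-horizon sketch: scanning n up to max_n suffices for
    small chains, since return times recur with a bounded pattern.
    """
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            returns.append(n)
    return reduce(gcd, returns) if returns else 0

# A deterministic 3-cycle: returns to any state occur only at
# multiples of 3, so every state has period 3.
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(period(C, 0))  # -> 3

# Adding a self-loop makes state 0 aperiodic (period 1).
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(period(A, 0))  # -> 1
```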
Two states i and j communicate if directed paths from i to j and vice versa exist. A Markov chain is said to be irreducible if every state can be reached from every other state in a finite number of steps. In other words, for all i, j in I, there is an integer n >= 1 such that p_ij(n) > 0.
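Irreducibility can be checked by testing reachability between all pairs of states. A sketch, using the fact that in an m-state chain any reachable state is reachable within m-1 steps:

```python
import numpy as np

def is_irreducible(P):
    """Check that every state reaches every other state.

    Accumulate I + P + P^2 + ... + P^{m-1}; every off-diagonal pair
    (i, j) communicates iff the corresponding entry is positive.
    (Including I makes the diagonal trivially positive, which is
    harmless for this pairwise-reachability check.)
    """
    m = len(P)
    reach = np.eye(m)
    Pn = np.eye(m)
    for _ in range(m - 1):
        Pn = Pn @ P
        reach = reach + Pn
    return bool(np.all(reach > 0))

# Irreducible 2-state chain: both states communicate.
P1 = np.array([[0.9, 0.1],
               [0.6, 0.4]])
print(is_irreducible(P1))   # -> True

# Reducible chain: state 1 is absorbing, so state 0 is never re-entered.
P2 = np.array([[0.5, 0.5],
               [0.0, 1.0]])
print(is_irreducible(P2))   # -> False
```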
Continuous Time Markov Chain (CTMC)

As in DTMCs, we confine our attention to discrete-state processes. This implies that, although the parameter t has a continuous range of values, the set of values x(t) is discrete. Recall the definition of a discrete-state, continuous-time stochastic process stated in class, which satisfies the Markov property

P[X(t) = x | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, ..., X(t_0) = x_0] = P[X(t) = x | X(t_n) = x_n]

for t > t_n > t_{n-1} > ... > t_0. The behavior of the process is characterized by (1) the initial state probabilities given by the pmf of X(t_0),

π_k(t_0) = P(X(t_0) = k), k = 0, 1, 2, ...

and (2) the transition probabilities

p_ij(v, t) = P(X(t) = j | X(v) = i)
Let π_j(t) denote the pmf of X(t) (or the state probabilities at time t):

π_j(t) = P(X(t) = j), j = 0, 1, 2, ...; t >= 0

It is clear that Σ_j π_j(t) = 1 for any t >= 0, since at any given time the process must be in some state.
Using the theorem of total probability, for given t > v, we can express the pmf of X(t) in terms of the transition probabilities p_ij(v, t) and the pmf of X(v):

π_j(t) = P(X(t) = j) = Σ_i P(X(t) = j | X(v) = i) P(X(v) = i) = Σ_i p_ij(v, t) π_i(v)

If we let v = 0, then

π_j(t) = Σ_i p_ij(0, t) π_i(0)
If we let π(t) = [π_0(t), π_1(t), ...], then in matrix form we have

dπ(t)/dt = π(t) Q

where Q is the infinitesimal generator matrix containing the transition rates q_ij from any state i to any other state j (i ≠ j) of a given CTMC. The elements on the main diagonal of Q are defined by

q_ii = -Σ_{j≠i} q_ij
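For intuition, the forward equation dπ(t)/dt = π(t) Q can be integrated numerically. A minimal Euler sketch with an assumed two-state generator (the rates are chosen for illustration, not taken from the text):

```python
import numpy as np

# Assumed 2-state CTMC: state 0 -> 1 at rate 2.0, state 1 -> 0 at
# rate 3.0. Each diagonal entry is q_ii = -sum_{j != i} q_ij, so the
# rows of Q sum to zero.
Q = np.array([[-2.0,  2.0],
              [ 3.0, -3.0]])

def transient_pmf(Q, pi0, t, steps=10000):
    """Integrate d pi/dt = pi Q by small Euler steps.

    A simple sketch; in practice one would use a matrix exponential
    or a proper ODE solver.
    """
    dt = t / steps
    pi = np.asarray(pi0, dtype=float)
    for _ in range(steps):
        pi = pi + (pi @ Q) * dt
    return pi

pi_t = transient_pmf(Q, [1.0, 0.0], t=1.5)   # start in state 0
assert np.isclose(pi_t.sum(), 1.0)           # still a pmf
print(pi_t)  # approaches this chain's steady-state vector
```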
If, for a given CTMC, the steady-state probabilities π_j = lim_{t→∞} π_j(t) exist, they are independent of time, and we immediately get

lim_{t→∞} dπ(t)/dt = 0

Determining the unconditional state probabilities then resolves to the much simpler system of linear equations

Σ_i π_i q_ij = 0, j ∈ S

In matrix form, we accordingly get

π Q = 0

together with the normalization condition Σ_j π_j = 1.
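The linear system π Q = 0 with Σ_j π_j = 1 can be solved directly by replacing one balance equation with the normalization constraint (π Q = 0 alone is rank-deficient). A sketch with an assumed two-state generator:

```python
import numpy as np

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for an ergodic CTMC.

    Transpose so the unknown pi is a column vector, then replace the
    last balance equation by the normalization constraint.
    """
    m = len(Q)
    A = Q.T.copy()
    A[-1, :] = 1.0            # last equation: sum_j pi_j = 1
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Assumed generator: 0 -> 1 at rate 2.0, 1 -> 0 at rate 3.0.
Q = np.array([[-2.0,  2.0],
              [ 3.0, -3.0]])
pi = steady_state(Q)
print(pi)  # -> [0.6 0.4]
```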
Example: Discussion of the steady-state solution of a CTMC with states 1, 2, 3 in class (state diagram given on the slide).