Markov Chains: Absorption
Hamid R. Rabiee
Absorbing Markov Chain
An absorbing state is one in which the probability that the process remains in that state once it enters it is 1 (i.e., p_ii = 1). A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to reach an absorbing state (not necessarily in one step).
[Figure: a chain with states 0 through 4; states 0 and 4 are absorbing.]
The canonical form
By separating transient (TR) and absorbing (ABS) states, the transition matrix of any absorbing Markov chain can be written as:

    P = | Q  R |
        | 0  I |

where Q holds transitions among transient states, R holds transitions from transient to absorbing states, 0 is a block of zeros, and I is an identity block. As time passes we can see that:

    P^n = | Q^n   (I + Q + ... + Q^(n-1)) R |
          | 0     I                         |
Absorption theorem
In an absorbing MC the probability that the process will be absorbed is 1 (i.e., Q^n -> 0 as n -> infinity).
Proof sketch: By definition of an absorbing MC, there exists a path from any non-absorbing state s_j to an absorbing state. So there is a positive probability p_j of taking this path every time the process starts from s_j. Therefore there exist p < 1 and m such that the probability of not being absorbed after m steps is at most p. After km steps the probability of not being absorbed is at most p^k, and as time goes to infinity this probability approaches zero.
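The claim Q^n -> 0 can be checked numerically. A minimal sketch in plain Python, using the Q block of the 5-state random-walk example that appears later in these slides:

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Q: transitions among the transient states 1, 2, 3 of the 5-state walk.
Q = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.0]]

Qn = [row[:] for row in Q]
for _ in range(19):          # compute Q^20
    Qn = matmul(Qn, Q)

largest = max(max(row) for row in Qn)
print(largest)               # every entry of Q^20 is already below 1e-3
```

Raising the exponent further drives every entry toward zero, matching the theorem.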
The Fundamental Matrix
Definition: For an absorbing Markov chain with transition matrix P (in canonical form), the matrix N = (I - Q)^(-1) is called the fundamental matrix for P.
Theorem: For an absorbing MC the matrix I - Q has an inverse N, and N = I + Q + Q^2 + ... . The ij-entry n_ij of the matrix N is the expected number of times the chain is in state s_j, given that it starts in state s_i.
Proof: (I - Q)x = 0 implies x = Qx, hence x = Q^n x for all n. Since Q^n -> 0, we have Q^n x -> 0, so x = 0. Thus x = 0 is the only point in the nullspace of I - Q, and therefore N = (I - Q)^(-1) exists.
(I - Q)(I + Q + Q^2 + ... + Q^n) = I - Q^(n+1), so I + Q + Q^2 + ... + Q^n = N(I - Q^(n+1)).
Letting n tend to infinity we have: N = I + Q + Q^2 + ...
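The series identity above is easy to verify numerically. A small sketch, again using the Q of the 5-state random-walk example from these slides: the partial sums I + Q + ... + Q^n converge to (I - Q)^(-1).

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

Q = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.0]]
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# Partial sum S = I + Q + Q^2 + ... + Q^100.
S, Qk = I, I
for _ in range(100):
    Qk = matmul(Qk, Q)
    S = matadd(S, Qk)

# The known fundamental matrix N = (I - Q)^(-1) for this chain:
N = [[1.5, 1.0, 0.5], [1.0, 2.0, 1.0], [0.5, 1.0, 1.5]]
err = max(abs(S[i][j] - N[i][j]) for i in range(3) for j in range(3))
print(err)   # negligibly small: the partial sums have converged to N
```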
Proof (cont'd): Consider two transient states s_i and s_j, and suppose that s_i is the initial state. Let X^(k) be a random variable which equals 1 if the chain is in state s_j after k steps, and equals 0 otherwise. We have: P(X^(k) = 1) = (Q^k)_ij. The expected number of times the chain is in state s_j in the first n steps, given that it starts in state s_i, is:
E[X^(0) + X^(1) + ... + X^(n)] = (Q^0)_ij + (Q^1)_ij + ... + (Q^n)_ij
As n goes to infinity we have:
E[X^(0) + X^(1) + ...] = (Q^0)_ij + (Q^1)_ij + ... = N_ij
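The interpretation of n_ij as an expected visit count can also be checked by simulation. A sketch for the 5-state walk of the upcoming example, estimating n_22 (the expected number of time steps the walk started at state 2 spends at state 2, counting the start); its exact value is 2:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def visits_to(start, target, trials=20000):
    """Average number of time steps spent in `target` before absorption,
    for the 5-state walk with absorbing states 0 and 4."""
    total = 0
    for _ in range(trials):
        state = start
        while state not in (0, 4):          # run until absorbed
            if state == target:
                total += 1                   # count occupancy, start included
            state += random.choice((-1, 1))  # step left or right, prob 1/2
    return total / trials

estimate = visits_to(2, 2)
print(estimate)   # close to the exact value n_22 = 2
```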
Example: Consider the following Markov chain (1-D random walk with 5 states, where states 0 and 4 are absorbing and each interior state moves left or right with probability 1/2):
[Figure: states 0 through 4 in a line.]
The transition matrix in canonical form (transient states 1, 2, 3 first, then absorbing states 0 and 4) is:

          1    2    3    0    4
      1 |  0   1/2   0  | 1/2   0  |
      2 | 1/2   0   1/2 |  0    0  |
      3 |  0   1/2   0  |  0   1/2 |
      0 |  0    0    0  |  1    0  |
      4 |  0    0    0  |  0    1  |

so Q is the upper-left 3x3 block and R is the upper-right 3x2 block.
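A quick sanity check that this canonical form is a valid transition matrix, assembling P = [[Q, R], [0, I]] from its blocks in plain Python:

```python
# Blocks of the canonical form for the 5-state walk (transient: 1, 2, 3).
Q = [[0, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0]]   # transient -> transient
R = [[0.5, 0], [0, 0], [0, 0.5]]                 # transient -> absorbing

# Assemble P = [[Q, R], [0, I]]; the last two rows are the absorbing states.
P = [q + r for q, r in zip(Q, R)] + [[0, 0, 0, 1, 0], [0, 0, 0, 0, 1]]

row_sums = [sum(row) for row in P]
print(row_sums)   # every row sums to 1, as a transition matrix must
```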
Example (cont'd): Here

    N = (I - Q)^(-1) = | 3/2   1   1/2 |
                       |  1    2    1  |
                       | 1/2   1   3/2 |

If we start in state 2, then the expected numbers of times in states 1, 2, and 3 before being absorbed are 1, 2, and 1.
Time to Absorption
Question: Given that the chain starts in state s_i, what is the expected number of steps before the chain is absorbed?
Reminder: Starting from s_i, the expected number of steps the process will be in state s_j before absorption is N_ij. Therefore sum_j N_ij is the expected number of steps before absorption.
Theorem: Let t_i be the expected number of steps before the chain is absorbed, given that the chain starts in state s_i, and let t be the column vector whose i-th entry is t_i. Then t = Nc, where c is a column vector all of whose entries are 1.
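For the random-walk example, t = Nc is just the vector of row sums of N. A sketch with exact rational arithmetic:

```python
from fractions import Fraction as F

# Fundamental matrix N for the 5-state walk (exact rational entries).
N = [[F(3, 2), F(1), F(1, 2)],
     [F(1),    F(2), F(1)],
     [F(1, 2), F(1), F(3, 2)]]

# t = N c with c the all-ones column vector, i.e. t_i = sum_j N_ij.
t = [sum(row) for row in N]
print(t)   # [3, 4, 3]: expected steps to absorption from states 1, 2, 3
```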
Absorption Probabilities
Question: Given that the chain starts in the transient state s_i, what is the probability that it will be absorbed in the absorbing state s_j?
Intuition: Starting from s_i, the expected number of times the process will be in state s_k before absorption is N_ik. Each such time, the probability of moving to state s_j is R_kj (the kj-th element of the matrix R introduced in the canonical form).
Absorption Probabilities (cont'd)
Theorem: Let B_ij be the probability that an absorbing chain will be absorbed in the absorbing state s_j if it starts in the transient state s_i, and let B be the matrix with entries B_ij. Then B is a t-by-r matrix (t transient states, r absorbing states), and B = NR, where N is the fundamental matrix and R is as in the canonical form.
Proof:
B_ij = sum_n sum_k (Q^n)_ik R_kj = sum_k sum_n (Q^n)_ik R_kj = sum_k N_ik R_kj = (NR)_ij
Example: In the previous example (1-D random walk with 5 states) we found that:

    N = | 3/2   1   1/2 |
        |  1    2    1  |
        | 1/2   1   3/2 |

Hence t = Nc = (3, 4, 3)^T: the expected numbers of steps before absorption when the process starts from states 1, 2, 3 are 3, 4, and 3 respectively.
Example (cont'd): From the canonical form:

    R = | 1/2   0  |
        |  0    0  |
        |  0   1/2 |

Hence

    B = NR = | 3/4  1/4 |
             | 1/2  1/2 |
             | 1/4  3/4 |

Here the first row tells us that, starting from state 1, there is probability 3/4 of absorption in state 0 and 1/4 of absorption in state 4.
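The product B = NR can be reproduced with exact arithmetic; a sketch in plain Python using fractions:

```python
from fractions import Fraction as F

N = [[F(3, 2), F(1), F(1, 2)],
     [F(1),    F(2), F(1)],
     [F(1, 2), F(1), F(3, 2)]]
R = [[F(1, 2), F(0)],
     [F(0),    F(0)],
     [F(0),    F(1, 2)]]

# B = N R: absorption probabilities into states 0 and 4.
B = [[sum(N[i][k] * R[k][j] for k in range(3)) for j in range(2)]
     for i in range(3)]
print(B)   # [[3/4, 1/4], [1/2, 1/2], [1/4, 3/4]]
```

Each row of B sums to 1, consistent with absorption being certain.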
References
Grinstead C. M. and Snell J. L., Introduction to Probability, American Mathematical Society, 1997.