Tutorial 13: Markov chains

"The future starts today, not tomorrow." (Pope John Paul II)

A sequence of trials of an experiment is a finite Markov chain if:
- the outcome of each experiment is one of a finite set of states $\Omega = \{i_1, i_2, \dots, i_n\}$;
- the outcome of an experiment depends only on the present state, and not on any past states:
\[
P(X_{k+1} = j_{k+1} \mid X_k = j_k, X_{k-1} = j_{k-1}, \dots, X_0 = j_0) = P(X_{k+1} = j_{k+1} \mid X_k = j_k)
\]
for arbitrary states $j_0, j_1, \dots, j_{k+1}$ from $\Omega$.

We will work with time-homogeneous Markov chains, i.e. chains whose transition probabilities do not depend on $k$. They are collected in the probability transition matrix
\[
P = \begin{pmatrix} P_{11} & P_{12} & \dots & P_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ P_{n1} & P_{n2} & \dots & P_{nn} \end{pmatrix},
\qquad \text{where } P_{ij} = P(X_{k+1} = j \mid X_k = i).
\]

The probability that the system is in state $i$ after $k$ steps is denoted by
\[
p_i(k) = P(X_k = i),
\]
so the random variable $X_k$ has the distribution
\[
X_k : \begin{pmatrix} i_1 & i_2 & \dots & i_n \\ p_{i_1}(k) & p_{i_2}(k) & \dots & p_{i_n}(k) \end{pmatrix}.
\]
One has the property
\[
p_i(k) = \sum_{j=1}^{n} p_j(k-1) \, P_{ji},
\]
or, written in vector form,
\[
p(k) = p(k-1) \, P.
\]
Suppose a Markov chain has initial probability vector
\[
p(0) = (p_{i_1}(0), p_{i_2}(0), \dots, p_{i_n}(0))
\]
and transition matrix $P$; then the probability vector after $n$ repetitions (steps) of the experiment is
\[
p(n) = p(0) \, P^n.
\]
The following identity holds for arbitrary states $j_0, \dots, j_k$:
\[
P(X_0 = j_0, X_1 = j_1, X_2 = j_2, \dots, X_k = j_k) = p_{j_0}(0) \, P_{j_0 j_1} P_{j_1 j_2} \cdots P_{j_{k-1} j_k}.
\]

Absorbing Markov chains

- A state $i$ is absorbing if $p_{ii} = 1$.
- A Markov chain is an absorbing Markov chain if it has at least one absorbing state and it is possible to go from any nonabsorbing state to an absorbing state.
- Let $P$ be the transition matrix of an absorbing Markov chain. Rearrange the rows and columns so that the absorbing states come first. Then $P$ has the form
\[
P = \begin{pmatrix} I & 0 \\ R & Q \end{pmatrix}.
\]
- The fundamental matrix is defined as $F = (I - Q)^{-1}$, and it can be shown that
\[
P^n \to \begin{pmatrix} I & 0 \\ FR & 0 \end{pmatrix}, \qquad n \to \infty.
\]
- The matrix $FR$ gives the probabilities that a particular initial nonabsorbing state will lead to a particular absorbing state.

Regular Markov chains

- A Markov chain is a regular Markov chain if its transition matrix is regular, i.e. some power of it has all entries positive.
- For a regular Markov chain there exists a unique probability vector $v$ such that for every probability vector $v_0$:
\[
v_0 P^n \to v, \qquad n \to \infty.
\]
- The vector $v$ is called the equilibrium vector and it gives the long-range trend of the Markov chain.
- The vector $v = (v_1, v_2, \dots, v_n)$ is found using the identities
\[
v P = v \qquad \text{and} \qquad v_1 + v_2 + \dots + v_n = 1.
\]
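The recursion $p(k) = p(k-1)P$, the power formula $p(n) = p(0)P^n$ and the path-probability identity above can be sketched in a few lines of plain Python. The two-state chain below is hypothetical, chosen only for illustration; it is not one of the tutorial's examples.

```python
def step(p, P):
    """One step of the chain: p(k) = p(k-1) P (row vector times matrix)."""
    n = len(p)
    return [sum(p[j] * P[j][i] for j in range(n)) for i in range(n)]

def distribution_after(p0, P, n):
    """p(n) = p(0) P^n, obtained by applying the recursion n times."""
    p = list(p0)
    for _ in range(n):
        p = step(p, P)
    return p

def path_probability(p0, P, path):
    """P(X_0=j_0, ..., X_k=j_k) = p_{j_0}(0) P_{j_0 j_1} ... P_{j_{k-1} j_k}."""
    prob = p0[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]
    return prob

# Hypothetical two-state chain (states indexed 0 and 1):
P = [[0.9, 0.1],
     [0.5, 0.5]]
p0 = [1.0, 0.0]                            # start surely in state 0
print(distribution_after(p0, P, 2))        # ≈ [0.86, 0.14]
print(path_probability(p0, P, [0, 0, 1]))  # 1.0 * 0.9 * 0.1 ≈ 0.09
```

Note that each row of $P$ sums to 1, so every `step` maps a probability vector to a probability vector.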
Solved problems

Problem 1. At the end of June 40% of the voters were registered as liberal, 45% as conservative, and 15% as independent. Over a one-month period, the liberals retained 80% of their constituency, while 15% switched to conservative and 5% to independent. The conservatives retained 70% and lost 30% to the liberals. The independents retained 60% and lost 20% each to the liberals and the conservatives. Assume that these trends continue.
a. Write a transition matrix using this information.
b. Find the percentage of each type of voter at the end of August.
c. If the elections are in October 2018, which party has the best chance of winning?

Solution:
- The transition matrix, using the states L (liberal), C (conservative) and I (independent), is
\[
P = \begin{pmatrix} 0.80 & 0.15 & 0.05 \\ 0.30 & 0.70 & 0 \\ 0.20 & 0.20 & 0.60 \end{pmatrix}.
\]
- Identify the initial probability vector $p(0) = (0.40, 0.45, 0.15)$.
- After two months the probability vector is computed using the formula $p(2) = p(0) P^2$.
- Observe that $P^2$ has all entries positive, so $P$ is regular and we have a regular Markov chain.
- Recall the main property of a regular Markov chain: there exists a unique probability vector $v$ such that for every probability vector $v_0$, $v_0 P^n \to v$ as $n \to \infty$.
- Find this equilibrium vector $v = (v_1, v_2, v_3)$, which gives the long-range trend of the Markov chain, from the equations
\[
v P = v \qquad \text{and} \qquad v_1 + v_2 + v_3 = 1.
\]
- The vector $v$ gives the situation in October 2018.
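The equilibrium vector of Problem 1 can be checked numerically: since the chain is regular, $v_0 P^n \to v$, so repeated multiplication by $P$ converges to $v$. This power-iteration sketch is an alternative to solving $vP = v$ by hand; the fractional values in the final comment are my own computation, to be verified against the hand solution.

```python
def step(p, P):
    """One step of the chain: p(k) = p(k-1) P."""
    n = len(p)
    return [sum(p[j] * P[j][i] for j in range(n)) for i in range(n)]

# Transition matrix of Problem 1 (states in the order L, C, I):
P = [[0.80, 0.15, 0.05],
     [0.30, 0.70, 0.00],
     [0.20, 0.20, 0.60]]

v = [0.40, 0.45, 0.15]   # initial vector p(0); any probability vector works
for _ in range(200):     # iterate until (numerically) stationary
    v = step(v, P)

# v ≈ (24/41, 14/41, 3/41) ≈ (0.5854, 0.3415, 0.0732),
# so in the long run the liberals hold the largest share.
print([round(x, 4) for x in v])
```

Since the limit does not depend on the starting vector, starting from, say, `[1, 0, 0]` gives the same equilibrium.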
Problem 2. A large group of mice is kept in a cage having connected compartments A, B, and C. Mice in compartment A move to B with probability 0.3 and to C with probability 0.4. Mice in B move to A or C with probability 0.2 and 0.25, respectively. The door of compartment C cannot be opened from the inside. Find the probability that a mouse from compartment A will eventually end up in compartment C.

Solution:
- The probability transition matrix of the attached Markov chain (states in the order A, B, C) is
\[
P = \begin{pmatrix} 0.3 & 0.3 & 0.4 \\ 0.2 & 0.55 & 0.25 \\ 0 & 0 & 1 \end{pmatrix}.
\]
- Thus C is an absorbing state ($p_{CC} = 1$), and since C can be reached from both nonabsorbing states A and B, this is an absorbing Markov chain.
- Rearrange the rows and columns so that the absorbing state comes first; the matrix then takes the canonical form $\begin{pmatrix} I & 0 \\ R & Q \end{pmatrix}$ (states in the order C, A, B):
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0.4 & 0.3 & 0.3 \\ 0.25 & 0.2 & 0.55 \end{pmatrix},
\qquad \text{so } R = \begin{pmatrix} 0.4 \\ 0.25 \end{pmatrix}, \quad Q = \begin{pmatrix} 0.3 & 0.3 \\ 0.2 & 0.55 \end{pmatrix}.
\]
- The fundamental matrix is
\[
F = (I - Q)^{-1} = \frac{1}{0.255}\begin{pmatrix} 0.45 & 0.3 \\ 0.2 & 0.7 \end{pmatrix} \approx \begin{pmatrix} 1.76 & 1.18 \\ 0.78 & 2.75 \end{pmatrix},
\qquad \text{and} \qquad
F R = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]
- Thus a mouse from compartment A will end up trapped in compartment C with probability 1. This is expected: C is the only absorbing state, so eventual absorption in C is certain. (Computing $FR$ with rounded entries of $F$ gives values like 0.99; the exact product is 1.)
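The arithmetic of Problem 2 can be checked in plain Python using the closed-form inverse of a 2x2 matrix (a sketch, not a general absorbing-chain solver):

```python
def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Canonical-form blocks from Problem 2 (nonabsorbing states A, B; absorbing state C):
Q = [[0.30, 0.30],
     [0.20, 0.55]]
R = [0.40, 0.25]          # column of one-step probabilities into C

I_minus_Q = [[1 - Q[0][0], -Q[0][1]],
             [-Q[1][0], 1 - Q[1][1]]]
F = inv2(I_minus_Q)       # fundamental matrix F = (I - Q)^{-1}
FR = [F[i][0] * R[0] + F[i][1] * R[1] for i in range(2)]
print(FR)                 # both entries ≈ 1.0: absorption in C is certain
```

The determinant of $I - Q$ is $0.7 \cdot 0.45 - 0.3 \cdot 0.2 = 0.255$, matching the fraction in the solution above.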
Proposed problems

Problem 1. Write the transition diagram corresponding to the transition matrix
\[
\begin{pmatrix} 0.5 & 0.3 & 0.2 \\ 0 & 1 & 0 \\ 0.2 & 0.2 & 0.6 \end{pmatrix}
\]
and, conversely, write the transition matrix corresponding to the given diagram.

Problem 2. At "Politehnica" University a student has a 15% chance of flunking out during a given year, a 25% chance of repeating the year, and a 60% chance of finishing the year. For a 3rd-year student the possible states are: 3rd-year student, 4th-year student, has flunked out, has graduated. Find a transition matrix. Find the probability that a 3rd-year student will graduate.

Problem 3. A market analyst is interested in whether consumers prefer Dell or Gateway computers. Two market surveys taken one year apart reveal the following: 10% of Dell owners had switched to Gateway and the rest continued with Dell; 35% of Gateway owners had switched to Dell and the rest continued with Gateway. Find the distribution of the market after a long period of time.

Problem 4. A security guard can stand in front of any one of the three doors of a building, and every minute he decides whether to move to another door chosen at random. If he is at the middle door, he is equally likely to stay where he is, move to the door on the left, or move to the door on the right. If he is at the door on either end, he is equally likely to stay where he is or to move to the middle door. Write the transition probability matrix and prove that it corresponds to a regular Markov chain. Find the long-range trend for the fraction of time the guard spends in front of each door.
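For Problem 4, the following sketch is only a numerical sanity check, not a substitute for the required proof. It builds the transition matrix described in the statement, verifies that $P^2$ has all entries positive (so $P$ is regular), and verifies that the candidate equilibrium vector $v = (2/7, 3/7, 2/7)$ (my own guess, to be derived by hand from $vP = v$) is exactly stationary, using exact rational arithmetic.

```python
from fractions import Fraction

# Doors in the order: left, middle, right.
P = [[Fraction(1, 2), Fraction(1, 2), Fraction(0)],
     [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
     [Fraction(0),    Fraction(1, 2), Fraction(1, 2)]]

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
print(all(x > 0 for row in P2 for x in row))   # P^2 > 0, so P is regular

# Candidate equilibrium vector (hypothesis to verify): v = (2/7, 3/7, 2/7).
v = [Fraction(2, 7), Fraction(3, 7), Fraction(2, 7)]
vP = [sum(v[j] * P[j][i] for j in range(3)) for i in range(3)]
print(vP == v and sum(v) == 1)                 # v P = v and entries sum to 1
```

So in the long run the guard spends about 3/7 of the time at the middle door and 2/7 at each end door.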
Problem 5. Let $\Omega = \{C, R, S, G\}$ denote the space of weather conditions, where C = cloudy, R = rainy, S = snowy and G = good. Suppose the probability transition matrix $P$ (states in the order C, R, S, G) is:
\[
P = \begin{pmatrix} 0.35 & 0.25 & 0.15 & 0.25 \\ 0.35 & 0.35 & 0.20 & 0.10 \\ 0.35 & 0.15 & 0.45 & 0.05 \\ 0.34 & 0.05 & 0.01 & 0.60 \end{pmatrix}.
\]
If on Monday the weather is good, what is the weather forecast for Wednesday (i.e. the chances it will be cloudy, rainy, snowy or good)? Find the chance that on Tuesday it will be rainy, on Wednesday it will be cloudy and on Thursday it will rain again.

Problem 6. We simplify the previous problem, assuming now only three possible weather conditions C, R and G, with the probability matrix:
\[
P = \begin{pmatrix} 0.5 & 0.2 & 0.3 \\ 0.4 & 0.4 & 0.2 \\ 0.3 & 0.3 & 0.4 \end{pmatrix}.
\]
If on Monday it is rainy, what is the weather forecast for Christmas Day?

Problem 7. A computer system can operate in two different modes. Every hour, it remains in the same mode or switches to a different mode according to the transition probability matrix:
\[
P = \begin{pmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{pmatrix}.
\]
If the system is in Mode I at 5:30 pm, what is the probability that it will be in Mode I at 7:30 pm on the same day? Draw the state transition diagram for the corresponding Markov chains of these two problems.
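Problem 7 reduces to a two-step transition probability: between 5:30 pm and 7:30 pm the system takes two hourly steps, so the answer is the (Mode I, Mode I) entry of $P^2$. A minimal check in Python:

```python
def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.4, 0.6],
     [0.6, 0.4]]
P2 = matmul(P, P)
print(P2[0][0])   # 0.4*0.4 + 0.6*0.6 ≈ 0.52
```

The same `matmul` (iterated the appropriate number of times) handles Problems 5 and 6, where the forecast several days ahead is a row of a higher power of $P$.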