Lecture 9 Classification of States


Course: M362K Intro to Stochastic Processes
Term: Fall 2014
Instructor: Gordan Zitkovic

There will be a lot of definitions and some theory before we get to examples. You might want to peek ahead as notions are being introduced; it will help your understanding.

THE COMMUNICATION RELATION

Let {X_n}_{n ∈ N_0} be a Markov chain on the state space S. For a given set B ⊆ S of states, define the hitting time τ_B of B as

    τ_B = min{ n ∈ N_0 : X_n ∈ B }.    (9.1)

We know that τ_B is, in fact, a stopping time with respect to {X_n}_{n ∈ N_0}. When B consists of only one element, B = {i}, we simply write τ_i for τ_{{i}}; τ_i is the first time the Markov chain {X_n}_{n ∈ N_0} hits the state i. As always, we allow τ_B to take the value +∞; this happens when no state in B is ever hit.

Hitting times are important both for immediate applications of Markov chains and for a better understanding of their structure.

Example 9.1. Let {X_n}_{n ∈ N_0} be the chain which models a game of tennis (see Lecture 8). The probability that (say) Amélie wins can be phrased in terms of hitting times:

    P[Amélie wins] = P[τ_{i_A} < τ_{i_B}],

where i_A = "Amélie wins" and i_B = "Björn wins" (the two absorbing states of the chain). We will learn how to compute such probabilities in the subsequent lectures.

Having introduced the hitting times τ_B, let us give a few more definitions. Recall that the notation P_i[A] is used to mean P[A | X_0 = i] (for any event A). In practice, we use P_i to signify that we are starting the chain from the state i, i.e., P_i corresponds to a Markov chain whose transition matrix is the same as that of {X_n}_{n ∈ N_0}, but whose initial distribution is given by P_i[X_0 = j] = 0 for j ≠ i and P_i[X_0 = i] = 1.
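Hitting times are also easy to experiment with numerically. Here is a minimal simulation sketch (not from the lecture; the four-state chain below is a made-up stand-in for the tennis chain, and all names in it are ours): it estimates a probability of the form P_i[τ_B < τ_C] by running the chain repeatedly and recording which of the two sets is hit first.

    import random

    # Hypothetical gambler's-ruin-like chain on {0, 1, 2, 3}: the states 0 and 3
    # are absorbing; from 1 and 2 the chain moves one step up or down with
    # probability 1/2 each.
    P = [
        [1.0, 0.0, 0.0, 0.0],
        [0.5, 0.0, 0.5, 0.0],
        [0.0, 0.5, 0.0, 0.5],
        [0.0, 0.0, 0.0, 1.0],
    ]

    def first_hit(P, start, B, C, max_steps=10_000):
        """Run one trajectory from `start`; report which of the disjoint
        sets B and C is hit first (None if neither is hit in time)."""
        state = start
        for _ in range(max_steps):
            if state in B:
                return "B"
            if state in C:
                return "C"
            state = random.choices(range(len(P)), weights=P[state])[0]
        return None

    # Estimate P_1[tau_{3} < tau_{0}], the analogue of P[Amelie wins].
    trials = 10_000
    wins = sum(first_hit(P, 1, B={3}, C={0}) == "B" for _ in range(trials))
    print("estimate:", wins / trials)   # the exact value for this chain is 1/3

For this particular chain the answer can be computed exactly (it is 1/3), which makes it a convenient test case; the general method for such computations comes in the later lectures, as promised above.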

Definition 9.2. A state i ∈ S is said to communicate with the state j ∈ S, denoted by i → j, if P_i[τ_j < ∞] > 0.

Intuitively, i communicates with j if there is a non-zero chance that the Markov chain will eventually visit j when it starts from i. Sometimes we also say that j is a consequent of i, that j is accessible from i, or that j follows i.

Example 9.3. In the tennis example, every state is accessible from (0, 0) (the fact that p ∈ (0, 1) is important here), but (0, 0) is not accessible from any other state. The consequents of (40, 40) are (40, 40) itself, (40, Adv), (Adv, 40), "Amélie wins" and "Björn wins".

Before we examine some properties of the relation →, here is a simple but useful characterization. It is based on the following fact about probability: if {A_n}_{n ∈ N_0} is an increasing sequence of events, i.e., A_n ⊆ A_{n+1} for all n ∈ N_0, then P[∪_{n ∈ N_0} A_n] = lim_n P[A_n], and the sequence inside the limit is non-decreasing.

Proposition 9.4. i → j if and only if p_ij^(n) > 0 for some n ∈ N_0.

Proof. The event A = {τ_j < ∞} can be written as an increasing union

    A = ∪_{n ∈ N} A_n, where A_n = {τ_j ≤ n}.

Therefore,

    P_i[τ_j < ∞] = P_i[A] = lim_n P_i[A_n] = lim_n P_i[τ_j ≤ n],

and the sequence P_i[τ_j ≤ n], n ∈ N, is non-decreasing. In particular,

    P_i[τ_j < ∞] ≥ P_i[A_n], for all n ∈ N.    (9.2)

Suppose, first, that p_ij^(n) > 0 for some n. Since τ_j is the first time j is visited, we have

    P_i[A_n] = P_i[τ_j ≤ n] ≥ P_i[X_n = j] = p_ij^(n) > 0.

By (9.2), we have P_i[τ_j < ∞] > 0, and so, i → j. Conversely, suppose that i → j, i.e., P_i[A] > 0. Since P_i[A] = lim_n P_i[A_n], we must have P_i[A_{n_0}] > 0 for some n_0 (and then for all larger n), i.e., P_i[τ_j ≤ n_0] > 0.

When τ_j ≤ n_0, we must have X_0 = j or X_1 = j or ... or X_{n_0} = j, i.e.,

    {τ_j ≤ n_0} ⊆ ∪_{k=0}^{n_0} {X_k = j},

and so

    0 < P_i[τ_j ≤ n_0] ≤ P_i[∪_{k=0}^{n_0} {X_k = j}] ≤ Σ_{k=0}^{n_0} P_i[X_k = j].

Therefore, P_i[X_n = j] > 0 for at least one n ∈ {0, 1, ..., n_0}. In other words, p_ij^(n) > 0 for at least one n ∈ {0, 1, ..., n_0}.

Proposition 9.5. For all i, j, k ∈ S, we have

1. i → i,
2. i → j, j → k implies i → k.

Proof. 1. If we start from the state i ∈ S, we are already there (note that 0 is allowed as a value for τ_B in (9.1)), i.e., τ_i = 0 when X_0 = i.

2. Using Proposition 9.4, it will be enough to show that p_ik^(n) > 0 for some n ∈ N_0. By the same proposition, we know that p_ij^(n_1) > 0 and p_jk^(n_2) > 0 for some n_1, n_2 ∈ N_0. By the Chapman-Kolmogorov relations, with n = n_1 + n_2, we have

    p_ik^(n) = Σ_{l ∈ S} p_il^(n_1) p_lk^(n_2) ≥ p_ij^(n_1) p_jk^(n_2) > 0.

Remark 9.6. The inequality p_ik^(n) ≥ p_il^(n_1) p_lk^(n_2) is valid for all i, l, k ∈ S, as long as n_1 + n_2 = n. It will come in handy later.

CLASSES

Definition 9.7. We say that the states i and j in S intercommunicate, denoted by i ↔ j, if i → j and j → i. A set B ⊆ S of states is called irreducible if i ↔ j for all i, j ∈ B.

Unlike the relation → of communication, the relation ↔ of intercommunication is symmetric. Moreover, we have the following three properties (the result follows directly from Proposition 9.5, so we omit the proof):

Proposition 9.8. The relation ↔ is an equivalence relation on S, i.e., for all i, j, k ∈ S, we have

1. i ↔ i (reflexivity),
2. i ↔ j implies j ↔ i (symmetry), and
3. i ↔ j, j ↔ k implies i ↔ k (transitivity).

The fact that ↔ is an equivalence relation allows us to split the state space S into equivalence classes with respect to ↔. In other words, we can write

    S = S_1 ∪ S_2 ∪ S_3 ∪ ...,

where S_1, S_2, ... are mutually exclusive (disjoint) and all states in a particular S_n intercommunicate, while no two states from different equivalence classes S_n and S_m do. The sets S_1, S_2, ... are called the classes of the chain {X_n}_{n ∈ N_0}. Equivalently, one can say that classes are maximal irreducible sets, in the sense that they are irreducible and no class is a proper subset of a (strictly larger) irreducible set.

A cookbook algorithm for class identification would involve the following steps:

1. Start from an arbitrary state (call it 1).
2. Identify all states j that intercommunicate with it - don't forget that i ↔ i for all i.
3. That is your first class; call it C_1. If there are no elements left, then there is only one class, C_1 = S. If there is an element in S \ C_1, repeat the procedure above starting from that element.

(A code sketch implementing this procedure appears at the end of this section.)

The notion of a class is especially useful in relation to another natural concept:

Definition 9.9. A set B ⊆ S of states is said to be closed if i ↛ j for all i ∈ B and all j ∈ S \ B. A state i ∈ S such that the set {i} is closed is called absorbing.

Here is a nice characterization of closed sets:

Proposition 9.10. A set B of states is closed if and only if p_ij = 0 for all i ∈ B and all j ∈ B^c = S \ B.

Proof. Suppose, first, that B is closed. Then for i ∈ B and j ∈ B^c, we have i ↛ j, i.e., p_ij^(n) = 0 for all n ∈ N. In particular, p_ij = 0.

Conversely, suppose that p_ij = 0 for all i ∈ B, j ∈ B^c. We need to show that i ↛ j (i.e., p_ij^(n) = 0 for all n ∈ N) for all i ∈ B, j ∈ B^c. Suppose, to the contrary, that there exist i ∈ B and j ∈ B^c such that p_ij^(n) > 0 for some n ∈ N. Since p_ij = 0, we must have n > 1.

Out of all n > 1 such that p_ik^(n) > 0 for some k ∈ B^c, we pick the smallest one (let us call it n_0), so that p_ik^(n_0 - 1) = 0 for all k ∈ B^c. By the Chapman-Kolmogorov relation, we have, for any j ∈ B^c,

    p_ij^(n_0) = Σ_{k ∈ S} p_ik^(n_0 - 1) p_kj = Σ_{k ∈ B} p_ik^(n_0 - 1) p_kj + Σ_{k ∈ B^c} p_ik^(n_0 - 1) p_kj.    (9.3)

The terms in the first sum on the right-hand side of (9.3) are all zero because p_kj = 0 there (k ∈ B and j ∈ B^c), and the terms of the second sum are also all zero because p_ik^(n_0 - 1) = 0 for all k ∈ B^c. Therefore, p_ij^(n_0) = 0 for all j ∈ B^c - a contradiction with the choice of n_0.

Intuitively, a set of states is closed if it has the property that the chain {X_n}_{n ∈ N_0} stays in it forever, once it enters it. In general, if B is closed, it does not have to follow that S \ B is closed. Also, a class does not have to be closed, and a closed set does not have to be a class. Here are some examples:

Example 9.11. Consider the tennis chain of the previous lecture and consider the following three sets of states:

1. B = {Amélie wins}: closed and a class, but S \ B is not closed,
2. B = S \ {(0, 0)}: closed, but not a class, and
3. B = {(0, 0)}: a class, but not closed.

There is a relationship between classes and the notion of closedness:

Proposition 9.12. Every closed set B is a union of classes.

Proof. Let B̂ be the union of all classes C such that C ∩ B ≠ ∅. In other words, take all the elements of B and throw in all the states which intercommunicate with them. I claim that B̂ = B. Clearly, B ⊆ B̂, so we need to show that B̂ ⊆ B. Suppose, to the contrary, that there exists j ∈ B̂ \ B. By construction, j intercommunicates with some i ∈ B. In particular, i → j. By the closedness of B, we must have j ∈ B. This is a contradiction with the assumption that j ∈ B̂ \ B.

Example 9.13. The converse of Proposition 9.12 is not true: a union of classes need not be closed. Just take the set B = {(0, 0), (0, 15)} in the tennis example. It is a union of classes, but it is not closed.
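Propositions 9.4 and 9.10 make everything in this section mechanically checkable for a finite chain: communication is plain graph reachability along arrows with positive probability, classes come out of the cookbook algorithm above, and closedness is a scan of the matrix entries. Here is a sketch (our own, with a hypothetical 5-state matrix; states are labeled 0 to n-1):

    # By Proposition 9.4, i -> j iff some path of positive-probability arrows
    # leads from i to j, so communication is depth-first reachability.

    def consequents(P, i):
        """All j with i -> j."""
        stack, seen = [i], {i}          # i -> i always (Proposition 9.5)
        while stack:
            k = stack.pop()
            for l, p in enumerate(P[k]):
                if p > 0 and l not in seen:
                    seen.add(l)
                    stack.append(l)
        return seen

    def classes(P):
        """The cookbook algorithm: repeatedly peel off the class of some
        not-yet-classified state; i <-> j iff each is a consequent of the other."""
        remaining, result = set(range(len(P))), []
        while remaining:
            i = min(remaining)
            cls = {j for j in remaining
                   if j in consequents(P, i) and i in consequents(P, j)}
            result.append(sorted(cls))
            remaining -= cls
        return result

    def is_closed(P, B):
        """Proposition 9.10: B is closed iff p_ij = 0 for i in B, j outside B."""
        return all(P[i][j] == 0 for i in B for j in range(len(P)) if j not in B)

    # Hypothetical 5-state example: {0} leaks into the closed class {1, 2};
    # {3, 4} is a closed class of its own.
    P = [
        [0.5, 0.5, 0.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.3, 0.7],
        [0.0, 0.0, 0.0, 0.6, 0.4],
    ]
    print(classes(P))                                # [[0], [1, 2], [3, 4]]
    print(is_closed(P, {1, 2}), is_closed(P, {0}))   # True False

The repeated calls to consequents() are wasteful for big chains (a strongly-connected-components algorithm such as Tarjan's does the same job in linear time), but for lecture-sized examples this direct translation of the definitions is the clearest.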

TRANSIENCE AND RECURRENCE

It is often important to know whether a Markov chain will ever return to its initial state, and if so, how often. The notions of transience and recurrence address these questions.

Definition 9.14. The (first) visit time to the state j, denoted by τ_j^(1), is defined as

    τ_j^(1) = min{ n ∈ N : X_n = j }.

As usual, τ_j^(1) = +∞ if X_n ≠ j for all n ∈ N.

Note that the definition of the random variable τ_j^(1) differs from the definition of τ_j in that the minimum here is taken over the set N of natural numbers, while the set N_0 of non-negative integers is used for τ_j. When X_0 ≠ j, the hitting time τ_j and the visit time τ_j^(1) coincide. The important difference occurs when X_0 = j. In that case τ_j = 0 (we are already there), but it is always true that τ_j^(1) ≥ 1. It can even happen that P_i[τ_i^(1) = +∞] = 1.

Definition 9.15. A state i ∈ S is said to be

1. recurrent if P_i[τ_i^(1) < ∞] = 1,
2. positive recurrent if E_i[τ_i^(1)] < ∞ (E_i denotes expectation when the probability is P_i),
3. null recurrent if it is recurrent, but not positive recurrent,
4. transient if it is not recurrent.

A state is recurrent if we are sure we will come back to it eventually (with probability 1). It is positive recurrent if, additionally, the time between two consecutive visits has finite expectation. Null recurrence means that we will return, but that the waiting time may be very long (its expectation is infinite). A state is transient if there is a positive chance (however small) that the chain will never return to it.

Remember that the greatest common divisor (GCD) of a set A of natural numbers is the largest number d such that d divides each k ∈ A, i.e., such that each k ∈ A is of the form k = ld for some l ∈ N.

Definition 9.16. The period d(i) of a state i ∈ S is the greatest common divisor of the return-time set

    R(i) = {n ∈ N : p_ii^(n) > 0}

of the state i. When R(i) = ∅, we set d(i) = 1. A state i ∈ S is called aperiodic if d(i) = 1.

Example 9.17. Consider the Markov chain with three states and the transition matrix

        [ 0  1  0 ]
    P = [ 0  0  1 ]
        [ 1  0  0 ]

The return set for each state i ∈ {1, 2, 3} is given by R(i) = {3, 6, 9, 12, ...}, so d(i) = 3 for all i ∈ {1, 2, 3}. However, if we change the probabilities a bit,

        [ 0    1    0   ]
    P̂ = [ 0    0    1   ]
        [ 1/3  1/3  1/3 ]

(any three positive entries in the last row would do), the situation changes drastically:

    R(1) = {3, 4, 5, 6, ...}, R(2) = {2, 3, 4, 5, 6, ...}, R(3) = {1, 2, 3, 4, 5, 6, ...},

so that d(i) = 1 for all i ∈ {1, 2, 3}.
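Definition 9.16 also translates directly into code. The sketch below (ours) computes the gcd of a truncated return set {n ≤ N : p_ii^(n) > 0} by taking successive matrix powers; for the two matrices of Example 9.17 the truncated gcd already equals the period, but in general truncation is only a heuristic, not a proof. The states 1, 2, 3 of the example become the indices 0, 1, 2.

    from math import gcd

    def matmul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def period(P, i, N=30):
        """gcd of the truncated return set {n <= N : p_ii^(n) > 0}."""
        d, Pn = 0, P                    # Pn holds P^n as n runs from 1 to N
        for n in range(1, N + 1):
            if Pn[i][i] > 1e-12:        # n is in the (truncated) return set
                d = gcd(d, n)
            Pn = matmul(Pn, P)
        return d if d else None         # None: empty return set (d(i) = 1 by convention)

    P_cyclic = [[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]]
    P_hat    = [[0, 1, 0],
                [0, 0, 1],
                [1/3, 1/3, 1/3]]
    print([period(P_cyclic, i) for i in range(3)])   # [3, 3, 3]
    print([period(P_hat, i) for i in range(3)])      # [1, 1, 1]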

A CRITERION FOR RECURRENCE

The definition of recurrence given above is conceptually simple, but it gives us no clue about how to actually go about deciding whether a particular state in a specific Markov chain is recurrent. A criterion stated entirely in terms of the transition matrix P would be nice. Before we give it, we need to introduce some notation. For two (not necessarily different) states i, j ∈ S, let f_ij^(n) be the probability that it takes exactly n steps for the first visit to j (starting from i) to occur:

    f_ij^(n) = P_i[τ_j^(1) = n] = P[X_n = j, X_{n-1} ≠ j, X_{n-2} ≠ j, ..., X_2 ≠ j, X_1 ≠ j | X_0 = i].

Let f_ij (no superscript) denote the probability that j will eventually be reached from i, i.e.,

    f_ij = P_i[τ_j^(1) < ∞] = Σ_{n=1}^∞ f_ij^(n).

Clearly, we have the following equivalence: i is recurrent if and only if f_ii = 1. The reason the quantities f_ij^(n) are useful lies in the following recursive relationship:

Proposition 9.18. For n ∈ N and i, j ∈ S, we have

    p_ij^(n) = Σ_{m=1}^n f_ij^(m) p_jj^(n-m).

Proof. In preparation for the rest of the proof, let us reiterate that the event {τ_j^(1) = m} can be written as

    {τ_j^(1) = m} = {X_m = j, X_{m-1} ≠ j, X_{m-2} ≠ j, ..., X_1 ≠ j}.

Using the law of total probability, we can split the event {X_n = j} according to the value of the random variable τ_j^(1) (the values of τ_j^(1) larger than n do not appear in the sum, since X_n cannot be equal to j in those cases):

    p_ij^(n) = P_i[X_n = j] = Σ_{m=1}^n P_i[X_n = j | τ_j^(1) = m] P_i[τ_j^(1) = m]
             = Σ_{m=1}^n P_i[X_n = j | X_m = j, X_{m-1} ≠ j, ..., X_1 ≠ j] f_ij^(m)
             = Σ_{m=1}^n P_i[X_n = j | X_m = j] f_ij^(m) = Σ_{m=1}^n p_jj^(n-m) f_ij^(m).

Where did we use the Markov property?

An important corollary to Proposition 9.18 is the following characterization of recurrence:

Proposition 9.19. A state i ∈ S is recurrent if and only if

    Σ_{n=1}^∞ p_ii^(n) = +∞.

Proof. For i = j, Proposition 9.18 states that

    p_ii^(n) = Σ_{m=1}^n p_ii^(n-m) f_ii^(m).    (9.4)

For N ∈ N, we define

    s_N = Σ_{n=1}^N p_ii^(n),

so that, if we sum (9.4) over all n from 1 to N, we get

    s_N = Σ_{n=1}^N Σ_{m=1}^n p_ii^(n-m) f_ii^(m) = Σ_{n=1}^N Σ_{m=1}^N 1_{{m ≤ n}} p_ii^(n-m) f_ii^(m).    (9.5)

Then, we interchange the order of summation in the last sum in (9.5) and remember that p_ii^(0) = 1 to get

    s_N = Σ_{m=1}^N Σ_{n=m}^N p_ii^(n-m) f_ii^(m) = Σ_{m=1}^N f_ii^(m) Σ_{n=0}^{N-m} p_ii^(n) = Σ_{m=1}^N f_ii^(m) (1 + s_{N-m}).

The sequence {s_N}_{N ∈ N} is non-decreasing, so

    s_N ≤ Σ_{m=1}^N f_ii^(m) (1 + s_N) ≤ (1 + s_N) f_ii.

Therefore,

    f_ii ≥ lim_{N→∞} s_N / (1 + s_N).

If Σ_{n=1}^∞ p_ii^(n) = +∞, then lim_N s_N = ∞, and, so, lim_N s_N / (1 + s_N) = 1, so f_ii ≥ 1. On the other hand, f_ii = P_i[τ_i^(1) < ∞], so f_ii ≤ 1. Therefore, f_ii = 1 and the state i is recurrent.

Conversely, suppose that i is recurrent, i.e., f_ii = 1. We can repeat the procedure from above (but with +∞ instead of N, and using Tonelli's theorem to interchange the order of the sums) to get

    Σ_{n=1}^∞ p_ii^(n) = Σ_{m=1}^∞ f_ii^(m) (1 + Σ_{n=1}^∞ p_ii^(n)) = 1 + Σ_{n=1}^∞ p_ii^(n).

This can happen only if Σ_{n=1}^∞ p_ii^(n) = +∞.

Here is an application of our recurrence criterion - a beautiful and unexpected result of George Pólya from 1921.

Example 9.20 (Optional). In addition to the simple symmetric random walk on the line (d = 1) we studied before, one can consider random walks whose values are in the plane (d = 2), in space (d = 3), etc. These are usually defined as follows: the random walk in d dimensions is the Markov chain with the state space S = Z^d; starting from the state (x_1, ..., x_d), it picks one of its 2d neighbors

    (x_1 + 1, ..., x_d), (x_1 - 1, ..., x_d), (x_1, x_2 + 1, ..., x_d), (x_1, x_2 - 1, ..., x_d), ..., (x_1, ..., x_d + 1), (x_1, ..., x_d - 1)

uniformly at random and moves there. In this problem, we are interested in the recurrence properties of the d-dimensional random walk and their dependence on d. We already know that the simple symmetric random walk on Z is recurrent (i.e., every i ∈ Z is a recurrent state). The easiest way to proceed when d ≥ 2 is to use Proposition 9.19, i.e., to estimate the values p_00^(n), for n ∈ N. By symmetry, we can focus on the origin 0 = (0, ..., 0), i.e., it is enough to compute, for each n ∈ N,

    p^(n) = p_00^(n) = P_0[X_n = (0, 0, ..., 0)].

For that, we should count all trajectories from (0, ..., 0) that return to (0, ..., 0) in n steps. First of all, it is clear that n needs to be even, i.e., n = 2m, for some m ∈ N. It helps if we think of any trajectory as a sequence of increments ξ_1, ..., ξ_n, where each ξ_i takes its value in the set {1, -1, 2, -2, ..., d, -d}. In words, ξ_i = +k if the k-th coordinate increases by 1 on the i-th step, and ξ_i = -k if the k-th coordinate decreases by 1. (For d = 2 we could have used the values up, down, left and right, for 1, -1, 2 and -2, respectively. In dimension 3, we could have added forward and backward, but we run out of words for directions for larger d.)

This way, the problem becomes combinatorial: in how many ways can we put one element of the set {1, -1, 2, -2, ..., d, -d} into each of n = 2m boxes so that the number of boxes with k in them equals the number of boxes with -k in them?

To get the answer, we start by fixing a possible count (i_1, ..., i_d), satisfying i_1 + ... + i_d = m, of the number of times each of the values in {1, 2, ..., d} occurs. These values have to be placed in m of the 2m slots, and their negatives (possibly in a different order) in the remaining m slots. So, first, we choose the positive slots (in (2m choose m) ways), and then distribute i_1 ones, i_2 twos, etc., in those slots; this can be done in

    (m choose i_1 i_2 ... i_d)

ways. (The quantity (m choose i_1 i_2 ... i_d) is called the multinomial coefficient. It counts the number of ways we can color m objects with one of d colors so that there are i_1 objects of color 1, i_2 of color 2, etc. It is a generalization of the binomial coefficient and is given by

    (m choose i_1 i_2 ... i_d) = m! / (i_1! i_2! ... i_d!).)

This is also the number of ways we can distribute the negative ones, twos, etc., in the remaining slots. All in all, for fixed i_1, i_2, ..., i_d, all of this can be done in

    (2m choose m) (m choose i_1 i_2 ... i_d)^2

ways. Remembering that each path has the probability (2d)^{-2m}, and summing over all i_1, ..., i_d with i_1 + ... + i_d = m, we get

    p^(2m) = (2d)^{-2m} (2m choose m) Σ_{i_1+...+i_d=m} (m choose i_1 i_2 ... i_d)^2.

For d = 1, the situation is somewhat simpler:

    p^(2m) = (2m choose m) (1/4^m).

It is still too complicated to sum over all m ∈ N, but we can simplify it further by using Stirling's formula

    n! ~ √(2πn) (n/e)^n,

where a_n ~ b_n means lim_n a_n / b_n = 1. Indeed, from there,

    (2m choose m) ~ 4^m / √(πm),    (9.6)

and so p^(2m) ~ 1/√(πm). It is easy to apply our criterion:

    Σ_{m=1}^∞ p^(2m) = +∞,

and we recover our previous conclusion that the simple symmetric random walk is recurrent.

Moving on to the case d = 2, we notice that the sum of the multinomial coefficients no longer equals 1; in fact, it is given by

    Σ_{i=0}^m (m choose i)^2 = (2m choose m),    (9.7)

(why is this identity true? can you give a counting argument?) and, so,

    p^(2m) = ((2m choose m) (1/4^m))^2 ~ 1/(πm),

and

    Σ_{m=1}^∞ p^(2m) = +∞,

which implies that the two-dimensional random walk is also recurrent.

How about d ≥ 3? Things are even more complicated now. The multinomial sum does not admit a nice closed-form expression as in (9.7), so we need to do some estimates; these are a bit tedious, so we skip them, but report the punchline, which is that p^(2m) ≤ C (1/(3m))^{3/2}, for some constant C. This is where it gets interesting: this time the series converges,

    Σ_{m=1}^∞ p^(2m) < ∞,

and, so, the random walk is transient for d = 3. This is enough to conclude that the random walk is transient for all d ≥ 3, too (why?). In a nutshell, the random walk is recurrent for d = 1, 2, but transient for d ≥ 3.
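The formula for p^(2m) derived in Example 9.20 is easy to evaluate numerically, and doing so makes the dichotomy visible. The following sketch (ours) sums the exact return probabilities for d = 1, 2, 3; the partial sums keep growing for d = 1 and 2, but level off for d = 3. (Of course, a finite computation illustrates the dichotomy without proving it.)

    from math import comb, factorial

    # Evaluates p^(2m) = (2d)^(-2m) * C(2m, m) * (sum of squared multinomial
    # coefficients over all (i_1, ..., i_d) with i_1 + ... + i_d = m).

    def compositions(m, d):
        """All d-tuples of non-negative integers summing to m."""
        if d == 1:
            yield (m,)
            return
        for first in range(m + 1):
            for rest in compositions(m - first, d - 1):
                yield (first,) + rest

    def multinomial(m, idx):
        out = factorial(m)
        for i in idx:
            out //= factorial(i)      # exact at every step
        return out

    def return_prob(d, m):
        """p_00^(2m) for the simple symmetric random walk on Z^d."""
        total = sum(multinomial(m, idx) ** 2 for idx in compositions(m, d))
        return comb(2 * m, m) * total / (2 * d) ** (2 * m)

    for d in (1, 2, 3):
        for M in (10, 20, 40):
            s = sum(return_prob(d, m) for m in range(1, M + 1))
            print(f"d={d}: partial sum up to m={M} is {s:.4f}")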

CLASS PROPERTIES

Certain properties of states are shared by all elements of a class. Knowing which properties have this feature is useful for a simple reason - if you can check such a property for a single class member, you automatically know that all the other elements of the class share it.

Definition 9.21. A property is called a class property if it holds for all states in a class whenever it holds for any one particular state in that class. Put differently, a property is a class property if and only if either all states in a class have it or none does.

Proposition 9.22. Transience and recurrence are class properties.

Proof. Suppose that the state i is recurrent, and that j is in its class, i.e., that i ↔ j. Then there exist natural numbers m and k such that p_ij^(m) > 0 and p_ji^(k) > 0. By the Chapman-Kolmogorov relations, for each n ∈ N, we have

    p_jj^(n+m+k) = Σ_{l_1 ∈ S} Σ_{l_2 ∈ S} p_{j l_1}^(k) p_{l_1 l_2}^(n) p_{l_2 j}^(m) ≥ p_ji^(k) p_ii^(n) p_ij^(m).

In other words, there exists a positive constant c (take c = p_ji^(k) p_ij^(m)), independent of n, such that p_jj^(n+m+k) ≥ c p_ii^(n). Therefore, by the recurrence of i, we have

    Σ_{n=1}^∞ p_jj^(n) ≥ Σ_{n=m+k+1}^∞ p_jj^(n) = Σ_{n=1}^∞ p_jj^(n+m+k) ≥ Σ_{n=1}^∞ c p_ii^(n) = +∞,

and so Σ_{n=1}^∞ p_jj^(n) = +∞, and j is recurrent. Therefore, recurrence is a class property. Since transience is just the opposite of recurrence, it is clear that transience is also a class property.

Proposition 9.23. Period is a class property, i.e., all elements of a class have the same period.

Proof. Let d = d(i) be the period of the state i, and let j ↔ i. Then there exist natural numbers m and k such that p_ij^(m) > 0 and p_ji^(k) > 0. By Chapman-Kolmogorov,

    p_ii^(m+k) ≥ p_ij^(m) p_ji^(k) > 0,

and so m + k ∈ R(i). Similarly, for any n ∈ R(j),

    p_ii^(m+k+n) ≥ p_ij^(m) p_jj^(n) p_ji^(k) > 0,

so m + k + n ∈ R(i). By the definition of the period, we see now that d(i) divides both m + k and m + k + n, and, so, it divides n. This works for each n ∈ R(j), so d(i) is a common divisor of all elements of R(j); this, in turn, implies that d(i) ≤ d(j). The same argument with the roles of i and j switched shows that d(j) ≤ d(i). Therefore, d(i) = d(j).

THE CANONICAL DECOMPOSITION

Now that we know that transience and recurrence are class properties, we can introduce the notion of the canonical decomposition of a Markov chain.

Let S_1, S_2, ... be the collection of all classes; some of them contain recurrent states and some transient ones. Proposition 9.22 tells us that if there is one recurrent state in a class, then all states in the class must be recurrent. Thus, it makes sense to call the whole class recurrent. Similarly, the classes which are not recurrent consist entirely of transient states, so we call them transient. There are at most countably many states, so the number of all classes is also at most countable. In particular, there are only countably (or finitely) many recurrent classes, and we usually denote them by C_1, C_2, .... Transient classes are denoted by T_1, T_2, .... There is no particular rule in the choice of the indices 1, 2, 3, ... for particular classes; the only point is that the classes can be enumerated, because there are at most countably many of them. The distinction between different transient classes is usually not very important, so we pack all transient states together into the set T = T_1 ∪ T_2 ∪ ....

Definition 9.24. Let S be the state space of a Markov chain {X_n}_{n ∈ N_0}. Let C_1, C_2, ... be its recurrent classes, and T_1, T_2, ... the transient ones, and let T = T_1 ∪ T_2 ∪ ... be their union. The decomposition

    S = T ∪ C_1 ∪ C_2 ∪ C_3 ∪ ...

is called the canonical decomposition of the (state space of the) Markov chain {X_n}_{n ∈ N_0}.

The reason that recurrent classes are important is simple - they can be interpreted as Markov chains themselves. In order for such an interpretation to be possible, we need to make sure that the Markov chain stays in a recurrent class if it starts there. In other words, we have the following important proposition:

Proposition 9.25. Recurrent classes are closed.

Proof. Suppose, contrary to the statement of the proposition, that there exist two states i ≠ j such that

1. i is recurrent,
2. i → j, and
3. j ↛ i.

The idea of the proof is the following: whenever the transition from i to j occurs (perhaps in several steps), the chain can never return to i, since j ↛ i. By the recurrence of i, that is not possible. Therefore i ↛ j - a contradiction.

More formally, the recurrence of i means that (starting from i, i.e., under P_i) the event

    A = {X_n = i for infinitely many n ∈ N}

has probability 1, i.e., P_i[A] = 1. The law of total probability implies that

    1 = P_i[A] = Σ_{n ∈ N} P_i[A | τ_j^(1) = n] P_i[τ_j^(1) = n] + P_i[A | τ_j^(1) = +∞] P_i[τ_j^(1) = +∞].    (9.8)

On the event {τ_j^(1) = n}, the chain can visit the state i at most n times, because it cannot hit i after it visits j (remember, j ↛ i). Therefore, P_i[A | τ_j^(1) = n] = 0, for all n ∈ N. It follows that

    1 = P_i[A | τ_j^(1) = +∞] P_i[τ_j^(1) = +∞].

Both of the terms on the right-hand side above are probabilities whose product equals 1, which forces both of them to be equal to 1. In particular, P_i[τ_j^(1) = +∞] = 1, or, phrased differently, i ↛ j - a contradiction.

Together with the canonical decomposition, we introduce the canonical form of the transition matrix P. The idea is to order the states in S with the canonical decomposition in mind. We start with all the states in C_1, followed by all the states in C_2, etc. Finally, we include all the states in T. The resulting matrix looks like this:

        [ P_1  0    0    ...     ]
    P = [ 0    P_2  0    ...     ]
        [ 0    0    P_3  ...     ]
        [ ...  ...  ...  ...     ]
        [ Q_1  Q_2  Q_3  ...  Q  ]

where the entries should be interpreted as matrices: P_1 is the transition matrix within the first recurrent class, i.e., P_1 = (p_ij : i ∈ C_1, j ∈ C_1), etc. Q_k contains the transition probabilities from the transient states to the states in the (recurrent) class C_k, and Q the transition probabilities among the transient states. Note that Proposition 9.25 implies that each P_k is a stochastic matrix or, equivalently, that in the rows of P corresponding to C_k, all the entries outside of the block P_k are zeros.

We finish the discussion of the canonical decomposition with an important result and one of its consequences.

Proposition 9.26. Suppose that the state space S is finite. Then there exists at least one recurrent state.

Proof. The transition matrix P is stochastic, and so are all of its powers P^n. In particular, if we sum the entries in each row of P^n, we get

    Σ_{j ∈ S} p_ij^(n) = 1, for all i ∈ S.

Summing the above equality over all n ∈ N and switching the order of summation gives

    +∞ = Σ_{n ∈ N} 1 = Σ_{n ∈ N} Σ_{j ∈ S} p_ij^(n) = Σ_{j ∈ S} Σ_{n ∈ N} p_ij^(n).

We conclude that Σ_{n ∈ N} p_ij^(n) = +∞ for at least one state j (this is where the finiteness of S is crucial). We claim that j is a recurrent state. Suppose, to the contrary, that it is transient. If we sum the equality from Proposition 9.18 over n ∈ N, we get

    Σ_{n ∈ N} p_ij^(n) = Σ_{n ∈ N} Σ_{m=1}^n f_ij^(m) p_jj^(n-m) = Σ_{m ∈ N} f_ij^(m) Σ_{n=m}^∞ p_jj^(n-m) = (Σ_{n=0}^∞ p_jj^(n)) (Σ_{m ∈ N} f_ij^(m)) = f_ij Σ_{n=0}^∞ p_jj^(n).    (9.9)

Transience of j implies that Σ_{n=1}^∞ p_jj^(n) < ∞ and, by definition, f_ij ≤ 1, for any i, j. Therefore, Σ_{n ∈ N} p_ij^(n) < ∞ - a contradiction.

Remark 9.27. If S is not finite, it is not true that recurrent states must exist. Just remember the deterministically-monotone Markov chain example, or the random walk with p ≠ 1/2. All states are transient there.

In the finite-state-space case, we have the following dichotomy:

Corollary 9.28. A class of a Markov chain on a finite state space is recurrent if and only if it is closed.

Proof. We know that recurrent classes are closed. In order to show the converse, we need to prove that transient classes are not closed. Suppose, to the contrary, that there exists a finite-state-space Markov chain with a closed transient class T. Since T is closed, we can view it as the state space of the restricted Markov chain. This new Markov chain has a finite number of states, so it has at least one recurrent state. This is a contradiction with the assumption that T consists only of transient states.

Remark 9.29. Again, finiteness is necessary. For a random walk on Z, all states intercommunicate. In particular, there is only one class - Z itself - and it is trivially closed. If p ≠ 1/2, however, all states are transient, and, so, Z is a closed and transient class.
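For a finite chain, Corollary 9.28 is effectively an algorithm: compute the classes, and declare a class recurrent exactly when it is closed. A sketch (ours; it reuses the depth-first reachability idea from the earlier sketch, and the 5-state matrix is again hypothetical):

    def reach(P, i):
        """All states reachable from i along positive-probability arrows."""
        stack, seen = [i], {i}
        while stack:
            k = stack.pop()
            for l, p in enumerate(P[k]):
                if p > 0 and l not in seen:
                    seen.add(l)
                    stack.append(l)
        return seen

    def canonical_decomposition(P):
        """Return (transient_states, list_of_recurrent_classes)."""
        remaining, recurrent, transient = set(range(len(P))), [], set()
        while remaining:
            i = min(remaining)
            fwd = reach(P, i)
            cls = {j for j in fwd if i in reach(P, j)}   # the class of i
            if fwd == cls:            # nothing outside the class is reachable:
                recurrent.append(sorted(cls))            # closed => recurrent
            else:
                transient |= cls                         # not closed => transient
            remaining -= cls
        return sorted(transient), recurrent

    # Hypothetical chain: the states 0 and 1 leak into two closed blocks.
    P = [
        [0.2, 0.3, 0.5, 0.0, 0.0],
        [0.0, 0.5, 0.0, 0.5, 0.0],
        [0.0, 0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.4, 0.6],
        [0.0, 0.0, 0.0, 0.6, 0.4],
    ]
    T, Cs = canonical_decomposition(P)
    print("T =", T, " recurrent classes:", Cs)   # T = [0, 1], classes [[2], [3, 4]]

Listing the states in this order (the recurrent classes first, then T) is exactly the reordering that produces the canonical form of P displayed above.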

EXAMPLES

Random walks: p ∈ (0, 1).

Communication and classes. Clearly, it is possible to go from any state i to either i + 1 or i - 1 in one step, so i → i + 1 and i → i - 1 for all i ∈ S. By the transitivity of communication, we have i → i + 1 → i + 2 → ... → i + k. Similarly, i → i - k, for any k ∈ N. Therefore, i → j for all i, j ∈ S, and, so, i ↔ j for all i, j ∈ S, and the whole of S is one big class.

Closed sets. The only closed set is S itself.

Transience and recurrence. We studied transience and recurrence in the lectures about random walks (we just did not call them that). The situation depends heavily on the probability p of making an up-step. If p > 1/2, there is a positive probability that the first step will be up, so that X_1 = 1. Then, we know that there is a positive probability that the walk will never hit 0 again. Therefore, there is a positive probability of never returning to 0, which means that the state 0 is transient. A similar argument can be made for any state i and any probability p ≠ 1/2.

What happens when p = 1/2? In order to come back to 0, the walk needs to return there from its position at time n = 1. If it went up, we have to wait for the walk to hit 0 starting from 1. We have shown that this will happen sooner or later, but that the expected time it takes is infinite. The same argument works if X_1 = -1. All in all, 0 (and all other states) are null recurrent (recurrent, but not positive recurrent).

Periodicity. Starting from any state i ∈ S, we can return to it only after 2, 4, 6, ... steps. Therefore, the return set is always given by R(i) = {2, 4, 6, ...}, and so d(i) = 2 for all i ∈ S.
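These claims are pleasant to see in a simulation. The sketch below (ours) estimates the probability that the walk started at 0 returns to 0 within a fixed horizon; truncation is unavoidable, since the return time can be infinite (and, for p = 1/2, has infinite mean), so the output only suggests the recurrent/transient distinction rather than proving it.

    import random

    def returned_to_zero(p, max_steps):
        """True if the walk started at 0 revisits 0 within max_steps."""
        x = 1 if random.random() < p else -1          # the first step
        for _ in range(max_steps - 1):
            if x == 0:
                return True
            x += 1 if random.random() < p else -1
        return x == 0

    for p in (0.5, 0.6):
        trials = 2_000
        frac = sum(returned_to_zero(p, 10_000) for _ in range(trials)) / trials
        print(f"p = {p}: fraction returned within 10000 steps = {frac:.3f}")

    # Typically close to 1 for p = 0.5 (recurrent) and well below 1 for
    # p = 0.6 (transient); raising max_steps nudges the first number toward 1
    # but leaves the second one essentially unchanged.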

Gambler's ruin: p ∈ (0, 1).

Communication and classes. The winning state a and the losing state 0 are clearly absorbing, and form one-element classes. The other a - 1 states intercommunicate with each other, so they form a class of their own. This class is not closed (you can - and will - exit it and get absorbed sooner or later).

Transience and recurrence. The absorbing states 0 and a are (trivially) positive recurrent. All the other states are transient: starting from any state i ∈ {1, 2, ..., a - 1}, there is a positive probability (equal to p^{a-i}) of winning every one of the next a - i games and, thus, getting absorbed in a before returning to i.

Periodicity. The absorbing states have period 1, since R(0) = R(a) = N. The other states have period 2 (just like in the case of a random walk).

Deterministically monotone Markov chain.

Communication and classes. A state i communicates with the state j if and only if j ≥ i. Therefore, i ↔ j if and only if i = j, and, so, each i ∈ S is in a class by itself.

Closed sets. The closed sets are precisely the sets of the form B = {i, i + 1, i + 2, ...}, for i ∈ N.

Transience and recurrence. All states are transient.

Periodicity. The return set R(i) is empty for each i ∈ S, so d(i) = 1, for all i ∈ S.

A game of tennis.

Communication and classes. All the states except for those in E = {(40, Adv), (40, 40), (Adv, 40), Amélie wins, Björn wins} intercommunicate only with themselves, so each i ∈ S \ E is in a class by itself. The winning states "Amélie wins" and "Björn wins" are absorbing, and, so, also form one-element classes. Finally, the three states in {(40, Adv), (40, 40), (Adv, 40)} intercommunicate with each other, so they form the last class.

Periodicity. The states i ∈ S \ E have the property that p_ii^(n) = 0 for all n ∈ N, so d(i) = 1. The winning states are absorbing, so d(i) = 1 for i ∈ {Amélie wins, Björn wins}. Finally, the return set for each of the remaining three states is {2, 4, 6, ...}, so their period is 2.

PROBLEMS

Problem 9.1. Suppose that all classes of a Markov chain are recurrent, and let i, j be two states such that i → j. Only one of the statements below is necessarily true; which one?

(a) for each state k, either i → k or j → k
(b) j → i
(c) p_ji > 0 or p_ij > 0
(d) Σ_{n=1}^∞ p_ij^(n) < ∞
(e) None of the above

Solution: The correct answer is (b).

(a) False. Take a chain with two states 1, 2, where p_11 = p_22 = 1, and set i = j = 1, k = 2.

(b) True. Recurrent classes are closed, so i and j belong to the same class. Therefore, j → i.

(c) False. Take a chain with 4 states 1, 2, 3, 4, where p_12 = p_23 = p_34 = p_41 = 1, and set i = 1, j = 3.

(d) False. That would mean that j is transient.

(e) False.

Problem 9.2. Consider a Markov chain whose transition graph is given on the right. [The transition graph is not reproduced in this transcription.]

1. Identify the classes.
2. Find the transient and recurrent states.
3. Find the periods of all states.
4. Compute f_13^(n), for all n ∈ N.
5. Using Mathematica, we can compute the matrix P^20 approximately, where P is the transition matrix of the chain. [The numerical values of P^20 are lost in this transcription.] Compute the probability P[X_20 = 3], if the initial distribution (the distribution of X_0) is given by P[X_0 = 1] = 1/2 and P[X_0 = 3] = 1/2.

Solution:

1. The classes are T_1 = {1}, T_2 = {2}, C_1 = {3, 4, 5, 6} and C_2 = {7, 8}.

2. The states in T_1 and T_2 are transient, and the others are recurrent.

3. The periods are all 1.

4. For n = 1, f_13^(n) = 0, since you need at least two steps to go from 1 to 3. For n = 2, the chain needs to follow 1 → 2 → 3, so f_13^(2) = (1/2)^2. For n larger than 2, the only possibility is for the chain to stay in the state 1 for n - 2 periods, jump to 2, and finish at 3, so f_13^(n) = (1/2)^n in that case as well. All together, we have

    f_13^(n) = 0 for n = 1, and f_13^(n) = (1/2)^n for n ≥ 2.

5. We know from the notes that the distribution of X_20, when represented as a vector a^(20) = (a_1^(20), a_2^(20), ..., a_8^(20)), satisfies a^(20) = a^(0) P^20. By the assumption, a^(0) = (1/2, 0, 1/2, 0, 0, 0, 0, 0), so

    P[X_20 = 3] = a_3^(20) = (1/2) (P^20)_{13} + (1/2) (P^20)_{33}.

[The numerical value is lost in this transcription.]

Problem 9.3. Identify the recurrent states in the setting of

1. Problem 8.2, Lecture 8,
2. Problem 8.3, Lecture 8.

Solution:

1. The state (0, 100) is clearly recurrent (it is absorbing). Let us show that all the others (10200 of them) are transient. It will be enough to show that i → (0, 100), for each i ∈ S, because (0, 100) is in a recurrent class by itself. To do that, we just need to find a path from any state (r, b) to (0, 100) through the transition graph:

    (r, b) → (r - 1, b) → ... → (0, b) → (0, b + 1) → ... → (0, 100).

2. Once the chain moves to the next state, it can never go back. Therefore, it cannot happen that i ↔ j unless i = j, and, so, every state is a class of its own. The class {m} is clearly absorbing and, therefore, recurrent. All the other classes are transient, since it is possible (and, in fact, certain) that the chain will move on to the next state and never come back. As for the periods, all of them are 1, since 1 is in the return set of each state.

Problem 9.4. A fair 6-sided die is rolled repeatedly, and, for n ∈ N, the outcome of the n-th roll is denoted by Y_n (it is assumed that {Y_n}_{n ∈ N} are independent of each other). For n ∈ N_0, let X_n be the remainder (taken in the set {0, 1, 2, 3, 4}) left after the sum Σ_{k=1}^n Y_k is divided by 5, i.e., X_0 = 0 and

    X_n = Σ_{k=1}^n Y_k (mod 5), for n ∈ N,

making {X_n}_{n ∈ N_0} a Markov chain on the state space {0, 1, 2, 3, 4} (no need to prove this fact).

Write down the transition matrix of the chain, classify the states, separate the recurrent ones from the transient ones, and compute the period of each state.

Solution: The outcomes 1, 2, 3, 4, 5, 6 leave the remainders 1, 2, 3, 4, 0, 1 when divided by 5, so the transition matrix P of the chain is given by

        [ 1/6  2/6  1/6  1/6  1/6 ]
        [ 1/6  1/6  2/6  1/6  1/6 ]
    P = [ 1/6  1/6  1/6  2/6  1/6 ]
        [ 1/6  1/6  1/6  1/6  2/6 ]
        [ 2/6  1/6  1/6  1/6  1/6 ]

Since p_ij > 0 for all i, j ∈ S, all the states belong to the same class, and, because there is at least one recurrent state in a finite-state-space Markov chain and because recurrence is a class property, all states are recurrent. Finally, 1 is in the return set of every state, so the period of each state is 1.
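As a quick check of this solution, the matrix is easy to build programmatically from the rule X_{n+1} = (X_n + Y_{n+1}) mod 5 (a sketch, ours, with exact arithmetic via fractions):

    from fractions import Fraction

    # Build the matrix from Problem 9.4 and re-check the two claims: every
    # entry is positive (one class), and p_ii > 0 (so the period is 1).
    # Fraction keeps the arithmetic exact; note that 2/6 is displayed as 1/3.

    die_remainders = [k % 5 for k in range(1, 7)]    # rolls 1..6 -> 1,2,3,4,0,1

    P = [[Fraction(0)] * 5 for _ in range(5)]
    for i in range(5):
        for r in die_remainders:
            P[i][(i + r) % 5] += Fraction(1, 6)

    for row in P:
        print([str(x) for x in row], "sum =", sum(row))

    print(all(p > 0 for row in P for p in row))   # True: a single class
    print(all(P[i][i] > 0 for i in range(5)))     # True: 1 is in every return set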

Problem 9.5. Let {Z_n}_{n ∈ N_0} be a branching process (with state space S = {0, 1, 2, 3, 4, ...} = N_0) with the offspring probabilities given by p_0 = 1/2, p_2 = 1/2. Classify the states (find the classes), and describe all closed sets.

Solution: If i = 0, then i is an absorbing state (and, therefore, a class in itself). For i ∈ N, we have p_ij > 0 if and only if j is of the form 2k for some 0 ≤ k ≤ i (out of i individuals, i - k die and the remaining k have two children each). In particular, no state ever communicates with an odd j ∈ N. Therefore, each odd i is a class in itself. An even i ∈ N communicates with all even numbers from 0 to 2i; in particular, i → i + 2 and i → i - 2. Therefore, all positive even numbers intercommunicate and form a class.

Clearly, {0} is a closed set. Every other closed set will contain a positive even number; indeed, after one step we are in an even state no matter where we start from, and this even number will be positive unless the initial state was 0. Therefore, any closed set different from {0} will have to contain the set 2N_0 = {0, 2, 4, ...} of all even numbers, and 2N_0 is a closed set itself. On the other hand, if we add any collection of odd numbers to 2N_0, the resulting set will still be closed (exiting it would mean hitting an odd number outside of it). Therefore, the closed sets are exactly the sets of the form C = {0} or C = 2N_0 ∪ D, where D is any set of odd numbers.

Problem 9.6. Which of the following statements are true? Give a short explanation (or a counterexample, where appropriate) for each of your choices. ({X_n}_{n ∈ N_0} is a Markov chain with state space S.)

1. f_ij^(n) ≤ p_ij^(n) for all i, j ∈ S and all n ∈ N.

2. If the states i and j intercommunicate, then there exists n ∈ N such that p_ij^(n) > 0 and p_ji^(n) > 0.

3. If all rows of the transition matrix are equal, then all states belong to the same class.

4. If f_ij < 1 and f_ji < 1, then i and j do not intercommunicate.

5. If P^n → I, then all states are recurrent. (Note: we say that a sequence {A_n}_{n ∈ N} of matrices converges to the matrix A, denoted by A_n → A, if (A_n)_{ij} → A_{ij}, as n → ∞, for all i, j.)

Solution:

1. TRUE. p_ij^(n) is the probability that X_n = j (given that X_0 = i). On the other hand, f_ij^(n) is the probability that X_n = j and that X_k ≠ j for all k = 1, ..., n - 1 (given that X_0 = i). Clearly, the second event is a subset of the first, so f_ij^(n) ≤ p_ij^(n).

2. FALSE. Consider the Markov chain with the transition graph depicted on the right [not reproduced in this transcription; the chain cycles deterministically through the states 1 → 2 → 3 → 1]. All states intercommunicate, but p_12^(n) > 0 if and only if n is of the form 3k + 1, for k ∈ N_0. On the other hand, p_21^(n) > 0 if and only if n = 3k + 2, for some k ∈ N_0. Thus, p_12^(n) and p_21^(n) are never simultaneously positive.

3. FALSE. Consider a Markov chain with the following transition matrix:

    P = [ 1  0 ]
        [ 1  0 ]

Then 1 is an absorbing state and it is in a class of its own, so it is not true that all states belong to the same class.

4. FALSE. Consider the Markov chain with the transition graph on the right [not reproduced; from each of the states 2 and 3, the chain moves with probability 1/2 to the absorbing state 1 and with probability 1/2 to the other one of the two]. The states 2 and 3 clearly intercommunicate. In the event that (starting from 2) the first transition happens to be 2 → 1, we never visit 3. Otherwise, we visit 3 at time 1. Therefore, f_23 = 1/2 < 1. Similarly, f_32 = 1/2 < 1.

5. TRUE. Suppose that there exists a transient state i ∈ S. Then Σ_n p_ii^(n) < ∞ and, in particular, p_ii^(n) → 0, as n → ∞. This is a contradiction with the assumption that p_ii^(n) → 1, for all i ∈ S.

Problem 9.7. Let C be a class in a Markov chain. Only one of the statements below is necessarily true; which one?

(a) C is closed,
(b) C^c is closed,
(c) at least one state in C is recurrent,
(d) for all states i, j ∈ C, p_ij > 0,
(e) none of the above.

Solution: The answer is (e).

(a) False. Take C = {(0, 0)} in the tennis example.

(b) False. Take C = {Amélie wins} in the tennis example.

(c) False. This is only true for finite chains. Take the deterministically monotone Markov chain as an example: all of its states are transient.

(d) False. This would be true if it read "for each pair of states i, j ∈ C, there exists n ∈ N_0 such that p_ij^(n) > 0". Otherwise, we can use the tennis chain and the states i = (40, Adv) and j = (Adv, 40). They belong to the same class, but p_ij = 0 (you need to pass through (40, 40) to go from one to the other).

(e) True.

Problem 9.8. In a Markov chain whose state space has n elements (n ∈ N), only one of the statements below is necessarily true; which one?

(a) all classes are closed,
(b) at least one state is transient,
(c) not more than half of all states are transient,
(d) there are at most n classes,
(e) none of the above.

Solution: The answer is (d).

(a) False. In the tennis example, there are classes that are not closed.

(b) False. Just take the regime-switching chain with 0 < p_01, p_10 < 1. Both of its states are recurrent. Or, even more simply, take a Markov chain with only one state (n = 1).

(c) False. In the tennis example, 18 states are transient, but n = 20.

(d) True. Classes form a partition of the state space, and each class has at least one element. Therefore, there are at most n classes.

(e) False.

Problem 9.9. Let C_1 and C_2 be two (different) classes. Only one of the statements below is necessarily true; which one?

(a) i → j or j → i, for all i ∈ C_1 and j ∈ C_2,
(b) C_1 ∪ C_2 is a class,
(c) if i → j for some i ∈ C_1 and j ∈ C_2, then k ↛ l for all k ∈ C_2 and l ∈ C_1,
(d) if i → j for some i ∈ C_1 and j ∈ C_2, then k → l for some k ∈ C_2 and l ∈ C_1,
(e) none of the above.

Solution: The answer is (c).

(a) False. Take C_1 = {Amélie wins} and C_2 = {Björn wins}.

(b) False. Take the same example as above.

(c) True. Suppose, to the contrary, that k → l for some k ∈ C_2 and l ∈ C_1. Then, since j and k are in the same class, we must have j → k. Similarly, l → i (l and i are in the same class). Using the transitivity of the communication relation, we get j → k → l → i, and so j → i. By the assumption, i → j, and, so, i ↔ j. This is a contradiction with the assumption that i and j are in different classes.

(d) False. In the tennis example, take i = (0, 0), j = (30, 0), C_1 = {i} and C_2 = {j}.

(e) False.

Problem 9.10. Let i be a recurrent state with period 5, and let j be another state. Only one of the statements below is necessarily true; which one?

(a) if j → i, then j is recurrent,
(b) if j → i, then j has period 5,
(c) if i → j, then j has period 5,
(d) if j → i, then j is transient,
(e) none of the above.

Solution: The answer is (c).

(a) False. Take j = 0 and i = 1 in the chain in the picture [the picture is not reproduced in this transcription].

(b) False. Take the same counterexample as above.

(c) True. We know that i is recurrent, and, since all recurrent classes are closed and i → j, the state j must belong to the same class as i. Period is a class property, so the period of j is also 5.

(d) False. Take for j any other state in the class of i; then j → i, but j is recurrent (by Proposition 9.22).

(e) False.

Problem 9.11. Let i and j be two states such that i is transient and i ↔ j. Only one of the statements below is necessarily true; which one?

(a) Σ_{n=1}^∞ p_ii^(n) = Σ_{n=1}^∞ p_jj^(n),
(b) if i → k, then k is transient,
(c) if k → i, then k is transient,
(d) the period of i must be 1,
(e) none of the above.

Solution: The answer is (c).

(a) False. In the chain in the picture [not reproduced in this transcription], take i = 1 and j = 2. Clearly, i ↔ j, and both of the states are transient. A theorem from the notes states that Σ_{n=1}^∞ p_ii^(n) < ∞ and Σ_{n=1}^∞ p_jj^(n) < ∞, but these two sums do not need to be equal. Indeed, the only way for the chain to go from 1 back to 1 in n steps is to move to 2, return to 2 in n - 2 steps, and then jump back to 1. Therefore, p_11^(n) = 0.25 p_22^(n-2) for n ≥ 3, while p_11^(2) = 0.25 and p_11^(1) = p_11 = 0. Therefore,

    Σ_{n=1}^∞ p_11^(n) = p_11^(1) + p_11^(2) + Σ_{n=3}^∞ p_11^(n) = 0.25 + 0.25 Σ_{n=1}^∞ p_22^(n).

This equality implies that the two sums are equal if and only if 0.75 Σ_{n=1}^∞ p_22^(n) = 0.25, i.e., if and only if Σ_{n=1}^∞ p_22^(n) = 1/3. We know, however, that one of the possible ways to go from 2 back to 2 in n steps is to just stay in 2 the whole time. The probability of this trajectory is (1/2)^n, and, so, p_22^(n) ≥ (1/2)^n for all n. Hence, Σ_{n=1}^∞ p_22^(n) ≥ Σ_{n=1}^∞ (1/2)^n = 1. Therefore, the two sums cannot be equal.

(b) False. Take i = 2 and k = 3 in the chain above.

(c) True. Suppose that k → i, but that k is recurrent. Since recurrent classes are closed, i must be in the same class as k. That would mean, however, that i is also recurrent. This is a contradiction with the assumption that i is transient.

(d) False. Take i = 2 in the modification of the example above in which p_22 = 0 and p_21 = p_23 = 1/2. The state 2 is still transient, but its period is 2.

(e) False.

Problem 9.12. Suppose there exists n ∈ N such that P^n = I, where I is the identity matrix and P is the transition matrix of a finite-state-space Markov chain. Only one of the statements below is necessarily true; which one?

(a) P = I,
(b) all states belong to the same class,
(c) all states are recurrent,
(d) the period of each state is n,
(e) none of the above.

Solution: The answer is (c).

(a) False. Take the regime-switching chain with

    P = [ 0  1 ]
        [ 1  0 ]

Then P^2 = I, but P ≠ I.

(b) False. If P = I, all states are absorbing and, therefore, each is in a class of its own.

(c) True. By the assumption, P^{kn} = (P^n)^k = I^k = I for all k ∈ N. Therefore, p_ii^(kn) = 1 for all k ∈ N, and, so, it is not true that lim_{m→∞} p_ii^(m) = 0 (the limit may not even exist). In any case, the series Σ_{m=1}^∞ p_ii^(m) cannot be convergent, and, so, i is recurrent, for all i ∈ S. Alternatively, the condition P^n = I means that the chain will come back to where it started - with certainty - every n steps, and so all states must be recurrent.

(d) False. Any chain satisfying P^n = I for which the n above is not unique is a counterexample. For example, if P = I, then P^n = I for every n ∈ N.

(e) False.

Problem 9.13. Let i be a recurrent state with period 3, and let j be another state. Only one of the statements below is necessarily true; which one?

(a) if j → i, then j is recurrent,
(b) if j → i, then j has period 3,
(c) if i → j, then j has period 3,
(d) if j → i, then j is transient,
(e) none of the above.

Solution: The true statement is (c). Since i is recurrent and i → j, we have j → i, and, so, i and j belong to the same class. Periodicity is a class property, so the periods of i and j are the same, namely 3.

Problem 9.14. Let C_1 and C_2 be two (different) communication classes of a Markov chain. Only one of the statements below is necessarily true; which one?

(a) i → j or j → i, for all i ∈ C_1 and j ∈ C_2,
(b) C_1 ∪ C_2 is a class,
(c) if i → j for some i ∈ C_1 and j ∈ C_2, then k ↛ l for all k ∈ C_2 and l ∈ C_1,
(d) if i → j for some i ∈ C_1 and j ∈ C_2, then k → l for some k ∈ C_2 and l ∈ C_1,
(e) none of the above.

Solution: The correct answer is (c). Suppose, to the contrary, that i → j and k → l for some k ∈ C_2 and some l ∈ C_1. Since both j and k are in the same class (and so are i and l), we have j ↔ k (and l ↔ i), and then, by transitivity, i → j → k → l → i, which implies that i ↔ j. Therefore, i and j are in the same class, which is in contradiction with the assumption that C_1 and C_2 are different classes.

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

RANDOM WALKS IN ONE DIMENSION

RANDOM WALKS IN ONE DIMENSION RANDOM WALKS IN ONE DIMENSION STEVEN P. LALLEY 1. THE GAMBLER S RUIN PROBLEM 1.1. Statement of the problem. I have A dollars; my colleague Xinyi has B dollars. A cup of coffee at the Sacred Grounds in

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

Probability, Random Processes and Inference

Probability, Random Processes and Inference INSTITUTO POLITÉCNICO NACIONAL CENTRO DE INVESTIGACION EN COMPUTACION Laboratorio de Ciberseguridad Probability, Random Processes and Inference Dr. Ponciano Jorge Escamilla Ambrosio pescamilla@cic.ipn.mx

More information

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant

More information

Proof Techniques (Review of Math 271)

Proof Techniques (Review of Math 271) Chapter 2 Proof Techniques (Review of Math 271) 2.1 Overview This chapter reviews proof techniques that were probably introduced in Math 271 and that may also have been used in a different way in Phil

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

Markov Chains on Countable State Space

Markov Chains on Countable State Space Markov Chains on Countable State Space 1 Markov Chains Introduction 1. Consider a discrete time Markov chain {X i, i = 1, 2,...} that takes values on a countable (finite or infinite) set S = {x 1, x 2,...},

More information

The Theory behind PageRank

The Theory behind PageRank The Theory behind PageRank Mauro Sozio Telecom ParisTech May 21, 2014 Mauro Sozio (LTCI TPT) The Theory behind PageRank May 21, 2014 1 / 19 A Crash Course on Discrete Probability Events and Probability

More information

FINITE MARKOV CHAINS

FINITE MARKOV CHAINS Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona FINITE MARKOV CHAINS Lidia Pinilla Peralta Director: Realitzat a: David Márquez-Carreras Departament de Probabilitat,

More information

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Yuval Filmus April 4, 2017 Abstract The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of

More information

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006.

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006. Markov Chains As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006 1 Introduction A (finite) Markov chain is a process with a finite number of states (or outcomes, or

More information

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is MARKOV CHAINS What I will talk about in class is pretty close to Durrett Chapter 5 sections 1-5. We stick to the countable state case, except where otherwise mentioned. Lecture 7. We can regard (p(i, j))

More information

Lecture 3: Sizes of Infinity

Lecture 3: Sizes of Infinity Math/CS 20: Intro. to Math Professor: Padraic Bartlett Lecture 3: Sizes of Infinity Week 2 UCSB 204 Sizes of Infinity On one hand, we know that the real numbers contain more elements than the rational

More information

Classification of Countable State Markov Chains

Classification of Countable State Markov Chains Classification of Countable State Markov Chains Friday, March 21, 2014 2:01 PM How can we determine whether a communication class in a countable state Markov chain is: transient null recurrent positive

More information

MATH 56A SPRING 2008 STOCHASTIC PROCESSES

MATH 56A SPRING 2008 STOCHASTIC PROCESSES MATH 56A SPRING 008 STOCHASTIC PROCESSES KIYOSHI IGUSA Contents 4. Optimal Stopping Time 95 4.1. Definitions 95 4.. The basic problem 95 4.3. Solutions to basic problem 97 4.4. Cost functions 101 4.5.

More information

MATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets

MATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets MATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets 1 Rational and Real Numbers Recall that a number is rational if it can be written in the form a/b where a, b Z and b 0, and a number

More information

IEOR 6711: Professor Whitt. Introduction to Markov Chains

IEOR 6711: Professor Whitt. Introduction to Markov Chains IEOR 6711: Professor Whitt Introduction to Markov Chains 1. Markov Mouse: The Closed Maze We start by considering how to model a mouse moving around in a maze. The maze is a closed space containing nine

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem Coyright c 2017 by Karl Sigman 1 Gambler s Ruin Problem Let N 2 be an integer and let 1 i N 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins

More information

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution.

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution. Lecture 7 1 Stationary measures of a Markov chain We now study the long time behavior of a Markov Chain: in particular, the existence and uniqueness of stationary measures, and the convergence of the distribution

More information

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018 Math 456: Mathematical Modeling Tuesday, March 6th, 2018 Markov Chains: Exit distributions and the Strong Markov Property Tuesday, March 6th, 2018 Last time 1. Weighted graphs. 2. Existence of stationary

More information

Markov Chain Model for ALOHA protocol

Markov Chain Model for ALOHA protocol Markov Chain Model for ALOHA protocol Laila Daniel and Krishnan Narayanan April 22, 2012 Outline of the talk A Markov chain (MC) model for Slotted ALOHA Basic properties of Discrete-time Markov Chain Stability

More information

FINITE ABELIAN GROUPS Amin Witno

FINITE ABELIAN GROUPS Amin Witno WON Series in Discrete Mathematics and Modern Algebra Volume 7 FINITE ABELIAN GROUPS Amin Witno Abstract We detail the proof of the fundamental theorem of finite abelian groups, which states that every

More information

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected 4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X

More information

Transience: Whereas a finite closed communication class must be recurrent, an infinite closed communication class can be transient:

Transience: Whereas a finite closed communication class must be recurrent, an infinite closed communication class can be transient: Stochastic2010 Page 1 Long-Time Properties of Countable-State Markov Chains Tuesday, March 23, 2010 2:14 PM Homework 2: if you turn it in by 5 PM on 03/25, I'll grade it by 03/26, but you can turn it in

More information

Math Homework 5 Solutions

Math Homework 5 Solutions Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram

More information

Tree sets. Reinhard Diestel

Tree sets. Reinhard Diestel 1 Tree sets Reinhard Diestel Abstract We study an abstract notion of tree structure which generalizes treedecompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked

More information

2. AXIOMATIC PROBABILITY

2. AXIOMATIC PROBABILITY IA Probability Lent Term 2. AXIOMATIC PROBABILITY 2. The axioms The formulation for classical probability in which all outcomes or points in the sample space are equally likely is too restrictive to develop

More information

Countability. 1 Motivation. 2 Counting

Countability. 1 Motivation. 2 Counting Countability 1 Motivation In topology as well as other areas of mathematics, we deal with a lot of infinite sets. However, as we will gradually discover, some infinite sets are bigger than others. Countably

More information

4.7.1 Computing a stationary distribution

4.7.1 Computing a stationary distribution At a high-level our interest in the rest of this section will be to understand the limiting distribution, when it exists and how to compute it To compute it, we will try to reason about when the limiting

More information

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures 36-752 Spring 2014 Advanced Probability Overview Lecture Notes Set 1: Course Overview, σ-fields, and Measures Instructor: Jing Lei Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1

More information

Stochastic Simulation

Stochastic Simulation Stochastic Simulation Ulm University Institute of Stochastics Lecture Notes Dr. Tim Brereton Summer Term 2015 Ulm, 2015 2 Contents 1 Discrete-Time Markov Chains 5 1.1 Discrete-Time Markov Chains.....................

More information

At the start of the term, we saw the following formula for computing the sum of the first n integers:

At the start of the term, we saw the following formula for computing the sum of the first n integers: Chapter 11 Induction This chapter covers mathematical induction. 11.1 Introduction to induction At the start of the term, we saw the following formula for computing the sum of the first n integers: Claim

More information

Stochastic Processes (Stochastik II)

Stochastic Processes (Stochastik II) Stochastic Processes (Stochastik II) Lecture Notes Zakhar Kabluchko University of Ulm Institute of Stochastics L A TEX-version: Judith Schmidt Vorwort Dies ist ein unvollständiges Skript zur Vorlesung

More information

Interlude: Practice Final

Interlude: Practice Final 8 POISSON PROCESS 08 Interlude: Practice Final This practice exam covers the material from the chapters 9 through 8. Give yourself 0 minutes to solve the six problems, which you may assume have equal point

More information

ISM206 Lecture, May 12, 2005 Markov Chain

ISM206 Lecture, May 12, 2005 Markov Chain ISM206 Lecture, May 12, 2005 Markov Chain Instructor: Kevin Ross Scribe: Pritam Roy May 26, 2005 1 Outline of topics for the 10 AM lecture The topics are: Discrete Time Markov Chain Examples Chapman-Kolmogorov

More information

Some Definition and Example of Markov Chain

Some Definition and Example of Markov Chain Some Definition and Example of Markov Chain Bowen Dai The Ohio State University April 5 th 2016 Introduction Definition and Notation Simple example of Markov Chain Aim Have some taste of Markov Chain and

More information

DR.RUPNATHJI( DR.RUPAK NATH )

DR.RUPNATHJI( DR.RUPAK NATH ) Contents 1 Sets 1 2 The Real Numbers 9 3 Sequences 29 4 Series 59 5 Functions 81 6 Power Series 105 7 The elementary functions 111 Chapter 1 Sets It is very convenient to introduce some notation and terminology

More information

2 DISCRETE-TIME MARKOV CHAINS

2 DISCRETE-TIME MARKOV CHAINS 1 2 DISCRETE-TIME MARKOV CHAINS 21 FUNDAMENTAL DEFINITIONS AND PROPERTIES From now on we will consider processes with a countable or finite state space S {0, 1, 2, } Definition 1 A discrete-time discrete-state

More information

T 1. The value function v(x) is the expected net gain when using the optimal stopping time starting at state x:

T 1. The value function v(x) is the expected net gain when using the optimal stopping time starting at state x: 108 OPTIMAL STOPPING TIME 4.4. Cost functions. The cost function g(x) gives the price you must pay to continue from state x. If T is your stopping time then X T is your stopping state and f(x T ) is your

More information

Markov chains. Randomness and Computation. Markov chains. Markov processes

Markov chains. Randomness and Computation. Markov chains. Markov processes Markov chains Randomness and Computation or, Randomized Algorithms Mary Cryan School of Informatics University of Edinburgh Definition (Definition 7) A discrete-time stochastic process on the state space

More information

2. Prime and Maximal Ideals

2. Prime and Maximal Ideals 18 Andreas Gathmann 2. Prime and Maximal Ideals There are two special kinds of ideals that are of particular importance, both algebraically and geometrically: the so-called prime and maximal ideals. Let

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

1 Random walks: an introduction

1 Random walks: an introduction Random Walks: WEEK Random walks: an introduction. Simple random walks on Z.. Definitions Let (ξ n, n ) be i.i.d. (independent and identically distributed) random variables such that P(ξ n = +) = p and

More information

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME ELIZABETH G. OMBRELLARO Abstract. This paper is expository in nature. It intuitively explains, using a geometrical and measure theory perspective, why

More information

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers ALGEBRA CHRISTIAN REMLING 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers by Z = {..., 2, 1, 0, 1,...}. Given a, b Z, we write a b if b = ac for some

More information

Stochastic Models: Markov Chains and their Generalizations

Stochastic Models: Markov Chains and their Generalizations Scuola di Dottorato in Scienza ed Alta Tecnologia Dottorato in Informatica Universita di Torino Stochastic Models: Markov Chains and their Generalizations Gianfranco Balbo e Andras Horvath Outline Introduction

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Simulated Annealing Barnabás Póczos & Ryan Tibshirani Andrey Markov Markov Chains 2 Markov Chains Markov chain: Homogen Markov chain: 3 Markov Chains Assume that the state

More information

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes.

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes. 1. LECTURE 1: APRIL 3, 2012 1.1. Motivating Remarks: Differential Equations. In the deterministic world, a standard tool used for modeling the evolution of a system is a differential equation. Such an

More information

1 Random Walks and Electrical Networks

1 Random Walks and Electrical Networks CME 305: Discrete Mathematics and Algorithms Random Walks and Electrical Networks Random walks are widely used tools in algorithm design and probabilistic analysis and they have numerous applications.

More information

Final Exam Review. 2. Let A = {, { }}. What is the cardinality of A? Is

Final Exam Review. 2. Let A = {, { }}. What is the cardinality of A? Is 1. Describe the elements of the set (Z Q) R N. Is this set countable or uncountable? Solution: The set is equal to {(x, y) x Z, y N} = Z N. Since the Cartesian product of two denumerable sets is denumerable,

More information

OPTION-CLOSED GAMES RICHARD J. NOWAKOWSKI AND PAUL OTTAWAY

OPTION-CLOSED GAMES RICHARD J. NOWAKOWSKI AND PAUL OTTAWAY Volume 6, Number 1, Pages 142 153 ISSN 1715-0868 OPTION-CLOSED GAMES RICHARD J. NOWAKOWSKI AND PAUL OTTAWAY Abstract. We consider the class of combinatorial games with the property that each player s move

More information

CIS 2033 Lecture 5, Fall

CIS 2033 Lecture 5, Fall CIS 2033 Lecture 5, Fall 2016 1 Instructor: David Dobor September 13, 2016 1 Supplemental reading from Dekking s textbook: Chapter2, 3. We mentioned at the beginning of this class that calculus was a prerequisite

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

Positive and null recurrent-branching Process

Positive and null recurrent-branching Process December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?

More information

Chapter 2: Markov Chains and Queues in Discrete Time

Chapter 2: Markov Chains and Queues in Discrete Time Chapter 2: Markov Chains and Queues in Discrete Time L. Breuer University of Kent 1 Definition Let X n with n N 0 denote random variables on a discrete space E. The sequence X = (X n : n N 0 ) is called

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some

More information

Discrete Markov Chain. Theory and use

Discrete Markov Chain. Theory and use Discrete Markov Chain. Theory and use Andres Vallone PhD Student andres.vallone@predoc.uam.es 2016 Contents 1 Introduction 2 Concept and definition Examples Transitions Matrix Chains Classification 3 Empirical

More information

Markov Chains. Contents

Markov Chains. Contents 6 Markov Chains Contents 6.1. Discrete-Time Markov Chains............... p. 2 6.2. Classification of States................... p. 9 6.3. Steady-State Behavior.................. p. 13 6.4. Absorption Probabilities

More information

Basic counting techniques. Periklis A. Papakonstantinou Rutgers Business School

Basic counting techniques. Periklis A. Papakonstantinou Rutgers Business School Basic counting techniques Periklis A. Papakonstantinou Rutgers Business School i LECTURE NOTES IN Elementary counting methods Periklis A. Papakonstantinou MSIS, Rutgers Business School ALL RIGHTS RESERVED

More information

If something is repeated, look for what stays unchanged!

If something is repeated, look for what stays unchanged! LESSON 5: INVARIANTS Invariants are very useful in problems concerning algorithms involving steps that are repeated again and again (games, transformations). While these steps are repeated, the system

More information

Spanning, linear dependence, dimension

Spanning, linear dependence, dimension Spanning, linear dependence, dimension In the crudest possible measure of these things, the real line R and the plane R have the same size (and so does 3-space, R 3 ) That is, there is a function between

More information

Notes on ordinals and cardinals

Notes on ordinals and cardinals Notes on ordinals and cardinals Reed Solomon 1 Background Terminology We will use the following notation for the common number systems: N = {0, 1, 2,...} = the natural numbers Z = {..., 2, 1, 0, 1, 2,...}

More information

The Boundary Problem: Markov Chain Solution

The Boundary Problem: Markov Chain Solution MATH 529 The Boundary Problem: Markov Chain Solution Consider a random walk X that starts at positive height j, and on each independent step, moves upward a units with probability p, moves downward b units

More information

Modern Discrete Probability Branching processes

Modern Discrete Probability Branching processes Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson

More information

Lecture 12 : Recurrences DRAFT

Lecture 12 : Recurrences DRAFT CS/Math 240: Introduction to Discrete Mathematics 3/1/2011 Lecture 12 : Recurrences Instructor: Dieter van Melkebeek Scribe: Dalibor Zelený DRAFT Last few classes we talked about program correctness. We

More information

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC)

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) Markov Chains (2) Outlines Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) 2 pj ( n) denotes the pmf of the random variable p ( n) P( X j) j We will only be concerned with homogenous

More information

SMT 2013 Power Round Solutions February 2, 2013

SMT 2013 Power Round Solutions February 2, 2013 Introduction This Power Round is an exploration of numerical semigroups, mathematical structures which appear very naturally out of answers to simple questions. For example, suppose McDonald s sells Chicken

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information