FINITE MARKOV CHAINS

Final degree project
DEGREE IN MATHEMATICS
Facultat de Matemàtiques
Universitat de Barcelona

FINITE MARKOV CHAINS

Lidia Pinilla Peralta

Director: David Márquez-Carreras
Carried out at: Departament de Probabilitat, Lògica i Estadística, UB

Barcelona, June

Abstract

The purpose of this final project is to study and understand finite Markov chains. First of all, the topic is introduced through definitions and relevant properties. Subsequently, how to classify the chains will be the point in question. Finally, the long-time behaviour of the chain is analysed. Furthermore, different examples are presented throughout the project, to show how Markov chains work and to make them easier to comprehend.

Acknowledgments

I would like to express my gratitude to my tutor David Márquez, because he has helped me every week since I started this project in February. His advice and assistance have been essential, and with his professional guidance I could accomplish this study and learn from it as well. My special thanks are extended to my family and friends for their support and consideration during all this period.

Contents

1 Introduction
2 Markov chains in finite time
  2.1 Time-homogeneous Markov chains and examples
  2.2 Chapman-Kolmogorov equation and n-step transition probabilities
3 Class structure
  3.1 Communicated states. Closed sets. Irreducible chains
  3.2 Periodic states. Cyclic classes. First classification of the states of a Markov chain
  3.3 Hitting times and absorption probabilities
4 Recurrent and transient states
  4.1 Definitions
  4.2 Rules
  4.3 Examples
5 Invariant distributions
  5.1 Invariant distributions or stationary distributions
  5.2 Ergodicity
  5.3 Examples
6 Limit of a time-homogeneous Markov chain
  6.1 Behaviour in mean
  6.2 Mean recurrence time and invariant distributions
  6.3 Limiting probability distribution and stationary distribution
  6.4 Reversibility
  6.5 Metropolis-Hastings algorithm
7 Conclusions
A Probability review
  A.1 General outcomes
  A.2 Conditional probability
  A.3 Independence
  A.4 Discrete random variables
  A.5 Expected value
B Problems and applications

1 Introduction

Since I began to study the degree in Mathematics at the University of Barcelona, probability and statistics have been the field of this science I enjoyed the most, because I think that in this area it is possible to learn the theory and its application is clear as well. For that reason, I decided that the topic of my final project would be about it. During my studies, I have learned the main concepts of probability, and with this study I will be able to consolidate and increase my knowledge in this discipline.

With regard to the specific topic, I knew what a Markov chain was thanks to the subject of Modelització, where they were briefly introduced to us. Therefore, I decided to choose this topic in order to learn it in more detail. It seemed intriguing to me as well, because I also thought that my mathematical skills would be improved by working on it. In particular, this project is focused on finite Markov chains, whose values are discrete. These types of processes have another relevant characteristic as well: the absence of memory, which means that the past is not relevant for the future; what is important is the present.

The project is divided into five parts. The first part consists of introducing the concept of a Markov chain and explaining some of its properties. In fact, the notions defined here will be indispensable for the subsequent parts. In the second part, the description of the characteristics of a Markov chain goes deeper. Moreover, it allows us to start to classify the chains. The aim of the third part is to complete the classification of Markov chains started in the preceding part. Furthermore, the evolution of the chain over time will take up an important place in this part. The fourth part goes beyond the previous ones, by starting to study the long-time properties of Markov chains. With statements and their proofs, the understanding will be better. In the fifth and final part, the explanation of the behaviour of Markov chains in the long run will conclude, as well as this study. Throughout the project, the presence of examples will be an important point in order to comprehend how Markov chains work.

Additionally, two annexes are included. On the one hand, the first annex provides the main results studied in the subject of Probabilitats. These results are important throughout the project and their use in it will be continuous. On the other hand, the second annex collects various exercises worked out in parallel with the theoretical part. This supplement allows us to complete the topic.

2 Markov chains in finite time

We start by introducing the definition of a Markov chain and we study its first properties. In particular, we focus on a specific type of Markov chain which has an important property, as we will see. After that, we determine what a stopping time is, a valuable notion that will be useful in the following chapters. We work with these new concepts and we give examples to clarify them. Subsequently, the Chapman-Kolmogorov equation and the $n$-step transition probabilities will help us to study the evolution of the chain over the states. Finally, we show two applications of these ideas.

Throughout this chapter we consider a probability space $(\Omega, \mathcal{A}, P)$ and a collection of discrete random variables $X_n : \Omega \to I$, where $I$ is a countable set called the set of states or state space. Therefore, every $i \in I$ defines a state.

2.1 Time-homogeneous Markov chains and examples

In this section we define what a Markov chain is; in particular, we work with what we call a time-homogeneous Markov chain. Furthermore, we show some characteristics of these chains and various examples in order to understand them better. Firstly, we introduce a concept necessary for the definition of a Markov chain.

Definition 2.1.1. A stochastic process is a collection of random variables indexed by a set, $\{X_t : t \in T\}$; for every $t$, we have $X_t : \Omega \to \mathbb{R}$. In other words, it represents the evolution of a system of random values over time. We will study the case where $T$ is discrete, that is, $T = \mathbb{Z}^+ = \{0, 1, 2, \dots\}$. Therefore, we will have the process $\{X_n : n \geq 0\}$.

So now we can consider the following notion.

Definition 2.1.2. A matrix $\Pi = (p_{i,j} : i, j \in I)$ is a stochastic matrix or a transition matrix if
(a) $p_{i,j} \in [0, 1]$,
(b) $\sum_{j \in I} p_{i,j} = 1$, for all $i \in I$.
In addition, the $p_{i,j}$, $i, j \in I$, are called transition probabilities.

Observation 2.1.3. A stochastic process can be represented by a transition matrix as well as by a state diagram.

Examples:

(i) Consider a stochastic process on the states $\{1, 2, 3\}$ with a transition matrix of the form
$$\Pi = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \\ 1/3 & 2/3 & 0 \end{pmatrix}.$$

Then the state diagram consists of the three states, with one arrow for each positive entry of $\Pi$, labelled by the corresponding transition probability.

(ii) Consider a stochastic process with a two-state diagram in which state 1 moves to state 2 with probability $\alpha$ and stays put with probability $1 - \alpha$, while state 2 moves to state 1 with probability $\beta$ and stays put with probability $1 - \beta$. Then the transition matrix is
$$\Pi = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}.$$

In the following result we describe what is necessary to have a Markov chain.

Definition 2.1.4. A stochastic process $\{X_n : n \geq 0\}$, whose values are in a set of states $I$, is a Markov chain with initial distribution $\gamma = \{\gamma_i : i \in I\}$ and transition matrix $\Pi = (p_{i,j} : i, j \in I)$ if
(a) $X_0 \overset{D}{=} \gamma$, which means that for $i \in I$ we have $P(X_0 = i) = \gamma_i$;
(b) $P(X_{n+1} = j \mid X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i) = P(X_{n+1} = j \mid X_n = i)$, for all $i_0, \dots, i_{n-1}, i, j \in I$ and $n \geq 0$.

Observation 2.1.5. The last equality is called the Markov property, and it points out that, given the present state, the future and the past states are independent.

As we have said before, we will focus on a particular type of Markov chain.

Definition 2.1.6. A Markov chain defined as in Definition 2.1.4 is called a time-homogeneous Markov chain if
$$P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i) = p_{i,j}, \qquad \forall n \geq 0, \; i, j \in I.$$
Moreover, we will write it as HMC$(\gamma, \Pi)$.

Observation 2.1.7. The definition above shows that the transition probability $p_{i,j}$ is independent of $n$.

Theorem 2.1.8. A stochastic process $\{X_n : n \geq 0\}$, whose values are in the set of states $I$, is a time-homogeneous Markov chain $(\gamma, \Pi)$ if, and only if,
$$P(X_0 = i_0, \dots, X_n = i_n) = \gamma_{i_0}\, p_{i_0, i_1} \cdots p_{i_{n-1}, i_n}, \qquad \forall i_0, \dots, i_n \in I, \; n \geq 0,$$
where $p_{i_0, i_1}, \dots, p_{i_{n-1}, i_n}$ are the transition probabilities associated with $\Pi$.

Proof. Using the general product rule, we have
$$P(X_0 = i_0, \dots, X_n = i_n) = P(X_0 = i_0)\, P(X_1 = i_1 \mid X_0 = i_0) \cdots P(X_n = i_n \mid X_0 = i_0, \dots, X_{n-1} = i_{n-1}) = \gamma_{i_0}\, p_{i_0, i_1} \cdots p_{i_{n-1}, i_n}.$$
Conversely, the first condition of the Markov property is satisfied because
$$P(X_0 = i_0) = \sum_{i_1 \in I} \gamma_{i_0}\, p_{i_0, i_1} = \gamma_{i_0} \sum_{i_1 \in I} p_{i_0, i_1} = \gamma_{i_0}.$$
Consequently, $X_0 \overset{D}{=} \gamma$. Now, we study the second condition of the Markov property. On the one hand, we have
$$P(X_{n+1} = j \mid X_0 = i_0, \dots, X_n = i) = \frac{P(X_0 = i_0, \dots, X_n = i, X_{n+1} = j)}{P(X_0 = i_0, \dots, X_n = i)} = \frac{\gamma_{i_0}\, p_{i_0, i_1} \cdots p_{i_{n-1}, i}\, p_{i,j}}{\gamma_{i_0}\, p_{i_0, i_1} \cdots p_{i_{n-1}, i}} = p_{i,j}.$$
On the other hand, we get
$$P(X_{n+1} = j \mid X_n = i) = \frac{P(X_n = i, X_{n+1} = j)}{P(X_n = i)} = \frac{\sum_{i_0, \dots, i_{n-1} \in I} P(X_0 = i_0, \dots, X_n = i, X_{n+1} = j)}{\sum_{i_0, \dots, i_{n-1} \in I} P(X_0 = i_0, \dots, X_n = i)} = \frac{\sum_{i_0, \dots, i_{n-1} \in I} \gamma_{i_0}\, p_{i_0, i_1} \cdots p_{i_{n-1}, i}\, p_{i,j}}{\sum_{i_0, \dots, i_{n-1} \in I} \gamma_{i_0}\, p_{i_0, i_1} \cdots p_{i_{n-1}, i}} = p_{i,j}.$$
Therefore, the second condition of the Markov property is satisfied. In addition, as $p_{i,j}$ does not depend on $n$, we also obtain that this is a time-homogeneous Markov chain.

Proposition 2.1.9. Let $\{X_n : n \geq 0\}$ be a HMC$(\gamma, \Pi)$. Then, for fixed $m \geq 0$, $\{X_{n+m} : n \geq 0\}$ is a HMC$(\mathcal{L}(X_m), \Pi)$, where $\mathcal{L}(X_m)$ is the law of the discrete random variable $X_m$.

Proof. The result follows from Theorem 2.1.8:
$$P(X_m = j_0, \dots, X_{n+m} = j_n) = \sum_{i_0, \dots, i_{m-1} \in I} P(X_0 = i_0, \dots, X_{m-1} = i_{m-1}, X_m = j_0, \dots, X_{n+m} = j_n)$$
$$= \sum_{i_0, \dots, i_{m-1} \in I} \gamma_{i_0}\, p_{i_0, i_1} \cdots p_{i_{m-1}, j_0}\, p_{j_0, j_1} \cdots p_{j_{n-1}, j_n} = p_{j_0, j_1} \cdots p_{j_{n-1}, j_n}\, P(X_m = j_0).$$
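Since, by Theorem 2.1.8, the pair $(\gamma, \Pi)$ determines the law of the whole process, it also prescribes how to simulate one: draw $X_0$ from $\gamma$ and then, repeatedly, draw $X_{n+1}$ from the row of $\Pi$ indexed by $X_n$. The following is a minimal sketch in Python (not part of the original project); the two-state matrix and the values $\alpha = 0.3$, $\beta = 0.6$ are illustrative assumptions.

```python
# Minimal sketch: simulating a path of a HMC(gamma, Pi).
# The chain and the values alpha = 0.3, beta = 0.6 are assumed for illustration.
import numpy as np

def sample_path(gamma, Pi, n_steps, seed=0):
    """Draw X_0 from gamma, then X_{n+1} from row X_n of Pi (Definition 2.1.6)."""
    rng = np.random.default_rng(seed)
    path = [rng.choice(len(gamma), p=gamma)]
    for _ in range(n_steps):
        path.append(rng.choice(len(gamma), p=Pi[path[-1]]))
    return path

alpha, beta = 0.3, 0.6
Pi = np.array([[1 - alpha, alpha],
               [beta, 1 - beta]])
gamma = np.array([1.0, 0.0])        # start in state 1 (index 0)
print(sample_path(gamma, Pi, 10))   # a list of 11 states in {0, 1}
```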

Now, we introduce the concept of a stopping time, which will be important in successive chapters.

Definition 2.1.10. A random variable $T : \Omega \to \{0, 1, 2, \dots\} \cup \{\infty\}$ such that the events $\{T = n\}$ only depend on $X_0, \dots, X_n$, for $n \geq 0$, is called a stopping time.

We also study an important property involving time-homogeneous Markov chains and stopping times, which is called the strong Markov property.

Theorem 2.1.11. Let $\{X_n : n \geq 0\}$ be a HMC$(\gamma, \Pi)$ and assume that $T$ is a stopping time. Then, conditionally on $T < \infty$ and $X_T = i$, $\{X_{T+n} : n \geq 0\}$ is a HMC$(\delta_i, \Pi)$. Moreover, it is independent of $X_0, X_1, \dots, X_T$.

Proof. Consider an event $B$ determined by $X_0, \dots, X_T$; so that $B \cap \{T = m\}$ only depends on $X_0, X_1, \dots, X_m$. Then, using the Markov property, we have
$$P(\{X_T = j_0, X_{T+1} = j_1, \dots, X_{T+n} = j_n\} \cap B \cap \{T = m\} \cap \{X_T = i\})$$
$$= P(\{X_m = j_0, X_{m+1} = j_1, \dots, X_{m+n} = j_n\} \cap B \cap \{T = m\} \cap \{X_m = i\})$$
$$= P(X_m = j_0, X_{m+1} = j_1, \dots, X_{m+n} = j_n \mid B \cap \{T = m\} \cap \{X_m = i\})\, P(B \cap \{T = m\} \cap \{X_m = i\})$$
$$= P_i(X_0 = j_0, X_1 = j_1, \dots, X_n = j_n)\, P(B \cap \{T = m\} \cap \{X_T = i\}).$$
Taking the sum from $m = 0$ until infinity on both sides of the previous equality, we get
$$P(\{X_T = j_0, \dots, X_{T+n} = j_n\} \cap B \cap \{T < \infty\} \cap \{X_T = i\}) = P_i(X_0 = j_0, \dots, X_n = j_n)\, P(B \cap \{T < \infty\} \cap \{X_T = i\}).$$
Therefore
$$\frac{P(\{X_T = j_0, \dots, X_{T+n} = j_n\} \cap B \cap \{T < \infty\} \cap \{X_T = i\})}{P(\{T < \infty\} \cap \{X_T = i\})} = P_i(X_0 = j_0, \dots, X_n = j_n)\, \frac{P(B \cap \{T < \infty\} \cap \{X_T = i\})}{P(\{T < \infty\} \cap \{X_T = i\})}.$$
Finally, we obtain
$$P(\{X_T = j_0, \dots, X_{T+n} = j_n\} \cap B \mid T < \infty, X_T = i) = P_i(X_0 = j_0, \dots, X_n = j_n)\, P(B \mid T < \infty, X_T = i).$$

Examples:

(i) Random walk on $\mathbb{Z}$. Suppose a particle moving along a straight line in unit steps. Each step is one unit to the right with probability $p$ or one unit to the left with probability $q = 1 - p$. The states are the possible positions. Let $X_0 = 0$ be the initial position, so that the initial distribution is the constant 0. We consider $X_n = X_0 + \xi_1 + \dots + \xi_n$, where $P(\xi_n = 1) = p$

and $P(\xi_n = -1) = q = 1 - p$, with $p \in (0, 1)$, where $\{\xi_n : n \geq 1\}$ are independent and identically distributed random variables; that is to say, $\xi_n$ is the movement in the $n$th stage. Now, we verify that $\{X_n : n \geq 0\}$ is a Markov chain by proving that it satisfies the Markov property. Using that $\{\xi_n : n \geq 1\}$ are independent and identically distributed, for any $i_0, \dots, i_{n-1}, i, j \in I$, we have
$$P(X_{n+1} = j \mid X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i) = \frac{P(X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i, X_{n+1} = j)}{P(X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i)}$$
$$= \frac{P(X_0 = i_0, \xi_1 = i_1 - i_0, \dots, \xi_n = i - i_{n-1}, \xi_{n+1} = j - i)}{P(X_0 = i_0, \xi_1 = i_1 - i_0, \dots, \xi_n = i - i_{n-1})} = \frac{P(X_0 = i_0, \xi_1 = i_1 - i_0, \dots, \xi_n = i - i_{n-1})\, P(\xi_{n+1} = j - i)}{P(X_0 = i_0, \xi_1 = i_1 - i_0, \dots, \xi_n = i - i_{n-1})} = P(\xi_{n+1} = j - i) := p_{i,j}.$$
On the other hand, we get
$$P(X_{n+1} = j \mid X_n = i) = \frac{P(X_n + \xi_{n+1} = j, X_n = i)}{P(X_n = i)} = \frac{P(\xi_{n+1} = j - i, X_n = i)}{P(X_n = i)} = \frac{P(\xi_{n+1} = j - i)\, P(X_n = i)}{P(X_n = i)} = P(\xi_{n+1} = j - i) := p_{i,j}.$$
So $\{X_n : n \geq 0\}$ satisfies the Markov property. Furthermore, as $p_{i,j}$ does not depend on $n$, we obtain that this is a time-homogeneous Markov chain. Finally, we write the transition matrix of this process, with $p_{i,i+1} = p$, $p_{i,i-1} = q$ and $p_{i,j} = 0$ otherwise:
$$\Pi = \begin{pmatrix} \ddots & \ddots & \ddots & & \\ & q & 0 & p & \\ & & q & 0 & p \\ & & & \ddots & \ddots \end{pmatrix}$$

The previous concepts of this example prove the next result.

Proposition 2.1.12. Let $\{Y_n : n \geq 1\}$ be a sequence of random variables, which are independent and identically distributed, whose values are in $I$. Then $\left\{X_n = \sum_{i=1}^n Y_i : n \geq 1\right\}$ is a time-homogeneous Markov chain.

More generally, we have the following result.

Proposition 2.1.13. Let $\{Z_n : n \geq 1\}$ be a sequence of random variables, which are independent and identically distributed, such that $Z_n : \Omega \to I$. Consider $f : I \times I \to I$ and a random variable $X_0$, independent of $\{Z_n : n \geq 1\}$ and whose values are in $I$. Then, if we define $X_{n+1} = f(X_n, Z_{n+1})$, with $n \geq 0$, $\{X_n : n \geq 0\}$ is a time-homogeneous Markov chain.

Proof. Firstly, we study the Markov property. Using that $\{Z_n : n \geq 1\}$ are independent and identically distributed, we get
$$P(X_{n+1} = j \mid X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i) = \frac{P(X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i, X_{n+1} = j)}{P(X_0 = i_0, \dots, X_{n-1} = i_{n-1}, X_n = i)}$$
$$= \frac{P(X_0 = i_0, f(i_0, Z_1) = i_1, \dots, f(i_{n-1}, Z_n) = i, f(i, Z_{n+1}) = j)}{P(X_0 = i_0, f(i_0, Z_1) = i_1, \dots, f(i_{n-1}, Z_n) = i)} = \frac{P(X_0 = i_0, f(i_0, Z_1) = i_1, \dots, f(i_{n-1}, Z_n) = i)\, P(f(i, Z_{n+1}) = j)}{P(X_0 = i_0, f(i_0, Z_1) = i_1, \dots, f(i_{n-1}, Z_n) = i)} = P(f(i, Z_{n+1}) = j) := p_{i,j}.$$
On the other hand, we have
$$P(X_{n+1} = j \mid X_n = i) = \frac{P(X_{n+1} = j, X_n = i)}{P(X_n = i)} = \frac{P(f(X_n, Z_{n+1}) = j, X_n = i)}{P(X_n = i)} = \frac{P(f(i, Z_{n+1}) = j)\, P(X_n = i)}{P(X_n = i)} = P(f(i, Z_{n+1}) = j) = p_{i,j}.$$
Consequently, $\{X_n : n \geq 0\}$ verifies the Markov property. Moreover, as $p_{i,j}$ does not depend on $n$, we obtain that this is a time-homogeneous Markov chain.

(ii) Random walk on $\mathbb{Z}$ with absorbing barriers (also known as the Gambler's Ruin Problem). Suppose two gamblers, A and B, who play at tossing a coin (so that the possible outcomes are heads or tails) and who have $a$ euros and $b$ euros, respectively. The game finishes when one of the players gets ruined. We also consider that on each successive gamble player A wins 1 euro if the result is a head, or loses 1 euro if the result is a tail, with probabilities $p$ and $q = 1 - p$, respectively. So the stochastic process in this case is $\{X_n : n \geq 0\}$, and it records the evolution of the capital of player A. Moreover, we write $X_0 = a$ and $X_{n+1} = X_n + \xi_{n+1}$, with $X_n : \Omega \to \{0, 1, 2, \dots, a + b\}$. Then, the initial distribution is

$\gamma = \delta_{\{a\}}$, where $\delta_{\{a\}}$ denotes the Dirac delta distribution, defined as
$$\delta_{\{a\}}(x) = \begin{cases} 0, & \text{if } x \neq a, \\ 1, & \text{if } x = a, \end{cases}$$
and the transition matrix is
$$\Pi = \begin{pmatrix} 1 & 0 & & & \\ q & 0 & p & & \\ & \ddots & \ddots & \ddots & \\ & & q & 0 & p \\ & & & 0 & 1 \end{pmatrix}$$

(iii) Random walk on $\mathbb{Z}$ with reflecting barriers. The idea is the same as in the previous example but now, when one of the players gets ruined, the other one gives him one euro, so that the game does not end. Consequently, the transition matrix in this case is
$$\Pi = \begin{pmatrix} 0 & 1 & & & \\ q & 0 & p & & \\ & \ddots & \ddots & \ddots & \\ & & q & 0 & p \\ & & & 1 & 0 \end{pmatrix}$$

(iv) Model for inventory management. A certain product is put up for sale to satisfy the requests of the consumers. Let $Z_{n+1}$ be the number of demanded units of the product between the moments $n$ and $n + 1$. Let $X_0$ be the initial value of the stock and $\{Z_n : n \geq 1\}$ independent and identically distributed random variables, independent of $X_0$ as well. We also know that the stock is updated after every stage. Therefore, let $X_n$ be the value of the stock at the $n$th stage. We consider the values $s$ and $S$, which are the critical point and the maximum point, respectively, such that $0 < s < S$. In that way, if $X_n \leq s$, then we replenish the stock up to the level $S$; otherwise (that is to say, if $X_n > s$) we keep the stock where it was. Moreover, we assume that $X_0 \leq S$ and we write $X_n \in \{S, S-1, S-2, \dots\}$, for $n \geq 0$. In addition, considering that we can obtain negative values if a request is not handled, but that it will be handled immediately after the replenishment of the stock, $X_n$ can be expressed by the recurrence
$$X_{n+1} = \begin{cases} X_n - Z_{n+1}, & \text{if } s < X_n \leq S, \\ S - Z_{n+1}, & \text{if } X_n \leq s. \end{cases}$$
Using a similar argument to the one used in Example (i), it can be proved that this model is represented by a time-homogeneous Markov chain.

(v) Ehrenfest model for statistical mechanics (a simple version). This model allows us to represent diffusion through a porous material, a membrane for example. Furthermore, it can describe the heat exchange between two systems at different temperatures. Suppose that we have two boxes, labelled A and B, that contain a total of $N$ particles. Let $X_n$ be the number of particles in A at the moment $n$. Just after the moment $n$ we take one particle, no matter in which box it is located, and we move it to the other box at the moment $n + 1$. So, if we write $X_n = i$, with $i \in \{0, 1, \dots, N\}$, then
$$X_{n+1} = \begin{cases} i - 1, & \text{if the particle was in box A}, \\ i + 1, & \text{if the particle was in box B}. \end{cases}$$
Now, we study the initial distribution of this process: if $X_0 = n_0$, then $\gamma = \delta_{\{n_0\}}$. Moreover, considering that from the state $i$ the transition is only possible to the states $i - 1$ and $i + 1$, the transition probabilities in this case are
$$p_{i,j} = \begin{cases} P(X_{n+1} = i + 1 \mid X_n = i) = \frac{N - i}{N}, & \text{if } j = i + 1, \\ P(X_{n+1} = i - 1 \mid X_n = i) = \frac{i}{N}, & \text{if } j = i - 1, \\ 0, & \text{otherwise}. \end{cases}$$
Therefore, the transition matrix that we obtain is
$$\Pi = \begin{pmatrix} 0 & 1 & & & & \\ \frac{1}{N} & 0 & \frac{N-1}{N} & & & \\ & \frac{2}{N} & 0 & \frac{N-2}{N} & & \\ & & \ddots & \ddots & \ddots & \\ & & & \frac{N-1}{N} & 0 & \frac{1}{N} \\ & & & & 1 & 0 \end{pmatrix}$$

Observation 2.1.14. This matrix and the one obtained in Example (iii) (random walk on $\mathbb{Z}$ with reflecting barriers) are similar; the difference between both is that in this case the transition probability $p_{i,j}$ is not independent of $i$.

Finally, we verify that this is a time-homogeneous Markov chain. We write the model as $X_{n+1} = X_n + Z_{n+1}$, with $n \geq 0$, $Z_n \in \{+1, -1\}$ and
$$P(Z_{n+1} = 1 \mid X_n = i) = \frac{N - i}{N}, \qquad P(Z_{n+1} = -1 \mid X_n = i) = \frac{i}{N}.$$
Then it can be expressed recursively as $X_{n+1} = f(Z_{n+1}, g(X_0, Z_1, \dots, Z_n))$; that is to say, $X_n$ is a function of $X_0, Z_1, \dots, Z_n$. Furthermore, if we define $X_n := g(X_0, Z_1, \dots, Z_n)$, we obtain that $Z_{n+1}$ and $X_n$ are independent, and then we can apply Proposition 2.1.13, which implies that $\{X_n : n \geq 0\}$ is a time-homogeneous Markov chain.
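Both the inventory model and the Ehrenfest model are of the recursive form covered by Proposition 2.1.13, so they are immediate to simulate. A minimal sketch for the Ehrenfest chain follows (not from the project; the choice $N = 10$ and the initial state $X_0 = 0$ are assumptions for illustration):

```python
# Sketch: simulating the Ehrenfest chain X_{n+1} = X_n + Z_{n+1}, where
# P(Z_{n+1} = +1 | X_n = i) = (N - i)/N and P(Z_{n+1} = -1 | X_n = i) = i/N.
# N = 10 and X_0 = 0 are assumed for illustration.
import numpy as np

rng = np.random.default_rng(3)
N, x = 10, 0
path = [x]
for n in range(30):
    x += 1 if rng.random() < (N - x) / N else -1
    path.append(x)
print(path)   # the particle count drifts toward the balanced level N/2 = 5
```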

2.2 Chapman-Kolmogorov equation and n-step transition probabilities

In this section, we study the probability that a Markov chain is located in a certain state after $n$ steps. In addition, we work on the Chapman-Kolmogorov equation, which allows us to express this probability. Finally, we show some examples to assimilate this concept.

Definition 2.2.1. Let $\{X_n : n \geq 0\}$ be a time-homogeneous Markov chain. We define
$$p^{(m)}_{i,j} = P(X_{n+m} = j \mid X_n = i), \qquad n, m \geq 0, \; i, j \in I,$$
as the probability of arriving at the state $j$ in $m$ steps, starting from the state $i$; these are known as the $m$-step transition probabilities. Moreover, the $m$-step transition matrix is $\Pi_m = \left(p^{(m)}_{i,j} : i, j \in I\right)$.

Observation 2.2.2. If $m = 1$, then $p^{(1)}_{i,j} = P(X_{n+1} = j \mid X_n = i) = p_{i,j}$, which is the same case that we have already studied.

Proposition 2.2.3. Let $\{X_n : n \geq 0\}$ be a time-homogeneous Markov chain and $\Pi_m$ the $m$-step transition matrix. Then, $\Pi_m = \Pi^m = \Pi_{m-1}\Pi$.

Proof. We should prove that $p^{(m)}_{i,j} = \sum_{k \in I} p^{(m-1)}_{i,k}\, p_{k,j}$, for $m \geq 1$; that is to say, $\Pi_m = \Pi_{m-1}\Pi$. Therefore, we have
$$p^{(m)}_{i,j} = P(X_{n+m} = j \mid X_n = i) = \frac{P(X_{n+m} = j, X_n = i)}{P(X_n = i)} = \sum_{k \in I} \frac{P(X_{n+m} = j, X_{n+m-1} = k, X_n = i)}{P(X_n = i)}$$
$$= \sum_{k \in I} P(X_{n+m} = j \mid X_{n+m-1} = k, X_n = i)\, P(X_{n+m-1} = k \mid X_n = i) = \sum_{k \in I} p^{(m-1)}_{i,k}\, p_{k,j}.$$

Corollary 2.2.4. With the hypothesis of the previous proposition, we obtain that $\Pi_m$ is a transition matrix. Considering that $\Pi_{l+k} = \Pi_l \Pi_k$, with $l, k \geq 0$, we get
$$p^{(l+k)}_{i,j} = \sum_{h \in I} p^{(l)}_{i,h}\, p^{(k)}_{h,j},$$
which is known as the Chapman-Kolmogorov equation. If the Chapman-Kolmogorov equation is iterated, we get
$$p^{(m)}_{i,j} = \sum_{i_1 \in I} p_{i,i_1}\, p^{(m-1)}_{i_1,j} = \sum_{i_1, \dots, i_{m-1} \in I} p_{i,i_1}\, p_{i_1,i_2} \cdots p_{i_{m-1},j}.$$
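The Chapman-Kolmogorov equation is easy to check numerically, since $\Pi_m$ is just the $m$th matrix power. A small sketch follows (not from the project; the 3-state matrix is an arbitrary assumption):

```python
# Sketch: numerical check of Pi_{l+k} = Pi_l Pi_k (Corollary 2.2.4)
# for an arbitrary 3-state stochastic matrix.
import numpy as np

Pi = np.array([[0.5, 0.0, 0.5],
               [0.1, 0.6, 0.3],
               [1/3, 2/3, 0.0]])
l, k = 3, 4
lhs = np.linalg.matrix_power(Pi, l + k)
rhs = np.linalg.matrix_power(Pi, l) @ np.linalg.matrix_power(Pi, k)
print(np.allclose(lhs, rhs))   # True
print(lhs.sum(axis=1))         # every row of Pi_m still sums to 1
```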

The following result shows the law or distribution of $X_n$.

Proposition 2.2.5. Let $\{X_n : n \geq 0\}$ be a HMC$(\gamma, \Pi)$. Then, for all $k \in I$, we have
$$P(X_n = k) = \gamma^{(n)}_k,$$
where $\gamma^{(n)} = \gamma \Pi^n$ is the law of $X_n$.

Proof.
$$P(X_n = k) = \sum_{h \in I} P(X_n = k \mid X_0 = h)\, P(X_0 = h) = \sum_{h \in I} \gamma_h\, p^{(n)}_{h,k},$$
where $\gamma_h = P(X_0 = h)$.

Corollary 2.2.6. With the hypothesis of the previous proposition, we get
$$P(X_{l+k} = i) = \sum_{j \in I} P(X_{l+k} = i \mid X_l = j)\, P(X_l = j) = \sum_{j \in I} \gamma^{(l)}_j\, p^{(k)}_{j,i}.$$
In consequence, $\gamma^{(l+k)} = \gamma^{(l)} \Pi^k$ and
$$\gamma^{(n)}_j = P(X_n = j) = \sum_{h \in I} \gamma_h\, p^{(n)}_{h,j} = \sum_{h, i_1, \dots, i_{n-1} \in I} \gamma_h\, p_{h,i_1}\, p_{i_1,i_2} \cdots p_{i_{n-1},j}.$$

The proposition below studies the distributions of the process in finite dimension.

Proposition 2.2.7. The initial distribution $\gamma$ and the transition matrix $\Pi$ establish the law of the random vector $(X_{n_1}, \dots, X_{n_k})$, with $n_1 \leq \dots \leq n_k$.

Proof. To show this result, we use the general product rule. Firstly, we study the case $(X_0, \dots, X_k)$. So, we have
$$P(X_0 = i_0, \dots, X_{k-1} = i_{k-1}, X_k = i_k) = P(X_0 = i_0)\, P(X_1 = i_1 \mid X_0 = i_0) \cdots P(X_k = i_k \mid X_0 = i_0, \dots, X_{k-1} = i_{k-1}) = \gamma_{i_0}\, p_{i_0,i_1}\, p_{i_1,i_2} \cdots p_{i_{k-1},i_k}.$$
Now, in general, the law of the vector $(X_{n_1}, \dots, X_{n_k})$ with $n_1 < n_2 < \dots < n_k$ is
$$P(X_{n_1} = i_1, \dots, X_{n_{k-1}} = i_{k-1}, X_{n_k} = i_k) = \sum_{h \in I} \gamma_h\, p^{(n_1)}_{h,i_1}\, p^{(n_2 - n_1)}_{i_1,i_2} \cdots p^{(n_k - n_{k-1})}_{i_{k-1},i_k}.$$

To conclude this section, we study the probability $p^{(n)}_{1,1}$ in the following two important examples:

(i) The most general two-state chain, with $\alpha, \beta > 0$, has a transition matrix of the form
$$\Pi = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix},$$
and it is represented by the state diagram with two states, where state 1 jumps to state 2 with probability $\alpha$ and state 2 jumps to state 1 with probability $\beta$.

To get the value of $p^{(n)}_{1,1}$, we use the relation $\Pi^{n+1} = \Pi^n \Pi$. Then
$$p^{(n+1)}_{1,1} = p^{(n)}_{1,1}(1 - \alpha) + p^{(n)}_{1,2}\,\beta.$$
On the other hand, we know that $p^{(n)}_{1,1} + p^{(n)}_{1,2} = 1$. So we get
$$p^{(n+1)}_{1,1} = p^{(n)}_{1,1}(1 - \alpha - \beta) + \beta.$$
Therefore, we have the following recurrence relation for $p^{(n)}_{1,1}$:
$$\begin{cases} p^{(n+1)}_{1,1} = p^{(n)}_{1,1}(1 - \alpha - \beta) + \beta, \\ p^{(0)}_{1,1} = 1. \end{cases}$$
Solving the previous recurrence relation by induction, or using the geometric series, we obtain that it has a unique solution, which is
$$p^{(n)}_{1,1} = \frac{\beta}{\alpha + \beta} + \frac{\alpha}{\alpha + \beta}\,(1 - \alpha - \beta)^n.$$

Observation 2.2.8. The case $\alpha + \beta = 0$ is not possible because we have supposed that $\alpha, \beta > 0$.

(ii) Virus mutation. Suppose a virus can exist in $N$ different strains and in each generation it either stays the same, or with probability $\alpha$ it mutates to another strain, which is chosen at random. What is the probability that the strain in the $n$th generation is the same as the original one? We can model this process as a Markov chain with $N$ states, $\{1, \dots, N\}$, and a transition matrix $\Pi$ given by the transition probabilities
$$p_{i,j} = \begin{cases} 1 - \alpha, & \text{if } j = i, \\ \frac{\alpha}{N-1}, & \text{if } j \neq i. \end{cases}$$
Then, we need to compute $p^{(n)}_{1,1}$, and to get it we can assume that there are only two states: the initial strain (state 1) and all the remaining strains, which can be considered as one state (state 2). Therefore, we have a two-state time-homogeneous Markov chain as in the previous example, where
$$\beta = \frac{\alpha}{N-1}.$$
Finally, using the result obtained in the previous example, we get
$$p^{(n)}_{1,1} = \frac{\beta}{\alpha + \beta} + \frac{\alpha}{\alpha + \beta}\,(1 - \alpha - \beta)^n = \frac{1}{N} + \frac{N-1}{N}\left(1 - \frac{\alpha N}{N-1}\right)^n.$$

Observation 2.2.9. The last two examples we have studied are not difficult, because we have considered time-homogeneous Markov chains with two states and the matrices obtained are simple. However, when there are more states, we can get matrices whose powers can be difficult to compute.
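The closed form for $p^{(n)}_{1,1}$ can be sanity-checked against the $n$th matrix power; the same code covers the virus-mutation example by taking $\beta = \alpha/(N-1)$. A sketch with assumed values of $\alpha$, $\beta$ and $n$:

```python
# Sketch: checking p^{(n)}_{1,1} = beta/(alpha+beta) + alpha/(alpha+beta)*(1-alpha-beta)^n
# against the matrix power Pi^n; alpha, beta and n are assumed values.
import numpy as np

alpha, beta, n = 0.3, 0.6, 12
Pi = np.array([[1 - alpha, alpha],
               [beta, 1 - beta]])
closed_form = beta / (alpha + beta) + alpha / (alpha + beta) * (1 - alpha - beta) ** n
print(np.linalg.matrix_power(Pi, n)[0, 0], closed_form)   # the two values agree
```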

3 Class structure

In this chapter, we begin to classify Markov chains. Firstly, we define concepts such as states that communicate, and absorbing, essential and passing-through states. Moreover, we study when a Markov chain is irreducible or not, a notion related to the number of equivalence classes that the chain has. We also give some examples to clarify these ideas. Subsequently, we study the periodicity of the states, a notion that allows us to define the cyclic subclasses. We illustrate all of it with some examples as well. Finally, we study the concepts of absorption probabilities and hitting times, which are related to the evolution of the chain over time. Furthermore, we prove important results that involve the previous concepts and we show their application with examples.

Throughout this chapter we consider a probability space $(\Omega, \mathcal{A}, P)$ and a collection of discrete random variables $X_n : \Omega \to I$, where $I$ is a countable set and $\{X_n : n \geq 0\}$ is a HMC$(\gamma, \Pi)$ with $\Pi = (p_{i,j} : i, j \in I)$.

3.1 Communicated states. Closed sets. Irreducible chains

In this section we work with Markov chains by dividing them into small parts; in this way it is simpler to study them, and together they explain the global behaviour of the chain.

Definition 3.1.1. A state $j \in I$ is called accessible from a state $i \in I$ (written as $i \to j$) if there exists $n \geq 0$ such that
$$P(X_n = j \mid X_0 = i) = p^{(n)}_{i,j} > 0.$$

Definition 3.1.2. A state $i \in I$ communicates with a state $j \in I$ (written as $i \leftrightarrow j$) if both $i \to j$ and $j \to i$.

Proposition 3.1.3. Given two different states $i, j \in I$, the following conditions are equivalent:
(a) $i \to j$.
(b) There exist states $i_1, i_2, \dots, i_{n-1} \in I$ such that
$$p_{i,i_1}\, p_{i_1,i_2} \cdots p_{i_{n-1},j} > 0.$$

Proof. Considering that
$$p^{(n)}_{i,j} = \sum_{i_1, \dots, i_{n-1} \in I} p_{i,i_1}\, p_{i_1,i_2} \cdots p_{i_{n-1},j},$$
we obtain the equivalence above directly.

Observation 3.1.4. The communication property is an equivalence relation on the set of states $I$. It is therefore possible to make a partition of the set of states $I$ into equivalence classes.

Definition 3.1.5. $C$ is a closed class if, given $i \in C$ and $i \to j$, then $j \in C$. It means that it is impossible to leave the class $C$. Moreover, we have that $\sum_{j \in C} p_{i,j} = 1$, for all $i \in C$.

Definition 3.1.6. A state $i \in I$ is called absorbing if $\{i\}$ is a closed class; in other words, if $p_{i,i} = 1$.

Definition 3.1.7. A Markov chain is irreducible if there is only one equivalence class; that is to say, all the states communicate.

Definition 3.1.8. A state $i \in I$ is called inessential, or a passing-through state, if there exist $m \geq 1$ and $j \in I$ such that $p^{(m)}_{i,j} > 0$ but $p^{(n)}_{j,i} = 0$ for all $n \geq 1$. In other words, it is possible to leave the state $i$ in a way that makes it impossible to return to it. In addition, the complement of the set of inessential states is called the set of essential states; it is composed of the absorbing states and the states for which leaving is possible and returning is as well.

Examples:

(i) Random walk on $\mathbb{Z}$. As all the states communicate with one another, they are all essential and this chain has only one class, which is $\{\dots, -2, -1, 0, 1, 2, \dots\}$; so the chain is irreducible.

(ii) Random walk on $\mathbb{Z}$ with absorbing barriers. On one hand, as $0$ and $N$ are absorbing states, these states are essential. On the other hand, the states $1, \dots, N-1$ are passing-through states because it is possible to leave them (that is to say, to go to the absorbing states), but then returning is not possible. Therefore, $\{0\}$, $\{1, \dots, N-1\}$ and $\{N\}$ are the classes in this case, which means that the chain is not irreducible. Moreover, $\{0\}$ is a closed class because once you are there it is impossible to leave it; the same argument can be applied to the class $\{N\}$.

(iii) Consider a Markov chain on the states $\{1, \dots, 6\}$ whose transition matrix has positive entries exactly where the arrows of its state diagram indicate, as described next. As state 1 leads to states 2 and 3, and then the return to 1 is possible, state 1 is an essential state; the same happens with states 2 and 3. In addition, the states 5 and 6 communicate with one another, so they are also essential. On the other hand, state 4 is inessential because it is possible to leave it, but it is impossible to return to it. Therefore, $\{1, 2, 3\}$, $\{4\}$ and $\{5, 6\}$ are the classes in this case, which implies that the chain is not irreducible. Furthermore, $\{5, 6\}$ is a closed class because once you are in this class it is impossible to leave it.
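Communicating classes can be computed mechanically from $\Pi$: $j$ is accessible from $i$ exactly when the $(i, j)$ entry of $(I + \Pi)^{\mathrm{card}(I)-1}$ is positive, and intersecting accessibility with its transpose gives the relation $\leftrightarrow$. A sketch (not from the project), run on the absorbing-barrier walk of Example (ii) with $N = 3$:

```python
# Sketch: equivalence classes of the relation <-> computed from Pi.
import numpy as np

def communicating_classes(Pi):
    n = Pi.shape[0]
    reach = np.linalg.matrix_power(np.eye(n) + Pi, n - 1) > 0   # i -> j
    comm = reach & reach.T                                      # i <-> j
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n) if comm[i, j])
            classes.append(cls)
            seen.update(cls)
    return classes

# random walk with absorbing barriers, N = 3, p = q = 1/2 (states 0..3)
Pi = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.5, 0.0, 0.5, 0.0],
               [0.0, 0.5, 0.0, 0.5],
               [0.0, 0.0, 0.0, 1.0]])
print(communicating_classes(Pi))   # [[0], [1, 2], [3]]
```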

3.2 Periodic states. Cyclic classes. First classification of the states of a Markov chain

We illustrate this section with a preliminary example. Consider a random walk on $\mathbb{Z}$, with $p \in (0, 1)$. We can decompose $\mathbb{Z}$ into two disjoint subsets $C_0$ and $C_1$, which are the sets of even and odd numbers, respectively. So, in one step it is necessary to go from $C_0$ to $C_1$; moreover, in two steps we return again to the original set. Having this idea in mind, here we focus on this periodic behaviour.

Definition 3.2.1. Let $i \in I$ be an essential state. The period of the state $i \in I$ is defined as
$$d_i = \gcd\left\{n \geq 1 : p^{(n)}_{i,i} > 0\right\}.$$
Moreover, if $d_i = 1$, then the state $i \in I$ is called aperiodic.

Observation 3.2.2. The Chapman-Kolmogorov equation shows that if $p^{(n)}_{i,i} > 0$, then $p^{(kn)}_{i,i} > 0$ for all $k \geq 1$. This is due to
$$p^{(kn)}_{i,i} \geq \left(p^{(n)}_{i,i}\right)^k > 0.$$

In the following result we show that the period is an equivalence class property, with respect to the relation $\leftrightarrow$.

Proposition 3.2.3. Let us assume that the two states $i, j \in I$ communicate. Then, $d_i = d_j$.

Proof. Consider $p^{(n)}_{i,j} > 0$ and $p^{(m)}_{j,i} > 0$, with $n, m \geq 0$. Then, we have that
$$p^{(n + hk + m)}_{i,i} \geq p^{(n)}_{i,j} \left(p^{(k)}_{j,j}\right)^h p^{(m)}_{j,i}, \qquad h, k \geq 0.$$
This is because it is possible to go from the state $i$ back to the same state $i$ in $n + hk + m$ steps in the following way:
$$X_0 = i, \; X_n = j, \; X_{n+k} = j, \; \dots, \; X_{n+hk} = j, \; X_{n+hk+m} = i.$$
Therefore, for all $k$ such that $p^{(k)}_{j,j} > 0$, we have that $p^{(n+hk+m)}_{i,i} > 0$ for any $h$, so that $d_i \mid n + hk + m$ for all $h \geq 0$. We have that $d_i \mid n + m$ because $p^{(n)}_{i,j}\, p^{(m)}_{j,i} > 0$. Then, as we have seen that $d_i \mid n + hk + m$ for all $h$, it is necessary that $d_i \mid hk$ for all $h$. In particular, $d_i \mid k$ for all $k$ such that $p^{(k)}_{j,j} > 0$ and, consequently, we get that $d_i \mid d_j$. Using a similar argument we can obtain that $d_j \mid d_i$. In conclusion, $d_i = d_j$.

Proposition 3.2.4. Let $C$ be an equivalence class of the relation $\leftrightarrow$ with period $d$. If $i, j \in C$, then $p^{(r)}_{i,j} > 0$ and $p^{(s)}_{i,j} > 0$ imply that $r \equiv s \pmod{d}$.

Proof. We consider $t$ such that $p^{(t)}_{j,i} > 0$, which exists because $j \to i$. Then
$$p^{(r)}_{i,j}\, p^{(t)}_{j,i} > 0 \implies d \mid r + t, \qquad p^{(s)}_{i,j}\, p^{(t)}_{j,i} > 0 \implies d \mid s + t.$$
Therefore, $r \equiv s \pmod{d}$.

Now, let us assume that $p^{(r)}_{i,j} > 0$, with $r = ad + b$ and $0 \leq b \leq d - 1$. So, if $p^{(s)}_{i,j} > 0$, then $s = cd + b$ for some $c$ (we have that $r \equiv s \pmod{d}$ as well). In other words, each pair of states $i, j \in C$ determines an element $b \in \mathbb{Z}/d\mathbb{Z}$ such that all the paths from $i$ to $j$ with positive probability have length $b$ modulo $d$. Thus, if $i \in C$, we can define
$$C_0 = \left\{j \in C : p^{(n)}_{i,j} > 0 \implies n \equiv 0 \pmod{d}\right\}$$
$$C_1 = \left\{j \in C : p^{(n)}_{i,j} > 0 \implies n \equiv 1 \pmod{d}\right\}$$
$$\vdots$$
$$C_{d-1} = \left\{j \in C : p^{(n)}_{i,j} > 0 \implies n \equiv d - 1 \pmod{d}\right\}$$
And we have that $C = \bigcup_{b=0}^{d-1} C_b$.

Definition 3.2.5. The sets $C_0, C_1, \dots, C_{d-1}$ are called cyclic subclasses.

Proposition 3.2.6. Consider $j \in C_b$, with $p_{j,k} > 0$. Then $k \in C_{b+1}$, with $C_d = C_0$. It means that, beginning in $i \in C_0$, the process moves from the states of the subclass $C_b$ to the states of the subclass $C_{b+1}$ in one step.

Proof. We consider $n$ such that $p^{(n)}_{i,j} > 0$. So, we have that
$$p^{(n+1)}_{i,k} \geq p^{(n)}_{i,j}\, p_{j,k} > 0.$$
Moreover, as $n \equiv b \pmod{d}$, then $n + 1 \equiv b + 1 \pmod{d}$, so that $k \in C_{b+1}$.

Examples:

(i) Random walk on $\mathbb{Z}$. Assuming that $X_0 = 0$, we have previously seen that this Markov chain is irreducible because it has only one class, which is $\{\dots, -2, -1, 0, 1, 2, \dots\}$, with period $d = 2$ because, for example, starting in state 0, the return to this state is only possible at stages $2, 4, \dots$ In addition, the cyclic subclasses are
$$C_0 = \left\{j \in \mathbb{Z} : p^{(n)}_{0,j} > 0 \implies n \equiv 0 \pmod 2\right\} = \{0, \pm 2, \pm 4, \dots\}$$
$$C_1 = \left\{j \in \mathbb{Z} : p^{(n)}_{0,j} > 0 \implies n \equiv 1 \pmod 2\right\} = \{\pm 1, \pm 3, \pm 5, \dots\}$$

(ii) Random walk on $\mathbb{Z}$ with absorbing barriers. Earlier in this chapter we showed that this Markov chain is not irreducible, because it has three classes: $\{0\}$, $\{1, \dots, N-1\}$ and $\{N\}$. On one hand, the absorbing states in this example are $0$ and $N$; therefore, they are aperiodic because, beginning in state $0$, at each stage it is possible to come back to it; the same argument is valid for state $N$. On the other hand, $\{1, \dots, N-1\}$ is a class with period $d = 2$ because, for example, starting in state 1, the return to this state is only possible at stages $2, 4, \dots$

Moreover, the cyclic subclasses are
$$C_0 = \left\{j \in \{1, \dots, N-1\} : p^{(n)}_{1,j} > 0 \implies n \equiv 0 \pmod 2\right\} = \{1, 3, 5, \dots\}$$
$$C_1 = \left\{j \in \{1, \dots, N-1\} : p^{(n)}_{1,j} > 0 \implies n \equiv 1 \pmod 2\right\} = \{2, 4, 6, \dots\}$$

Observation 3.2.7. Previously we defined the period only for an essential state, and using that notion we also defined the cyclic subclasses. But in the example above we study both concepts when the states involved are inessential as well, in order to understand their definitions better, provided that both of them make sense; that is to say, provided that $p^{(n)}_{i,i} > 0$ for some $n \geq 1$, for $i \in I$.

(iii) Consider the HMC with state space $I = \{1, 2, 3, 4\}$ whose transition matrix allows transitions from each of the states 1 and 2 only to the states 3 and 4, and from each of the states 3 and 4 only to the states 1 and 2, with all states communicating. This Markov chain is irreducible because it has only one class, which is $\{1, 2, 3, 4\}$. Thus, there is only one equivalence class, whose period is the period of any of its elements. Moreover, all the states have period $d = 2$ because, for instance, starting in state 1, the return to this state is only possible at stages $2, 4, 6, \dots$ Furthermore, the cyclic subclasses are
$$C_0 = \left\{j \in \{1, 2, 3, 4\} : p^{(n)}_{1,j} > 0 \implies n \equiv 0 \pmod 2\right\} = \{1, 2\}$$
$$C_1 = \left\{j \in \{1, 2, 3, 4\} : p^{(n)}_{1,j} > 0 \implies n \equiv 1 \pmod 2\right\} = \{3, 4\}$$

Observation 3.2.8. The description of the cyclic subclasses depends on the state that we take as a reference. For instance, in the previous example, if we take the state $i = 3$ as a reference instead of the state $i = 1$, we obtain that the cyclic subclasses are
$$C_0 = \left\{j \in \{1, 2, 3, 4\} : p^{(n)}_{3,j} > 0 \implies n \equiv 0 \pmod 2\right\} = \{3, 4\}$$
$$C_1 = \left\{j \in \{1, 2, 3, 4\} : p^{(n)}_{3,j} > 0 \implies n \equiv 1 \pmod 2\right\} = \{1, 2\}$$
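Definition 3.2.1 also suggests a direct computation of the period: collect the times $n$ with $p^{(n)}_{i,i} > 0$ up to some horizon and take their greatest common divisor. The sketch below (not from the project) uses a 4-state matrix whose entries are an assumption consistent with Example (iii), i.e. with cyclic subclasses $C_0 = \{1, 2\}$ and $C_1 = \{3, 4\}$; for a finite chain a horizon of a few multiples of $\mathrm{card}(I)$ already reveals the period.

```python
# Sketch: period of state i as gcd{ n >= 1 : p^(n)_{i,i} > 0 }, up to a finite horizon.
import numpy as np
from math import gcd
from functools import reduce

def period(Pi, i, horizon=None):
    n_states = Pi.shape[0]
    horizon = horizon or 3 * n_states     # heuristic cut-off (assumption)
    P, return_times = np.eye(n_states), []
    for n in range(1, horizon + 1):
        P = P @ Pi
        if P[i, i] > 0:
            return_times.append(n)
    return reduce(gcd, return_times) if return_times else 0

# assumed matrix alternating between {1, 2} and {3, 4} (indices 0..3)
Pi = np.array([[0.0, 0.0, 0.5, 0.5],
               [0.0, 0.0, 0.5, 0.5],
               [0.5, 0.5, 0.0, 0.0],
               [0.5, 0.5, 0.0, 0.0]])
print(period(Pi, 0))   # 2
```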

3.3 Hitting times and absorption probabilities

In this section we study the moment when the Markov chain enters a subset of the state space. In addition, we analyse the time required to arrive at this subset.

Definition 3.3.1. Let $\{X_n : n \geq 0\}$ be a HMC with a finite set of states $I$ and with transition matrix $\Pi$. The hitting time of a subset $A$ of $I$ is the random variable $H^A : \Omega \to \mathbb{N} \cup \{\infty\}$ given by
$$H^A(\omega) = \inf\{n \geq 0 : X_n(\omega) \in A\},$$
where we agree that the infimum of the empty set is $\infty$. Furthermore, the probability, starting from the state $i$, that $X_n$ gets into $A$ is then
$$\lambda^A_i = P(H^A < \infty \mid X_0 = i).$$
When $A$ is a closed set, $\lambda^A_i$ is called an absorption probability.

Observation 3.3.2. If $A$ is a closed set and $i \in A$, then $\lambda^A_i = 1$. If $A$ and $B$ are closed sets, $A \cap B = \emptyset$ and $i \in B$, then $\lambda^A_i = 0$.

Definition 3.3.3. The mean time that $\{X_n : n \geq 0\}$ needs to reach $A$ is called the mean absorption time, and it is given by
$$m^A_i = E_i(H^A) = \sum_{n < \infty} n\, P(H^A = n \mid X_0 = i) + \infty \cdot P(H^A = \infty \mid X_0 = i).$$

Observation 3.3.4. In the previous definition, $E_i$ means expectation with respect to $P_i = P(\,\cdot \mid X_0 = i)$.

From now on, we study how to compute the value of the absorption probabilities and the mean absorption times in different cases.

Theorem 3.3.5. Let $A$ be a closed class and $\{\lambda^A_i : i \in I\}$ the absorption probabilities. Then, $\{\lambda^A_i : i \in I\}$ are a solution to the system of linear equations
$$\begin{cases} \lambda^A_i = 1, & \text{if } i \in A, \\ \lambda^A_i = \sum_{j \in I} p_{i,j}\, \lambda^A_j, & \text{if } i \notin A. \end{cases}$$

Proof. On one hand, if $X_0 = i \in A$, then $H^A = 0$, so that $\lambda^A_i = 1$. On the other hand, if $X_0 = i \notin A$, then $H^A \geq 1$. Using the Markov property, we have
$$P(H^A < \infty \mid X_0 = i, X_1 = j) = P(H^A < \infty \mid X_1 = j) = \lambda^A_j.$$
Therefore
$$\lambda^A_i = P(H^A < \infty \mid X_0 = i) = \sum_{j \in I} \frac{P(H^A < \infty, X_0 = i, X_1 = j)}{P(X_0 = i)} = \sum_{j \in I} P(H^A < \infty \mid X_0 = i, X_1 = j)\, \frac{P(X_0 = i, X_1 = j)}{P(X_0 = i)} = \sum_{j \in I} p_{i,j}\, \lambda^A_j.$$

Observation 3.3.6. We have proved that the absorption probabilities are a solution to the previous system of linear equations, but this solution need not be unique. What can be shown is that they are the minimal non-negative solution; it means that if $\{x_i : i \in I\}$ is another solution with $x_i \geq 0$ for all $i$, then $x_i \geq \lambda^A_i$ for all $i$.

Observation 3.3.7. For the states $i \notin A$, we can write the absorption probabilities as
$$\lambda^A_i = \sum_{j \in A} p_{i,j} + \sum_{j \notin A} p_{i,j}\, \lambda^A_j,$$
where $\sum_{j \in A} p_{i,j}$ is the probability of reaching $A$ from the state $i$ in one step.

Example: In this example we work on a version of the Gambler's Ruin Problem, which we already studied in the previous chapter. Consider the Markov chain on $\{0, 1, 2, \dots\}$, with $q = 1 - p \in (0, 1)$, with transition matrix
$$\Pi = \begin{pmatrix} 1 & 0 & & & \\ q & 0 & p & & \\ & q & 0 & p & \\ & & \ddots & \ddots & \ddots \end{pmatrix}$$
Its state diagram is the chain of states $0, 1, 2, 3, \dots$, where each state $i \geq 1$ moves one step to the right with probability $p$ and one step to the left with probability $q$, and state $0$ is absorbing.

Suppose that you go to a casino with $i$ euros and you gamble 1 euro in each play, so that you can win 1 euro with probability $p$ and you can lose 1 euro with probability $q = 1 - p$. If we want to study the probability of getting ruined, we are talking about the absorption probabilities of the state 0, because $\{0\}$ is the only absorbing class that the chain has. So we write $\lambda_i = \lambda^{\{0\}}_i$, for $i = 0, 1, \dots$ Then, using the previous theorem, we have
$$\begin{cases} \lambda_0 = 1, \\ \lambda_i = p\,\lambda_{i+1} + q\,\lambda_{i-1}, & \text{if } i \geq 1. \end{cases}$$
Solving the previous recurrence, we get the following. If $p \neq q$, then $\lambda_i = A + B\left(\frac{q}{p}\right)^i$, with $A + B = 1$. On one hand, if $p < q$, then $\lambda_i = 1$ for all $i$. On the other hand, if $p > q$, then
$$\lambda_i = \left(\frac{q}{p}\right)^i, \qquad i \geq 0.$$
If $p = q$, then $\lambda_i = 1$ for all $i$.

Theorem 3.3.8. Let $A$ be a closed class and $\{m^A_i : i \in I\}$ the mean absorption times. Then, $\{m^A_i : i \in I\}$ are a solution to the system of linear equations
$$\begin{cases} m^A_i = 0, & \text{if } i \in A, \\ m^A_i = 1 + \sum_{j \notin A} p_{i,j}\, m^A_j, & \text{if } i \notin A. \end{cases}$$

Proof. On one hand, if $X_0 = i \in A$, then $H^A = 0$; so that $m^A_i = 0$. On the other hand, if $X_0 = i \notin A$, then $H^A \geq 1$. Using the Markov property, for $n \geq 1$ we have
$$P(H^A = n, X_1 = j \mid X_0 = i) = \frac{P(H^A = n, X_1 = j, X_0 = i)}{P(X_0 = i)} = P(H^A = n \mid X_0 = i, X_1 = j)\, p_{i,j} = P(H^A = n - 1 \mid X_0 = j)\, p_{i,j}.$$
The case where $P(H^A = \infty \mid X_0 = i) > 0$ is immediate, because then we get $m^A_i = \infty$ for $i \notin A$, and the equality $m^A_i = 1 + \sum_{j \notin A} p_{i,j}\, m^A_j$ is satisfied. Therefore, we focus on the case where $P(H^A = \infty \mid X_0 = i) = 0$, and we have
$$m^A_i = E_i(H^A) = \sum_{n=1}^{\infty} n\, P(H^A = n \mid X_0 = i)$$
$$= \sum_{n=1}^{\infty} \left[(n-1)\, P(H^A = n \mid X_0 = i) + P(H^A = n \mid X_0 = i)\right]$$
$$= 1 + \sum_{n=1}^{\infty} (n-1)\, P(H^A = n \mid X_0 = i)$$
$$= 1 + \sum_{j \in I} \sum_{n=1}^{\infty} (n-1)\, P(H^A = n - 1 \mid X_0 = j)\, p_{i,j} = 1 + \sum_{j \notin A} p_{i,j}\, m^A_j.$$

Observation 3.3.9. We have proved that the mean absorption times are a solution to the previous system of linear equations, but this solution need not be unique. What can be shown is that they are the minimal non-negative solution; it means that if $\{x_i : i \in I\}$ is another solution with $x_i \geq 0$ for all $i$, then $x_i \geq m^A_i$ for all $i$.

Example: Consider the chain on the states $\{1, 2, 3, 4\}$ with transition matrix
$$\Pi = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
The state diagram is the chain $1 - 2 - 3 - 4$, where states 2 and 3 move one step to the left or to the right with probability $1/2$ each, and states 1 and 4 are absorbing.

Starting from state 2, what is the probability of absorption in state 4? We compute all the absorption probabilities in state 4. We write $\lambda_i = \lambda^{\{4\}}_i = P(X_n \text{ finishes in } 4 \mid X_0 = i)$, for $i = 1, 2, 3, 4$. Then
$$\begin{cases} \lambda_1 = 0, \\ \lambda_2 = \frac{1}{2}\lambda_1 + \frac{1}{2}\lambda_3, \\ \lambda_3 = \frac{1}{2}\lambda_2 + \frac{1}{2}\lambda_4, \\ \lambda_4 = 1, \end{cases} \qquad \Longrightarrow \qquad \begin{cases} \lambda_1 = 0, \\ \lambda_2 = \frac{1}{3}, \\ \lambda_3 = \frac{2}{3}, \\ \lambda_4 = 1. \end{cases}$$
Therefore, starting from state 2, the probability of absorption in state 4 is $\lambda_2 = \frac{1}{3}$.

Starting from state 2, how many stages are necessary on average until the chain is absorbed in state 1 or in state 4? We compute all the mean absorption times in states 1 or 4. We write $m_i = m^{\{1,4\}}_i$, for $i = 1, 2, 3, 4$. Then
$$\begin{cases} m_1 = 0, \\ m_2 = 1 + \frac{1}{2}m_1 + \frac{1}{2}m_3, \\ m_3 = 1 + \frac{1}{2}m_2 + \frac{1}{2}m_4, \\ m_4 = 0, \end{cases} \qquad \Longrightarrow \qquad \begin{cases} m_1 = 0, \\ m_2 = 2, \\ m_3 = 2, \\ m_4 = 0. \end{cases}$$
Therefore, starting from state 2, on average $m_2 = 2$ stages are necessary until the chain is absorbed in state 1 or in state 4.
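Both systems in this example are ordinary linear solves once the boundary values are fixed ($\lambda = 1$ on $A$, $\lambda = 0$ on closed classes disjoint from $A$ by Observation 3.3.2, and $m = 0$ on $A$). A sketch for the chain above (not from the project), with indices 0 to 3 standing for the states 1 to 4:

```python
# Sketch: absorption probabilities (Theorem 3.3.5) and mean absorption
# times (Theorem 3.3.8) for the 4-state example, solved with numpy.
import numpy as np

Pi = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.5, 0.0, 0.5, 0.0],
               [0.0, 0.5, 0.0, 0.5],
               [0.0, 0.0, 0.0, 1.0]])
inner = [1, 2]                        # the non-absorbed states 2 and 3
Q = Pi[np.ix_(inner, inner)]          # transitions among the inner states
I = np.eye(len(inner))

# absorption in A = {4}: boundary values lambda_1 = 0 and lambda_4 = 1
lam = np.linalg.solve(I - Q, Pi[inner, 3])
print(lam)                            # [0.3333..., 0.6666...] = [1/3, 2/3]

# mean absorption time in A = {1, 4}: m_i = 1 + sum_{j not in A} p_{i,j} m_j
m = np.linalg.solve(I - Q, np.ones(len(inner)))
print(m)                              # [2., 2.]
```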

4 Recurrent and transient states

In this chapter, we initially study the concepts of recurrent and transient states. Afterwards, we work on the notion of the first instant at which the chain is located in some state, and also on the time of the $r$th passage. Subsequently, the number of times that the chain visits a certain state is an important idea as well; in fact, this concept and the stopping times help us to define what a return probability is. After that, we show rules and criteria to handle this notion and, finally, we provide some examples for a complete understanding of them.

Throughout this chapter we consider a probability space $(\Omega, \mathcal{A}, P)$ and a collection of discrete random variables $X_n : \Omega \to I$, where $I$ is a countable set and $\{X_n : n \geq 0\}$ is a HMC$(\gamma, \Pi)$ with $\Pi = (p_{i,j} : i, j \in I)$.

4.1 Definitions

In this section, we study whether, starting in some state, we keep returning to it or not. In addition, we work on the time that the chain takes to revisit a certain state.

Definition 4.1.1. The state $i \in I$ is recurrent or persistent if
$$P_i(X_n = i \text{ for infinitely many } n) = 1.$$
The state $i \in I$ is transient if
$$P_i(X_n = i \text{ for infinitely many } n) = 0.$$
That is to say, one always comes back to a recurrent state, while one eventually stops returning to a transient state.

Observation 4.1.2. An absorbing state is recurrent.

Definition 4.1.3. The first instant at which the chain is located in the state $i \in I$ is defined as
$$T_i(\omega) = \inf\{n \geq 1 : X_n(\omega) = i\}, \qquad \omega \in \Omega,$$
where we agree that the infimum of the empty set is $\infty$. Moreover, the $r$th instant at which the chain is located in the state $i \in I$ can be defined by using the following recurrence:
$$T^{(0)}_i(\omega) = 0, \qquad T^{(1)}_i(\omega) = T_i(\omega), \qquad T^{(r+1)}_i(\omega) = \inf\left\{n \geq T^{(r)}_i(\omega) + 1 : X_n(\omega) = i\right\}.$$
Furthermore, we can define the time of the $r$th excursion as
$$S^{(r)}_i = \begin{cases} T^{(r)}_i - T^{(r-1)}_i = \inf\left\{n \geq 1 : X_{T^{(r-1)}_i + n} = i\right\}, & \text{if } T^{(r-1)}_i < \infty, \\ \infty, & \text{otherwise}. \end{cases}$$

Taking into account the previous concepts and the definition of stopping times given in Chapter 2, we get the following result.

Observation 4.1.4. For $k \geq 1$, the random variables $T^{(k)}_i$ are stopping times, because $\left\{T^{(k)}_i = n\right\}$ means that $X_n(\omega) = i$ and that, before this, there are exactly $k - 1$ visits to $i$; so there is only dependence on $X_0, \dots, X_n$.

Now, we show an interesting lemma related to the $r$th instant at which the chain is located in a certain state, and to the time of the $r$th excursion as well. In its proof, we use stopping times and the strong Markov property, both ideas that have been seen previously.

Lemma 4.1.5. For $r = 2, 3, \dots$, conditional on $T^{(r-1)}_i < \infty$, $S^{(r)}_i$ is independent of $\left\{X_m : m \leq T^{(r-1)}_i\right\}$. Furthermore,
$$P\left(S^{(r)}_i = n \mid T^{(r-1)}_i < \infty\right) = P_i(T_i = n).$$

Proof. Applying the strong Markov property to the stopping time $T = T^{(r-1)}_i$, and using that if $T < \infty$ then $X_T = i$, we have that $\{X_{T+n} : n \geq 0\}$ is a HMC$(\delta_i, \Pi)$ conditional on $T^{(r-1)}_i < \infty$, and it is also independent of $X_0, \dots, X_T$. Moreover,
$$S^{(r)}_i = \inf\{n \geq 1 : X_{T+n} = i\}$$
is the first time that the chain $\{X_{T+n} : n \geq 0\}$ is located in the state $i$; so that
$$P\left(S^{(r)}_i = n \mid T^{(r-1)}_i < \infty\right) = P(T_i = n \mid X_0 = i) = P_i(T_i = n).$$

Definition 4.1.6. The number of times that the chain is located in the state $i$ is defined as
$$N_i(\omega) = \#\{n \geq 0 : X_n(\omega) = i\} = \sum_{n=0}^{\infty} \mathbf{1}_{\{X_n = i\}}.$$

Observation 4.1.7. Considering the last definition, we have
$$E_j(N_i) = E_j\left(\sum_{n=0}^{\infty} \mathbf{1}_{\{X_n = i\}}\right) = \sum_{n=0}^{\infty} E_j\left(\mathbf{1}_{\{X_n = i\}}\right) = \sum_{n=0}^{\infty} P(X_n = i \mid X_0 = j) = \sum_{n=0}^{\infty} p^{(n)}_{j,i}.$$
In particular, if $j = i$, we get that $E_i(N_i) = \sum_{n=0}^{\infty} p^{(n)}_{i,i}$.

Now, we study the probability that the chain returns to a certain state, a concept related to the first instant at which the chain is located in the state considered.

Definition 4.1.8. The return probability is defined as
$$\rho_{i,j} = P(T_j < \infty \mid X_0 = i), \qquad \rho_{i,i} = P(T_i < \infty \mid X_0 = i).$$

We finish this section with an important result that involves the previous definition and the notion of the number of times that the chain is located in a certain state.

Lemma 4.1.9. For $r \geq 1$ and $i \neq j$, we have that
$$P(N_j \geq r \mid X_0 = i) = \rho_{i,j}\, \rho_{j,j}^{\,r-1}.$$

Proof. Note that if $X_0 = i \neq j$, then $\{N_j \geq r\} = \left\{T^{(r)}_j < \infty\right\}$. We prove it by induction. Firstly, we study the case $r = 1$:
$$P(N_j \geq 1 \mid X_0 = i) = P(T_j < \infty \mid X_0 = i) = \rho_{i,j}.$$
Now we need to prove that if the statement holds for $r$, then it also holds for $r + 1$. Using the induction hypothesis, which is
$$P(N_j \geq r \mid X_0 = i) = \rho_{i,j}\, \rho_{j,j}^{\,r-1},$$
as well as Lemma 4.1.5, we obtain
$$P(N_j \geq r + 1 \mid X_0 = i) = P\left(T^{(r+1)}_j < \infty \mid X_0 = i\right) = P\left(T^{(r)}_j < \infty, S^{(r+1)}_j < \infty \mid X_0 = i\right)$$
$$= \frac{P\left(T^{(r)}_j < \infty, S^{(r+1)}_j < \infty, X_0 = i\right)}{P(X_0 = i)} = P\left(S^{(r+1)}_j < \infty \mid T^{(r)}_j < \infty, X_0 = i\right) P\left(T^{(r)}_j < \infty \mid X_0 = i\right)$$
$$= \left[\sum_{n=1}^{\infty} P\left(S^{(r+1)}_j = n \mid T^{(r)}_j < \infty\right)\right] P(N_j \geq r \mid X_0 = i) = \left[\sum_{n=1}^{\infty} P_j(T_j = n)\right] \rho_{i,j}\, \rho_{j,j}^{\,r-1}$$
$$= P(T_j < \infty \mid X_0 = j)\, \rho_{i,j}\, \rho_{j,j}^{\,r-1} = \rho_{i,j}\, \rho_{j,j}^{\,r}.$$

Observation 4.1.10. In particular, taking the initial state equal to the state considered (so that $N_i$ also counts the visit at time 0), we have
$$P(N_i \geq 1 \mid X_0 = i) = 1, \qquad P(N_i \geq r + 1 \mid X_0 = i) = \rho_{i,i}^{\,r}, \quad r \geq 0.$$
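Rule-level consequences of these definitions are proved in the next section, but Observation 4.1.10 can already be checked by plain simulation. A hedged sketch (not from the project; the 3-state matrix is an assumption in which state 0 is transient, with $\rho_{0,0} = p_{0,0} = 0.5$ because $\{1, 2\}$ is closed):

```python
# Sketch: Monte Carlo check of P(N_i >= r + 1 | X_0 = i) = rho_{i,i}^r
# (Observation 4.1.10) on an assumed 3-state chain; state 0 is transient.
import numpy as np

rng = np.random.default_rng(2)
Pi = np.array([[0.5, 0.5, 0.0],   # from 0: stay, or leave for the closed class {1, 2}
               [0.0, 0.3, 0.7],
               [0.0, 0.6, 0.4]])
trials, horizon, i = 5000, 100, 0
visits = np.empty(trials)
for t in range(trials):
    x, n_visits = i, 1            # N_i counts the visit at time 0
    for _ in range(horizon):
        x = rng.choice(3, p=Pi[x])
        n_visits += x == i
    visits[t] = n_visits
rho = 0.5                         # here rho_{0,0} = p_{0,0}
for r in range(4):
    print(r, (visits >= r + 1).mean(), rho ** r)   # the two columns agree
```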

4.2 Rules

In this section we study criteria and rules related to recurrent states and transient states. Furthermore, we work on the decomposition of a chain by using the first time that it is located in a certain state.

Theorem 4.2.1.
(a) If $\rho_{i,i} = P(T_i < \infty \mid X_0 = i) = 1$, then the state $i$ is recurrent. In addition, $\sum_{n=0}^{\infty} p^{(n)}_{i,i} = \infty$ and $E_i(N_i) = \infty$ as well.
(b) If $\rho_{i,i} = P(T_i < \infty \mid X_0 = i) < 1$, then the state $i$ is transient. Moreover,
$$\sum_{n=0}^{\infty} p^{(n)}_{i,i} < \infty \qquad \text{and} \qquad E_i(N_i) = \frac{1}{1 - \rho_{i,i}}.$$

Proof. On one hand, we suppose that $\rho_{i,i} = 1$. Using the continuity property of probability (see Appendix A), we get
$$P(N_i = \infty \mid X_0 = i) = \lim_{r \to \infty} P(N_i \geq r + 1 \mid X_0 = i) = \lim_{r \to \infty} \rho_{i,i}^{\,r} = \lim_{r \to \infty} 1 = 1.$$
Therefore, $i$ is recurrent. Furthermore, we have
$$\sum_{n=0}^{\infty} p^{(n)}_{i,i} = E_i(N_i) = \infty.$$
On the other hand, consider $\rho_{i,i} < 1$. Using that $E(X) = \sum_{l=1}^{\infty} P(X \geq l)$ for a random variable $X$ with values in $\{0, 1, 2, \dots\}$, we obtain
$$\sum_{n=0}^{\infty} p^{(n)}_{i,i} = E_i(N_i) = \sum_{l=1}^{\infty} P(N_i \geq l \mid X_0 = i) = \sum_{l=0}^{\infty} \rho_{i,i}^{\,l} = \frac{1}{1 - \rho_{i,i}} < \infty.$$
In consequence, $P(N_i = \infty \mid X_0 = i) = 0$ and $i$ is transient.

Observation 4.2.2. Using the definitions of recurrent state and transient state, we note that the implication to the left in the previous theorem is also satisfied; that is to say:
- $\rho_{i,i} = P(T_i < \infty \mid X_0 = i) = 1$ if, and only if, the state $i$ is recurrent; in that case, $\sum_{n=0}^{\infty} p^{(n)}_{i,i} = \infty$ and $E_i(N_i) = \infty$ as well.
- $\rho_{i,i} = P(T_i < \infty \mid X_0 = i) < 1$ if, and only if, the state $i$ is transient; in that case, $\sum_{n=0}^{\infty} p^{(n)}_{i,i} < \infty$ and $E_i(N_i) = \frac{1}{1 - \rho_{i,i}}$.

The next result shows that being recurrent or transient is a class property.

Theorem 4.2.3. Let $C$ be a class. Then, all the states in $C$ are recurrent, or all of them are transient.

Proof. Consider $i, j \in C$ and suppose that $i$ is transient; as the states $i$ and $j$ belong to the same class, there exist $n, m \geq 0$ such that $p^{(n)}_{i,j} > 0$ and $p^{(m)}_{j,i} > 0$. Therefore, for all $r \geq 0$, we have
$$p^{(n+r+m)}_{i,i} \geq p^{(n)}_{i,j}\, p^{(r)}_{j,j}\, p^{(m)}_{j,i}, \qquad \text{that is,} \qquad p^{(r)}_{j,j} \leq \frac{p^{(n+r+m)}_{i,i}}{p^{(n)}_{i,j}\, p^{(m)}_{j,i}}.$$
Thus
$$\sum_{r=0}^{\infty} p^{(r)}_{j,j} \leq \frac{1}{p^{(n)}_{i,j}\, p^{(m)}_{j,i}} \sum_{r=0}^{\infty} p^{(n+r+m)}_{i,i} < \infty,$$
so $j$ is transient. Similarly, considering $i, j \in C$ and supposing that $i$ is recurrent, we can obtain
$$\sum_{r=0}^{\infty} p^{(r)}_{j,j} = \infty,$$
so that $j$ is recurrent.

Theorem 4.2.4. If $C$ is a recurrent class, then $C$ is a closed class.

Proof. Assume that $C$ is not a closed class. In other words, there exist $i \in C$, $j \notin C$ and $m \geq 1$ such that $p^{(m)}_{i,j} > 0$. We also have that
$$P_i(\{X_m = j\} \cap \{X_n = i \text{ for infinitely many } n\}) = 0;$$
it means that if you leave the class $C$ through $j$, then it is impossible to return to it. Therefore
$$P_i(X_n = i \text{ for infinitely many } n) < 1.$$
So $i$ is not recurrent, that is to say, $C$ is not recurrent, which stands in contradiction to the previous assumption.

Theorem 4.2.5. If $C$ is a finite closed class, then $C$ is recurrent.

Proof. Suppose that $C$ is a finite closed class, and consider that the Markov chain $\{X_n : n \geq 0\}$ starts in $C$, for instance in a state $j \in C$. Then, since $C$ is finite and cannot be left, there exists $i \in C$ such that
$$P_j(X_n = i \text{ for infinitely many } n) > 0.$$
On the other hand, the Markov property implies that
$$P_j(X_n = i \text{ for infinitely many } n) = P_j(X_n = i \text{ for some } n)\, P_i(X_n = i \text{ for infinitely many } n).$$
As $P_i(X_n = i \text{ for infinitely many } n) > 0$, $i$ is not transient; that is to say, $i$ is recurrent. Thus, $C$ is recurrent.

Observation 4.2.6.
- All finite Markov chains have at least one recurrent state.
- In all finite Markov chains, starting in a state $i$ it is possible, at least, to reach a recurrent state.
- If there exist two states $i, j \in I$ such that $i \to j$ but not $j \to i$, then $i$ is transient.
- If $I$ is finite, then $j$ is transient if, and only if, there exists a state $i$ such that $j \to i$ but not $i \to j$.

4.3 Examples

In this section we show some interesting examples in order to understand better the previous concepts of this chapter.

(i) Random walk on $\mathbb{Z}$. We have already shown that there is only one class in this case, but we do not know yet whether this chain is made up of recurrent states or of transient states. We had
$$S_0 = X_0 = 0, \qquad S_n = \sum_{i=1}^{n} X_i : \Omega \to \{2k - n : k = 0, 1, \dots, n\}, \quad n \geq 1.$$
So we get
$$P(S_n = 2k - n) = \binom{n}{k} p^k (1 - p)^{n-k}.$$
If we define $h = 2k - n$, then we obtain
$$P(S_n = h) = \binom{n}{\frac{n+h}{2}}\, p^{\frac{n+h}{2}} (1 - p)^{\frac{n-h}{2}}.$$
Now, we study the summability of the series $\sum_n p^{(n)}_{0,0}$:
$$p^{(n)}_{0,0} = \begin{cases} 0, & \text{if } n \text{ is odd}, \\ \binom{n}{n/2}\, p^{n/2} (1 - p)^{n/2}, & \text{if } n \text{ is even}. \end{cases}$$
Therefore, considering only the even numbers, we have
$$\sum_{n=1}^{\infty} p^{(2n)}_{0,0} = \sum_{n=1}^{\infty} \binom{2n}{n} p^n (1 - p)^n.$$
Using the Stirling formula, which is
$$n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \quad \text{as } n \text{ approaches infinity},$$
and writing $q = 1 - p$, we obtain
$$\binom{2n}{n} p^n q^n \sim \frac{(4pq)^n}{\sqrt{\pi n}}.$$
In particular, we study the case where $p = q = \frac{1}{2}$:
$$p^{(2n)}_{0,0} = \binom{2n}{n}\left(\frac{1}{2}\right)^{2n} \sim \frac{1}{\sqrt{\pi n}}.$$
So we get
$$\sum_{n=1}^{\infty} p^{(2n)}_{0,0} = \sum_{n=1}^{\infty} \frac{1}{\sqrt{\pi n}} = +\infty,$$
because we have obtained a $p$-series with exponent $\frac{1}{2} \leq 1$, which diverges. Therefore, using Observation 4.2.2, the state 0 is recurrent. Thus, all the states are recurrent, since there is only one class; in other words, the chain is recurrent.
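The asymptotics above are easy to observe numerically: the binomial expression gives the ratio $p^{(2n)}_{0,0} = p^{(2n-2)}_{0,0} \cdot \frac{2n-1}{2n}$. A sketch (not from the project) comparing the exact values with $1/\sqrt{\pi n}$ and watching the partial sums diverge:

```python
# Sketch: exact p^{(2n)}_{0,0} = C(2n, n) / 4^n via the ratio (2n-1)/(2n),
# compared with the Stirling approximation 1 / sqrt(pi * n).
from math import pi, sqrt

p, partial_sum = 1.0, 0.0            # p^{(0)}_{0,0} = 1
for n in range(1, 20001):
    p *= (2 * n - 1) / (2 * n)       # p^{(2n)} = p^{(2n-2)} * (2n-1)/(2n)
    partial_sum += p
    if n in (1, 10, 100, 1000, 20000):
        print(n, p, 1 / sqrt(pi * n), partial_sum)
# the first two columns get closer and closer, while the partial sum keeps
# growing like 2 * sqrt(n / pi), illustrating the divergence used above
```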

(ii) Random walk on $\mathbb{Z}^2$. In this case the state space is $I = \{(i, j) : i, j \in \mathbb{Z}\}$ and the transition probabilities are
$$p_{(i,j),(i',j')} = \begin{cases} p_1, & \text{if } i' = i + 1, \; j' = j, \\ p_2, & \text{if } i' = i - 1, \; j' = j, \\ p_3, & \text{if } i' = i, \; j' = j + 1, \\ p_4, & \text{if } i' = i, \; j' = j - 1, \end{cases}$$
with $p_1 + p_2 + p_3 + p_4 = 1$. From now on, we consider the particular case where $p_i = \frac{1}{4}$, for $i = 1, 2, 3, 4$, and we compute $p^{(2n)}_{(0,0),(0,0)}$. Starting in the state $(0, 0)$, to return to this state in $2n$ stages it is necessary that, for some $k \in \mathbb{N}$, we have made $k$ movements to the east, $k$ to the west, $n - k$ to the north and $n - k$ to the south. So, we express these $2n$ repetitions of an experiment with 4 possible outcomes, which are east, west, north and south, each with probability $\frac{1}{4}$, by using the multinomial distribution. Therefore, we get
$$p^{(2n)}_{(0,0),(0,0)} = \frac{1}{4^{2n}} \sum_{k=0}^{n} \frac{(2n)!}{k!\,k!\,(n-k)!\,(n-k)!} = \left[\binom{2n}{n}\left(\frac{1}{2}\right)^{2n}\right]^2.$$
We observe that $p^{(2n)}_{(0,0),(0,0)} = \left(p^{(2n)}_{0,0}\right)^2$, where $p^{(2n)}_{0,0}$ is the return probability to the state 0 in $2n$ stages in a random walk on $\mathbb{Z}$. Using again the Stirling formula, we get
$$p^{(2n)}_{(0,0),(0,0)} = \left[\binom{2n}{n}\left(\frac{1}{2}\right)^{2n}\right]^2 \sim \frac{1}{\pi n}.$$
So we have
$$\sum_{n=1}^{\infty} p^{(2n)}_{(0,0),(0,0)} = \sum_{n=1}^{\infty} \frac{1}{\pi n} = +\infty,$$
because we have obtained a harmonic series. Therefore, using Observation 4.2.2, the state $(0, 0)$ is recurrent. In consequence, all the states are recurrent, as there is only one class; that is to say, the chain is recurrent.

Observation 4.3.1. The case where there is no symmetry has more difficulties, and the result is that the chain is transient.

(iii) Consider a Markov chain on the states $\{1, 2, 3, 4\}$ whose state diagram is as follows: state 1 can stay at 1 or move to states 2 or 3; states 2 and 3 communicate with one another and form a closed class; and state 4 can only move to state 3.

In this example there are three equivalence classes, which are $C_1 = \{2, 3\}$, $C_2 = \{1\}$ and $C_3 = \{4\}$. The state 1 is transient because, starting in this state, you can go to state 2 but, once you are there, you can never return to state 1; therefore, $C_2$ is a transient class. Similarly, the state 4 is transient because you can leave this state to go to state 3, but then you cannot come back to state 4; thus, $C_3$ is a transient class. Finally, $C_1$ is a recurrent class because it is a finite closed class.

Now we study, starting in state 1, what the probability is of returning to this state in 2 stages. If we begin in state 1, we can only move to states 1, 2 or 3. If we go to states 2 or 3, it means that we are entering the closed class $C_1$; so it will no longer be possible to return to state 1. In consequence, the only possibility left is staying at state 1 during both stages. Therefore, we obtain
$$p^{(2)}_{1,1} = (p_{1,1})^2.$$

5 Invariant distributions

In this chapter, we first introduce the concept of invariant distributions, also known as stationary distributions. After that, we study the existence of this type of distribution; in particular, we show the relation between this idea and the limit of $p^{(n)}_{i,j}$ as $n$ approaches infinity. Subsequently, we work on the uniqueness of invariant distributions; specifically, we see that this notion is related to the concept of ergodicity. Finally, we study the concept of regularity and we give some examples to complete the explanation of the previous concepts.

Throughout this chapter we consider a probability space $(\Omega, \mathcal{A}, P)$ and a collection of discrete random variables $X_n : \Omega \to I$, where $I$ is a countable set and $\{X_n : n \geq 0\}$ is a HMC$(\gamma, \Pi)$ with $\Pi = (p_{i,j} : i, j \in I)$.

5.1 Invariant distributions or stationary distributions

The aim of this section is to study the long-time properties of a HMC; in other words, we work on the value of $\lim_{n \to \infty} p^{(n)}_{i,j}$. Moreover, we show that this concept is related to what we call invariant distributions.

Definition 5.1.1. Let $\Pi$ be a transition matrix. A probability distribution $\mu$ over $I$ is called an invariant distribution or stationary distribution for $\Pi$ if $\mu^t = \mu^t \Pi$, where
$$\mu_j = \sum_{i \in I} \mu_i\, p_{i,j}, \quad \forall j \in I, \qquad \text{and} \qquad \mu^t = \left(\mu_1, \mu_2, \dots, \mu_{\mathrm{card}(I)}\right).$$
In addition, the following expression is also satisfied, because $\mu$ is a probability distribution over $I$:
$$\sum_{j \in I} \mu_j = 1.$$

Observation 5.1.2. The previous definition shows that $\mu^t = \mu^t \Pi^n$, for all $n \geq 0$; so that, if $\{X_n : n \geq 0\}$ is a HMC$(\mu, \Pi)$, its law is
$$\mu^{(n)}_k = P(X_n = k) = \sum_{i \in I} \mu_i\, p^{(n)}_{i,k} = \mu_k,$$
for $k \in I$ and $n \geq 0$. Therefore, all the random variables $X_n$ have the same law, for all $n \geq 0$.

Taking the previous observation into consideration, we get the following result.

Theorem 5.1.3. Assume that $\{X_n : n \geq 0\}$ is a HMC$(\mu, \Pi)$ and let $\mu$ be an invariant distribution for $\Pi$. Then, $\{X_{n+m} : n \geq 0\}$ is also a HMC$(\mu, \Pi)$.

Proof. The preceding observation asserts that $P(X_m = k) = \mu_k$, for all $k \in I$. As we also have that $X_{m+n+1}$ is independent of $X_m, X_{m+1}, \dots, X_{m+n}$ conditional on $X_{m+n} = k$, and has distribution $(p_{k,j} : j \in I)$, we obtain the required result.

From now on, we suppose that the state space $I$ is finite. Next, we study the existence of stationary distributions.

Theorem 5.1.4. Consider a HMC with finite state space $I$ and transition matrix $\Pi$. Then, $\Pi$ has, at least, one invariant distribution.

Proof. Let $v$ be a probability over $I$; it means that $v = \left(v_1, \dots, v_{\mathrm{card}(I)}\right)$, with $v_i \in [0, 1]$ and $\sum_{i \in I} v_i = 1$. Now, for $n \geq 1$, we define
$$v^t_n = \frac{1}{n} \sum_{k=0}^{n-1} v^t \Pi^k.$$
Then we get that $v^t_n$ defines a probability over $I$, because $v^i_n \geq 0$ and
$$\sum_{i \in I} v^i_n = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{i \in I} \sum_{h \in I} v_h\, p^{(k)}_{h,i} = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{h \in I} v_h \left[\sum_{i \in I} p^{(k)}_{h,i}\right] = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{h \in I} v_h = \frac{1}{n} \sum_{k=0}^{n-1} 1 = 1.$$
Furthermore, the set of probabilities over $I$ is a closed and bounded subset of $[0, 1]^{\mathrm{card}(I)}$; so there exists a convergent subsequence, whose limit is an element of that set. That is to say, there exists a probability $\mu$ over $I$ such that, for some subsequence $\{v_{n_k} : k \geq 1\}$, the limit of $v_{n_k}$ as $k$ approaches infinity equals $\mu$. Then, we have that
$$v^t_{n_k} - v^t_{n_k} \Pi = \frac{1}{n_k}\left[\sum_{l=0}^{n_k - 1} v^t \Pi^l - \sum_{l=0}^{n_k - 1} v^t \Pi^{l+1}\right] = \frac{1}{n_k}\left(v^t - v^t \Pi^{n_k}\right).$$
Finally, taking the limit of the previous identity as $k$ approaches infinity, we obtain
$$\mu^t - \mu^t \Pi = \lim_{k \to \infty} \frac{1}{n_k}\left(v^t - v^t \Pi^{n_k}\right) = 0,$$
because $\left\{v^t - v^t \Pi^{n_k} : k \geq 1\right\}$ is a bounded sequence. In consequence, $\mu$ is an invariant distribution for $\Pi$.

Observation 5.1.5. The system of linear equations $\mu^t = \mu^t \Pi$ has a solution. To see this fact, we verify that $\mu^t(I - \Pi) = 0$, which is also a system of linear equations, has a non-trivial solution. Its determinant is
$$\begin{vmatrix} 1 - p_{1,1} & -p_{1,2} & \cdots & -p_{1,|I|} \\ -p_{2,1} & 1 - p_{2,2} & \cdots & -p_{2,|I|} \\ \vdots & \vdots & & \vdots \\ -p_{|I|,1} & -p_{|I|,2} & \cdots & 1 - p_{|I|,|I|} \end{vmatrix}$$
where $|I|$ denotes the cardinality of the state space $I$. The sum of the entries of any row of the previous determinant equals $1 - \sum_{j \in I} p_{i,j} = 0$, for all $i \in I$, so the determinant vanishes and the system has a non-trivial solution, although we cannot assert that this solution is a probability. In addition, in general there is no uniqueness.
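Numerically, an invariant distribution of a finite chain can be obtained as a left eigenvector of $\Pi$ for the eigenvalue 1 (equivalently, a non-trivial solution of $\mu^t(I - \Pi) = 0$ normalised to sum 1, as in Observation 5.1.5). A sketch (not from the project) on the two-state chain of Chapter 2, with the assumed values $\alpha = 0.3$ and $\beta = 0.6$, for which $\mu = (\beta, \alpha)/(\alpha + \beta)$:

```python
# Sketch: invariant distribution as the normalised left eigenvector of Pi
# for the eigenvalue 1; alpha = 0.3 and beta = 0.6 are assumed values.
import numpy as np

alpha, beta = 0.3, 0.6
Pi = np.array([[1 - alpha, alpha],
               [beta, 1 - beta]])
w, v = np.linalg.eig(Pi.T)               # left eigenvectors of Pi
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu = mu / mu.sum()                       # normalise to a probability
print(mu)                                # [2/3, 1/3] = (beta, alpha)/(alpha+beta)
print(np.allclose(mu @ Pi, mu))          # True: mu^t = mu^t Pi
```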


More information

2 Discrete-Time Markov Chains

2 Discrete-Time Markov Chains 2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

Chapter 11 Advanced Topic Stochastic Processes

Chapter 11 Advanced Topic Stochastic Processes Chapter 11 Advanced Topic Stochastic Processes CHAPTER OUTLINE Section 1 Simple Random Walk Section 2 Markov Chains Section 3 Markov Chain Monte Carlo Section 4 Martingales Section 5 Brownian Motion Section

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

Lecture 4 - Random walk, ruin problems and random processes

Lecture 4 - Random walk, ruin problems and random processes Lecture 4 - Random walk, ruin problems and random processes Jan Bouda FI MU April 19, 2009 Jan Bouda (FI MU) Lecture 4 - Random walk, ruin problems and random processesapril 19, 2009 1 / 30 Part I Random

More information

MAS275 Probability Modelling Exercises

MAS275 Probability Modelling Exercises MAS75 Probability Modelling Exercises Note: these questions are intended to be of variable difficulty. In particular: Questions or part questions labelled (*) are intended to be a bit more challenging.

More information

2 DISCRETE-TIME MARKOV CHAINS

2 DISCRETE-TIME MARKOV CHAINS 1 2 DISCRETE-TIME MARKOV CHAINS 21 FUNDAMENTAL DEFINITIONS AND PROPERTIES From now on we will consider processes with a countable or finite state space S {0, 1, 2, } Definition 1 A discrete-time discrete-state

More information

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution.

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution. Lecture 7 1 Stationary measures of a Markov chain We now study the long time behavior of a Markov Chain: in particular, the existence and uniqueness of stationary measures, and the convergence of the distribution

More information

STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS. Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept.

STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS. Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept. STOCHASTIC MODELS LECTURE 1 MARKOV CHAINS Nan Chen MSc Program in Financial Engineering The Chinese University of Hong Kong (ShenZhen) Sept. 6, 2016 Outline 1. Introduction 2. Chapman-Kolmogrov Equations

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

Classification of Countable State Markov Chains

Classification of Countable State Markov Chains Classification of Countable State Markov Chains Friday, March 21, 2014 2:01 PM How can we determine whether a communication class in a countable state Markov chain is: transient null recurrent positive

More information

Markov Chains Introduction

Markov Chains Introduction Markov Chains 4 4.1. Introduction In this chapter, we consider a stochastic process {X n,n= 0, 1, 2,...} that takes on a finite or countable number of possible values. Unless otherwise mentioned, this

More information

MARKOV CHAIN MONTE CARLO

MARKOV CHAIN MONTE CARLO MARKOV CHAIN MONTE CARLO RYAN WANG Abstract. This paper gives a brief introduction to Markov Chain Monte Carlo methods, which offer a general framework for calculating difficult integrals. We start with

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Stochastic Models: Markov Chains and their Generalizations

Stochastic Models: Markov Chains and their Generalizations Scuola di Dottorato in Scienza ed Alta Tecnologia Dottorato in Informatica Universita di Torino Stochastic Models: Markov Chains and their Generalizations Gianfranco Balbo e Andras Horvath Outline Introduction

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

Readings: Finish Section 5.2

Readings: Finish Section 5.2 LECTURE 19 Readings: Finish Section 5.2 Lecture outline Markov Processes I Checkout counter example. Markov process: definition. -step transition probabilities. Classification of states. Example: Checkout

More information

ISE/OR 760 Applied Stochastic Modeling

ISE/OR 760 Applied Stochastic Modeling ISE/OR 760 Applied Stochastic Modeling Topic 2: Discrete Time Markov Chain Yunan Liu Department of Industrial and Systems Engineering NC State University Yunan Liu (NC State University) ISE/OR 760 1 /

More information

Examples of Countable State Markov Chains Thursday, October 16, :12 PM

Examples of Countable State Markov Chains Thursday, October 16, :12 PM stochnotes101608 Page 1 Examples of Countable State Markov Chains Thursday, October 16, 2008 12:12 PM Homework 2 solutions will be posted later today. A couple of quick examples. Queueing model (without

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem Coyright c 2017 by Karl Sigman 1 Gambler s Ruin Problem Let N 2 be an integer and let 1 i N 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins

More information

Markov chains. Randomness and Computation. Markov chains. Markov processes

Markov chains. Randomness and Computation. Markov chains. Markov processes Markov chains Randomness and Computation or, Randomized Algorithms Mary Cryan School of Informatics University of Edinburgh Definition (Definition 7) A discrete-time stochastic process on the state space

More information

Markov Chains. INDER K. RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai , India

Markov Chains. INDER K. RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai , India Markov Chains INDER K RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai 400076, India email: ikrana@iitbacin Abstract These notes were originally prepared for a College

More information

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006.

Markov Chains. As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006. Markov Chains As part of Interdisciplinary Mathematical Modeling, By Warren Weckesser Copyright c 2006 1 Introduction A (finite) Markov chain is a process with a finite number of states (or outcomes, or

More information

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is MARKOV CHAINS What I will talk about in class is pretty close to Durrett Chapter 5 sections 1-5. We stick to the countable state case, except where otherwise mentioned. Lecture 7. We can regard (p(i, j))

More information

1.3 Convergence of Regular Markov Chains

1.3 Convergence of Regular Markov Chains Markov Chains and Random Walks on Graphs 3 Applying the same argument to A T, which has the same λ 0 as A, yields the row sum bounds Corollary 0 Let P 0 be the transition matrix of a regular Markov chain

More information

Lecture 1: Overview of percolation and foundational results from probability theory 30th July, 2nd August and 6th August 2007

Lecture 1: Overview of percolation and foundational results from probability theory 30th July, 2nd August and 6th August 2007 CSL866: Percolation and Random Graphs IIT Delhi Arzad Kherani Scribe: Amitabha Bagchi Lecture 1: Overview of percolation and foundational results from probability theory 30th July, 2nd August and 6th August

More information

Chapter 2: Markov Chains and Queues in Discrete Time

Chapter 2: Markov Chains and Queues in Discrete Time Chapter 2: Markov Chains and Queues in Discrete Time L. Breuer University of Kent 1 Definition Let X n with n N 0 denote random variables on a discrete space E. The sequence X = (X n : n N 0 ) is called

More information

Lecture 6 Random walks - advanced methods

Lecture 6 Random walks - advanced methods Lecture 6: Random wals - advanced methods 1 of 11 Course: M362K Intro to Stochastic Processes Term: Fall 2014 Instructor: Gordan Zitovic Lecture 6 Random wals - advanced methods STOPPING TIMES Our last

More information

Math Homework 5 Solutions

Math Homework 5 Solutions Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram

More information

MATH HOMEWORK PROBLEMS D. MCCLENDON

MATH HOMEWORK PROBLEMS D. MCCLENDON MATH 46- HOMEWORK PROBLEMS D. MCCLENDON. Consider a Markov chain with state space S = {0, }, where p = P (0, ) and q = P (, 0); compute the following in terms of p and q: (a) P (X 2 = 0 X = ) (b) P (X

More information

25.1 Ergodicity and Metric Transitivity

25.1 Ergodicity and Metric Transitivity Chapter 25 Ergodicity This lecture explains what it means for a process to be ergodic or metrically transitive, gives a few characterizes of these properties (especially for AMS processes), and deduces

More information

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < + Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 110 Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

More information

SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING

SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING SIMPLE RANDOM WALKS: IMPROBABILITY OF PROFITABLE STOPPING EMILY GENTLES Abstract. This paper introduces the basics of the simple random walk with a flair for the statistical approach. Applications in biology

More information

Quitting games - An Example

Quitting games - An Example Quitting games - An Example E. Solan 1 and N. Vieille 2 January 22, 2001 Abstract Quitting games are n-player sequential games in which, at any stage, each player has the choice between continuing and

More information

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes.

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes. 1. LECTURE 1: APRIL 3, 2012 1.1. Motivating Remarks: Differential Equations. In the deterministic world, a standard tool used for modeling the evolution of a system is a differential equation. Such an

More information

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME ELIZABETH G. OMBRELLARO Abstract. This paper is expository in nature. It intuitively explains, using a geometrical and measure theory perspective, why

More information

Quantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC)

Quantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC) Quantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC) 2.1 Classification of DTMC states Prof. Dr. Anne Remke Design and Analysis of Communication Systems University

More information

0.1 Markov Chains Generalities 0.1. MARKOV CHAINS 1

0.1 Markov Chains Generalities 0.1. MARKOV CHAINS 1 0.1. MARKOV CHAINS 1 0.1 Markov Chains 0.1.1 Generalities A Markov Chain consists of a countable (possibly finite) set S (called the state space) together with a countable family of random variables X,

More information

Markov Chains (Part 4)

Markov Chains (Part 4) Markov Chains (Part 4) Steady State Probabilities and First Passage Times Markov Chains - 1 Steady-State Probabilities Remember, for the inventory example we had (8) P &.286 =.286.286 %.286 For an irreducible

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

Random Times and Their Properties

Random Times and Their Properties Chapter 6 Random Times and Their Properties Section 6.1 recalls the definition of a filtration (a growing collection of σ-fields) and of stopping times (basically, measurable random times). Section 6.2

More information

Lecture 4: Probability, Proof Techniques, Method of Induction Lecturer: Lale Özkahya

Lecture 4: Probability, Proof Techniques, Method of Induction Lecturer: Lale Özkahya BBM 205 Discrete Mathematics Hacettepe University http://web.cs.hacettepe.edu.tr/ bbm205 Lecture 4: Probability, Proof Techniques, Method of Induction Lecturer: Lale Özkahya Resources: Kenneth Rosen, Discrete

More information

Chapter 35 out of 37 from Discrete Mathematics for Neophytes: Number Theory, Probability, Algorithms, and Other Stuff by J. M. Cargal.

Chapter 35 out of 37 from Discrete Mathematics for Neophytes: Number Theory, Probability, Algorithms, and Other Stuff by J. M. Cargal. 35 Mixed Chains In this chapter we learn how to analyze Markov chains that consists of transient and absorbing states. Later we will see that this analysis extends easily to chains with (nonabsorbing)

More information

Markov Chains and Pandemics

Markov Chains and Pandemics Markov Chains and Pandemics Caleb Dedmore and Brad Smith December 8, 2016 Page 1 of 16 Abstract Markov Chain Theory is a powerful tool used in statistical analysis to make predictions about future events

More information

Markov Chains, Random Walks on Graphs, and the Laplacian

Markov Chains, Random Walks on Graphs, and the Laplacian Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer

More information

1 Random walks: an introduction

1 Random walks: an introduction Random Walks: WEEK Random walks: an introduction. Simple random walks on Z.. Definitions Let (ξ n, n ) be i.i.d. (independent and identically distributed) random variables such that P(ξ n = +) = p and

More information

Stochastic Processes (Stochastik II)

Stochastic Processes (Stochastik II) Stochastic Processes (Stochastik II) Lecture Notes Zakhar Kabluchko University of Ulm Institute of Stochastics L A TEX-version: Judith Schmidt Vorwort Dies ist ein unvollständiges Skript zur Vorlesung

More information

88 CONTINUOUS MARKOV CHAINS

88 CONTINUOUS MARKOV CHAINS 88 CONTINUOUS MARKOV CHAINS 3.4. birth-death. Continuous birth-death Markov chains are very similar to countable Markov chains. One new concept is explosion which means that an infinite number of state

More information

Summary of Results on Markov Chains. Abstract

Summary of Results on Markov Chains. Abstract Summary of Results on Markov Chains Enrico Scalas 1, 1 Laboratory on Complex Systems. Dipartimento di Scienze e Tecnologie Avanzate, Università del Piemonte Orientale Amedeo Avogadro, Via Bellini 25 G,

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

3. DISCRETE RANDOM VARIABLES

3. DISCRETE RANDOM VARIABLES IA Probability Lent Term 3 DISCRETE RANDOM VARIABLES 31 Introduction When an experiment is conducted there may be a number of quantities associated with the outcome ω Ω that may be of interest Suppose

More information

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results Discrete time Markov chains Discrete Time Markov Chains, Definition and classification 1 1 Applied Mathematics and Computer Science 02407 Stochastic Processes 1, September 5 2017 Today: Short recap of

More information

Markov Chains. Chapter 16. Markov Chains - 1

Markov Chains. Chapter 16. Markov Chains - 1 Markov Chains Chapter 16 Markov Chains - 1 Why Study Markov Chains? Decision Analysis focuses on decision making in the face of uncertainty about one future event. However, many decisions need to consider

More information

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 9: Markov Chain Monte Carlo 9.1 Markov Chain A Markov Chain Monte

More information

Essentials on the Analysis of Randomized Algorithms

Essentials on the Analysis of Randomized Algorithms Essentials on the Analysis of Randomized Algorithms Dimitris Diochnos Feb 0, 2009 Abstract These notes were written with Monte Carlo algorithms primarily in mind. Topics covered are basic (discrete) random

More information

Stability of the two queue system

Stability of the two queue system Stability of the two queue system Iain M. MacPhee and Lisa J. Müller University of Durham Department of Mathematical Science Durham, DH1 3LE, UK (e-mail: i.m.macphee@durham.ac.uk, l.j.muller@durham.ac.uk)

More information

4 Branching Processes

4 Branching Processes 4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring

More information

An Introduction to Entropy and Subshifts of. Finite Type

An Introduction to Entropy and Subshifts of. Finite Type An Introduction to Entropy and Subshifts of Finite Type Abby Pekoske Department of Mathematics Oregon State University pekoskea@math.oregonstate.edu August 4, 2015 Abstract This work gives an overview

More information

Markov Chains. Contents

Markov Chains. Contents 6 Markov Chains Contents 6.1. Discrete-Time Markov Chains............... p. 2 6.2. Classification of States................... p. 9 6.3. Steady-State Behavior.................. p. 13 6.4. Absorption Probabilities

More information

Countable state discrete time Markov Chains

Countable state discrete time Markov Chains Countable state discrete time Markov Chains Tuesday, March 18, 2014 2:12 PM Readings: Lawler Ch. 2 Karlin & Taylor Chs. 2 & 3 Resnick Ch. 1 Countably infinite state spaces are of practical utility in situations

More information

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected 4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X

More information

1 Continuous-time chains, finite state space

1 Continuous-time chains, finite state space Université Paris Diderot 208 Markov chains Exercises 3 Continuous-time chains, finite state space Exercise Consider a continuous-time taking values in {, 2, 3}, with generator 2 2. 2 2 0. Draw the diagramm

More information

Markov chain Monte Carlo

Markov chain Monte Carlo 1 / 26 Markov chain Monte Carlo Timothy Hanson 1 and Alejandro Jara 2 1 Division of Biostatistics, University of Minnesota, USA 2 Department of Statistics, Universidad de Concepción, Chile IAP-Workshop

More information

MADHAVA MATHEMATICS COMPETITION, December 2015 Solutions and Scheme of Marking

MADHAVA MATHEMATICS COMPETITION, December 2015 Solutions and Scheme of Marking MADHAVA MATHEMATICS COMPETITION, December 05 Solutions and Scheme of Marking NB: Part I carries 0 marks, Part II carries 30 marks and Part III carries 50 marks Part I NB Each question in Part I carries

More information

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 2.2.5. proof of extinction lemma. The proof of Lemma 2.3 is just like the proof of the lemma I did on Wednesday. It goes like this. Suppose that â is the smallest

More information

Stat 516, Homework 1

Stat 516, Homework 1 Stat 516, Homework 1 Due date: October 7 1. Consider an urn with n distinct balls numbered 1,..., n. We sample balls from the urn with replacement. Let N be the number of draws until we encounter a ball

More information

RANDOM WALKS. Course: Spring 2016 Lecture notes updated: May 2, Contents

RANDOM WALKS. Course: Spring 2016 Lecture notes updated: May 2, Contents RANDOM WALKS ARIEL YADIN Course: 201.1.8031 Spring 2016 Lecture notes updated: May 2, 2016 Contents Lecture 1. Introduction 3 Lecture 2. Markov Chains 8 Lecture 3. Recurrence and Transience 18 Lecture

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

MAT 135B Midterm 1 Solutions

MAT 135B Midterm 1 Solutions MAT 35B Midterm Solutions Last Name (PRINT): First Name (PRINT): Student ID #: Section: Instructions:. Do not open your test until you are told to begin. 2. Use a pen to print your name in the spaces above.

More information