Summary of Results on Markov Chains

Enrico Scalas
Laboratory on Complex Systems, Dipartimento di Scienze e Tecnologie Avanzate,
Università del Piemonte Orientale "Amedeo Avogadro", Via Bellini 25 G, Alessandria, Italy
(Dated: August 30, 2007)

Abstract

These short lecture notes contain a summary of results on the elementary theory of Markov chains. The purpose of these notes is to let the reader understand as quickly as possible the concept of statistical equilibrium, based on the stationary distribution of homogeneous Markov chains. Some exercises related to these notes can be found in a separate document.

Electronic address: enrico.scalas@mfn.unipmn.it

I. INTRODUCTION

Many models used in economics, in physics and in other sciences are instances of Markov chains. This is the case for Schelling's model [1] and for the closely related Ising model [2] with the usual Monte Carlo dynamics [3]. Economists will find further motivation to study Markov chains in a recent book by Aoki and Yoshikawa [4]. Markov chains have the advantage that their theory can be introduced, and many results proven, within the framework of elementary probability theory, without extensive use of measure-theoretic tools. In compiling the present summary, the books by Hoel et al., by Kemeny and Snell, by Durrett and by Çinlar [5-8] have been consulted. These notes can be considered a summary of the first two chapters of Hoel et al.

In this summary, random variables will be denoted by capital letters X, Y, ... and their values by small letters x, y, .... In order to define a Markov chain, a random variable X_n will be considered that can assume values in a finite or at most denumerable set of states S at instants denoted by the subscript n = 0, 1, 2, .... This subscript will always be interpreted as a discrete-time index. It will be further assumed that

P(X_{n+1} = x_{n+1} | X_0 = x_0, ..., X_n = x_n) = P(X_{n+1} = x_{n+1} | X_n = x_n),   (1)

for every choice of the non-negative integer n and of the values x_0, ..., x_n belonging to S. P(· | ·) denotes a conditional probability. The meaning of equation (1) is that the probability of X_{n+1} does not depend on the past history, but only on the value of X_n; this equation, the so-called Markov property, can be used to define Markov chains. The conditional probabilities P(X_{n+1} = x_{n+1} | X_n = x_n) are called transition probabilities. If they do not depend on n, they are stationary (or homogeneous) transition probabilities and the corresponding Markov chains are stationary (or homogeneous) Markov chains.

II. PROPERTIES OF MARKOV CHAINS

A. Transitions and initial distribution

The transition function P(x, y) of a Markov chain X_n is defined as

P(x, y) = P(X_1 = y | X_0 = x),   x, y ∈ S.   (2)
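A homogeneous chain is fully specified by its transition function and its initial state. The following minimal Python sketch (not part of the original notes; the three-state matrix is an arbitrary illustrative choice, and numpy is assumed) simulates a path using only the current state, exactly as the Markov property (1) allows.

import numpy as np

rng = np.random.default_rng(seed=1)

# Arbitrary illustrative transition matrix on S = {0, 1, 2};
# rows are non-negative and sum to 1.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def simulate(P, x0, n_steps, rng):
    """Draw a path X_0, ..., X_n; X_{n+1} depends only on X_n."""
    path = [x0]
    for _ in range(n_steps):
        # the next state is sampled from the row of P indexed by the current state
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate(P, x0=0, n_steps=15, rng=rng))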

The values of P(x, y) are non-negative and the sum over the final states y of P(x, y) is 1. In the finite case with M states, this function can be represented as a square M × M matrix with non-negative entries and with rows summing to 1. For a stationary Markov chain, one has

P(X_{n+1} = y | X_n = x) = P(x, y),   n ≥ 1,   (3)

the initial distribution is

π_0(x) = P(X_0 = x),   (4)

and the joint probability distribution P(X_0 = x_0, X_1 = x_1, ..., X_n = x_n) can be expressed as a product of π_0(x) and of the P(x, y)'s in the following way:

P(X_0 = x_0, X_1 = x_1, ..., X_n = x_n) = π_0(x_0) P(x_0, x_1) ··· P(x_{n-1}, x_n).   (5)

The m-step transition function P^m(x, y) is the probability of going from state x to state y in m steps. It is given by

P^m(x, y) = ∑_{y_1} ··· ∑_{y_{m-1}} P(x, y_1) P(y_1, y_2) ··· P(y_{m-2}, y_{m-1}) P(y_{m-1}, y)   (6)

for m ≥ 2; for m = 1 it coincides with P(x, y), and for m = 0 it is 1 if x = y and 0 otherwise. The following three formulae involving P^m(x, y) are useful in the theory of Markov chains:

P^{n+m}(x, y) = ∑_z P^n(x, z) P^m(z, y),   (7)

P(X_n = y) = ∑_x π_0(x) P^n(x, y),   (8)

P(X_{n+1} = y) = ∑_x P(X_n = x) P(x, y).   (9)

B. Hitting times and classification of states

Given a subset of states A, the hitting time T_A is defined as

T_A = min{n > 0 : X_n ∈ A}.   (10)

Thanks to the concept of hitting time, it is possible to classify the states of Markov chains in a very useful way. Let P_x(·) denote the probability of an event for a Markov chain starting at state x. Then one has the following formula for the n-step transition function:

P^n(x, y) = ∑_{m=1}^{n} P_x(T_y = m) P^{n-m}(y, y).   (11)
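In the finite case, equations (7) and (8) are plain matrix identities: P^n is the n-th matrix power of P, and the distribution of X_n is the row vector π_0 P^n. A minimal numerical check (a sketch assuming numpy; the matrix is the same illustrative one used above, redefined here so the snippet is self-contained):

import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi0 = np.array([1.0, 0.0, 0.0])  # chain started at state 0

# Equation (7), the Chapman-Kolmogorov identity: P^{n+m} = P^n P^m.
P5 = np.linalg.matrix_power(P, 5)
assert np.allclose(P5, np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3))

# Equation (8): P(X_5 = y) = sum_x pi0(x) P^5(x, y).
print(pi0 @ P5)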

An absorbing state of a Markov chain is a state a for which P(a, a) = 1 or, equivalently, P(a, y) = 0 for every state y ≠ a. If the chain reaches such a state, it is trapped there and will never leave. For an absorbing state, it turns out that P^n(x, a) = P_x(T_a ≤ n) for n ≥ 1. The quantity

ρ_{xy} = P_x(T_y < ∞)   (12)

can be used to introduce two classes of states. ρ_{yy} is the probability that a chain starting at y will ever return to y. A state y is recurrent if ρ_{yy} = 1 and transient if ρ_{yy} < 1. For a transient state, there is a positive probability of never returning. An absorbing state is recurrent.

The indicator function I_y(·) helps in defining the counting random variable N(y). The indicator function I_y(X_n) is 1 if X_n = y and 0 otherwise; therefore

N(y) = ∑_{n=1}^{∞} I_y(X_n)   (13)

counts the number of times the chain visits state y. The event {N(y) ≥ 1} coincides with the event {T_y < ∞}. Therefore, one can write

P_x(N(y) ≥ 1) = P_x(T_y < ∞) = ρ_{xy}.   (14)

By induction, one can prove that for m ≥ 1

P_x(N(y) ≥ m) = ρ_{xy} ρ_{yy}^{m-1},   (15)

hence

P_x(N(y) = m) = ρ_{xy} ρ_{yy}^{m-1} (1 - ρ_{yy}),   (16)

and finally

P_x(N(y) = 0) = 1 - P_x(N(y) ≥ 1) = 1 - ρ_{xy}.   (17)

One can define G(x, y) = E_x(N(y)), the average number of visits to state y for a Markov chain started at x. It turns out that

G(x, y) = E_x(N(y)) = ∑_{n=1}^{∞} P^n(x, y).   (18)

It is now possible to state the following

Theorem 1.

1. Let y be a transient state. Then P_x(N(y) < ∞) = 1 and

G(x, y) = ρ_{xy} / (1 - ρ_{yy}),   x ∈ S,   (19)

which is finite for all states.

2. Let y be a recurrent state. Then P_y(N(y) = ∞) = 1 and G(y, y) = ∞, and one also has

P_x(N(y) = ∞) = P_x(T_y < ∞) = ρ_{xy},   x ∈ S.   (20)

Finally, if ρ_{xy} = 0, then G(x, y) = 0, whereas if ρ_{xy} > 0, then G(x, y) = ∞.

This theorem tells us that a Markov chain pays only a finite number of visits to a transient state, whereas if it starts from a recurrent state it will come back there an infinite number of times. If the Markov chain starts at an arbitrary state x, it may well be that it never visits the recurrent state y; but if it gets there, it will come back infinitely many times. A Markov chain is called transient if it has only transient states and recurrent if all of its states are recurrent. A finite Markov chain has at least one recurrent state and therefore cannot be transient.

C. The decomposition of the state space

A state x leads to another state y if ρ_{xy} > 0 or, equivalently, if there exists a positive integer n for which P^n(x, y) > 0. If x leads to y and y leads to z, then x leads to z. Based on this concept, there is the following

Theorem 2. Let x be a recurrent state and suppose that x leads to y. Then y is recurrent and ρ_{xy} = ρ_{yx} = 1.

A set of states C is said to be closed if no state in C leads to a state outside C. An absorbing state a defines the closed set {a}. There are several characterizations of closed sets, but they will not be included here. A closed set C is irreducible (or ergodic) if, for any choice of two states x and y in C, x leads to y. It is a consequence of Theorem 2 that if C is an irreducible closed set, either every state in C is transient or every state in C is recurrent. Another consequence of Theorems 1 and 2 is the following

Corollary 1. For an irreducible closed set C of recurrent states, one has ρ_{xy} = 1, P_x(N(y) = ∞) = 1, and G(x, y) = ∞ for all choices of x and y in C.

Finally, one has the following important result as a direct consequence of the above theorems and corollaries:

Theorem 3. If C is a finite irreducible closed set, then every state in C is recurrent.
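On a finite chain the leads-to relation can be computed mechanically from the transition matrix, since x leads to y if and only if P^n(x, y) > 0 for some 1 ≤ n ≤ M, where M is the number of states. A minimal sketch (not from the original notes; numpy and the example matrices are assumptions):

import numpy as np

def is_irreducible(P):
    """Test whether every state leads to every other state by
    checking positivity of all entries of sum_{n=1}^{M} P^n."""
    M = len(P)
    reach = np.zeros_like(P)
    Q = np.eye(M)
    for _ in range(M):
        Q = Q @ P
        reach += Q
    return bool((reach > 0).all())

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(is_irreducible(P))       # True: the whole chain is one irreducible closed set

P_abs = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
print(is_irreducible(P_abs))   # False: {0} is closed (absorbing), state 1 is transient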

If we are given a finite Markov chain, it is often possible to verify directly whether the process is irreducible (or ergodic) by using the transition function (matrix) and checking whether every state leads to every other state. Finally, one can prove the following decomposition into irreducible (ergodic) components:

Theorem 4. A non-empty set S_R of recurrent states is the union of a finite or countably infinite number of disjoint irreducible closed sets C_1, C_2, ....

If the initial state of the Markov chain is within one of the sets C_i, the time evolution will take place within this set and the chain will visit each of its states an infinite number of times. If the chain starts within the set of transient states S_T, either it will stay in this set, visiting each transient state only a finite number of times, or, if it reaches one of the C_i, it will stay there and will visit every state of that irreducible closed set infinitely many times. The problem arises of determining the hitting-time distribution of the various ergodic components for a chain that starts in a transient state, as well as the absorption probability ρ_C(x) = P_x(T_C < ∞) for x ∈ S_T. The latter problem has the following solution when S_T is finite.

Theorem 5. Let the set S_T be finite and let C be a closed irreducible set of recurrent states. Then the system of equations

f(x) = ∑_{y∈C} P(x, y) + ∑_{y∈S_T} P(x, y) f(y),   x ∈ S_T,   (21)

has the unique solution f(x) = ρ_C(x), x ∈ S_T.

III. THE PATH TO STATISTICAL EQUILIBRIUM

A. The stationary distribution

The stationary distribution π(x) is a function on the state space of the Markov chain such that its values are non-negative, its sum over the state space is 1, and

∑_x π(x) P(x, y) = π(y),   y ∈ S.   (22)

It is interesting to notice that, for all n,

∑_x π(x) P^n(x, y) = π(y),   y ∈ S.   (23)
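In the finite case, equation (22) says that π is a left eigenvector of the transition matrix with eigenvalue 1, so it can be found by linear algebra. A minimal sketch (assuming numpy; replacing one balance equation with the normalization constraint is one common device, not the only one):

import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi together with sum_x pi(x) = 1, cf. equation (22)."""
    M = len(P)
    A = P.T - np.eye(M)   # balance equations (P^T - I) pi = 0
    A[-1, :] = 1.0        # replace the last equation by the normalization
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = stationary_distribution(P)
print(pi)                       # [0.25, 0.5, 0.25] for this chain
print(np.allclose(pi @ P, pi))  # True: pi satisfies (22)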

Moreover, if X_0 follows the stationary distribution, then, for all n, X_n also follows the stationary distribution. Indeed, the distribution of X_n does not depend on n if and only if π_0(x) = π(x). If π(x) is a stationary distribution and

lim_{n→∞} P^n(x, y) = π(y)

holds for every initial state x and for every state y, then one can conclude that lim_{n→∞} P(X_n = y) = π(y) irrespective of the initial distribution. This means that, after a transient period, the distribution of chain states reaches a stationary distribution, which can then be interpreted as an equilibrium distribution in the statistical sense. For the reasons discussed above, it is important to see under which conditions π(x) exists and is unique, and to study the convergence properties of P^n(x, y).

B. How many times is a recurrent state visited on average?

Let N_n(y) denote the number of visits to a state y up to time step n. This random variable is defined as

N_n(y) = ∑_{m=1}^{n} I_y(X_m).   (24)

One can also define the average number of visits to state y, starting from x, up to step n:

G_n(x, y) = E_x(N_n(y)) = ∑_{m=1}^{n} P^m(x, y).   (25)

If m_y = E_y(T_y) denotes the mean return (recurrence) time to y for a chain starting at y, then, as an application of the strong law of large numbers, one has

Theorem 6. Let y be a recurrent state. Then

lim_{n→∞} N_n(y)/n = I_{{T_y < ∞}} / m_y   (26)

with probability one, and

lim_{n→∞} G_n(x, y)/n = ρ_{xy} / m_y,   x ∈ S.   (27)

The meaning of this theorem is that if a chain reaches a recurrent state y, then it returns there with frequency 1/m_y. Note that the quantity N_n(y)/n is immediately accessible from Monte Carlo simulation of Markov chains.
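The ensemble average G_n(x, y)/n in (27) can be probed numerically from matrix powers via (25). A minimal sketch (assuming numpy and the illustrative three-state chain used above, for which the limiting visit frequency of state 1 works out to 1/2):

import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# G_n(x, y)/n from equation (25): the ensemble average of N_n(y)/n.
n = 10_000
Q, G = np.eye(len(P)), np.zeros_like(P)
for _ in range(n):
    Q = Q @ P          # Q is now P^m for the current m
    G += Q             # accumulate sum_{m=1}^{n} P^m
print(G[0, 1] / n)     # G_n(0, 1)/n, close to rho_{01}/m_1 = 0.5 for this chain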

A corollary of immediate relevance to finite Markov chains is the following:

Corollary 2. Let x, y be two generic states in an irreducible closed set of recurrent states C. Then

lim_{n→∞} G_n(x, y)/n = 1/m_y,   (28)

and, if P(X_0 ∈ C) = 1, then with probability one, for any state y in C,

lim_{n→∞} N_n(y)/n = 1/m_y.   (29)

If m_y = ∞, the right-hand sides are both 0.

A null recurrent state y is a recurrent state for which m_y = ∞. A positive recurrent state y is a recurrent state for which m_y < ∞. The following result characterizes positive recurrent states:

Theorem 7. If x is a positive recurrent state and x leads to y, then y is also positive recurrent.

In a finite irreducible closed set of states there is no null recurrent state:

Theorem 8. If C is a finite irreducible closed set of states, every state in C is positive recurrent.

The following corollaries are immediate consequences of the above theorems and corollary:

Corollary 3. An irreducible Markov chain having a finite number of states is positive recurrent.

Corollary 4. A Markov chain having a finite number of states has no null recurrent states.

As a final remark of this subsection, note that Theorem 6 and Corollary 2 connect time averages, defined by N_n(y)/n, to ensemble averages, defined by G_n(x, y)/n, and they can be called ergodic theorems. Ergodic theorems are related to the so-called strong law of large numbers, one of the important results of probability theory.

Theorem 9. Let ξ_1, ξ_2, ... be independent and identically distributed random variables with finite mean µ. Then

lim_{n→∞} (ξ_1 + ξ_2 + ··· + ξ_n)/n = µ

with probability one. If these random variables are positive with infinite mean, the theorem still holds with µ = +∞.
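The successive return times to a recurrent state y are independent and identically distributed, so by Theorem 9 their sample mean converges to m_y; this gives a direct Monte Carlo estimate of the mean recurrence time. A minimal sketch (assuming numpy and the same illustrative chain, for which m_1 = 2 by Corollary 2 and the frequency computed earlier):

import numpy as np

rng = np.random.default_rng(seed=3)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def return_time(P, y, rng):
    """Simulate one return time T_y for a chain started at y."""
    x, t = y, 0
    while True:
        x = int(rng.choice(len(P), p=P[x]))
        t += 1
        if x == y:   # terminates with probability one: y is positive recurrent
            return t

samples = [return_time(P, y=1, rng=rng) for _ in range(20_000)]
print(np.mean(samples))  # estimate of m_1 = E_1(T_1), close to 2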

C. Existence, uniqueness and convergence to the stationary distribution

Eventually, the main results on the existence and uniqueness of π(x) and on the limiting behaviour of P^n(x, y) can be stated. The ergodic theorems discussed in the previous subsection provide a rule for the Monte Carlo approximation of π(x) that can be used to prove its existence and uniqueness. First of all, the stationary weight of both transient states and null recurrent states is zero.

Theorem 10. If π(x) is a stationary distribution and x is a transient state or a null recurrent state, then π(x) = 0.

This means that a Markov chain without positive recurrent states cannot have a stationary probability distribution. However:

Theorem 11. An irreducible positive recurrent Markov chain has a unique stationary distribution π(x), given by

π(x) = 1/m_x.   (30)

This theorem provides the ultimate justification for the use of Markov chain Monte Carlo simulations to sample the stationary distribution when the hypotheses of the theorem are fulfilled. In order to get an approximate value for π(y), one lets the system equilibrate (to fully justify this step, the convergence theorem will be necessary), then counts the number of occurrences N_n(y) of state y in a long enough simulation of the Markov chain, and divides it by the number of Monte Carlo steps n. This program can be carried out when the state space is not too large. In a typical Monte Carlo simulation of the Ising model with K sites, the number of states is 2^K and soon grows to become intractable. In such a simulation, many states will never be sampled even if the Markov chain is irreducible. For this reason, Metropolis et al. introduced the importance-sampling trick, whose explanation is outside the scope of the present notes [3, 9].

The next corollary provides a nice characterization of positive recurrent Markov chains.

Corollary 5. An irreducible Markov chain is positive recurrent if and only if it has a stationary distribution.
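The Monte Carlo recipe described after Theorem 11 is easy to try on the small illustrative chain: discard a burn-in segment, then take occupation frequencies as the estimate of π. A minimal sketch (assuming numpy; for this chain the exact answer, computed earlier, is π = (0.25, 0.5, 0.25)):

import numpy as np

rng = np.random.default_rng(seed=4)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

burn_in, n_steps = 1_000, 200_000
x = 0
counts = np.zeros(len(P))
for n in range(burn_in + n_steps):
    x = int(rng.choice(len(P), p=P[x]))
    if n >= burn_in:        # let the system equilibrate first
        counts[x] += 1

print(counts / n_steps)     # N_n(y)/n for each y, close to (0.25, 0.5, 0.25)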

For chains with a finite number of states, the existence and uniqueness of the stationary distribution is granted if they are irreducible:

Corollary 6. If a Markov chain having a finite number of states is irreducible, it has a unique stationary distribution.

Finally, here is the corollary discussed above, which gives the recipe to estimate π(x) from Monte Carlo simulations:

Corollary 7. For an irreducible positive recurrent Markov chain having stationary distribution π, one has, with probability one,

lim_{n→∞} N_n(x)/n = π(x).   (31)

For reducible Markov chains, the following results hold:

Theorem 12. Let S_P denote the set of positive recurrent states of a Markov chain. Then:

1. if S_P is empty, the stationary distribution does not exist;
2. if S_P is non-empty and irreducible, the chain has a unique stationary distribution;
3. if S_P is non-empty and reducible, the chain has an infinite number of stationary distributions.

Case 3 occurs when the chain reaches one of the closed irreducible sets and then stays there forever. It is a subtle case, where Monte Carlo simulations may not give proper results if the reducibility of the chain is not studied.

If x is a state of a Markov chain such that P^n(x, x) > 0 for some n ≥ 1, its period d_x can be defined as the greatest common divisor of the set {n ≥ 1 : P^n(x, x) > 0}. For two states x and y leading to each other, d_x = d_y. All states in an irreducible Markov chain have a common period d. The chain is called periodic with period d if d > 1 and aperiodic if d = 1. The following theorem gives the conditions for the convergence of P^n(x, y) to the stationary distribution:

Theorem 13. For an aperiodic irreducible positive recurrent Markov chain with stationary distribution π(x),

lim_{n→∞} P^n(x, y) = π(y),   x, y ∈ S.   (32)

For a periodic chain with the same properties and with period d, for each pair of states x, y in S there is an integer r, 0 ≤ r < d, such that P^n(x, y) = 0 unless n = md + r for some non-negative integer m, and

lim_{m→∞} P^{md+r}(x, y) = d π(y),   x, y ∈ S.   (33)

This theorem is the only one in the list that needs (mild) number-theoretic tools to be proven.
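The period of a state can be read off the powers of the transition matrix; for a small finite chain, scanning the first several powers is enough in practice. A minimal sketch (not from the original notes; numpy is assumed, and the two-state "flip" chain is an arbitrary illustrative example of period 2):

import numpy as np
from math import gcd
from functools import reduce

def period(P, x, n_max=50):
    """d_x = gcd of {n >= 1 : P^n(x, x) > 0}, scanned up to n_max.
    Assumes the chain returns to x within n_max steps."""
    return_lags = []
    Q = np.eye(len(P))
    for n in range(1, n_max + 1):
        Q = Q @ P
        if Q[x, x] > 0:
            return_lags.append(n)
    return reduce(gcd, return_lags)

flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])   # deterministic alternation between two states
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(period(flip, 0))  # 2: P^n(0, 0) > 0 only for even n, as in Theorem 13
print(period(P, 0))     # 1: the illustrative chain is aperiodic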

Acknowledgements

These notes were written during a visit to Marburg University supported by an Erasmus fellowship. The author wishes to thank Prof. Guido Germano and his group for their warm hospitality.

[1] T.C. Schelling (1971) Dynamic Models of Segregation, Journal of Mathematical Sociology 1.
[2] E. Ising (1924) Beitrag zur Theorie des Ferro- und Paramagnetismus, Dissertation, Mathematisch-Naturwissenschaftliche Fakultät der Hamburgischen Universität, Hamburg.
[3] D. Landau and K. Binder (1995) A Guide to Monte Carlo Simulations in Statistical Physics, Cambridge University Press.
[4] M. Aoki and H. Yoshikawa (2007) Reconstructing Macroeconomics. A Perspective from Statistical Physics and Combinatorial Stochastic Processes, Cambridge University Press.
[5] P.G. Hoel, S.C. Port, and C.J. Stone (1972) Introduction to Stochastic Processes, Houghton Mifflin, Boston.
[6] J.G. Kemeny and J. Laurie Snell (1976) Finite Markov Chains, Springer, New York.
[7] R. Durrett (1999) Essentials of Stochastic Processes, Springer, New York.
[8] E. Çinlar (1975) Introduction to Stochastic Processes, Prentice Hall, Englewood Cliffs.
[9] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller (1953) Equation of State Calculations by Fast Computing Machines, Journal of Chemical Physics, 21.
