Lecture 11: Introduction to Markov Chains

G. Caire (Sample Lectures)

Discrete-time random processes

A sequence of RVs indexed by a variable $n \in \{0, 1, 2, \dots\}$ forms a discrete-time random process $\{X_n\} = \{X_n : n = 0, 1, 2, \dots\}$. The process is defined by the collection of all joint distributions of order $m$,
$$F_{X_{n_1},\dots,X_{n_m}}(x_1,\dots,x_m)$$
for all instants $\{n_1,\dots,n_m\}$ and for all $m = 1, 2, 3, \dots$

If the process has the property that, given the present, the future is independent of the past, we say that the process satisfies the Markov property, or that it is a Markov chain. Furthermore, we are mainly interested in the case where $X_n$ takes on values in some countable set $S$, called the state space.

Markov chains

Definition 44. The process $\{X_n\}$ is a Markov chain if it satisfies the Markov property:
$$P(X_n = j \mid X_0 = x_0, \dots, X_{n-1} = i) = P(X_n = j \mid X_{n-1} = i)$$
for all $i, j, x_0, \dots, x_{n-2} \in S$ and for all $n = 1, 2, 3, \dots$

The Markov property implies that
$$P(X_{n_k} = j \mid X_{n_0} = x_0, \dots, X_{n_{k-1}} = i) = P(X_{n_k} = j \mid X_{n_{k-1}} = i)$$
for all $k$, all $n_0 \le n_1 \le \cdots \le n_{k-1} \le n_k$, and all $i, j, x_0, \dots \in S$. Also,
$$P(X_{n+m} = j \mid X_0 = x_0, \dots, X_m = i) = P(X_{n+m} = j \mid X_m = i).$$

Transition probability

The evolution of a Markov chain is defined by its transition probability, defined by $P(X_{n+1} = j \mid X_n = i)$ (where, without loss of generality, we may assume that $S$ is a set of integers).

Definition 45. The chain $\{X_n\}$ is called homogeneous if its transition probabilities do not depend on time, i.e.,
$$P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i)$$
for all $n, i, j$. The transition probability matrix $P = [p_{i,j}]$ is the $|S| \times |S|$ matrix of the transition probabilities, such that
$$p_{i,j} = P(X_{n+1} = j \mid X_n = i).$$
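As an illustration (not part of the original slides), here is a minimal Python sketch of simulating a homogeneous chain from a given transition matrix; the 3-state matrix P below is a hypothetical example used throughout the sketches in this lecture.

```python
import numpy as np

def simulate_chain(P, x0, n_steps, rng=None):
    """Simulate a homogeneous Markov chain with transition matrix P,
    starting from state x0, for n_steps transitions."""
    rng = rng or np.random.default_rng()
    states = [x0]
    for _ in range(n_steps):
        # Draw the next state from row P[current, :], i.e. from the
        # conditional pmf P(X_{n+1} = j | X_n = current).
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
print(simulate_chain(P, x0=0, n_steps=10))
```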

Property of the transition matrix

Theorem 46. The transition matrix $P$ of a Markov chain is a stochastic matrix, that is, it has non-negative elements such that
$$\sum_{j \in S} p_{i,j} = 1$$
(the sum of the elements on each row is 1).

In order to characterize the probabilities of $n$-step transitions, we introduce the $n$-step transition probability matrix with elements
$$p_{i,j}(m, m+n) = P(X_{m+n} = j \mid X_m = i).$$
By homogeneity, we have that $P(m, m+1) = P$. Furthermore, $P(m, m+n) = P(n)$ does not depend on $m$.

Chapman-Kolmogorov equation

Theorem 47.
$$p_{i,j}(m, m+n+r) = \sum_k p_{i,k}(m, m+n)\, p_{k,j}(m+n, m+n+r)$$
Therefore, $P(m, m+n+r) = P(m, m+n)\, P(m+n, m+n+r)$. It follows that for homogeneous Markov chains $P(m, m+n) = P^n$, i.e., $P(n) = P^n$.
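A quick numerical check of the Chapman-Kolmogorov identity in matrix form, $P^{n+r} = P^n P^r$, using the same hypothetical matrix as above:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])

n, r = 3, 4
lhs = np.linalg.matrix_power(P, n + r)                              # P(n+r)
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, r)   # P(n) P(r)
assert np.allclose(lhs, rhs)   # Chapman-Kolmogorov holds
```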

Long-term evolution

We let $u(n)$ denote the pmf of $X_n$; that is, for each $n$, $u(n)$ is a (row) vector with $|S|$ non-negative components that sum to 1.

Lemma 33. $u(m+n) = u(m) P^n$, and hence $u(n) = u(0) P^n$.

This describes the pmf of $X_n$ in terms of the initial state pmf $u(0)$.
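Lemma 33 translates directly into a vector-matrix product; a sketch, where the initial pmf u0 is a hypothetical example:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
u0 = np.array([1.0, 0.0, 0.0])   # start deterministically in state 0

# u(n) = u(0) P^n, computed by repeated right-multiplication.
u = u0
for n in range(1, 6):
    u = u @ P
    print(n, u)                  # pmf of X_n; each printed vector sums to 1
```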

First step and last step conditioning

We have a MC with transition matrix $P$. Let $A \subset S$ denote a subset of states. For $i \notin A$, we are interested in the probability
$$P(X_k \in A \text{ for some } k = 1, \dots, m \mid X_0 = i) = P\Big(\bigcup_{k=1}^m \{X_k \in A\} \,\Big|\, X_0 = i\Big).$$
Define the stopping time $N = \min\{n \ge 1 : X_n \in A\}$, and the new MC
$$W_n = \begin{cases} X_n & n < N \\ a & n \ge N \end{cases}$$
where $a$ is a special symbol.

$\{W_n\}$ has transition probability matrix
$$q_{i,j} = p_{i,j} \quad \text{if } i \notin A,\ j \notin A,$$
$$q_{i,a} = \sum_{j \in A} p_{i,j} \quad \text{if } i \notin A,$$
$$q_{a,a} = 1.$$
Notice that the original MC enters a state in $A$ by time $m$ if and only if $W_m = a$; then
$$P\Big(\bigcup_{k=1}^m \{X_k \in A\} \,\Big|\, X_0 = i\Big) = P(W_m = a \mid W_0 = i) = q_{i,a}(m) = [Q^m]_{i,a}.$$
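A sketch of this construction: collapse $A$ into a single absorbing symbol $a$ and read the hitting probability off a power of $Q$. The 3-state P and the set A = {2} are hypothetical, as before.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
A = {2}                                  # hypothetical target set
keep = [s for s in range(len(P)) if s not in A]

# Build Q: states outside A, plus one absorbing symbol 'a' (last index).
n = len(keep)
Q = np.zeros((n + 1, n + 1))
for r, i in enumerate(keep):
    for c, j in enumerate(keep):
        Q[r, c] = P[i, j]                # q_{i,j} = p_{i,j} for i, j not in A
    Q[r, n] = sum(P[i, j] for j in A)    # q_{i,a} = sum over j in A of p_{i,j}
Q[n, n] = 1.0                            # q_{a,a} = 1

m = 5
Qm = np.linalg.matrix_power(Q, m)
# P(chain started at state 0 enters A by time m) = [Q^m]_{0,a}
print(Qm[0, n])
```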

Now we are interested in the probability of never entering the subset of states $A$ in the times $0, \dots, m$; i.e., for $i, j \notin A$ we wish to compute
$$P\Big(\{X_m = j\} \cap \bigcap_{k=1}^{m-1} \{X_k \notin A\} \,\Big|\, X_0 = i\Big).$$
The above condition is equivalent to $W_m = j$ conditionally on $W_0 = i$, hence
$$P\Big(\{X_m = j\} \cap \bigcap_{k=1}^{m-1} \{X_k \notin A\} \,\Big|\, X_0 = i\Big) = q_{i,j}(m).$$

Next, we wish to consider the first entrance probability: for $i \notin A$ and $j \in A$,
$$P\Big(\{X_m = j\} \cap \bigcap_{k=1}^{m-1} \{X_k \notin A\} \,\Big|\, X_0 = i\Big) = \sum_{r \notin A} P\Big(\{X_m = j\} \cap \{X_{m-1} = r\} \cap \bigcap_{k=1}^{m-2} \{X_k \notin A\} \,\Big|\, X_0 = i\Big)$$
$$= \sum_{r \notin A} P(X_m = j \mid X_{m-1} = r)\, P\Big(\{X_{m-1} = r\} \cap \bigcap_{k=1}^{m-2} \{X_k \notin A\} \,\Big|\, X_0 = i\Big) = \sum_{r \notin A} p_{r,j}\, q_{i,r}(m-1).$$
When $i \in A$ and $j \notin A$, we can determine $P\big(\{X_m = j\} \cap \bigcap_{k=1}^{m-1} \{X_k \notin A\} \,\big|\, X_0 = i\big)$ as follows.

By conditioning on the first transition:
$$P\Big(\{X_m = j\} \cap \bigcap_{k=1}^{m-1} \{X_k \notin A\} \,\Big|\, X_0 = i\Big) = \sum_{r \notin A} P\Big(\{X_m = j\} \cap \bigcap_{k=2}^{m-1} \{X_k \notin A\} \cap \{X_1 = r\} \,\Big|\, X_0 = i\Big)$$
$$= \sum_{r \notin A} P\Big(\{X_m = j\} \cap \bigcap_{k=2}^{m-1} \{X_k \notin A\} \,\Big|\, X_1 = r\Big)\, P(X_1 = r \mid X_0 = i) = \sum_{r \notin A} q_{r,j}(m-1)\, p_{i,r}.$$

Finally, we can calculate the conditional probability of $X_m = j$ given that the chain starts at $X_0 = i$ and has not entered the subset $A$ at any time up to time $m$. For $i, j \notin A$, we have
$$P\Big(X_m = j \,\Big|\, \{X_0 = i\} \cap \bigcap_{k=1}^{m} \{X_k \notin A\}\Big) = \frac{P\big(\{X_m = j\} \cap \bigcap_{k=1}^{m} \{X_k \notin A\} \,\big|\, X_0 = i\big)}{P\big(\bigcap_{k=1}^{m} \{X_k \notin A\} \,\big|\, X_0 = i\big)}$$
$$= \frac{P\big(\{X_m = j\} \cap \bigcap_{k=1}^{m-1} \{X_k \notin A\} \,\big|\, X_0 = i\big)}{\sum_{r \notin A} P\big(\{X_m = r\} \cap \bigcap_{k=1}^{m-1} \{X_k \notin A\} \,\big|\, X_0 = i\big)} = \frac{q_{i,j}(m)}{\sum_{r \notin A} q_{i,r}(m)}.$$
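Since paths that avoid $A$ never see the absorbing symbol, the taboo probabilities $q_{i,j}(m)$ for $i, j \notin A$ are simply powers of the restriction of $P$ to the non-$A$ states. A sketch of the conditional distribution of $X_m$ given avoidance of $A$, with the same hypothetical P and A as above:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
A = {2}
keep = [s for s in range(len(P)) if s not in A]

# Restriction of P to the states outside A: its m-th power gives q_{i,j}(m).
Q_sub = P[np.ix_(keep, keep)]
m, i = 5, 0
q_m = np.linalg.matrix_power(Q_sub, m)[keep.index(i)]  # row (q_{i,j}(m), j not in A)

# Conditional pmf of X_m given the chain avoided A up to time m.
print(q_m / q_m.sum())
```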

Classification of states (1)

Definition 46. A state $i \in S$ is called persistent (or recurrent) if
$$P(X_n = i \text{ for some } n \ge 1 \mid X_0 = i) = 1.$$
Otherwise, if the above probability is strictly less than 1, the state is called transient.

We are interested in the first passage probability
$$f_{i,j}(n) = P(X_1 \ne j,\ X_2 \ne j,\ \dots,\ X_{n-1} \ne j,\ X_n = j \mid X_0 = i).$$
We define $f_{i,j} = \sum_{n=1}^{\infty} f_{i,j}(n)$. Notice: state $j$ is persistent if and only if $f_{j,j} = 1$.

We define the generating functions
$$P_{i,j}(s) = \sum_{n=0}^{\infty} p_{i,j}(n)\, s^n, \qquad F_{i,j}(s) = \sum_{n=0}^{\infty} f_{i,j}(n)\, s^n$$
with the conventions $p_{i,j}(0) = \delta_{i,j}$ and $f_{i,j}(0) = 0$ for all $i, j \in S$. We shall assume $|s| < 1$, since in this way the generating functions converge, and occasionally evaluate limits for $s \uparrow 1$ using Abel's theorem. Notice that $f_{i,j} = F_{i,j}(1)$.

Theorem 48. a) $P_{i,i}(s) = 1 + P_{i,i}(s) F_{i,i}(s)$; b) $P_{i,j}(s) = F_{i,j}(s) P_{j,j}(s)$, for $i \ne j$.
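Theorem 48a is the generating-function form of the renewal decomposition $p_{i,i}(n) = \sum_{k=1}^{n} f_{i,i}(k)\, p_{i,i}(n-k)$ for $n \ge 1$; inverting this convolution gives a simple recursion for the first-passage probabilities. A numerical sketch with the hypothetical P used earlier:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
i, N = 0, 50

# p_{i,i}(n) for n = 0..N via matrix powers (p_{i,i}(0) = 1).
p = [np.linalg.matrix_power(P, n)[i, i] for n in range(N + 1)]

# From p(n) = sum_{k=1}^{n} f(k) p(n-k) and p(0) = 1:
f = [0.0] * (N + 1)
for n in range(1, N + 1):
    f[n] = p[n] - sum(f[k] * p[n - k] for k in range(1, n))
print(sum(f))   # partial sum of f_{i,i}(n); it approaches 1 for a persistent state
```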

Classification of states (2)

Corollary 6. a) State $j$ is persistent if $\sum_n p_{j,j}(n) = \infty$. If this holds, then $\sum_n p_{i,j}(n) = \infty$ for all states $i$ such that $f_{i,j} > 0$. b) State $j$ is transient if $\sum_n p_{j,j}(n) < \infty$. If this holds, then $\sum_n p_{i,j}(n) < \infty$ for all $i$. c) If $j$ is transient, then $p_{i,j}(n) \to 0$ as $n \to \infty$.

Let $N(i)$ denote the number of times a chain visits state $i$ given that its starting point is $i$. It is intuitively clear that
$$P(N(i) = \infty) = \begin{cases} 1 & i \text{ is persistent} \\ 0 & i \text{ is transient} \end{cases}$$

Let $T_j = \min\{n \ge 1 : X_n = j\}$ denote the time of the first visit to state $j$. This is a random variable, with the convention that $T_j = \infty$ if the visit never occurs. We have that $P(T_j = \infty \mid X_0 = j) > 0$ if and only if $j$ is transient, and in this case $E[T_j \mid X_0 = j] = \infty$.

We define the mean recurrence time $\mu_j$ of a state $j$ as
$$\mu_j = E[T_j \mid X_0 = j] = \begin{cases} \sum_n n\, f_{j,j}(n) & j \text{ is persistent} \\ \infty & j \text{ is transient} \end{cases}$$
Notice: $\mu_j$ may be infinite even though the state is persistent!

Definition 47. A persistent state $j$ is called null if $\mu_j = \infty$, and non-null (or positive) if $\mu_j < \infty$.

Theorem 49. A persistent state $i$ is null if and only if $p_{i,i}(n) \to 0$ as $n \to \infty$. If this holds, then $p_{j,i}(n) \to 0$ for all $j$.

Let $d(i)$ denote the period of state $i$, defined as
$$d(i) = \gcd\{n : p_{i,i}(n) > 0\}.$$
We call $i$ periodic if $d(i) > 1$, and aperiodic if $d(i) = 1$.

Definition 48. A state is called ergodic if it is persistent, non-null, and aperiodic.
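For a finite chain the period can be estimated directly from the definition by scanning return times $n$ up to some horizon; a rough sketch (using a numerical positivity threshold, and the two-state flip-flop chain that reappears on the periodicity slide below):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, n_max=100):
    """gcd of the return times {n <= n_max : p_{i,i}(n) > 0}."""
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:   # treat tiny values as zero
            returns.append(n)
    return reduce(gcd, returns) if returns else 0

P_flip = np.array([[0.0, 1.0],
                   [1.0, 0.0]])   # p_{1,2} = p_{2,1} = 1
print(period(P_flip, 0))         # prints 2: the state is periodic
```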

Classification of chains (1)

We say that state $i$ communicates with state $j$ (written $i \to j$) if $p_{i,j}(n) > 0$ for some $n \ge 0$. In this case, state $j$ can be reached from state $i$ with positive probability. We say that states $i$ and $j$ intercommunicate if $i \to j$ and $j \to i$; in this case, we write $i \leftrightarrow j$.

Clearly $i \leftrightarrow i$, since $p_{i,i}(0) = 1$. Also, $i \to j$ if and only if $f_{i,j} > 0$. Intercommunication $\leftrightarrow$ induces an equivalence relation on the state space: in fact, if $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$. Hence $S$ can be partitioned into equivalence classes of mutually intercommunicating states.

Theorem 50. If $i \leftrightarrow j$, then: a) $i$ and $j$ have the same period; b) $i$ is transient if and only if $j$ is transient; c) $i$ is null persistent if and only if $j$ is null persistent.

Classification of chains (2)

Definition 49. A set $C \subset S$ of states is called: a) closed if $p_{i,j} = 0$ for all $i \in C$ and $j \notin C$; b) irreducible if $i \leftrightarrow j$ for all $i, j \in C$.

Once the chain enters a closed set of states, it will never leave it (it is trapped). A closed set of cardinality 1 (a single state) is called an absorbing state. For example, the state 0 in a branching process is an absorbing state. Equivalence classes with respect to the relation $\leftrightarrow$ are irreducible. We call an irreducible set $C$ aperiodic, null persistent, persistent, etc., if all states in the set have the property.

Theorem 51 (Decomposition theorem). The state space $S$ can be uniquely partitioned as
$$S = T \cup C_1 \cup C_2 \cup \cdots$$
where $T$ is the set of transient states, and the $C_i$ are irreducible closed sets of persistent states.

Qualitative behavior of Markov chains

In light of Theorem 51, we can identify the following behaviors: if the chain starts in some state $i \in C_r$, then it stays in $C_r$ forever; if the chain starts in $T$, then either it stays in $T$ forever or it eventually ends up in some $C_r$. If $S$ is finite (finite-state Markov chain), the first possibility cannot occur:

Lemma 34. If $S$ is finite, then at least one state is persistent and all persistent states are non-null.

Example

Let $S = \{1, 2, 3, 4, 5, 6\}$ and consider the transition matrix $P$ (the matrix entries are not reproduced in this transcription).

Stationary distribution

Definition 50. The vector $\pi$ is called a stationary distribution of the chain if it has entries $\{\pi_j : j \in S\}$ such that: a) $\pi_j \ge 0$ for all $j$, and $\sum_{j \in S} \pi_j = 1$; b) it satisfies $\pi = \pi P$, that is, $\pi_j = \sum_i \pi_i p_{i,j}$ for all $j \in S$.

This is called a stationary distribution since, if $X_0$ is distributed with $u(0) = \pi$, then all $X_n$ have the same distribution; in fact,
$$u(n) = u(0) P^n = \pi P^n = (\pi P) P^{n-1} = \pi P^{n-1} = \cdots = \pi.$$

Given the classification of chains and the decomposition theorem, we shall assume that the chain is irreducible; that is, its state space is formed by a single equivalence class of intercommunicating (persistent) states $C$, or by the class of transient states $T$.
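A sketch of computing $\pi$ numerically, by solving $\pi = \pi P$ together with the normalization $\sum_j \pi_j = 1$ as a least-squares system (the hypothetical P from the earlier sketches; a left-eigenvector routine would work equally well):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
n = len(P)

# pi (P - I) = 0 plus sum(pi) = 1, stacked into one overdetermined system.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi, pi @ P)   # pi and pi P agree: pi is stationary
```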

Stationary distribution and mean recurrence time

Theorem 52. An irreducible chain has a stationary distribution $\pi$ if and only if all states are non-null persistent. In this case, $\pi$ is unique and satisfies $\pi_j = \frac{1}{\mu_j}$, where $\mu_j$ is the mean recurrence time of state $j$.

Solution of the linear equation $x = xP$

Fix a state $k \in S$. Let $\rho_i(k)$ denote the mean number of visits of the chain to state $i$ between two successive visits to state $k$. Defining
$$N_i = \sum_{n=1}^{\infty} \mathbf{1}\{X_n = i,\ T_k \ge n\}$$
where $T_k$ is the time of first passage to state $k$, we have
$$\rho_i(k) = E[N_i \mid X_0 = k] = \sum_{n=1}^{\infty} P(X_n = i,\ T_k \ge n \mid X_0 = k).$$

Since $k$ is persistent,
$$\rho_k(k) = \sum_{n=1}^{\infty} P(X_n = k,\ T_k \ge n \mid X_0 = k) = \sum_{n=1}^{\infty} P(T_k = n \mid X_0 = k) = \sum_{n=1}^{\infty} f_{k,k}(n) = 1.$$
It is also clear that $T_k = \sum_{i \in S} N_i$, since the time between two visits to state $k$ must be spent somewhere else. Taking expectations, we find
$$\mu_k = \sum_{i \in S} \rho_i(k);$$
this shows that the vector $\rho(k) = \{\rho_i(k) : i \in S\}$ contains the terms whose sum is equal to the mean recurrence time of state $k$, denoted as usual by $\mu_k$.

Lemma 35. For any state $k$ of an irreducible persistent chain, the vector $\rho(k)$ satisfies $\rho_i(k) < \infty$ for all $i$, and furthermore $\rho(k) = \rho(k) P$.

For an irreducible persistent chain, we have proved that the equation $x = xP$ has a solution $\rho(k)$ with non-negative entries that sum to $\mu_k$. If state $k$ is non-null, then $\mu_k < \infty$, and therefore the vector $\pi = \rho(k)/\mu_k$ is non-negative, sums to 1, and solves the equation $x = xP$. It follows that $\pi = \rho(k)/\mu_k$ is a stationary distribution. This shows that every non-null persistent irreducible chain has a stationary distribution, summarized as follows:

Theorem 53. If the chain is irreducible and persistent, there exists a positive root $x$ of the equation $x = xP$, which is unique up to a multiplicative constant. The chain is non-null if $\sum_i x_i < \infty$ and null if $\sum_i x_i = \infty$.
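A Monte Carlo sketch of this construction: estimate $\rho_i(k)$ and $\mu_k$ by averaging over excursions between successive visits to $k$, and compare $\rho_i(k)/\mu_k$ with the stationary distribution computed earlier (hypothetical P again):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])
rng = np.random.default_rng(0)
k, n_excursions = 0, 20000

visits = np.zeros(len(P))       # accumulates N_i over excursions from k
lengths = 0                     # accumulates T_k over excursions
for _ in range(n_excursions):
    x = k
    while True:
        x = rng.choice(len(P), p=P[x])
        visits[x] += 1          # counts visits at times 1, ..., T_k
        lengths += 1
        if x == k:              # first return to k ends the excursion
            break

rho = visits / n_excursions     # estimate of rho_i(k); rho_k(k) is 1
mu_k = lengths / n_excursions   # estimate of mu_k = sum_i rho_i(k)
print(rho / mu_k)               # approximately the stationary distribution
```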

Example: irreducible class of the previous example

Let $S = \{1, 2\}$ and consider the transition matrix $P$ (the matrix entries are not reproduced in this transcription).

Criterion for non-null persistency of irreducible chains

Consider an irreducible chain (notice: irreducibility must be checked directly, by inspection). In order to check whether the chain is non-null persistent, we just have to look for a stationary distribution: since Theorem 52 is a necessary and sufficient condition, if we find $\pi$, then we conclude that the chain is non-null persistent.

Criterion for transience of irreducible chains

Theorem 54. Let $s \in S$ be a state of an irreducible chain. The chain is transient if and only if there exists a non-zero solution $\{y_j : j \ne s\}$, satisfying $|y_j| \le 1$ for all $j$, to the system of equations
$$y_i = \sum_{j : j \ne s} p_{i,j}\, y_j, \qquad \forall\, i \ne s.$$

Criterion for persistency and null persistency

It follows from Theorem 54 that an irreducible chain is persistent if and only if the only bounded solution to the system of equations in Theorem 54 is the zero vector, $y_j = 0$ for all $j$. If this happens, but the chain has no stationary distribution, then we conclude that the chain is null persistent. Another condition for persistency is given as follows:

Theorem 55. Let $s \in S$ be a state of an irreducible chain with $S = \{0, 1, 2, \dots\}$. The chain is persistent if there exists a solution $\{y_j : j \ne s\}$ to the system of inequalities
$$y_i \ge \sum_{j : j \ne s} p_{i,j}\, y_j, \qquad \forall\, i \ne s,$$
such that $y_i \to \infty$ as $i \to \infty$.

Ergodic behavior of Markov chains

We are interested in the limit of the $n$-th order transition probabilities $p_{i,j}(n)$ for large $n$. Periodicity causes some difficulties, as the following example shows.

Example: consider $S = \{1, 2\}$, with $p_{1,2} = p_{2,1} = 1$. Then,
$$p_{1,1}(n) = p_{2,2}(n) = \begin{cases} 0 & \text{if } n \text{ is odd} \\ 1 & \text{if } n \text{ is even} \end{cases}$$
It is clear that if states are periodic, the sequence of $n$-th order transition probabilities may not converge.

Ergodic theorem for Markov chains

Theorem 56. For an irreducible aperiodic chain, we have that
$$\lim_{n \to \infty} p_{i,j}(n) = \frac{1}{\mu_j} \qquad \forall\, i, j \in S.$$

Remarks: If the chain is transient or null persistent, then $p_{i,j}(n) \to 0$ for all $i, j$, since $\mu_j = \infty$. If the chain is non-null persistent, then $p_{i,j}(n) \to \frac{1}{\mu_j} = \pi_j > 0$, independent of $i$, where $\pi$ is the unique stationary distribution. From Theorem 56 we see that, in any case, for an irreducible aperiodic chain the limit of $p_{i,j}(n)$ is independent of $X_0 = i$.
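A numerical illustration of Theorem 56: for an irreducible aperiodic chain, every row of $P^n$ converges to the same stationary vector (the hypothetical P from the earlier sketches):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.4, 0.6]])

Pn = np.linalg.matrix_power(P, 100)
print(Pn)   # all rows (nearly) identical: p_{i,j}(n) -> pi_j, independent of i
```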

Therefore, the chain forgets its starting point. It follows that
$$P(X_n = j) = \sum_i P(X_0 = i)\, p_{i,j}(n) \to \frac{1}{\mu_j}$$
irrespective of the initial pmf vector $u(0) = \{P(X_0 = i),\ i \in S\}$.

If the chain $\{X_n\}$ is periodic with period $d$, it turns out that a sampling of the chain at integer multiples of the period, $\{Y_n = X_{dn} : n \ge 0\}$, is aperiodic. In this case, letting $q_{j,j}(n) = P(Y_n = j \mid Y_0 = j)$, it turns out that
$$q_{j,j}(n) = P(X_{nd} = j \mid X_0 = j) = p_{j,j}(nd) \to \frac{d}{\mu_j},$$
or, equivalently, the mean recurrence time of the new chain $\{Y_n\}$ is equal to $1/d$ times the mean recurrence time of the original chain.

End of Lecture 11
