STATS 3U03
Sang Woo Park
March 29, 2017

Course Outline

Textbook: Introduction to Stochastic Processes
Requirement: 5 assignments, 2 tests, and 1 final
Test 1: Friday, February 10th
Test 2: Friday, March 17th

Contents

1 Introduction
  1.1 Review
  1.2 Stochastic processes
2 Markov chains (Discrete time Markov chains)
  2.1 Markov property
  2.2 Transition function and initial distribution
  2.3 Joint distribution
  2.4 Recurrence
  2.5 Absorption probabilities
  2.6 Birth-Death Markov Chain
  2.7 Branching process
3 Stationary distribution
  3.1 Stationary distribution
  3.2 Positive recurrence
4 Long time behaviour
  4.1 Period of a state
  4.2 Long time behaviour

5 Continuous Time Markov Chain
  5.1 Exponential distribution
  5.2 Recurrent and transient states
  5.3 Continuous time Birth-Death Markov Chain

1 Introduction

1.1 Review

Definition 1.1 (Independent random variables). $X$ and $Y$ are independent iff for any $a, b \in \mathbb{R}$,
$$P(X \le a, Y \le b) = P(X \le a)P(Y \le b).$$

1.2 Stochastic processes

Definition 1.2 (Stochastic process). Let $T$ be a subset of $[0, +\infty)$. For each $t \in T$, let $X_t$ be a random variable. Then the collection $\{X_t : t \in T\}$ is called a stochastic process. Simply put, a stochastic process is just a family of random variables.

Example 1.2.1. Let $T = \{0\}$. Then $\{X_0\}$ is a stochastic process.

Example 1.2.2. Let $T = \{1, 2, 3, \dots, m\}$ be a finite set of natural numbers. Then $\{X_1, X_2, X_3, \dots, X_m\}$ is a stochastic process.

Example 1.2.3. Let $T = \{0, 1, 2, \dots\}$ be the set of all non-negative integers. Then $\{X_0, X_1, X_2, \dots\}$ is a stochastic process.

Example 1.2.4. Let $T = [0, +\infty)$ be the set of all non-negative real numbers. Then $\{X_t : t \ge 0\}$ is a stochastic process.

Definition 1.3 (Time index). Let $T$ be the time index. If $T = \{0, 1, 2, \dots\}$, then time is discrete. If $T = [0, \infty)$, then time is continuous.

Definition 1.4 (State space). The state space, $S$, is the space where the random variables take their values. Given a sample space and a time index $t \in T$, we can define $X_t(\omega) \in S$ to describe a stochastic process. Here, $\{X_t : t \in T\}$ describes the dependence relation.

We can further categorize a stochastic process by considering two cases for the state space, countable and uncountable, and two cases for the time index, discrete and continuous. Note that each stochastic process must belong to one of the four resulting categories.

Remark. Every stochastic process can be described by the following three factors:
1. Time index
2. State space
3. Dependence relation

Example 1.2.5. Let $S = \{0, 1\}$ and $T = \{0, 1, 2, \dots\}$. Given, independently for each $n$,
$$X_n = \begin{cases} 1 & \text{with probability } 1/2 \\ 0 & \text{with probability } 1/2 \end{cases}$$
$\{X_0, X_1, X_2, \dots\}$ is a stochastic process, often called Bernoulli trials.

2 Markov chains (Discrete time Markov chains)

We will only be dealing with discrete time Markov chains in chapters 1 and 2. In other words, $T = \{0, 1, 2, \dots\}$. It follows that the state space $S$ is at most countable. Finally, "Markov" describes the dependence relation among $X_0, X_1, X_2, \dots$. In Example 1.2.5, every trial of the Bernoulli trials was independent. On the other hand, in a Markov model, $X_{n+1}$ depends on $X_n$ but not on any past states $X_0, X_1, \dots, X_{n-1}$.

2.1 Markov property

Definition 2.1. The Markov property can be expressed as follows:
$$P(X_{n+1} = x_{n+1} \mid X_0 = x_0, \dots, X_{n-1} = x_{n-1}, X_n = x_n) = P(X_{n+1} = x_{n+1} \mid X_n = x_n)$$
$P(X_{n+1} = y \mid X_n = x)$ is called the transition probability and describes the one-step transition from $x$ to $y$ starting at time $n$. If $P(X_{n+1} = y \mid X_n = x) = P(X_1 = y \mid X_0 = x)$, then the Markov chain is said to have stationary transitions, or to be homogeneous.

Definition 2.2. Let $\{X_n : n = 0, 1, 2, \dots\}$ be a homogeneous Markov chain. Then
$$P_{xy} = P(X_1 = y \mid X_0 = x) = P(X_{n+1} = y \mid X_n = x)$$
is the one-step transition probability.

Definition 2.3. Following Definition 2.2, we can now define the one-step transition matrix:
$$P = (P_{xy})_{x, y \in S}$$

Remark. Given $X_0$, $\pi_0(x) = P(X_0 = x)$ is called the initial distribution.

Given a Markov chain, we wish to answer the following fundamental questions:
1. The distribution of $X_n$ for any $n \ge 1$.
2. The joint distribution of $X_{n_1}, \dots, X_{n_k}$ for any $1 \le n_1 < n_2 < \cdots < n_k$.
3. Long time behaviour, i.e. $\lim_{n\to\infty} P(X_n = x)$.

Example 2.1.1. We have the following Markov chain: $\{X_n : n = 0, 1, 2, \dots\}$ where $S = \{0, 1\}$. For this model, the initial distribution can be described as follows:
$$\pi_0(0) = P(X_0 = 0) = a, \qquad \pi_0(1) = 1 - a$$

Transition probabilities can be written in a similar fashion:
$$P(X_1 = 1 \mid X_0 = 0) = p, \qquad P(X_1 = 0 \mid X_0 = 0) = 1 - p,$$
$$P(X_1 = 0 \mid X_0 = 1) = q, \qquad P(X_1 = 1 \mid X_0 = 1) = 1 - q,$$
where $0 \le p, q \le 1$. For this Markov chain, we can consider the following three cases:

Case 1. $p = q = 0$. This case is trivial.

Case 2. $p = q = 1$. This case is also trivial.

Case 3. $0 < p + q < 2$.
$$\begin{aligned}
P(X_{n+1} = 0) &= P(X_{n+1} = 0, X_n = 0) + P(X_{n+1} = 0, X_n = 1) \\
&= P(X_n = 0)P(X_{n+1} = 0 \mid X_n = 0) + P(X_n = 1)P(X_{n+1} = 0 \mid X_n = 1) \\
&= P(X_n = 0)(1 - p) + P(X_n = 1)q \\
&= P(X_n = 0)(1 - p) + (1 - P(X_n = 0))q \\
&= (1 - p - q)P(X_n = 0) + q
\end{aligned}$$
We can further expand this recursion:
$$\begin{aligned}
P(X_n = 0) &= (1 - p - q)P(X_{n-1} = 0) + q \\
&= (1 - p - q)\big[(1 - p - q)P(X_{n-2} = 0) + q\big] + q \\
&= (1 - p - q)^n P(X_0 = 0) + q\sum_{j=0}^{n-1}(1 - p - q)^j
\end{aligned}$$
Note that
$$\sum_{j=0}^{n-1}(1 - p - q)^j = \frac{(1 - p - q)^n - 1}{(1 - p - q) - 1}.$$
Therefore, we have
$$P(X_n = 0) = (1 - p - q)^n a + q\,\frac{(1 - p - q)^n - 1}{(1 - p - q) - 1} = (1 - p - q)^n a - \frac{q}{p + q}\big((1 - p - q)^n - 1\big).$$
For this Markov chain, we find that
$$\lim_{n\to\infty} P(X_n = 0) = \frac{q}{p + q}.$$
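This limit is easy to check numerically. A minimal sketch (not part of the original notes; the chain and the constants $p$, $q$, $a$ are those of Example 2.1.1, with illustrative values):

```python
import numpy as np

# Two-state chain of Example 2.1.1: P(0 -> 1) = p, P(1 -> 0) = q,
# started from pi_0 = (a, 1 - a).
p, q, a = 0.3, 0.2, 1.0
P = np.array([[1 - p, p],
              [q, 1 - q]])

pi_n = np.array([a, 1 - a])
for _ in range(200):              # pi_n = pi_0 P^n
    pi_n = pi_n @ P

print(pi_n[0], q / (p + q))       # both approach q/(p+q) = 0.4
```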

2.2 Transition function and initial distribution

Recall that for a homogeneous chain, $P_{xy} = P(X_{n+1} = y \mid X_n = x) = P(X_1 = y \mid X_0 = x)$.

Definition 2.4. A transition function $P(x, y) : S \times S \to [0, 1]$ satisfies the following conditions:
1. $P(x, y) \ge 0$
2. $\sum_{y \in S} P(x, y) = 1$ for all $x \in S$.

Definition 2.5. Given a transition function $P(x, y)$, a transition matrix is defined as $P = (P(x, y))_{x, y \in S}$.

Example 2.2.1. $P = (1)$: a single state that the chain never leaves.

Example 2.2.2.
$$P = \begin{pmatrix} 1/4 & 1/2 & 1/4 \\ 1/8 & 1/4 & 5/8 \\ 0 & 1/4 & 3/4 \end{pmatrix}$$

Definition 2.6. The initial distribution is a probability mass function (pmf) defined by $\pi_0(x) = P(X_0 = x)$. Note that it must satisfy the following conditions:
1. $\pi_0(x) \ge 0$
2. $\sum_{x \in S} \pi_0(x) = 1$

Theorem 2.1. Let $\{X_n : n = 0, 1, 2, \dots\}$ be a Markov chain with initial distribution $\pi_0(x)$ and one-step transition matrix $P = (P(x, y))_{x,y \in S}$. Then the distribution of $X_n$ is
$$P(X_n = x_n) = \sum_{x_0 \in S}\sum_{x_1 \in S}\cdots\sum_{x_{n-1} \in S} \pi_0(x_0)P(x_0, x_1)\cdots P(x_{n-1}, x_n),$$
i.e. $\pi_n = \pi_0\underbrace{PP\cdots P}_{n}$.

Proof. For any $n \ge 1$ and $x_n \in S$,
$$P(X_n = x_n) = P(X_n = x_n, X_0 \in S, X_1 \in S, \dots, X_{n-1} \in S) = \sum_{x_0 \in S}\sum_{x_1 \in S}\cdots\sum_{x_{n-1} \in S} P(X_n = x_n, X_0 = x_0, \dots, X_{n-1} = x_{n-1})$$

Note that
$$P(X_n = x_n, X_0 = x_0, \dots, X_{n-1} = x_{n-1}) = P(X_0 = x_0)P(X_1 = x_1 \mid X_0 = x_0)P(X_2 = x_2 \mid X_0 = x_0, X_1 = x_1)\cdots P(X_n = x_n \mid X_0 = x_0, \dots, X_{n-1} = x_{n-1})$$
Using the Markov property, it is evident that the expression above is equivalent to
$$P(X_0 = x_0)P(X_1 = x_1 \mid X_0 = x_0)\cdots P(X_n = x_n \mid X_{n-1} = x_{n-1}).$$

Example 2.2.3. The simple random walk is a Markov chain with $S = \{0, \pm 1, \pm 2, \dots\}$:
$$X_0 = 0, \qquad X_{n+1} = \begin{cases} X_n + 1 & \text{with probability } p \\ X_n - 1 & \text{with probability } q \end{cases}$$

Example 2.2.4 (Ehrenfest chain). Suppose that we have a box and an invisible bar that divides the box into regions I and II. $d$ balls are placed in the box; initially, $n$ balls are in region I and $d - n$ balls are in region II. You pick a ball at random. If it is from region I, you put it in region II; if it is from region II, you put it in region I.

First, note that this Markov chain has state space $S = \{0, 1, 2, \dots, d\}$, where the state is the number of balls in region I. We observe that
$$P(0, y) = \begin{cases} 1 & y = 1 \\ 0 & \text{else} \end{cases} \qquad P(1, y) = \begin{cases} \frac{1}{d} & y = 0 \\ \frac{d-1}{d} & y = 2 \\ 0 & \text{else} \end{cases}$$
In general, we have
$$P(x, y) = \begin{cases} \frac{d - x}{d} & y = x + 1 \\ \frac{x}{d} & y = x - 1 \\ 0 & \text{else} \end{cases}$$
Combining these results, we have the following $(d+1)\times(d+1)$ transition matrix:
$$P = \begin{pmatrix} 0 & 1 & & & \\ \frac1d & 0 & \frac{d-1}{d} & & \\ & \frac2d & 0 & \frac{d-2}{d} & \\ & & \ddots & \ddots & \ddots \\ & & & 1 & 0 \end{pmatrix}$$
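A short sketch (not from the notes; $d = 10$ is an illustrative choice) that builds the Ehrenfest transition matrix of Example 2.2.4 and computes $\pi_n = \pi_0 P^n$ as in Theorem 2.1:

```python
import numpy as np

d = 10                            # number of balls (Example 2.2.4)
P = np.zeros((d + 1, d + 1))      # state x = number of balls in region I
for x in range(d + 1):
    if x > 0:
        P[x, x - 1] = x / d           # picked ball was in region I
    if x < d:
        P[x, x + 1] = (d - x) / d     # picked ball was in region II

pi = np.zeros(d + 1); pi[0] = 1.0     # X_0 = 0
for n in range(1, 6):                 # pi_n = pi_0 P^n (Theorem 2.1)
    pi = pi @ P
    print(n, pi.round(3))
```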

Example 2.2.5 (Birth-Death Markov chain). At each time step, the population can increase by one, decrease by one, or stay the same:
$$P(x, y) = \begin{cases} p_x & y = x + 1 \\ q_x & y = x - 1 \\ r_x & y = x \\ 0 & \text{else} \end{cases}$$

Example 2.2.6 (Queuing chain). At each time step, one customer is served and new customers arrive:
$$X_{n+1} = \begin{cases} y_{n+1} & \text{if } X_n = 0 \\ X_n - 1 + y_{n+1} & \text{if } X_n \ge 1 \end{cases}$$
where $y_{n+1}$ is the number of new arrivals. We introduce a new notation, $x^+ = x \vee 0$, which is essentially $\max(x, 0)$. Using this notation, we can rewrite the Markov chain as follows:
$$X_{n+1} = (X_n - 1)^+ + y_{n+1}$$

Example 2.2.7 (Branching Markov chain). If $X_0 = 0$, then $X_n = 0$ for all $n \ge 1$. We call 0 an absorbing state. Suppose $X_0 \ge 1$. Each individual $i$ produces $y_i$ offspring in each generation. Then we will have
$$X_1 = y_1^{(1)} + \cdots + y_{X_0}^{(1)}$$
Each individual in generation 1 will also produce offspring. Then,
$$X_2 = y_1^{(2)} + \cdots + y_{X_1}^{(2)}$$
We wish to understand how the population evolves over time. To do so, we can look at the expected value: it is clear that the population will grow if $E[y] > 1$; on the other hand, if $E[y] < 1$, the population will eventually die out.

Example 2.2.8 (Wright-Fisher Markov chain). For this Markov chain, we start by making the following assumptions:
1. The population size is fixed.
2. Generations do not overlap.
Within the population, there are $N$ individuals of two types: I and II. Let $X_0$ be the number of type I individuals at time 0. Each individual in generation 1 picks its parent from generation 0 at random. This process is equivalent to repeating a Bernoulli trial $N$ times (i.e., a binomial).

Therefore, we have
$$X_1 \sim \mathrm{Bin}\!\left(N, \frac{X_0}{N}\right), \quad X_2 \sim \mathrm{Bin}\!\left(N, \frac{X_1}{N}\right), \quad \dots, \quad X_{n+1} \sim \mathrm{Bin}\!\left(N, \frac{X_n}{N}\right)$$

2.3 Joint distribution

Given a Markov chain with $\pi_0$ and $P$, how do we find (1) the distribution of $X_n$ and (2) the joint distribution of $X_n$ and $X_m$ where $n < m$? From the previous section, recall that $\pi_n = \pi_0\underbrace{PP\cdots P}_{n}$.

Example 2.3.1. Consider the following transition matrix:
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Suppose that $\pi_0 = (1, 0, 0, 0)$. Then we have
$$\pi_1 = (1, 0, 0, 0)P = (1/2, 1/2, 0, 0), \qquad \pi_2 = (1, 0, 0, 0)PP = (1/4, 1/2, 1/4, 0)$$
Eventually, all mass converges to the absorbing state and stays there.

To find the joint distribution, we first note that
$$P(X_n = x_n, X_m = x_m) = P(X_n = x_n)P(X_m = x_m \mid X_n = x_n) = P(X_n = x_n)P(X_{m-n} = x_m \mid X_0 = x_n)$$

Definition 2.7. For any integer $m \ge 0$, the $m$-step transition matrix is given by
$$P^m(x, y) = P(X_m = y \mid X_0 = x).$$
When $m = 0$, we have
$$P^0(x, y) = \begin{cases} 1 & y = x \\ 0 & y \ne x \end{cases}$$
We can decompose the $m$-step transition matrix as follows:
$$P^m(x, y) = P(X_m = y \mid X_0 = x) = P(X_m = y, X_1 \in S, \dots, X_{m-1} \in S \mid X_0 = x) = \sum_{x_1 \in S}\sum_{x_2 \in S}\cdots\sum_{x_{m-1} \in S} P(x, x_1)\cdots P(x_{m-1}, y)$$

Then, we have
$$P(X_m = y) = P(X_m = y, X_0 \in S) = \sum_{x_0 \in S} P(X_m = y, X_0 = x_0) = \sum_{x_0 \in S} P(X_0 = x_0)P^m(x_0, y) = \sum_{x_0 \in S} \pi_0(x_0)P^m(x_0, y)$$
Therefore, we have $(P(X_m = x_m))_{x_m \in S} = \pi_0 P^m$.

Definition 2.8 (Hitting time). Given $A \subset S$, the hitting time $T_A$ is defined as follows:
$$T_A = \min\{n \ge 1 : X_n \in A\}$$
If $A = \{x\}$, then we write $T_{\{x\}} = T_x$. Note that $T_A \ge 1$, and if $X_n \notin A$ for all $n \ge 1$, we set $T_A = +\infty$.

Now we wish to understand the distribution of $T_y$ given that $X_0 = x$. First, note that we have
$$P_x(T_y = 1) = P(T_y = 1 \mid X_0 = x) = P(x, y)$$
Similarly, we have
$$P_x(T_y = 2) = P_x(X_1 \ne y, X_2 = y) = \sum_{w \ne y} P(x, w)P(w, y)$$
Generally, we have
$$P_x(T_y = n + 1) = P_x(X_1 \ne y, \dots, X_n \ne y, X_{n+1} = y) = \sum_{x_1 \ne y} P(x, x_1)P_{x_1}(T_y = n)$$
Note that the last result follows from the Markov property.

Lemma 2.1.
$$P^m(x, y) = \sum_{k=1}^{m} P_x(T_y = k)P^{m-k}(y, y)$$

Proof.
$$\begin{aligned}
P^m(x, y) &= P(X_m = y \mid X_0 = x) = P(X_m = y, T_y \le m \mid X_0 = x) \\
&= \sum_{k=1}^{m} P(X_m = y, T_y = k \mid X_0 = x) \\
&= \sum_{k=1}^{m} \frac{P(X_0 = x, T_y = k, X_m = y)}{P(X_0 = x)} \\
&= \sum_{k=1}^{m} \frac{P(X_0 = x, T_y = k)}{P(X_0 = x)}\cdot\frac{P(X_0 = x, T_y = k, X_m = y)}{P(X_0 = x, T_y = k)} \\
&= \sum_{k=1}^{m} P_x(T_y = k)P(X_m = y \mid X_0 = x, X_1 \ne y, \dots, X_{k-1} \ne y, X_k = y) \\
&= \sum_{k=1}^{m} P_x(T_y = k)P(X_m = y \mid X_k = y) = \sum_{k=1}^{m} P_x(T_y = k)P^{m-k}(y, y)
\end{aligned}$$

2.4 Recurrence

Before we define recurrent and transient states, we introduce the following notation:
$$\rho_{xy} = P_x(T_y < \infty) = \sum_{k=1}^{\infty} P_x(T_y = k).$$

Definition 2.9 (Recurrent and transient states). A state $x$ is called recurrent if $\rho_{xx} = 1$. Otherwise, it is called transient.

We introduce more notation:
$$I_x(y) = \begin{cases} 1 & y = x \\ 0 & \text{else} \end{cases} \quad \text{(indicator function of } x\text{)}, \qquad N(y) = \sum_{n=1}^{\infty} I_y(X_n)$$

Theorem 2.2.
1. $P_x(N(y) \ge m) = \rho_{xy}\rho_{yy}^{m-1}$
2. $P_x(N(y) = m) = \rho_{xy}\rho_{yy}^{m-1}(1 - \rho_{yy})$
3. $P_x(N(y) = 0) = 1 - \rho_{xy}$

Proof. First, assume that statement 1 is true. Then we have
$$P_x(N(y) = m) = P_x(N(y) \ge m) - P_x(N(y) \ge m + 1) = \rho_{xy}\rho_{yy}^{m-1} - \rho_{xy}\rho_{yy}^{m} = \rho_{xy}\rho_{yy}^{m-1}(1 - \rho_{yy})$$
Now, we prove statement 3:
$$P_x(N(y) = 0) = 1 - P_x(N(y) \ge 1) = 1 - \rho_{xy}$$
Finally, we just have to prove statement 1:
$$\begin{aligned}
P_x(N(y) \ge m) &= P_x(\text{the Markov chain visits state } y \text{ at least } m \text{ times}) \\
&= \sum_{n_1 = 1}^{\infty}\cdots\sum_{n_m = 1}^{\infty} P_x(T_y = n_1)P_y(T_y = n_2)\cdots P_y(T_y = n_m) \\
&= \left(\sum_{n_1 = 1}^{\infty} P_x(T_y = n_1)\right)\left(\sum_{n_2 = 1}^{\infty} P_y(T_y = n_2)\right)\cdots\left(\sum_{n_m = 1}^{\infty} P_y(T_y = n_m)\right) \\
&= P_x(T_y < \infty)P_y(T_y < \infty)\cdots P_y(T_y < \infty) = \rho_{xy}\rho_{yy}^{m-1}
\end{aligned}$$

Before looking at the next theorem, we introduce another notation: $E_x[\,\cdot\,]$ is the expectation given initial state $x$. Then we have
$$E_x[I_y(X_n)] = P_x(I_y(X_n) = 1) = P_x(X_n = y) = P^n(x, y)$$
Furthermore, we introduce the notation $G$:
$$G(x, y) = E_x[N(y)] = E_x\left[\sum_{n=1}^\infty I_y(X_n)\right] = \sum_{n=1}^\infty E_x[I_y(X_n)] = \sum_{n=1}^\infty P^n(x, y)$$

Theorem 2.3.
1. If $y$ is transient, then for any $x \in S$, $P_x(N(y) < \infty) = 1$ and
$$G(x, y) = \frac{\rho_{xy}}{1 - \rho_{yy}} < \infty.$$

2. If $y$ is recurrent, then $P_y(N(y) = \infty) = 1$ and $G(y, y) = \infty$. Furthermore, for any $x \in S$, we have $P_x(N(y) = \infty) = \rho_{xy}$ and
$$G(x, y) = \begin{cases} \infty & \text{if } \rho_{xy} > 0 \\ 0 & \text{if } \rho_{xy} = 0 \end{cases}$$

Proof. Suppose $y$ is transient. Then we have $\rho_{yy} < 1$. For any $x$, we have
$$P_x(N(y) = \infty) = P_x\left(\bigcap_{m=1}^\infty\{N(y) \ge m\}\right) = \lim_{m\to\infty} P_x(N(y) \ge m) = \lim_{m\to\infty} \rho_{xy}\rho_{yy}^{m-1} = 0$$
Therefore, we have
$$P_x(N(y) < \infty) = 1 - P_x(N(y) = \infty) = 1$$
Furthermore,
$$\begin{aligned}
G(x, y) &= \sum_{m=1}^\infty m P_x(N(y) = m) = \sum_{m=1}^\infty m\rho_{xy}\rho_{yy}^{m-1}(1 - \rho_{yy}) \\
&= \rho_{xy}(1 - \rho_{yy})\sum_{m=1}^\infty m\rho_{yy}^{m-1} \\
&= \rho_{xy}(1 - \rho_{yy})\cdot\frac{1}{(1 - \rho_{yy})^2} = \frac{\rho_{xy}}{1 - \rho_{yy}}
\end{aligned}$$
Let us prove the second statement. If $y$ is recurrent, then $\rho_{yy} = 1$. For any $x$, we have
$$P_x(N(y) = \infty) = P_x\left(\bigcap_{m=1}^\infty\{N(y) \ge m\}\right) = \lim_{m\to\infty} P_x(N(y) \ge m) = \lim_{m\to\infty}\rho_{xy}\rho_{yy}^{m-1} = \rho_{xy}$$

Then, we have (for $\rho_{xy} > 0$)
$$G(x, y) = \sum_{m=1}^\infty m\rho_{xy}\rho_{yy}^{m-1} = \rho_{xy}\sum_{m=1}^\infty m = \infty$$

Example 2.4.1. Let $y$ be a transient state. Find $\lim_{n\to\infty} P^n(x, y)$.

Recall that $G(x, y) = \sum_{n=1}^\infty P^n(x, y) = \frac{\rho_{xy}}{1 - \rho_{yy}} < \infty$. Since the series converges, it is easy to see that
$$\lim_{n\to\infty} P^n(x, y) = 0$$

Example 2.4.2. Let $\{X_n, n \ge 0\}$ be a two-state Markov chain with $S = \{0, 1\}$. Can both states be transient?

We start by noting that $P_x(X_n \in S) = 1$. If both are transient, we have
$$\lim_{n\to\infty} P_x(X_n \in S) = \lim_{n\to\infty}\big(P^n(x, 0) + P^n(x, 1)\big) = 0,$$
yielding a contradiction.

Definition 2.10. A Markov chain is recurrent if all states are recurrent, and the chain is transient if all states are transient.

Definition 2.11. A state $x$ leads to state $y$ if $\rho_{xy} > 0$, denoted $x \to y$.

Remark. It is possible that $x \not\to x$.

Lemma 2.2.
1. $x \to y$ iff there exists $n \ge 1$ such that $P^n(x, y) > 0$.
2. If $x \to y$ and $y \to z$, then $x \to z$.

Proof. By definition, $x \to y$ iff $\rho_{xy} > 0$; in other words, $P_x(T_y < \infty) > 0$. Then
$$0 < P_x(T_y < \infty) = \sum_{n=1}^\infty P_x(T_y = n) \le \sum_{n=1}^\infty P^n(x, y)$$
Therefore, $P^n(x, y) > 0$ for some $n$. Conversely, if $P^n(x, y) > 0$ for some $n \ge 1$, we can define $n_0 = \min\{n \ge 1 : P^n(x, y) > 0\}$. Clearly, $0 < P^{n_0}(x, y) \le P_x(T_y \le n_0) \le \rho_{xy}$.

To prove the second statement, note that $x \to y$ iff there exists $n_1 \ge 1$ such that $P^{n_1}(x, y) > 0$. Similarly, $y \to z$ iff there exists $n_2 \ge 1$ such that $P^{n_2}(y, z) > 0$. Then
$$P^{n_1 + n_2}(x, z) \ge P^{n_1}(x, y)P^{n_2}(y, z) > 0$$
Therefore, $x \to z$.

Theorem 2.4. If $x$ is recurrent and $x \to y$, then $y$ is recurrent and $\rho_{xy} = \rho_{yx} = 1$.

Proof. To yield a contradiction, suppose $\rho_{yx} \ne 1$. Then $1 - \rho_{yx} > 0$. Furthermore, since $x \to y$, there exists $n_1 \ge 1$ such that $P^{n_1}(x, y) > 0$. This implies that
$$P^{n_1}(x, y)(1 - \rho_{yx}) > 0$$
The first factor is the probability that $x$ reaches $y$ in $n_1$ steps; the second is the probability that $y$ never returns to $x$. Together they give a positive probability of never returning to $x$, contradicting the assumption that $x$ is recurrent. Therefore, $\rho_{yx} = 1$.

To prove that $y$ is recurrent, we first note that since $x \to y$, there exists $n_1 \ge 1$ such that $P^{n_1}(x, y) > 0$. Similarly, since $\rho_{yx} = 1$, $y \to x$ and there exists $n_2 \ge 1$ such that $P^{n_2}(y, x) > 0$. Then
$$G(y, y) = \sum_{n=1}^\infty P^n(y, y) \ge \sum_{m=1}^\infty P^{n_2 + m + n_1}(y, y) \ge \sum_{m=1}^\infty P^{n_2}(y, x)P^m(x, x)P^{n_1}(x, y) = P^{n_2}(y, x)\underbrace{\left(\sum_{m=1}^\infty P^m(x, x)\right)}_{G(x, x) = \infty}P^{n_1}(x, y) = \infty$$
Finally, to prove that $\rho_{xy} = 1$, we note that $y$ is recurrent and $y \to x$; by following the proof of the first statement with the roles of $x$ and $y$ exchanged, we obtain $\rho_{xy} = 1$.

Definition 2.12. If $x \to y$ and $y \to x$, we write $x \leftrightarrow y$ and say that $x$ communicates with $y$.

Definition 2.13. A subset $C \subset S$ is closed if for any $x \in C$ and $y \notin C$, $x \not\to y$ ($\rho_{xy} = 0$).

Definition 2.14. A closed subset $C$ is irreducible if every $x, y \in C$ communicate with each other.

We can further describe a closed and irreducible set by: (1) for all $x, y \in C$, $x \leftrightarrow y$, and (2) for all $x \in C$ and $z \notin C$, $\rho_{xz} = 0$. A closed, irreducible, and recurrent set then satisfies: (1) for all $x, y \in C$, $x \leftrightarrow y$ and $\rho_{xy} = \rho_{yx} = 1$, and (2) for all $x \in C$ and $z \notin C$, $\rho_{xz} = 0$. We can then decompose a state space $S$ into sets of recurrent and transient states:
$$S = C_R \cup C_T$$

Theorem 2.5. For $x, y \in C_R$, let $C_x$ and $C_y$ denote the irreducible classes containing $x$ and $y$. If $C_x \cap C_y \ne \emptyset$, then $C_x = C_y$.

Proof. Let $w \in C_x \cap C_y$. Then $w \leftrightarrow x$ and $w \leftrightarrow y$. For any $z \in C_x$, we have $z \leftrightarrow x \leftrightarrow w \leftrightarrow y$, so $z \in C_y$, implying that $C_x \subset C_y$. By symmetry, $C_y \subset C_x$. Therefore, $C_x = C_y$.

Theorem 2.6. The state space $S$ of a Markov chain can be decomposed as the union of $C_R$ and $C_T$. Furthermore, $C_R$ can be decomposed into the union of at most countably many closed, irreducible, recurrent sets.

Note that you have to stay in a recurrent set if you start from a recurrent set. On the other hand, if you start from a transient set with finitely many elements, you must eventually move to a recurrent state. If the transient set contains infinitely many elements, it is possible to stay in the transient set forever.

Example 2.4.3. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with $S = \{0, 1, 2, 3, 4, 5\}$ and the following one-step transition matrix:
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 1/4 & 1/2 & 1/4 & 0 & 0 & 0 \\ 0 & 1/5 & 2/5 & 1/5 & 0 & 1/5 \\ 0 & 0 & 0 & 1/6 & 1/3 & 1/2 \\ 0 & 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 1/4 & 0 & 3/4 \end{pmatrix}$$

(a) Find $C_R$ and $C_T$.

Note that 0 is an absorbing state. If you start from state 1 or 2, you have a positive probability of reaching state 0; therefore, states 1 and 2 are transient. On the other hand, we have $3 \to 4 \to 5 \to 3$, implying that $3 \leftrightarrow 4 \leftrightarrow 5$. Then $\{3, 4, 5\}$ forms a closed, irreducible, and recurrent set. Therefore,
$$C_R = \{0, 3, 4, 5\}, \qquad C_T = \{1, 2\}$$

(b) Decompose $C_R$.

Clearly, $C_R = \{0\} \cup \{3, 4, 5\}$, and the two subsets are irreducible.

Remark. Every closed, irreducible, finite set is a recurrent set.

Example 2.4.4. Let $S = \{0, 1, 2, 3, \dots\}$ with the identity transition matrix
$$P = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & & & \ddots \end{pmatrix}$$
Then each state is an absorbing state and we have $C_R = \bigcup_{i=0}^\infty \{i\}$.

Example 2.4.5. Consider the following transition matrix:
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$$
Then, since $1 \to 2 \to 3 \to 1$, we have $C_R = \{1, 2, 3\}$.

Example 2.4.6. Consider the following transition matrix:
$$P = \begin{pmatrix} 0 & 1/2 & 1/2 \\ 1/2 & 0 & 1/2 \\ 0 & 0 & 1 \end{pmatrix}$$
Then we have $C_T = \{1, 2\}$ and $C_R = \{3\}$.

Example 2.4.7. Consider the following transition matrix:
$$P = \begin{pmatrix} a & 1 - a \\ 0 & 1 \end{pmatrix}$$
For all $0 \le a < 1$, the decomposition of the state space does not change. A higher $a$ only implies that it will take longer to reach the absorbing state.
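The decomposition of Example 2.4.3 can be checked mechanically. A minimal sketch (not from the notes; it uses the finite-chain criterion that a state is recurrent iff it leads back from every state it leads to):

```python
import numpy as np

# Transition matrix of Example 2.4.3
P = np.array([
    [1,   0,   0,   0,   0,   0  ],
    [1/4, 1/2, 1/4, 0,   0,   0  ],
    [0,   1/5, 2/5, 1/5, 0,   1/5],
    [0,   0,   0,   1/6, 1/3, 1/2],
    [0,   0,   0,   1/2, 0,   1/2],
    [0,   0,   0,   1/4, 0,   3/4],
])

n = len(P)
# reach[x, y] = 1 iff x -> y in some number of steps (transitive closure)
reach = ((P > 0) | np.eye(n, dtype=bool)).astype(int)
for _ in range(n):
    reach = (reach @ reach > 0).astype(int)

# x is recurrent iff every state reachable from x leads back to x
C_R = [x for x in range(n)
       if all(reach[y, x] for y in range(n) if reach[x, y])]
print("C_R =", C_R)                                   # [0, 3, 4, 5]
print("C_T =", [x for x in range(n) if x not in C_R]) # [1, 2]
```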

2.5 Absorption probabilities

Definition 2.15 (Absorption probabilities). Let $C$ be a recurrent, irreducible, closed set. For $x \in C_T$, the probability of $x$ being absorbed by $C$ is given by
$$\rho_C(x) = P_x(T_C < \infty)$$
To calculate the absorption probabilities, we must solve
$$\rho_C(x) = \sum_{y \in C} P(x, y) + \sum_{y \in C_T} P(x, y)\rho_C(y).$$
This is in fact a system of linear equations. We are interested in the uniqueness of the solution.

Theorem 2.7. If $C_T$ is finite, then the system
$$w_x = \sum_{y \in C} P(x, y) + \sum_{y \in C_T} P(x, y)w_y$$
has a unique solution, $w_x = \rho_C(x)$.

Proof. Let $\{w_x : x \in C_T\}$ be any solution. Then
$$\begin{aligned}
w_x &= \sum_{y \in C} P(x, y) + \sum_{y \in C_T} P(x, y)w_y \\
&= \sum_{y \in C} P(x, y) + \sum_{y \in C_T} P(x, y)\left(\sum_{z \in C} P(y, z) + \sum_{z \in C_T} P(y, z)w_z\right) \\
&= \sum_{y \in C} P(x, y) + \sum_{y \in C_T}\sum_{z \in C} P(x, y)P(y, z) + \sum_{y \in C_T}\sum_{z \in C_T} P(x, y)P(y, z)w_z \\
&= P_x(T_C \le 2) + \sum_{z \in C_T} P^2(x, z)w_z \\
&\;\;\vdots \\
&= P_x(T_C \le n) + \sum_{z \in C_T} P^n(x, z)w_z
\end{aligned}$$
Now we can take the limit as $n$ goes to infinity:
$$w_x = \lim_{n\to\infty}\left(P_x(T_C \le n) + \sum_{z \in C_T} P^n(x, z)w_z\right) = P_x(T_C < \infty) + \lim_{n\to\infty}\sum_{z \in C_T} P^n(x, z)w_z$$
Since $C_T$ is finite and its states are transient, $\lim_{n\to\infty} P^n(x, z) = 0$, and therefore $w_x = P_x(T_C < \infty)$.

Example 2.5.1. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with $S = \{1, 2, 3, 4\}$ and
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/4 & 0 & 3/4 & 0 \\ 0 & 1/3 & 1/3 & 1/3 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Let $C = \{1\}$. Find $\rho_C(2)$ and $\rho_C(3)$.

First, note that we can decompose the state space as follows:
$$C_R = \{1, 4\}, \qquad C_T = \{2, 3\}$$
Since $C_T$ is finite, we have
$$w_x = \sum_{y \in C} P(x, y) + \sum_{y \in C_T} P(x, y)w_y$$
Then we have
$$w_2 = P(2, 1) + P(2, 2)w_2 + P(2, 3)w_3 = \frac14 + \frac34 w_3$$
Similarly, we have
$$w_3 = P(3, 1) + P(3, 2)w_2 + P(3, 3)w_3 = \frac13 w_2 + \frac13 w_3$$
Therefore, we have
$$w_3 = \frac15, \qquad w_2 = \frac14 + \frac{3}{20} = \frac25$$
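The linear system of Theorem 2.7 can also be solved in one shot. A sketch for Example 2.5.1 (not from the notes):

```python
import numpy as np

# Example 2.5.1: S = {1,2,3,4}, C = {1}, C_T = {2,3}
P = np.array([
    [1,   0,   0,   0  ],
    [1/4, 0,   3/4, 0  ],
    [0,   1/3, 1/3, 1/3],
    [0,   0,   0,   1  ],
])
CT = [1, 2]          # 0-based indices of states 2 and 3
C  = [0]             # 0-based index of state 1

# w = b + Q w, where Q is P restricted to C_T and b(x) = sum_{y in C} P(x, y)
Q = P[np.ix_(CT, CT)]
b = P[np.ix_(CT, C)].sum(axis=1)
w = np.linalg.solve(np.eye(len(CT)) - Q, b)
print(w)             # [0.4, 0.2] = (rho_C(2), rho_C(3))
```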

Theorem 2.8. If for any $x, y \in S$, $x \leftrightarrow y$, then the chain is irreducible. It then follows that a finite-state irreducible Markov chain is recurrent.

Remark. An infinite, irreducible Markov chain can be transient. Irreducibility doesn't imply recurrence.

Then, when will an infinite-state, irreducible Markov chain be recurrent? We look at the birth-death Markov chain to understand this idea.

2.6 Birth-Death Markov Chain

Definition 2.16. A Markov chain $\{X_n, n = 0, 1, 2, \dots\}$ is called a birth-death Markov chain if
1. $S = \{0, 1, 2, \dots, d\}$ where $d$ can be either finite or infinite. When $d = \infty$, $S = \{0, 1, 2, \dots\}$.
2. $P(x, y) = \begin{cases} p_x & y = x + 1 \\ q_x & y = x - 1 \\ r_x & y = x \\ 0 & \text{else} \end{cases}$

Note that if $p_x > 0$ and $q_x > 0$ for $1 \le x \le d - 1$, and $p_0 > 0$, $q_d > 0$, then the chain is irreducible. If the chain is irreducible and $d < \infty$, then the birth-death chain is recurrent.

Theorem 2.9. Fix $a, b \in S$ with $a < b$. Let $u(x) = P_x(T_a < T_b)$ for $a \le x \le b$, with $u(a) = 1$, $u(b) = 0$. Also, define
$$\Gamma_0 = 1, \qquad \Gamma_k = \frac{q_1 q_2\cdots q_k}{p_1 p_2\cdots p_k}, \quad k \ge 1$$
Then
$$u(x) = \frac{\sum_{r=x}^{b-1}\Gamma_r}{\sum_{r=a}^{b-1}\Gamma_r}$$

Proof. First, note that
$$\begin{aligned}
u(x) = P_x(T_a < T_b) &= P_x(X_1 = x,\ T_a < T_b) + P_x(X_1 = x + 1,\ T_a < T_b) + P_x(X_1 = x - 1,\ T_a < T_b) \\
&= P_x(X_1 = x)P_x(T_a < T_b) + P_x(X_1 = x + 1)P_{x+1}(T_a < T_b) + P_x(X_1 = x - 1)P_{x-1}(T_a < T_b) \\
&= r_x u(x) + p_x u(x + 1) + q_x u(x - 1)
\end{aligned}$$
Rearranging, we get
$$\begin{aligned}
(1 - r_x)u(x) &= p_x u(x + 1) + q_x u(x - 1) \\
(p_x + q_x)u(x) &= p_x u(x + 1) + q_x u(x - 1) \\
p_x\big(u(x + 1) - u(x)\big) &= q_x\big(u(x) - u(x - 1)\big)
\end{aligned}$$
Now, we can use this formula recursively:
$$\begin{aligned}
u(x + 1) - u(x) &= \frac{q_x}{p_x}\big(u(x) - u(x - 1)\big) \\
&= \frac{q_x}{p_x}\frac{q_{x-1}}{p_{x-1}}\big(u(x - 1) - u(x - 2)\big) \\
&\;\;\vdots \\
&= \frac{q_x\cdots q_{a+1}}{p_x\cdots p_{a+1}}\big(u(a + 1) - u(a)\big) = \frac{\Gamma_x}{\Gamma_a}\big(u(a + 1) - u(a)\big).
\end{aligned}$$
By definition, we know that $u(b) = 0$ and $u(a) = 1$, so $u(b) - u(a) = -1$. Finally, we can apply telescoping to achieve the desired result:
$$-1 = u(b) - u(a) = \sum_{x=a}^{b-1}\big(u(x + 1) - u(x)\big) = \left(\sum_{x=a}^{b-1}\frac{\Gamma_x}{\Gamma_a}\right)\big(u(a + 1) - u(a)\big)$$

Thus, we have
$$u(a) - u(a + 1) = \frac{\Gamma_a}{\sum_{r=a}^{b-1}\Gamma_r}$$
If we put everything together, we have
$$u(x) - u(x + 1) = \frac{\Gamma_x}{\Gamma_a}\big(u(a) - u(a + 1)\big) = \frac{\Gamma_x}{\sum_{r=a}^{b-1}\Gamma_r}$$
for all $a < x < b$. Finally, since $u(x) = u(x) - u(b)$, we can apply telescoping again:
$$u(x) = \big(u(x) - u(x + 1)\big) + \big(u(x + 1) - u(x + 2)\big) + \cdots + \big(u(b - 1) - u(b)\big) = \frac{\sum_{r=x}^{b-1}\Gamma_r}{\sum_{r=a}^{b-1}\Gamma_r}$$
We have now derived a major result for the birth-death Markov chain.

Lemma 2.3. $\rho_{00} = P(0, 0) + P(0, 1)\rho_{10}$.

Proof.
$$\begin{aligned}
\rho_{00} = P_0(T_0 < \infty) &= P_0(X_1 = 0,\ T_0 < \infty) + P_0(X_1 = 1,\ T_0 < \infty) \\
&= P_0(X_1 = 0) + P(0, 1)P_1(T_0 < \infty) \\
&= P(0, 0) + P(0, 1)\rho_{10}
\end{aligned}$$

Theorem 2.10. The birth-death Markov chain is recurrent iff $\sum_{r=0}^\infty \Gamma_r = \infty$.

Proof. Let $a = 0$, $b = n$, and $x = 1$. Observe that
$$u(1) = P_1(T_0 < T_n) = \frac{\sum_{r=1}^{n-1}\Gamma_r}{\sum_{r=0}^{n-1}\Gamma_r} = 1 - \frac{\Gamma_0}{\sum_{r=0}^{n-1}\Gamma_r} = 1 - \frac{1}{\sum_{r=0}^{n-1}\Gamma_r}$$
Then, since $T_n \to \infty$ as $n \to \infty$,
$$\rho_{10} = P_1(T_0 < \infty) = \lim_{n\to\infty} P_1(T_0 < T_n) = \lim_{n\to\infty}\left(1 - \frac{1}{\sum_{r=0}^{n-1}\Gamma_r}\right).$$
Clearly, $\rho_{10} = 1$ iff $\sum_{r=0}^\infty \Gamma_r = \infty$. When $\rho_{10} = 1$, we have $\rho_{00} = P(0, 0) + P(0, 1) = 1$ and 0 becomes a recurrent state. Since the chain is irreducible, it is recurrent.

Example 2.6.1. Consider a birth-death Markov chain whose state space is the set of all non-negative integers. From each state, it has probability 0.51 of going up and probability 0.49 of going down. Then
$$\Gamma_r = \frac{q_1\cdots q_r}{p_1\cdots p_r} = \left(\frac{0.49}{0.51}\right)^r$$
Clearly, $\sum_{k=0}^\infty \Gamma_k$ is a converging geometric series. Therefore, this is a transient Markov chain.
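A quick numerical check of the criterion in Theorem 2.10 for Example 2.6.1 (a sketch, not from the notes):

```python
import numpy as np

# Gamma_r = (q_1...q_r)/(p_1...p_r) for Example 2.6.1: geometric in q/p
p, q = 0.51, 0.49
Gamma = (q / p) ** np.arange(200)

# The sum converges (ratio q/p < 1), so the chain is transient ...
print(Gamma.sum(), 1 / (1 - q / p))   # partial sum ~ the limit 25.5

# ... and rho_10 = 1 - 1/sum(Gamma) < 1 (proof of Theorem 2.10)
print(1 - 1 / Gamma.sum())
```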

Example 2.6.2. Consider the following chain:
$$P(x, y) = \begin{cases} p_0 & x = 0,\ y = 1 \\ r_0 & x = 0,\ y = 0 \\ p_x & x \ge 1,\ y = x + 1 \\ r_x & x \ge 1,\ y = x \\ q_x & x \ge 1,\ y = x - 1 \end{cases}$$
We may define
$$p_x = \frac{x + 2}{2(x + 1)}, \qquad q_x = \frac{x}{2(x + 1)}.$$
Then it follows that $p_x + q_x = 1$ and $r_x = 0$. We wish to know whether this chain is transient or not. First, observe that
$$\Gamma_1 = \frac{q_1}{p_1} = \frac{\frac{1}{2(1+1)}}{\frac{1+2}{2(1+1)}} = \frac13$$
In general, we have
$$\Gamma_x = \frac{q_1\cdots q_x}{p_1\cdots p_x} = \frac{1\cdot 2\cdots x}{3\cdot 4\cdots(x + 2)} = \frac{1\cdot 2}{(x + 1)(x + 2)}$$
Then, we see that
$$\sum_{x=0}^\infty \Gamma_x = \Gamma_0 + \Gamma_1 + \sum_{x=2}^\infty \frac{2}{(x + 1)(x + 2)} = \Gamma_0 + \Gamma_1 + 2\sum_{x=2}^\infty\left(\frac{1}{x + 1} - \frac{1}{x + 2}\right) < \infty$$
Therefore, this chain is transient.

2.7 Branching process

In the branching process, the offspring of each individual follows a distribution $\psi$ whose probability mass function is given by $P(x)$. Then we have
$$X_{n+1} = \sum_{i=1}^{X_n}\psi_i^{(n+1)}, \qquad X_1 = \psi_1^{(1)}.$$
We will be looking at the case where $0 < P(0) < 1$ and $P(0) + P(1) < 1$. For this Markov chain, the state space is $S = \{0, 1, 2, \dots\}$, and 0 is the absorbing state. Since all the other states are transient, we define $\rho$ as the probability of extinction.

Definition 2.17. Let $\mu = E[\psi]$. The model is called subcritical if $\mu < 1$; critical if $\mu = 1$; supercritical if $\mu > 1$; and explosive if $\mu = \infty$.

Theorem 2.11. $\rho = 1$ iff $\mu \le 1$.
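Theorem 2.11 can be illustrated by Monte Carlo. A sketch (not from the notes; the offspring distributions below are hypothetical choices, and very large populations are treated as surviving):

```python
import numpy as np

rng = np.random.default_rng(0)

def extinction_frac(pmf, n_gen=200, reps=2000):
    """Fraction of simulated branching processes that hit 0."""
    vals = np.arange(len(pmf))
    hits = 0
    for _ in range(reps):
        x = 1
        for _ in range(n_gen):
            if x == 0 or x > 10_000:   # absorbed, or effectively exploded
                break
            x = rng.choice(vals, size=x, p=pmf).sum()
        hits += (x == 0)
    return hits / reps

# offspring pmf (P(0), P(1), P(2)); mu = P(1) + 2 P(2)
print(extinction_frac([0.3, 0.5, 0.2]))  # mu = 0.9 <= 1: fraction ~ 1
print(extinction_frac([0.2, 0.3, 0.5]))  # mu = 1.3 > 1: fraction ~ 0.4,
# the smaller root of s = 0.2 + 0.3 s + 0.5 s^2
```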

3 Stationary distribution

3.1 Stationary distribution

Definition 3.1. Consider a Markov chain $\{X_n, n = 0, 1, 2, \dots\}$ with state space $S$. A probability $\pi$ on $S$ is called a stationary distribution of the chain if
$$\sum_{x \in S}\pi(x)P(x, y) = \pi(y), \quad \text{for all } y \in S,$$
where $P = (P(x, y))$ is the one-step transition matrix.

Lemma 3.1. If $\pi_0 = \pi$, then $P(X_n = x) = \pi(x)$ for all $n$.

Proof. If $n = 0$, $\pi_0 = \pi$. Now assume the statement holds for $n = k$. Then
$$P(X_{k+1} = x) = \sum_{z \in S} P(X_k = z)P(X_{k+1} = x \mid X_k = z) = \sum_{z \in S}\pi(z)P(z, x) = \pi(x)$$
By induction, the proof is complete. (Since $\pi P = \pi$, $\pi$ is a left eigenvector of the matrix $P$ with eigenvalue 1.)

Definition 3.2. Consider a Markov chain $\{X_n, n = 0, 1, 2, \dots\}$ with state space $S$. A probability $\pi$ on $S$ is called a steady-state distribution of the chain if
$$\lim_{n\to\infty} P^n(x, y) = \pi(y), \quad \text{for all } x \in S$$

Lemma 3.2. Let $\pi$ be a steady-state distribution of the Markov chain. Then, for any initial distribution $\pi_0$,
$$\lim_{n\to\infty} P(X_n = y) = \pi(y)$$

Proof. Let $\pi_0(x) = P(X_0 = x)$. Then
$$P(X_n = y) = \sum_{x \in S}\pi_0(x)P^n(x, y)$$
$$\lim_{n\to\infty} P(X_n = y) = \lim_{n\to\infty}\sum_{x \in S}\pi_0(x)P^n(x, y) = \sum_{x \in S}\pi_0(x)\lim_{n\to\infty}P^n(x, y) = \left(\sum_{x \in S}\pi_0(x)\right)\pi(y) = \pi(y)$$

Example 3.1.1. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a two-state Markov chain with $S = \{0, 1\}$ and $P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. Since $\pi P = \pi$ for any $\pi$, any distribution is a stationary distribution.

Example 3.1.2. If $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, then $\pi = (1/2,\ 1/2)$ is the only stationary distribution.

Example 3.1.3. Let $P = \begin{pmatrix} 1 - p & p \\ q & 1 - q \end{pmatrix}$. To find the stationary distribution, we must solve
$$(\pi(0)\ \ \pi(1))\begin{pmatrix} 1 - p & p \\ q & 1 - q \end{pmatrix} = (\pi(0)\ \ \pi(1))$$
Then we get
$$(1 - p)\pi(0) + q\pi(1) = \pi(0), \qquad p\pi(0) + (1 - q)\pi(1) = \pi(1)$$
Therefore,
$$\pi(0) = \frac{q}{p + q}, \qquad \pi(1) = \frac{p}{p + q}$$
Note that
$$P^n = \begin{pmatrix} \frac{q}{p+q} + \frac{(1 - p - q)^n p}{p + q} & \frac{p}{p+q} - \frac{(1 - p - q)^n p}{p + q} \\ \frac{q}{p+q} - \frac{(1 - p - q)^n q}{p + q} & \frac{p}{p+q} + \frac{(1 - p - q)^n q}{p + q} \end{pmatrix}$$
As $n \to \infty$, we get
$$P^n \to \begin{pmatrix} \frac{q}{p+q} & \frac{p}{p+q} \\ \frac{q}{p+q} & \frac{p}{p+q} \end{pmatrix}$$
Therefore, we conclude that this is both a stationary and a steady-state distribution.

Example 3.1.4. Consider a Markov chain characterized by the following transition matrix:
$$P = \begin{pmatrix} 1/4 & 3/4 \\ 1/3 & 2/3 \end{pmatrix}$$
Clearly, the chain is more likely to be at state 1 than at state 0. Indeed,
$$\lim_{n\to\infty} P(X_n = 0) = \pi(0) = \frac{1/3}{3/4 + 1/3} = \frac{4}{13}, \qquad \lim_{n\to\infty} P(X_n = 1) = \pi(1) = \frac{3/4}{3/4 + 1/3} = \frac{9}{13}$$

Example 3.1.5. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with $S = \{0, 1, 2\}$ and
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/3 & 1/3 & 1/3 \\ 0 & 1/2 & 1/2 \end{pmatrix}$$
Find the stationary distribution of the chain.

First, let $\pi = (\pi(0), \pi(1), \pi(2))$. Since $\pi P = \pi$, we have
$$\tfrac12\pi(0) + \tfrac13\pi(1) = \pi(0), \qquad \tfrac12\pi(0) + \tfrac13\pi(1) + \tfrac12\pi(2) = \pi(1), \qquad \tfrac13\pi(1) + \tfrac12\pi(2) = \pi(2)$$
Then we find that
$$\pi = (2/7,\ 3/7,\ 2/7)$$
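In general, $\pi P = \pi$ says $\pi$ is a left eigenvector of $P$ with eigenvalue 1 (as noted after Lemma 3.1), which suggests one way to compute it. A sketch for Example 3.1.5 (not from the notes):

```python
import numpy as np

P = np.array([[1/2, 1/2, 0  ],
              [1/3, 1/3, 1/3],
              [0,   1/2, 1/2]])

# Left eigenvector of P for eigenvalue 1, normalized to sum to 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()
print(pi)    # [2/7, 3/7, 2/7]
```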

Example 3.1.6. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a birth-death Markov chain with $S = \{0, 1, 2, \dots, d\}$ and
$$P = \begin{pmatrix} r_0 & p_0 & & & \\ q_1 & r_1 & p_1 & & \\ & \ddots & \ddots & \ddots & \\ & & q_{d-1} & r_{d-1} & p_{d-1} \\ & & & q_d & r_d \end{pmatrix}$$
Find the stationary distribution of the chain.

Once again, we use the fact that $\pi P = \pi$. Then we end up with the following set of linear equations:
$$\begin{aligned}
r_0\pi(0) + q_1\pi(1) &= \pi(0) \\
p_0\pi(0) + r_1\pi(1) + q_2\pi(2) &= \pi(1) \\
&\;\;\vdots \\
p_{k-1}\pi(k - 1) + r_k\pi(k) + q_{k+1}\pi(k + 1) &= \pi(k) \\
&\;\;\vdots \\
p_{d-1}\pi(d - 1) + r_d\pi(d) &= \pi(d)
\end{aligned}$$
First, we observe from the first equation that $\pi(1) = \frac{p_0}{q_1}\pi(0)$. Then we have
$$\begin{aligned}
p_0\pi(0) + (1 - p_1 - q_1)\pi(1) + q_2\pi(2) &= \pi(1) \\
p_0\pi(0) - p_1\pi(1) - q_1\pi(1) + q_2\pi(2) &= 0 \\
q_2\pi(2) &= p_1\pi(1)
\end{aligned}$$
Then, we have
$$\pi(2) = \frac{p_1}{q_2}\pi(1) = \frac{p_1 p_0}{q_2 q_1}\pi(0)$$
By recursion, we have
$$\pi(k) = \frac{p_0 p_1\cdots p_{k-1}}{q_1 q_2\cdots q_k}\pi(0)$$
Since $\pi(0) + \pi(1) + \cdots + \pi(d) = 1$, we have
$$1 = \pi(0) + \frac{p_0}{q_1}\pi(0) + \frac{p_0 p_1}{q_1 q_2}\pi(0) + \cdots + \frac{p_0\cdots p_{d-1}}{q_1\cdots q_d}\pi(0) = \pi(0)\left(1 + \sum_{i=1}^{d}\frac{p_0\cdots p_{i-1}}{q_1\cdots q_i}\right)$$
Therefore,
$$\pi(0) = \frac{1}{1 + \sum_{i=1}^{d}\frac{p_0\cdots p_{i-1}}{q_1\cdots q_i}}, \qquad \pi(k) = \frac{\frac{p_0 p_1\cdots p_{k-1}}{q_1 q_2\cdots q_k}}{1 + \sum_{i=1}^{d}\frac{p_0\cdots p_{i-1}}{q_1\cdots q_i}}$$

Remark. If $d = \infty$, the birth-death chain has a unique stationary distribution iff
$$\sum_{i=1}^\infty\frac{p_0\cdots p_{i-1}}{q_1\cdots q_i} < \infty$$

Example 3.1.7. Suppose we have $d$ balls in each of two urns; the total number of red balls is $d$ (the total number of blue balls is also $d$). Let $X_0$ be the number of red balls in urn 1. We pick a ball from each urn at random and switch them. Then $X_1$ is the number of red balls in urn 1 after the first switch. We want to find $P$ and its stationary distribution.

$P(i, i)$ occurs when we pick red balls from both urns or blue balls from both urns. Then,
$$P(i, i) = \frac{2i(d - i)}{d^2}$$
Likewise, we have
$$P(i, i + 1) = \frac{(d - i)^2}{d^2}, \qquad P(i, i - 1) = \frac{i^2}{d^2}$$
Note the boundary conditions:
$$P(0, 0) = 0, \quad P(0, 1) = 1, \quad P(d, d) = 0, \quad P(d, d - 1) = 1.$$

Finally, we can write the transition matrix:
$$P = \begin{pmatrix} 0 & 1 & & & \\ \frac{1}{d^2} & \frac{2(d-1)}{d^2} & \frac{(d-1)^2}{d^2} & & \\ & \ddots & \ddots & \ddots & \\ & & & 1 & 0 \end{pmatrix}$$
Since this chain is a birth-death Markov chain, we know that
$$\pi(k) = \frac{\frac{p_0 p_1\cdots p_{k-1}}{q_1 q_2\cdots q_k}}{1 + \sum_{i=1}^{d}\frac{p_0\cdots p_{i-1}}{q_1\cdots q_i}}$$
Observe that
$$\frac{p_0\cdots p_{k-1}}{q_1\cdots q_k} = \frac{P(0, 1)P(1, 2)\cdots P(k - 1, k)}{P(1, 0)P(2, 1)\cdots P(k, k - 1)} = \frac{\big(d(d - 1)\cdots(d - k + 1)\big)^2}{(1\cdot 2\cdots k)^2} = \binom{d}{k}^2$$
Then,
$$1 + \sum_{i=1}^{d}\binom{d}{i}^2 = \sum_{i=0}^{d}\binom{d}{i}^2 = \binom{2d}{d}$$
Therefore,
$$\pi(k) = \frac{\binom{d}{k}\binom{d}{d-k}}{\binom{2d}{d}}$$

3.2 Positive recurrence

We introduce a new notation:
$$m_x = E_x[T_x] = \sum_{k=1}^\infty k P(T_x = k \mid X_0 = x),$$
where $T_x = \min\{n \ge 1 : X_n = x\}$.

Definition 3.3. Let $x$ be a recurrent state. If $m_x < \infty$, then $x$ is called positive recurrent. If $m_x = \infty$, then $x$ is called null recurrent.

Theorem 3.1. If $x$ is transient, then $m_x = \infty$.

Proof. If $x$ is transient, $\rho_{xx} < 1$. In other words, $P_x(T_x = \infty) = 1 - \rho_{xx} > 0$. Therefore,
$$m_x = \sum_{k=1}^\infty k P(T_x = k \mid X_0 = x) + \infty\cdot P_x(T_x = \infty) = \infty$$

Recall that
$$G(x, y) = E_x[N(y)] = E_x\left[\sum_{n=1}^\infty I_{\{y\}}(X_n)\right] = \sum_{n=1}^\infty P^n(x, y).$$
For any $n \ge 1$, let
$$N_n(y) = \sum_{k=1}^{n} I_{\{y\}}(X_k), \qquad G_n(x, y) = E_x[N_n(y)] = \sum_{k=1}^{n} P^k(x, y)$$

Theorem 3.2.
1. If $y$ is transient, then
$$\lim_{n\to\infty}\frac{N_n(y)}{n} = 0, \qquad \lim_{n\to\infty}\frac{G_n(x, y)}{n} = 0$$
for all $x$.
2. If $y$ is recurrent, then
$$\lim_{n\to\infty}\frac{N_n(y)}{n} = \frac{I\{T_y < \infty\}}{m_y}, \qquad \lim_{n\to\infty}\frac{G_n(x, y)}{n} = \frac{\rho_{xy}}{m_y}$$

Corollary. Let $C$ be an irreducible set of recurrent states. Then
$$\lim_{n\to\infty}\frac{G_n(x, y)}{n} = \lim_{n\to\infty}\frac{N_n(y)}{n} = \frac{1}{m_y}, \qquad x, y \in C$$

Theorem 3.3. If $x$ is positive recurrent and $x \to y$, then $y$ is positive recurrent.

Proof. If $x$ is positive recurrent and $x \to y$, then $x \leftrightarrow y$. In other words, there exist $n_1 \ge 1$, $n_2 \ge 1$ such that
$$P^{n_1}(x, y) > 0, \qquad P^{n_2}(y, x) > 0.$$
Observe that
$$P^{n_2 + n + n_1}(y, y) \ge P^{n_2}(y, x)P^n(x, x)P^{n_1}(x, y)$$
Summing over $n$,
$$\sum_{k=1}^{n} P^{n_2 + k + n_1}(y, y) \ge P^{n_2}(y, x)\left[\sum_{k=1}^{n} P^k(x, x)\right]P^{n_1}(x, y)$$
Then,
$$\sum_{m = n_1 + n_2 + 1}^{n + n_1 + n_2} P^m(y, y) = G_{n + n_1 + n_2}(y, y) - G_{n_1 + n_2}(y, y) \ge P^{n_2}(y, x)G_n(x, x)P^{n_1}(x, y)$$
Dividing by $n$ and letting $n \to \infty$ (note that $G_{n_1 + n_2}(y, y)/n \to 0$), we have
$$\frac{1}{m_y} \ge P^{n_2}(y, x)\left(\lim_{n\to\infty}\frac{G_n(x, x)}{n}\right)P^{n_1}(x, y) = P^{n_1}(x, y)P^{n_2}(y, x)\frac{1}{m_x} > 0$$
(The left-hand side follows from $\lim_{n\to\infty}\frac{G_{n_1 + n_2 + n}(y, y)}{n} = \lim_{n\to\infty}\frac{n_1 + n_2 + n}{n}\cdot\frac{G_{n_1 + n_2 + n}(y, y)}{n_1 + n_2 + n} = \frac{1}{m_y}$.) If $m_x$ is finite, then $m_y$ must be finite as well.

Theorem 3.4. Let $C$ be a finite irreducible set of recurrent states. Then every state in $C$ is positive recurrent.

Proof. Clearly, given $x \in C$,
$$\sum_{y \in C} P^k(x, y) = 1$$

for every positive integer $k$. Then,
$$n = \sum_{k=1}^{n}\sum_{y \in C} P^k(x, y) = \sum_{y \in C}\sum_{k=1}^{n} P^k(x, y) = \sum_{y \in C} G_n(x, y)$$
Furthermore,
$$1 = \lim_{n\to\infty}\sum_{y \in C}\frac{G_n(x, y)}{n} = \sum_{y \in C}\lim_{n\to\infty}\frac{G_n(x, y)}{n}$$
Now it follows that there exists $z \in C$ such that
$$\lim_{n\to\infty}\frac{G_n(x, z)}{n} > 0,$$
implying that $m_z < \infty$ and $z$ is positive recurrent. Since every state in $C$ communicates with $z$, every state in $C$ is positive recurrent.

Remark. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with finite state space $S$. Then all recurrent states are positive recurrent.

Theorem 3.5. Let $\pi$ be a stationary distribution of a Markov chain. If $y$ is transient or null recurrent, then $\pi(y) = 0$.

Proof. Let $\pi$ be a stationary distribution of the chain. Then $P(X_k = y) = \pi(y)$ if the initial distribution is $\pi$. Then
$$P(X_k = y) = P(X_0 \in S, X_k = y) = \sum_{x \in S} P(X_0 = x, X_k = y) = \sum_{x \in S} P(X_0 = x)P(X_k = y \mid X_0 = x) = \sum_{x \in S}\pi(x)P^k(x, y) = \pi(y)$$

Then, we get
$$\pi(y) = \frac1n\sum_{k=1}^{n}\sum_{x \in S}\pi(x)P^k(x, y) = \sum_{x \in S}\pi(x)\frac1n\sum_{k=1}^{n} P^k(x, y) = \sum_{x \in S}\pi(x)\frac{G_n(x, y)}{n}$$
Thus, since $y$ is transient or null recurrent,
$$\pi(y) = \lim_{n\to\infty}\sum_{x \in S}\pi(x)\frac{G_n(x, y)}{n} = \sum_{x \in S}\pi(x)\lim_{n\to\infty}\frac{G_n(x, y)}{n} = 0$$

Example 3.2.1. Does a Markov chain with no positive recurrent states have a stationary distribution?

Let $\pi$ be a stationary distribution. By Theorem 3.5, $\pi(y) = 0$ for all $y \in S$. This yields a contradiction, so a Markov chain with no positive recurrent states cannot have a stationary distribution.

Theorem 3.6. An irreducible positive recurrent Markov chain has a unique stationary distribution, given by
$$\pi(x) = \frac{1}{m_x}$$

Example 3.2.2. Does a finite-state Markov chain have a stationary distribution?

Consider the following decomposition of the state space:
$$S = C_R \cup C_T = C_{PR} \cup C_T = C_1 \cup\cdots\cup C_j \cup C_T$$
Each $C_i$ is an irreducible positive recurrent class. By the theorem, there is a stationary distribution on each $C_i$. Then
$$\pi_i(x) = \begin{cases} \frac{1}{m_x} & x \in C_i \\ 0 & x \notin C_i \end{cases}$$
is a stationary distribution.

Example 3.2.3. When does a finite-state Markov chain have a unique stationary distribution? All recurrent states must communicate, i.e. there must be exactly one irreducible recurrent class.

Example 3.2.4. Can a finite-state Markov chain have exactly two stationary distributions?

Assume $\pi_1, \pi_2$ are two different stationary distributions: $\pi_1 P = \pi_1$, $\pi_2 P = \pi_2$. Then, for any $0 \le \lambda \le 1$, $\lambda\pi_1 + (1 - \lambda)\pi_2$ is also a stationary distribution. Therefore, a Markov chain cannot have exactly two stationary distributions.

Theorem 3.7. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with state space $S$. Let $N$ denote the number of stationary distributions of the chain. Then
$$N = \begin{cases} 0 & \text{if there is no positive recurrent state} \\ 1 & \text{if there is exactly one irreducible positive recurrent class} \\ \infty & \text{else} \end{cases}$$

Example 3.2.5. Consider the birth-death Markov chain. If
$$\sum_{x=1}^\infty\frac{p_0\cdots p_{x-1}}{q_1\cdots q_x} < \infty,$$
then there is a stationary distribution. This condition is in fact equivalent to positive recurrence of the chain.

Previously, we have shown that an irreducible birth-death Markov chain with $S = \{0, 1, 2, \dots\}$ (1) is recurrent iff
$$\sum_{x=1}^\infty\frac{q_1\cdots q_x}{p_1\cdots p_x} = \infty,$$
and (2) has a unique stationary distribution iff
$$\sum_{x=1}^\infty\frac{p_0\cdots p_{x-1}}{q_1\cdots q_x} < \infty.$$
These results lead to the following theorem:

Theorem 3.8.
1. The chain is transient iff
$$\sum_{x=1}^\infty\frac{q_1\cdots q_x}{p_1\cdots p_x} < \infty$$

2. The chain is null recurrent iff
$$\sum_{x=1}^\infty\frac{q_1\cdots q_x}{p_1\cdots p_x} = \infty \quad\text{and}\quad \sum_{x=1}^\infty\frac{p_0\cdots p_{x-1}}{q_1\cdots q_x} = \infty$$
3. The chain is positive recurrent iff
$$\sum_{x=1}^\infty\frac{p_0\cdots p_{x-1}}{q_1\cdots q_x} < \infty$$

Example 3.2.6. A birth-death Markov chain with $p_x = q_x = a$ is null recurrent.

Example 3.2.7. Consider a birth-death Markov chain with $p_x = p$ and $q_x = q$. Notice that
$$\sum_{x=1}^\infty\frac{q_1\cdots q_x}{p_1\cdots p_x} = \sum_{x=1}^\infty\left(\frac qp\right)^x, \qquad \sum_{x=1}^\infty\frac{p_0\cdots p_{x-1}}{q_1\cdots q_x} = \sum_{x=1}^\infty\left(\frac pq\right)^x$$
Clearly, if $p \ne q$, one of these series converges and the other diverges. Thus, when $p > q$, the chain is transient, and when $p < q$, the chain is positive recurrent.

4 Long time behaviour

4.1 Period of a state

Definition 4.1. Let $I$ be a set of positive integers. The greatest common divisor of $I$, denoted by $\gcd(I)$, is the largest positive integer $n$ such that $n \mid m$ for all $m \in I$.

Definition 4.2. The period of a state $x$ is defined as
$$d_x = \gcd\{n \ge 1 : P^n(x, x) > 0\}$$

Example 4.1.1. Let $S = \{1, 2, 3\}$. Consider
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/3 & 1/3 & 1/3 \\ 1/2 & 1/2 & 0 \end{pmatrix}$$
Since $P^2(3, 3) > 0$ and $P^3(3, 3) > 0$, the period of state 3 is 1.

Example 4.1.2. Consider
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$$
Clearly, $1 \to 2 \to 3 \to 1$. Therefore, the period of 1 is 3.

Example 4.1.3. Consider a pure birth Markov chain with $P(x, x + 1) = 1$. Then $\{n \ge 1 : P^n(x, x) > 0\} = \emptyset$ and we set $d_x = 0$.

Definition 4.3. A state $x$ is called aperiodic if $d_x = 1$.

Theorem 4.1. Let $I_x = \{n \ge 1 : P^n(x, x) > 0\}$. By definition, $d_x = \gcd I_x$. If $1 \in I_x$, then $d_x = 1$.

Example 4.1.4. Consider a Markov chain with $S = \{0, 1, 2\}$ characterized by the following transition matrix:
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 0 & 1 \end{pmatrix}$$
Find the period of each state.
1. For $x = 0$, we have $I_0 = \{2, 4, 6, \dots\}$ because $0 \to 1 \to 0$. Therefore, $d_0 = 2$.
2. For $x = 1$, we have $I_1 = \{2, 4, 6, \dots\}$ for a similar reason. Therefore, $d_1 = 2$.

3. For $x = 2$, notice that 2 is an absorbing state. Since $2 \to 2$ in one step, we have $1 \in I_2$. Therefore, $d_2 = 1$.

Theorem 4.2. If $x \leftrightarrow y$, then $d_x = d_y$.

Proof. Let $I_x = \{n \ge 1 : P^n(x, x) > 0\}$ and $I_y = \{n \ge 1 : P^n(y, y) > 0\}$. Since $x \leftrightarrow y$, there exist $n_1 \ge 1$, $n_2 \ge 1$ such that $P^{n_1}(x, y) > 0$ and $P^{n_2}(y, x) > 0$. Thus, $P^{n_1 + n_2}(x, x) > 0$. Since $n_1 + n_2 \in I_x$, $d_x$ is a divisor of $n_1 + n_2$. For any $m \in I_y$, we have $P^m(y, y) > 0$. Then
$$P^{n_1 + n_2 + m}(x, x) \ge P^{n_1}(x, y)P^m(y, y)P^{n_2}(y, x) > 0$$
This implies that $n_1 + n_2 + m \in I_x$. Since $d_x$ is a divisor of $n_1 + n_2 + m$ and of $n_1 + n_2$, it follows that $d_x$ is also a divisor of $m$. Thus, $d_x \mid d_y$. Likewise, we can prove that $d_y \mid d_x$. Therefore, $d_x = d_y$.

Recall that we can decompose any state space as follows:
$$S = C_T \cup \left(\bigcup_i C_i^{PR}\right) \cup \left(\bigcup_j C_j^{NR}\right)$$
By the previous theorem, each class has exactly one period.

4.2 Long time behaviour

Theorem 4.3. If $y$ is null recurrent, then
$$\lim_{n\to\infty} P^n(x, y) = 0$$
for all $x$.

Theorem 4.4. Let $\{X_n, n = 0, 1, \dots\}$ be an irreducible positive recurrent Markov chain with period $d$ and stationary distribution $\pi$.
1. If the chain is aperiodic, then
$$\lim_{n\to\infty} P^n(x, y) = \pi(y)$$
2. If the period of the chain is greater than or equal to 2, then for any $x, y \in S$ there exists an integer $0 \le r < d$ such that
$$\lim_{m\to\infty} P^{md + r}(x, y) = d\pi(y).$$
Further, $P^n(x, y) = 0$ if $n \ne md + r$.

Theorem 4.5 (Long time behaviour).
$$\lim_{n\to\infty} P^n(x, y) = \begin{cases} 0 & \text{if } y \text{ is transient or null recurrent} \\ \pi(y) & \text{if } y \text{ is positive recurrent and } d_y = 1 \\ d\pi(y) \text{ along } n = md + r & \text{if } \rho_{xy} = 1,\ y \text{ is positive recurrent, and } d_y \ge 2 \\ 0 & \text{else} \end{cases}$$

Example 4.2.1. Consider a Markov chain characterized by the following transition matrix:
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 0 & 1 \end{pmatrix}$$
Then
$$\lim_{n\to\infty} P^n(x, y) = \begin{cases} 0 & \text{if } y = 0, 1 \\ 1 & \text{if } y = 2 \end{cases}$$

Example 4.2.2. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with $S = \{1, 2, 3, 4\}$ and
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/6 & 1/2 & 1/3 & 0 \\ 0 & 1/3 & 1/2 & 1/6 \\ 0 & 0 & 1/2 & 1/2 \end{pmatrix}$$
Find $\lim_{n\to\infty} P^n$, i.e. the matrix $\big(\lim_{n\to\infty} P^n(x, y)\big)_{x,y \in S}$.

First, notice that $1 \leftrightarrow 2 \leftrightarrow 3 \leftrightarrow 4$, so all states communicate with each other. Since we have a finite state space, we have one closed, irreducible, positive recurrent class. Also, we have
$$d_1 = \gcd\{n \ge 1 : P^n(1, 1) > 0\} = 1$$
since $P^1(1, 1) = 1/2 > 0$. Since all states are positive recurrent and have period 1, we can conclude that
$$\lim_{n\to\infty} P^n(x, y) = \pi(y)$$
for all states in $S$. Therefore,
$$\lim_{n\to\infty} P^n = \begin{pmatrix} 1/8 & 3/8 & 3/8 & 1/8 \\ 1/8 & 3/8 & 3/8 & 1/8 \\ 1/8 & 3/8 & 3/8 & 1/8 \\ 1/8 & 3/8 & 3/8 & 1/8 \end{pmatrix}$$
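This convergence is easy to see numerically. A sketch for Example 4.2.2 (not from the notes):

```python
import numpy as np

P = np.array([[1/2, 1/2, 0,   0  ],
              [1/6, 1/2, 1/3, 0  ],
              [0,   1/3, 1/2, 1/6],
              [0,   0,   1/2, 1/2]])

# Every row of P^n converges to pi = (1/8, 3/8, 3/8, 1/8)
print(np.linalg.matrix_power(P, 100).round(4))
```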

Example 4.2.3. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with $S = \{1, 2, 3, 4\}$ and
$$P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1/3 & 0 & 2/3 & 0 \\ 0 & 2/3 & 0 & 1/3 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
Find $\lim_{n\to\infty} P^{2n}$ and $\lim_{n\to\infty} P^{2n+1}$.

Clearly, this chain is irreducible and positive recurrent. We also know that its stationary distribution is given by
$$\pi = \left(\frac18,\ \frac38,\ \frac38,\ \frac18\right)$$
Observe that
$$d_1 = \gcd\{n \ge 1 : P^n(1, 1) > 0\} = \gcd\{2, 4, 6, \dots\} = 2$$
So there must exist $r$ such that $\lim_{m\to\infty} P^{md + r}(x, y) = d\pi(y)$. Notice that $P^{2n+1}(1, 1) = 0$. Then we must have
$$\lim_{n\to\infty} P^{2n}(1, 1) = 2\pi(1) = \frac14.$$
Likewise, we have $P^{2n}(2, 1) = 0$, therefore we must have
$$\lim_{n\to\infty} P^{2n+1}(2, 1) = 2\pi(1) = \frac14.$$
Therefore, we can conclude that
$$\lim_{n\to\infty} P^{2n} = \begin{pmatrix} 1/4 & 0 & 3/4 & 0 \\ 0 & 3/4 & 0 & 1/4 \\ 1/4 & 0 & 3/4 & 0 \\ 0 & 3/4 & 0 & 1/4 \end{pmatrix}, \qquad \lim_{n\to\infty} P^{2n+1} = \begin{pmatrix} 0 & 3/4 & 0 & 1/4 \\ 1/4 & 0 & 3/4 & 0 \\ 0 & 3/4 & 0 & 1/4 \\ 1/4 & 0 & 3/4 & 0 \end{pmatrix}$$

Example 4.2.4. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a birth-death Markov chain with $S = \{0, 1, 2, \dots\}$ and
$$P = \begin{pmatrix} r_0 & p_0 & & \\ q_1 & r_1 & p_1 & \\ & q_2 & r_2 & p_2 \\ & & \ddots & \ddots \end{pmatrix}$$

Clearly, we have
$$d = \begin{cases} 1 & \text{if } r_i > 0 \text{ for at least one } i \\ 2 & \text{else} \end{cases}$$

Example 4.2.5. Let $\{X_n, n = 0, 1, 2, \dots\}$ be a Markov chain with a finite number of states. Assume that the chain is irreducible and that each column of $P$ adds up to 1 (i.e., $P$ is doubly stochastic). Find the stationary distribution of the chain.

Since each column sums to 1, the uniform distribution $\pi(x) = 1/|S|$ satisfies $\sum_x \pi(x)P(x, y) = \frac{1}{|S|}\sum_x P(x, y) = \frac{1}{|S|} = \pi(y)$; since the chain is finite and irreducible, this is the unique stationary distribution.

5 Continuous Time Markov Chain

In this section, we will still be looking at an at most countable state space. However, we introduce a new concept:

Definition 5.1 (Waiting time). Starting from state $x$, you wait at $x$ for a time $\tau_x$ until the next jump occurs.

In a discrete time Markov chain, $\tau_x$ was fixed, but in continuous time, $\tau_x$ is a random variable. In order to satisfy the Markov property, the waiting time must follow the exponential distribution, the only continuous distribution with the memoryless property.

5.1 Exponential distribution

Definition 5.2 (Exponential distribution). A random variable $X$ that follows an exponential distribution with parameter $\lambda$ has the following properties:
1. $X > 0$
2. $f(x) = \lambda e^{-\lambda x}$ for $x > 0$
3. $E[X] = 1/\lambda$

We observe that if $\lambda$ is large, the waiting time becomes shorter. If $\lambda$ goes to infinity, the waiting time will go to zero. On the other hand, if $\lambda$ goes to 0, the waiting time will go to infinity. These are the two boundary cases.

Example 5.1.1. Let $X_1, X_2$ be two independent exponential random variables with respective parameters $\lambda_1, \lambda_2$. Set $X = \min(X_1, X_2)$. Then $X$ is exponential with parameter $\lambda = \lambda_1 + \lambda_2$.

Proof. Notice that
$$P(X \le x) = 1 - P(X > x) = 1 - P(\min(X_1, X_2) > x) = 1 - P(X_1 > x, X_2 > x) = 1 - P(X_1 > x)P(X_2 > x)$$
Since the CDF of an exponential distribution is $F(x) = 1 - e^{-\lambda x}$, we get
$$1 - P(X_1 > x)P(X_2 > x) = 1 - (1 - F_{\lambda_1}(x))(1 - F_{\lambda_2}(x)) = 1 - e^{-\lambda_1 x}e^{-\lambda_2 x} = 1 - e^{-(\lambda_1 + \lambda_2)x}$$
Therefore, $X$ follows an exponential distribution with parameter $\lambda = \lambda_1 + \lambda_2$.

Theorem 5.1. Let $X_1, X_2, \dots, X_n$ be independent exponential random variables with parameters $\lambda_1, \dots, \lambda_n$. Let $X = \min(X_1, X_2, \dots, X_n)$. Then $X$ is an exponential random variable with parameter $\lambda = \lambda_1 + \lambda_2 + \cdots + \lambda_n$.
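A one-line Monte Carlo check of Example 5.1.1 (a sketch, not from the notes; the rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2 = 2.0, 3.0
# numpy's exponential takes the scale 1/lambda
x = np.minimum(rng.exponential(1 / lam1, 100_000),
               rng.exponential(1 / lam2, 100_000))
print(x.mean(), 1 / (lam1 + lam2))   # both ~ 0.2 = 1/(lam1 + lam2)
```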

Recall that the Markov property is defined as the following:
$$P(X_n = x_n \mid X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = P(X_n = x_n \mid X_{n-1} = x_{n-1})$$
How does this apply in the continuous case?

Definition 5.3. For any $n \ge 1$, let $x_0, x_1, \dots, x_n \in S$. Then the Markov property (along the jump times $\tau_1 < \tau_2 < \cdots$) is defined as:
$$P(X_{\tau_n} = x_n \mid X_0 = x_0, \dots, X_{\tau_{n-1}} = x_{n-1}) = P(X_{\tau_n} = x_n \mid X_{\tau_{n-1}} = x_{n-1})$$
We now introduce a new notation. For any $0 \le s \le t$ and $x, y \in S$, we have
$$P(s, t, x, y) = P_{xy}(s, t) = P(X_t = y \mid X_s = x)$$

Definition 5.4. Let $T = [0, \infty)$ and let $S$ be a finite or countable set. The stochastic process is given by $\{X_t, t \in T\} = \{X_t, t \ge 0\}$. Then:
1. The distribution of $X_0$ is called the initial distribution of the process, denoted by $\pi$.
2. The process $\{X_t, t \ge 0\}$ is a continuous time Markov chain if for any $n \ge 1$, $x_0, x_1, \dots, x_n \in S$, $0 \le t_0 < t_1 < \cdots < t_n$,
$$P(X_{t_n} = x_n \mid X_{t_0} = x_0, \dots, X_{t_{n-1}} = x_{n-1}) = P(X_{t_n} = x_n \mid X_{t_{n-1}} = x_{n-1})$$
3. The Markov chain $\{X_t, t \ge 0\}$ is time homogeneous if for any $t, s \ge 0$,
$$P(X_{t+s} = y \mid X_s = x) = P(X_t = y \mid X_0 = x)$$
4. For a time-homogeneous Markov chain, $P_{xy}(t) = P(X_t = y \mid X_0 = x)$ is called the transition function of the chain.
5. A state $x$ is absorbing if $P(X_t = x \mid X_0 = x) = 1$ for all $t$.
6. $\delta_{xy} = \begin{cases} 1 & \text{if } x = y \\ 0 & \text{else} \end{cases}$
7. $Q = (Q_{xy})$ is a transition probability matrix such that $Q_{xx} = 0$.
8. For each $x \in S$, let $q_x \ge 0$. Set
$$q_{xy} = \begin{cases} -q_x & \text{if } y = x \\ q_x Q_{xy} & \text{if } y \ne x \end{cases}$$

Theorem 5.2. Let $\{X_t, t \ge 0\}$ be a homogeneous continuous time Markov chain with waiting time distributions $\{\exp(q_x), x \in S\}$ and transition probability matrix $Q$. Then we have:
1. Chapman-Kolmogorov equation:
$$P_{xy}(t + s) = \sum_{z \in S} P_{xz}(t)P_{zy}(s)$$

2. Backward equation:
$$P'_{xy}(t) = \sum_{z \in S} q_{xz}P_{zy}(t), \qquad \text{i.e. } P'(t) = AP(t), \text{ where } A = (q_{xy}).$$

Proof. For any $x, y \in S$ and $t, s \ge 0$, we have
$$\begin{aligned}
P_{xy}(t + s) &= P(X_{t+s} = y \mid X_0 = x) = \frac{P(X_{t+s} = y, X_t \in S, X_0 = x)}{P(X_0 = x)} \\
&= \sum_{z \in S}\frac{P(X_{t+s} = y, X_t = z, X_0 = x)}{P(X_0 = x)} \\
&= \sum_{z \in S}\frac{P(X_{t+s} = y, X_t = z, X_0 = x)}{P(X_t = z, X_0 = x)}\cdot\frac{P(X_t = z, X_0 = x)}{P(X_0 = x)} \\
&= \sum_{z \in S}P(X_{t+s} = y \mid X_0 = x, X_t = z)P(X_t = z \mid X_0 = x) \\
&= \sum_{z \in S}P(X_{t+s} = y \mid X_t = z)P(X_t = z \mid X_0 = x) = \sum_{z \in S}P_{zy}(s)P_{xz}(t)
\end{aligned}$$
This proves the first statement. Now, we prove the second statement:
$$\begin{aligned}
P_{xy}(t) &= P(X_t = y \mid X_0 = x) = \underbrace{P(\tau_1 > t, X_t = y \mid X_0 = x)}_{\text{no jump occurs}} + P(\tau_1 \le t, X_t = y \mid X_0 = x) \\
&= \delta_{xy}\underbrace{P(\tau_1 > t)}_{e^{-q_x t}} + \sum_{z \ne x}P(\tau_1 \le t, X_{\tau_1} = z, X_t = y \mid X_0 = x) \\
&= \delta_{xy}e^{-q_x t} + \sum_{z \ne x}\int_0^t q_x e^{-q_x s}Q_{xz}P_{zy}(t - s)\,ds \\
&= \delta_{xy}e^{-q_x t} + \int_0^t q_x e^{-q_x(t - u)}\sum_{z \ne x}Q_{xz}P_{zy}(u)\,du \\
&= \delta_{xy}e^{-q_x t} + e^{-q_x t}\int_0^t q_x e^{q_x u}\sum_{z \ne x}Q_{xz}P_{zy}(u)\,du
\end{aligned}$$

Then, differentiating, we get
$$\begin{aligned}
P'_{xy}(t) &= -q_x e^{-q_x t}\delta_{xy} - q_x e^{-q_x t}\int_0^t q_x e^{q_x u}\sum_{z \ne x}Q_{xz}P_{zy}(u)\,du + \sum_{z \ne x}q_x Q_{xz}P_{zy}(t) \\
&= -q_x P_{xy}(t) + \sum_{z \ne x}q_x Q_{xz}P_{zy}(t) \\
&= q_{xx}P_{xy}(t) + \sum_{z \ne x}q_{xz}P_{zy}(t) = \sum_z q_{xz}P_{zy}(t)
\end{aligned}$$
This completes the proof.

So the matrix $A$ determines the existence of the Markov chain and is analogous to the transition matrix of a discrete time Markov chain. Notice that $Q$ provides the jumping mechanism and $q_x$ provides the waiting mechanism. So the matrix $A$, which is a combination of $q_x$ and $Q$, is fundamental in continuous time Markov chains.

What happens if we let $t \to 0$? We get
$$P'_{xy}(0) = \sum_{z \in S}q_{xz}P_{zy}(0)$$
We can consider two cases here:
1. $y = x$: $P'_{xx}(0) = q_{xx} = -q_x$.
2. $y \ne x$: $P'_{xy}(0) = q_{xy}$.

Now, notice that $A = (q_{xy})$ is not a transition matrix. We can first look at some of its properties:
- $q_{xy} \in [0, \infty)$ for $y \ne x$
- $q_{xx} \le 0$
- $\sum_y q_{xy} = 0$

Let's take a look at an example:

Example 5.1.2. Consider the following matrix:
$$A = \begin{pmatrix} -4 & 3 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix}$$
If we start from state 1, we are very likely to move to other states soon (notice that the magnitude of $A_{11}$ is large). If we start from state 3, we are not as likely to move to other states. This directly translates to waiting time.

Also, we notice that $A_{21} = A_{23}$ and $A_{31} = A_{32}$. This implies that you are equally likely to move to either of the other states if you start from state 2 or 3. On the other hand, $A_{12} > A_{13}$, so if you start from state 1, you are more likely to move to state 2 than to state 3.

Let's go back to the Chapman-Kolmogorov equation:
$$P_{xy}(t + h) = \sum_{z \in S}P_{xz}(t)P_{zy}(h)$$
We can then subtract $P_{xy}(t)$ and divide both sides by $h$:
$$\frac{P_{xy}(t + h) - P_{xy}(t)}{h} = \frac{\sum_{z \in S}P_{xz}(t)P_{zy}(h) - P_{xy}(t)}{h}$$
As we let $h \to 0$ (using $P_{zy}(h) = \delta_{zy} + q_{zy}h + o(h)$), we get the forward equation:
$$P'_{xy}(t) = \sum_z P_{xz}(t)q_{zy}$$

Example 5.1.3 (Poisson process). Consider
$$A = \begin{pmatrix} -\lambda & \lambda & 0 & 0 & \cdots \\ 0 & -\lambda & \lambda & 0 & \cdots \\ 0 & 0 & -\lambda & \lambda & \cdots \\ \vdots & & & \ddots & \ddots \end{pmatrix}$$
Find the distribution of $X_t$, with $X_0 = 0$.

First, notice that
$$P_{00}(t) = P(\tau_1 > t) = 1 - P(\tau_1 \le t) = 1 - (1 - e^{-\lambda t}) = e^{-\lambda t}$$
Then, by using the forward equation, we can compute $P_{01}(t)$:
$$P'_{01}(t) = \sum_{z \in S}P_{0z}(t)q_{z1} = P_{00}(t)q_{01} + P_{01}(t)q_{11} = \lambda P_{00}(t) - \lambda P_{01}(t)$$
Solving the differential equation, we get
$$P_{01}(t) = \lambda t e^{-\lambda t}.$$

By induction, we find that
$$e^{\lambda t}P_{0n}(t) = \int_0^t \lambda e^{\lambda s}P_{0(n-1)}(s)\,ds = \int_0^t \lambda e^{\lambda s}\frac{(\lambda s)^{n-1}}{(n-1)!}e^{-\lambda s}\,ds$$
Therefore, we get
$$P_{0n}(t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}$$
In other words, $X_t$ is Poisson with mean $\lambda t$.
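A simulation sketch for the Poisson process of Example 5.1.3 (not from the notes; the parameter values are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
lam, t, n = 1.5, 2.0, 3

# Jump times are sums of i.i.d. exp(lambda) waiting times;
# X_t counts the jumps made by time t (30 waits is ample here).
waits = rng.exponential(1 / lam, size=(100_000, 30))
X_t = (np.cumsum(waits, axis=1) <= t).sum(axis=1)

print((X_t == n).mean())                                        # empirical
print((lam * t) ** n * math.exp(-lam * t) / math.factorial(n))  # (λt)^n e^{-λt}/n!
```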

Example 5.1.4. Consider a Markov chain with state space $S = \{0, 1\}$ where
$$A = \begin{pmatrix} -\lambda & \lambda \\ \mu & -\mu \end{pmatrix}$$
Find $P_{xy}(t)$ for $x, y \in S$.

In this example, we can use the backward equation. First, observe that
$$P'_{00}(t) = \sum_{z \in S}q_{0z}P_{z0}(t) = q_{00}P_{00}(t) + q_{01}P_{10}(t) = -\lambda P_{00}(t) + \lambda P_{10}(t)$$
Likewise, we have
$$P'_{10}(t) = \sum_{z \in S}q_{1z}P_{z0}(t) = q_{10}P_{00}(t) + q_{11}P_{10}(t) = \mu P_{00}(t) - \mu P_{10}(t)$$
Combining the two equations, we get
$$P'_{00}(t) - P'_{10}(t) = -(\lambda + \mu)\big(P_{00}(t) - P_{10}(t)\big).$$
Solving the differential equation (with $P_{00}(0) - P_{10}(0) = 1$), we get
$$P_{00}(t) - P_{10}(t) = e^{-(\lambda + \mu)t}$$
Then we have
$$P'_{00}(t) = -\lambda P_{00}(t) + \lambda\big(P_{00}(t) - e^{-(\lambda + \mu)t}\big) = -\lambda e^{-(\lambda + \mu)t}$$
Integrating,
$$P_{00}(t) - P_{00}(0) = -\lambda\int_0^t e^{-(\lambda + \mu)s}\,ds = -\frac{\lambda}{\lambda + \mu}\left(1 - e^{-(\lambda + \mu)t}\right)$$
Since $P_{00}(0) = 1$,
$$P_{00}(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu}e^{-(\lambda + \mu)t}$$
Lastly, we have
$$P_{10}(t) = \frac{\mu}{\lambda + \mu} - \frac{\mu}{\lambda + \mu}e^{-(\lambda + \mu)t}$$

Example 5.1.5 (Birth-death, continuous time). We can write a general infinitesimal matrix for a continuous time birth-death Markov chain as follows:
$$A = \begin{pmatrix} -\lambda_0 & \lambda_0 & & \\ \mu_1 & -(\lambda_1 + \mu_1) & \lambda_1 & \\ & \mu_2 & -(\lambda_2 + \mu_2) & \lambda_2 \\ & & \ddots & \ddots \end{pmatrix}$$

Example 5.1.6. Consider an infinitesimal matrix $A$ whose rows 0 and 5 are identically zero, i.e. $q_0 = q_5 = 0$. Then states 0 and 5 are absorbing, so $P_{00}(t) = 1$ and $P_{55}(t) = 1$; and if state 7 is not reachable from state 1, $P_{17}(t) = 0$.
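For a finite state space, the backward equation $P'(t) = AP(t)$ with $P(0) = I$ is solved by the matrix exponential $P(t) = e^{tA}$, which gives a quick check of Example 5.1.4. A sketch (not from the notes; it assumes SciPy is available):

```python
import numpy as np
from scipy.linalg import expm

lam, mu, t = 2.0, 1.0, 0.7
A = np.array([[-lam,  lam],
              [  mu,  -mu]])

print(expm(t * A))   # P(t), solving P'(t) = A P(t), P(0) = I
print(mu / (lam + mu) + lam / (lam + mu) * np.exp(-(lam + mu) * t))  # P_00(t)
print(mu / (lam + mu) - mu  / (lam + mu) * np.exp(-(lam + mu) * t))  # P_10(t)
```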

Example 5.1.7 (Branching process). Each individual waits an exponential time with parameter $\lambda > 0$, independently. At the end of the waiting time, the individual produces 2 offspring with probability $p$ or no offspring with probability $1 - p$. Let $X_t$ be the total number of individuals at time $t$. It is clear that $S = \{0, 1, 2, \dots\}$.

First, we know that $q_{00} = 0$ because if you have no population, it stays at 0 forever and no offspring will ever be produced. It directly follows that $q_{0i} = 0$ for all $i > 0$. Now we look at $q_{11}$. Since we have only one individual, whose waiting time has parameter $\lambda$, we have $q_{11} = -\lambda$. Using the probabilities given, we get $q_{10} = (1 - p)\lambda$ and $q_{12} = p\lambda$.

Notice that each individual has an independent waiting time distribution, so two individuals cannot jump at exactly the same time: $P(\tau_i = \tau_j) = 0$ for all $i \ne j$, where $\tau_i$ is the waiting time of an individual. So from state $x$ you can only go to state $x + 1$ (one individual produces 2 offspring) or $x - 1$ (one individual produces 0 offspring). Now, we want to compute $q_{xx}$. The jump happens when the first of the $x$ individuals reproduces, so we want the minimum of $x$ waiting times. By Theorem 5.1, the minimum waiting time is exponential with parameter $\lambda x$. So we get $q_{xx} = -\lambda x$, and it directly follows that
$$q_{x,x+1} = \lambda x p, \qquad q_{x,x-1} = \lambda x(1 - p).$$

Example 5.1.8 (Infinite server queueing model). Customers arrive for service according to a Poisson process with parameter $\lambda$. Each customer begins service immediately after arrival; service times are exponential with parameter $\mu$, and all services are independent.

We first look at the jump matrix $Q$. Notice that a customer arrives at rate $\lambda$ and each of the $x$ customers present can finish service at per-customer rate $\mu$. Then, clearly, the probability of going from $x$ to $x + 1$ is $\lambda/(\lambda + x\mu)$ and the probability of going from $x$ to $x - 1$ is $x\mu/(\lambda + x\mu)$, where $x$ is the number of customers. So we can write the infinitesimal matrix as follows:
$$A = \begin{pmatrix} -\lambda & \lambda & & \\ \mu & -(\lambda + \mu) & \lambda & \\ & 2\mu & -(\lambda + 2\mu) & \lambda \\ & & \ddots & \ddots \end{pmatrix}$$

Example 5.1.9 (Pure birth Markov chain). For a pure birth Markov chain, we have $\mu_x = 0$ for all $x$. Then the forward equation is given by
$$P'_{xy}(t) = \sum_{z \in S}P_{xz}(t)q_{zy} = P_{x(y-1)}(t)q_{(y-1)y} + P_{xy}(t)q_{yy} = \lambda_{y-1}P_{x(y-1)}(t) - \lambda_y P_{xy}(t)$$
Now, we want to solve this system of equations. If $y < x$, then $P_{xy}(t) = 0$. If $y = x$, then we have
$$P'_{xx}(t) = -\lambda_x P_{xx}(t), \qquad P_{xx}(0) = 1 \implies P_{xx}(t) = e^{-\lambda_x t}$$
If $y = x + 1$, we get
$$P'_{x(x+1)}(t) = -\lambda_{x+1}P_{x(x+1)}(t) + \lambda_x P_{xx}(t) = -\lambda_{x+1}P_{x(x+1)}(t) + \lambda_x e^{-\lambda_x t},$$
which yields
$$P'_{x(x+1)}(t) + \lambda_{x+1}P_{x(x+1)}(t) = \lambda_x e^{-\lambda_x t}.$$
Integrating, we get
$$P_{x(x+1)}(t) = \begin{cases} \lambda_x t e^{-\lambda_{x+1}t} & \lambda_{x+1} = \lambda_x \\ \dfrac{\lambda_x}{\lambda_{x+1} - \lambda_x}e^{-\lambda_{x+1}t}\left(e^{(\lambda_{x+1} - \lambda_x)t} - 1\right) & \lambda_{x+1} \ne \lambda_x \end{cases}$$
Lastly, if $y > x + 1$, we get
$$P'_{xy}(t) = -\lambda_y P_{xy}(t) + \lambda_{y-1}P_{x(y-1)}(t)$$
$$e^{\lambda_y t}\big(P'_{xy}(t) + \lambda_y P_{xy}(t)\big) = \lambda_{y-1}P_{x(y-1)}(t)e^{\lambda_y t}$$
$$\big(e^{\lambda_y t}P_{xy}(t)\big)' = \lambda_{y-1}P_{x(y-1)}(t)e^{\lambda_y t}$$
This gives the general pure birth process:
$$P_{xy}(t) = \lambda_{y-1}\int_0^t e^{-\lambda_y(t - s)}P_{x(y-1)}(s)\,ds.$$

Example 5.1.10 (Pure death). A pure death process has $\mu_x > 0$ and $\lambda_x = 0$ for all $x$. This relates to a biological process called the coalescent.

Example 5.1.11 (Yule process). The Yule process is a linear-growth Markov chain with $\lambda_x = ax$, where $a > 0$. Essentially, it is a special type of pure birth process, so let's find the solution. When $y < x$, we get $P_{xy}(t) = 0$. When $y = x$, we get $P_{xx}(t) = e^{-axt}$. When $y = x + 1$, we get
$$P_{x(x+1)}(t) = \frac{\lambda_x}{\lambda_{x+1} - \lambda_x}\left(e^{-\lambda_x t} - e^{-\lambda_{x+1}t}\right) = x\left(e^{-at}\right)^x\left(1 - e^{-at}\right).$$
When $y = x + 2$, we get
$$P_{x(x+2)}(t) = \lambda_{x+1}\int_0^t e^{-\lambda_{x+2}(t - s)}P_{x(x+1)}(s)\,ds = \binom{x + 1}{2}\left(e^{-at}\right)^x\left(1 - e^{-at}\right)^2$$
By induction, we get
$$P_{xy}(t) = \binom{y - 1}{y - x}\left(e^{-at}\right)^x\left(1 - e^{-at}\right)^{y - x}.$$
This is negative binomial!
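The negative binomial form of $P_{xy}(t)$ can be checked by simulating the Yule process directly. A sketch (not from the notes; the parameters $a$, $t$, $x$, $y$ are illustrative choices):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
a, t, x = 1.0, 0.8, 2          # birth rate a, horizon t, X_0 = x

def yule(x0):
    """One Yule path: from size n, wait exp(a*n), then jump n -> n + 1."""
    n, clock = x0, 0.0
    while True:
        clock += rng.exponential(1 / (a * n))
        if clock > t:
            return n
        n += 1

samples = np.array([yule(x) for _ in range(50_000)])
y = 5
print((samples == y).mean())   # empirical P_xy(t)
print(comb(y - 1, y - x) * np.exp(-a * t) ** x
      * (1 - np.exp(-a * t)) ** (y - x))   # C(y-1, y-x) e^{-axt}(1-e^{-at})^{y-x}
```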

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015

Markov chains. 1 Discrete time Markov chains. c A. J. Ganesh, University of Bristol, 2015 Markov chains c A. J. Ganesh, University of Bristol, 2015 1 Discrete time Markov chains Example: A drunkard is walking home from the pub. There are n lampposts between the pub and his home, at each of

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

Summary of Results on Markov Chains. Abstract

Summary of Results on Markov Chains. Abstract Summary of Results on Markov Chains Enrico Scalas 1, 1 Laboratory on Complex Systems. Dipartimento di Scienze e Tecnologie Avanzate, Università del Piemonte Orientale Amedeo Avogadro, Via Bellini 25 G,

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

Stochastic Processes (Week 6)

Stochastic Processes (Week 6) Stochastic Processes (Week 6) October 30th, 2014 1 Discrete-time Finite Markov Chains 2 Countable Markov Chains 3 Continuous-Time Markov Chains 3.1 Poisson Process 3.2 Finite State Space 3.2.1 Kolmogrov

More information

A review of Continuous Time MC STA 624, Spring 2015

A review of Continuous Time MC STA 624, Spring 2015 A review of Continuous Time MC STA 624, Spring 2015 Ruriko Yoshida Dept. of Statistics University of Kentucky polytopes.net STA 624 1 Continuous Time Markov chains Definition A continuous time stochastic

More information

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution.

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution. Lecture 7 1 Stationary measures of a Markov chain We now study the long time behavior of a Markov Chain: in particular, the existence and uniqueness of stationary measures, and the convergence of the distribution

More information

Statistics 150: Spring 2007

Statistics 150: Spring 2007 Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities

More information

Continuous-Time Markov Chain

Continuous-Time Markov Chain Continuous-Time Markov Chain Consider the process {X(t),t 0} with state space {0, 1, 2,...}. The process {X(t),t 0} is a continuous-time Markov chain if for all s, t 0 and nonnegative integers i, j, x(u),

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

The Transition Probability Function P ij (t)

The Transition Probability Function P ij (t) The Transition Probability Function P ij (t) Consider a continuous time Markov chain {X(t), t 0}. We are interested in the probability that in t time units the process will be in state j, given that it

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

2 Discrete-Time Markov Chains

2 Discrete-Time Markov Chains 2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

MARKOV PROCESSES. Valerio Di Valerio

MARKOV PROCESSES. Valerio Di Valerio MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

Lecture 4a: Continuous-Time Markov Chain Models

Lecture 4a: Continuous-Time Markov Chain Models Lecture 4a: Continuous-Time Markov Chain Models Continuous-time Markov chains are stochastic processes whose time is continuous, t [0, ), but the random variables are discrete. Prominent examples of continuous-time

More information

88 CONTINUOUS MARKOV CHAINS

88 CONTINUOUS MARKOV CHAINS 88 CONTINUOUS MARKOV CHAINS 3.4. birth-death. Continuous birth-death Markov chains are very similar to countable Markov chains. One new concept is explosion which means that an infinite number of state

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

Probability Distributions

Probability Distributions Lecture : Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function (mgf)

More information

Math 597/697: Solution 5

Math 597/697: Solution 5 Math 597/697: Solution 5 The transition between the the ifferent states is governe by the transition matrix 0 6 3 6 0 2 2 P = 4 0 5, () 5 4 0 an v 0 = /4, v = 5, v 2 = /3, v 3 = /5 Hence the generator

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

1 Continuous-time chains, finite state space

1 Continuous-time chains, finite state space Université Paris Diderot 208 Markov chains Exercises 3 Continuous-time chains, finite state space Exercise Consider a continuous-time taking values in {, 2, 3}, with generator 2 2. 2 2 0. Draw the diagramm

More information

Homework set 4 - Solutions

Homework set 4 - Solutions Homework set 4 - Solutions Math 495 Renato Feres Probability background for continuous time Markov chains This long introduction is in part about topics likely to have been covered in math 3200 or math

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015 ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

Readings: Finish Section 5.2

Readings: Finish Section 5.2 LECTURE 19 Readings: Finish Section 5.2 Lecture outline Markov Processes I Checkout counter example. Markov process: definition. -step transition probabilities. Classification of states. Example: Checkout

More information

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC)

Outlines. Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) Markov Chains (2) Outlines Discrete Time Markov Chain (DTMC) Continuous Time Markov Chain (CTMC) 2 pj ( n) denotes the pmf of the random variable p ( n) P( X j) j We will only be concerned with homogenous

More information

LTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather

LTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather 1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in

More information

THE QUEEN S UNIVERSITY OF BELFAST

THE QUEEN S UNIVERSITY OF BELFAST THE QUEEN S UNIVERSITY OF BELFAST 0SOR20 Level 2 Examination Statistics and Operational Research 20 Probability and Distribution Theory Wednesday 4 August 2002 2.30 pm 5.30 pm Examiners { Professor R M

More information

11.4. RECURRENCE AND TRANSIENCE 93

11.4. RECURRENCE AND TRANSIENCE 93 11.4. RECURRENCE AND TRANSIENCE 93 Similar arguments show that simple smmetric random walk is also recurrent in 2 dimensions but transient in 3 or more dimensions. Proposition 11.10 If x is recurrent and

More information

Lecture 20: Reversible Processes and Queues

Lecture 20: Reversible Processes and Queues Lecture 20: Reversible Processes and Queues 1 Examples of reversible processes 11 Birth-death processes We define two non-negative sequences birth and death rates denoted by {λ n : n N 0 } and {µ n : n

More information

Quantitative Model Checking (QMC) - SS12

Quantitative Model Checking (QMC) - SS12 Quantitative Model Checking (QMC) - SS12 Lecture 06 David Spieler Saarland University, Germany June 4, 2012 1 / 34 Deciding Bisimulations 2 / 34 Partition Refinement Algorithm Notation: A partition P over

More information

Applied Stochastic Processes

Applied Stochastic Processes Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ),

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ), MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 4: Steady-State Theory Contents 4.1 The Concept of Stochastic Equilibrium.......................... 1 4.2

More information

Stat 150 Practice Final Spring 2015

Stat 150 Practice Final Spring 2015 Stat 50 Practice Final Spring 205 Instructor: Allan Sly Name: SID: There are 8 questions. Attempt all questions and show your working - solutions without explanation will not receive full credit. Answer

More information

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is MARKOV CHAINS What I will talk about in class is pretty close to Durrett Chapter 5 sections 1-5. We stick to the countable state case, except where otherwise mentioned. Lecture 7. We can regard (p(i, j))

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

LECTURE #6 BIRTH-DEATH PROCESS

LECTURE #6 BIRTH-DEATH PROCESS LECTURE #6 BIRTH-DEATH PROCESS 204528 Queueing Theory and Applications in Networks Assoc. Prof., Ph.D. (รศ.ดร. อน นต ผลเพ ม) Computer Engineering Department, Kasetsart University Outline 2 Birth-Death

More information

Probability and Statistics Concepts

Probability and Statistics Concepts University of Central Florida Computer Science Division COT 5611 - Operating Systems. Spring 014 - dcm Probability and Statistics Concepts Random Variable: a rule that assigns a numerical value to each

More information

1 IEOR 4701: Continuous-Time Markov Chains

1 IEOR 4701: Continuous-Time Markov Chains Copyright c 2006 by Karl Sigman 1 IEOR 4701: Continuous-Time Markov Chains A Markov chain in discrete time, {X n : n 0}, remains in any state for exactly one unit of time before making a transition (change

More information

Lecture 5: Random Walks and Markov Chain

Lecture 5: Random Walks and Markov Chain Spectral Graph Theory and Applications WS 20/202 Lecture 5: Random Walks and Markov Chain Lecturer: Thomas Sauerwald & He Sun Introduction to Markov Chains Definition 5.. A sequence of random variables

More information

http://www.math.uah.edu/stat/markov/.xhtml 1 of 9 7/16/2009 7:20 AM Virtual Laboratories > 16. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 1. A Markov process is a random process in which the future is

More information

Probability Distributions

Probability Distributions Lecture 1: Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function

More information

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1 Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate

More information

Positive and null recurrent-branching Process

Positive and null recurrent-branching Process December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?

More information

MATH HOMEWORK PROBLEMS D. MCCLENDON

MATH HOMEWORK PROBLEMS D. MCCLENDON MATH 46- HOMEWORK PROBLEMS D. MCCLENDON. Consider a Markov chain with state space S = {0, }, where p = P (0, ) and q = P (, 0); compute the following in terms of p and q: (a) P (X 2 = 0 X = ) (b) P (X

More information

Lecture 9 Classification of States

Lecture 9 Classification of States Lecture 9: Classification of States of 27 Course: M32K Intro to Stochastic Processes Term: Fall 204 Instructor: Gordan Zitkovic Lecture 9 Classification of States There will be a lot of definitions and

More information

Exercises Stochastic Performance Modelling. Hamilton Institute, Summer 2010

Exercises Stochastic Performance Modelling. Hamilton Institute, Summer 2010 Exercises Stochastic Performance Modelling Hamilton Institute, Summer Instruction Exercise Let X be a non-negative random variable with E[X ]

More information

4 Branching Processes

4 Branching Processes 4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring

More information

T. Liggett Mathematics 171 Final Exam June 8, 2011

T. Liggett Mathematics 171 Final Exam June 8, 2011 T. Liggett Mathematics 171 Final Exam June 8, 2011 1. The continuous time renewal chain X t has state space S = {0, 1, 2,...} and transition rates (i.e., Q matrix) given by q(n, n 1) = δ n and q(0, n)

More information

Markov Chains on Countable State Space

Markov Chains on Countable State Space Markov Chains on Countable State Space 1 Markov Chains Introduction 1. Consider a discrete time Markov chain {X i, i = 1, 2,...} that takes values on a countable (finite or infinite) set S = {x 1, x 2,...},

More information

Continuous time Markov chains

Continuous time Markov chains Continuous time Markov chains Alejandro Ribeiro Dept. of Electrical and Systems Engineering University of Pennsylvania aribeiro@seas.upenn.edu http://www.seas.upenn.edu/users/~aribeiro/ October 16, 2017

More information

Probability Models. 4. What is the definition of the expectation of a discrete random variable?

Probability Models. 4. What is the definition of the expectation of a discrete random variable? 1 Probability Models The list of questions below is provided in order to help you to prepare for the test and exam. It reflects only the theoretical part of the course. You should expect the questions

More information

Math Homework 5 Solutions

Math Homework 5 Solutions Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram

More information

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 9: Markov Chain Monte Carlo Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 9: Markov Chain Monte Carlo 9.1 Markov Chain A Markov Chain Monte

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

STAT 380 Continuous Time Markov Chains

STAT 380 Continuous Time Markov Chains STAT 380 Continuous Time Markov Chains Richard Lockhart Simon Fraser University Spring 2018 Richard Lockhart (Simon Fraser University)STAT 380 Continuous Time Markov Chains Spring 2018 1 / 35 Continuous

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem 1 Gambler s Ruin Problem Consider a gambler who starts with an initial fortune of $1 and then on each successive gamble either wins $1 or loses $1 independent of the past with probabilities p and q = 1

More information

Birth-Death Processes

Birth-Death Processes Birth-Death Processes Birth-Death Processes: Transient Solution Poisson Process: State Distribution Poisson Process: Inter-arrival Times Dr Conor McArdle EE414 - Birth-Death Processes 1/17 Birth-Death

More information

Lectures on Probability and Statistical Models

Lectures on Probability and Statistical Models Lectures on Probability and Statistical Models Phil Pollett Professor of Mathematics The University of Queensland c These materials can be used for any educational purpose provided they are are not altered

More information

Time Reversibility and Burke s Theorem

Time Reversibility and Burke s Theorem Queuing Analysis: Time Reversibility and Burke s Theorem Hongwei Zhang http://www.cs.wayne.edu/~hzhang Acknowledgement: this lecture is partially based on the slides of Dr. Yannis A. Korilis. Outline Time-Reversal

More information

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected 4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X

More information

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some

More information

UCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, Practice Final Examination (Winter 2017)

UCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, Practice Final Examination (Winter 2017) UCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, 208 Practice Final Examination (Winter 207) There are 6 problems, each problem with multiple parts. Your answer should be as clear and readable

More information

CONVERGENCE THEOREM FOR FINITE MARKOV CHAINS. Contents

CONVERGENCE THEOREM FOR FINITE MARKOV CHAINS. Contents CONVERGENCE THEOREM FOR FINITE MARKOV CHAINS ARI FREEDMAN Abstract. In this expository paper, I will give an overview of the necessary conditions for convergence in Markov chains on finite state spaces.

More information

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018 Math 456: Mathematical Modeling Tuesday, March 6th, 2018 Markov Chains: Exit distributions and the Strong Markov Property Tuesday, March 6th, 2018 Last time 1. Weighted graphs. 2. Existence of stationary

More information

µ n 1 (v )z n P (v, )

µ n 1 (v )z n P (v, ) Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic

More information

Continuous Time Processes

Continuous Time Processes page 102 Chapter 7 Continuous Time Processes 7.1 Introduction In a continuous time stochastic process (with discrete state space), a change of state can occur at any time instant. The associated point

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

Slides 8: Statistical Models in Simulation

Slides 8: Statistical Models in Simulation Slides 8: Statistical Models in Simulation Purpose and Overview The world the model-builder sees is probabilistic rather than deterministic: Some statistical model might well describe the variations. An

More information

Chapter 2. Markov Chains. Introduction

Chapter 2. Markov Chains. Introduction Chapter 2 Markov Chains Introduction A Markov chain is a sequence of random variables {X n ; n = 0, 1, 2,...}, defined on some probability space (Ω, F, IP), taking its values in a set E which could be

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information

Applied Stochastic Processes

Applied Stochastic Processes STAT455/855 Fall 26 Applied Stochastic Processes Final Exam, Brief Solutions 1 (15 marks (a (7 marks For 3 j n, starting at the jth best point we condition on the rank R of the point we jump to next By

More information

x x 1 0 1/(N 1) (N 2)/(N 1)

x x 1 0 1/(N 1) (N 2)/(N 1) Please simplify your answers to the extent reasonable without a calculator, show your work, and explain your answers, concisely. If you set up an integral or a sum that you cannot evaluate, leave it as

More information

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking Lecture 7: Simulation of Markov Processes Pasi Lassila Department of Communications and Networking Contents Markov processes theory recap Elementary queuing models for data networks Simulation of Markov

More information

Chapter 5. Statistical Models in Simulations 5.1. Prof. Dr. Mesut Güneş Ch. 5 Statistical Models in Simulations

Chapter 5. Statistical Models in Simulations 5.1. Prof. Dr. Mesut Güneş Ch. 5 Statistical Models in Simulations Chapter 5 Statistical Models in Simulations 5.1 Contents Basic Probability Theory Concepts Discrete Distributions Continuous Distributions Poisson Process Empirical Distributions Useful Statistical Models

More information

Continuous Time Markov Chains

Continuous Time Markov Chains Continuous Time Markov Chains Stochastic Processes - Lecture Notes Fatih Cavdur to accompany Introduction to Probability Models by Sheldon M. Ross Fall 2015 Outline Introduction Continuous-Time Markov

More information

Stochastic Processes MIT, fall 2011 Day by day lecture outline and weekly homeworks. A) Lecture Outline Suggested reading

Stochastic Processes MIT, fall 2011 Day by day lecture outline and weekly homeworks. A) Lecture Outline Suggested reading Stochastic Processes 18445 MIT, fall 2011 Day by day lecture outline and weekly homeworks A) Lecture Outline Suggested reading Part 1: Random walk on Z Lecture 1: thursday, september 8, 2011 Presentation

More information

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 2. Poisson Processes Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Outline Introduction to Poisson Processes Definition of arrival process Definition

More information

1 Types of stochastic models

1 Types of stochastic models 1 Types of stochastic models Models so far discussed are all deterministic, meaning that, if the present state were perfectly known, it would be possible to predict exactly all future states. We have seen

More information

De los ejercicios de abajo (sacados del libro de Georgii, Stochastics) se proponen los siguientes:

De los ejercicios de abajo (sacados del libro de Georgii, Stochastics) se proponen los siguientes: Probabilidades y Estadística (M) Práctica 7 2 cuatrimestre 2018 Cadenas de Markov De los ejercicios de abajo (sacados del libro de Georgii, Stochastics) se proponen los siguientes: 6,2, 6,3, 6,7, 6,8,

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008

STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008 Name STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008 There are five questions on this test. DO use calculators if you need them. And then a miracle occurs is not a valid answer. There

More information

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe

Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem. Wade Trappe Queuing Networks: Burke s Theorem, Kleinrock s Approximation, and Jackson s Theorem Wade Trappe Lecture Overview Network of Queues Introduction Queues in Tandem roduct Form Solutions Burke s Theorem What

More information

Modelling Complex Queuing Situations with Markov Processes

Modelling Complex Queuing Situations with Markov Processes Modelling Complex Queuing Situations with Markov Processes Jason Randal Thorne, School of IT, Charles Sturt Uni, NSW 2795, Australia Abstract This article comments upon some new developments in the field

More information

Markov Chains and MCMC

Markov Chains and MCMC Markov Chains and MCMC Markov chains Let S = {1, 2,..., N} be a finite set consisting of N states. A Markov chain Y 0, Y 1, Y 2,... is a sequence of random variables, with Y t S for all points in time

More information

Solutions to Homework Discrete Stochastic Processes MIT, Spring 2011

Solutions to Homework Discrete Stochastic Processes MIT, Spring 2011 Exercise 6.5: Solutions to Homework 0 6.262 Discrete Stochastic Processes MIT, Spring 20 Consider the Markov process illustrated below. The transitions are labelled by the rate q ij at which those transitions

More information

14 Branching processes

14 Branching processes 4 BRANCHING PROCESSES 6 4 Branching processes In this chapter we will consider a rom model for population growth in the absence of spatial or any other resource constraints. So, consider a population of

More information