ISyE 6761 (Fall 2016) Stochastic Processes I


ISyE 6761 (Fall 2016) Stochastic Processes I
Prof. H. Ayhan, Georgia Institute of Technology
LaTeXer: W. Kong. Last Revision: May 25, 2017

Table of Contents

1 Probability Theory: Convolution; Generating Functions; Branching Processes; Continuity Theorem; Random Walk
2 Discrete Time Markov Chains: State Space Decomposition; Computation of f_ij^(n); Periodicity; Solidarity Properties; More State Space Decomposition; Absorption Probabilities
3 Stationary Distributions: Limiting Distribution
4 Renewal Theory: Convolution; Laplace Transform; Renewal Functions; Renewal Reward Process; Renewal Equation; Direct Riemann Integrability; Regenerative Processes; Poisson Random Variable

These notes are currently a work in progress, and as such may be incomplete or contain errors.

ACKNOWLEDGMENTS: Special thanks to Michael Baker and his LaTeX formatted notes. They were the inspiration for the structure of these notes.

Abstract: The purpose of these notes is to provide the reader with a secondary reference to the material covered in ISyE 6761.

Errata

Test 1 - October 13th; Test 2 - November 17th
Breakdown of grading: Test 1 (30%), Test 2 (30%), Assignments (10%), Final (30%)
All material will be posted on T-Square
Instructor office hours: 329 Groseclose (T, W, Th) 12:30pm-1:30pm
TA (Rui Gao): 425E Main Building (F) 12:30pm-2:30pm

1 Probability Theory

Definition 1.1. A stochastic process is a collection of random variables {X(t) : t ∈ T} defined on a common probability space, indexed by T. For example, X(t) can be the number of customers in a service system at time t, or the number of arrivals to a queueing system during the n-th interarrival time.

Example 1.1. (Non-negative integer valued random variables) Let X be a random variable taking values in {0, 1, 2, ..., ∞}. Define p_k = P(X = k) for k = 0, 1, 2, ..., so that P(X < ∞) = Σ_{k=0}^∞ p_k and P(X = ∞) = 1 - Σ_{k=0}^∞ p_k. Define

E(X) = ∞ if P(X = ∞) > 0, and E(X) = Σ_{k=0}^∞ k p_k if P(X = ∞) = 0.

If f : {0, 1, ..., ∞} → [0, ∞], we can also define E[f(X)] = Σ_{k=0}^∞ f(k) p_k. If f : {0, 1, ..., ∞} → [-∞, ∞], we can define

E[f⁺(X)] = Σ_{k=0}^∞ f⁺(k) p_k, where f⁺ = max[f, 0]
E[f⁻(X)] = Σ_{k=0}^∞ f⁻(k) p_k, where f⁻ = -min[f, 0]
E[f(X)] = E[f⁺(X)] - E[f⁻(X)]

The expected value is finite if and only if E[|f(X)|] < ∞. We call the special transformation below the variance:

Var(X) = E[(X - E(X))²]

Example 1.2. (Binomial random variable) Denoted as b(k; n, p), we have

P(X = k) = C(n, k) p^k (1 - p)^{n-k}

with expectation

E(X) = Σ_{k=0}^n k C(n, k) p^k (1-p)^{n-k}
     = Σ_{k=1}^n n!/((k-1)!(n-k)!) p^k (1-p)^{n-k}
     = np Σ_{k=1}^n (n-1)!/((k-1)!(n-k)!) p^{k-1} (1-p)^{n-k}
     = np

(the last sum is 1, being a sum of binomial probabilities) and variance

Var(X) = E(X²) - (E(X))², where E(X²) = n(n-1)p² + np

and reducing gives us Var(X) = np(1-p).

Example 1.3. (Poisson random variable) Denoted as p(k; λ), we have

P(X = k) = e^{-λ} λ^k / k!, k = 0, 1, 2, ...
E(X) = λ, Var(X) = λ

Example 1.4. (Geometric random variable) Denoted as g(k; p) and counting the number of failures before the first success, we have

P(X = k) = (1-p)^k p, k = 0, 1, 2, ...
E(X) = (1-p)/p, Var(X) = (1-p)/p²

Lemma 1.1. If X is an integer valued non-negative random variable then E(X) = Σ_{k=0}^∞ P(X > k).

Proof. By direct evaluation:

Σ_{k=0}^∞ P(X > k) = Σ_{k=0}^∞ Σ_{j=k+1}^∞ P(X = j) = Σ_{j=1}^∞ Σ_{k=0}^{j-1} P(X = j) = Σ_{j=1}^∞ j P(X = j) = E(X)
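The tail-sum formula of Lemma 1.1 is easy to check numerically. Below is a minimal sketch for a Poisson(λ) distribution; the values of lam and the truncation level K are illustrative choices, while the identity itself is exact.

```python
import math

# Numerical check of Lemma 1.1, E(X) = sum_{k>=0} P(X > k), for a
# Poisson(lam) pmf truncated at K terms.
lam, K = 3.0, 100
pmf = [math.exp(-lam)]
for k in range(1, K):
    pmf.append(pmf[-1] * lam / k)            # p_k = e^{-lam} lam^k / k!

mean_direct = sum(k * p for k, p in enumerate(pmf))
mean_tail_sum = sum(sum(pmf[k + 1:]) for k in range(K))   # sum of P(X > k)
print(mean_direct, mean_tail_sum)             # both approximately lam = 3.0
```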

In the multivariate case we have a random vector with non-negative integer valued components X = (X_1, ..., X_n) with joint distribution

P(X_1 = k_1, ..., X_n = k_n) = p_{k_1,...,k_n}

If f attains non-negative values, then E[f(X)] = Σ_{(k_1,...,k_n)} f(k_1, ..., k_n) p_{k_1,...,k_n}. If f attains values in the real line, then E[f(X)] = E[f⁺(X)] - E[f⁻(X)].

Remark 1.1. (Properties of the expected value and variance)

1) For a_1, ..., a_n ∈ ℝ, E[Σ_{i=1}^n a_i X_i] = Σ_{i=1}^n a_i E[X_i]

2) If X_1, ..., X_n are independent random variables and f_1, ..., f_n are bounded functions, then E[Π_{i=1}^n f_i(X_i)] = Π_{i=1}^n E[f_i(X_i)]

3) If E[X_i²] < ∞ for i = 1, ..., n and Cov(X_i, X_j) = 0 for all i ≠ j, then Var(Σ_{i=1}^n a_i X_i) = Σ_{i=1}^n a_i² Var(X_i)

1.1 Convolution

Suppose X and Y are independent non-negative integer valued random variables with P(X = k) = a_k and P(Y = k) = b_k. Then,

P(X + Y = n) = Σ_{k=0}^n P(X = k, Y = n - k) = Σ_{k=0}^n a_k b_{n-k}

Definition 1.2. The convolution of two sequences {a_n} and {b_n} is the new sequence {c_n} where the n-th element c_n is defined by

c_n = Σ_{k=0}^n a_k b_{n-k}

We write {c_n} = {a_n} * {b_n}. Denote the n-fold convolution {p_k}^{*n} = {p_k} * ... * {p_k}.

Example 1.5. Suppose X is a p(k; λ) random variable and Y is a p(k; µ) random variable, and X and Y are independent. Then X + Y is a p(k; λ + µ) random variable. The proof is as follows:

P(X + Y = n) = Σ_{k=0}^n P(X = k) P(Y = n-k)
             = Σ_{k=0}^n (e^{-λ} λ^k / k!) (e^{-µ} µ^{n-k} / (n-k)!)
             = (e^{-(λ+µ)} (λ+µ)^n / n!) Σ_{k=0}^n C(n, k) (λ/(λ+µ))^k (µ/(λ+µ))^{n-k}
             = e^{-(λ+µ)} (λ+µ)^n / n!

Example 1.6. If X is b(k; n, p), Y is b(k; m, p), and X and Y are independent, then X + Y is b(k; n + m, p).

Remark 1.2. (Some properties of convolution)

1) The convolution of two probability mass functions is a probability mass function.
2) X + Y =_d Y + X (equal in distribution; commutative)
3) X + (Y + Z) =_d (X + Y) + Z (associative)
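Example 1.5 can also be verified numerically. The sketch below convolves two truncated Poisson pmfs and compares against the Poisson(λ + µ) pmf; the truncation length n and parameter values are assumptions made purely for computation.

```python
import math

# Check of Example 1.5: Poisson(lam) * Poisson(mu) = Poisson(lam + mu).
def convolve(a, b):
    """c_m = sum_{k=0}^m a_k b_{m-k}, Definition 1.2."""
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(len(a))]

def poisson_pmf(lam, n):
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(n)]

lam, mu, n = 1.5, 2.5, 20
c = convolve(poisson_pmf(lam, n), poisson_pmf(mu, n))
err = max(abs(x - y) for x, y in zip(c, poisson_pmf(lam + mu, n)))
print(err)   # ~ 0, up to floating point and truncation error
```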

1.2 Generating Functions

Definition 1.3. Let a_0, a_1, a_2, ... be a numerical sequence. If there exists s_0 > 0 such that A(s) = Σ_{k=0}^∞ a_k s^k converges for |s| < s_0, then we call A(s) the generating function of the sequence {a_n}. If {p_k : k ≥ 0} is a probability mass function then P(s) = Σ_{k=0}^∞ p_k s^k = E[s^X]. If Σ_k p_k = 1 then P(1) = 1.

Example 1.7. If X is p(k; λ) then

P(s) = Σ_{k=0}^∞ e^{-λ} λ^k s^k / k! = e^{λ(s-1)}, s > 0

If X is b(k; n, p),

P(s) = Σ_{k=0}^n C(n, k) p^k (1-p)^{n-k} s^k = (1 - p + ps)^n

If X is g(k; p),

P(s) = Σ_{k=0}^∞ (1-p)^k p s^k = p / (1 - (1-p)s), |s| < 1/(1-p)

Remark 1.3. Note that

d^n/ds^n P(s) = Σ_{k=n}^∞ k(k-1)...(k-n+1) p_k s^{k-n} = Σ_{k=n}^∞ (k!/(k-n)!) p_k s^{k-n}

and

d^n/ds^n P(s) |_{s=0} = n! p_n

Proposition 1.1. The probability generating function uniquely determines its probability mass function.

Proposition 1.2. Let X have a probability mass function with p_k = P(X = k) and Σ_k p_k = 1. Let q_k = P(X > k) and define Q(s) = Σ_{k=0}^∞ q_k s^k. Then

Q(s) = (1 - P(s)) / (1 - s), s ∈ (0, 1)

Proof. By direct evaluation,

Q(s) = Σ_{k=0}^∞ s^k Σ_{j=k+1}^∞ p_j = Σ_{j=1}^∞ p_j Σ_{k=0}^{j-1} s^k = Σ_{j=1}^∞ p_j (1 - s^j)/(1 - s)
     = (1/(1-s)) (Σ_{j=1}^∞ p_j - Σ_{j=1}^∞ p_j s^j)
     = (1/(1-s)) (1 - p_0 - P(s) + p_0)
     = (1 - P(s))/(1 - s)

Remark 1.4. By the Monotone Convergence Theorem,

lim_{s↑1} Q(s) = lim_{s↑1} Σ_k q_k s^k = Σ_k q_k = E[X]

Remark 1.5. By direct evaluation,

d/ds P(s) |_{s=1} = E[X]
d²/ds² P(s) |_{s=1} = E[X(X-1)]
...
d^n/ds^n P(s) |_{s=1} = E[X(X-1)...(X-n+1)]

Example 1.8. If X is g(k; p) then

P(s) = p / (1 - (1-p)s)
d/ds P(s) = p(1-p) / (1 - (1-p)s)²
d/ds P(s) |_{s=1} = p(1-p)/p² = (1-p)/p = E[X]

Remark 1.6. Note that Var(X) = P''(1) + P'(1) - (P'(1))².

Remark 1.7. The generating function of a sum of independent random variables is the product of their generating functions.

(1) Formally, if X_i for i = 1, 2 are independent non-negative integer valued random variables with generating functions P_{X_i}(s) = E[s^{X_i}], i = 1, 2, and |s| ≤ 1, then

P_{X_1+X_2}(s) = E[s^{X_1+X_2}] = E[s^{X_1}] E[s^{X_2}] = P_{X_1}(s) P_{X_2}(s)

(2) If {a_j} and {b_j} are two sequences with generating functions A(s), B(s) then the generating function of {a_n} * {b_n} is A(s)B(s). This is immediate from the definition:

Σ_{n=0}^∞ (Σ_{k=0}^n a_k b_{n-k}) s^n = Σ_{k=0}^∞ a_k s^k Σ_{n=k}^∞ b_{n-k} s^{n-k} = A(s) B(s)

Example 1.9. If X_1, X_2 are respectively p(k; λ), p(k; µ) and X_1 and X_2 are independent, then

P_{X_1+X_2}(s) = e^{(λ+µ)(s-1)}

which is the generating function of a p(k; λ+µ).

Example 1.10. Suppose that X_1, ..., X_n are independent and identically distributed (iid) random variables with

X_i = 1 with probability p, X_i = 0 with probability 1 - p, i = 1, ..., n

Then P_{X_i}(s) = ps + (1-p) and P_{X_1+...+X_n}(s) = (ps + (1-p))^n.

Remark 1.8. (Random sums of random variables) Consider iid non-negative random variables {X_n : n ≥ 1} with p_k = P(X_1 = k), P_{X_1}(s) = E[s^{X_1}]. Let N be independent of {X_n : n ≥ 1} and suppose that P(N = j) = α_j for j = 0, 1, 2, ... Define

S_0 = 0, S_1 = X_1, ..., S_N = X_1 + ... + X_N

From conditional probability,

P(S_N = j) = Σ_{k=0}^∞ P(S_N = j | N = k) P(N = k) = Σ_{k=0}^∞ P(S_k = j) P(N = k) = Σ_{k=0}^∞ p_j^{*k} α_k

and so

P_{S_N}(s) = Σ_j s^j Σ_k α_k p_j^{*k} = Σ_k α_k Σ_j s^j p_j^{*k} = Σ_k α_k (P_{X_1}(s))^k = E[(P_{X_1}(s))^N] = P_N(P_{X_1}(s))

Example 1.11. Suppose N is p(k; λ) and

X_1 = 1 with prob. p, X_1 = 0 with prob. 1 - p

From our previous expression,

P_{S_N}(s) = P_N(P_{X_1}(s)) = exp(λ(ps + 1 - p - 1)) = exp(λp(s - 1))

and so S_N is p(k; λp).

Remark 1.9. (Wald's identity) Note that

E[S_N] = d/ds P_N(P_{X_1}(s)) |_{s=1} = P_N'(P_{X_1}(1)) P_{X_1}'(1) = E[N] E[X_1]
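A Monte Carlo sketch of Example 1.11 and Wald's identity: if N ~ Poisson(λ) and the X_i are iid Bernoulli(p) independent of N, then S_N ~ Poisson(λp), so E[S_N] = λp = E[N]E[X_1]. The parameter values and the simple Poisson sampler are illustrative assumptions.

```python
import math
import random

def poisson_sample(lam):
    # multiply uniforms until the product drops below e^{-lam} (Knuth);
    # adequate for small lam
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod >= limit:
        prod *= random.random()
        k += 1
    return k

random.seed(0)
lam, p, trials = 4.0, 0.3, 100_000
total = sum(sum(random.random() < p for _ in range(poisson_sample(lam)))
            for _ in range(trials))
print(total / trials)   # approximately lam * p = 1.2
```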

1.3 Branching Processes

Definition 1.4. Let {Z_{n,j} : n ≥ 1, j ≥ 1} be iid non-negative random variables having common probability mass function {p_k}. Define {Z_n : n ≥ 0} by Z_0 = 1 and

Z_1 = Z_{1,1}
Z_2 = Z_{2,1} + Z_{2,2} + ... + Z_{2,Z_1}
...
Z_n = Z_{n,1} + Z_{n,2} + ... + Z_{n,Z_{n-1}}

If Z_n = 0 then Z_{n+1} = 0. This is a branching process.

Remark 1.10. Define P_n(s) = E(s^{Z_n}) and P(s) = E(s^{Z_1}) = Σ_k p_k s^k and note that

P_0(s) = s
P_1(s) = P(s) = E(s^{Z_1})
P_2(s) = P_1(P(s)) = P(P(s))
P_3(s) = P_2(P(s)) = P(P(P(s))) = P(P_2(s))
...
P_n(s) = P_{n-1}(P(s)) = P(P_{n-1}(s))

Example 1.12. Suppose Z_{n,j} is a Bernoulli random variable which is equal to 1 with probability p and 0 otherwise. Then P(s) = (1-p) + ps and

P_2(s) = (1-p) + p(1-p) + p² s
P_3(s) = (1-p) + p(1-p) + p²(1-p) + p³ s
...
P_n(s) = (1-p) + p(1-p) + ... + (1-p)p^{n-1} + p^n s = (1-p) Σ_{k=0}^{n-1} p^k + p^n s

Example 1.13. What is E(Z_n)? Suppose that E(Z_1) = m. Then

P_n'(s) = P'(P_{n-1}(s)) P_{n-1}'(s)
P_n'(1) = m P_{n-1}'(1) = m² P_{n-2}'(1) = ... = m^n

Remark 1.11. Consider the event {extinction} = ∪_{n=1}^∞ {Z_n = 0}. Let Π = P({extinction}) = P(∪_{n=1}^∞ {Z_n = 0}). Note that {Z_n = 0} ⊂ {Z_{n+1} = 0}. We have

Π = P(∪_{k=1}^∞ {Z_k = 0}) = lim_{n→∞} P(∪_{k=1}^n {Z_k = 0}) = lim_{n→∞} P(Z_n = 0) = lim_{n→∞} P_n(0)

where P_n(s) = E(s^{Z_n}). This is a very difficult method of determining the extinction probability.

Remark 1.12. Consider iid {Z_{n,j} : n ≥ 1, j ≥ 1} having probability mass function {p_k}. Note that

if p_0 = 0 then Π = 0, and if p_0 = 1 then Π = 1

We will now consider the case where 0 < p_0 < 1.
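Although computing lim_n P_n(0) analytically is difficult, numerically it is just a fixed-point iteration Π_{n+1} = P(Π_n) started at 0, and Theorem 1.1 below explains what it converges to. A minimal sketch, with an illustrative offspring pmf of mean m = 1.25 > 1:

```python
# Fixed-point iteration Pi_{n+1} = P(Pi_n), Pi_0 = 0, from Remark 1.11.
p = [0.25, 0.25, 0.5]                      # p_k = P(Z_{1,1} = k), mean 1.25

def pgf(s):
    return sum(pk * s**k for k, pk in enumerate(p))

pi = 0.0
for _ in range(200):
    pi = pgf(pi)
print(pi)   # converges to 0.5, the smallest root of s = P(s) in [0, 1]
```

Here s = P(s) reads 0.25 + 0.25 s + 0.5 s² = s, whose roots are s = 1 and s = 0.5, so the iteration indeed finds the smaller root.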

Theorem 1.1. If m = E[Z_1] ≤ 1 then Π = 1. If m > 1 then Π < 1 and is the unique non-negative solution of the equation s = P(s) which is less than 1.

Proof. Let us first show that Π is a solution of s = P(s). Define Π_n = P(Z_n = 0), so that {Π_n} is a non-decreasing sequence converging to Π. Recall that P_{n+1}(s) = P(P_n(s)); evaluating at s = 0 gives Π_{n+1} = P(Π_n), and hence

Π = lim_{n→∞} Π_{n+1} = lim_{n→∞} P(Π_n) = P(Π)

Next we show that Π is the smallest solution of P(s) = s in [0, 1]. Suppose q is some other solution to P(s) = s with 0 ≤ q ≤ 1. Note that

Π_1 = P(0) ≤ P(q) = q
Π_2 = P(Π_1) ≤ P(q) = q
...

so Π_n ≤ q for all n, and letting n → ∞ gives Π ≤ q. Finally, note that P(s) is convex since P''(s) = Σ_k k(k-1) p_k s^{k-2} ≥ 0. If P'(1) = m ≤ 1 then, by convexity, in a left neighbourhood of 1 the curve P(s) cannot lie below the line y = s, so the smallest root of P(s) = s in [0, 1] is 1 and Π = 1. Similarly, if P'(1) = m > 1 then in a left neighbourhood of 1 the curve P(s) lies below y = s, so P(s) must intersect y = s at some point 0 < s < 1, and Π < 1 (see Resnick, p. 23).

1.4 Continuity Theorem

Let {X_n : n ≥ 0} be non-negative integer valued random variables with

P(X_n = k) = p_k^{(n)}, P_n(s) = E(s^{X_n})

Then X_n converges in distribution to X_0 if lim_{n→∞} p_k^{(n)} = p_k^{(0)} for k = 0, 1, ...

Theorem 1.2. Suppose for each n = 1, 2, ... that {p_k^{(n)} : k ≥ 0} is a probability mass function on {0, 1, 2, ...}, so that p_k^{(n)} ≥ 0 and Σ_k p_k^{(n)} = 1. Then there exists a sequence {p_k^{(0)} : k ≥ 0} such that

lim_{n→∞} p_k^{(n)} = p_k^{(0)} for all k = 0, 1, ...

if and only if there exists a function P(s), 0 < s < 1, such that

lim_{n→∞} P_n(s) = P(s)

Proof. (⇒) Suppose p_k^{(n)} → p_k^{(0)}, fix s ∈ (0, 1) and ε > 0, and pick m large enough such that

Σ_{i=m+1}^∞ s^i < ε

Then observe that

|P_n(s) - P(s)| = |Σ_k p_k^{(n)} s^k - Σ_k p_k^{(0)} s^k| ≤ Σ_{k=0}^m |p_k^{(n)} - p_k^{(0)}| s^k + Σ_{k=m+1}^∞ |p_k^{(n)} - p_k^{(0)}| s^k ≤ Σ_{k=0}^m |p_k^{(n)} - p_k^{(0)}| s^k + ε

Hence limsup_{n→∞} |P_n(s) - P(s)| ≤ ε, and since ε was arbitrary, we are done.

(⇐) For a fixed k, let {n'} be a subsequence along which lim_{n'} p_k^{(n')} exists, and let {n''} be another subsequence along which lim_{n''} p_k^{(n'')} exists. Remark that

Σ_k (lim_{n'} p_k^{(n')}) s^k = lim_{n'} P_{n'}(s) = P(s) and Σ_k (lim_{n''} p_k^{(n'')}) s^k = lim_{n''} P_{n''}(s) = P(s)

(the interchange of limit and sum is justified by dominated convergence, since p_k^{(n)} s^k ≤ s^k). Then the two subsequential limits have the same probability generating function. Since the probability generating function uniquely determines the probability mass function, all subsequences yield the same limit and hence lim_{n→∞} p_k^{(n)} exists.
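A classical instance of Theorem 1.2 is the binomial-to-Poisson limit b(k; n, λ/n) → p(k; λ): the generating functions (1 + λ(s-1)/n)^n converge pointwise to e^{λ(s-1)}. A short numerical sketch, with illustrative values of λ and s:

```python
import math

# pgf of b(k; n, lam/n) at a fixed s converges to the Poisson pgf.
lam, s = 2.0, 0.7
for n in (10, 100, 1000, 10000):
    print(n, (1 + lam * (s - 1) / n)**n)
print("limit", math.exp(lam * (s - 1)))   # e^{2(0.7-1)} = e^{-0.6}
```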

1.5 Random Walk

Definition 1.5. Let {X_n : n ≥ 1} be iid random variables (r.v.s) taking values 1 and -1, with P(X_1 = 1) = p and P(X_1 = -1) = 1 - p. Let

S_0 = 0, S_n = Σ_{i=1}^n X_i

Then {S_n : n ≥ 0} is called the simple random walk.

Remark 1.13. Define N = inf{n ≥ 1 : S_n = 1} and φ_n = P(N = n), with φ_0 = 0, φ_1 = p. For n ≥ 2, the first step must be -1; the walk then takes j steps to climb from -1 back to 0 and n - j - 1 steps to climb from 0 to 1, so that 1 + j + (n - j - 1) = n. In more detail, on {X_1 = -1} define

A_j = {inf{k : Σ_{i=1}^k X_{i+1} = 1} = j}, B_{n-j-1} = {inf{k : Σ_{i=1}^k X_{i+j+1} = 1} = n - j - 1}

so that {N = n} ∩ {X_1 = -1} = ∪_{j=1}^{n-2} A_j ∩ B_{n-j-1}. Since A_j is independent of B_{n-j-1},

P(N = n) = Σ_{j=1}^{n-2} (1-p) P(A_j) P(B_{n-j-1}) = Σ_{j=1}^{n-2} (1-p) φ_j φ_{n-j-1}, n ≥ 2

Now define Φ(s) = Σ_{n=0}^∞ s^n φ_n and note that

Φ(s) = ps + Σ_{n=2}^∞ φ_n s^n
     = ps + Σ_{n=2}^∞ Σ_{j=1}^{n-2} (1-p) φ_j φ_{n-j-1} s^n
     = ps + (1-p)s Σ_{j=1}^∞ s^j φ_j Σ_{n=j+2}^∞ s^{n-j-1} φ_{n-j-1}
     = ps + (1-p)s Φ²(s)

and we have the following quadratic:

(1-p)s Φ²(s) - Φ(s) + ps = 0

with the solution

Φ(s) = (1 ± √(1 - 4p(1-p)s²)) / (2(1-p)s)

Note that Φ(0) = φ_0 = 0, while the plus root blows up as s → 0, so it must be the case that

Φ(s) = (1 - √(1 - 4p(1-p)s²)) / (2(1-p)s)

Remark 1.14. With our new function, we can get

P(N < ∞) = Φ(1) = (1 - √(1 - 4p(1-p))) / (2(1-p)) = (1 - |1 - 2p|) / (2(1-p))

If p ≥ 1/2 then P(N < ∞) = 1. But if p < 1/2 then

P(N < ∞) = p/(1-p) < 1, P(N = ∞) = 1 - p/(1-p) = (1 - 2p)/(1 - p)

Let us calculate E(N). For p = 1/2 one finds E(N) = ∞, while for p > 1/2

E(N) = Φ'(1) = 1/(2p - 1)

In summary, E(N) = ∞ for p ≤ 1/2 and E(N) = 1/(2p-1) for p > 1/2.
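The closed form E(N) = 1/(2p - 1) for p > 1/2 is easy to check by simulation. A minimal Monte Carlo sketch; p = 0.6 and the number of trials are illustrative choices (for p ≤ 1/2 the loop below would not be guaranteed to terminate quickly, matching E(N) = ∞):

```python
import random

# Estimate E[N] for N = inf{n : S_n = 1} of the simple random walk, p > 1/2.
random.seed(1)
p, trials = 0.6, 50_000

def first_passage(p):
    s, n = 0, 0
    while s < 1:
        s += 1 if random.random() < p else -1
        n += 1
    return n

print(sum(first_passage(p) for _ in range(trials)) / trials)  # ~ 1/(2*0.6-1) = 5
```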

Remark 1.15. Let N_0 = inf{n ≥ 1 : S_n = 0} and f_n = P(N_0 = n), with f_0 = 0 and the observation that only f_{2n} = P(N_0 = 2n) > 0 for n = 1, 2, ... Let

F(s) = Σ_{n=0}^∞ s^{2n} f_{2n}

If X_1 = 1 then N_0 = 1 + inf{k : Σ_{i=1}^k X_{i+1} = -1} = 1 + N⁻, and if X_1 = -1 then N_0 = 1 + inf{k : Σ_{i=1}^k X_{i+1} = 1} = 1 + N⁺, with the remark that P(N⁺ = n) = φ_n. Now

F(s) = E[s^{N_0}] = E[s^{N_0} 1{X_1 = -1}] + E[s^{N_0} 1{X_1 = 1}]
     = E[s^{1+N⁺} 1{X_1 = -1}] + E[s^{1+N⁻} 1{X_1 = 1}]
     = s(1-p) E[s^{N⁺}] + sp E[s^{N⁻}]
     = s(1-p) Φ(s) + sp E[s^{N⁻}]

Now,

N⁻ =_d inf{n : Σ_{i=1}^n x_i = -1} = inf{n : Σ_{i=1}^n (-x_i) = 1}

and hence, since P(-X_1 = 1) = 1 - p and P(-X_1 = -1) = p, the same computation as for Φ gives

E[s^{N⁻}] = (1 - √(1 - 4p(1-p)s²)) / (2ps)

with the final result

F(s) = s(1-p) (1 - √(1 - 4p(1-p)s²))/(2(1-p)s) + sp (1 - √(1 - 4p(1-p)s²))/(2ps) = 1 - √(1 - 4p(1-p)s²)

Remark 1.16. Let us calculate

P(N_0 < ∞) = F(1) = 1 - |1 - 2p| = 1 if p = 1/2, 2(1-p) if p > 1/2, 2p if p < 1/2

So E[N_0] = ∞ for p ≠ 1/2, since then P(N_0 = ∞) > 0. However, also note that if p = 1/2 then

E[N_0] = F'(1) = lim_{s↑1} F'(s) = lim_{s↑1} s/√(1 - s²) = ∞

2 Discrete Time Markov Chains

Remark 2.1. Let P(X = k) = a_k for k = 0, 1, ... with Σ_i a_i = 1. Suppose U is a uniform random variable on (0, 1) and define

Y = Σ_k k 1[Σ_{i=0}^{k-1} a_i, Σ_{i=0}^k a_i)(U)

where 1[a, b)(u) is 1 if a ≤ u < b and 0 otherwise. Then X and Y have the same probability mass function. So Y = k if and only if U ∈ [Σ_{i=0}^{k-1} a_i, Σ_{i=0}^k a_i).

Definition 2.1. Given S = {0, 1, 2, ...} with a_k = P(X_0 = k), define P = {p_ij : i, j ≥ 0}, which we call the probability transition matrix. Define

X_0 = Σ_k k 1[Σ_{i=0}^{k-1} a_i, Σ_{i=0}^k a_i)(U_0)

and f(i, u) on S × [0, 1] by

f(i, u) = Σ_j j 1[Σ_{k=0}^{j-1} p_ik, Σ_{k=0}^j p_ik)(u)

so that f(i, u) = j if and only if u ∈ [Σ_{k=0}^{j-1} p_ik, Σ_{k=0}^j p_ik). Now define X_{n+1} = f(X_n, U_{n+1}), where X_n depends on X_0, U_0, U_1, ..., U_n. Here are some properties:

(1) P(X_0 = k) = a_k and

P(X_{n+1} = j | X_n = i) = P(f(X_n, U_{n+1}) = j | X_n = i) = P(f(i, U_{n+1}) = j) = p_ij

(2) [Markov property] We can see from (1) that

P(X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_n = i) = P(f(X_n, U_{n+1}) = j | X_0 = i_0, X_1 = i_1, ..., X_n = i) = P(f(i, U_{n+1}) = j) = p_ij

(3) An application of the above is

P(X_{n+1} = k_1, X_{n+2} = k_2, ..., X_{n+m} = k_m | X_0 = i_0, ..., X_n = i)
 = P(X_{n+1} = k_1, X_{n+2} = k_2, ..., X_{n+m} = k_m | X_n = i)
 = P(X_1 = k_1, X_2 = k_2, ..., X_m = k_m | X_0 = i)

Definition 2.2. Any stochastic process {X_n : n ≥ 0} satisfying P(X_{n+1} = j | X_n = i) = p_ij and P(X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_n = i) = p_ij is called a Markov chain with initial distribution {a_k} and probability transition matrix P.
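The construction X_{n+1} = f(X_n, U_{n+1}) is exactly how one simulates a Markov chain in practice: partition [0, 1) by the cumulative row sums of P and see where the uniform lands. A minimal sketch; the 3-state matrix is an arbitrary illustration.

```python
import random
from bisect import bisect_right
from itertools import accumulate

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
cum = [list(accumulate(row)) for row in P]    # cumulative row sums

def step(i, u):
    """f(i, u): the smallest j with u < p_i0 + ... + p_ij."""
    return bisect_right(cum[i], u)

random.seed(2)
x, path = 0, [0]
for _ in range(10):
    x = step(x, random.random())
    path.append(x)
print(path)   # one sample path of the chain started at X_0 = 0
```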

Proposition 2.1. Given a Markov chain, the finite dimensional distributions are of the form

P(X_0 = i_0, ..., X_k = i_k) = a_{i_0} p_{i_0 i_1} ... p_{i_{k-1} i_k}

Proof. (1) Suppose that P(X_0 = i_0, ..., X_j = i_j) > 0 for all j = 0, ..., k - 1. Then

P(X_0 = i_0, ..., X_k = i_k) = P(X_k = i_k | X_0 = i_0, ..., X_{k-1} = i_{k-1}) P(X_0 = i_0, ..., X_{k-1} = i_{k-1})
 = p_{i_{k-1} i_k} P(X_{k-1} = i_{k-1} | X_0 = i_0, ..., X_{k-2} = i_{k-2}) P(X_0 = i_0, ..., X_{k-2} = i_{k-2})
 = p_{i_{k-1} i_k} p_{i_{k-2} i_{k-1}} ... p_{i_0 i_1} a_{i_0}

Now suppose that there exists a j such that P(X_0 = i_0, ..., X_j = i_j) = 0 and let

j_0 = inf{j : P(X_0 = i_0, ..., X_j = i_j) = 0}

If j_0 = 0, then P(X_0 = i_0) = a_{i_0} = 0 and the result holds trivially. If j_0 > 0 then P(X_0 = i_0, ..., X_{j_0-1} = i_{j_0-1}) > 0 and hence

0 = P(X_0 = i_0, ..., X_{j_0} = i_{j_0}) = p_{i_{j_0-1} i_{j_0}} P(X_0 = i_0, ..., X_{j_0-1} = i_{j_0-1})

which forces p_{i_{j_0-1} i_{j_0}} = 0, so both sides of the claimed identity are 0.

(2) Conversely, given a density {a_k}, a transition matrix P, and a process {X_n} whose finite dimensional distributions are given by P(X_0 = i_0, ..., X_k = i_k) = a_{i_0} p_{i_0 i_1} ... p_{i_{k-1} i_k}, then {X_n} is a Markov chain with

P(X_0 = k) = a_k, P(X_{n+1} = j | X_n = i) = p_ij, P(X_{n+1} = j | X_0 = i_0, ..., X_n = i) = p_ij

since

P(X_{n+1} = j | X_0 = i_0, ..., X_n = i) = P(X_{n+1} = j, X_n = i, ..., X_0 = i_0) / P(X_n = i, ..., X_0 = i_0) = p_ij

Example 2.1. (Branching process) The branching process {Z_n} has

P(Z_n = i_n | Z_0 = i_0, ..., Z_{n-1} = i_{n-1}) = P(Σ_{j=1}^{i_{n-1}} Z_{n,j} = i_n | Z_0 = i_0, ..., Z_{n-1} = i_{n-1}) = P(Σ_{j=1}^{i_{n-1}} Z_{n,j} = i_n)

and since

P(Z_{n+1} = j | Z_n = i) = P(Σ_{k=1}^i Z_{n,k} = j)

the branching process is Markov and computable.

Example 2.2. (Random walk) Let {X_n} be iid random variables with P(X_n = k) = a_k and define S_0 = 0, S_n = Σ_{i=1}^n X_i. Then

P(S_{n+1} = i_{n+1} | S_0 = i_0, ..., S_n = i_n) = P(S_n + X_{n+1} = i_{n+1} | S_0 = i_0, ..., S_n = i_n) = P(i_n + X_{n+1} = i_{n+1}) = a_{i_{n+1} - i_n}

so p_ij = a_{j-i}.

Example 2.3. (Inventory model) Let I(t) denote the inventory level at time t. Suppose the inventory level is checked at fixed times T_0, T_1, T_2, ... Define X_n = I(T_n). If X_n ≤ s, purchase enough units to bring the inventory level up to S; otherwise do not purchase any new items. Assume that new units are replenished in a negligible amount of time. Let D_n be the demand during [T_{n-1}, T_n] and assume {D_n, n ≥ 1} is a sequence of independent and identically distributed random variables, independent of X_0. Suppose X_0 ≤ S and no backlogs are allowed. Then,

X_{n+1} = max(X_n - D_{n+1}, 0) if X_n > s
X_{n+1} = max(S - D_{n+1}, 0) if X_n ≤ s

with state space {0, 1, ..., S}.

Example 2.4. (Discrete time queue)

(1) Consider a queueing model where T_0, T_1, T_2, ... denote the departure times from the system. Let X(t) be the number of customers at time t and X_n = X(T_n⁺), where T_n⁺ is the time right after the n-th departure. Let A_n denote the number of arrivals in the time interval [T_{n-1}, T_n). Then

X_{n+1} = max(X_n + A_{n+1} - 1, 0)

If P(A_1 = k) = a_k then this is a discrete time Markov chain with transition matrix

P_ij = a_0 + a_1 if i = j = 0
P_ij = a_{j-i+1} if j ≥ i - 1 and (i, j) ≠ (0, 0)
P_ij = 0 otherwise

(2) Let T_0, T_1, T_2, ... denote the times at which customers arrive at the system. Let X_n = X(T_n⁻) be the number in system just before the n-th arrival, and let S_{n+1} be the number of service completions in the time interval [T_n, T_{n+1}), with state space {0, 1, 2, ...}. Then

X_{n+1} = max(X_n - S_{n+1} + 1, 0)

If P(S_1 = k) = b_k then this is a discrete time Markov chain with transition matrix

P_ij = b_{i-j+1} if 1 ≤ j ≤ i + 1
P_ij = Σ_{k=i+1}^∞ b_k if j = 0
P_ij = 0 otherwise
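A short simulation of the (s, S) inventory chain of Example 2.3, as a sanity check that the recursion stays in {0, 1, ..., S}. The geometric demand distribution and the values s = 2, S = 5 are illustrative assumptions.

```python
import random

# X_{n+1} = max(X_n - D_{n+1}, 0) if X_n > s, else max(S - D_{n+1}, 0)
random.seed(6)
s, S = 2, 5
x, path = S, [S]
for _ in range(15):
    d = 0
    while random.random() < 0.5:     # D ~ geometric(1/2) on {0, 1, 2, ...}
        d += 1
    base = x if x > s else S         # restock up to S when at or below s
    x = max(base - d, 0)
    path.append(x)
print(path)
```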

Proposition 2.2. Using the notation p_ij^{(n)} = P(X_n = j | X_0 = i), we have for all n ≥ 0 and i, j ∈ S

p_ij^{(2)} = Σ_k p_ik p_kj = (P²)_ij and p_ij^{(n)} = Σ_k p_ik p_kj^{(n-1)} = Σ_k p_ik^{(n-1)} p_kj

Proof. Clearly it holds for n = 0, 1. Now suppose it holds for 0, 1, ..., n. Then

P(X_{n+1} = j | X_0 = i) = Σ_k P(X_{n+1} = j, X_1 = k | X_0 = i)
 = Σ_k P(X_{n+1} = j | X_1 = k, X_0 = i) P(X_1 = k | X_0 = i)
 = Σ_k P(X_{n+1} = j | X_1 = k) P(X_1 = k | X_0 = i)
 = Σ_k P(X_n = j | X_0 = k) P(X_1 = k | X_0 = i)
 = Σ_k p_ik p_kj^{(n)}

and conditioning on X_n instead of X_1 gives Σ_k p_ik^{(n)} p_kj.

Notation 1. We call the equation

p_ij^{(n+m)} = Σ_k p_ik^{(n)} p_kj^{(m)}

the Chapman-Kolmogorov equation.

Corollary 2.1. P(X_n = j) = Σ_i a_i p_ij^{(n)}

Proof. Immediate from

P(X_n = j) = Σ_i P(X_n = j | X_0 = i) a_i = Σ_i p_ij^{(n)} a_i

Notation 2. Following the book, we will denote P(X_n = j) = a_j^{(n)}.

2.1 State Space Decomposition

Let {X_n : n ≥ 0} be a Markov chain with state space S. Set B ⊂ S and

τ_B = inf{n ≥ 0 : X_n ∈ B}

which we call the hitting time of B. We write τ_j = τ_{{j}}.

Definition 2.3. For i, j ∈ S we say state j is accessible from state i if

P(τ_j < ∞ | X_0 = i) > 0

and we denote this by i → j. Obviously i → i.

Proposition 2.3. For i ≠ j we have i → j if and only if there exists n > 0 such that p_ij^{(n)} > 0; that is, P(X_n = j | X_0 = i) > 0.

Proof. Suppose that there exists n such that p_ij^{(n)} > 0 and note that {X_n = j} ⊂ {τ_j ≤ n} ⊂ {τ_j < ∞}, so

0 < P(X_n = j | X_0 = i) ≤ P(τ_j ≤ n | X_0 = i) ≤ P(τ_j < ∞ | X_0 = i)

Now suppose that P(τ_j < ∞ | X_0 = i) > 0 and assume that p_ij^{(n)} = 0 for all n. Then

P(τ_j < ∞ | X_0 = i) = lim_{n→∞} P(τ_j ≤ n | X_0 = i) ≤ lim_{n→∞} P(∪_{k=0}^n {X_k = j} | X_0 = i) ≤ limsup_{n→∞} Σ_{k=0}^n P(X_k = j | X_0 = i) = 0

which is a contradiction.

Definition 2.4. States i and j communicate (i ↔ j) if they are accessible from each other (i.e. i → j and j → i). Communication is an equivalence relation:

(1) i ↔ i (reflexive)
(2) i ↔ j if and only if j ↔ i (symmetric)
(3) if i ↔ j and j ↔ k then i ↔ k (transitive)

(1) and (2) are obvious. For (3), suppose n and m are such that p_ij^{(n)} > 0 and p_jk^{(m)} > 0. Then

p_ik^{(n+m)} = Σ_l p_il^{(n)} p_lk^{(m)} ≥ p_ij^{(n)} p_jk^{(m)} > 0

and we are done.

Remark 2.2. We can then partition the state space into equivalence classes C_0, C_1, ... such that

C_i ∩ C_j = ∅ for i ≠ j, ∪_i C_i = S

Example 2.5. Consider a Markov chain with state space {0, 1, 2, 3} whose transition matrix P yields the equivalence classes {0}, {1, 2}, {3}.

Notation 3. One way to represent a Markov chain is with a probability transition diagram. (The diagram here shows a four-state chain on states A, B, C, D whose arcs carry the probabilities p², p(1-p), (1-p)², q², q(1-q), and (1-q)².)
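By Proposition 2.3, accessibility is a reachability question on the directed graph whose edges are the positive entries of P, so communicating classes can be computed with a breadth-first search. A minimal sketch; the 4-state matrix is a hypothetical stand-in with the same classes {0}, {1, 2}, {3} as Example 2.5.

```python
from collections import deque

def reachable(P, i):
    """States j with p_ij^{(n)} > 0 for some n >= 0 (graph reachability)."""
    seen, queue = {i}, deque([i])
    while queue:
        u = queue.popleft()
        for v, puv in enumerate(P[u]):
            if puv > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def classes(P):
    reach = [reachable(P, i) for i in range(len(P))]
    out, done = [], set()
    for i in range(len(P)):
        if i in done:
            continue
        cls = {j for j in reach[i] if i in reach[j]}   # i <-> j
        out.append(sorted(cls))
        done |= cls
    return out

P = [[1.0, 0.0, 0.0, 0.0],
     [0.2, 0.3, 0.5, 0.0],
     [0.0, 0.6, 0.2, 0.2],
     [0.0, 0.0, 0.0, 1.0]]
print(classes(P))   # [[0], [1, 2], [3]]
```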

Example 2.6. Now consider a Markov chain with S = {1, 2, 3, 4} and a transition matrix P for which {1, 2} and {3, 4} are the equivalence classes.

Example 2.7. S = {0, 1, 2, ...} is a single equivalence class for the arrival-epoch queueing chain of Example 2.4(2) with P(S_1 = j) = a_j and

P = [ Σ_{i≥1} a_i   a_0    0     0   ...
      Σ_{i≥2} a_i   a_1   a_0    0   ...
      Σ_{i≥3} a_i   a_2   a_1   a_0  ...
      ...                              ]

(all a_k > 0). This is an example of an irreducible Markov chain.

Definition 2.5. A Markov chain is irreducible if the state space consists of only one equivalence class. This means that i ↔ j for all i, j ∈ S.

Definition 2.6. A set of states C ⊂ S is closed if for any i ∈ C we have P(τ_{C^c} = ∞ | X_0 = i) = 1. If a closed set is a singleton, then its state is called an absorbing state.

Proposition 2.4. (i) C is closed if and only if p_ij = 0 for all i ∈ C and j ∈ C^c. (ii) j is absorbing if and only if p_jj = 1.

Proof. (i) (⇒) Suppose that P(τ_{C^c} = ∞ | X_0 = i) = 1. Then we know that there exists no n such that p_ij^{(n)} > 0 for j ∈ C^c, and then clearly p_ij = 0 for j ∈ C^c.

(⇐) Conversely suppose that p_ij = 0 for all i ∈ C, j ∈ C^c. Then,

P(τ_{C^c} = 1 | X_0 = i) = Σ_{j∈C^c} p_ij = 0

and

P(τ_{C^c} ≤ 2 | X_0 = i) = P(τ_{C^c} = 1 | X_0 = i) + P(τ_{C^c} = 2 | X_0 = i) = 0 + P(X_1 ∈ C, X_2 ∈ C^c | X_0 = i) = Σ_{k∈C} Σ_{j∈C^c} p_ik p_kj = 0

Continuing in this manner, we have P(τ_{C^c} ≤ n | X_0 = i) = 0 and thus lim_{n→∞} P(τ_{C^c} ≤ n | X_0 = i) = 0.

(ii) This is obvious.

Example 2.8. Consider

X_{n+1} = max(X_n - D_{n+1}, 0) if X_n > s
X_{n+1} = max(S - D_{n+1}, 0) if X_n ≤ s

with X_0 ≤ S and P(D_1 = k) = p_k. Writing out the transition matrix row by row: for a row i with i ≤ s (restocking applies, so the next state is max(S - D, 0)),

P_ij = p_{S-j} for j = 1, ..., S, and P_i0 = Σ_{k≥S} p_k

while for a row i with i > s (no restocking, so the next state is max(i - D, 0)),

P_ij = p_{i-j} for j = 1, ..., i, P_i0 = Σ_{k≥i} p_k, and P_ij = 0 for j > i

For instance, row s + 1 is (Σ_{k≥s+1} p_k, p_s, p_{s-1}, ..., p_0, 0, ..., 0). Note that since 0 → i and i → 0 for any i ∈ S, this system is irreducible.

Example 2.9. Suppose that P(X_0 = i) = 1 and define τ_i(0) = 0, τ_i(1) = inf{m ≥ 1 : X_m = i}. Supposing that τ_i(1) < ∞, define τ_i(2) = inf{m > τ_i(1) : X_m = i}. Continuing in this manner, assuming that τ_i(n) < ∞, we define

τ_i(n+1) = inf{m > τ_i(n) : X_m = i}

Let α_0 = 0, α_1 = τ_i(1), α_2 = τ_i(2) - τ_i(1), ..., α_n = τ_i(n) - τ_i(n-1), and define, on {τ_i(1) < ∞, τ_i(2) < ∞, ..., τ_i(n) < ∞},

ε_1 = (α_1, X_1, X_2, ..., X_{τ_i(1)})
ε_2 = (α_2, X_{τ_i(1)+1}, X_{τ_i(1)+2}, ..., X_{τ_i(2)})
...
ε_n = (α_n, X_{τ_i(n-1)+1}, X_{τ_i(n-1)+2}, ..., X_{τ_i(n)})

Proposition 2.5. Suppose that X_0 = i. Then ε_1, ε_2, ..., ε_k are iid with respect to the probability measure P(· | τ_i(1) < ∞, ..., τ_i(k) < ∞).

Proof. Consider

P(ε_1 = (k, i_1, i_2, ..., i_k), ε_2 = (l, j_1, j_2, ..., j_l), τ_i(1) < ∞, τ_i(2) < ∞)

We need i_k = i, j_l = i, and furthermore i_1 ≠ i, ..., i_{k-1} ≠ i and j_1 ≠ i, ..., j_{l-1} ≠ i. So,

P(ε_1 = (k, i_1, ..., i_k), ε_2 = (l, j_1, ..., j_l), τ_i(1) < ∞, τ_i(2) < ∞)
 = P(X_1 = i_1, ..., X_{k-1} = i_{k-1}, X_k = i, X_{k+1} = j_1, ..., X_{k+l-1} = j_{l-1}, X_{k+l} = i)
 = P(X_{k+1} = j_1, ..., X_{k+l} = i | X_1 = i_1, ..., X_k = i) P(X_1 = i_1, ..., X_k = i)
 = P(X_1 = j_1, ..., X_l = i) P(X_1 = i_1, ..., X_k = i)
 = P(X_1 = j_1, ..., X_{l-1} = j_{l-1}, τ_i(1) = l) P(X_1 = i_1, ..., X_{k-1} = i_{k-1}, τ_i(1) = k)

Summing over the coordinates other than τ_i(1) = k and τ_i(2) - τ_i(1) = l on both sides of the equation (i.e. with respect to j_1, j_2, ..., j_{l-1}, i_1, i_2, ..., i_{k-1}),

we get

P(τ_i(1) = k) P(τ_i(1) = l) = P(τ_i(1) = k, τ_i(2) - τ_i(1) = l)

which implies

P(α_1 = k) P(α_2 = l) = P(α_1 = k, α_2 = l) and, summing, P(τ_i(1) < ∞) P(τ_i(1) < ∞) = P(τ_i(1) < ∞, τ_i(2) < ∞)

This argument generalizes from the pair ε_1, ε_2 to any finite collection of the ε_i, and so we are done.

Corollary 2.2. Suppose the initial state is j ≠ i. Then ε_1, ..., ε_k are still independent with respect to P(· | τ_i(1) < ∞, ..., τ_i(k) < ∞). Note however that ε_1 will no longer have the same distribution as ε_2, ε_3, ...

Definition 2.7. State i is recurrent if the chain returns to i in a finite number of steps with probability one; otherwise it is transient. That is,

state i is recurrent if P(τ_i(1) < ∞ | X_0 = i) = 1
state i is transient if P(τ_i(1) < ∞ | X_0 = i) < 1, i.e. P(τ_i(1) = ∞ | X_0 = i) > 0

A recurrent state is positive recurrent if E[τ_i(1) | X_0 = i] < ∞. Otherwise, if E[τ_i(1) | X_0 = i] = ∞, a recurrent state is null recurrent.

Definition 2.8. For n ≥ 1 define

f_jk^{(0)} = 0, f_jk^{(n)} = P(τ_k(1) = n | X_0 = j), f_jk = Σ_n f_jk^{(n)} = P(τ_k(1) < ∞ | X_0 = j)

Therefore, a state i is recurrent if and only if f_ii = 1, and a recurrent state i is positive recurrent if and only if

E[τ_i(1) | X_0 = i] = Σ_n n f_ii^{(n)} < ∞

Remark 2.3. Define F_ij(s) = Σ_n s^n f_ij^{(n)} and P_ij(s) = Σ_n s^n p_ij^{(n)}.

Proposition 2.6. a) For i ∈ S and 0 < s < 1 we have

p_ii^{(n)} = Σ_{k=0}^n f_ii^{(k)} p_ii^{(n-k)}, n ≥ 1, and P_ii(s) = 1/(1 - F_ii(s))

b) For i ≠ j and 0 < s < 1 we have

p_ij^{(n)} = Σ_{k=0}^n f_ij^{(k)} p_jj^{(n-k)}, n ≥ 1, and P_ij(s) = F_ij(s) P_jj(s)

Proof. a) Remark that for n ≥ 1

p_ii^{(n)} = P(X_n = i | X_0 = i) = Σ_{k=1}^n P(X_n = i, τ_i(1) = k | X_0 = i)
 = Σ_{k=1}^n P(X_{τ_i(1)+(n-k)} = i, τ_i(1) = k | X_0 = i)
 = Σ_{k=1}^n P(τ_i(1) = k | X_0 = i) P(X_{n-k} = i | X_0 = i)
 = Σ_{k=1}^n f_ii^{(k)} p_ii^{(n-k)}

With this result, the second part can be written as

P_ii(s) = 1 + Σ_{n=1}^∞ s^n p_ii^{(n)} = 1 + Σ_{n=1}^∞ s^n Σ_{k=1}^n f_ii^{(k)} p_ii^{(n-k)} = 1 + Σ_{k=1}^∞ s^k f_ii^{(k)} Σ_{n=k}^∞ s^{n-k} p_ii^{(n-k)} = 1 + F_ii(s) P_ii(s)

and so P_ii(s) = 1 + F_ii(s) P_ii(s), i.e. P_ii(s) = 1/(1 - F_ii(s)).

b) By direct evaluation,

p_ij^{(n)} = P(X_n = j | X_0 = i) = Σ_{k=1}^n P(τ_j(1) = k | X_0 = i) P(X_{n-k} = j | X_0 = j) = Σ_{k=1}^n f_ij^{(k)} p_jj^{(n-k)}

and so

Σ_n s^n p_ij^{(n)} = Σ_n s^n Σ_{k=0}^n f_ij^{(k)} p_jj^{(n-k)} ⟹ P_ij(s) = F_ij(s) P_jj(s)

Corollary 2.3. A state i is recurrent if and only if f_ii = 1, if and only if P_ii(1) = Σ_n p_ii^{(n)} = ∞. Thus i is transient if and only if f_ii < 1, if and only if Σ_n p_ii^{(n)} < ∞.

Remark 2.4. Define N_j = Σ_{n=1}^∞ 1(X_n = j), the number of visits to state j. Then

E[N_j | X_0 = i] = E[Σ_{n=1}^∞ 1(X_n = j) | X_0 = i] = Σ_{n=1}^∞ E[1(X_n = j) | X_0 = i] = Σ_{n=1}^∞ P(X_n = j | X_0 = i) = Σ_{n=1}^∞ p_ij^{(n)}

That is, state i is recurrent if and only if E[N_i | X_0 = i] = ∞.

Proposition 2.7. (i) For i, j ∈ S and any non-negative integer k,

P(N_j = 0 | X_0 = i) = 1 - f_ij, and P(N_j = k | X_0 = i) = f_ij f_jj^{k-1} (1 - f_jj) for k ≥ 1

(ii) If j is transient, then for all states i, P(N_j < ∞ | X_0 = i) = 1 and E[N_j | X_0 = i] = f_ij/(1 - f_jj); moreover P(N_j = k | X_0 = j) = (1 - f_jj) f_jj^k.

(iii) If j is recurrent then P(N_j = ∞ | X_0 = j) = 1.

Proof. (i) We first calculate

P(N_j ≥ 1 | X_0 = i) = P(τ_j(1) < ∞ | X_0 = i) = f_ij

and similarly

P(N_j ≥ k | X_0 = i) = P(τ_j(1) < ∞, τ_j(2) < ∞, ..., τ_j(k) < ∞ | X_0 = i) = P(τ_j(1) < ∞ | X_0 = i) [P(τ_j(1) < ∞ | X_0 = j)]^{k-1} = f_ij f_jj^{k-1}

Hence

P(N_j = k | X_0 = i) = P(N_j ≥ k | X_0 = i) - P(N_j ≥ k+1 | X_0 = i) = f_ij f_jj^{k-1} - f_ij f_jj^k = f_ij f_jj^{k-1} (1 - f_jj)

(ii) We can directly calculate

P(N_j = ∞ | X_0 = i) = lim_{k→∞} P(N_j ≥ k | X_0 = i) = lim_{k→∞} f_ij f_jj^{k-1} = 0

and

E[N_j | X_0 = i] = Σ_{k=0}^∞ P(N_j > k | X_0 = i) = Σ_{k=0}^∞ P(N_j ≥ k+1 | X_0 = i) = Σ_{k=0}^∞ f_ij f_jj^k = f_ij / (1 - f_jj)

The last statement follows from an application of (i): P(N_j = k | X_0 = j) = (1 - f_jj) f_jj^k.

(iii) We compute this directly as

P(N_j = ∞ | X_0 = j) = lim_{k→∞} P(N_j ≥ k | X_0 = j) = lim_{k→∞} f_jj^k = 1

2.2 Computation of f_ij^{(n)}

By definition, f_ij^{(1)} = p_ij and

f_ij^{(n)} = P(X_1 ≠ j, X_2 ≠ j, ..., X_{n-1} ≠ j, X_n = j | X_0 = i)
 = Σ_{k∈S, k≠j} P(X_2 ≠ j, ..., X_{n-1} ≠ j, X_n = j | X_0 = i, X_1 = k) P(X_1 = k | X_0 = i)
 = Σ_{k∈S, k≠j} P(X_2 ≠ j, ..., X_{n-1} ≠ j, X_n = j | X_1 = k) P(X_1 = k | X_0 = i)
 = Σ_{k∈S, k≠j} P(X_1 ≠ j, ..., X_{n-2} ≠ j, X_{n-1} = j | X_0 = k) p_ik

so

f_ij^{(n)} = Σ_{k∈S, k≠j} p_ik f_kj^{(n-1)}

Remark 2.5. Define the column vector f^{(n)}(j) = (f_1j^{(n)}, f_2j^{(n)}, ..., f_{|S|j}^{(n)})^T and the matrix P_(j) as the matrix P with the j-th column replaced by a column of zeroes. Then we can write

f^{(n)}(j) = P_(j) f^{(n-1)}(j) = P_(j)^{n-1} f^{(1)}(j)
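The recursion of Remark 2.5 is immediate to implement. A minimal sketch on the same illustrative 3-state chain used earlier; since that chain is finite and irreducible, Σ_n f_ij^{(n)} = f_ij = 1.

```python
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def first_passage_probs(P, j, nmax):
    """[f_ij^{(n)} for i in S] for n = 1..nmax, via f^{(n)} = P_(j) f^{(n-1)}."""
    f = [row[j] for row in P]                 # f_ij^{(1)} = p_ij
    out = [f]
    for _ in range(nmax - 1):
        f = [sum(P[i][k] * f[k] for k in range(len(P)) if k != j)
             for i in range(len(P))]          # zeroed j-th column of P
        out.append(f)
    return out

for n, f in enumerate(first_passage_probs(P, j=2, nmax=5), start=1):
    print(n, [round(x, 4) for x in f])
```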

2.3 Periodicity

Definition 2.9. The period of a state i, d(i), is defined as

d(i) = gcd{n ≥ 1 : p_ii^{(n)} > 0}

If d(i) = 1 then we say state i is aperiodic. If d(i) > 1 then we say state i has period d(i).

Example 2.10. Let {X_k} be a sequence of iid r.v.s with P(X_k = 1) = p, P(X_k = -1) = q, with p + q = 1 and 0 < p, q < 1. Define S_0 = 0 and S_n = S_0 + Σ_{k=1}^n X_k. Then {S_n : n ≥ 0} is a Markov chain with S = {..., -1, 0, 1, ...} and

P_ij = q if j = i - 1, p if j = i + 1, 0 otherwise

It is clear that d(0) = 2 since p_00^{(n)} > 0 only for n divisible by 2.

Example 2.11. Let {X_k} be a sequence of iid r.v.s with P(X_k = 1) = p, P(X_k = 0) = r, P(X_k = -1) = q, with p + r + q = 1 and 0 < p, r, q < 1. Define S_0 = 0 and S_n = S_0 + Σ_{k=1}^n X_k. Then d(0) = 1 and state 0 is aperiodic.

Example 2.12. Consider a Markov chain with S = {1, 2, 3} whose transition matrix P satisfies p_11^{(2)} > 0 and p_11^{(3)} > 0. Then d(1) = 1 since gcd(2, 3) = 1.

2.4 Solidarity Properties

Definition 2.10. A property is called a solidarity or equivalence property if whenever state i has the property and i ↔ j, then j also has the same property. So if C is an equivalence class and some i ∈ C has the property, then all j ∈ C have the same property.

Proposition 2.8. Recurrence [1], transience [2], and periodicity [3] are equivalence class properties.

Proof. [1, 2] Suppose that i ↔ j and i is recurrent. Then there exists n such that p_ij^{(n)} > 0 and similarly there exists m such that p_ji^{(m)} > 0. In order to prove that j is recurrent, we show Σ_k p_jj^{(k)} = ∞. For any k,

p_jj^{(n+k+m)} = Σ_{α∈S} Σ_{β∈S} p_jα^{(m)} p_αβ^{(k)} p_βj^{(n)} ≥ p_ji^{(m)} p_ii^{(k)} p_ij^{(n)} = c p_ii^{(k)}, c = p_ji^{(m)} p_ij^{(n)} > 0

Since i is recurrent, Σ_k p_ii^{(k)} = ∞, and hence

Σ_l p_jj^{(l)} ≥ Σ_k p_jj^{(n+k+m)} ≥ c Σ_k p_ii^{(k)} = ∞

The contrapositive tells us that transience is an equivalence property.

[3] Suppose i ↔ j, with i of period d(i) and j of period d(j). From our previous result we know p_jj^{(n+m+k)} ≥ c p_ii^{(k)}. Taking k = 0 gives p_jj^{(n+m)} ≥ c > 0, so d(j) divides n + m. On the other hand, if k is such that p_ii^{(k)} > 0 then p_jj^{(n+m+k)} ≥ c p_ii^{(k)} > 0, so d(j) divides n + m + k, and hence d(j) divides (n + m + k) - (n + m) = k. So d(j) is a divisor of every element of {n ≥ 1 : p_ii^{(n)} > 0} and therefore d(j) | d(i). By the symmetric relationship we obtain d(i) | d(j), and hence d(i) = d(j).

Example 2.13. Going back to a recent example, let {X_k} be iid with P(X_k = 1) = p, P(X_k = -1) = 1 - p, and define S_0 = 0, S_n = Σ_{k=1}^n X_k. Let us check whether Σ_n p_00^{(n)} = ∞. Now p_00^{(2n+1)} = 0 for n ∈ ℕ, and

p_00^{(2n)} = C(2n, n) p^n (1-p)^n = ((2n)! / (n! n!)) p^n (1-p)^n ≈ (4p(1-p))^n / √(πn)

using Stirling's approximation, which states n! ≈ √(2π) e^{-n} n^{n+1/2}. Now for p = 1/2 we have p_00^{(2n)} ≈ 1/√(πn), whose tail dominates a divergent series (compare with the harmonic series), and hence Σ_n p_00^{(n)} = ∞: state 0 is recurrent, and all states are recurrent. Repeating the same procedure for p < 1/2 or p > 1/2, we have 4p(1-p) < 1, so Σ_n p_00^{(n)} < ∞: in these cases state 0 is transient, and hence all states are transient. (For completeness, we can also repeat the above using the upper bound of Stirling's formula.)
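The dichotomy in Example 2.13 shows up clearly in the partial sums of Σ_n C(2n, n)(p(1-p))^n. A small sketch computing the terms by their ratio (to avoid enormous factorials); the truncation points are arbitrary.

```python
# Partial sums of sum_{n>=1} p_00^{(2n)} for the simple random walk:
# divergent for p = 1/2 (recurrence), convergent for p != 1/2 (transience).
def partial_sum(p, nmax):
    x, term, total = p * (1 - p), 1.0, 0.0
    for n in range(1, nmax + 1):
        term *= x * (2 * n) * (2 * n - 1) / (n * n)   # C(2n,n) x^n via ratio
        total += term
    return total

for p in (0.5, 0.6):
    print(p, [round(partial_sum(p, n), 2) for n in (100, 400, 1600)])
# p = 0.6: stabilizes near 1/sqrt(1 - 4p(1-p)) - 1 = 4; p = 0.5: keeps growing
```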

Example 2.14. Consider the simple branching process with S = {0, 1, 2, ...}, P(Z_{n,j} = k) = p_k, p_1 ≠ 1, and note that 0 is an absorbing state and hence it is recurrent. Assume p_0 = 0. Then for k ≥ 1

f_kk = P(Z_{n+1} = k | Z_n = k) = (p_1)^k < 1

(once the population exceeds k it can never return), and in this case all states k ≥ 1 are transient. Suppose p_0 = 1; then f_kk = 0 < 1 for k ≥ 1, and hence all states are transient again. Now suppose that 0 < p_0 < 1. Then,

f_kk = P(return to k | Z_0 = k) ≤ 1 - P(Z_1 = 0 | Z_0 = k) = 1 - (p_0)^k < 1

and so all states except 0 are transient in any type of branching process.

2.5 More State Space Decomposition

We can decompose the state space S into S = T ∪ (∪_i C_i), where the C_i are closed sets of recurrent states and T is a set of transient states (not necessarily all in the same equivalence class).

Proposition 2.9. Suppose j is recurrent and j → k for some k ≠ j. Then: (i) k is recurrent; (ii) k → j; (iii) f_jk = f_kj = 1.

Proof. (i) was proven in a previous lecture. We first show (ii); that is, we need to prove that j is accessible from k. Suppose that j is not accessible from k, i.e.

P(X_n ≠ j, n ≥ 1 | X_0 = k) = 1

Since j → k there exists m such that p_jk^{(m)} > 0, and since j is recurrent we also have Σ_n p_jj^{(n)} = ∞. Now,

P(X_l ≠ j, l ≥ m | X_0 = j) ≥ P(X_l ≠ j for l > m, X_m = k | X_0 = j) = P(X_m = k | X_0 = j) P(X_l ≠ j, l ≥ 1 | X_0 = k) = p_jk^{(m)} · 1 > 0

But a recurrent state is revisited infinitely often, so the left side must be 0. Thus this is a contradiction and j is accessible from k.

(iii) Since j → k, there exists m such that

P(X_1 ≠ j, X_2 ≠ j, ..., X_{m-1} ≠ j, X_m = k | X_0 = j) > 0

Since j is recurrent, we have f_jj = 1. Therefore,

0 = 1 - f_jj = P(τ_j(1) = ∞ | X_0 = j)
 ≥ P(τ_j(1) = ∞, X_1 ≠ j, ..., X_{m-1} ≠ j, X_m = k | X_0 = j)
 = P(τ_j(1) = ∞ | X_1 ≠ j, ..., X_{m-1} ≠ j, X_m = k, X_0 = j) P(X_1 ≠ j, ..., X_{m-1} ≠ j, X_m = k | X_0 = j)
 = (1 - f_kj) P(X_1 ≠ j, ..., X_{m-1} ≠ j, X_m = k | X_0 = j)

and since the last factor is positive, we must have 1 - f_kj = 0, i.e. f_kj = 1. By symmetry, f_jk = 1 as well.

Corollary 2.4. The state space S of a Markov chain can be decomposed as

S = T ∪ C_1 ∪ C_2 ∪ ...

where T consists of transient states (not necessarily in one class) and C_1, C_2, ... are closed disjoint classes of recurrent states. If j ∈ C_α then

f_jk = 1 if k ∈ C_α, and f_jk = 0 otherwise

Furthermore, if we relabel the states so that for i = 1, 2, ... the states in C_i have consecutive labels, with states in C_1 having the smallest labels, C_2 the next smallest, etc., we can represent the transition matrix in block form as

       C_1  C_2  C_3  ...  T
C_1  [ P_1   0    0   ...  0 ]
C_2  [  0   P_2   0   ...  0 ]
C_3  [  0    0   P_3  ...  0 ]
...
T    [ Q_1  Q_2  Q_3  ... Q_T ]

where P_1, P_2, P_3, ... are square stochastic matrices.

Remark 2.6. If S contains an infinite number of states, it is possible for S = T, as we have seen in the simple random walk. If S is finite, however, not all states can be transient.

Proposition 2.10. If S is finite, not all states can be transient.

Proof. Suppose that S = {0, 1, 2, ..., m} and S = T. Let j ∈ T and note that Σ_n p_ij^{(n)} < ∞ for any i ∈ S, so p_ij^{(n)} → 0 as n → ∞. Now, since Σ_{j∈S} p_ij^{(n)} is the row sum of P^{(n)}, it is 1, and (S being finite)

1 = lim_{n→∞} Σ_{j∈S} p_ij^{(n)} = Σ_{j∈S} lim_{n→∞} p_ij^{(n)} = 0

which is impossible.

Example 2.15. Consider S = {0, 1, 2, 3, 4} with

P = [ 1  0  0  0  0
      q  0  p  0  0
      q  0  0  p  0
      q  0  0  0  p
      0  0  0  0  1 ]

Here, C_1 = {0}, C_2 = {4} and T = {1, 2, 3}, since relabeling the states in the order 0, 4, 1, 2, 3 we may write

P = [ 1  0  0  0  0
      0  1  0  0  0
      q  0  0  p  0
      q  0  0  0  p
      q  p  0  0  0 ]

Example 2.16. Consider S = {1, 2, 3, 4, 5} with

P = [ 1/2   0   1/2   0    0
      1/4   0   3/4   0    0
      1/3   0    0    0   2/3
       0   1/4  1/2  1/4   0
      1/3   0   1/3   0   1/3 ]

Drawing the probability transition diagram, we can see C = {1, 3, 5} with T = {2, 4}.

2.6 Absorption Probabilities

Definition 2.11. Suppose that S = T ∪ C_1 ∪ C_2 ∪ ... and define τ = inf{n ≥ 0 : X_n ∉ T} as the exit time from T. Of course it is possible that P(τ = ∞ | X_0 = i) > 0. Assume P(τ < ∞ | X_0 = i) = 1 and write the transition matrix in block form

P = [ Q  R
      0  P̃ ]

where Q = (p_ij : i, j ∈ T), R = (p_kl : k ∈ T, l ∈ T^c), and P̃ governs the recurrent classes. When τ is finite, X_τ is the first state that the chain visits outside the transient states. Define

u_ik = P(X_τ = k | X_0 = i), u_i(C_l) = P(X_τ ∈ C_l | X_0 = i) = Σ_{k∈C_l} u_ik

Remark 2.7. We claim that Q_ij^{(n)} = p_ij^{(n)} for i, j ∈ T. To see this, remark that

Q_ij^{(n)} = Σ_{j_1,...,j_{n-1}∈T} p_{ij_1} p_{j_1j_2} ... p_{j_{n-1}j} = P(X_n = j, τ > n | X_0 = i) = P(X_n = j | X_0 = i) = p_ij^{(n)}

since {X_n = j} ⊂ {τ > n} for j ∈ T. From this, we can also see that Σ_n Q_ij^{(n)} < ∞. Now,

u_ij = P(X_τ = j | X_0 = i) = Σ_{k∈S} P(X_τ = j, X_1 = k | X_0 = i)
 = Σ_{k∈T} P(X_τ = j, X_1 = k | X_0 = i) + Σ_{k∈T^c} P(X_τ = j, X_1 = k | X_0 = i)
 = Σ_{k∈T} Σ_{n=2}^∞ P(τ = n, X_τ = j, X_1 = k | X_0 = i) + p_ij
 = Σ_{k∈T} Σ_{n=2}^∞ P(X_2 ∈ T, X_3 ∈ T, ..., X_{n-1} ∈ T, X_n = j, X_1 = k | X_0 = i) + p_ij
 = Σ_{k∈T} Σ_{n=2}^∞ P(X_2 ∈ T, ..., X_{n-1} ∈ T, X_n = j | X_1 = k) P(X_1 = k | X_0 = i) + p_ij
 = Σ_{k∈T} Σ_{n=2}^∞ P(τ = n - 1, X_{n-1} = j | X_0 = k) p_ik + p_ij
 = Σ_{k∈T} P(X_τ = j | X_0 = k) p_ik + p_ij
 = Σ_{k∈T} p_ik u_kj + p_ij

Hence if U = (u_ij : i ∈ T, j ∈ T^c) then

U = QU + R ⟹ (I - Q) U = R

and if (I - Q)^{-1} exists then

U = (I - Q)^{-1} R, (I - Q)^{-1} = Σ_n Q^n

This also implies that

(I - Q)^{-1}_{ij} = E[Σ_n 1(X_n = j) | X_0 = i] = expected number of visits to j starting from i = Σ_n p_ij^{(n)}
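A minimal numerical sketch of U = (I - Q)^{-1} R for a chain in the spirit of Example 2.15 (states 0 and 4 absorbing, T = {1, 2, 3}); the value p = 0.6 is an illustrative assumption, q = 1 - p.

```python
import numpy as np

p, q = 0.6, 0.4
# Q: transitions within T = {1, 2, 3}; R: transitions from T to {0, 4}.
Q = np.array([[0, p, 0],
              [0, 0, p],
              [0, 0, 0]], dtype=float)
R = np.array([[q, 0],
              [q, 0],
              [q, p]], dtype=float)

U = np.linalg.solve(np.eye(3) - Q, R)
print(U)            # row i: (P(absorbed at 0), P(absorbed at 4)) from state i+1
print(np.linalg.inv(np.eye(3) - Q))   # expected visit counts among T
```

For instance the first row gives P(absorbed at 4 | X_0 = 1) = p³ = 0.216, which one can confirm by hand: absorption at 4 requires three consecutive "up" moves.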

3 Stationary Distributions

Definition 3.1. A stochastic process {Y_n : n ≥ 0} is stationary if for all integers m ≥ 0 and k > 0 we have

(Y_0, Y_1, ..., Y_m) =_d (Y_k, Y_{k+1}, ..., Y_{m+k})

Let π = {π_j : j ∈ S} be a probability distribution. It is called a stationary distribution for the Markov chain with transition matrix P if

π^T = π^T P, i.e. π_j = Σ_{k∈S} π_k p_kj, j ∈ S

Let P_π be the distribution of the chain when the initial distribution is π. That is,

P_π([·]) = Σ_{i∈S} P([·] | X_0 = i) π_i

Proposition 3.1. With respect to P_π, {X_n : n ≥ 0} is a stationary process. Thus,

P_π(X_n = i_0, X_{n+1} = i_1, ..., X_{n+k} = i_k) = P_π(X_0 = i_0, X_1 = i_1, ..., X_k = i_k)

for any n, k, and i_0, i_1, ..., i_k ∈ S. In particular, P_π(X_n = j) = π_j for all n and j ∈ S.

Proof. We can compute directly

P_π(X_n = i_0, X_{n+1} = i_1, ..., X_{n+k} = i_k) = Σ_{i∈S} P(X_n = i_0, ..., X_{n+k} = i_k | X_0 = i) P(X_0 = i)
 = Σ_{i∈S} π_i p_{ii_0}^{(n)} p_{i_0i_1} p_{i_1i_2} ... p_{i_{k-1}i_k}
 = π_{i_0} p_{i_0i_1} p_{i_1i_2} ... p_{i_{k-1}i_k}
 = P_π(X_0 = i_0, X_1 = i_1, ..., X_k = i_k)

using Σ_i π_i p_{ii_0}^{(n)} = π_{i_0}.

Definition 3.2. We call ν = {ν_j : j ∈ S} an invariant measure if ν ≥ 0 and ν^T = ν^T P. If ν is an invariant measure and a probability distribution then it is a stationary distribution.

Proposition 3.2. Let i ∈ S be recurrent and define, for j ∈ S,

ν_j = E[Σ_{n=0}^{τ_i(1)-1} 1(X_n = j) | X_0 = i] = Σ_{n=0}^∞ P(X_n = j, τ_i(1) > n | X_0 = i)

Then ν is an invariant measure. If i is positive recurrent, then

π_j = ν_j / E[τ_i(1) | X_0 = i]

is a stationary distribution.

Proof. We will first show that ν^T = ν^T P. Clearly ν_i = 1. Now consider j ≠ i. We need to show that ν_j = Σ_{k∈S} ν_k p_kj. Now,

since X_{τ_i(1)} = i and X_0 = i, for j ≠ i the sums Σ_{n=0}^{τ_i(1)-1} 1(X_n = j) and Σ_{n=1}^{τ_i(1)} 1(X_n = j) coincide, so we have

ν_j = E[Σ_{n=1}^{τ_i(1)} 1(X_n = j) | X_0 = i]
 = E[Σ_{n=1}^∞ 1(X_n = j, τ_i(1) ≥ n) | X_0 = i]
 = Σ_{n=1}^∞ E[1(X_n = j, τ_i(1) ≥ n) | X_0 = i]
 = Σ_{n=1}^∞ P(X_n = j, τ_i(1) ≥ n | X_0 = i)
 = p_ij + Σ_{n=2}^∞ P(X_n = j, τ_i(1) ≥ n | X_0 = i)
 = p_ij + Σ_{n=2}^∞ Σ_{k∈S, k≠i} P(X_n = j, X_{n-1} = k, τ_i(1) ≥ n | X_0 = i)
 = p_ij + Σ_{n=2}^∞ Σ_{k∈S, k≠i} P(X_n = j | X_{n-1} = k, τ_i(1) ≥ n, X_0 = i) P(X_{n-1} = k, τ_i(1) ≥ n | X_0 = i)
 = p_ij + Σ_{n=2}^∞ Σ_{k∈S, k≠i} p_kj P(X_{n-1} = k, τ_i(1) ≥ n | X_0 = i)

where the conditional probability equals p_kj because the event {τ_i(1) ≥ n} is determined by X_1, ..., X_{n-1}. Next, we observe that

{τ_i(1) ≥ n, X_{n-1} = k} = {X_1 ≠ i, X_2 ≠ i, ..., X_{n-1} ≠ i, X_{n-1} = k}

and we may continue as

ν_j = p_ij + Σ_{n=2}^∞ Σ_{k≠i} p_kj P(X_{n-1} = k, τ_i(1) ≥ n | X_0 = i)
 = p_ij ν_i + Σ_{k≠i} p_kj Σ_{n=2}^∞ P(τ_i(1) ≥ n, X_{n-1} = k | X_0 = i)
 = p_ij ν_i + Σ_{k≠i} p_kj Σ_{n=1}^∞ P(τ_i(1) ≥ n + 1, X_n = k | X_0 = i)
 = p_ij ν_i + Σ_{k≠i} p_kj E[Σ_{n=1}^{τ_i(1)-1} 1(X_n = k) | X_0 = i]
 = p_ij ν_i + Σ_{k≠i} p_kj ν_k
 = Σ_{k∈S} p_kj ν_k

using ν_i = 1. So ν is an invariant measure. Next, we calculate

Σ_{j∈S} ν_j = Σ_{j∈S} E[Σ_{n=0}^{τ_i(1)-1} 1(X_n = j) | X_0 = i] = E[Σ_{n=0}^{τ_i(1)-1} Σ_{j∈S} 1(X_n = j) | X_0 = i] = E[Σ_{n=0}^{τ_i(1)-1} 1 | X_0 = i] = E[τ_i(1) | X_0 = i]

and we are done, as the normalized ν_j is π_j.
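Numerically, π^T = π^T P together with Σ_j π_j = 1 is just a linear system. A minimal sketch on the same illustrative 3-state chain as before:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # (P^T - I) pi = 0 and 1^T pi = 1
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # stationary distribution
print(pi @ P)    # equals pi again, up to rounding
```

By Proposition 3.2, 1/π_j should match the mean return time E[τ_j(1) | X_0 = j], which one could also check by simulation.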

Proposition 3.3. If the Markov chain is irreducible and recurrent, then an invariant measure ν exists, satisfies 0 < ν_j < ∞ for all j ∈ S, and is unique up to a constant: if ν_1^T = ν_1^T P and ν_2^T = ν_2^T P then ν_1 = c ν_2. Furthermore, if the Markov chain is positive recurrent and irreducible, there exists a unique stationary distribution π, where

π_j = 1 / E[τ_j(1) | X_0 = j]

Lemma 3.1. (Strong Law of Large Numbers (SLLN)) Suppose {Y_n} is a sequence of iid r.v.s with E(|Y_i|) < ∞. Then,

P(lim_{n→∞} (Σ_{i=1}^n Y_i)/n = E[Y_1]) = 1

(convergence almost surely (a.s.)).

Proposition 3.4. Suppose the Markov chain is irreducible and positive recurrent, and let π be the unique stationary distribution. Then

lim_{N→∞} (1/N) Σ_{n=1}^N f(X_n) = Σ_{j∈S} f(j) π_j a.s., i.e. P(lim_{N→∞} (1/N) Σ_{n=1}^N f(X_n) = Σ_{j∈S} f(j) π_j) = 1

Note that if f(k) = 1(k = i) then

lim_{N→∞} (1/N) Σ_{n=1}^N f(X_n) = π_i

Proof. Remark that if f is non-negative (f ≥ 0), then

Σ_{j∈S} f(j) π_j = Σ_{j∈S} f(j) E[Σ_{n=0}^{τ_i(1)-1} 1(X_n = j) | X_0 = i] / E[τ_i(1) | X_0 = i]
 = E[Σ_{n=0}^{τ_i(1)-1} Σ_{j∈S} f(j) 1(X_n = j) | X_0 = i] / E[τ_i(1) | X_0 = i]
 = E[Σ_{n=0}^{τ_i(1)-1} f(X_n) | X_0 = i] / E[τ_i(1) | X_0 = i]
 =(*) E[Σ_{n=1}^{τ_i(1)} f(X_n) | X_0 = i] / E[τ_i(1) | X_0 = i]

where (*) is because X_0 = X_{τ_i(1)} = i. Now define B(N) = sup{k : τ_i(k) ≤ N}, the number of visits to i up to time N, and

η_k = Σ_{n=τ_i(k)+1}^{τ_i(k+1)} f(X_n)

The sequence {η_k} is a sequence of iid r.v.s (the cycles between successive visits to the same state are iid), and

lim_{m→∞} (1/m) Σ_{k=1}^m η_k = E[Σ_{n=1}^{τ_i(1)} f(X_n) | X_0 = i]

Next, remark that Σ_{n=τ_i(1)+1}^{τ_i(B(N))} f(X_n) = Σ_{k=1}^{B(N)-1} η_k, so that for f ≥ 0 we have the lower bound

Σ_{n=1}^N f(X_n) ≥ Σ_{n=1}^{τ_i(1)} f(X_n) + Σ_{k=1}^{B(N)-1} η_k

and similarly the upper bound

Σ_{n=1}^N f(X_n) ≤ Σ_{n=1}^{τ_i(1)} f(X_n) + Σ_{k=1}^{B(N)} η_k

Looking at the limiting behaviour: since τ_i(1) < ∞ a.s.,

lim_{N→∞} (1/N) Σ_{n=1}^{τ_i(1)} f(X_n) = 0

Now,

lim_{N→∞} (1/N) Σ_{k=1}^{B(N)-1} η_k = lim_{N→∞} [Σ_{k=1}^{B(N)-1} η_k / (B(N)-1)] [(B(N)-1)/N] =(?) E[η_1 | X_0 = i] / E[τ_i(1) | X_0 = i]

and similarly

lim_{N→∞} (1/N) Σ_{k=1}^{B(N)} η_k = E[η_1 | X_0 = i] / E[τ_i(1) | X_0 = i]

where (?) holds because the first factor converges by the SLLN (B(N) → ∞ a.s.) and the second factor converges because

τ_i(B(N)) ≤ N ≤ τ_i(B(N)+1) ⟹ τ_i(B(N))/B(N) ≤ N/B(N) ≤ [τ_i(B(N)+1)/(B(N)+1)] [(B(N)+1)/B(N)]

and, by the SLLN applied to the cycle lengths,

lim_{N→∞} τ_i(B(N))/B(N) = lim_{N→∞} [τ_i(B(N)+1)/(B(N)+1)] [(B(N)+1)/B(N)] = E[τ_i(1) | X_0 = i]

so that N/B(N) → E[τ_i(1) | X_0 = i]. Hence we finally have

lim_{N→∞} (1/N) Σ_{n=1}^N f(X_n) = E[Σ_{n=1}^{τ_i(1)} f(X_n) | X_0 = i] / E[τ_i(1) | X_0 = i] = Σ_{j∈S} f(j) π_j
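Proposition 3.4 with f(k) = 1(k = i) says that empirical occupation fractions converge to π. A Monte Carlo sketch on the same illustrative chain used in the earlier sketches, so the output can be compared against the linear-solve result above:

```python
import random
from bisect import bisect_right
from itertools import accumulate

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
cum = [list(accumulate(row)) for row in P]

random.seed(3)
x, visits, N = 0, [0, 0, 0], 200_000
for _ in range(N):
    x = bisect_right(cum[x], random.random())   # one inverse-transform step
    visits[x] += 1
print([v / N for v in visits])   # approaches the stationary distribution pi
```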

Corollary 3.1. If f is bounded then

lim_{N→∞} (1/N) Σ_{n=1}^N E[f(X_n) | X_0 = i] = Σ_{j∈S} f(j) π_j

In particular, if f(k) = 1(k = j) then E[f(X_n) | X_0 = i] = P(X_n = j | X_0 = i) = p_ij^{(n)}. So,

lim_{N→∞} (1/N) Σ_{n=1}^N p_ij^{(n)} = π_j, i.e. lim_{N→∞} (1/N) Σ_{n=1}^N P^n = Π

where Π is the matrix each of whose rows is π^T.

Proof. We know that

lim_{N→∞} (1/N) Σ_{n=1}^N f(X_n) = Σ_{j∈S} f(j) π_j, a.s.

and suppose that |f(k)| ≤ M for all k ∈ S, so that |(1/N) Σ_{n=1}^N f(X_n)| ≤ M. Then, by the dominated convergence theorem,

E[lim_{N→∞} (1/N) Σ_{n=1}^N f(X_n) | X_0 = i] = lim_{N→∞} E[(1/N) Σ_{n=1}^N f(X_n) | X_0 = i] = lim_{N→∞} (1/N) Σ_{n=1}^N E[f(X_n) | X_0 = i] = Σ_{j∈S} f(j) π_j

3.1 Limiting Distribution

Proposition 3.5. A limit distribution is a stationary distribution.

Proof. Directly, with π_j = lim_{n→∞} p_ij^{(n)} and S = {0, 1, 2, ...}, we have for all M ∈ ℕ

π_j = lim_{n→∞} p_ij^{(n)} = lim_{n→∞} Σ_{k∈S} p_ik^{(n)} p_kj ≥ Σ_{k=0}^M lim_{n→∞} p_ik^{(n)} p_kj = Σ_{k=0}^M π_k p_kj

and letting M → ∞ gives π_j ≥ Σ_k π_k p_kj. Suppose there exists some j such that π_j > Σ_k π_k p_kj. Then

Σ_j π_j > Σ_j Σ_k π_k p_kj = Σ_k π_k Σ_j p_kj = Σ_k π_k

which is impossible. Thus π_j = Σ_k π_k p_kj for all j.

Theorem 3.1. Suppose the Markov chain is irreducible and aperiodic and that a stationary distribution π exists, with π^T = π^T P, Σ_{j∈S} π_j = 1 and π_j ≥ 0. Then:

(1) The Markov chain is positive recurrent
(2) π is a limit distribution with lim_{n→∞} p_ij^{(n)} = π_j for all i, j ∈ S
(3) π_j > 0 for all j ∈ S
(4) The stationary distribution is unique

Proof. (1) If the chain were transient then lim_{n→∞} p_ij^{(n)} = 0 for all i, j ∈ S, and so

π_j = Σ_{i∈S} π_i p_ij^{(n)} → 0, j ∈ S

But if π_j = 0 for all j ∈ S then we cannot have Σ_{j∈S} π_j = 1; the same argument rules out null recurrence. Now,

ν_j = E[Σ_{n=0}^{τ_i(1)-1} 1(X_n = j) | X_0 = i] = c π_j

and thus

∞ > Σ_j ν_j = Σ_j E[Σ_{n=0}^{τ_i(1)-1} 1(X_n = j) | X_0 = i] = E[Σ_{n=0}^{τ_i(1)-1} Σ_j 1(X_n = j) | X_0 = i] = E[Σ_{n=0}^{τ_i(1)-1} 1 | X_0 = i] = E[τ_i(1) | X_0 = i]

So the chain is positive recurrent. This gives us

π_j = 1 / E[τ_j(1) | X_0 = j]

and (3) and (4) follow from a previous proposition and the above remark. The proof of (2) is much more involved. We first start with a lemma.

Lemma 3.2. Let the chain be irreducible and aperiodic. Then for i, j ∈ S there exists n_0(i, j) such that for all n ≥ n_0(i, j) we have p_ij^{(n)} > 0.

Proof. Define Λ = {n : p_jj^{(n)} > 0}.

(1) We know that the greatest common divisor of the set Λ is 1.

(2) If m ∈ Λ and n ∈ Λ then m + n ∈ Λ, by the fact that

p_jj^{(n+m)} = Σ_{k∈S} p_jk^{(n)} p_kj^{(m)} ≥ p_jj^{(n)} p_jj^{(m)} > 0

A set of positive integers closed under addition with greatest common divisor 1 contains all sufficiently large integers; say p_jj^{(n)} > 0 for all n ≥ n_1. Given i, j ∈ S there exists r such that p_ij^{(r)} > 0, so for n ≥ r + n_1 we have

p_ij^{(n)} = Σ_{k∈S} p_ik^{(r)} p_kj^{(n-r)} ≥ p_ij^{(r)} p_jj^{(n-r)} > 0

by the choice of r and since n - r ≥ n_1.

Proof. [of (2) in the previous theorem, using coupling] Let {X_n} be the original Markov chain, and let {Y_n} be independent of {X_n} with the same transition matrix as {X_n}, but with initial distribution π. So P(Y_n = j) = π_j for any n ∈ ℕ. Define ε_n = (X_n, Y_n), so that {ε_n} is a Markov chain with states in S × S. Now,

P(ε_{n+1} = (k, l) | ε_n = (i, j)) = p_ik p_jl, P(ε_n = (k, l) | ε_0 = (i, j)) = p_ik^{(n)} p_jl^{(n)}

and by Lemma 3.2 there exist n_1, n_2 such that p_ik^{(n)} > 0 for all n ≥ n_1 and p_jl^{(m)} > 0 for all m ≥ n_2. Then for all n ≥ max(n_1, n_2) we have p_ik^{(n)} p_jl^{(n)} > 0. Thus, {ε_n} is an irreducible Markov chain.

Define π_{k,l} = π_k π_l. Then the product of the stationary distributions is a stationary distribution for {ε_n}:

Σ_{(i,j)∈S×S} π_{i,j} P(ε_{n+1} = (k, l) | ε_n = (i, j)) = Σ_{(i,j)∈S×S} π_i π_j p_ik p_jl = (Σ_{i∈S} π_i p_ik)(Σ_{j∈S} π_j p_jl) = π_k π_l = π_{k,l}

and since Σ_{(k,l)∈S×S} π_{k,l} = (Σ_k π_k)(Σ_l π_l) = 1, the chain {ε_n} is positive recurrent. Define, for i_0 ∈ S,

τ_{i_0,i_0} = inf{n : ε_n = (i_0, i_0)}

with the fact that P(τ_{i_0,i_0} < ∞) = 1 (from the recurrence of {ε_n}). Now,

P(X_n = j, τ_{i_0,i_0} ≤ n) = Σ_{m=0}^n Σ_{k∈S} P(ε_n = (j, k), τ_{i_0,i_0} = m)
 = Σ_{m=0}^n Σ_{k∈S} P(ε_n = (j, k) | τ_{i_0,i_0} = m) P(τ_{i_0,i_0} = m)
 = Σ_{m=0}^n Σ_{k∈S} P(ε_n = (j, k) | ε_m = (i_0, i_0)) P(τ_{i_0,i_0} = m)
 = Σ_{m=0}^n Σ_{k∈S} P(ε_{n-m} = (j, k) | ε_0 = (i_0, i_0)) P(τ_{i_0,i_0} = m)
 = Σ_{m=0}^n Σ_{k∈S} p_{i_0j}^{(n-m)} p_{i_0k}^{(n-m)} P(τ_{i_0,i_0} = m)
 = Σ_{m=0}^n p_{i_0j}^{(n-m)} P(τ_{i_0,i_0} = m) Σ_{k∈S} p_{i_0k}^{(n-m)}
 = Σ_{m=0}^n p_{i_0j}^{(n-m)} P(τ_{i_0,i_0} = m)

since Σ_{k∈S} p_{i_0k}^{(n-m)} = 1. By a similar construction, we can also show that

P(Y_n = j, τ_{i_0,i_0} ≤ n) = Σ_{m=0}^n p_{i_0j}^{(n-m)} P(τ_{i_0,i_0} = m)

Next, if we suppose that X_0 = i, then

p_ij^{(n)} - π_j = P(X_n = j) - P(Y_n = j)
 = P(X_n = j, τ_{i_0,i_0} ≤ n) + P(X_n = j, τ_{i_0,i_0} > n) - P(Y_n = j, τ_{i_0,i_0} ≤ n) - P(Y_n = j, τ_{i_0,i_0} > n)
 = P(X_n = j, τ_{i_0,i_0} > n) - P(Y_n = j, τ_{i_0,i_0} > n)
 = E[1(X_n = j) 1(τ_{i_0,i_0} > n)] - E[1(Y_n = j) 1(τ_{i_0,i_0} > n)]
 = E[[1(X_n = j) - 1(Y_n = j)] 1(τ_{i_0,i_0} > n)]

and hence

|p_ij^{(n)} - π_j| ≤ E[1(τ_{i_0,i_0} > n)] = P(τ_{i_0,i_0} > n)

Taking limits in n on both sides yields

lim_{n→∞} p_ij^{(n)} = π_j

Definition 3.3. An irreducible, aperiodic, positive recurrent Markov chain is called an ergodic Markov chain.

Corollary 3.2. Assume that a Markov chain is irreducible and aperiodic. A stationary distribution exists if and only if the chain is positive recurrent, if and only if a limit distribution (defined through lim_{n→∞} P^n) exists. If the chain is irreducible and periodic, the existence of a stationary distribution is equivalent to the states being positive recurrent.

Proposition 3.6. If the Markov chain is irreducible and aperiodic and either null recurrent or transient, then

lim_{n→∞} p_ij^{(n)} = 0 for all i, j ∈ S

We can conclude that in a finite state irreducible Markov chain, no state can be null recurrent.

Example 3.1. Consider the queueing example (Example 2.4(1)) with X_n = X(T_n⁺), where T_n⁺ is right after the n-th departure. Write the recursion as

X_{n+1} = max(X_n - 1, 0) + A_{n+1}

where A_{n+1} is the number of arrivals during the (n+1)-th service time and {A_n} is a sequence of iid r.v.s. Denote P(A_1 = k) = a_k for k = 0, 1, 2, ... and note that, starting from state 0,

P = [ a_0  a_1  a_2  a_3  ...
      a_0  a_1  a_2  a_3  ...
       0   a_0  a_1  a_2  ...
       0    0   a_0  a_1  ...
      ...                     ]

This gives us the following sequence of equations for the stationary distribution:

π_0 = a_0 π_0 + a_0 π_1
π_1 = a_1 π_0 + a_1 π_1 + a_0 π_2
...
π_n = a_n π_0 + Σ_{j=1}^{n+1} a_{n+1-j} π_j

Using the generating functions Π(s) = Σ_n s^n π_n and A(s) = Σ_n s^n a_n, we have

Π(s) = Σ_n s^n π_n = π_0 Σ_n s^n a_n + Σ_n s^n Σ_{j=1}^{n+1} a_{n+1-j} π_j
     = π_0 A(s) + (1/s) Σ_{j=1}^∞ π_j s^j Σ_{n=j-1}^∞ a_{n+1-j} s^{n+1-j}
     = π_0 A(s) + (1/s) (Π(s) - π_0) A(s)

and hence

Π(s) = π_0 A(s) (1 - 1/s) / (1 - A(s)/s) = π_0 A(s) (1 - s) / (A(s) - s)

Using the fact that Π(1) = 1, we evaluate lim_{s↑1} Π(s). First, using l'Hôpital's rule,

lim_{s↑1} (1 - A(s)) / (1 - s) = A'(1) = Σ_k k a_k = ρ

and so the limit becomes

lim_{s↑1} Π(s) = π_0 / (1 - ρ) = 1 ⟹ π_0 = 1 - ρ

with existence requiring that ρ < 1.
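The conclusion π_0 = 1 - ρ can be checked by simulating the recursion X_{n+1} = max(X_n - 1, 0) + A_{n+1} directly. A minimal sketch; taking A_n ~ Poisson(ρ) is an illustrative assumption (any a_k with mean ρ < 1 would do).

```python
import math
import random

def poisson_sample(lam):
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod >= limit:
        prod *= random.random()
        k += 1
    return k

random.seed(4)
rho, N = 0.5, 500_000
x, zeros = 0, 0
for _ in range(N):
    x = max(x - 1, 0) + poisson_sample(rho)
    zeros += (x == 0)
print(zeros / N)   # long-run fraction of time empty, approximately 1 - rho = 0.5
```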

4 Renewal Theory

Definition 4.1. Suppose that {Y_n : n ≥ 0} is a sequence of independent non-negative random variables. Furthermore, suppose the sequence {Y_n : n ≥ 1} is iid with common distribution F(·). We assume for all n ≥ 1 that

P(Y_n < ∞) = 1 and P(Y_n = 0) < 1

For n ≥ 0, define S_n = Y_0 + Y_1 + ... + Y_n. The sequence {S_n : n ≥ 0} is called a renewal process. The process is called delayed if P(Y_0 > 0) > 0 and pure if S_0 = Y_0 = 0. If F(∞) = 1 then the process is called a proper renewal process. If F(∞) < 1 then the process is called terminating or transient.

Example 4.1.

1) Replacement times of a machine whose lifetimes are independent identically distributed random variables.

2) Suppose {X_n : n ≥ 0} is a Markov chain with finite state space S. Fix state i and define

τ_0(i) = inf{n ≥ 0 : X_n = i}, τ_{n+1}(i) = inf{k > τ_n(i) : X_k = i}

Then {τ_n(i) : n ≥ 0} is a renewal process. If X_0 = i, it is a pure renewal process; otherwise, it is a delayed renewal process.

3) A machine is either up or down: the sequence of on times are iid r.v.s and the sequence of off times are iid r.v.s.

Definition 4.2. Define N(t) = Σ_n 1_{[0,t]}(S_n). We call {N(t) : t ≥ 0} a counting process and U(t) = E[N(t)] a renewal function.

[Review your Lebesgue-Stieltjes integrals more here]

(1) If U(x) is absolutely continuous, then

∫ g(x) U(dx) = ∫ g(x) u(x) dx

for some density function u(x), where U(b) - U(a) = ∫_a^b u(s) ds.

(2) Suppose that U is discrete, with lim_{h↓0} [U(a_i + h) - U(a_i - h)] = w_i. Then U has atoms at locations {a_i} of weights {w_i}, and

∫ g(x) U(dx) = ∫ g(x) dU(x) = Σ_i g(a_i) w_i

(3) Suppose we have a mixed measure U(x) = α U_AC(x) + β U_D(x) for α, β > 0. Then

∫ g(x) U(dx) = α ∫ g(x) u_AC(x) dx + β Σ_i g(a_i) w_i

Remark 4.1. Consider the case where U_AC(x) = ∫_0^x u(s) ds and U_D(x) = 1 for x ≥ 0 and 0 for x < 0, so that for x ≥ 0 we have U(x) = U_AC(x) + U_D(x) = 1 + ∫_0^x u(s) ds. Then,

∫ g(x) U(dx) = g(0) + ∫ g(x) u(x) dx

4.1 Convolution

Suppose all functions are defined on [0, ∞). A function g is called locally bounded if g is bounded on finite intervals. For a locally bounded non-negative function g and a non-negative distribution function F, define the convolution of F and g as

F * g(t) = ∫_0^t g(t - x) F(dx), for t ≥ 0

Here are some properties:

1. F * g(t) ≥ 0 for all t ≥ 0.

2. F * g(t) is locally bounded, because for s ≤ t:

F * g(s) = ∫_0^s g(s - x) F(dx) ≤ ∫_0^s sup_{v≤t} g(v) F(dx) ≤ sup_{v≤t} g(v) F(t)

and hence sup_{s≤t} F * g(s) ≤ sup_{s≤t} g(s) F(t).

3. If g is bounded and continuous, then F * g is bounded and continuous. To see this, suppose that Y_1 is the random variable with distribution F. Then

F * g(t) = ∫ g(t - x) F(dx) = E[g(t - Y_1)]

If t_n → t then g(t_n - Y_1) → g(t - Y_1) almost surely, by the continuity of g. From dominated convergence, we have E[g(t_n - Y_1)] → E[g(t - Y_1)].

4. The convolution can be repeated, F * (F * g), where

F^{*0}(x) = 1_{[0,∞)}(x)
F^{*1}(x) = F(x)
F^{*2}(x) = F * F(x)
...
F^{*n}(x) = F * F * ... * F(x) (n-fold)

5. Let X_1 and X_2 be two independent random variables with distributions F_1 and F_2. Then F_1 * F_2 is the distribution of X_1 + X_2.
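A short simulation of a pure renewal process, estimating the renewal function U(t) = E[N(t)] of Definition 4.2. Exponential(1) interarrivals are an illustrative assumption; in that case the renewals after time 0 form a Poisson process, and counting the S_0 = 0 renewal (as in Remark 4.1) gives U(t) = 1 + t.

```python
import random

random.seed(5)

def count_renewals(t):
    """N(t) = #{n >= 0 : S_n <= t} for a pure renewal process, S_0 = 0."""
    n, s = 0, 0.0
    while s <= t:
        n += 1                           # count the renewal at the current S
        s += random.expovariate(1.0)     # next interarrival Y ~ Exp(1)
    return n

t, trials = 10.0, 20_000
print(sum(count_renewals(t) for _ in range(trials)) / trials)   # ~ 1 + t = 11
```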

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected 4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X

More information

Statistics 150: Spring 2007

Statistics 150: Spring 2007 Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

1 Generating functions

1 Generating functions 1 Generating functions Even quite straightforward counting problems can lead to laborious and lengthy calculations. These are greatly simplified by using generating functions. 2 Definition 1.1. Given a

More information

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < + Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n

More information

Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS

Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS Treball final de grau GRAU DE MATEMÀTIQUES Facultat de Matemàtiques Universitat de Barcelona MARKOV CHAINS Autor: Anna Areny Satorra Director: Dr. David Márquez Carreras Realitzat a: Departament de probabilitat,

More information

Renewal theory and its applications

Renewal theory and its applications Renewal theory and its applications Stella Kapodistria and Jacques Resing September 11th, 212 ISP Definition of a Renewal process Renewal theory and its applications If we substitute the Exponentially

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 7

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 7 MS&E 321 Spring 12-13 Stochastic Systems June 1, 213 Prof. Peter W. Glynn Page 1 of 7 Section 9: Renewal Theory Contents 9.1 Renewal Equations..................................... 1 9.2 Solving the Renewal

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution.

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution. Lecture 7 1 Stationary measures of a Markov chain We now study the long time behavior of a Markov Chain: in particular, the existence and uniqueness of stationary measures, and the convergence of the distribution

More information

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015 ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

Chapter 2: Markov Chains and Queues in Discrete Time

Chapter 2: Markov Chains and Queues in Discrete Time Chapter 2: Markov Chains and Queues in Discrete Time L. Breuer University of Kent 1 Definition Let X n with n N 0 denote random variables on a discrete space E. The sequence X = (X n : n N 0 ) is called

More information

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1 Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

Disjointness and Additivity

Disjointness and Additivity Midterm 2: Format Midterm 2 Review CS70 Summer 2016 - Lecture 6D David Dinh 28 July 2016 UC Berkeley 8 questions, 190 points, 110 minutes (same as MT1). Two pages (one double-sided sheet) of handwritten

More information

Positive and null recurrent-branching Process

Positive and null recurrent-branching Process December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?

More information

Lecture 9 Classification of States

Lecture 9 Classification of States Lecture 9: Classification of States of 27 Course: M32K Intro to Stochastic Processes Term: Fall 204 Instructor: Gordan Zitkovic Lecture 9 Classification of States There will be a lot of definitions and

More information

Midterm 2 Review. CS70 Summer Lecture 6D. David Dinh 28 July UC Berkeley

Midterm 2 Review. CS70 Summer Lecture 6D. David Dinh 28 July UC Berkeley Midterm 2 Review CS70 Summer 2016 - Lecture 6D David Dinh 28 July 2016 UC Berkeley Midterm 2: Format 8 questions, 190 points, 110 minutes (same as MT1). Two pages (one double-sided sheet) of handwritten

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Stationary Probabilities of Markov Chains with Upper Hessenberg Transition Matrices

Stationary Probabilities of Markov Chains with Upper Hessenberg Transition Matrices Stationary Probabilities of Marov Chains with Upper Hessenberg Transition Matrices Y. Quennel ZHAO Department of Mathematics and Statistics University of Winnipeg Winnipeg, Manitoba CANADA R3B 2E9 Susan

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

2905 Queueing Theory and Simulation PART III: HIGHER DIMENSIONAL AND NON-MARKOVIAN QUEUES

2905 Queueing Theory and Simulation PART III: HIGHER DIMENSIONAL AND NON-MARKOVIAN QUEUES 295 Queueing Theory and Simulation PART III: HIGHER DIMENSIONAL AND NON-MARKOVIAN QUEUES 16 Queueing Systems with Two Types of Customers In this section, we discuss queueing systems with two types of customers.

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Markov Chains (Part 3)

Markov Chains (Part 3) Markov Chains (Part 3) State Classification Markov Chains - State Classification Accessibility State j is accessible from state i if p ij (n) > for some n>=, meaning that starting at state i, there is

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

Poisson Processes. Stochastic Processes. Feb UC3M

Poisson Processes. Stochastic Processes. Feb UC3M Poisson Processes Stochastic Processes UC3M Feb. 2012 Exponential random variables A random variable T has exponential distribution with rate λ > 0 if its probability density function can been written

More information

Stochastic Processes

Stochastic Processes Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False

More information

Lecture 4a: Continuous-Time Markov Chain Models

Lecture 4a: Continuous-Time Markov Chain Models Lecture 4a: Continuous-Time Markov Chain Models Continuous-time Markov chains are stochastic processes whose time is continuous, t [0, ), but the random variables are discrete. Prominent examples of continuous-time

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

TMA4265 Stochastic processes ST2101 Stochastic simulation and modelling

TMA4265 Stochastic processes ST2101 Stochastic simulation and modelling Norwegian University of Science and Technology Department of Mathematical Sciences Page of 7 English Contact during examination: Øyvind Bakke Telephone: 73 9 8 26, 99 4 673 TMA426 Stochastic processes

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Elementary Probability. Exam Number 38119

Elementary Probability. Exam Number 38119 Elementary Probability Exam Number 38119 2 1. Introduction Consider any experiment whose result is unknown, for example throwing a coin, the daily number of customers in a supermarket or the duration of

More information

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes?

(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes? IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2006, Professor Whitt SOLUTIONS to Final Exam Chapters 4-7 and 10 in Ross, Tuesday, December 19, 4:10pm-7:00pm Open Book: but only

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Question Points Score Total: 70

Question Points Score Total: 70 The University of British Columbia Final Examination - April 204 Mathematics 303 Dr. D. Brydges Time: 2.5 hours Last Name First Signature Student Number Special Instructions: Closed book exam, no calculators.

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

Practical conditions on Markov chains for weak convergence of tail empirical processes

Practical conditions on Markov chains for weak convergence of tail empirical processes Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,

More information

Lecture 20: Reversible Processes and Queues

Lecture 20: Reversible Processes and Queues Lecture 20: Reversible Processes and Queues 1 Examples of reversible processes 11 Birth-death processes We define two non-negative sequences birth and death rates denoted by {λ n : n N 0 } and {µ n : n

More information

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis TCOM 50: Networking Theory & Fundamentals Lecture 6 February 9, 003 Prof. Yannis A. Korilis 6- Topics Time-Reversal of Markov Chains Reversibility Truncating a Reversible Markov Chain Burke s Theorem Queues

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

ISE/OR 760 Applied Stochastic Modeling

ISE/OR 760 Applied Stochastic Modeling ISE/OR 760 Applied Stochastic Modeling Topic 2: Discrete Time Markov Chain Yunan Liu Department of Industrial and Systems Engineering NC State University Yunan Liu (NC State University) ISE/OR 760 1 /

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ),

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10. x n+1 = f(x n ), MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 4: Steady-State Theory Contents 4.1 The Concept of Stochastic Equilibrium.......................... 1 4.2

More information

1 IEOR 4701: Continuous-Time Markov Chains

1 IEOR 4701: Continuous-Time Markov Chains Copyright c 2006 by Karl Sigman 1 IEOR 4701: Continuous-Time Markov Chains A Markov chain in discrete time, {X n : n 0}, remains in any state for exactly one unit of time before making a transition (change

More information

Interlude: Practice Final

Interlude: Practice Final 8 POISSON PROCESS 08 Interlude: Practice Final This practice exam covers the material from the chapters 9 through 8. Give yourself 0 minutes to solve the six problems, which you may assume have equal point

More information

Adventures in Stochastic Processes

Adventures in Stochastic Processes Sidney Resnick Adventures in Stochastic Processes with Illustrations Birkhäuser Boston Basel Berlin Table of Contents Preface ix CHAPTER 1. PRELIMINARIES: DISCRETE INDEX SETS AND/OR DISCRETE STATE SPACES

More information

Lecture 10: Semi-Markov Type Processes

Lecture 10: Semi-Markov Type Processes Lecture 1: Semi-Markov Type Processes 1. Semi-Markov processes (SMP) 1.1 Definition of SMP 1.2 Transition probabilities for SMP 1.3 Hitting times and semi-markov renewal equations 2. Processes with semi-markov

More information

Statistics 253/317 Introduction to Probability Models. Winter Midterm Exam Monday, Feb 10, 2014

Statistics 253/317 Introduction to Probability Models. Winter Midterm Exam Monday, Feb 10, 2014 Statistics 253/317 Introduction to Probability Models Winter 2014 - Midterm Exam Monday, Feb 10, 2014 Student Name (print): (a) Do not sit directly next to another student. (b) This is a closed-book, closed-note

More information

2 Discrete-Time Markov Chains

2 Discrete-Time Markov Chains 2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

The Distribution of Mixing Times in Markov Chains

The Distribution of Mixing Times in Markov Chains The Distribution of Mixing Times in Markov Chains Jeffrey J. Hunter School of Computing & Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand December 2010 Abstract The distribution

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

MARKOV PROCESSES. Valerio Di Valerio

MARKOV PROCESSES. Valerio Di Valerio MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some

More information

Zdzis law Brzeźniak and Tomasz Zastawniak

Zdzis law Brzeźniak and Tomasz Zastawniak Basic Stochastic Processes by Zdzis law Brzeźniak and Tomasz Zastawniak Springer-Verlag, London 1999 Corrections in the 2nd printing Version: 21 May 2005 Page and line numbers refer to the 2nd printing

More information

Lecture 4: Probability and Discrete Random Variables

Lecture 4: Probability and Discrete Random Variables Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007) Lecture 4: Probability and Discrete Random Variables Wednesday, January 21, 2009 Lecturer: Atri Rudra Scribe: Anonymous 1

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST

RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST J. Appl. Prob. 45, 568 574 (28) Printed in England Applied Probability Trust 28 RELATING TIME AND CUSTOMER AVERAGES FOR QUEUES USING FORWARD COUPLING FROM THE PAST EROL A. PEKÖZ, Boston University SHELDON

More information

Applied Stochastic Processes

Applied Stochastic Processes Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................

More information

ACM 116: Lectures 3 4

ACM 116: Lectures 3 4 1 ACM 116: Lectures 3 4 Joint distributions The multivariate normal distribution Conditional distributions Independent random variables Conditional distributions and Monte Carlo: Rejection sampling Variance

More information

1 Delayed Renewal Processes: Exploiting Laplace Transforms

1 Delayed Renewal Processes: Exploiting Laplace Transforms IEOR 6711: Stochastic Models I Professor Whitt, Tuesday, October 22, 213 Renewal Theory: Proof of Blackwell s theorem 1 Delayed Renewal Processes: Exploiting Laplace Transforms The proof of Blackwell s

More information

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical

CDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one

More information

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

Random Models. Tusheng Zhang. February 14, 2013

Random Models. Tusheng Zhang. February 14, 2013 Random Models Tusheng Zhang February 14, 013 1 Introduction In this module, we will introduce some random models which have many real life applications. The course consists of four parts. 1. A brief review

More information

12 Markov chains The Markov property

12 Markov chains The Markov property 12 Markov chains Summary. The chapter begins with an introduction to discrete-time Markov chains, and to the use of matrix products and linear algebra in their study. The concepts of recurrence and transience

More information

1 Random walks: an introduction

1 Random walks: an introduction Random Walks: WEEK Random walks: an introduction. Simple random walks on Z.. Definitions Let (ξ n, n ) be i.i.d. (independent and identically distributed) random variables such that P(ξ n = +) = p and

More information

Quantitative Model Checking (QMC) - SS12

Quantitative Model Checking (QMC) - SS12 Quantitative Model Checking (QMC) - SS12 Lecture 06 David Spieler Saarland University, Germany June 4, 2012 1 / 34 Deciding Bisimulations 2 / 34 Partition Refinement Algorithm Notation: A partition P over

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

Continuous time Markov chains

Continuous time Markov chains Continuous time Markov chains Alejandro Ribeiro Dept. of Electrical and Systems Engineering University of Pennsylvania aribeiro@seas.upenn.edu http://www.seas.upenn.edu/users/~aribeiro/ October 16, 2017

More information

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is MARKOV CHAINS What I will talk about in class is pretty close to Durrett Chapter 5 sections 1-5. We stick to the countable state case, except where otherwise mentioned. Lecture 7. We can regard (p(i, j))

More information

1 Exercises for lecture 1

1 Exercises for lecture 1 1 Exercises for lecture 1 Exercise 1 a) Show that if F is symmetric with respect to µ, and E( X )

More information

Stat 134 Fall 2011: Notes on generating functions

Stat 134 Fall 2011: Notes on generating functions Stat 3 Fall 0: Notes on generating functions Michael Lugo October, 0 Definitions Given a random variable X which always takes on a positive integer value, we define the probability generating function

More information

Probabilistic Aspects of Computer Science: Markovian Models

Probabilistic Aspects of Computer Science: Markovian Models Probabilistic Aspects of Computer Science: Markovian Models S. Haddad September 29, 207 Professor at ENS Cachan, haddad@lsv.ens-cachan.fr, http://www.lsv.ens-cachan.fr/ haddad/ Abstract These lecture notes

More information

Solutions to Homework Discrete Stochastic Processes MIT, Spring 2011

Solutions to Homework Discrete Stochastic Processes MIT, Spring 2011 Exercise 6.5: Solutions to Homework 0 6.262 Discrete Stochastic Processes MIT, Spring 20 Consider the Markov process illustrated below. The transitions are labelled by the rate q ij at which those transitions

More information

µ n 1 (v )z n P (v, )

µ n 1 (v )z n P (v, ) Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

Solutions to Homework Discrete Stochastic Processes MIT, Spring 2011

Solutions to Homework Discrete Stochastic Processes MIT, Spring 2011 Exercise 1 Solutions to Homework 6 6.262 Discrete Stochastic Processes MIT, Spring 2011 Let {Y n ; n 1} be a sequence of rv s and assume that lim n E[ Y n ] = 0. Show that {Y n ; n 1} converges to 0 in

More information

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process

More information

THE QUEEN S UNIVERSITY OF BELFAST

THE QUEEN S UNIVERSITY OF BELFAST THE QUEEN S UNIVERSITY OF BELFAST 0SOR20 Level 2 Examination Statistics and Operational Research 20 Probability and Distribution Theory Wednesday 4 August 2002 2.30 pm 5.30 pm Examiners { Professor R M

More information

On the static assignment to parallel servers

On the static assignment to parallel servers On the static assignment to parallel servers Ger Koole Vrije Universiteit Faculty of Mathematics and Computer Science De Boelelaan 1081a, 1081 HV Amsterdam The Netherlands Email: koole@cs.vu.nl, Url: www.cs.vu.nl/

More information

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 2. Poisson Processes Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Outline Introduction to Poisson Processes Definition of arrival process Definition

More information

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a

More information

Intro Refresher Reversibility Open networks Closed networks Multiclass networks Other networks. Queuing Networks. Florence Perronnin

Intro Refresher Reversibility Open networks Closed networks Multiclass networks Other networks. Queuing Networks. Florence Perronnin Queuing Networks Florence Perronnin Polytech Grenoble - UGA March 23, 27 F. Perronnin (UGA) Queuing Networks March 23, 27 / 46 Outline Introduction to Queuing Networks 2 Refresher: M/M/ queue 3 Reversibility

More information

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days?

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days? IEOR 4106: Introduction to Operations Research: Stochastic Models Spring 2005, Professor Whitt, Second Midterm Exam Chapters 5-6 in Ross, Thursday, March 31, 11:00am-1:00pm Open Book: but only the Ross

More information

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M.

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M. Lecture 10 1 Ergodic decomposition of invariant measures Let T : (Ω, F) (Ω, F) be measurable, and let M denote the space of T -invariant probability measures on (Ω, F). Then M is a convex set, although

More information

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking Lecture 7: Simulation of Markov Processes Pasi Lassila Department of Communications and Networking Contents Markov processes theory recap Elementary queuing models for data networks Simulation of Markov

More information

MULTIVARIATE DISCRETE PHASE-TYPE DISTRIBUTIONS

MULTIVARIATE DISCRETE PHASE-TYPE DISTRIBUTIONS MULTIVARIATE DISCRETE PHASE-TYPE DISTRIBUTIONS By MATTHEW GOFF A dissertation submitted in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY WASHINGTON STATE UNIVERSITY Department

More information