SMSTC (2007/08) Probability
Lecture 12: Markov chains in continuous time
Xuerong Mao, Strathclyde University (xuerong@stams.strath.ac.uk)

Contents
12.1 Markov property and the Kolmogorov equations
     Finite state space
     Infinite state space
12.2 Jump chains
12.3 Classification of states
12.4 Stationary distribution
12.5 Ergodic theorem
12.6 Applications
12.7 Exercises

12.1 Markov property and the Kolmogorov equations

Let $S$ be a countable space. A continuous-time stochastic process $\{X(t)\}_{t\ge 0}$ with values in $S$ is a family of random variables $X(t):\Omega\to S$. As in the study of discrete-time Markov chains, we are going to consider ways in which we might specify the probabilistic behaviour (or law) of $\{X(t)\}_{t\ge 0}$. These should enable us to find, at least in principle, any probability connected with the process, e.g. $P(X(t_1)=i_1,\dots,X(t_n)=i_n)$. There are subtleties in this problem not present in the discrete-time case. They arise because, for a countable disjoint union, $P(\bigcup_n A_n)=\sum_n P(A_n)$, whereas for an uncountable union $\bigcup_t A_t$ there is no such rule. To avoid these subtleties as far as possible we shall restrict our attention to processes $X(t):\Omega\to S$ which are right-continuous, namely

$$\lim_{s\downarrow t} X(s,\omega)=X(t,\omega) \quad \text{for all } t\ge 0,\ \omega\in\Omega.$$

In this chapter we are concerned with Markovian stochastic processes with continuous time and countable state space $S$. We define the Markov property of a process $\{X(t)\}_{t\ge 0}$ by

$$P(X(t)=j \mid X(t_1)=i_1,\dots,X(t_n)=i_n,X(s)=i) = P(X(t)=j \mid X(s)=i) \tag{12.1}$$

for any $t_1\le\dots\le t_n\le s\le t$ and $i_1,\dots,i_n,i,j\in S$. Denote $p_{ij}(s,t):=P(X(t)=j\mid X(s)=i)$.

If $p_{ij}(s,t)=p_{ij}(t-s)$, the process has stationary transition probabilities. Unless otherwise stated this property will be assumed henceforth. There is a possibility that a process may reach infinity in finite time. We will assume that if this happens the process stays at infinity for ever (infinity is then called a coffin state). This is called the minimal construction. The following proposition gives some basic properties of the transition probabilities.

Proposition 12.1

$$0\le p_{ij}(t)\le 1. \tag{12.2}$$
$$\sum_{j\in S} p_{ij}(t)\le 1. \tag{12.3}$$
$$p_{ij}(s+t)=\sum_{k\in S} p_{ik}(s)\,p_{kj}(t). \tag{12.4}$$
$$p_{ij}(0)=\delta_{ij}. \tag{12.5}$$

Proof The first and fourth statements are trivial, while the second follows from

$$\sum_{j\in S} p_{ij}(t) = P(X(t)\in S \mid X(0)=i) \le 1.$$

If the inequality is strict, the process is dishonest.

To show the third equation, the Chapman–Kolmogorov equation, we compute

$$p_{ij}(s+t) = P(X(s+t)=j\mid X(0)=i) = \sum_{k\in S} P(X(s)=k\mid X(0)=i)\,P(X(s+t)=j\mid X(s)=k) = \sum_{k\in S} p_{ik}(s)\,p_{kj}(t),$$

noticing that the coffin state is ruled out by our construction.

As in the case of discrete time it is convenient to express things in matrix notation. Let $P(t)=(p_{ij}(t))_{i,j\in S}$. Then (12.4) can be written

$$P(s+t)=P(s)P(t). \tag{12.6}$$

Hence $\{P(t)\}_{t\ge 0}$ is a semigroup. It is stochastic if there is equality in (12.3), and sub-stochastic otherwise. To proceed we need to assume some regularity. We call the process (or the semigroup) standard if the transition probabilities are continuous at $0$, i.e. if

$$\lim_{t\downarrow 0} p_{ij}(t) = p_{ij}(0). \tag{12.7}$$

Lemma 12.1 Let $\{P(t)\}_{t\ge 0}$ be a standard semigroup. Then $p_{ij}(t)$ is a continuous function of $t$ for all $i,j$.

Proof We shall show that for any $i,j$,

$$|p_{ij}(t+h)-p_{ij}(t)| \le 1-p_{ii}(h), \quad h>0.$$

From (12.4) we have that $p_{ij}(t+h)=\sum_{k\in S} p_{ik}(h)\,p_{kj}(t)$, so

$$p_{ij}(t+h)-p_{ij}(t) = (p_{ii}(h)-1)\,p_{ij}(t) + \sum_{k\ne i} p_{ik}(h)\,p_{kj}(t).$$

Noting $p_{kj}(t)\le 1$, we have

$$\sum_{k\ne i} p_{ik}(h)\,p_{kj}(t) \le \sum_{k\ne i} p_{ik}(h) \le 1-p_{ii}(h).$$

Hence

$$p_{ij}(t+h)-p_{ij}(t) \le (1-p_{ii}(h))(1-p_{ij}(t)) \le 1-p_{ii}(h),$$

while also $p_{ij}(t+h)-p_{ij}(t) \ge (p_{ii}(h)-1)\,p_{ij}(t) \ge -(1-p_{ii}(h))$, whence the claim follows.

The following proposition describes the differentiability of the transition probabilities.

Proposition 12.2 For a standard stochastic semigroup $\{P(t)\}_{t\ge 0}$ we have:
(i) $\dot p_{ii}(0) := dp_{ii}(0)/dt$ exists and is non-positive (but not necessarily finite);
(ii) $\dot p_{ij}(0) := dp_{ij}(0)/dt$ exists and is finite for $i\ne j$.

We omit the proof but refer the reader to [2]. Let $Q=(q_{ij})=(\dot p_{ij}(0))$, which is called the generator of $\{P(t)\}_{t\ge 0}$ or of the Markov chain. The following result is left as Exercise 12-1.

Lemma 12.2 If $S$ is finite then $\sum_{j\in S} q_{ij}=0$, while if $S$ is countably infinite we have $\sum_{j\in S} q_{ij}\le 0$.
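Numerically, the generator can be recovered as the derivative of the semigroup at $0$. The following is a minimal sketch (Python with numpy/scipy assumed), using a hypothetical $3$-state generator and the standard fact, for finite $S$, that the Kolmogorov equations of the next subsection are solved by $P(t)=e^{tQ}$:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state generator: off-diagonal entries nonnegative, rows sum to 0.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 4.0, -4.0,  0.0],
              [ 0.0,  1.0, -1.0]])

print(Q.sum(axis=1))          # row sums vanish, as in Lemma 12.2 for finite S

# The generator is the derivative of the semigroup at 0: (P(h) - I)/h -> Q.
for h in [1e-1, 1e-2, 1e-3]:
    err = np.max(np.abs((expm(h * Q) - np.eye(3)) / h - Q))
    print(h, err)             # the error shrinks roughly linearly in h
```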

Finite state space

When the state space is finite, using Proposition 12.2 and Lemma 12.2 we have $q_{ii}=-\sum_{j\ne i} q_{ij}$. We will find it convenient to set $q_i=-q_{ii}$. For any $t\ge 0$ write the Taylor expansions

$$p_{ij}(h)=p_{ij}(t,t+h)=q_{ij}h+o(h)$$

and

$$p_{ii}(h)=p_{ii}(t,t+h)=1-q_ih+o(h).$$

We hence call $q_{ij}$ the intensity of the transition $i\to j$. The following theorem is the key result in the theory of continuous-time Markov chains.

Theorem 12.1 If the state space is finite, then the transition probabilities obey the Kolmogorov forward equation

$$\frac{dP(t)}{dt}=P(t)Q \tag{12.8}$$

and the Kolmogorov backward equation

$$\frac{dP(t)}{dt}=QP(t). \tag{12.9}$$

Proof By (12.4),

$$p_{ij}(t+h)=\sum_{k\in S} p_{ik}(t)\,p_{kj}(h) = p_{ij}(t)\,p_{jj}(h)+\sum_{k\ne j} p_{ik}(t)\,p_{kj}(h),$$

so

$$\frac{p_{ij}(t+h)-p_{ij}(t)}{h} = -p_{ij}(t)\,\frac{1-p_{jj}(h)}{h} + \sum_{k\ne j} p_{ik}(t)\,\frac{p_{kj}(h)}{h}.$$

Letting $h\downarrow 0$ yields

$$\frac{dp_{ij}(t)}{dt} = -p_{ij}(t)\,q_j + \sum_{k\ne j} p_{ik}(t)\,q_{kj} = \sum_{k\in S} p_{ik}(t)\,q_{kj}.$$

This proves the forward equation (12.8). To get the backward equation (12.9) we proceed in a similar fashion, but now writing

$$p_{ij}(t+h)=\sum_{k\in S} p_{ik}(h)\,p_{kj}(t).$$

Infinite state space

Theorem 12.1 is also true for a large class of Markov chains with infinite state space. The necessary assumptions have to do with assuring smoothness of the transition probabilities $p_{ij}(t)$. The semigroup $\{P(t)\}_{t\ge 0}$ is said to be uniform if $\lim_{t\downarrow 0} p_{ii}(t)=1$ uniformly in $i\in S$. The following lemma gives an easy criterion for uniformity (for a proof please see e.g. [2]).

Lemma 12.3
(i) $\{P(t)\}_{t\ge 0}$ is uniform if $\sup_{i\in S} q_i<\infty$.
(ii) If $\{P(t)\}_{t\ge 0}$ is uniform, then $\sum_{j\in S} q_{ij}=0$ for all $i\in S$.

Theorem 12.2 If $\{P(t)\}_{t\ge 0}$ is uniform then it obeys the Kolmogorov forward equation (12.8) as well as the backward equation (12.9).

From now on we only consider Markov chains whose generator $Q$ obeys $\sup_{i\in S} q_i<\infty$; hence the semigroup $\{P(t)\}_{t\ge 0}$ is uniform. By Proposition 12.2 and Lemma 12.3, we recall that the generator $Q=(q_{ij})_{i,j\in S}$ of the Markov chain has the following properties: $0\le q_i=-q_{ii}<\infty$ for all $i$; $q_{ij}\ge 0$ for all $i\ne j$; $\sum_{j\in S} q_{ij}=0$ for all $i$. Such a matrix is also known as a Q-matrix. In addition to the Q-matrix, we let the initial distribution be $\lambda=(\lambda_j)_{j\in S}$, thought of as a row-vector. From now on we will say that $\{X(t)\}_{t\ge 0}$ is Markov$(\lambda,Q)$. To close this section, let us state the strong Markov property without proof (the proof can be found in e.g. [4]). As defined in the discrete-time case, a random variable $T$ with values in $[0,\infty]$ is called a stopping time of $\{X(t)\}_{t\ge 0}$ if for each $t\in[0,\infty)$ the event $\{T\le t\}$ depends only on $\{X(s): s\le t\}$.

Theorem 12.3 Let $\{X(t)\}_{t\ge 0}$ be Markov$(\lambda,Q)$ and let $T$ be a stopping time. Then, conditional on $T<\infty$ and $X(T)=i$, $\{X(T+t)\}_{t\ge 0}$ is Markov$(\delta_i,Q)$ and independent of $\{X(s): s\le T\}$.
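For finite $S$ the two Kolmogorov equations are solved by the matrix exponential $P(t)=e^{tQ}$. Here is a short numerical check of (12.6), (12.8) and (12.9), again with a hypothetical generator (Python with numpy/scipy assumed):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  3.0, -5.0]])

def P(t):
    """Transition semigroup P(t) = exp(tQ) of the finite-state chain."""
    return expm(t * Q)

t, h = 0.7, 1e-6
dP = (P(t + h) - P(t)) / h                           # numerical derivative at t

print(np.allclose(dP, P(t) @ Q, atol=1e-4))          # forward equation  (12.8)
print(np.allclose(dP, Q @ P(t), atol=1e-4))          # backward equation (12.9)
print(np.allclose(P(0.4 + 0.3), P(0.4) @ P(0.3)))    # semigroup property (12.6)
print(P(t).sum(axis=1))                              # rows sum to 1: honest chain
```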

12.2 Jump chains

As the state space $S$ is countable, the right-continuity of the Markov chain $\{X(t)\}_{t\ge 0}$ means that for all $\omega\in\Omega$ and $t\ge 0$ there exists $\epsilon>0$ such that $X(s,\omega)=X(t,\omega)$ for $t\le s\le t+\epsilon$. Thus, every path of the process must remain constant for a while in each new state. We may have three possibilities for the sorts of path:
(i) the path makes finitely many jumps and then becomes stuck in some state forever;
(ii) the path makes infinitely many jumps but only finitely many in any interval $[0,t]$;
(iii) the path makes infinitely many jumps in a finite interval $[0,\zeta)$.
In case (iii), $\zeta$ is called the explosion time and we set $X(t)=\infty$ for $t\ge\zeta$, as assumed before. Define $J_0=0$ and, for $n=0,1,\dots$,

$$J_{n+1}=\inf\{t\ge J_n: X(t)\ne X(J_n)\},$$

and, for $n=1,2,\dots$,

$$S_n=\begin{cases} J_n-J_{n-1} & \text{if } J_{n-1}<\infty,\\ \infty & \text{otherwise.}\end{cases}$$

We call $J_0,J_1,\dots$ the jump times and $S_1,S_2,\dots$ the holding times. Note that right-continuity forces $S_n>0$ for all $n$. If $J_{n+1}=\infty$ for some $n$, we define $X(\infty)=X(J_n)$, the final value; otherwise $X(\infty)$ is undefined. The (first) explosion time $\zeta$ is defined by

$$\zeta=\sup_n J_n=\sum_{n=1}^\infty S_n.$$

The discrete-time process $\{Y_n\}_{n\ge 0}$ given by $Y_n=X(J_n)$ is called the jump chain. This is simply the sequence of values taken by $X(t)$ up to explosion. Associated to the matrix $Q$ we define a jump matrix $\Pi=(\pi_{ij})_{i,j\in S}$ by

$$\pi_{ij}=\begin{cases} q_{ij}/q_i & \text{if } j\ne i \text{ and } q_i\ne 0,\\ 0 & \text{if } j\ne i \text{ and } q_i=0, \end{cases} \qquad \pi_{ii}=\begin{cases} 0 & \text{if } q_i\ne 0,\\ 1 & \text{if } q_i=0. \end{cases}$$

Note that $\Pi$ is a stochastic matrix. In the previous section the Markov chain $\{X(t)\}_{t\ge 0}$ was described in terms of the semigroup $\{P(t)\}_{t\ge 0}$. Another, equivalent, way to describe how the process evolves is in terms of the jump chain and holding times, as in the following theorem.

Theorem 12.4 Let $\{X(t)\}_{t\ge 0}$ be a right-continuous Markov chain with generator $Q$, a Q-matrix with jump matrix $\Pi$ and semigroup $\{P(t)\}_{t\ge 0}$. Then the following statements are equivalent:
(i) Conditional on $X(0)=i_0$, the jump chain $\{Y_n\}_{n\ge 0}$ is discrete-time Markov$(\delta_{i_0},\Pi)$ and, for each $n\ge 1$, conditional on $Y_0=i_0,\dots,Y_{n-1}=i_{n-1}$, the holding times $S_1,\dots,S_n$ are independent exponential random variables of parameters $q_{i_0},\dots,q_{i_{n-1}}$ respectively.
(ii) For all $n=0,1,2,\dots$, all times $0\le t_0\le\dots\le t_n\le t_{n+1}$ and all states $i_0,i_1,\dots,i_{n+1}$,

$$P(X(t_{n+1})=i_{n+1}\mid X(t_0)=i_0,\dots,X(t_n)=i_n) = p_{i_n i_{n+1}}(t_{n+1}-t_n).$$

Again we refer the reader to [4] for the proof. This theorem shows that the process evolves in the following way: given that the chain $\{X(t)\}_{t\ge 0}$ starts at $i_0$ at time $t=0$, it waits there for an exponential time $S_1$ of parameter $q_{i_0}$ and then jumps to a new state, choosing state $i_1$ with probability $\pi_{i_0,i_1}$. It then waits for an exponential time $S_2$ of parameter $q_{i_1}$ and then jumps to a new state, choosing state $i_2$ with probability $\pi_{i_1,i_2}$. It continues to evolve in the same fashion.
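Theorem 12.4 doubles as a simulation recipe: hold in state $i$ for an $\mathrm{Exp}(q_i)$ time, then jump according to the row $\pi_{i\cdot}$. A minimal sketch under the same hypothetical conventions as before (states $0,\dots,n-1$, generator stored as a numpy array):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctmc(Q, i0, t_max):
    """Simulate a path of the chain with generator Q started at i0,
    via its jump chain and exponential holding times."""
    q = -np.diag(Q)                             # rates q_i = -q_ii
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        if q[i] == 0:                           # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / q[i])        # holding time ~ Exp(q_i)
        if t >= t_max:
            break
        pi_row = Q[i].copy(); pi_row[i] = 0.0   # jump matrix row: q_ij / q_i
        i = rng.choice(len(q), p=pi_row / q[i])
        times.append(t); states.append(i)
    return np.array(times), np.array(states)

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  3.0, -5.0]])
times, states = simulate_ctmc(Q, i0=0, t_max=10.0)
print(list(zip(times.round(3), states)))
```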

12.3 Classification of states

As in the study of discrete-time Markov chains, it is useful to identify the class structure of a continuous-time Markov chain. It was observed in the previous section that the continuous-time Markov chain can be described in terms of its jump chain. This indicates clearly that the class structure of the continuous-time Markov chain is simply the discrete-time class structure of the corresponding jump chain. That is, the notions of closed class, irreducible closed class and absorbing state are inherited from the jump chain. Similarly, we say that $i$ reaches $j$ if

$$P_i(X(t)=j \text{ for some } t\ge 0)>0,$$

where, as before, we write $P_i(A)$ for the conditional probability $P(A\mid X(0)=i)$. We say that $i$ and $j$ communicate if they reach each other.

Theorem 12.5 For distinct states $i$ and $j$ the following statements are equivalent:
(i) $i$ reaches $j$;
(ii) $i$ reaches $j$ for the jump chain;
(iii) $q_{i_0 i_1} q_{i_1 i_2} \cdots q_{i_{n-1} i_n}>0$ for some states $i_0,i_1,\dots,i_n$ with $i_0=i$ and $i_n=j$;
(iv) $p_{ij}(t)>0$ for all $t>0$;
(v) $p_{ij}(t)>0$ for some $t>0$.

Proof Implications (iv) $\Rightarrow$ (v) $\Rightarrow$ (i) $\Rightarrow$ (ii) are clear. If (ii) holds, then by Exercise 1-8 there are some states $i_0,i_1,\dots,i_n$ with $i_0=i$ and $i_n=j$ such that $\pi_{i_0 i_1}\pi_{i_1 i_2}\cdots\pi_{i_{n-1} i_n}>0$, which implies (iii).

We now note that if $q_{uv}>0$, then for all $t>0$,

$$p_{uv}(t) \ge P_u(J_1\le t,\,Y_1=v,\,S_2>t) = (1-e^{-q_u t})\,\pi_{uv}\,e^{-q_v t} > 0.$$

Thus, if (iii) holds, then for all $t>0$,

$$p_{ij}(t) \ge p_{i_0 i_1}(t/n)\,p_{i_1 i_2}(t/n)\cdots p_{i_{n-1} i_n}(t/n) > 0,$$

and (iv) holds.

We observe that condition (iv) of Theorem 12.5 shows that the situation in continuous time is simpler than in discrete time, where it may be possible to reach a state, but only after a certain length of time, and then only periodically. Just as in the discrete-time theory, we say that a state $i$ is recurrent if

$$P_i(\{t\ge 0: X(t)=i\} \text{ is unbounded})=1,$$

while we say $i$ is transient if

$$P_i(\{t\ge 0: X(t)=i\} \text{ is unbounded})=0.$$

The following theorem shows that recurrence and transience are again determined by the jump chain.

Theorem 12.6 We have:
(i) If $i$ is recurrent for the jump chain $\{Y_n\}_{n\ge 0}$, then $i$ is recurrent for $\{X(t)\}_{t\ge 0}$.
(ii) If $i$ is transient for the jump chain $\{Y_n\}_{n\ge 0}$, then $i$ is transient for $\{X(t)\}_{t\ge 0}$.
(iii) Every state is either recurrent or transient.

(iv) Let $C$ be an irreducible closed class. Then either all states in $C$ are transient or all are recurrent.

Proof (i) Suppose $i$ is recurrent for $\{Y_n\}_{n\ge 0}$. If $X(0)=i$, then $J_n\to\infty$ and $X(J_n)=Y_n=i$ infinitely often with probability 1. We must therefore have $P_i(\{t\ge 0: X(t)=i\} \text{ is unbounded})=1$.
(ii) Suppose $i$ is transient for $\{Y_n\}_{n\ge 0}$. If $X(0)=i$, then $N=\sup\{n: Y_n=i\}<\infty$, so $\{t\ge 0: X(t)=i\}$ is bounded by $J_{N+1}$, which is finite with probability 1, because $\{Y_n: n\le N\}$ cannot include an absorbing state.
(iii) Apply Theorem 1.3 to the jump chain.
(iv) Apply Theorem 1.4 to the jump chain.

The next result gives conditions for recurrence and transience. We denote by $T_i$ the first passage time of $\{X(t)\}_{t\ge 0}$ to state $i$, defined by

$$T_i=\inf\{t\ge J_1: X(t)=i\}.$$

Theorem 12.7 We have:
(i) If $q_i=0$ or $P_i(T_i<\infty)=1$, then $i$ is recurrent and $\int_0^\infty p_{ii}(t)\,dt=\infty$.
(ii) If $q_i>0$ and $P_i(T_i<\infty)<1$, then $i$ is transient and $\int_0^\infty p_{ii}(t)\,dt<\infty$.

Proof If $q_i=0$ then $\{X(t)\}_{t\ge 0}$ cannot leave $i$, so $i$ is recurrent and $p_{ii}(t)=1$ for all $t$, which yields $\int_0^\infty p_{ii}(t)\,dt=\infty$. Suppose then $q_i>0$. Let $N_i$ denote the first passage time of $\{Y_n\}_{n\ge 0}$ to state $i$. Then $P_i(T_i<\infty)=P_i(N_i<\infty)$. By Theorem 12.6 and the corresponding result for discrete-time Markov chains, we see that $i$ is recurrent (resp. transient) if and only if $P_i(T_i<\infty)=1$ (resp. $P_i(T_i<\infty)<1$). Write $\pi^{(n)}_{ij}$ for the $(i,j)$ entry in $\Pi^n$. Compute

$$\int_0^\infty p_{ii}(t)\,dt = \int_0^\infty E_i(1_{\{X(t)=i\}})\,dt = E_i\int_0^\infty 1_{\{X(t)=i\}}\,dt = E_i\sum_{n=0}^\infty S_{n+1}1_{\{Y_n=i\}} = \sum_{n=0}^\infty E_i(S_{n+1}\mid Y_n=i)\,P_i(Y_n=i) = \frac{1}{q_i}\sum_{n=0}^\infty \pi^{(n)}_{ii}.$$

The assertions now follow from Theorems 12.6 and 1.3.

The following result shows that recurrence and transience are determined by any discrete-time sampling of $\{X(t)\}_{t\ge 0}$.

Theorem 12.8 Let $\Delta>0$ be any given step-size and set $Z_n=X(n\Delta)$.
(i) If $i$ is recurrent for $\{X(t)\}_{t\ge 0}$, then it is recurrent for $\{Z_n\}_{n\ge 0}$.
(ii) If $i$ is transient for $\{X(t)\}_{t\ge 0}$, then it is transient for $\{Z_n\}_{n\ge 0}$.

Proof Assertion (ii) is obvious, since $\{n: Z_n=i\}$ can be unbounded only if $\{t\ge 0: X(t)=i\}$ is. To show (i), we estimate, for $n\Delta\le t<(n+1)\Delta$,

$$p_{ii}((n+1)\Delta) \ge p_{ii}(t)\,P(X(t+s)=i \text{ for } s\in[0,\Delta]\mid X(t)=i) = p_{ii}(t)\,P_i(X(s)=i \text{ for } s\in[0,\Delta]) = p_{ii}(t)\,P_i(S_1>\Delta) = p_{ii}(t)\,e^{-q_i\Delta}.$$

Then

$$\int_0^\infty p_{ii}(t)\,dt = \sum_{n=0}^\infty \int_{n\Delta}^{(n+1)\Delta} p_{ii}(t)\,dt \le \Delta e^{q_i\Delta}\sum_{n=0}^\infty p_{ii}((n+1)\Delta),$$

and the result follows from Theorems 12.7 and 1.3: if $i$ is recurrent for $\{X(t)\}_{t\ge 0}$ then $\int_0^\infty p_{ii}(t)\,dt=\infty$ by Theorem 12.7, whence $\sum_n p_{ii}(n\Delta)=\infty$ and $i$ is recurrent for $\{Z_n\}_{n\ge 0}$ by Theorem 1.3.
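Since the class structure is that of the jump chain, it can be read off the directed graph with an edge $i\to j$ whenever $q_{ij}>0$ (condition (iii) of Theorem 12.5). A rough sketch of this bookkeeping for a hypothetical finite generator (Python/numpy assumed):

```python
import numpy as np

def reaches(Q):
    """Boolean matrix R with R[i, j] = True iff i reaches j, computed by
    transitive closure (Warshall) of the edge relation q_ij > 0."""
    n = Q.shape[0]
    R = (Q > 0) | np.eye(n, dtype=bool)     # every state trivially reaches itself
    for k in range(n):
        R = R | (R[:, [k]] & R[[k], :])     # allow paths through state k
    return R

def communicating_classes(Q):
    R = reaches(Q)
    comm = R & R.T                          # i and j communicate
    classes, seen = [], set()
    for i in range(Q.shape[0]):
        if i not in seen:
            cls = set(np.flatnonzero(comm[i]))
            classes.append(cls); seen |= cls
    return classes

# State 2 is absorbing here, so {2} is a closed class while {0, 1} is not closed.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  0.0,  0.0]])
print(communicating_classes(Q))   # [{0, 1}, {2}]
```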

12.4 Stationary distribution

In this section and the next one, we let $\{X(t)\}_{t\ge 0}$ be a Markov chain with state space $S$, generator $Q$ and semigroup $\{P(t)\}_{t\ge 0}$. We always assume that $Q$ is a Q-matrix.

Definition 12.1 A stationary distribution of the Markov chain is a probability distribution $\lambda=(\lambda_i)_{i\in S}$ on $S$, thought of as a row-vector, such that $\lambda Q=0$. It can also be called an invariant distribution.

To see why $\lambda$ is also called an invariant distribution, we observe that if the state space is finite, then the backward equation shows

$$\frac{d}{dt}\,\lambda P(t) = \lambda\,\frac{d}{dt}P(t) = \lambda QP(t),$$

so $\lambda Q=0$ implies $\lambda P(t)=\lambda P(0)=\lambda$ for all $t\ge 0$. It is more difficult to show this if the state space is infinite, as the interchange of differentiation with the summation involved in multiplication by $\lambda$ is not justified. In this case an entirely different proof is needed, which is not presented here.

Theorem 12.9 Let $Q$ be a Q-matrix with jump matrix $\Pi$ and let $\lambda$ be a probability distribution on $S$. The following statements are equivalent:
(i) $\lambda$ is stationary;
(ii) $\mu\Pi=\mu$, where $\mu=(\mu_i)_{i\in S}$ with $\mu_i=\lambda_i q_i$.

Proof We have $q_i(\pi_{ij}-\delta_{ij})=q_{ij}$ for all $i,j\in S$, so

$$(\mu(\Pi-I))_j = \sum_{i\in S}\mu_i(\pi_{ij}-\delta_{ij}) = \sum_{i\in S}\lambda_i q_{ij} = (\lambda Q)_j.$$
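Theorem 12.9 can be checked numerically: solve $\mu\Pi=\mu$ for the jump chain, rescale by $\mu_i/q_i$, and confirm that the result annihilates $Q$. A sketch with a hypothetical $3$-state generator (all $q_i>0$; Python/numpy assumed):

```python
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  3.0, -5.0]])
q = -np.diag(Q)

# Jump matrix: pi_ij = q_ij / q_i off the diagonal, zero diagonal (all q_i > 0).
Pi = Q / q[:, None]
np.fill_diagonal(Pi, 0.0)

# Stationary row vector of Pi: left eigenvector for the eigenvalue 1.
w, V = np.linalg.eig(Pi.T)
mu = np.real(V[:, np.argmin(np.abs(w - 1))])
mu /= mu.sum()

lam = mu / q                        # Theorem 12.9: lambda_i proportional to mu_i/q_i
lam /= lam.sum()
print(lam, np.abs(lam @ Q).max())   # lambda Q = 0 up to rounding error
```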

Recall that a state $i$ is recurrent if $q_i=0$ or $P_i(T_i<\infty)=1$. If $q_i=0$, or if the expected return time $m_i=E_i(T_i)$ is finite, we then say $i$ is positive recurrent. Otherwise a recurrent state $i$ is called null recurrent. It can be shown that $i$ is positive recurrent if and only if it is positive recurrent for the jump chain.

Theorem 12.10 If the Markov chain $\{X(t)\}_{t\ge 0}$ is irreducible and positive recurrent, then it has a unique stationary distribution.

Proof We assume the state space $S$ consists of at least two states; otherwise the theorem is trivial. Then irreducibility forces $q_i>0$ for all $i$. By Theorems 12.5 and 12.6, the jump chain $\{Y_n\}_{n\ge 0}$ is irreducible and positive recurrent. By Theorem 1.6, the jump chain has a unique stationary distribution $\mu=(\mu_i)_{i\in S}$, which obeys $\mu\Pi=\mu$. By Theorem 12.9, we can take $\lambda_i=\mu_i/q_i$, normalised so that $\sum_i\lambda_i=1$, to obtain the stationary distribution for the Markov chain. The uniqueness follows from the uniqueness of the stationary distribution of the jump chain and Theorem 12.9.

We are now concerned with the limiting behaviour of $p_{ij}(t)$ as $t\to\infty$ and its relation to the stationary distribution. The situation is analogous to the case of discrete time and is in fact simpler, as there is no longer any possibility of periodicity. Let us prepare a lemma.

Lemma 12.4 Let $Q$ be a Q-matrix with semigroup $\{P(t)\}_{t\ge 0}$. Then for all $t,h\ge 0$,

$$|p_{ij}(t+h)-p_{ij}(t)| \le 1-e^{-q_i h}.$$

Proof By Lemma 12.1,

$$|p_{ij}(t+h)-p_{ij}(t)| \le 1-p_{ii}(h) \le P_i(J_1\le h) = 1-e^{-q_i h}.$$

Theorem 12.11 Assume that the Markov chain $\{X(t)\}_{t\ge 0}$ is irreducible and positive recurrent, and let its unique stationary distribution be $\lambda=(\lambda_j)_{j\in S}$. Then for all states $i,j$ we have

$$\lim_{t\to\infty} p_{ij}(t)=\lambda_j.$$

Proof Let $\{X(t)\}_{t\ge 0}$ be Markov$(\delta_i,Q)$. Fix $h>0$ and consider the $h$-skeleton $Z_n=X(nh)$. By the Markov property,

$$P(Z_{n+1}=i_{n+1}\mid Z_0=i_0,\dots,Z_n=i_n) = p_{i_n i_{n+1}}(h),$$

so $\{Z_n\}_{n\ge 0}$ is discrete-time Markov$(\delta_i,P(h))$. By Theorem 12.5, irreducibility implies $p_{ij}(h)>0$ for all $i,j$, so $\{Z_n\}_{n\ge 0}$ is irreducible and aperiodic. By Exercise 12-3, $\lambda$ is a stationary distribution for $\{Z_n\}_{n\ge 0}$. So, by Theorem 1.7,

$$\lim_{n\to\infty} p_{ij}(nh)=\lambda_j.$$

Now, fix a state $i$. For any $\epsilon>0$ we can find $h>0$ so that

$$1-e^{-q_i s}\le\epsilon/2 \quad\text{for } 0\le s\le h,$$

and then find $N$ such that

$$|p_{ij}(nh)-\lambda_j|\le\epsilon/2 \quad\text{for } n\ge N.$$

Hence for $t\ge Nh$ we have $nh\le t<(n+1)h$ for some $n\ge N$, and

$$|p_{ij}(t)-\lambda_j| \le |p_{ij}(t)-p_{ij}(nh)| + |p_{ij}(nh)-\lambda_j| \le \epsilon$$

by Lemma 12.4. This completes the proof.

To close this section, let us introduce the concept of detailed balance for continuous-time chains. A Q-matrix $Q$ and a probability distribution $\lambda$ are said to be in detailed balance if

$$\lambda_i q_{ij}=\lambda_j q_{ji} \quad\text{for all } i,j.$$

It is easy to see that if $Q$ and $\lambda$ are in detailed balance, then $\lambda$ is a stationary distribution. We leave the proof as an exercise.
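For birth-death generators detailed balance holds, and the stationary distribution can be built recursively from $\lambda_{j+1}=\lambda_j b_j/d_{j+1}$; this anticipates Example 12.2 below. A sketch on a hypothetical $4$-state truncation (Python/numpy assumed):

```python
import numpy as np

# Birth rates b_j (j -> j+1) and death rates d_{j+1} (j+1 -> j) on {0, 1, 2, 3}.
b = [1.0, 2.0, 1.5]
d = [2.0, 1.0, 3.0]
n = 4
Q = np.zeros((n, n))
for j in range(n - 1):
    Q[j, j + 1] = b[j]
    Q[j + 1, j] = d[j]
np.fill_diagonal(Q, -Q.sum(axis=1))

# Detailed balance lambda_j b_j = lambda_{j+1} d_{j+1} gives the recursion below.
lam = np.ones(n)
for j in range(n - 1):
    lam[j + 1] = lam[j] * b[j] / d[j]
lam /= lam.sum()

print(np.abs(lam @ Q).max())                        # lambda is stationary
balance = lam[:, None] * Q - (lam[:, None] * Q).T   # lambda_i q_ij - lambda_j q_ji
print(np.abs(balance).max())                        # detailed balance holds
```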

12.5 Ergodic theorem

The following ergodic theorem gives the long-run proportion of time spent by a continuous-time chain in each state, as in the discrete-time case.

Theorem 12.12 Let $\{X(t)\}_{t\ge 0}$ be Markov$(\rho,Q)$, where $\rho$ is any probability distribution on $S$. If the chain is irreducible, then

$$\lim_{t\to\infty}\frac{1}{t}\int_0^t 1_{\{X(s)=i\}}\,ds = \frac{1}{m_i q_i} \quad\text{a.s.,} \tag{12.10}$$

where $m_i=E_i(T_i)$ is the expected return time to state $i$. Moreover, if the chain is positive recurrent, then for any bounded function $f:S\to\mathbb{R}$ we have

$$\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = \bar f := \sum_{i\in S}\lambda_i f(i) \quad\text{a.s.,} \tag{12.11}$$

where $(\lambda_i)_{i\in S}$ is the unique stationary distribution, namely $\lambda_i=1/(m_i q_i)$.

Proof If the chain is transient then the total time spent in any state $i$ is finite, so

$$\frac{1}{t}\int_0^t 1_{\{X(s)=i\}}\,ds \le \frac{1}{t}\int_0^\infty 1_{\{X(s)=i\}}\,ds \to 0 = \frac{1}{m_i q_i}$$

as $m_i=\infty$. Suppose then that the chain is recurrent. Fix any state $i$. Then $X(t)$ hits $i$ with probability 1, and the long-run proportion of time in $i$ equals the long-run proportion of time in $i$ after first hitting $i$. So, by the strong Markov property (of the jump chain), it is sufficient to consider the

case when $\rho=\delta_i$. Set $T_i^0=0$ and define, for $n=0,1,2,\dots$,

$$M_i^{n+1} = \inf\{t>T_i^n: X(t)\ne i\}-T_i^n, \qquad T_i^{n+1} = \inf\{t>T_i^n+M_i^{n+1}: X(t)=i\}, \qquad L_i^{n+1} = T_i^{n+1}-T_i^n.$$

By the strong Markov property (of the jump chain) at the stopping times $T_i^n$ for $n\ge 0$, we see that $L_i^1,L_i^2,\dots$ are independent and identically distributed with mean $m_i$, and that $M_i^1,M_i^2,\dots$ are independent and identically distributed with mean $1/q_i$. Hence, by the strong law of large numbers,

$$\lim_{n\to\infty}\frac{L_i^1+\dots+L_i^n}{n}=m_i \quad\text{a.s.}$$

and

$$\lim_{n\to\infty}\frac{M_i^1+\dots+M_i^n}{n}=\frac{1}{q_i} \quad\text{a.s.}$$

Hence

$$\lim_{n\to\infty}\frac{M_i^1+\dots+M_i^n}{L_i^1+\dots+L_i^n}=\frac{1}{m_i q_i} \quad\text{a.s.}$$

Moreover, we note that $T_i^n=L_i^1+\dots+L_i^n$ and $T_i^{n+1}/T_i^n\to 1$ as $n\to\infty$ with probability 1. Now, for $T_i^n\le t<T_i^{n+1}$, we have

$$\frac{M_i^1+\dots+M_i^n}{L_i^1+\dots+L_i^{n+1}} \le \frac{1}{t}\int_0^t 1_{\{X(s)=i\}}\,ds \le \frac{M_i^1+\dots+M_i^{n+1}}{L_i^1+\dots+L_i^n}.$$

Letting $t\to\infty$ we obtain the assertion (12.10). In the positive recurrent case, by writing

$$\frac{1}{t}\int_0^t f(X(s))\,ds - \bar f = \sum_{i\in S}\left(\frac{1}{t}\int_0^t 1_{\{X(s)=i\}}\,ds - \lambda_i\right) f(i), \tag{12.12}$$

we can show the other assertion (12.11) in the same way as Theorem 1.8 was proved. We leave the details as an exercise.
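The ergodic theorem lends itself to a simulation check: run one long path and compare the occupation fractions with the stationary distribution. A self-contained sketch for the hypothetical generator used earlier (the fractions should be close to the $\lambda$ found in the Section 12.4 sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  3.0, -5.0]])
q = -np.diag(Q)

t_max, t, i = 50_000.0, 0.0, 0
occupation = np.zeros(3)
while t < t_max:
    hold = rng.exponential(1.0 / q[i])     # holding time ~ Exp(q_i)
    occupation[i] += min(hold, t_max - t)  # clip the final visit at t_max
    t += hold
    row = Q[i].copy(); row[i] = 0.0
    i = rng.choice(3, p=row / q[i])        # jump according to the jump matrix

print(occupation / t_max)   # long-run fractions, close to the stationary law
```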

12.6 Applications

Example 12.1 (General birth process) A general birth process is a Markov chain on the state space $S=\{0,1,2,\dots\}$ with the generator

$$Q=\begin{pmatrix} -q_0 & q_0 & 0 & 0 & \cdots\\ 0 & -q_1 & q_1 & 0 & \cdots\\ 0 & 0 & -q_2 & q_2 & \cdots\\ \vdots & & & \ddots & \ddots \end{pmatrix}.$$

If $\sup_i q_i<\infty$, then the semigroup $\{P(t)\}_{t\ge 0}$ is uniform. If $\sum_i q_i^{-1}<\infty$, then $q_i\to\infty$, so $\sup_i q_i=\infty$ and $\{P(t)\}_{t\ge 0}$ is not uniform. Let us now consider a birth process which has constant intensity of births, namely $q_i=q$ for all $i\in S$. Clearly, $\{P(t)\}_{t\ge 0}$ is uniform (provided that $q<\infty$). The forward equation yields

$$\frac{dp_{jk}(t)}{dt} = -q\,p_{jk}(t)+q\,p_{j,k-1}(t).$$

In particular, if $j=0$, we have

$$\frac{dp_{0k}(t)}{dt} = -q\,p_{0k}(t)+q\,p_{0,k-1}(t). \tag{12.13}$$

The initial conditions are assumed to be $p_{00}(0)=1$ and $p_{0i}(0)=0$ for $i\ge 1$, so that the process starts in state $0$. In order to solve this differential equation, we will attempt to convert it into a partial differential equation for the probability generating function

$$G(s;t)=E\,s^{X(t)}=\sum_i p_{0i}(t)\,s^i.$$

Multiplying both sides of (12.13) by $s^k$ and summing over $k$ we get

$$\frac{\partial G(s;t)}{\partial t} = -q\,G(s;t)+qs\,G(s;t).$$

For a fixed value of $s$ we see that

$$\frac{\partial G(s;t)}{\partial t} = -q(1-s)\,G(s;t),$$

so that

$$G(s;t)=A(s)\,e^{-q(1-s)t}.$$

From the initial condition $G(s;0)=1$ we must have $A(s)=1$, so $X(t)$ follows a Poisson distribution with mean $qt$. In general,

$$p_{ij}(t)=p_{ij}(s,s+t)=e^{-qt}\,\frac{(qt)^{j-i}}{(j-i)!}, \quad j\ge i.$$

We observe that the process we have just derived is the Poisson process discussed in the previous chapter.

Example 12.2 (Birth and death process) A birth and death process is a Markov chain on the state space $S=\{0,1,2,\dots\}$ with the generator

$$Q=\begin{pmatrix} -b_0 & b_0 & 0 & 0 & \cdots\\ d_1 & -(b_1+d_1) & b_1 & 0 & \cdots\\ 0 & d_2 & -(b_2+d_2) & b_2 & \cdots\\ \vdots & & & \ddots & \ddots \end{pmatrix}.$$

This process was introduced by McKendrick in 1925, used by Feller in 1939 to describe biological population growth, and studied in detail by Kendall later. When it is used to describe a biological population, the stationary distribution is of particular interest. To determine the stationary distribution $\lambda=(\lambda_j)_{j\in S}$, the equation $\lambda Q=0$ yields

$$-b_0\lambda_0+d_1\lambda_1=0, \qquad b_{j-1}\lambda_{j-1}-(b_j+d_j)\lambda_j+d_{j+1}\lambda_{j+1}=0, \quad j\ge 1,$$

with solution

$$\lambda_j=\frac{b_{j-1}}{d_j}\,\lambda_{j-1}=\frac{b_{j-1}\cdots b_0}{d_j\cdots d_1}\,\lambda_0, \quad j\ge 1.$$

But $\sum_j\lambda_j=1$, whence

$$\left(1+\sum_{j=1}^\infty\frac{b_{j-1}\cdots b_0}{d_j\cdots d_1}\right)\lambda_0=1.$$

Naturally we require

$$\sum_{j=1}^\infty\frac{b_{j-1}\cdots b_0}{d_j\cdots d_1}<\infty$$

to give

$$\lambda_0=\left(1+\sum_{j=1}^\infty\frac{b_{j-1}\cdots b_0}{d_j\cdots d_1}\right)^{-1}$$

and

$$\lambda_j=\frac{b_{j-1}\cdots b_0}{d_j\cdots d_1}\left(1+\sum_{k=1}^\infty\frac{b_{k-1}\cdots b_0}{d_k\cdots d_1}\right)^{-1}, \quad j\ge 1.$$

As a special case, let $b_j=b$ and $d_j=d$, and assume $r:=b/d<1$. Then

$$\lambda_0=1-r \quad\text{and}\quad \lambda_j=(1-r)r^j, \quad j\ge 1,$$

which is a geometric distribution.
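The geometric form is easy to confirm against a truncated generator (truncating at a finite level $N$ is an approximation to the infinite chain, but the geometric candidate annihilates the truncated matrix as well; Python/numpy assumed):

```python
import numpy as np

b, d, N = 1.0, 2.0, 60        # constant birth rate b, death rate d, truncation N
r = b / d                     # r < 1: positive recurrent regime

# Truncated birth-death generator on {0, 1, ..., N}.
Q = np.zeros((N + 1, N + 1))
for j in range(N):
    Q[j, j + 1] = b           # birth  j -> j+1
    Q[j + 1, j] = d           # death  j+1 -> j
np.fill_diagonal(Q, -Q.sum(axis=1))

lam = (1 - r) * r ** np.arange(N + 1)   # geometric candidate lambda_j = (1-r) r^j
lam /= lam.sum()
print(np.abs(lam @ Q).max())            # ~0: the geometric law is stationary
```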

12.7 Exercises

12-1 Prove Lemma 12.2.

12-2 Show that the general birth process defined in Example 12.1 is dishonest if $\sum_{i\ge 1} q_i^{-1}<\infty$.

12-3 Assume that the Markov chain $\{X(t)\}_{t\ge 0}$ is irreducible and positive recurrent, and its unique stationary distribution is $\lambda=(\lambda_j)_{j\in S}$. Show $\lambda P(t)=\lambda$ for all $t\ge 0$.

12-4 If a Q-matrix $Q$ and a probability distribution $\lambda$ are in detailed balance, show that $\lambda$ is a stationary distribution.

12-5 Show assertion (12.11) from (12.12).

References

[1] K.L. Chung, Markov Chains with Stationary Transition Probabilities, 2nd edition, Springer.
[2] D. Freedman, Markov Chains, Springer.
[3] P. Guttorp, Stochastic Modelling of Scientific Data, Chapman and Hall.
[4] J.R. Norris, Markov Chains, C.U.P.
[5] D. Revuz, Markov Chains, North-Holland.
[6] D. Williams, Probability with Martingales, C.U.P., 1991.
