Stochastic Processes (Week 6)

October 30th, 2014

1 Discrete-time Finite Markov Chains
2 Countable Markov Chains
3 Continuous-Time Markov Chains
3.1 Poisson Process
3.2 Finite State Space
3.2.1 Kolmogorov's backward and forward equations; embedded Markov chain
3.2.2 Large time behavior

Definition 3.1. A continuous-time Markov chain is irreducible if all states communicate, i.e., for each $x, y \in S$ there exists a sequence $z_1, z_2, \ldots, z_j \in S$ with $\alpha(x, z_1), \alpha(z_1, z_2), \ldots, \alpha(z_{j-1}, z_j), \alpha(z_j, y)$ all strictly positive.

Periodicity does not arise for continuous-time Markov chains, because of the following lemma.

Lemma 3.1. For any irreducible continuous-time Markov chain, $P_t$ has strictly positive entries for all $t > 0$.

Proof. (i) If $x = y$: for any integer $n$, $P_t(x, x) \ge [P_{t/n}(x, x)]^n$, so if $P_t(x, x) = 0$ then $P_{t/n}(x, x) = 0$ for every $n$, which contradicts $P_{t/n}(x, x) \to P_0(x, x) = 1$ as $n \to \infty$.

(ii) If $x \ne y$: by irreducibility there exist $k_1, k_2, \ldots, k_m$ such that

    \lim_{n \to \infty} \frac{P_{t/2^n}(x, k_1) P_{t/2^n}(k_1, k_2) \cdots P_{t/2^n}(k_{m-1}, k_m) P_{t/2^n}(k_m, y)}{(t/2^n)^{m+1}} = \alpha(x, k_1) \alpha(k_1, k_2) \cdots \alpha(k_{m-1}, k_m) \alpha(k_m, y) > 0.

That is, $P_{t/2^n}(x, k_1) P_{t/2^n}(k_1, k_2) \cdots P_{t/2^n}(k_{m-1}, k_m) P_{t/2^n}(k_m, y) > 0$ for sufficiently large $n$. Therefore, using part (i) for the last factor,

    P_t(x, y) \ge P_{t/2^n}(x, k_1) P_{t/2^n}(k_1, k_2) \cdots P_{t/2^n}(k_m, y) \, P_{t - (m+1)t/2^n}(y, y) > 0.
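To make the lemma concrete, here is a minimal numerical sketch (our own illustration, assuming NumPy and SciPy are available; the generator is the one from Example 3.1 below): even though $A$ has several zero off-diagonal entries, every entry of $P_t = e^{tA}$ is strictly positive for each tested $t > 0$.

    import numpy as np
    from scipy.linalg import expm

    # Generator from Example 3.1 below: rows sum to 0, off-diagonal
    # rates nonnegative, and the chain is irreducible.
    A = np.array([[-1.0,  1.0,  0.0,  0.0],
                  [ 1.0, -3.0,  1.0,  1.0],
                  [ 0.0,  1.0, -2.0,  1.0],
                  [ 0.0,  1.0,  1.0, -2.0]])

    for t in (0.01, 0.1, 1.0):
        Pt = expm(t * A)                          # P_t = e^{tA}
        assert (Pt > 0).all()                     # Lemma 3.1: strictly positive
        assert np.allclose(Pt.sum(axis=1), 1.0)   # each row is a distribution
    print("P_t has strictly positive entries for every tested t > 0")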

Remark: From the proof we can conclude that an equivalent definition of irreducibility is: a continuous-time Markov chain is irreducible if for every $x, y \in S$, $P_t(x, y) > 0$ for some $t$.

Corollary 3.1. A continuous-time Markov chain is irreducible if and only if its embedded chain is irreducible.

Denote by $H_x$ the holding time in state $x$, i.e., $H_x = \inf\{t > 0 : X_t \ne x\}$ given $X_0 = x$, and let $T_{x,x} = \inf\{t \ge H_x : X_t = x\}$ given $X_0 = x$, the amount of time until the Markov chain revisits state $x$ after the first change of state.

Definition 3.2. State $x$ is called recurrent if, with probability 1, the Markov chain returns to state $x$ within a finite amount of time, i.e., $P(T_{x,x} < \infty) = 1$. Otherwise it is called transient.

Remark: A state $x$ of a continuous-time Markov chain is recurrent/transient if and only if it is recurrent/transient for the embedded discrete-time chain. Consequently, an irreducible continuous-time Markov chain on a finite state space is recurrent. Moreover, if $x$ is recurrent, the total amount of time that $X_t$ spends at $x$, namely $\int_0^\infty I(X_s = x)\,ds$, is infinite with probability 1.

Proof. Note that $X_0 = Y_0 = x$. Define $\tau_x = \inf\{n \ge 1 : Y_n = x\}$; then $T_{x,x} = T_1 + T_2 + \cdots + T_{\tau_x}$, where $T_i$ is the $i$-th holding time, so $T_{x,x} = \infty$ if and only if $\tau_x = \infty$. Also

    \int_0^\infty I(X_s = x)\,ds = \sum_{k=1}^{N_x} H_{x,k},

where the $H_{x,k}$ are independent exponentially distributed random variables with rate $\alpha(x)$ and $N_x = \sum_{n=0}^\infty I(Y_n = x)$ counts the visits of the embedded chain to $x$. When $x$ is recurrent, $N_x = \infty$ with probability 1, so the total time is infinite with probability 1.
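The decomposition into exponential holding times and embedded-chain jumps can be checked by simulation. The following sketch is our own illustration (generator again taken from Example 3.1 below; `simulate` is a hypothetical helper): it alternates $\mathrm{Exp}(\alpha(x))$ holding times with jumps of $Y_n$ and confirms that the mean holding time in state 1 is close to $1/\alpha(1) = 1/3$.

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[-1.0,  1.0,  0.0,  0.0],
                  [ 1.0, -3.0,  1.0,  1.0],
                  [ 0.0,  1.0, -2.0,  1.0],
                  [ 0.0,  1.0,  1.0, -2.0]])
    alpha = -np.diag(A)                 # holding rates alpha(x)
    P_emb = A / alpha[:, None]          # off-diagonal: alpha(x,y)/alpha(x)
    np.fill_diagonal(P_emb, 0.0)        # the embedded chain never stays put

    def simulate(x0, horizon):
        """Alternate Exp(alpha(x)) holding times with embedded-chain jumps."""
        t, x = 0.0, x0
        states, holds = [], []
        while t < horizon:
            h = rng.exponential(1.0 / alpha[x])     # H_x ~ Exp(alpha(x))
            states.append(x)
            holds.append(h)
            t += h
            x = rng.choice(len(alpha), p=P_emb[x])  # jump of Y_n
        return np.array(states), np.array(holds)

    states, holds = simulate(0, horizon=50_000.0)
    print(holds[states == 1].mean())    # should be close to 1/alpha(1) = 1/3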

Definition 3.3. For a continuous-time Markov chain, a probability vector $\pi$ is said to be an invariant probability distribution if $\pi P_t = \pi$ for all $t > 0$.

Lemma 3.2. A nonnegative vector $\pi$ with $\pi \mathbf{1} = 1$ is an invariant probability distribution if and only if $\pi A = 0$.

Proof. If $\pi P_t = \pi$ for all $t > 0$, then by the forward equation $\frac{d}{dt} P_t = P_t A$,

    0 = \frac{d(\pi P_t)(y)}{dt} = \sum_x \pi(x) \frac{dP_t(x, y)}{dt} = \sum_x \pi(x) \sum_{z \in S} P_t(x, z) A_{z,y} = \sum_{z \in S} \Big( \sum_x \pi(x) P_t(x, z) \Big) A_{z,y} = \sum_{z \in S} \pi(z) A_{z,y} = (\pi A)(y).

Conversely, if $\pi A = 0$, then by the backward equation $\frac{d}{dt} P_t = A P_t$,

    \frac{d}{dt} \Big( \sum_x \pi(x) P_t(x, y) \Big) = \sum_x \pi(x) \frac{dP_t(x, y)}{dt} = \sum_x \pi(x) \sum_{z \in S} A_{x,z} P_t(z, y) = \sum_{z \in S} (\pi A)(z) P_t(z, y) = 0,

so $\pi P_t$ is constant in $t$ and $\pi P_t = \pi P_0 = \pi$.

Note that for an irreducible continuous-time Markov chain $X_t$ with finite state space $S$, the embedded discrete-time Markov chain $Y_n$ is irreducible and recurrent with finite state space $S$, hence it has a unique positive invariant probability vector. Denote the one-step transition matrix of $Y_n$ by $P$. Since $A_{x,y} = \alpha(x) P(x, y)$ for $x \ne y$ and $A_{x,x} = -\alpha(x)$, a direct calculation shows that for any vector $\eta = (\eta(x))$,

    \eta^\top A = 0 \iff \sum_x \eta(x) A_{x,y} = 0 \text{ for all } y \in S \iff \sum_{x \ne y} \eta(x) \alpha(x) P(x, y) = \eta(y) \alpha(y) \iff \pi P = \pi,

where $\pi = (\pi(x))$ with $\pi(x) = \eta(x) \alpha(x)$; i.e., $(\eta(x)\alpha(x))$ is proportional to the unique invariant probability vector of $Y_n$. Hence there exists a unique positive probability vector $\eta$ satisfying $\eta^\top A = 0$, namely

    \eta(x) = \Big( \sum_y \frac{\pi(y)}{\alpha(y)} \Big)^{-1} \frac{\pi(x)}{\alpha(x)}.
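The relation $\eta(x) \propto \pi(x)/\alpha(x)$ between the invariant vectors of $X_t$ and $Y_n$ can be verified numerically. A sketch under the same illustrative generator (NumPy assumed):

    import numpy as np

    A = np.array([[-1.0,  1.0,  0.0,  0.0],
                  [ 1.0, -3.0,  1.0,  1.0],
                  [ 0.0,  1.0, -2.0,  1.0],
                  [ 0.0,  1.0,  1.0, -2.0]])
    alpha = -np.diag(A)
    P = A / alpha[:, None]
    np.fill_diagonal(P, 0.0)            # embedded chain Y_n

    # Invariant vector of Y_n: left eigenvector of P with eigenvalue 1.
    w, V = np.linalg.eig(P.T)
    pi_emb = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    pi_emb = pi_emb / pi_emb.sum()

    # eta(x) proportional to pi_emb(x)/alpha(x) should solve eta A = 0.
    eta = pi_emb / alpha
    eta = eta / eta.sum()
    print(eta @ A)                      # approximately the zero vector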

Another method of verification is to follow Exercise 3.4 in the textbook:

Theorem 3.1. For an irreducible continuous-time Markov chain with finite state space $S$, there is a unique probability vector $\pi$ satisfying $\pi A = 0$, and all the other eigenvalues of $A$ have negative real part.

Proof. $A$ is the infinitesimal generator of an irreducible continuous-time Markov chain, so it has the following properties: the row sums equal 0; the diagonal elements are nonpositive; the off-diagonal elements are nonnegative. Let $a$ be a positive number greater than the absolute values of all the entries of $A$; then $P = \frac{1}{a} A + I$ is the transition matrix of a discrete-time, irreducible, aperiodic Markov chain:

(i) $P_{xy} = \frac{1}{a} A_{xy} + I(x = y) \ge 0$;

(ii) $\sum_y P_{xy} = \frac{1}{a} \sum_y A_{xy} + 1 = 1$;

(iii) $P_{xx} = \frac{1}{a} A_{xx} + 1 > 0$, so the chain is aperiodic;

(iv) by irreducibility, for each $x, y \in S$ there exist distinct $z_1, z_2, \ldots, z_j$ with $A_{x z_1}, A_{z_1 z_2}, \ldots, A_{z_{j-1} z_j}, A_{z_j y}$ all strictly positive, so

    P^{j+1}_{xy} \ge P_{x z_1} P_{z_1 z_2} \cdots P_{z_{j-1} z_j} P_{z_j y} \ge \frac{A_{x z_1} A_{z_1 z_2} \cdots A_{z_{j-1} z_j} A_{z_j y}}{a^{j+1}} > 0.

For an irreducible, aperiodic Markov chain, by the Perron-Frobenius theorem, $P$ has a unique left eigenvector with eigenvalue 1, which is a probability vector, and all the other eigenvalues of $P$ have absolute value strictly less than 1. Since $A = a(P - I)$, the eigenvalues of $A$ are exactly $a(\lambda - 1)$ for the eigenvalues $\lambda$ of $P$: $\lambda = 1$ gives the simple eigenvalue 0 with the unique probability left eigenvector $\pi$, and $|\lambda| < 1$ gives $\operatorname{Re}\, a(\lambda - 1) < 0$.

Theorem 3.2. For an irreducible continuous-time Markov chain with finite state space $S$, $\lim_{t \to +\infty} P_t = \Pi$, where every row of $\Pi$ equals the $\pi$ with $\pi A = 0$.

Proof. Fix $t > 0$. $Q = P_t$ can be regarded as the one-step transition probability matrix of an irreducible, aperiodic discrete-time Markov chain with finite state space $S$, so $\lim_{l \to \infty} Q^l = \Pi$, where every row of $\Pi$ is the invariant probability distribution of that chain; the limit is independent of the choice of $t > 0$. Moreover,

    \Pi A = \lim_{t \to \infty} P_t A = \lim_{t \to \infty} \frac{dP_t}{dt} = 0,

so each row $\pi$ of $\Pi$ satisfies $\pi A = 0$.
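Both theorems are easy to check numerically for a concrete generator. The sketch below (NumPy/SciPy assumed; generator from Example 3.1 below) computes $\pi$ as the left null vector of $A$, confirms that the rows of $P_t = e^{tA}$ approach $\pi$, and lists the eigenvalues of $A$.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-1.0,  1.0,  0.0,  0.0],
                  [ 1.0, -3.0,  1.0,  1.0],
                  [ 0.0,  1.0, -2.0,  1.0],
                  [ 0.0,  1.0,  1.0, -2.0]])

    # pi solves pi A = 0: left eigenvector of A for the eigenvalue 0.
    w, V = np.linalg.eig(A.T)
    pi = np.real(V[:, np.argmin(np.abs(w))])
    pi = pi / pi.sum()
    print(pi)                                  # (0.25, 0.25, 0.25, 0.25) here

    # Theorem 3.2: every row of P_t = e^{tA} converges to pi.
    print(np.abs(expm(30.0 * A) - pi).max())   # ~ 0 (pi broadcast across rows)

    # Theorem 3.1: besides the simple eigenvalue 0, all eigenvalues have Re < 0.
    print(np.sort(np.real(w)))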

3.2.3 Exit distributions and hitting times

Suppose $X_t$ is an irreducible continuous-time Markov chain on a finite state space $S$.

I. Define $T = \inf\{t \ge 0 : X_t \ne x\}$, i.e., the time of the first exit from $x$. Given $X_0 = x$, $T$ is exponential with parameter $\alpha(x)$; therefore $E(T \mid X_0 = x) = 1/\alpha(x)$.

II. For a fixed state $z \in S$, define $Y = \inf\{t \ge 0 : X_t = z\}$, i.e., the time of the first visit to $z$. Define $b(x) = E(Y \mid X_0 = x)$; clearly $b(z) = 0$. Denote $b = (b(x))_{x \ne z}$.

Theorem 3.3. Let $\tilde{A}$ be the matrix obtained from $A$ by deleting the row and the column associated with the state $z$. Then $b = -\tilde{A}^{-1} \mathbf{1}$.

Lemma 3.3. The $\tilde{A}$ in Theorem 3.3 is invertible.

Proof. Observe that the row sums of $\tilde{A}$ are all nonpositive, since $\sum_{y \ne z} A_{x,y} = -\alpha(x, z) \le 0$, and at least one of these row sums is strictly negative; otherwise $\alpha(x, z) = 0$ for all $x \ne z$, which contradicts the irreducibility assumption. From Theorem 3.1 we know that the system $\pi A = 0$, $\pi \mathbf{1} = 1$ has a unique positive solution. Suppose $|S| = n$; then $\operatorname{rank}\binom{A^\top}{\mathbf{1}^\top} = n$ and $\operatorname{rank}(A) = n - 1$, hence $\{\pi : \pi A = 0\}$ has dimension 1. Denote the adjugate (classical adjoint) matrix of $A$ by $A^*$; then $A^* A = \det(A) I = 0$, so each row of $A^*$ is a solution of $\pi A = 0$, i.e., a multiple of the positive vector $\pi$. Since $\operatorname{rank}(A) = n - 1$, $A^* \ne 0$, and since also $A A^* = 0$, each column of $A^*$ is a multiple of $\mathbf{1}$; hence all entries of $A^*$ are nonzero. In particular the $(z, z)$ entry of $A^*$, which is the minor $\det(\tilde{A})$, is nonzero; therefore $\tilde{A}$ is invertible.

Proof of Theorem 3.3. By definition, for $x \ne z$, conditioning on the first jump gives

    b(x) = E(Y \mid X_0 = x) = E(T \mid X_0 = x) + \sum_{y \ne x} P(X_T = y \mid X_0 = x) E(Y \mid X_0 = y) = \frac{1}{\alpha(x)} + \sum_{y \ne x} \frac{\alpha(x, y)}{\alpha(x)} b(y),

i.e., since $b(z) = 0$,

    \alpha(x) b(x) = 1 + \sum_{y \ne x} \alpha(x, y) b(y) = 1 + \sum_{y \ne x, z} \alpha(x, y) b(y),

which implies $1 + \sum_{y \ne z} A_{x,y} b(y) = 0$, i.e., $\mathbf{0} = \mathbf{1} + \tilde{A} b$, so $b = -\tilde{A}^{-1} \mathbf{1}$.
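Theorem 3.3 reduces expected hitting times to a single linear solve. The sketch below (our illustration, anticipating the generator of Example 3.1; `hit_time` is a hypothetical helper) computes $b = -\tilde{A}^{-1}\mathbf{1}$ with `numpy.linalg.solve` and cross-checks $b(0)$ by Monte Carlo.

    import numpy as np

    rng = np.random.default_rng(1)
    A = np.array([[-1.0,  1.0,  0.0,  0.0],
                  [ 1.0, -3.0,  1.0,  1.0],
                  [ 0.0,  1.0, -2.0,  1.0],
                  [ 0.0,  1.0,  1.0, -2.0]])
    z = 3
    keep = [0, 1, 2]                           # the states other than z
    A_tilde = A[np.ix_(keep, keep)]            # delete row and column of z
    b = -np.linalg.solve(A_tilde, np.ones(3))  # b = -A_tilde^{-1} 1
    print(b)                                   # [8/3, 5/3, 4/3], cf. Example 3.1

    # Monte Carlo cross-check of b(0) = E(Y | X_0 = 0).
    alpha = -np.diag(A)
    P = A / alpha[:, None]
    np.fill_diagonal(P, 0.0)

    def hit_time(x):
        t = 0.0
        while x != z:
            t += rng.exponential(1.0 / alpha[x])
            x = rng.choice(4, p=P[x])
        return t

    print(np.mean([hit_time(0) for _ in range(20_000)]))  # ~ 8/3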

Example 3.1. Consider a Markov chain with four states $\{0, 1, 2, 3\}$ and infinitesimal generator

    A = \begin{pmatrix} -1 & 1 & 0 & 0 \\ 1 & -3 & 1 & 1 \\ 0 & 1 & -2 & 1 \\ 0 & 1 & 1 & -2 \end{pmatrix},

with rows and columns indexed by $0, 1, 2, 3$. Let $z = 3$ and compute $b = (b(x))_{x \ne 3}$, where $b(x) = E(Y \mid X_0 = x)$ and $Y = \inf\{t \ge 0 : X_t = 3\}$. Deleting the row and column of state 3,

    \tilde{A} = \begin{pmatrix} -1 & 1 & 0 \\ 1 & -3 & 1 \\ 0 & 1 & -2 \end{pmatrix}, \qquad b = -\tilde{A}^{-1} \mathbf{1} = (8/3, 5/3, 4/3)^\top.

Exercises:

1. Assume $\pi = (\pi(x))$ is the invariant probability distribution of an irreducible continuous-time Markov chain. Let $V_x(t) = \int_0^t I(X_s = x)\,ds$, i.e., the time spent in state $x$ up to time $t$. Show that

    \lim_{t \to \infty} \frac{V_x(t)}{t} = \pi(x) \quad \text{almost surely},

i.e., $\pi(x)$ is the long-run proportion of time spent in state $x$.

2. (Detailed balance condition) Show that:

(1) For a discrete-time Markov chain with one-step transition probability matrix $P$ and state space $S$: if a nonnegative vector $\pi = (\pi(x))$ satisfies $\pi(x) P(x, y) = \pi(y) P(y, x)$ for all $x \ne y$ and $\sum_x \pi(x) = 1$, then $\pi$ is an invariant probability distribution.

(2) For a continuous-time Markov chain with infinitesimal generator $A$ and state space $S$: if a nonnegative vector $\pi = (\pi(x))$ satisfies $\pi(x) A_{x,y} = \pi(y) A_{y,x}$ for all $x \ne y$ and $\sum_x \pi(x) = 1$, then $\pi$ is an invariant probability distribution.
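Neither exercise is solved by simulation, but both claims can be illustrated numerically. For the chain of Example 3.1, $A$ happens to be symmetric, so the uniform vector satisfies detailed balance and is therefore invariant; a long trajectory should then spend roughly a quarter of its time in each state. A sketch (NumPy assumed; the helper structure is ours):

    import numpy as np

    rng = np.random.default_rng(2)
    A = np.array([[-1.0,  1.0,  0.0,  0.0],
                  [ 1.0, -3.0,  1.0,  1.0],
                  [ 0.0,  1.0, -2.0,  1.0],
                  [ 0.0,  1.0,  1.0, -2.0]])
    alpha = -np.diag(A)
    P = A / alpha[:, None]
    np.fill_diagonal(P, 0.0)

    # Exercise 2(2): A is symmetric, so pi = (1/4, ..., 1/4) satisfies
    # pi(x) A[x, y] = pi(y) A[y, x]; hence pi A = 0 and pi is invariant.
    pi = np.full(4, 0.25)
    print(pi @ A)                             # zero vector

    # Exercise 1: occupation fractions V_x(t)/t along one long trajectory.
    occ, t, x, horizon = np.zeros(4), 0.0, 0, 200_000.0
    while t < horizon:
        h = rng.exponential(1.0 / alpha[x])
        occ[x] += min(h, horizon - t)         # clip the final holding interval
        t += h
        x = rng.choice(4, p=P[x])
    print(occ / horizon)                      # ~ (0.25, 0.25, 0.25, 0.25)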