Classification of Countable State Markov Chains


Friday, March 21, 2014 2:01 PM

How can we determine whether a communication class in a countable state Markov chain is:
- transient
- null recurrent
- positive recurrent

Topological analysis is not enough anymore! Any communication class with one-way connections to another communication class must be transient, because there is a positive probability of following a path into another communication class from which there is no return. Any closed communication class with finitely many states is positive recurrent. But we cannot decide the transience/recurrence of a closed communication class with infinitely many states from topology alone; we need some analysis.

It will suffice to assume that we are looking at an irreducible Markov chain, since we can treat a closed communication class as an irreducible Markov chain in its own right.

We will develop a systematic procedure for deciding whether an irreducible Markov chain is transient, positive recurrent, or null recurrent by putting together some propositions that we'll now prove.

Proposition 1: An irreducible Markov chain has a stationary distribution if and only if it is positive recurrent.

Proof: We showed, in our proof of existence for FSDT MCs, that positive recurrence is enough to guarantee the existence of a stationary distribution. In the other direction, why does the existence of a stationary distribution imply that the MC must be positive recurrent? Given a stationary distribution $\pi$, if we initialize the MC with this stationary distribution, then $P(X_n = i) = \pi_i$ for all $n$. But for both null recurrent and transient MCs, one can show $\lim_{n \to \infty} P(X_n = i) = 0$ for every state $i$. These two statements contradict each other, since a probability distribution $\pi$ must assign positive probability to at least one state. So an MC with a stationary distribution must be positive recurrent.

Proposition 2: If an irreducible Markov chain does not have an invariant measure, then it must be transient.

Proof: Our proof of the existence of a stationary distribution for irreducible FSDT MCs in fact, under examination, shows that any recurrent MC has an invariant measure.

This is not a decisive test, however; if one has an MC with an invariant measure that does not have a stationary distribution, it could be null recurrent or it could be transient.

Proposition 3 (decisive test for transience for an irreducible MC):
1) Choose any reference state $i_0$.
2) Let $Q$ be the matrix obtained by deleting the row and column corresponding to $i_0$ from the probability transition matrix $P$. If the only bounded, nonnegative (column vector) solution to the equation $y = Qy$ is $y = 0$, then the MC is recurrent. Otherwise it is transient.

Similarly, with the same definition of $Q$, if the equation $y = Qy$ has any unbounded, nonnegative solution ($y_i \to \infty$), then the MC is recurrent. (This is not an if and only if statement.)
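As a quick illustration of the test (an example added here for concreteness; the notes take up general birth-death chains next time), consider the biased random walk on the nonnegative integers that steps up with probability $p$ and down with probability $q = 1 - p$ from each state $i \ge 1$. Take reference state $i_0 = 0$. The equation $y = Qy$ reads

$y_1 = p\, y_2, \qquad y_i = q\, y_{i-1} + p\, y_{i+1} \quad (i \ge 2),$

which is the recursion $y_i = q\, y_{i-1} + p\, y_{i+1}$ for all $i \ge 1$ with the convention $y_0 = 0$. For $p \ne q$ the general solution is $y_i = A + B (q/p)^i$, and the condition $y_0 = 0$ forces $y_i = A \left( 1 - (q/p)^i \right)$. If $p > q$ this is bounded, nonnegative, and nonzero, so the walk is transient. If $p < q$ it is unbounded unless $A = 0$, so the only bounded nonnegative solution is $y = 0$ and the walk is recurrent. For $p = q = 1/2$ the general solution is $y_i = A + B i$, which with $y_0 = 0$ gives $y_i = B i$, unbounded unless zero, so the symmetric walk is recurrent.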

We'll prove the first statement; the second statement is derived in Karlin and Taylor Sec. 3.4.

Proof of first statement in Proposition 3: Choose the reference state $i_0$ and define, for any $i \ne i_0$,

$u_i = P(X_n \ne i_0 \text{ for all } n \ge 1 \mid X_0 = i).$

This is the probability that, starting from state $i$, the state $i_0$ is never visited. Recurrence is equivalent to the statement: $u_i = 0$ for all $i \ne i_0$.

Now we set up an equation for the $u_i$. To compute an equation for these quantities, we will do a first step analysis, very similar to the absorption probability calculations and their connection to answering questions about irreducible MCs (which of two states gets hit first). We employ the same strategy: turn the target state $i_0$ into an absorbing state; that doesn't change the answer to the question. This modified Markov chain, with the canonical decomposition, will have the probability transition matrix:

$\tilde P = \begin{pmatrix} 1 & \mathbf{0}^{T} \\ R & Q \end{pmatrix},$

where the first row and column correspond to the absorbing state $i_0$, $R$ is the column vector of one-step transition probabilities into $i_0$, and $Q$ is the matrix from Proposition 3. All states other than $i_0$ in the modified MC must be transient because they have a positive probability of hitting the absorbing state.
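For the biased random walk example introduced above (again an added illustration), with $i_0 = 0$, the canonical decomposition has the tridiagonal structure

$\tilde P = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots \\ q & 0 & p & 0 & \cdots \\ 0 & q & 0 & p & \cdots \\ 0 & 0 & q & 0 & \ddots \\ \vdots & & & \ddots & \ddots \end{pmatrix}, \qquad R = \begin{pmatrix} q \\ 0 \\ 0 \\ \vdots \end{pmatrix}, \qquad Q = \begin{pmatrix} 0 & p & 0 & \cdots \\ q & 0 & p & \cdots \\ 0 & q & 0 & \ddots \\ \vdots & & \ddots & \ddots \end{pmatrix},$

so only state 1 can be absorbed in a single step.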

Now we will set up the formula for the absorption probability into state $i_0$ from any other state in this modified MC. Notice that, contrary to the finite state case, the chain is not guaranteed to be absorbed by the single absorbing state. The same first step analysis as before gives the recurrence relation:

$a = R + Qa,$

where $a_i$ is the probability of ever being absorbed at $i_0$ starting from state $i$. (The $a$ and $R$ are normally matrices in absorption probability calculations, but here they're just column vectors because we only have one absorbing state in the modified MC.)

Since $u_i = 1 - a_i$ and the rows of $\tilde P$ sum to one (so $R = \mathbf{1} - Q\mathbf{1}$), we get $u = \mathbf{1} - R - Qa = Q\mathbf{1} - Qa = Qu$.

So now we've shown that $u = Qu$. But this equation can have multiple solutions, and $u$ as defined is not just any solution, but some particular solution of this equation. So which of the solutions to this equation is the right one?
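As a numerical sanity check (a sketch added here, not part of the original notes), one can truncate the biased-walk example at a large level $N$ and solve the first-step-analysis system $a = R + Qa$ directly; for $p > q$ the result should approach the gambler's-ruin absorption probability $a_i = (q/p)^i$. The parameters and truncation level below are arbitrary choices.

```python
import numpy as np

# Biased random walk on {0,1,2,...}: up with prob p, down with prob q.
# Reference state 0 is absorbing; truncating the transient states at N
# makes a_i the probability of hitting 0 before exceeding N, which is a
# good approximation of the untruncated absorption probability when p > q.
p, q, N = 0.7, 0.3, 200

Q = np.zeros((N, N))               # rows/cols indexed by states 1..N
for i in range(N - 1):
    Q[i, i + 1] = p                # state i+1 -> i+2
for i in range(1, N):
    Q[i, i - 1] = q                # state i+1 -> i (the q-step from state 1
                                   # lands on the deleted state 0)
R = np.zeros(N)
R[0] = q                           # only state 1 can be absorbed in one step

a = np.linalg.solve(np.eye(N) - Q, R)     # solve (I - Q) a = R
print(a[:5])                              # compare with (q/p)^i:
print((q / p) ** np.arange(1, 6))
```

The never-return probabilities are then $u_i = 1 - a_i$.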

Lemma: $u$ is the maximal solution to the equation $y = Qy$ satisfying the following property: $0 \le y_i \le 1$ for all $i$. This means that any other solution $y$ to the equation with $0 \le y_i \le 1$ must satisfy $y_i \le u_i$ for all $i$.

Proof of Lemma: First we write, under the modified Markov chain dynamics,

$u_i = \lim_{n \to \infty} P(X_1 \ne i_0, X_2 \ne i_0, \ldots, X_n \ne i_0 \mid X_0 = i) = \lim_{n \to \infty} (Q^n \mathbf{1})_i.$

Now let $y$ be any solution to $y = Qy$ satisfying $0 \le y_i \le 1$ for all $i$. Then, since $Q$ has nonnegative entries,

$y = Qy \le Q\mathbf{1}$ (componentwise).

Continuing by induction,

$y = Q^n y \le Q^n \mathbf{1} \to u \quad \text{as } n \to \infty,$

so $y_i \le u_i$ for all $i$.
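The lemma also suggests a way to compute $u$ numerically: iterate $y \mapsto Qy$ starting from $\mathbf{1}$. Here is a rough sketch (added here, with arbitrary parameters) for the biased-walk example. As long as the truncation level $N$ exceeds the largest inspected state plus the number of iterations, no length-$n$ path from an inspected state can feel the artificial boundary, so the computed entries agree with the infinite chain and match the gambler's-ruin answer $u_i = 1 - (q/p)^i$.

```python
import numpy as np

p, q = 0.7, 0.3
N, n_iter = 600, 400     # boundary effects reach only states >= N - n_iter,
                         # so entries for small i are exact for the infinite chain

Q = np.zeros((N, N))     # transition probabilities among states 1..N,
for i in range(N - 1):   # with the row and column of state 0 deleted
    Q[i, i + 1] = p
for i in range(1, N):
    Q[i, i - 1] = q

y = np.ones(N)
for _ in range(n_iter):
    y = Q @ y            # y = Q^n 1, which decreases monotonically toward u

u_exact = 1.0 - (q / p) ** np.arange(1, N + 1)
print(np.abs(y[:20] - u_exact[:20]).max())   # essentially machine precision
```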

Before returning to the classification problem, note that this lemma gives a useful result for how to calculate the probability that, in a countable state irreducible Markov chain, one will ever visit a state $i_0$ from another state $j$: it is $1 - u_j$, where $u$ is the maximal solution of $y = Qy$ with $0 \le y_i \le 1$.

That's fine, but what does this have to do with the proposition we're trying to prove? If the only bounded, nonnegative solution to $y = Qy$ is $y = 0$, then the class of solutions to the problem in the lemma is just the single solution $y = 0$, which is therefore the maximal solution, so $u = 0$, which is equivalent to recurrence of the original MC.

What about the other direction? If the condition of Proposition 3 is negated, that means there exists some nonzero, bounded, nonnegative solution $y$ to $y = Qy$. By rescaling the solution by its supremum, we obtain a solution $\tilde y = y / \sup_i y_i$ that satisfies $0 \le \tilde y_i \le 1$ and $\tilde y = Q \tilde y$; that is, a nonzero solution satisfying the conditions of the lemma. Because $u$ is the maximal such solution, $u \ne 0$, and therefore the original MC must be transient.

Putting together the above considerations and propositions, we have a systematic way of classifying the transience/recurrence properties of MCs:

1. Decompose the Markov chain into communication classes, and decide the properties of whatever communication classes can be determined from topology alone. Any non-closed communication class must be transient. Any closed communication class with finitely many states must be positive recurrent. This only leaves closed communication classes with infinitely many states. Treat each such class as an irreducible MC in its own right for the purpose of the following analysis.

2. Two parallel tracks, which can be followed in either order:
   a. Look for an invariant measure (a nonzero $\mu$ with $\mu_i \ge 0$ and $\mu P = \mu$).
      i. If you can find an invariant measure that can be normalized ($\sum_i \mu_i < \infty$) into a stationary distribution, the MC must be positive recurrent.
      ii. If you can show that no invariant measure exists, then the MC must be transient.
      iii. If you find an invariant measure that can't be normalized, then you only know the class is transient or null recurrent, and have to go to test b.

   b. Decisive test for transience. Choose any convenient reference state and form the matrix $Q$ by deleting the corresponding row and column from the probability transition matrix. Look for solutions to the equation $y = Qy$:
      i. If you find a nonzero, nonnegative, bounded solution, then the MC must be transient.
      ii. If you find an unbounded solution ($y_i \to \infty$), then the MC must be recurrent.
      iii. If you can prove that the only nonnegative, bounded solution is $y = 0$, then the MC must be recurrent.

Next time, we will show how this strategy can be employed to classify a general birth-death MC.
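To preview how the two tracks work together (an added illustration; the general birth-death case is treated next time), take the reflecting random walk on $\{0, 1, 2, \ldots\}$ that steps up with probability $p$ and down with probability $q = 1 - p$, with the boundary convention that state 0 steps to 1 with probability $p$ and stays put with probability $q$. Track (a): detailed balance $\mu_i\, p = \mu_{i+1}\, q$ gives the invariant measure $\mu_i = (p/q)^i \mu_0$, which is normalizable exactly when $p < q$; so the walk is positive recurrent for $p < q$, while for $p \ge q$ track (a) only says transient or null recurrent. Track (b): from the calculation after Proposition 3, $y = Qy$ has a bounded nonzero solution exactly when $p > q$; so the walk is transient for $p > q$ and recurrent for $p \le q$. Combining the tracks, the case $p = q = 1/2$ is null recurrent.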