Properties of Markov Chains and Evaluation of Steady State Transition Matrix $P_{ss}$

V. Krishnan - 3/9/2

Property 1

Let $X$ be a Markov Chain (MC) where $X = \{X_n : n = 0, 1, \ldots\}$. The state space is $E = \{i, j, k, \ldots\}$. The one-step transition probability is the probability of $X_1 = j$ given that $X_0 = i$, or

$$p_{ij} = P\{X_1 = j \mid X_0 = i\} \quad (1)$$

If the MC has stationary transition probabilities, then

$$P\{X_1 = j \mid X_0 = i\} = P\{X_{n+1} = j \mid X_n = i\} \quad (2)$$

If $P$ is the matrix of transition probabilities, then $p_{ij}$ is the $(i,j)$-th element in $P$ as shown below:

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1j} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2j} & \cdots & p_{2n} \\ \vdots & & & & & \vdots \\ p_{i1} & p_{i2} & \cdots & & \cdots & p_{in} \\ \vdots & & & & & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nj} & \cdots & p_{nn} \end{bmatrix} \quad (3)$$

Property 2

Let $\{X_n : n = 0, 1, \ldots\}$ be a Markov Chain with Markov matrix $P$. Then the probability of going from state $i$ to state $j$ in $n$ steps is the $n$-step transition probability given by

$$P^n(i, j) = P\{X_n = j \mid X_0 = i\} \quad (4)$$

where $P^n(i, j)$ is the $(i,j)$-th element in the $n$-step probability transition matrix $P^n$.

Property 3

Let $\{X_n : n = 0, 1, \ldots\}$ be a Markov Chain with Markov matrix $P$. Let the initial probability row vector be $p = [p_1 \; p_2 \; \cdots \; p_j \; \cdots \; p_n]$ with $p_i = P\{X_0 = i\}$. Then

$$P\{X_n = j\} = (p P^n)(j) \quad (5)$$
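To make Properties 2 and 3 concrete, here is a minimal sketch in Python/NumPy; the transition matrix and initial vector below are hypothetical examples, not taken from this handout.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); illustration only.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Hypothetical initial probability row vector p_i = P{X_0 = i}.
p0 = np.array([1.0, 0.0, 0.0])

n = 10
Pn = np.linalg.matrix_power(P, n)   # Property 2: P^n(i, j) = P{X_n = j | X_0 = i}
pn = p0 @ Pn                        # Property 3: P{X_n = j} = (p P^n)(j)

print(Pn)   # n-step transition matrix
print(pn)   # marginal distribution of X_n
```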

Property 4

The first passage time $T_{ij}$ is a random variable defining the time it takes to reach a fixed state $j$ for the first time from the initial state $i$, defined by

$$T_{ij} = \min\{n \geq 1 : X_0 = i, \; X_n = j\} \quad (6)$$

If $X_0 = i$, then $T_{ii}$ represents the first return to state $i$. If $f_{ij}^{(n)}$ represents the first passage probability to state $j$ from state $i$ in $n$ steps, then the probability of eventually reaching state $j$ is given by

$$f_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)} = P\{T_{ij} < \infty \mid X_0 = i\} \quad (7)$$

If $f_{jj} = 1$, then the eventual return to state $j$ is certain and $j$ is called a recurrent state; if $f_{jj} < 1$, then the return to $j$ is uncertain and the state is called transient. Thus, we can define a first passage probability matrix $F$ whose elements are either 1, 0, or values in between. The 1's and 0's can be filled in by inspecting the state transition diagram, but the in-between values have to be calculated.

Similarly, the number of visits to state $j$ starting from state $i$ is another random variable $N_{ij}$, with $N_{ii}$ being at least 1. We can define the expected number of visits (returns) to state $j$ from state $i$ by $r_{ij}$, defined by

$$r_{ij} = E\{N_{ij} \mid X_0 = i\} \quad (8)$$

Thus, we can define the return matrix $R$ whose elements will be $\infty$, 0, or any value in between. Again the values 0 and $\infty$ can be filled in by inspection of the state transition diagram, and the in-between values require calculation. As with the first passage probabilities, the state $j$ is called recurrent if $r_{jj} = \infty$ and transient if $r_{jj} < \infty$. $r_{ij}$ and $f_{ij}$ are connected by the relationship

$$r_{ij} = \begin{cases} \dfrac{1}{1 - f_{jj}}, & i = j \quad \text{(expected number of returns to state } j\text{)} \\[2mm] \dfrac{f_{ij}}{1 - f_{jj}}, & i \neq j \quad \text{(expected number of passages to state } j\text{)} \end{cases} \qquad i, j = 1, 2, \ldots, n \quad (9)$$

Property 5

Let $X$ be a Markov chain with matrix $P$ and let $C$ be a subset of the state space $E$, i.e., $C \subset E$. Then $C$ is closed if

$$\sum_{j \in C} p_{ij} = 1, \quad \text{for all } i \in C \quad (10)$$

Property 6

A closed set of states $C \subset E$ containing no proper subsets that are closed is called irreducible. If the number of states within an irreducible set is finite, then each state in $C$ is recurrent. In an irreducible set every state communicates with every other state.

Property 7

Let $X$ be a Markov chain with matrix $P$. If all the states form a single irreducible set, then a steady state row vector $\pi$ can be determined from

$$\pi P = \pi, \quad \text{or} \quad \pi(I - P) = 0 \quad (11)$$

Property 8

Let $X$ be a Markov chain with finite state space $E$ and $k$ distinct irreducible sets. Let $C_a$ be the $a$-th irreducible set and $P_a$ the corresponding Markov matrix restricted to $C_a$. The matrix $F$ will define the first passage probabilities.

a. If $j$ is a transient state, then

$$\lim_{n \to \infty} P\{X_n = j \mid X_0 = i\} = 0 \quad (12)$$

b. If $i$ and $j$ both belong to the $a$-th irreducible set,

$$\lim_{n \to \infty} P\{X_n = j \mid X_0 = i\} = \pi(j) \quad (13)$$

where $\pi$ is the row vector of steady state probabilities satisfying

$$\pi P_a = \pi \quad \text{and} \quad \sum_{i \in C_a} \pi(i) = 1 \quad (14)$$

c. If state $j$ is recurrent and $i$ is not in its irreducible set,

$$\lim_{n \to \infty} P\{X_n = j \mid X_0 = i\} = f_{ij}\,\pi(j) \quad (15)$$

where $\pi(j)$ is given by eq. (14).

d. If state $j$ is recurrent and $X_0$ is in the same irreducible set as $j$, then

$$\lim_{n \to \infty} \frac{1}{n} \sum_{m=1}^{n} I(X_m = j) = \pi(j) \quad (16)$$

where $I(X_m = j) = 1$ if $X_m = j$ and 0 for any other state.

e. If state $j$ is recurrent, then

$$E[T_{jj}] = \frac{1}{\pi(j)} \quad (17)$$
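Property 7 and the normalization of eq. (14) can be implemented numerically by replacing one redundant equation of $\pi(I - P) = 0$ with the normalization constraint, the same device used by hand in eqs. (30)-(31) below. A minimal sketch in Python/NumPy, applied here to the matrix $P_a$ of the example later in this handout:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 for an irreducible chain (Property 7).

    Writes pi (I - P) = 0 as (I - P)^T pi^T = 0, then replaces the last
    (redundant) equation with the normalization sum(pi) = 1.
    """
    n = P.shape[0]
    A = np.eye(n) - P.T          # (I - P)^T
    A[-1, :] = 1.0               # replace one redundant row by normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# P_a from eq. (29) of the example below.
P_a = np.array([[0.6, 0.4],
                [0.7, 0.3]])
print(stationary_distribution(P_a))   # -> [0.6364, 0.3636], matching eq. (32)
```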

Property 9

Let $X$ be a Markov chain and let $B$ be the finite set of all transient states. Let $Q$ be the transition matrix restricted to the set $B$. Then for $i, j \in B$,

$$r_{ij} = E\{N_{ij} \mid X_0 = i\} = (I - Q)^{-1}(i, j) \quad (18)$$

Calculation of the F matrix

The matrix $P$ is rewritten so that each irreducible recurrent set is treated as a single absorbing state with probability 1. To determine the probability of reaching a recurrent state from a transient state we have to find the probability of reaching the appropriate irreducible set. Thus, the rewritten matrix $\tilde{P}$ is written as

$$\tilde{P} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ b_1 & b_2 & b_3 \cdots & Q \end{bmatrix} \quad (19)$$

where $b_a$ is the vector whose $i$-th entry is the one-step probability of going from transient state $i$ to the irreducible set $C_a$, given by

$$b_a(i) = \sum_{j \in C_a} p_{ij} \quad (20)$$

Example: The $P$ matrix is given by

$$P = \begin{bmatrix} .6 & .4 & 0 & 0 & 0 & 0 & 0 \\ .7 & .3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & .7 & .3 & 0 & 0 & 0 \\ 0 & 0 & .4 & .3 & .3 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ .2 & 0 & .1 & .1 & .1 & .2 & .3 \\ 0 & 0 & .3 & .2 & .4 & 0 & .1 \end{bmatrix}$$

Here states $\{1, 2\}$ form the irreducible set $C_a$, states $\{3, 4, 5\}$ form the irreducible set $C_b$, and states $\{6, 7\}$ are transient, so that in block form

$$P = \begin{bmatrix} P_a & 0 & 0 \\ 0 & P_b & 0 \\ B_1 & B_2 & Q \end{bmatrix} \quad (21)$$

The corresponding reduced matrix $\tilde{P}$ is, summing the one-step probabilities into each irreducible set (e.g., $.1 + .1 + .1 = .3$ and $.3 + .2 + .4 = .9$),

$$\tilde{P} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ .2 & .3 & .2 & .3 \\ 0 & .9 & 0 & .1 \end{bmatrix} \quad (22)$$

Property 10

Let $X$ be a Markov chain with the Markov matrix $\tilde{P}$ given in the reduced form of eq. (19). Then for a transient state $i$ and a recurrent state $j$ in the irreducible set $C_a$ we have

$$f_{ij} = \left[(I - Q)^{-1} b_a\right](i) \quad (23)$$

for each $j$ in the irreducible set.
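Property 10 can be checked numerically. The following sketch (Python/NumPy) uses $Q$, $b_1$, $b_2$ from the example above and reproduces the values of eq. (24) below:

```python
import numpy as np

# Transient block and one-step entry vectors from the example.
Q  = np.array([[0.2, 0.3],
               [0.0, 0.1]])
b1 = np.array([0.2, 0.0])   # one-step probabilities from states 6, 7 into C_a
b2 = np.array([0.3, 0.9])   # one-step probabilities from states 6, 7 into C_b

N = np.linalg.inv(np.eye(2) - Q)   # (I - Q)^{-1}, Property 9
print(N)        # [[1.25, 0.4167], [0, 1.1111]]
print(N @ b1)   # f(6 -> C_a), f(7 -> C_a): [0.25, 0.0]
print(N @ b2)   # f(6 -> C_b), f(7 -> C_b): [0.75, 1.0]
```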

In the above example,

$$Q = \begin{bmatrix} .2 & .3 \\ 0 & .1 \end{bmatrix}, \quad b_1 = \begin{bmatrix} .2 \\ 0 \end{bmatrix}, \quad b_2 = \begin{bmatrix} .3 \\ .9 \end{bmatrix}, \quad (I - Q)^{-1} = \begin{bmatrix} 1.25 & .42 \\ 0 & 1.11 \end{bmatrix}$$

$$(I - Q)^{-1} b_1 = \begin{bmatrix} .25 \\ 0 \end{bmatrix}, \quad (I - Q)^{-1} b_2 = \begin{bmatrix} .75 \\ 1 \end{bmatrix} \quad (24)$$

Property 11

From Property 9 we can write, for any Markov chain $X$,

$$f_{ij} = \begin{cases} 1 - \dfrac{1}{r_{jj}}, & i = j \\[2mm] \dfrac{r_{ij}}{r_{jj}}, & i \neq j \end{cases} \qquad i, j = 1, 2, \ldots, n \quad (25)$$

For the above example, the transient block of the $R$ matrix can be written as

$$R = \begin{bmatrix} 1.25 & .42 \\ 0 & 1.11 \end{bmatrix} \quad (26)$$

This block is obtained from Property 9, viz., $(I - Q)^{-1}$. From Property 11 we can form the corresponding block of the $F$ matrix as

$$F_3 = \begin{bmatrix} 1 - \dfrac{1}{1.25} & \dfrac{.42}{1.11} \\[2mm] 0 & 1 - \dfrac{1}{1.11} \end{bmatrix} = \begin{bmatrix} .2 & .375 \\ 0 & .1 \end{bmatrix} \quad (27)$$

From eqs. (24) and (27) we can formulate the $F$ matrix as follows, bearing in mind that for recurrent states within the same irreducible set the entries are all 1:

$$F = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ .25 & .25 & .75 & .75 & .75 & .2 & .375 \\ 0 & 0 & 1 & 1 & 1 & 0 & .1 \end{bmatrix} \quad (28)$$
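The transient block $F_3$ of eq. (27) can be reproduced from the return matrix via Property 11. A small sketch (Python/NumPy), applying eq. (25) entrywise to the block of eq. (26) with unrounded values:

```python
import numpy as np

# Transient block of R from eq. (26): R = (I - Q)^{-1}.
R = np.array([[1.25, 5/12],
              [0.0,  10/9]])   # .42 and 1.11 are shown rounded in the text

# Property 11, eq. (25): f_ij = r_ij / r_jj for i != j, f_jj = 1 - 1/r_jj.
F3 = R / np.diag(R)                          # divide column j by r_jj
np.fill_diagonal(F3, 1.0 - 1.0 / np.diag(R))

print(F3)   # [[0.2, 0.375], [0.0, 0.1]], matching eq. (27)
```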

Calculation of the Steady State Transition Matrix $P_{ss}$

We are now in a position to calculate the steady state transition matrix using Properties 8b and 8c. The steady state probabilities for the irreducible sets are obtained from Property 8b. In eq. (21) the matrices $P_a$, $P_b$ and $Q$ are:

$$P_a = \begin{bmatrix} .6 & .4 \\ .7 & .3 \end{bmatrix}, \quad P_b = \begin{bmatrix} .7 & .3 & 0 \\ .4 & .3 & .3 \\ 1 & 0 & 0 \end{bmatrix}, \quad Q = \begin{bmatrix} .2 & .3 \\ 0 & .1 \end{bmatrix} \quad (29)$$

We solve for the steady state probability vectors from $\pi_a P_a = \pi_a$ and $\pi_b P_b = \pi_b$ together with the normalization equations $\pi_{a1} + \pi_{a2} = 1$ and $\pi_{b1} + \pi_{b2} + \pi_{b3} = 1$. Thus we have

$$\pi_a(I - P_a) = \begin{bmatrix} \pi_{a1} & \pi_{a2} \end{bmatrix} \begin{bmatrix} .4 & -.4 \\ -.7 & .7 \end{bmatrix} = 0, \quad \pi_b(I - P_b) = \begin{bmatrix} \pi_{b1} & \pi_{b2} & \pi_{b3} \end{bmatrix} \begin{bmatrix} .3 & -.3 & 0 \\ -.4 & .7 & -.3 \\ -1 & 0 & 1 \end{bmatrix} = 0 \quad (30)$$

After discarding one (redundant) equation from each of eqs. (30) and adding the normalization equations, we can write the following linear equations:

$$\begin{bmatrix} .4 & -.7 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \pi_{a1} \\ \pi_{a2} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} .3 & -.4 & -1 \\ -.3 & .7 & 0 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} \pi_{b1} \\ \pi_{b2} \\ \pi_{b3} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \quad (31)$$

and solving for $\pi_a$ and $\pi_b$ we have

$$\pi_{a1} = .6364, \; \pi_{a2} = .3636 \quad \text{and} \quad \pi_{b1} = .6422, \; \pi_{b2} = .2752, \; \pi_{b3} = .0826 \quad (32)$$

Now we determine the steady state probabilities for the transient states from Property 8c, the $F$ matrix of eq. (28) and eqs. (32), as follows:

$$P_{ss} = \begin{bmatrix} .6364 & .3636 & 0 & 0 & 0 & 0 & 0 \\ .6364 & .3636 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & .6422 & .2752 & .0826 & 0 & 0 \\ 0 & 0 & .6422 & .2752 & .0826 & 0 & 0 \\ 0 & 0 & .6422 & .2752 & .0826 & 0 & 0 \\ .25 \times .6364 & .25 \times .3636 & .75 \times .6422 & .75 \times .2752 & .75 \times .0826 & 0 & 0 \\ 0 & 0 & 1 \times .6422 & 1 \times .2752 & 1 \times .0826 & 0 & 0 \end{bmatrix}$$

Or,

$$P_{ss} = \begin{bmatrix} .6364 & .3636 & 0 & 0 & 0 & 0 & 0 \\ .6364 & .3636 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & .6422 & .2752 & .0826 & 0 & 0 \\ 0 & 0 & .6422 & .2752 & .0826 & 0 & 0 \\ 0 & 0 & .6422 & .2752 & .0826 & 0 & 0 \\ .1591 & .0909 & .4817 & .2064 & .0619 & 0 & 0 \\ 0 & 0 & .6422 & .2752 & .0826 & 0 & 0 \end{bmatrix} \quad (33)$$

This matrix is the same as that calculated by using recursive techniques.
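As a closing check, the limiting matrix of eq. (33) can also be verified directly from Property 2 by raising the full $P$ of the example to a large power. A short sketch (Python/NumPy):

```python
import numpy as np

# Full 7-state transition matrix from the example.
P = np.array([
    [0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.7, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.7, 0.3, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.4, 0.3, 0.3, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [0.2, 0.0, 0.1, 0.1, 0.1, 0.2, 0.3],
    [0.0, 0.0, 0.3, 0.2, 0.4, 0.0, 0.1],
])

# P^n converges to the P_ss of eq. (33) as n grows (Properties 8a-8c).
Pss = np.linalg.matrix_power(P, 200)
print(np.round(Pss, 4))
```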