Markov Chains: Absorption
Hamid R. Rabiee


Absorbing Markov Chain

An absorbing state is one in which the probability that the process remains in that state once it enters it is 1 (i.e., $p_{ii} = 1$). A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to reach an absorbing state (not necessarily in one step).

[Diagram: a random walk on the states 0, 1, 2, 3, 4; states 0 and 4 are absorbing.]
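As a quick illustration of the definition, the sketch below (written for this transcription, not taken from the lecture) tests whether a chain given by its transition matrix is absorbing: it collects the states with $p_{ii} = 1$ and then checks that every state can reach one of them through positive-probability transitions.

```python
import numpy as np

def is_absorbing_chain(P):
    """Return True if the chain with transition matrix P is absorbing."""
    n = P.shape[0]
    absorbing = {i for i in range(n) if P[i, i] == 1.0}
    if not absorbing:
        return False
    for start in range(n):
        # Graph search: some absorbing state must be reachable from `start`
        # (not necessarily in one step).
        seen, frontier = {start}, [start]
        while frontier:
            i = frontier.pop()
            if i in absorbing:
                break
            for j in range(n):
                if P[i, j] > 0 and j not in seen:
                    seen.add(j)
                    frontier.append(j)
        else:
            return False  # search exhausted without meeting an absorbing state
    return True

# The 5-state random walk of the example: states 0 and 4 are absorbing.
P = np.array([[1., 0., 0., 0., 0.],
              [.5, 0., .5, 0., 0.],
              [0., .5, 0., .5, 0.],
              [0., 0., .5, 0., .5],
              [0., 0., 0., 0., 1.]])
print(is_absorbing_chain(P))  # True
```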

The Canonical Form

By separating transient (TR) and absorbing (ABS) states, the transition matrix of any absorbing Markov chain can be written as:

$$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}$$

where $Q$ holds the transitions among transient states, $R$ the transitions from transient to absorbing states, $0$ is a zero matrix, and $I$ is an identity matrix. And as time passes we can see that:

$$P^n = \begin{pmatrix} Q^n & * \\ 0 & I \end{pmatrix}$$

where $*$ denotes a submatrix determined by $Q$ and $R$.
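To make the block structure concrete, here is a minimal NumPy sketch (an illustration added for this transcription) that assembles the canonical form for the 5-state random walk used as the running example, ordering the transient states 1, 2, 3 before the absorbing states 0, 4.

```python
import numpy as np

# Transitions among the transient states (order: 1, 2, 3).
Q = np.array([[0., .5, 0.],
              [.5, 0., .5],
              [0., .5, 0.]])
# Transitions from transient to absorbing states (order: 0, 4).
R = np.array([[.5, 0.],
              [0., 0.],
              [0., .5]])

# Canonical form: P = [[Q, R], [0, I]].
P = np.block([[Q, R],
              [np.zeros((2, 3)), np.eye(2)]])
print(P)
```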

Absorption Theorem

In an absorbing MC the probability that the process will be absorbed is 1 (i.e., $Q^n \to 0$ as $n \to \infty$).

Proof sketch: By the definition of an absorbing MC, there exists a path from any non-absorbing state $s_j$ to an absorbing state. So there is a positive probability $p_j$ of taking this path every time the process starts from $s_j$. Therefore there exist $p < 1$ and $m$ such that the probability of not being absorbed after $m$ steps is at most $p$. After $km$ steps the probability of not being absorbed is at most $p^k$, and as time goes to infinity this probability approaches zero.
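The claim $Q^n \to 0$ is easy to observe numerically; a short sketch using the $Q$-block of the running example:

```python
import numpy as np

Q = np.array([[0., .5, 0.],
              [.5, 0., .5],
              [0., .5, 0.]])

# (Q^n)[i, j] is the probability of being in transient state j after n steps,
# starting from transient state i; every entry decays toward 0.
for n in (1, 5, 10, 50, 100):
    print(n, np.linalg.matrix_power(Q, n).max())
```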

The Fundamental Matrix

Definition: For an absorbing Markov chain $P$, the matrix

$$N = (I - Q)^{-1}$$

is called the fundamental matrix for $P$.

Theorem: For an absorbing MC the matrix $I - Q$ has an inverse $N$, and $N = I + Q + Q^2 + \cdots$. The $ij$-entry $n_{ij}$ of the matrix $N$ is the expected number of times the chain is in state $s_j$, given that it starts in state $s_i$.

Proof: $(I - Q)x = 0 \Rightarrow x = Qx \Rightarrow x = Q^n x$. Since $Q^n \to 0$, we have $Q^n x \to 0$, so $x = 0$. Thus $x = 0$ is the only vector in the nullspace of $I - Q$, and therefore $(I - Q)^{-1} = N$ exists.

$(I - Q)(I + Q + Q^2 + \cdots + Q^n) = I - Q^{n+1}$, hence $I + Q + Q^2 + \cdots + Q^n = N(I - Q^{n+1})$. Letting $n$ tend to infinity we have:

$$N = I + Q + Q^2 + \cdots$$

Proof (cont'd): Consider two transient states $s_i$ and $s_j$, and suppose that $s_i$ is the initial state. Let $X^{(k)}$ be a random variable which equals 1 if the chain is in state $s_j$ after $k$ steps, and equals 0 otherwise. We have

$$P\{X^{(k)} = 1\} = (Q^k)_{ij},$$

so $E[X^{(k)}] = (Q^k)_{ij}$. The expected number of times the chain is in state $s_j$ in the first $n$ steps, given that it starts in state $s_i$, is therefore

$$E[X^{(0)} + X^{(1)} + \cdots + X^{(n)}] = (Q^0)_{ij} + (Q^1)_{ij} + \cdots + (Q^n)_{ij}.$$

As $n$ goes to infinity we have

$$E[X^{(0)} + X^{(1)} + \cdots] = (Q^0)_{ij} + (Q^1)_{ij} + \cdots = N_{ij}.$$
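Both characterizations of $N$ (the inverse and the power series) can be checked numerically; a sketch with the running example's $Q$:

```python
import numpy as np

Q = np.array([[0., .5, 0.],
              [.5, 0., .5],
              [0., .5, 0.]])
I = np.eye(3)

# Fundamental matrix via the inverse: N = (I - Q)^(-1).
N = np.linalg.inv(I - Q)

# The same matrix via the truncated series I + Q + Q^2 + ... + Q^199.
series = sum(np.linalg.matrix_power(Q, k) for k in range(200))

print(N)                       # [[1.5 1.  0.5]
                               #  [1.  2.  1. ]
                               #  [0.5 1.  1.5]]
print(np.allclose(N, series))  # True
```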

Example: Consider the following Markov chain (a 1D random walk with 5 states):

[Diagram: states 0, 1, 2, 3, 4 in a line; from each interior state the walk moves one step left or right with probability 1/2 each; states 0 and 4 are absorbing.]

The transition matrix in canonical form (transient states 1, 2, 3 first, then absorbing states 0, 4) is:

$$P = \begin{pmatrix} 0 & 1/2 & 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/2 & 0 & 0 & 1/2 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \qquad Q = \begin{pmatrix} 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/2 & 0 \end{pmatrix}, \quad R = \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \\ 0 & 1/2 \end{pmatrix}.$$

Example (cont'd): The fundamental matrix is

$$N = (I - Q)^{-1} = \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix}.$$

If we start in state 2, then the expected numbers of times in states 1, 2, and 3 before being absorbed are 1, 2, and 1.
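The visit-count interpretation of $N$ can also be checked by simulation; the sketch below (added for this transcription) runs the walk from state 2 repeatedly and averages the visits to states 1, 2, 3, which should approach the middle row of $N$:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, visits = 100_000, np.zeros(3)

for _ in range(trials):
    state = 2
    while state not in (0, 4):        # run until absorbed at 0 or 4
        visits[state - 1] += 1        # count visit (states 1,2,3 -> slots 0,1,2)
        state += rng.choice((-1, 1))  # step left or right with probability 1/2

print(visits / trials)  # approximately [1. 2. 1.], the middle row of N
```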

Time to Absorption

Question: Given that the chain starts in state $s_i$, what is the expected number of steps before the chain is absorbed?

Reminder: Starting from $s_i$, the expected number of steps the process will be in state $s_j$ before absorption is $N_{ij}$. Therefore $\sum_j N_{ij}$ is the expected number of steps before absorption.

Theorem: Let $t_i$ be the expected number of steps before the chain is absorbed, given that the chain starts in state $s_i$, and let $t$ be the column vector whose $i$-th entry is $t_i$. Then $t = Nc$, where $c$ is a column vector all of whose entries are 1.
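In code the theorem is a single matrix-vector product; continuing with the running example's $N$:

```python
import numpy as np

N = np.array([[1.5, 1., .5],
              [1., 2., 1.],
              [.5, 1., 1.5]])

# c is a column of ones, so t = Nc is just the row sums of N.
c = np.ones(3)
t = N @ c
print(t)  # [3. 4. 3.] -- expected steps to absorption from states 1, 2, 3
```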

Absorption Probabilities

Question: Given that the chain starts in the transient state $s_i$, what is the probability that it will be absorbed in the absorbing state $s_j$?

Intuition: Starting from $s_i$, the expected number of times the process will be in state $s_k$ before absorption is $N_{ik}$. Each such time, the probability of moving to the absorbing state $s_j$ is $R_{kj}$ (the $kj$-th element of the matrix $R$ introduced in the canonical form).

Absorption Probabilities (cont'd)

Theorem: Let $B_{ij}$ be the probability that an absorbing chain will be absorbed in the absorbing state $s_j$ if it starts in the transient state $s_i$, and let $B$ be the matrix with entries $B_{ij}$. Then $B$ is a $t$-by-$r$ matrix ($t$ transient states, $r$ absorbing states), and $B = NR$, where $N$ is the fundamental matrix and $R$ is as in the canonical form.

Proof:

$$B_{ij} = \sum_n \sum_k q_{ik}^{(n)} r_{kj} = \sum_k \sum_n q_{ik}^{(n)} r_{kj} = \sum_k n_{ik} r_{kj} = (NR)_{ij}.$$
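The absorption probabilities are one more product away; a sketch with the running example's $N$ and $R$:

```python
import numpy as np

N = np.array([[1.5, 1., .5],
              [1., 2., 1.],
              [.5, 1., 1.5]])
R = np.array([[.5, 0.],
              [0., 0.],
              [0., .5]])

B = N @ R
print(B)
# [[0.75 0.25]    row i gives the probabilities of being absorbed in
#  [0.5  0.5 ]    states 0 and 4, starting from transient state i+1
#  [0.25 0.75]]
```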

Example: In the previous example (the 1D random walk with 5 states) we found that

$$N = \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix}.$$

Hence

$$t = Nc = \begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix}.$$

The expected number of steps before absorption when the process starts from states 1, 2, 3 is 3, 4, and 3 respectively.

Example (cont'd): From the canonical form,

$$R = \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \\ 0 & 1/2 \end{pmatrix}.$$

Hence

$$B = NR = \begin{pmatrix} 3/4 & 1/4 \\ 1/2 & 1/2 \\ 1/4 & 3/4 \end{pmatrix}.$$

Here the first row tells us that, starting from state 1, there is probability 3/4 of absorption in state 0 and 1/4 of absorption in state 4.

References

Grinstead, C. M., and Snell, J. L., Introduction to Probability, American Mathematical Society, 1997.