Markov Chains (Part 3)

State Classification

State Classification: Accessibility

State j is accessible from state i if p_ij(n) > 0 for some n >= 0, meaning that starting at state i, there is a positive probability of transitioning to state j in some number of steps. This is written i -> j.

State j is accessible from state i (i -> j) if and only if there is a directed path from i to j in the state transition diagram.

Note that every state is accessible from itself, because we allow n = 0 in the above definition and p_ii(0) = P(X_0 = i | X_0 = i) = 1 > 0.
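Accessibility is just reachability in the directed graph whose arcs are the strictly positive entries of P. A minimal sketch in plain Python (not from the slides; the function name `accessible` is my own):

```python
def accessible(P, i, j):
    """True if state j is accessible from state i (i -> j), i.e.
    p_ij(n) > 0 for some n >= 0, found by depth-first search on the
    directed graph whose arcs are the positive entries of P."""
    seen = {i}            # n = 0 is allowed, so every state reaches itself
    stack = [i]
    while stack:
        u = stack.pop()
        if u == j:
            return True
        for v in range(len(P)):
            if P[u][v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return False
```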

State Classification Example

Consider the Markov chain with one-step transition probability matrix

        [ .4  .6   0   0   0 ]
        [ .5  .5   0   0   0 ]
    P = [  0   0  .3  .7   0 ]
        [  0   0  .5  .4  .1 ]
        [  0   0   0  .8  .2 ]

Draw its state transition diagram. (Diagram: states 0 through 4, with an arc for each positive entry of P.)

Which states are accessible from state 0? States 0 and 1 are accessible from state 0.
Which states are accessible from state 3? States 2, 3, and 4 are accessible from state 3.
Is state 0 accessible from state 4? No.
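The slide's answers can be checked with the `accessible` sketch above. The matrix here is as reconstructed; the placement of the zero entries is an assumption based on the stated accessibility answers:

```python
# The 5-state example (states 0..4), entries as reconstructed above
P = [[0.4, 0.6, 0.0, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.3, 0.7, 0.0],
     [0.0, 0.0, 0.5, 0.4, 0.1],
     [0.0, 0.0, 0.0, 0.8, 0.2]]

print(accessible(P, 0, 1))   # True:  state 1 is accessible from state 0
print(accessible(P, 3, 2))   # True:  state 2 is accessible from state 3
print(accessible(P, 4, 0))   # False: state 0 is not accessible from state 4
```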

State Classification Example 2

Now consider a Markov chain with the following state transition diagram: three states 0, 1, 2, with transitions only forward along the chain 0 -> 1 -> 2 (state 2 absorbing).

Is state 1 accessible from state 0? Yes
Is state 0 accessible from state 1? No
Is state 2 accessible from state 0? Yes
Is state 0 accessible from state 2? No

State Classification: Communicability

States i and j communicate if state j is accessible from state i, and state i is accessible from state j (denoted i <-> j).

Communicability is
- Reflexive: any state communicates with itself
- Symmetric: if state i communicates with state j, then state j communicates with state i
- Transitive: if state i communicates with state j, and state j communicates with state k, then state i communicates with state k

For the examples, which states communicate with each other?

State Classification: Communicability

Example 1 (the 5-state chain above): states 0 and 1 communicate with each other, and states 2, 3, and 4 communicate with each other.

Example 2: no two distinct states communicate; states 0, 1, and 2 each communicate only with themselves.

State Classes

Two states are said to be in the same class if the two states communicate with each other; that is, if i <-> j, then i and j are in the same class. Thus, all states in a Markov chain can be partitioned into disjoint classes.

If states i and j are in the same class, then i <-> j. If state i is in one class and state j is in another class, then i and j do not communicate.

How many classes exist in the examples? Which states belong to each class?
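Because communication is an equivalence relation (reflexive, symmetric, transitive, as stated above), the classes can be computed by pairing two reachability tests. A sketch building on the `accessible` function above:

```python
def communicating_classes(P):
    """Partition the states of P into communicating classes:
    i and j share a class iff i -> j and j -> i."""
    classes, assigned = [], set()
    for i in range(len(P)):
        if i in assigned:
            continue
        cls = sorted(j for j in range(len(P))
                     if accessible(P, i, j) and accessible(P, j, i))
        classes.append(cls)
        assigned.update(cls)
    return classes

print(communicating_classes(P))   # [[0, 1], [2, 3, 4]] for Example 1
```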

State Classes

Example 1: 2 classes, {0, 1} and {2, 3, 4}.

Example 2: 3 classes, {0}, {1}, and {2}.

Gambler's Ruin Example

Consider the gambling game with probability p = 0.4 of winning on any turn. The state transition diagram (states 0, 1, 2, 3, with .4 arcs to the right and .6 arcs to the left) corresponds to the one-step transition probability matrix

        [  1   0   0   0 ]
    P = [ .6   0  .4   0 ]
        [  0  .6   0  .4 ]
        [  0   0   0   1 ]

How many classes are there? Three: {0}, {1, 2}, and {3}.
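The same helper reproduces the three classes of the gambler's ruin chain, written out here with p = 0.4 as in the slide:

```python
P_ruin = [[1.0, 0.0, 0.0, 0.0],   # state 0: ruined (absorbing)
          [0.6, 0.0, 0.4, 0.0],   # state 1: lose -> 0, win -> 2
          [0.0, 0.6, 0.0, 0.4],   # state 2: lose -> 1, win -> 3
          [0.0, 0.0, 0.0, 1.0]]   # state 3: goal reached (absorbing)

print(communicating_classes(P_ruin))   # [[0], [1, 2], [3]]
```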

Irreducibility

A Markov chain is irreducible if all states belong to one class (all states communicate with each other).

If there exists some n for which p_ij(n) > 0 for all i and j, then all states communicate and the Markov chain is irreducible.

If a Markov chain is not irreducible, it is called reducible. If a Markov chain has more than one class, it is reducible.

Are the examples reducible or irreducible? (See the sketch below.)
Example 1: Reducible, classes {0, 1} and {2, 3, 4}
Example 2: Reducible, classes {0}, {1}, {2}
Gambler's Ruin Example: Reducible, classes {0}, {1, 2}, {3}
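With the class computation in hand, irreducibility is a one-line check, a sketch:

```python
def is_irreducible(P):
    """A chain is irreducible iff it has exactly one communicating class."""
    return len(communicating_classes(P)) == 1

print(is_irreducible(P))        # False: classes {0,1} and {2,3,4}
print(is_irreducible(P_ruin))   # False: classes {0}, {1,2}, {3}
```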

Examples of Irreducible Chains

Weather example: two states, Sun and Rain. Sun stays sunny with probability p and turns to Rain with probability 1-p; Rain turns to Sun with probability 1-q and stays rainy with probability q.

Inventory example: four states 0, 1, 2, 3 (units in stock), with transition probabilities given by the demand distribution: P(D = 0), P(D = 1), P(D = 2), P(D >= 1), P(D >= 2), P(D >= 3). Every state is accessible from every other state, so both chains are irreducible.
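For instance, the weather chain is irreducible for any 0 < p, q < 1; a quick check with hypothetical values p = 0.8, q = 0.6:

```python
P_weather = [[0.8, 0.2],   # Sun: stays Sun with p, turns to Rain with 1-p
             [0.4, 0.6]]   # Rain: turns to Sun with 1-q, stays Rain with q

print(is_irreducible(P_weather))   # True: Sun and Rain communicate
```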

Periodicity of the Gambler's Ruin

(Diagram: states 0, 1, 2, 3 with .4 arcs to the right and .6 arcs to the left.)

Observe: if you start in state 1 at time 0, you can only come back to it at times 2, 4, 6, 8, ...

In other words, 2 is the greatest common divisor of all integers n > 0 for which p_ii(n) > 0. We say the period of state 1 is 2. The period of state 2 is also 2, and observe that states 1 and 2 are in the same class.

State 0 has a period of 1 and is called aperiodic.

Periodicity

The period of a state i is the greatest common divisor (gcd) of all integers n > 0 for which p_ii(n) > 0.

Periodicity is a class property: if states i and j are in the same class, then their periods are the same.

State i is called aperiodic if there are two consecutive numbers s and s+1 such that the process can be in state i at these times, i.e., the period is 1.
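The gcd definition translates directly into code: scan the powers of P and take the gcd of every step at which a return to state i has positive probability. A sketch (the cutoff of size-squared steps is my assumption; it is generous for the small finite chains in these slides):

```python
from math import gcd

def period(P, i):
    """Period of state i: gcd of all n > 0 with p_ii(n) > 0.
    Returns 0 if no return is seen within the scanned horizon."""
    size = len(P)
    Pn = [row[:] for row in P]           # Pn holds P^step
    d = 0
    for step in range(1, size * size + 1):
        if Pn[i][i] > 0:
            d = gcd(d, step)
            if d == 1:
                return 1                 # aperiodic, stop early
        # Pn = Pn @ P, plain-Python matrix product
        Pn = [[sum(Pn[r][k] * P[k][c] for k in range(size))
               for c in range(size)] for r in range(size)]
    return d

print(period(P_ruin, 1))   # 2: state 1 is revisited only at even times
print(period(P_ruin, 0))   # 1: state 0 is absorbing, hence aperiodic
```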

Periodicity Examples

Which of the following Markov chains are aperiodic? Which are irreducible? Three 3-state transition matrices were shown (entries omitted here); the first cycles deterministically through its states (0 -> 1 -> 2 -> 0), and the other two mix fractions such as 1/3, 1/2, 1/4, and 3/4.

Chain 1: irreducible, because all states communicate; period = 3.
Chain 2: irreducible and aperiodic.
Chain 3: reducible, with 2 classes; each class is aperiodic.
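The first chain can be verified with the helpers above, writing it as the deterministic 3-cycle described:

```python
P_cycle = [[0, 1, 0],   # 0 -> 1
           [0, 0, 1],   # 1 -> 2
           [1, 0, 0]]   # 2 -> 0

print(is_irreducible(P_cycle))   # True: all states communicate
print(period(P_cycle, 0))        # 3: returns only at times 3, 6, 9, ...
```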

Transient States

Consider the gambling example again. Suppose you are in state 1. What is the probability that you will never return to state 1 again?

For example, if you win in state 1, and then win again in state 2, then you will never return to state 1 again. The probability that this happens is .4 * .4 = .16. Thus, there is a positive probability that, starting in state 1, you will never return to state 1 again. State 1 is called a transient state.

In general, a state i is said to be transient if, upon entering state i, there is a positive probability that the process may never return to state i again.

A state i is transient if and only if there exists a state j (different from i) that is accessible from state i, but i is not accessible from j.

In a finite-state Markov chain, a transient state is visited only a finite number of times.

Recurrent States

A state that is not transient is called recurrent. State i is said to be recurrent if, upon entering state i, the process will definitely return to state i. Since a recurrent state will definitely be revisited after each visit, it will be visited infinitely often.

A special type of recurrent state is an absorbing state, where, upon entering this state, the process will never leave it. State i is an absorbing state if and only if p_ii = 1.

Recurrence (and transience) is a class property.

In a finite-state Markov chain, not all states can be transient. Why? Because each transient state is visited only finitely often while the process runs forever, so there would always have to be another state j to move to, and hence there would have to be infinitely many states.
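For a finite chain, the accessibility criterion above is equivalent to checking whether a communicating class is closed: a class is recurrent iff no positive transition leaves it, and a closed singleton class is absorbing. A sketch building on `communicating_classes` (the function name and dict layout are my own):

```python
def classify_states(P):
    """Split the communicating classes of a finite chain into
    recurrent, transient, and absorbing (closed singleton) classes."""
    out = {"recurrent": [], "transient": [], "absorbing": []}
    for cls in communicating_classes(P):
        inside = set(cls)
        closed = all(P[i][j] == 0
                     for i in cls
                     for j in range(len(P)) if j not in inside)
        out["recurrent" if closed else "transient"].append(cls)
        if closed and len(cls) == 1:
            out["absorbing"].append(cls)
    return out
```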

Transient and Recurrent States Examples

Gambler's ruin (see the sketch below):
- Transient states: {1, 2}
- Recurrent states: {0}, {3}
- Absorbing states: {0}, {3}

Inventory problem:
- Transient states: none
- Recurrent states: {0, 1, 2, 3}
- Absorbing states: none
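Running the sketch on the gambler's ruin matrix reproduces the classification above:

```python
print(classify_states(P_ruin))
# {'recurrent': [[0], [3]], 'transient': [[1, 2]], 'absorbing': [[0], [3]]}
```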

Examples of Transient and Recurrent States

(Several small example chains were shown in diagrams, omitted here; their classifications:)
- Transient classes {0} and {1}, aperiodic; absorbing class {2}, aperiodic
- Recurrent class {0, 1, 2}, period = 3
- Transient class {0, 1}, period = 2; recurrent class {2, 3}, period = 2
- Recurrent class {0, 1}, aperiodic

Ergodic Markov Chains

In a finite-state Markov chain, not all states can be transient, so if there are transient states, the chain is reducible. If a finite-state Markov chain is irreducible, all states must be recurrent.

In a finite-state Markov chain, a state that is recurrent and aperiodic is called ergodic. A Markov chain is called ergodic if all of its states are ergodic.

We are interested in irreducible, ergodic Markov chains.
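Putting the pieces together: since the period is a class property, an irreducible finite chain is ergodic exactly when any one state is aperiodic. A final sketch:

```python
def is_ergodic(P):
    """Finite chain: irreducible (hence all states recurrent) and
    aperiodic; checking state 0 suffices because the period is a
    class property."""
    return is_irreducible(P) and period(P, 0) == 1

print(is_ergodic(P_ruin))      # False: the chain is reducible
print(is_ergodic(P_weather))   # True:  irreducible and aperiodic
```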