Markov Chains (Part 4)

Steady-State Probabilities and First Passage Times

Steady-State Probabilities

Remember, for the inventory example we had

P^{(8)} = \begin{pmatrix} 0.286 & 0.285 & 0.263 & 0.166 \\ 0.286 & 0.285 & 0.263 & 0.166 \\ 0.286 & 0.285 & 0.263 & 0.166 \\ 0.286 & 0.285 & 0.263 & 0.166 \end{pmatrix}

For an irreducible ergodic Markov chain,

\lim_{n \to \infty} p_{ij}^{(n)} = \pi_j

where \pi_j = steady-state probability of being in state j.
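As a quick numerical check, here is a minimal sketch (assuming NumPy; not part of the lecture) that raises the inventory example's one-step transition matrix, shown again on a later slide, to the 8th power. All rows come out nearly identical, as in P^{(8)} above.

```python
# Sketch: powers of the inventory example's one-step transition matrix.
# Each row of P^8 approaches the steady-state distribution (0.286, 0.285, 0.263, 0.166).
import numpy as np

P = np.array([
    [0.080, 0.184, 0.368, 0.368],
    [0.632, 0.368, 0.000, 0.000],
    [0.264, 0.368, 0.368, 0.000],
    [0.080, 0.184, 0.368, 0.368],
])
print(np.linalg.matrix_power(P, 8))
```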

Some Observations About the Limit

The behavior of this limit depends on the properties of states i and j and of the Markov chain as a whole.

If i and j are recurrent and belong to different classes, then p_{ij}^{(n)} = 0 for all n.

If j is transient, then \lim_{n \to \infty} p_{ij}^{(n)} = 0 for all i. Intuitively, the probability that the Markov chain is in a transient state after a large number of transitions tends to zero.

In some cases, the limit does not exist. Consider the two-state chain that alternates deterministically between states 0 and 1 (p_{01} = p_{10} = 1): if the chain starts out in state 0, it will be back in state 0 at times 2, 4, 6, ... and in state 1 at times 1, 3, 5, .... Thus p_{00}^{(n)} = 1 if n is even and p_{00}^{(n)} = 0 if n is odd, so \lim_{n \to \infty} p_{00}^{(n)} does not exist.
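A quick illustration (a sketch assuming the chain in question is the alternating two-state chain described above, and assuming NumPy): the powers of P never settle down.

```python
# Two-state periodic chain with p_01 = p_10 = 1: powers of P alternate
# between the identity and the swap matrix, so p_00^(n) has no limit.
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.linalg.matrix_power(P, 4))   # identity matrix: p_00^(4) = 1
print(np.linalg.matrix_power(P, 5))   # swap matrix:     p_00^(5) = 0
```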

Steady-State Probabilities

How can we find these probabilities without calculating P^{(n)} for very large n? The following are the steady-state equations:

\pi_j = \sum_{i=0}^{M} \pi_i p_{ij} for all j = 0, ..., M
\sum_{j=0}^{M} \pi_j = 1
\pi_j \ge 0 for all j = 0, ..., M

In matrix notation we have \pi^T P = \pi^T.

Solve a system of linear equations. Note: there are M+2 equations and only M+1 variables (\pi_0, \pi_1, ..., \pi_M), so one of the equations is redundant and can be dropped. Just don't drop the equation \sum_{j=0}^{M} \pi_j = 1.

Solving for the Steady-State Probabilities

\pi^T P = \pi^T and \sum_{i=0}^{M} \pi_i = 1:

(\pi_0, \pi_1, ..., \pi_M) \begin{pmatrix} p_{00} & p_{01} & \cdots & p_{0M} \\ p_{10} & p_{11} & \cdots & p_{1M} \\ \vdots & & & \vdots \\ p_{M0} & p_{M1} & \cdots & p_{MM} \end{pmatrix} = (\pi_0, \pi_1, ..., \pi_M)

Written out:

\pi_0 p_{00} + \pi_1 p_{10} + ... + \pi_M p_{M0} = \pi_0
\pi_0 p_{01} + \pi_1 p_{11} + ... + \pi_M p_{M1} = \pi_1
...
\pi_0 p_{0M} + \pi_1 p_{1M} + ... + \pi_M p_{MM} = \pi_M
\pi_0 + \pi_1 + ... + \pi_M = 1

The idea is that the chain goes from steady state to steady state: if the distribution over the states X_t is (\pi_0, ..., \pi_M) at time t, it is still (\pi_0, ..., \pi_M) at time t+1.
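A minimal sketch of this procedure in Python (assuming NumPy; the helper name steady_state is illustrative, not from the lecture): write the balance equations as a linear system, drop one redundant equation, and put the normalization constraint in its place.

```python
import numpy as np

def steady_state(P):
    """Steady-state distribution of an irreducible ergodic chain with matrix P."""
    M = P.shape[0]
    # Balance equations pi^T P = pi^T, rewritten as (P - I)^T pi = 0.
    A = (P - np.eye(M)).T
    # Replace the last (redundant) balance equation with sum(pi) = 1.
    A[-1, :] = 1.0
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```

For the inventory example's transition matrix, this returns approximately (0.286, 0.285, 0.263, 0.166), matching the limit shown earlier.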

Steady-State Probabilities Examples

Find the steady-state probabilities for

P = \begin{pmatrix} 0.3 & 0.7 \\ 0.6 & 0.4 \end{pmatrix}

P = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/4 & 3/4 \end{pmatrix}

Inventory example:

P = \begin{pmatrix} 0.080 & 0.184 & 0.368 & 0.368 \\ 0.632 & 0.368 & 0 & 0 \\ 0.264 & 0.368 & 0.368 & 0 \\ 0.080 & 0.184 & 0.368 & 0.368 \end{pmatrix}
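As a usage sketch for the first example (self-contained, assuming NumPy), solve \pi^T (P - I) = 0 together with the normalization \sum_j \pi_j = 1:

```python
import numpy as np

P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
A = (P - np.eye(2)).T
A[-1, :] = 1.0                                  # replace the redundant equation
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)   # approx [0.4615, 0.5385], i.e. (6/13, 7/13)
```

The answer (6/13, 7/13) agrees with the expected recurrence time slide later in this part.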

Other Applications of Steady-State Probabilities

Expected recurrence time: we are often interested in the expected number of steps between consecutive visits to a particular (recurrent) state. What is the expected number of sunny days between rainy days? What is the expected number of weeks between ordering cameras?

Long-run expected average cost per unit time: in many applications, we incur a cost or gain a reward every time a Markov chain visits a specific state. If we incur costs for carrying inventory and costs for not meeting demand, what is the long-run expected average cost per unit time?

Expected Recurrence Times

The expected recurrence time, denoted \mu_{jj}, is the expected number of transitions between two consecutive visits to state j.

The steady-state probabilities \pi_j are related to the expected recurrence times \mu_{jj} by

\mu_{jj} = \frac{1}{\pi_j} for all j = 0, 1, ..., M

Weather Example

What is the expected number of sunny days in between rainy days?

States: Sun = 0, Rain = 1, with transition matrix

P = \begin{pmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{pmatrix}

First, calculate \pi_j:

\pi_0 = 0.8\,\pi_0 + 0.6\,\pi_1
\pi_1 = 0.2\,\pi_0 + 0.4\,\pi_1
\pi_0 + \pi_1 = 1

Substituting \pi_0 = 1 - \pi_1 into the second equation gives (1 - \pi_1)(0.2) + 0.4\,\pi_1 = \pi_1, so 0.2 = 0.8\,\pi_1, and therefore \pi_1 = 1/4 and \pi_0 = 3/4.

Now, \mu_{11} = 1/\pi_1 = 4. For this example, we expect 4 sunny days between rainy days.

Inventory Example

What is the expected number of weeks in between orders?

First, the steady-state probabilities are \pi_0 = 0.286, \pi_1 = 0.285, \pi_2 = 0.263, \pi_3 = 0.166.

Now, \mu_{00} = 1/\pi_0 = 3.5. For this example, on average we order cameras every three and a half weeks.

Expected Recurrence Times Examples

P = \begin{pmatrix} 0.3 & 0.7 \\ 0.6 & 0.4 \end{pmatrix}

\pi_0 = 6/13, \pi_1 = 7/13
\mu_{00} = 13/6 = 2.1667, \mu_{11} = 13/7 = 1.857

P = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/4 & 3/4 \end{pmatrix}

\pi_0 = 3/15, \pi_1 = 4/15, \pi_2 = 8/15
\mu_{00} = 15/3 = 5, \mu_{11} = 15/4 = 3.75, \mu_{22} = 15/8 = 1.875
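A small numerical check of the second example (a self-contained sketch assuming NumPy): solve for \pi, then take \mu_{jj} = 1/\pi_j.

```python
import numpy as np

P = np.array([[1/3, 2/3, 0.0],
              [1/2, 0.0, 1/2],
              [0.0, 1/4, 3/4]])
A = (P - np.eye(3)).T
A[-1, :] = 1.0                                       # normalization sum(pi) = 1
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)       # approx [0.2, 0.2667, 0.5333] = (3/15, 4/15, 8/15)
print(1 / pi)   # approx [5.0, 3.75, 1.875]
```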

Steady-State Cost Analysis

Once we know the steady-state probabilities, we can do some long-run analyses.

Assume we have a finite-state, irreducible Markov chain. Let C(X_t) be the cost at time t, that is, C(j) = expected cost of being in state j, for j = 0, 1, ..., M.

The expected average cost over the first n time steps is

E\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right]

The long-run expected average cost per unit time is a function of the steady-state probabilities:

\lim_{n \to \infty} E\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right] = \sum_{j=0}^{M} \pi_j C(j)

Steady-State Cost Analysis Inventory Example

Suppose there is a storage cost for having cameras on hand:

C(i) = \begin{cases} 0 & \text{if } i = 0 \\ 2 & \text{if } i = 1 \\ 8 & \text{if } i = 2 \\ 18 & \text{if } i = 3 \end{cases}

The long-run expected average cost per unit time is

\pi_0 C(0) + \pi_1 C(1) + \pi_2 C(2) + \pi_3 C(3) = 0.286(0) + 0.285(2) + 0.263(8) + 0.166(18) = 5.662
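The same calculation as a dot product (a sketch assuming NumPy):

```python
import numpy as np

# Long-run expected average cost per unit time: sum_j pi_j * C(j),
# using the steady-state probabilities and storage costs above.
pi = np.array([0.286, 0.285, 0.263, 0.166])
C = np.array([0.0, 2.0, 8.0, 18.0])
print(pi @ C)   # approx 5.66 per week
```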

First Passage Times - Motivation

In many applications, we are interested in the time at which the Markov chain visits a particular state for the first time.

If I start out with a dollar, what is the probability that I will go broke (for the first time) after n gambles?

If I start out with three cameras in my inventory, what is the expected number of weeks after which I will have none for the first time?

The answers to these questions are related to an important concept called first passage times.

First Passage Times The first passage time from state i to state j is the number of transitions made by the process in going from state i to state j for the first time When i = j, this first passage time is called the recurrence time for state i Let f ij (n) = probability that the first passage time from state i to state j is equal to n What is the difference between f ij (n) and p ij (n)? X t j i 0 t t+n p ij (n) includes paths that visit j f ij (n) does not include paths that visit j Markov Chains - 15

Some Observations about First Passage Times

First passage times are random variables and have probability distributions associated with them: f_{ij}^{(n)} = probability that the first passage time from state i to state j is equal to n.

These probability distributions can be computed using a simple idea: condition on where the Markov chain goes after the first transition. For the first passage time from i to j to be n > 1, the Markov chain has to transition from i to some k (different from j) in one step, and then the first passage time from k to j must be n-1.

This concept can be used to derive recursive equations for f_{ij}^{(n)}.

First Passage Times

The first passage time probabilities satisfy a recursive relationship:

f_{ij}^{(1)} = p_{ij}^{(1)} = p_{ij}
f_{ij}^{(2)} = \sum_{k \ne j} p_{ik} f_{kj}^{(1)}
f_{ij}^{(n)} = \sum_{k \ne j} p_{ik} f_{kj}^{(n-1)}

(In the first step the chain moves from i to some state k \ne j with probability p_{ik}; from k, the remaining first passage to j must take exactly n-1 transitions.)
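A minimal sketch of this recursion (assuming NumPy; the helper name first_passage_probs is illustrative, not from the lecture):

```python
import numpy as np

def first_passage_probs(P, i, j, N):
    """f_ij^(n) for n = 1..N, using f_ij^(n) = sum_{k != j} p_ik * f_kj^(n-1)."""
    M = P.shape[0]
    f = np.zeros((N + 1, M))          # f[n, k] holds f_kj^(n)
    f[1, :] = P[:, j]                 # base case: f_kj^(1) = p_kj
    for n in range(2, N + 1):
        for k in range(M):
            f[n, k] = sum(P[k, m] * f[n - 1, m] for m in range(M) if m != j)
    return f[1:, i]

# Inventory example (transition matrix from an earlier slide): probabilities
# that the first order is placed in week 1, 2, 3 when starting in state 3.
P = np.array([
    [0.080, 0.184, 0.368, 0.368],
    [0.632, 0.368, 0.000, 0.000],
    [0.264, 0.368, 0.368, 0.000],
    [0.080, 0.184, 0.368, 0.368],
])
print(first_passage_probs(P, 3, 0, 3))   # approx [0.080, 0.243, ...]
```

The first two values reproduce the hand calculation on the next slide.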

First Passage Times Inventory Example

Suppose we were interested in the number of weeks until the first order (start in state 3, X_0 = 3). Then we would need to know:

What is the probability that the first order is submitted in Week 1?

f_{30}^{(1)} = p_{30} = 0.080

Week 2?

f_{30}^{(2)} = \sum_{k \ne 0} p_{3k} f_{k0}^{(1)} = p_{31} f_{10}^{(1)} + p_{32} f_{20}^{(1)} + p_{33} f_{30}^{(1)}
            = p_{31} p_{10} + p_{32} p_{20} + p_{33} p_{30}
            = 0.184(0.632) + 0.368(0.264) + 0.368(0.080) = 0.243

Week 3?

f_{30}^{(3)} = \sum_{k \ne 0} p_{3k} f_{k0}^{(2)} = p_{31} f_{10}^{(2)} + p_{32} f_{20}^{(2)} + p_{33} f_{30}^{(2)}

Probability of Ever Reaching j from i

If the chain starts out in state i, what is the probability that it visits state j at some future time? This probability is denoted f_{ij}, and

f_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)}

If f_{ij} = 1, then the chain starting at i definitely reaches j at some future time, in which case f_{ij}^{(n)} is a genuine probability distribution for the first passage time.

On the other hand, if f_{ij} < 1, the chain starting at i may never reach j. In fact, the probability that this happens is 1 - f_{ij}.

Expected First Passage Times

The expected first passage time from state i to state j is

\mu_{ij} = \infty if f_{ij} < 1
\mu_{ij} = \sum_{n=1}^{\infty} n f_{ij}^{(n)} if f_{ij} = 1

If f_{ij} = 1, we can also calculate \mu_{ij} using the idea of conditioning on where the chain goes after one transition:

\mu_{ij} = 1 \cdot p_{ij} + \sum_{k \ne j} p_{ik} (1 + \mu_{kj}) = \sum_{k} p_{ik} + \sum_{k \ne j} p_{ik} \mu_{kj} = 1 + \sum_{k \ne j} p_{ik} \mu_{kj}

\mu_{ij} = 1 + \sum_{k=0,\, k \ne j}^{M} p_{ik} \mu_{kj}

Expected First Passage Times Inventory Example

Find the expected time until the first order is submitted:

\mu_{30} = 1 + p_{31}\mu_{10} + p_{32}\mu_{20} + p_{33}\mu_{30}
\mu_{20} = 1 + p_{21}\mu_{10} + p_{22}\mu_{20} + p_{23}\mu_{30}
\mu_{10} = 1 + p_{11}\mu_{10} + p_{12}\mu_{20} + p_{13}\mu_{30}

Solving simultaneously:

\mu_{10} = 1.58 weeks
\mu_{20} = 2.51 weeks
\mu_{30} = 3.50 weeks

Find the expected time between orders:

\mu_{00} = 1/\pi_0 = 3.50 weeks
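A minimal sketch of solving this system numerically (assuming NumPy; the helper name expected_first_passage is illustrative): collect the equations \mu_{ij} = 1 + \sum_{k \ne j} p_{ik} \mu_{kj} for all i \ne j and solve the resulting linear system.

```python
import numpy as np

def expected_first_passage(P, j):
    """Expected first passage times mu_ij into state j, for every i != j."""
    M = P.shape[0]
    others = [k for k in range(M) if k != j]
    Q = P[np.ix_(others, others)]                  # transitions that avoid state j
    mu = np.linalg.solve(np.eye(M - 1) - Q, np.ones(M - 1))
    return dict(zip(others, mu))

# Inventory example: expected weeks until the first order, from each starting state.
P = np.array([
    [0.080, 0.184, 0.368, 0.368],
    [0.632, 0.368, 0.000, 0.000],
    [0.264, 0.368, 0.368, 0.000],
    [0.080, 0.184, 0.368, 0.368],
])
print(expected_first_passage(P, 0))   # approx {1: 1.58, 2: 2.51, 3: 3.50}
```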