Markov Processes Cont'd: Kolmogorov Differential Equations


The Kolmogorov differential equations characterize the transition functions $\{P_{ij}(t)\}$ of a Markov process; the time-dependent behavior of the process is obtained by solving them. Using the Chapman-Kolmogorov equations, we write

$$P_{ij}(t+h) = \sum_{k \in E} P_{ik}(h)\,P_{kj}(t) = \sum_{k \in E,\, k \neq i} P_{ik}(h)\,P_{kj}(t) + P_{ii}(h)\,P_{ij}(t).$$

Subtracting $P_{ij}(t)$ from both sides yields

$$P_{ij}(t+h) - P_{ij}(t) = \sum_{k \in E,\, k \neq i} P_{ik}(h)\,P_{kj}(t) - \bigl(1 - P_{ii}(h)\bigr)P_{ij}(t).$$

Dividing this expression by $h$ and letting $h \to 0$ yields

$$\lim_{h \to 0} \frac{P_{ij}(t+h) - P_{ij}(t)}{h} = \sum_{k \in E,\, k \neq i} \left[\lim_{h \to 0} \frac{P_{ik}(h)}{h}\right] P_{kj}(t) - \left[\lim_{h \to 0} \frac{1 - P_{ii}(h)}{h}\right] P_{ij}(t).$$

Using $q_{ij}$ and $\nu_i$ as previously defined, we find the following system of (backward) Kolmogorov equations:

$$P'_{ij}(t) = \sum_{k \in E,\, k \neq i} q_{ik}\,P_{kj}(t) - \nu_i\,P_{ij}(t), \qquad t \geq 0,\ j \in E.$$

The initial condition is given by $P_{ij}(0) = \delta_{ij}$. In matrix notation, this may be stated as

$$\frac{d}{dt} P(t) = Q\,P(t), \qquad t \geq 0, \quad P(0) = I.$$

In a similar fashion, the (forward) differential equations may be found, yielding

$$\frac{d}{dt} P(t) = P(t)\,Q, \qquad t \geq 0, \quad P(0) = I.$$

In practice, the forward equations are used more frequently.
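Before turning to the closed-form solution, the backward system can be integrated directly with a general-purpose ODE solver. The sketch below uses a hypothetical two-state generator chosen only for illustration; any finite generator would do.

```python
# Minimal sketch: integrate the backward Kolmogorov equations dP/dt = Q P
# for a hypothetical two-state chain (rates are illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])          # generator: each row sums to zero

def backward(t, p_flat):
    P = p_flat.reshape(2, 2)
    return (Q @ P).ravel()            # dP/dt = Q P, flattened for the solver

sol = solve_ivp(backward, (0.0, 2.0), np.eye(2).ravel(), rtol=1e-8)
P_t = sol.y[:, -1].reshape(2, 2)      # P(2), starting from P(0) = I
print(P_t, P_t.sum(axis=1))           # each row of P(t) sums to 1
```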

For any $t \geq 0$, it may be shown that the solution to the forward or backward equations is $P(t) = e^{Qt}$. The term $e^{Qt}$ is called the matrix exponential. Computationally, it may be evaluated by the series expansion of the matrix exponential,

$$e^{Qt} = \sum_{n=0}^{\infty} \frac{(Qt)^n}{n!},$$

or through the identity $e^{Qt} = \lim_{n \to \infty} \left(I + \frac{Qt}{n}\right)^n$.

Example: Find $P(t)$ for the barber shop example given previously. (Computational exercise; a sketch follows.)
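A minimal computational sketch for the exercise, assuming the barber shop parameters restated in the limiting-probabilities example below: two barbers, two waiting chairs, Poisson arrivals at 5/hour, and exponential service at 2/hour per barber, so the state is the number of customers present, 0 through 4.

```python
# Sketch: P(t) = e^{Qt} for the barber shop, evaluated with SciPy's expm.
import numpy as np
from scipy.linalg import expm

lam, mu = 5.0, 2.0                    # arrival rate; service rate per barber
Q = np.zeros((5, 5))
for n in range(4):
    Q[n, n + 1] = lam                 # arrival: n -> n+1
for n in range(1, 5):
    Q[n, n - 1] = mu * min(n, 2)      # departure: at most 2 barbers busy
np.fill_diagonal(Q, -Q.sum(axis=1))   # rows of a generator sum to zero

P = expm(Q * 1.0)                     # transition matrix over one hour
print(P[2])                           # state distribution at 9 a.m., given
                                      # two customers present at 8 a.m.
```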

Limiting Probabilities

Consider a stable Markov process, i.e., one whose embedded Markov chain (defined at the transition epochs) is irreducible and recurrent (ergodic). It may be shown that the transition functions $P_{ij}(t)$ of such a process possess a limit, and that this limit is independent of the starting state $i$. Let $\pi_j \equiv \lim_{t \to \infty} P_{ij}(t)$ denote these limiting probabilities. Differentiating with respect to time and interchanging the order of the differentiation and limiting operations gives

$$\frac{d}{dt} \lim_{t \to \infty} P_{ij}(t) = \lim_{t \to \infty} \frac{d}{dt} P_{ij}(t) = 0.$$

In matrix notation, this is equivalent to $\lim_{t \to \infty} P'(t) = 0$. Using the forward Kolmogorov equations, it follows that $\lim_{t \to \infty} P(t)\,Q = 0$, which implies that $\pi Q = 0$, i.e.,

$$[\pi_0, \pi_1, \pi_2, \ldots]
\begin{bmatrix}
-\nu_0 & q_{01} & q_{02} & \cdots \\
q_{10} & -\nu_1 & q_{12} & \cdots \\
q_{20} & q_{21} & -\nu_2 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}
= [0, 0, 0, \ldots].$$

We can interpret the $j$th equation of this identity,

$$\sum_{k \neq j} \pi_k q_{kj} - \pi_j \nu_j = 0,$$

as prescribing that the steady-state input rate to state $j$ equals the steady-state output rate. Hence, the limiting distribution for a Markov process is the row vector $\pi = \{\pi_j\}$ that satisfies $\pi Q = 0$ and $\pi \mathbf{1} = 1$, where $\mathbf{1}$ is a column vector of ones.

When the state space is finite, a simple way to compute the limiting probabilities is to replace the first linear equation of $\pi Q = 0$ by the normalization $\pi \mathbf{1} = 1$. This yields a matrix $Q_1$, defined as the matrix $Q$ with its first column replaced by a column of ones, and a row vector $b = (1, 0, 0, \ldots)$. The resulting system of equations is $\pi Q_1 = b$, and the vector $\pi$ is given by the first row of $Q_1^{-1}$.

Example: Find the limiting probabilities of the barber shop example. Consider a barber shop with two barbers and two waiting chairs. Customers arrive at a rate of 5/hour, and the barbers each serve customers at a rate of 2/hour. Customers arriving at a fully occupied shop leave without being served. When the shop opens at 8 a.m., there are already two customers waiting to be served. Assume that the arrivals are Poisson, service times are exponential, and the arrival process is independent of the service process. (A computational sketch follows.)
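A sketch of the finite-state recipe above applied to this example, assuming (as in the earlier sketch) that the 2/hour service rate is per barber:

```python
# Limiting probabilities via the Q_1 device: replace the first column of Q
# with ones and solve pi Q_1 = b, with b = (1, 0, 0, 0, 0).
import numpy as np

lam, mu = 5.0, 2.0
Q = np.zeros((5, 5))
for n in range(4):
    Q[n, n + 1] = lam
for n in range(1, 5):
    Q[n, n - 1] = mu * min(n, 2)
np.fill_diagonal(Q, -Q.sum(axis=1))

Q1 = Q.copy()
Q1[:, 0] = 1.0                        # normalization pi . 1 = 1
pi = np.linalg.inv(Q1)[0]             # pi is the first row of Q_1^{-1}
print(pi)                             # limiting probabilities pi_0..pi_4
print(pi @ Q)                         # numerically zero: pi Q = 0
```

Note that the initial condition (two customers waiting at 8 a.m.) plays no role here: the limit is independent of the starting state.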

Example: Find the limiting probabilities of the salesman problem. A salesman lives in town a and is responsible for towns a, b, and c. After some study, it has been determined that the amount of time spent in any one town is an exponentially distributed random variable whose mean depends on the town: 2 weeks in town a, 1.5 weeks in town b, and 1 week in town c. When he leaves town a, he is equally likely to go to either b or c; when he leaves either b or c, there is a 75% chance of returning home and a 25% chance of going to the other town. Let $X(t)$ be a random variable denoting the town the salesman is in at time $t$. (This example is revisited, with a computational sketch, in the Revenues and Costs section below.)

Absorbing Markov Processes

Consider a Markov process with at least one absorbing state, i.e., a state whose transition rate is $\nu_i = 0$. A process with at least one absorbing state is called an absorbing Markov process. For such processes, we are interested

in examining the transient states of the system prior to absorption. Let $T$ be the set of transient states, and let $T^c$ be the set of absorbing states. Further, let the $m \times m$ generator matrix $Q$ be partitioned as

$$Q = \begin{bmatrix} 0 & 0 \\ R & V \end{bmatrix},$$

where

$V$ is the $(m-r) \times (m-r)$ matrix of transition rates among the transient states, and $R$ is the $(m-r) \times r$ matrix of transition rates from the transient states to the $r$ absorbing (recurrent) states.

Let $D_{ij}(t)$ be a random variable denoting the duration of stay in transient state $j$ during the interval $(0, t)$, given that $X(0) = i$ for a transient state $i$; write $D_{ij} \equiv \lim_{t \to \infty} D_{ij}(t)$ for the total time spent in $j$ prior to absorption,

and let $\mu_{ij} = E[D_{ij}]$ and $\sigma^2_{ij} = \mathrm{Var}[D_{ij}]$. It follows that

$$N = \{\mu_{ij}\} = (-V)^{-1},$$

where the matrix $N$ is the continuous analog of the fundamental matrix $M$ computed for Markov chains. Further, the matrix of variances is

$$N_v = \{\sigma^2_{ij}\} = 2N(N \circ I) - (N \circ N),$$

where $\circ$ denotes the elementwise (Hadamard) product, so that $N \circ I$ is the diagonal part of $N$.

Let $S$ be the matrix of ultimate absorption probabilities. It follows that $S = (-V)^{-1} R = NR$. The matrix $S$ is the continuous analog of the matrix $F$ computed for Markov chains.

Example: Trauma Center. Consider a trauma center that has four operating beds and three beds for waiting patients. Arrival episodes of ambulances carrying

patients follow a Poisson process at a rate of one arrival per two hours. Let $p_i$ denote the probability that a particular episode brings $i$ patients, where $p_1 = 0.7$, $p_2 = 0.2$, $p_3 = 0.1$. The patients' length of stay on an operating bed follows an exponential distribution with a mean of 2.5 hours. The trauma center has a policy of not admitting new arrivals as soon as one of its waiting beds is filled. The center is interested in studying center closures caused by capacity limitations, starting from an epoch when all beds are empty. (A computational sketch follows.)
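A sketch under one plausible reading of the policy: the census (patients present) moves on transient states 0 through 4 while all waiting beds are free, and any arrival episode that would push the census to 5 or more sends the process to a single lumped absorbing state, "closed". Partial admission of an episode is not modeled; these are modeling assumptions, not part of the original statement.

```python
# Absorbing-process analysis of the trauma center: N = (-V)^{-1}, S = N R.
import numpy as np

rate = 0.5                            # ambulance episodes per hour
p = {1: 0.7, 2: 0.2, 3: 0.1}          # patients per episode
mu = 1 / 2.5                          # service rate per occupied bed

m = 5                                 # transient states: 0..4 patients
V = np.zeros((m, m))                  # rates among transient states
R = np.zeros((m, 1))                  # rates into the absorbing "closed" state
for n in range(m):
    for i, pi_ in p.items():
        if n + i < m:
            V[n, n + i] += rate * pi_
        else:
            R[n, 0] += rate * pi_     # census would reach 5+: center closes
    if n > 0:
        V[n, n - 1] = mu * n          # min(n, 4) = n here, since n <= 4
    V[n, n] = -(V[n].sum() + R[n, 0]) # diagonal completes the generator row

N = np.linalg.inv(-V)                 # mean time in j before closure, from i
Nv = 2 * N @ (N * np.eye(m)) - N * N  # variances: 2N(N o I) - N o N
S = N @ R                             # absorption probabilities (all 1 here)
print("expected hours open from empty:", N[0].sum())
```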

Revenues and Costs

Let $X = \{X(t),\ t \geq 0\}$ be a Markov process with an irreducible, recurrent state space $E$, a profit rate vector $f$, and a matrix of jump profits $h$. Further, let $\pi$ denote the steady-state limiting probabilities. Then the long-run profit per unit time is given by

$$\lim_{t \to \infty} \frac{1}{t}\, E\!\left[ \int_0^t f(X_s)\, ds + \sum_{s \leq t} h(X_{s^-}, X_s) \right] = \sum_{i \in E} \pi(i) \left[ f(i) + \sum_{k \in E,\, k \neq i} Q(i, k)\, h(i, k) \right],$$

where the sum inside the expectation runs over the jump epochs $s$ of the process.

Example: Consider the salesman problem. Assume the revenue possible from each town varies: in town a, his profit accrues at a rate of $80 per day; in town b, $100 per day; and in town c, $125 per day. There is also a cost associated with changing towns, estimated at $0.25 per mile; it is 50 miles from a to b, 65 miles from a to c, and 80 miles from

b to c. Determine the long-run average weekly profit, using 5-day weeks. (A computational sketch follows.)
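A sketch computing this, with rates per week and the transition structure from the salesman example above; the only added step is converting daily profits to 5-day weeks.

```python
# Long-run average weekly profit for the salesman:
# sum_i pi(i) [ f(i) + sum_{k != i} Q(i,k) h(i,k) ],
# with h the (negative) travel costs attached to each jump.
import numpy as np

states = ["a", "b", "c"]
nu = {"a": 1 / 2.0, "b": 1 / 1.5, "c": 1 / 1.0}      # jump rates, per week
jump = {("a", "b"): 0.50, ("a", "c"): 0.50,
        ("b", "a"): 0.75, ("b", "c"): 0.25,
        ("c", "a"): 0.75, ("c", "b"): 0.25}          # embedded-chain probs
f = {"a": 5 * 80.0, "b": 5 * 100.0, "c": 5 * 125.0}  # $/week (5-day weeks)
miles = {("a", "b"): 50, ("a", "c"): 65, ("b", "c"): 80}
h = {k: -0.25 * d for k, d in miles.items()}         # jump "profits" (costs)
h.update({(j, i): c for (i, j), c in h.items()})     # costs are symmetric

Q = np.zeros((3, 3))
for (i, j), pr in jump.items():
    Q[states.index(i), states.index(j)] = nu[i] * pr # q_ij = nu_i * p_ij
np.fill_diagonal(Q, -Q.sum(axis=1))

Q1 = Q.copy()
Q1[:, 0] = 1.0                                       # normalization pi . 1 = 1
pi = np.linalg.inv(Q1)[0]                            # limiting probabilities

profit = sum(pi[s] * (f[i] + sum(Q[s, states.index(k)] * h[(i, k)]
                                 for k in states if k != i))
             for s, i in enumerate(states))
print(pi)                 # (6/11, 3/11, 2/11)
print(profit)             # long-run average profit, dollars per week
```

The limiting probabilities come out to $(6/11, 3/11, 2/11)$, giving a revenue rate of about $468 per week and, after travel costs, a long-run average profit of roughly $458.5 per week.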