Markov Processes (Cont'd)

Kolmogorov Differential Equations

The Kolmogorov differential equations characterize the transition functions $\{P_{ij}(t)\}$ of a Markov process. The time-dependent behavior of the process is studied by solving these equations. Using the Chapman-Kolmogorov equations, we write
\[
P_{ij}(t+h) = \sum_{k \in E} P_{ik}(h) P_{kj}(t) = \sum_{k \in E,\, k \neq i} P_{ik}(h) P_{kj}(t) + P_{ii}(h) P_{ij}(t).
\]
Subtracting $P_{ij}(t)$ from both sides yields
\[
P_{ij}(t+h) - P_{ij}(t) = \sum_{k \in E,\, k \neq i} P_{ik}(h) P_{kj}(t) - \bigl(1 - P_{ii}(h)\bigr) P_{ij}(t).
\]
Dividing this expression by $h$ and letting $h \to 0$ yields
\[
\lim_{h \to 0} \frac{P_{ij}(t+h) - P_{ij}(t)}{h}
= \sum_{k \in E,\, k \neq i} \left( \lim_{h \to 0} \frac{P_{ik}(h)}{h} \right) P_{kj}(t)
- \left( \lim_{h \to 0} \frac{1 - P_{ii}(h)}{h} \right) P_{ij}(t).
\]
Using $q_{ij}$ and $\nu_i$ as previously defined, we find the following system of (backward) Kolmogorov equations:
\[
P'_{ij}(t) = \sum_{k \in E,\, k \neq i} q_{ik} P_{kj}(t) - \nu_i P_{ij}(t) \quad \text{for } t \geq 0 \text{ and } i, j \in E.
\]
The initial condition is given by $P_{ij}(0) = \delta_{ij}$. In matrix notation, this may be stated as
\[
\frac{d}{dt} P(t) = Q P(t) \quad \text{for } t \geq 0, \qquad P(0) = I.
\]
In a similar fashion, the (forward) differential equations may be found, yielding
\[
\frac{d}{dt} P(t) = P(t) Q \quad \text{for } t \geq 0, \qquad P(0) = I.
\]
In practice, the forward equations are used more frequently.
For any $t \geq 0$, it may be shown that the solution to both the forward and the backward equations is $P(t) = e^{Qt}$. The term $e^{Qt}$ is called the matrix exponential. Computationally, it may be evaluated by the series expansion
\[
e^{Qt} = \sum_{n=0}^{\infty} \frac{(Qt)^n}{n!},
\]
or through the identity
\[
e^{Qt} = \lim_{n \to \infty} \left( I + \frac{Qt}{n} \right)^n.
\]

Example: Find $P(t)$ for the barber shop example given previously. (Computational Exercise)
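A sketch of this computational exercise, reading the barber shop (two barbers at 2/hour each, two waiting chairs, arrivals at 5/hour) as a birth-death process on states 0 through 4 and evaluating $P(t) = e^{Qt}$ with SciPy's matrix exponential; the use of `scipy.linalg.expm` and the hour-long horizon `t = 1.0` are choices made here, not given in the notes:

```python
import numpy as np
from scipy.linalg import expm

# Generator Q for the barber shop: state n = number of customers present,
# arrivals at rate 5/hour (blocked at capacity 4), each of the 2 barbers
# serving at rate 2/hour.
lam, mu, capacity, servers = 5.0, 2.0, 4, 2
Q = np.zeros((capacity + 1, capacity + 1))
for n in range(capacity + 1):
    if n < capacity:
        Q[n, n + 1] = lam                   # arrival
    if n > 0:
        Q[n, n - 1] = mu * min(n, servers)  # service completion
    Q[n, n] = -Q[n].sum()                   # diagonal = -(total rate out)

t = 1.0                                     # one hour
P_t = expm(Q * t)                           # P(t) = e^{Qt}
print(np.round(P_t, 4))                     # row i: distribution of X(t) given X(0) = i
```

Each row of the printed matrix is a probability distribution, so the rows sum to one; this is a quick correctness check on the constructed generator.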
Limiting Probabilities

Consider a stable Markov process, i.e., one whose embedded Markov chain, defined at the transition epochs, is irreducible and recurrent (ergodic). It may be shown that the transition functions $P_{ij}(t)$ of the corresponding Markov process possess a limit, and that this limit is independent of the starting state $i$. Let $\pi_j \equiv \lim_{t \to \infty} P_{ij}(t)$ denote the limiting probabilities. Differentiating with respect to time and interchanging the order of the differentiation and limiting operations gives
\[
\frac{d}{dt} \lim_{t \to \infty} P_{ij}(t) = \lim_{t \to \infty} \frac{d}{dt} P_{ij}(t) = 0.
\]
In matrix notation, this is equivalent to $\lim_{t \to \infty} P'(t) = 0$. Using the forward Kolmogorov equations, it follows that $\lim_{t \to \infty} P(t) Q = 0$; since every row of $\lim_{t \to \infty} P(t)$ equals $\pi$, this implies that $\pi Q = 0$, i.e.,
[π 0, π 1, π 2, ] ν 0 q 01 q 02 q 10 ν 1 q 12 q 20 q 21 ν 1 = [0, 0, 0, ] We can interpret the jth equation of this identity, k j π kq kj = π j ν j = 0, as prescribing that the steady state input rate to state j being equal to the steady state output rate. Hence, the limiting distribution for
a Markov process is the row vector $\pi = \{\pi_j\}$ that satisfies $\pi Q = 0$ and $\pi \mathbf{1} = 1$, where $\mathbf{1}$ is a column vector of ones. When the state space is finite, a simple way to compute the limiting probabilities is to replace the first linear equation of $\pi Q = 0$ by $\pi \mathbf{1} = 1$. This yields a matrix $Q_1$, defined as the matrix $Q$ with its first column replaced by $\mathbf{1}$, and a row vector $b = [1, 0, 0, \ldots]$. The system of equations becomes $\pi Q_1 = b$, and the vector $\pi$ is given by the first row of $Q_1^{-1}$.
Example: Find the limiting probabilities of the barber shop example. That is, consider a barber shop with two barbers and two waiting chairs. Customers arrive at a rate of 5/hour, and the barbers serve customers at a rate of 2/hour. Customers arriving to a fully occupied shop leave without being served. When the shop opens at 8 a.m., there are already two customers waiting to be served. Assume that the arrivals are Poisson, service times are exponential, and the arrival process is independent of the service process.
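A sketch of the column-replacement method applied to this example; the birth-death labeling of the states (0 through 4 customers) follows the reading used above:

```python
import numpy as np

# Generator Q for the barber shop: states 0..4 customers present.
lam, mu, capacity, servers = 5.0, 2.0, 4, 2
Q = np.zeros((capacity + 1, capacity + 1))
for n in range(capacity + 1):
    if n < capacity:
        Q[n, n + 1] = lam
    if n > 0:
        Q[n, n - 1] = mu * min(n, servers)
    Q[n, n] = -Q[n].sum()

# Replace the first column of Q by ones to obtain Q1; then pi Q1 = b with
# b = [1, 0, ..., 0], so pi is the first row of Q1^{-1}.
Q1 = Q.copy()
Q1[:, 0] = 1.0
pi = np.linalg.inv(Q1)[0, :]
print(np.round(pi, 4))
```

Note that the initial condition (two customers waiting at 8 a.m.) is irrelevant here: the limiting probabilities do not depend on the starting state.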
Example: Find the limiting probabilities of the salesman problem. That is, a salesman lives in town a and is responsible for towns a, b, and c. After some study, it has been determined that the amount of time spent in any one town is an exponentially distributed random variable whose mean depends on the town: 2 weeks in town a, 1.5 weeks in town b, and 1 week in town c. When he leaves town a, he is equally likely to go to either b or c; when he leaves either b or c, there is a 75% chance of returning home and a 25% chance of going to the other town. Let $X(t)$ be a random variable denoting the town the salesman is in at time $t$.

Absorbing Markov Processes

Consider a Markov process with at least one absorbing state, i.e., a state with transition rate $\nu_i = 0$. A process with at least one absorbing state is called an absorbing Markov process. For such processes, we are interested
in examining the transient states of the system prior to absorption. Let $T$ be the set of transient states, and let $T^c$ be the set of absorbing states, with $|T^c| = r$. Further, let the $m \times m$ generator matrix $Q$ be partitioned as
\[
Q = \begin{bmatrix} 0 & 0 \\ R & V \end{bmatrix},
\]
where $V$ is an $(m-r) \times (m-r)$ matrix of transition rates among the transient states, and $R$ is an $(m-r) \times r$ matrix of transition rates from transient to recurrent (absorbing) states. Let $D_{ij}(t)$ be a random variable denoting the duration of stay in transient state $j$ during the interval $(0, t)$, given that $X(0) = i$, where $i$ is also a transient state,
and let $\mu_{ij} = \lim_{t \to \infty} E[D_{ij}(t)]$ and $\sigma^2_{ij} = \lim_{t \to \infty} Var[D_{ij}(t)]$. It follows that
\[
N = \{\mu_{ij}\} = -V^{-1},
\]
where the matrix $N$ is the continuous analog of the fundamental matrix $M$ computed for Markov chains. Further, the matrix of variances is
\[
N_v = \{\sigma^2_{ij}\} = 2\, V^{-1} (V^{-1} \circ I) - (V^{-1} \circ V^{-1}),
\]
where $\circ$ denotes the elementwise (Hadamard) product.
Let $S$ be the matrix of ultimate absorption probabilities. It follows that
\[
S = -V^{-1} R = N R.
\]
The matrix $S$ is the continuous analog of the matrix $F$ computed for Markov chains.

Example: Trauma Center. Consider a trauma center that has four operating beds and three beds for waiting patients. Arrival episodes of ambulances carrying
patients follow a Poisson process at a rate of one arrival per two hours. Let $p_i$ denote the probability that a particular episode carries $i$ patients, where $p_1 = 0.7$, $p_2 = 0.2$, $p_3 = 0.1$. A patient's length of stay in an operating bed follows an exponential distribution with a mean of 2.5 hours. The trauma center has a policy of not admitting new arrivals as soon as one of its waiting beds is filled. The center is interested in studying center closures caused by capacity limitations, starting from an epoch when all beds are empty.
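A sketch of the absorption analysis for this example, under one reading of the policy (assumed here, not spelled out in the notes): the state is the number of patients present, and an arrival episode that raises the census to five or more, i.e., fills a waiting bed, closes the center, which we treat as absorption. States 0 through 4 are then transient and states 5 through 7 absorbing:

```python
import numpy as np

lam = 0.5                     # arrival episodes per hour (one per two hours)
mu = 1 / 2.5                  # service rate per occupied operating bed
batch = {1: 0.7, 2: 0.2, 3: 0.1}
transient = [0, 1, 2, 3, 4]   # center open
absorbing = [5, 6, 7]         # center closed (census at the closing arrival)

V = np.zeros((5, 5))          # rates among transient states
R = np.zeros((5, 3))          # rates from transient to absorbing states
for n in transient:
    for k, p in batch.items():            # arrival episode with k patients
        if n + k <= 4:
            V[n, n + k] += lam * p
        else:
            R[n, absorbing.index(n + k)] += lam * p
    if n > 0:
        V[n, n - 1] += mu * min(n, 4)     # one of the busy beds finishes
    V[n, n] = -(V[n].sum() + R[n].sum())

N = -np.linalg.inv(V)                     # expected time in each open state
S = N @ R                                 # ultimate absorption probabilities
print("expected hours until closure from empty:", N[0].sum())
print("closure-state distribution from empty:", np.round(S[0], 4))
```

Since absorption is certain from every transient state, each row of $S$ sums to one; the row sums of $N$ give the expected time until closure from each starting census.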
Revenues and Costs

Let $X = \{X(t),\ t \geq 0\}$ be a Markov process with an irreducible, recurrent state space $E$, a profit rate vector $f$, and a matrix of jump profits $h$. Further, let $\pi$ denote the steady-state limiting probabilities. Then, the long-run profit per unit time is given by
\[
\lim_{t \to \infty} \frac{1}{t}\, E\!\left[ \int_0^t f(X_s)\, ds + \sum_{s \leq t} h(X_{s-}, X_s) \right]
= \sum_{i \in E} \pi(i) \left[ f(i) + \sum_{k \in E,\, k \neq i} Q(i, k)\, h(i, k) \right].
\]

Example: Consider the salesman problem. Assume the revenue possible from each town varies: in town a, his profit accrues at a rate of \$80 per day; in town b, \$100 per day; and in town c, \$125 per day. There is also a cost associated with changing towns, estimated at \$0.25 per mile; it is 50 miles from a to b, 65 miles from a to c, and 80 miles from b to c. Determine the long-run average weekly profit, using 5-day weeks.
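A sketch of this computation. Rates are expressed per week (mean sojourns of 2, 1.5, and 1 week), daily profits are scaled by the 5-day week, and the travel costs enter as negative jump profits; the state ordering (a, b, c) is a choice made here:

```python
import numpy as np

nu = np.array([1 / 2.0, 1 / 1.5, 1 / 1.0])   # exit rates from a, b, c (per week)
P = np.array([[0.00, 0.50, 0.50],            # embedded-chain transition matrix
              [0.75, 0.00, 0.25],
              [0.75, 0.25, 0.00]])
Q = nu[:, None] * P                          # q_ik = nu_i * P_ik
np.fill_diagonal(Q, -nu)

# Limiting probabilities via the column-replacement method: pi Q1 = b.
Q1 = Q.copy()
Q1[:, 0] = 1.0
pi = np.linalg.inv(Q1)[0, :]

f = 5 * np.array([80.0, 100.0, 125.0])       # weekly profit rates (5-day weeks)
miles = np.array([[ 0.0, 50.0, 65.0],
                  [50.0,  0.0, 80.0],
                  [65.0, 80.0,  0.0]])
h = -0.25 * miles                            # jump profits: travel costs

rate = sum(pi[i] * (f[i] + sum(Q[i, k] * h[i, k] for k in range(3) if k != i))
           for i in range(3))
print("long-run average profit per week:", round(rate, 2))
```

Solving the balance equations by hand gives $\pi = (6/11,\ 3/11,\ 2/11)$ for towns a, b, and c, which the code reproduces before weighting the profit and travel-cost terms.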