FDST Markov Chain Models
Tuesday, February 11, 2014, 2:01 PM

Homework 1 due Friday, February 21 at 2 PM.
Reading: Karlin and Taylor, Sections 2.1-2.3

Almost all of our Markov chain models will be time-homogeneous, meaning the dynamical rules are invariant with respect to the epoch. This simplifies the description of the Markov chain: the stochastic update rule and the probability transition matrix do not depend explicitly on the epoch.

1) Two-state system (M = 2), which can abstractly be thought of as an on/off system.

State 1: Off/free/unbound/tumble/rain
State 2: On/busy/bound/run/dry

When the system is on, there is a probability q that it turns off during the next epoch. When the system is off, there is a probability p that it turns on during the next epoch.

Probability transition matrix:

    P = [ 1-p    p  ]
        [  q    1-q ]

Supplement with an appropriate initial probability distribution. The stochastic update rule, as always, could be written down in principle, but it is awkward.

2) Queueing models with maximum capacity M (Karlin and Taylor, Sec. 2.2C)

We'll consider for now a queue with a single server that handles one request/demand at a time; any other pending requests are put into the queue.
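The two-state model in 1) above is easy to simulate directly. A minimal Python sketch, not part of the original notes; the values p = 0.3, q = 0.2, the step count, and the seed are illustrative choices. In the long run the fraction of time spent "on" should approach p/(p+q).

```python
import random

def simulate_two_state(p, q, n_steps, state=1, seed=0):
    """Simulate the on/off chain: state 1 = off, state 2 = on.

    From off, the system turns on with probability p; from on, it
    turns off with probability q.  Returns the visited states
    (length n_steps + 1, including the initial state).
    """
    rng = random.Random(seed)
    path = [state]
    for _ in range(n_steps):
        u = rng.random()
        if state == 1:              # off: turns on w.p. p
            state = 2 if u < p else 1
        else:                       # on: turns off w.p. q
            state = 1 if u < q else 2
        path.append(state)
    return path

path = simulate_two_state(p=0.3, q=0.2, n_steps=10000)
# Empirical fraction of time "on"; should be near p/(p+q) = 0.6.
frac_on = sum(1 for s in path if s == 2) / len(path)
print(round(frac_on, 2))
```

The comparison against p/(p+q) anticipates the stationary distribution of this chain, which for the two-state case can be computed by hand.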
We define a state space for the queue by counting the number of requests that are either being actively served or waiting in the queue.

As for the parameter domain, what should an epoch correspond to?
- Equally spaced time intervals
- Each completion of a request
- Each arrival of a request

Let's first consider the case in which an epoch corresponds to a fixed time interval. We will assume the time interval is short enough that it is very unlikely that two or more changes happen to the system within it (a typical, convenient, but not always necessary assumption); otherwise the model is much more complicated to write down.

With this simplifying assumption about the time step corresponding to the epoch, the following can happen during one epoch:
- A request can be completed (with probability q)
- A new request arrives (with probability p)
- Nothing changes (with probability 1 - p - q)

The queue has maximum capacity M; it rejects further requests.

We'll write down the Markov chain model in both formulations, starting with the probability transition matrix (again with a suitable initial distribution). For interior states 0 < i < M,

    P_{i,i-1} = q,   P_{i,i} = 1 - p - q,   P_{i,i+1} = p,

while at the boundaries (state 0 has no request to complete; state M rejects arrivals),

    P_{0,0} = 1 - p,   P_{0,1} = p,   P_{M,M-1} = q,   P_{M,M} = 1 - q,

with all other entries 0.
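To make the boundary rows concrete, here is a Python sketch, not from the notes, that builds the (M+1) x (M+1) transition matrix and cross-checks it against a direct simulation using the clipped additive update X_{n+1} = min(M, max(0, X_n + xi_{n+1})). The values M = 4, p = 0.2, q = 0.3 are illustrative, and the function names are mine.

```python
import random

def queue_matrix(M, p, q):
    """(M+1)x(M+1) transition matrix for the fixed-time-step queue."""
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    P[0][0], P[0][1] = 1 - p, p            # empty queue: nothing to complete
    for i in range(1, M):
        P[i][i - 1], P[i][i], P[i][i + 1] = q, 1 - p - q, p
    P[M][M - 1], P[M][M] = q, 1 - q        # full queue: arrivals rejected
    return P

def simulate_queue(M, p, q, n_steps, seed=0):
    """Same chain via the clipped additive update:
    X_{n+1} = min(M, max(0, X_n + xi)), xi in {-1, 0, +1}."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        u = rng.random()
        xi = 1 if u < p else (-1 if u < p + q else 0)
        x = min(M, max(0, x + xi))
        path.append(x)
    return path

P = queue_matrix(M=4, p=0.2, q=0.3)
print(all(abs(sum(row) - 1) < 1e-12 for row in P))   # every row sums to 1
path = simulate_queue(M=4, p=0.2, q=0.3, n_steps=2000)
print(all(0 <= x <= 4 for x in path))                # clipping keeps 0 <= X <= M
```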
Intuitively, with a model like this where the state is incremented or decremented randomly, it's natural to write the role of the noise in the stochastic update rule as additive:

    X_{n+1} = X_n + xi_{n+1},

where xi_{n+1} = +1 with probability p, -1 with probability q, and 0 with probability 1 - p - q. But not quite... there are "reflecting" boundary conditions at 0 and M. Here is a compact way to handle these boundary conditions:

    X_{n+1} = min(M, max(0, X_n + xi_{n+1})).

Now let's consider formulating a queueing model where the epochs are defined by the moments at which a service is completed. We'll look at this again from both modeling standpoints.

X_n is now the number of requests being served or in the queue after the nth request has been processed. The information needed to formulate the Markov chain model in this setting is:

    a_j = the probability that j requests arrive during a service period, for j = 0, 1, 2, ...

Probability transition matrix:

    P_{0,j} = a_j, and for i >= 1: P_{i,j} = a_{j-i+1} for j >= i - 1, with all other entries 0.
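The service-completion formulation can be simulated once a concrete distribution {a_j} is chosen. The sketch below, not part of the notes, takes the arrivals during a service period to be Poisson with mean 0.8 as a purely illustrative choice (mean < 1 keeps the queue stable), and advances the chain via X_{n+1} = max(X_n - 1, 0) + A_{n+1}; the function names are mine.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's product-of-uniforms method for a Poisson(lam) sample
    (fine for small lam)."""
    L, k, prod = math.exp(-lam), 0, rng.random()
    while prod > L:
        k += 1
        prod *= rng.random()
    return k

def simulate_service_epochs(lam, n_steps, x0=0, seed=2):
    """Service-completion chain: X_{n+1} = max(X_n - 1, 0) + A_{n+1},
    where A_{n+1} is the number of arrivals during the nth service
    period (here Poisson(lam), an illustrative choice of {a_j})."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = max(x - 1, 0) + sample_poisson(lam, rng)
        path.append(x)
    return path

path = simulate_service_epochs(lam=0.8, n_steps=5000)
print(min(path))  # the queue length never goes negative
```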
Stochastic update rule:

    X_{n+1} = X_n - 1 + A_{n+1} if X_n >= 1,   X_{n+1} = A_{n+1} if X_n = 0.

Or better yet,

    X_{n+1} = max(X_n - 1, 0) + A_{n+1},

where A_{n+1} is the random number of arrivals during the nth service period, and has probability distribution given by the {a_j} above.

3) Random Walk on a Graph (Lawler Ch. 1)

In the simplest version, when the random walker is at a given node (state) of the system, it chooses an edge with equal probability to make
its next move. But for applications it's more useful to allow general probabilities for moving along the possible edges, provided that the probabilities on the edges leaving each node add up to 1. There is no need for the probabilities corresponding to a given edge to be the same in both directions.

A general probability transition matrix P = (P_{ij}) for such a graph has all entries nonnegative and all row sums equal to 1. The interpretation is that P_{ij} is the probability that a random walker at node i moves to node j over the next epoch; P_{ii} is the probability that a random walker at node i stays at node i over the next epoch.

This more general framework is useful in applications:
- nodes could represent actual discrete spatial locations, e.g., patches between which an animal moves
- nodes could represent more abstract categories of state:
  - electronic excitation states
  - configurations of biomolecules (see the work of Christof Schütte)
  - financial/credit conditions of organizations/countries/individuals

The stochastic update rule is, as before, awkward to write down.

We could imagine that the model presented above corresponds to some system being observed at regular time intervals. One could alternatively write down a Markov chain model in which the epochs are defined in terms of the times at which the state changes. This could be done from scratch; the probability transition matrix would then have the same structure except that the diagonal entries would be 0. Alternatively, one could derive such a Markov chain model from the originally posed one (formulated in terms of regular time steps), provided we assume the original Markov chain had a small enough time step that it didn't miss any transitions. This is what's known as deriving the embedded Markov chain from the original Markov chain.
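Before computing the embedded chain, here is a sketch of simulating the graph walk itself. The 3-node transition matrix below is a hypothetical example, not the graph drawn in the notes; note its edge probabilities are not symmetric.

```python
import random

def walk(P, start, n_steps, seed=3):
    """Random walk on nodes 0..len(P)-1: from node i, move to node j
    with probability P[i][j] (rows of P must sum to 1)."""
    rng = random.Random(seed)
    node, path = start, [start]
    for _ in range(n_steps):
        u, acc = rng.random(), 0.0
        for j, pij in enumerate(P[node]):   # inverse-CDF sampling of row i
            acc += pij
            if u < acc:
                node = j
                break
        path.append(node)
    return path

# Hypothetical 3-node graph; probabilities along an edge differ by direction.
P = [[0.1, 0.6, 0.3],
     [0.5, 0.0, 0.5],
     [0.2, 0.3, 0.5]]
path = walk(P, start=0, n_steps=1000)
print(len(path))  # 1001 states visited, including the start
```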
To derive the probability transition matrix for the embedded Markov chain from the probability transition matrix of the original Markov chain, we do a conditional probability calculation:

    Phat_{ij} = P(X_{n+1} = j | X_n = i, X_{n+1} != i) = P_{ij} / (1 - P_{ii}) for j != i, and Phat_{ii} = 0,

assuming P_{ii} < 1. This is just one particular relation that follows from the fact that any probability rule remains valid if a condition is added consistently to all probabilities appearing in it. This is because one is simply replacing the given probability measure by the corresponding conditioned probability measure (think Bayesian).
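The conditional-probability renormalization above amounts to zeroing the diagonal and dividing each row by 1 - P_ii. A small sketch, with a hypothetical 3-state matrix (not from the notes):

```python
def embedded_chain(P):
    """Embedded-chain matrix: Phat[i][j] = P[i][j] / (1 - P[i][i]) for
    j != i, and Phat[i][i] = 0.  Requires P[i][i] < 1 for every i."""
    n = len(P)
    Phat = [[0.0] * n for _ in range(n)]
    for i in range(n):
        stay = P[i][i]
        for j in range(n):
            if j != i:
                Phat[i][j] = P[i][j] / (1.0 - stay)
    return Phat

# Hypothetical 3-state original chain (rows sum to 1).
P = [[0.5, 0.25, 0.25],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
Phat = embedded_chain(P)
print(Phat[0])  # [0.0, 0.5, 0.5]
```

Each row of Phat still sums to 1, since the off-diagonal mass P_{ij} (j != i) totals exactly 1 - P_{ii} before renormalization.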