
Large-Time Behavior for Continuous-Time Markov Chains
Friday, December 02, 2011 10:58 AM

Homework 4 due on Thursday, December 15 at 5 PM (hard deadline).

How are the formulas for the large-time behavior of discrete-time Markov chains modified for continuous-time Markov chains?

Classification of Markov chains (transience vs. recurrence)

Communication classes are determined in the same way as for discrete-time Markov chains, by looking at the topology of the transition diagram. Transience or recurrence is determined by whether the embedded discrete-time Markov chain is transient or recurrent.

Positive recurrence vs. null recurrence

Again, an irreducible positive recurrent CTMC is precisely an irreducible CTMC with a stationary distribution. (And if there is no stationary distribution, then the CTMC is not positive recurrent.) The stationary distribution has to be computed differently than for the embedded discrete-time Markov chain, because the amount of time spent in each state is relevant.

Stationary distribution for a CTMC: it should be a time-independent solution to the forward Kolmogorov equation (which describes how the
probability distribution for the state evolves). As for discrete-time Markov chains, one can try to find detailed balance solutions. The detailed balance system is overdetermined, so a solution is not guaranteed to exist, but it is usually easy to check whether a detailed balance solution can be found; if so, it does give a valid stationary distribution (assuming the other two conditions, nonnegativity and normalization, hold).

Once this classification is done, we can compute the long-time properties by the following extensions of the discrete-time theory:

1. For positive recurrent classes, one can determine the long-time behavior by using the stationary
distribution and the continuous-time version of the law of large numbers for Markov chains.

2. When starting in a transient state, one can do an absorption probability calculation to determine which (if any) of the recurrent classes the Markov chain will move into, and with what probabilities. This is done by applying the absorption probability calculations to the embedded discrete-time Markov chain.

3. If one wants to compute the expected reward/cost accumulated while moving through transient states, until one hits a recurrent class, then one can use a modification of the discrete-time formula. Note that here f is the rate at which the cost/reward is accumulated.
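The equations invoked in this discussion did not survive transcription; writing A = (a_ij) for the infinitesimal generator of the CTMC (a_ij the jump rate from i to j for i ≠ j, and a_ii = -Σ_{j≠i} a_ij), they should be the standard ones:

```latex
% Stationary distribution: time-independent solution of the forward
% Kolmogorov equation  d\pi/dt = \pi A, i.e.
\pi A = 0, \qquad \sum_i \pi_i = 1, \qquad \pi_i \ge 0.

% Detailed balance (overdetermined; a solution need not exist):
\pi_i \, a_{ij} = \pi_j \, a_{ji} \quad \text{for all } i \neq j.

% Law of large numbers for an irreducible positive recurrent CTMC:
\lim_{t \to \infty} \frac{1}{t} \int_0^t g(X(s))\, ds
  = \sum_j \pi_j \, g(j) \quad \text{almost surely}.

% Expected cost/reward accumulated before hitting the recurrent set
% (f = accumulation rate, \tau = hitting time of the recurrent set):
u(i) = \mathbb{E}\!\left[\int_0^{\tau} f(X(s))\, ds \,\middle|\, X(0) = i\right],
\qquad
\sum_j a_{ij}\, u(j) = -f(i) \ \text{ for transient } i,
\quad u = 0 \ \text{ on the recurrent set}.
```

The last equation is the continuous-time analogue of the discrete-time first-step-analysis system, with the generator A playing the role of (P - I).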

A proper derivation using first-step analysis can be found in Lawler Sec. 3.3 and Karlin and Taylor Ch. 4.

4. As for discrete-time Markov chains, not much more can be said quantitatively about null recurrent classes beyond the general properties already described for them in the discrete-time setting.

Similar formulas apply when one extends to continuous state spaces (stochastic differential equations), but then the infinitesimal generator A is given by a second-order differential operator.

Renewal processes

Reading: Karlin and Taylor Ch. 5; Resnick Ch. 3

Renewal processes generalize the Poisson process by relaxing the Markov property of the associated counting process. In particular, the incidents generated by a renewal process only lose memory when a new incident occurs; while waiting between incidents, the process does have memory of how long it has been waiting. This relaxation of the Markov property means that the time intervals between successive incidents in a renewal process are
independent but not necessarily exponentially distributed. (Usually the times between successive incidents are identically distributed, except possibly the time until the first incident.)

Applications:
- Equipment replacement
- Neuronal signals, particularly if one is looking at outgoing signals
- Incidents following one another in distance rather than time (a spatial renewal process)
- Generic events associated to continuous-time Markov chains with the strong Markov property: times of successive visits to a certain state

More generally, if continuous-time Markov chain models are modified to have nonexponential distributions for the times between state changes, the result is called a semi-Markov process, but it is not necessarily a renewal process because a state change may not refresh the system. Examples:
- Queueing models with nonexponential times for requests quitting the queue
- Branching processes with age considerations, so that branching events are not separated by exponentially distributed times
- Epidemiological models where the times between infection and recovery are not exponentially distributed

The theory of semi-Markov processes is relatively
difficult, and I haven't really seen a unified treatment. But for specific models it's possible to develop a good deal of analysis (and renewal processes are just the simplest example).

Let's begin by viewing a renewal process as a point process on the positive real line, with S_j denoting the time of the jth incident, so that the interincident (interarrival) times are T_j = S_j - S_{j-1} (taking S_0 = 0). For a standard (pure) renewal process, all the interincident times are assumed to be independent and identically distributed. (A commonly studied variation, called a delayed renewal process, allows T_1 to have a different probability distribution, but we won't go into this.) So pure renewal processes have their statistical dynamics
completely described by the probability distribution of the interincident times, and this is usually expressed in terms of a cumulative distribution function (CDF). For continuously distributed interincident times, one can express the CDF by integrating the PDF of the interincident times. The reason CDFs are used in renewal process theory (rather than PDFs) is that the interincident time often is not purely continuous; it could be discrete or even a hybrid discrete-continuous random variable, and CDFs are a unified way to describe discrete, continuous, and hybrid random variables. A Poisson point process corresponds to the renewal process whose interincident times are exponentially distributed.
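In formulas (these are the standard statements, which the original equations presumably showed):

```latex
% CDF of the interincident times:
F(t) = P(T_j \le t).

% For continuously distributed interincident times with PDF f:
F(t) = \int_0^t f(s)\, ds.

% Poisson point process with rate \lambda: interincident PDF and CDF
f(t) = \lambda e^{-\lambda t}, \qquad
F(t) = 1 - e^{-\lambda t}, \qquad t \ge 0.
```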

What random variables associated to a renewal process are of fundamental interest in applications?
- The associated counting process N(t), which counts how many incidents have happened by time t
- The total life
- The current life
- The residual life

To warm up, let's compute these quantities for the Poisson point process. This will lead us to the Poisson paradox.
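In terms of the incident times S_j and the counting process N(t), the standard definitions of these three quantities are:

```latex
% Residual life: time from t until the next incident
\gamma_t = S_{N(t)+1} - t.

% Current life (age): time elapsed since the most recent incident
\delta_t = t - S_{N(t)}.

% Total life: length of the interincident interval straddling t
\beta_t = \delta_t + \gamma_t = S_{N(t)+1} - S_{N(t)}.
```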

We have already computed the properties of the Poisson counting process N(t): it is given by a Poisson distribution with mean λt, where λ is the rate. From the Markov property and the associated memorylessness of how long one has spent since the last Poisson event, the residual life of a Poisson point process must be exponentially distributed, with mean 1/λ.

Computing the current life involves looking at the past of time t. Recall that the Markov property can be applied in either time direction. But there's a catch: the current life can't be bigger than t. If one had extended the renewal process back to negative times, then the current life would have the same probability distribution as the residual life; but if we are only looking at the renewal process on the positive time axis (as is the convention), then one has to apply a cutoff at time t.
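Concretely, the current life of a rate-λ Poisson point process should be a truncated exponential:

```latex
P(\delta_t > x) = e^{-\lambda x} \quad \text{for } 0 \le x < t,
\qquad
P(\delta_t = t) = e^{-\lambda t},
```

so the current life is exponentially distributed up to the cutoff, with an atom at t corresponding to the event that no incident has occurred by time t.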

The total life can be computed as the sum of the current life and the residual life. By the Markov property, one of these summands is a random variable of the past and one is a random variable of the future, so they must be independent (for the Poisson point process!).
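The notes stop here, but the consequence can be checked numerically: for large t the mean total life approaches 2/λ, twice the mean interincident time 1/λ, which is exactly the Poisson paradox. A minimal simulation sketch (the function and parameter names are my own, not from the notes):

```python
import random

def total_life_at(t, lam, rng):
    """Simulate Poisson arrivals with rate lam and return the length of
    the interarrival interval that straddles time t (the 'total life')."""
    s_prev, s = 0.0, 0.0
    while s <= t:
        s_prev = s
        s += rng.expovariate(lam)  # exponential interarrival time
    return s - s_prev              # interval (s_prev, s] contains t

rng = random.Random(0)
lam, t, n = 1.0, 50.0, 100_000
avg = sum(total_life_at(t, lam, rng) for _ in range(n)) / n
print(avg)  # close to 2/lam = 2.0, not to the mean interarrival time 1.0
```

The interval "inspected" at time t is length-biased: longer interarrival intervals are more likely to cover any fixed inspection time.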