Extinction Probability for Branching Processes
Friday, April 02, 2010, 2:03 PM

Long-time properties for branching processes

Clearly state 0 is an absorbing state, forming its own recurrent class. The other states in the state space will generically form a single communication class, except for unusual choices of offspring distribution (e.g., only even numbers of offspring). In any case, the states 1, 2, ... are all transient, since they have one-way communication with the absorbing state 0. This implies in particular that it isn't sensible to talk about stationary distributions -- what will happen, in any realization, is that the branching process will either go extinct or increase indefinitely.

This suggests the following questions are relevant for branching processes, for each choice of offspring probability distribution:

- What is the probability to go extinct (rather than to grow indefinitely)?
- How long does it take to go extinct, when the branching process does go extinct?
- If the branching process goes extinct, how much cost/reward did it incur until it went extinct?
- When the branching process grows indefinitely, how fast does it grow?

These questions find particular application in, for example, epidemiology or technology markets. We'll address the first question in detail; the second and third can be attacked by similar methods. The fourth is trickier.

Consider the extinction probability, which is the same notion as an absorption probability: writing $X_n$ for the population at epoch $n$,

$$a(k) = P(X_n = 0 \text{ for some } n \ge 0 \mid X_0 = k).$$

The special structure of the branching process, namely the independence of each agent, means that extinction starting from $k$ agents requires each of the $k$ independent family lines to die out:

$$a(k) = \prod_{m=1}^{k} P(\text{family line of agent } m \text{ dies out}).$$

But each agent has equivalent stochastic rules for its descendants, so each of these factors is just the probability for a given single agent's line to go extinct, namely $a(1)$:

$$a(k) = [a(1)]^k.$$

Use first-step analysis, just as we did for the general absorption probability calculation, but now exploit the special branching process structure (partition on the state of the Markov chain at epoch 1). If $p_k$ denotes the probability that an agent produces $k$ offspring, then conditioning on the first generation gives

$$a(1) = \sum_{k=0}^{\infty} p_k\, a(k) = \sum_{k=0}^{\infty} p_k\, [a(1)]^k.$$

So the extinction probability must satisfy the equation

$$a = \varphi(a), \qquad \text{where } \varphi(a) = \sum_{k=0}^{\infty} p_k a^k$$

is the probability generating function of the offspring distribution. Does this equation have a unique solution, and if not, which solution is the right one?

Let's take care of some special cases first: if $p_0 = 0$, then no agent ever dies childless, so the population can never decrease and the extinction probability is $a = 0$. But on the other hand, if $p_0 > 0$, then $\varphi(0) = p_0 > 0$, and since $\varphi$ is increasing and convex on $[0,1]$ with $\varphi(1) = 1$, this implies that one of the two following graphical possibilities applies:

[Figure: plots of $y = \varphi(a)$ against the diagonal $y = a$ on $[0,1]$; either the convex curve stays above the diagonal except at $a = 1$, or it crosses the diagonal at an additional point $0 < a < 1$.]

We see that the possible solutions to $a = \varphi(a)$ are:

- $a = 1$ (always a solution, since $\varphi(1) = 1$)
- a nontrivial solution $0 < a < 1$, which exists exactly when $\varphi'(1) = \sum_k k\, p_k$, the mean number of offspring per agent, exceeds 1

One can show (see the references) that when the nontrivial solution exists, it is the correct value for the extinction probability; in general, the extinction probability is the smallest nonnegative root of $a = \varphi(a)$.

Notice that this all says the following: when the mean number of offspring per agent is at most one (excluding the degenerate case in which every agent has exactly one offspring), the population is guaranteed to go extinct. When the mean number of offspring per agent is strictly greater than one, the population survives forever with positive probability, and its extinction probability is the solution $0 \le a < 1$ of the nonlinear equation $a = \varphi(a)$.
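As a numerical aside (not from the original notes): since $\varphi$ is increasing and convex, iterating $a_{n+1} = \varphi(a_n)$ from $a_0 = 0$ produces a monotonically increasing sequence converging to the smallest fixed point, i.e., to the extinction probability. A minimal sketch in Python, using a Poisson offspring distribution as an illustrative example:

```python
import math

def extinction_probability(phi, tol=1e-12, max_iter=100_000):
    """Smallest root of a = phi(a) on [0, 1] by fixed-point iteration.

    Starting from a = 0, the iterates a_{n+1} = phi(a_n) increase
    monotonically to the smallest fixed point of phi, which is the
    extinction probability of the branching process.
    """
    a = 0.0
    for _ in range(max_iter):
        a_next = phi(a)
        if abs(a_next - a) < tol:
            break
        a = a_next
    return a_next

# Illustrative example: Poisson(mu) offspring distribution, whose
# generating function is phi(a) = exp(mu * (a - 1)).
for mu in (0.8, 1.5, 2.0):
    phi = lambda a, mu=mu: math.exp(mu * (a - 1.0))
    print(f"mu = {mu}: extinction probability ~ {extinction_probability(phi):.6f}")
```

For the Poisson case this reports an extinction probability of 1 for $\mu = 0.8$, and roughly 0.417 and 0.203 for $\mu = 1.5$ and $\mu = 2$, consistent with the graphical picture above.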
Continuous-time Markov chains

References: Lawler Ch. 3, Karlin & Taylor Ch. 4

We now consider how to modify our modeling and analysis when the transitions in state are allowed to happen at moments along a continuous time (parameter) domain rather than at discrete epochs. The state space is still discrete (possibly countably infinite).

The realizations of a CTMC will be piecewise constant, with the transition times being random and continuously distributed, and the transitions themselves also being random. By convention, the realizations of a CTMC are taken to be right-continuous (a càdlàg process).

Markov property in continuous time: for any states and any times $0 \le s_1 < s_2 < \cdots < s_n < s$ and $t > 0$,

$$P(X(s+t) = j \mid X(s) = i, X(s_n) = i_n, \ldots, X(s_1) = i_1) = P(X(s+t) = j \mid X(s) = i).$$

It is easier to work with these finite-time representations of the past; if the process is nice (càdlàg is enough), then one can approximate a complete history with a sufficiently fine finite mesh of points -- passing to the limit involves some measure-theoretic technicalities (see filtrations).

Just as for discrete-time Markov chains, many powerful formulas become available if we consider (as we will) time-homogeneous CTMCs, for which $P(X(s+t) = j \mid X(s) = i)$ depends on $s$ and $t$ only through the elapsed time $t$.

In what situations are CTMCs used for modeling? First of all, why would one use a CTMC rather than a DTMC? The reason one should ask this is that for any given CTMC $X(t)$, one can construct an associated DTMC by sampling at regular intervals:

$$X_n = X(n\,\Delta t).$$

Also, we can construct the embedded DTMC: the sequence of states visited at the successive transition times. (This needs the strong Markov property, which says that the Markov property, formulated for deterministic times, carries over to randomly chosen times, provided those random times are Markov times, meaning that they are determined only by information available up to and including the actual time.)
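To make the two associated DTMCs concrete, here is a small sketch (not from the original notes) that extracts both from a CTMC path stored as jump times and the states entered at those times; the trajectory used is a made-up example:

```python
import bisect

def skeleton(jump_times, states, dt, n_steps):
    """Regularly sampled DTMC X_n = X(n*dt).

    jump_times[k] is the time at which the chain enters states[k], with
    jump_times[0] = 0.0 giving the initial state.  The path is taken
    right-continuous (cadlag), so X(t) = states[k] for
    jump_times[k] <= t < jump_times[k+1].
    """
    samples = []
    for n in range(n_steps + 1):
        t = n * dt
        k = bisect.bisect_right(jump_times, t) - 1
        samples.append(states[k])
    return samples

def embedded_chain(states):
    """Embedded DTMC: the sequence of states visited, ignoring timing."""
    return list(states)

# Made-up example trajectory: enters 'a' at t=0, 'b' at t=0.3, 'c' at t=0.35, ...
jump_times = [0.0, 0.3, 0.35, 2.1]
states     = ['a', 'b', 'c', 'a']

print(skeleton(jump_times, states, dt=0.5, n_steps=5))
# -> ['a', 'c', 'c', 'c', 'c', 'a']   (the brief visit to 'b' is missed entirely)
print(embedded_chain(states))
# -> ['a', 'b', 'c', 'a']             (the timing information is lost)
```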

So why would a CTMC model be any better than these associated DTMCs?

- One may have a large number of states, some of which are visited very briefly, and the DTMC obtained from regular time observations may miss them.
- The embedded DTMC misses timing information; if you augment the embedded DTMC with the time spent in each state, then it is equivalent to a CTMC.
- CTMCs are sometimes good spatial discretizations of continuous-space processes.
- Simulating a CTMC can actually be easier than simulating a DTMC with regular time steps (see the sketch below).
- Oftentimes in physics and engineering contexts, CTMC models are easier to formulate because they describe instantaneous changes, which can generally be linearly superposed; finite-time effects generally do not linearly superpose.

This explains the kinds of contexts in which one often sees CTMCs used:

- statistical modeling in atomic and quantum physics
- chemical reaction networks and biomolecular reaction networks
- dynamics on more general networks (disease spread on sexual or social networks)
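To illustrate the point above about simulation, here is a minimal sketch (not from the original notes) of the standard way to simulate a time-homogeneous CTMC from its jump rates: wait an exponentially distributed holding time in the current state, then jump according to the embedded DTMC's transition probabilities. The birth-death rates below are a made-up example, truncated at a finite maximum state for simplicity:

```python
import random

def simulate_ctmc(rates, x0, t_max, rng=None):
    """Simulate a CTMC with rates[i][j] = rate of jumping from i to j.

    The holding time in state i is Exponential(q_i), where q_i is the
    total outgoing rate from i, and the jump goes to j with probability
    rates[i][j] / q_i -- exactly the embedded DTMC.  Returns the jump
    times and the states entered at those times.
    """
    rng = rng or random.Random(0)
    t, x = 0.0, x0
    jump_times, states = [0.0], [x0]
    while True:
        out = rates.get(x, {})
        q = sum(out.values())
        if q == 0.0:              # no outgoing rates: absorbing state
            break
        t += rng.expovariate(q)   # exponential holding time with rate q
        if t >= t_max:
            break
        u, acc = rng.random() * q, 0.0
        for j, r in out.items():  # choose next state in proportion to rates
            acc += r
            if u < acc:
                x = j
                break
        jump_times.append(t)
        states.append(x)
    return jump_times, states

# Made-up birth-death chain: birth rate 1.0, death rate 1.5, truncated at 50.
rates = {n: ({n + 1: 1.0, n - 1: 1.5} if n > 0 else {1: 1.0}) for n in range(50)}
print(simulate_ctmc(rates, x0=3, t_max=10.0))
```

Note that no time step needs to be chosen: the simulation jumps directly from transition to transition, which is the sense in which simulating a CTMC can be easier than simulating a finely time-discretized DTMC.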
