Discrete Time Markov Chain of a Dynamical System with a Rest Phase


Abstract

A stochastic model, in the form of a discrete time Markov chain, is constructed to describe the dynamics of a population that grows through cell division and has a rest state (no growth or death). Transition probabilities are described and the asymptotic dynamics are compared with those of a deterministic discrete dynamical system. In contrast with the deterministic model, the stochastic model predicts extinction with positive probability even with positive net growth. With zero net growth, the deterministic model has a locally asymptotically stable steady state, but the stochastic model asserts no absorbing states (except for certain limiting cases). The existence of a unique stationary probability distribution is established in this case, and a transition probability matrix is constructed to plot such distributions. Expected extinction times of populations, with and without a rest phase, are compared numerically, using Monte Carlo simulations.

Keywords: Stochastic model; discrete time Markov chain; population dynamics.

1 Introduction

A distinguishing feature of a stochastic model is that it associates a probability to the occurrence of an outcome, while its deterministic counterpart predicts an outcome with absolute certainty, given a set of parameter values. Even though continuous time Markov chain models have received most of the attention in terms of biological applications such as predation, competition and epidemic processes [2], discrete time Markov chain (DTMC) models have also been used abundantly. See, for example, [1, 2, 3, 6, 16]. The present paper focuses on the construction of a discrete time stochastic model, in the form of a Markov chain, describing the dynamics, for example, of microbial cells, where the members can switch between an active phase, in which they can divide and also die, and a rest state (reservoir, quiescent, or dormant phase/state) in which they neither reproduce nor die.
Members of the active class can switch to the rest phase at a constant positive rate, and can switch back to the active state at a (generally different) constant positive rate, in response to certain environmental or demographic factors. Such dynamical systems have been used to deterministically model a variety of biological processes. Lewis and Schmitz [12] study populations in which individuals switch between a mobile state (with positive mortality) and a sedentary subpopulation (which reproduces). A similar model is studied in [7]. In [9], transport equations with quiescent phases have been studied. Chemostat models are extended by a quiescent phase in [10, 13]. SIR models with a reservoir state are examined in [8]. A predator-prey system is studied in [14], where the predator can leave the habitat and return; as seen from the prey, the predator would enter a quiescent phase. A stochastic model of cellular quiescence, in the form of a branching process, is presented in [15]. The effects of cellular quiescence on cancer drug treatment are studied in a continuous-time stochastic model in [11]. The purpose of the current paper is to present a more fundamental framework, in terms of the construction of a discrete time stochastic model incorporating a quiescent state in the dynamics of a population, and to compare the asymptotic dynamics with those of the corresponding deterministic model. The approach is similar to that of [1], which compares the dynamics of deterministic and stochastic epidemic models.
In the current paper, (i) the transition probabilities are described to compute the population levels at the next time step from the current population levels; (ii) it is shown that the population modeled by the DTMC is washed out asymptotically, with positive probability, despite positive net growth (in contrast with the monotonically increasing population predicted by the deterministic model); (iii) the stochastic transition matrix is constructed for a special case of zero net population growth, which is then used to establish the existence of, and to plot, the stationary probability distribution when the corresponding deterministic model has a steady state; (iv) the expected time to extinction of a population with a rest phase is compared, through Monte Carlo simulations, with the average time to extinction of an otherwise identical population without a rest phase. The paper is organized as follows. In section 2, the asymptotic behavior of the population (humans, microbial cells) is examined by modeling it as a deterministic system of difference equations. In section 3, a stochastic model is constructed in the form of a DTMC of the associated stochastic process. Section 3.1 contains the construction of a transition matrix, and plots of stationary probability distributions for a special case of zero net growth. In section 3.2, the expected time to absorption of solutions of the two systems, with and without a reservoir, is compared using Monte Carlo simulations.

2 The Deterministic Model

A deterministic model for a reservoir state will serve as a reference. Denote the concentrations of individuals in the active and reservoir phases by X and Y, respectively, the constant switching rate from X to Y by β > 0, and that from Y to X by α > 0. The discrete-time deterministic model has the form

    X(t + Δt) = (1 + mΔt − δΔt − βΔt)X(t) + αΔt Y(t),
    Y(t + Δt) = (1 − αΔt)Y(t) + βΔt X(t),    (1)

where mΔt and δΔt are, respectively, the number of new recruitments (births) and the number of deaths per individual in the active phase, in time Δt; αΔt and βΔt are, respectively, the number of moves per individual from the reservoir (rest phase) to the active phase, and back, in time Δt. To guarantee the existence of non-negative solutions and their asymptotic convergence to equilibria, the following assumptions are sufficient (they assert that for any given individual, the net number of movements out of each class in time Δt cannot exceed one):

    0 < αΔt < 1,    0 ≤ [β − (m − δ)]Δt ≤ 1.    (2)

Then the following result can be established.

Theorem 1. (a) If m ≠ δ, then (0, 0) is the unique fixed point of (1). (0, 0) is locally asymptotically stable if and only if m < δ. (b) If m = δ, then (1) has locally asymptotically stable fixed points of the form (x̄, (β/α)x̄). Furthermore, with X(0) + Y(0) = N, (i) (0, N) is a locally asymptotically stable state in the limiting case α → 0⁺; (ii) (N, 0) is a locally asymptotically stable state in the limiting case β → 0⁺.

A simple proof is given in the appendix. Theorem 1 establishes the asymptotic dynamics of the deterministic model (1), which will be used for comparison with the corresponding results for the counterpart stochastic model.

3 Formulation of a Discrete Time Markov Chain

Following [1], suppose the population size is bounded by M. Let X(t) and Y(t) denote discrete random variables for the number of individuals in the active and resting classes, respectively. Let t ∈ {0, Δt, 2Δt, ...} and X(t), Y(t) ∈ {0, 1, 2, ..., M}, with X(t) + Y(t) ≤ M.
The discrete-time stochastic dynamical system {(X(t), Y(t))} is a bivariate process [2], with joint probability function

    p_(x,y)(t) := Prob[(X(t), Y(t)) = (x, y)],

where x, y ∈ {0, 1, 2, ..., M} and x + y ≤ M. The probability of a transition from the state (x, y) to (x + k, y + j) is defined by (using the notation of [1]):

    p_(x+k,y+j),(x,y)(Δt) := Prob[(ΔX, ΔY) = (k, j) | (X(t), Y(t)) = (x, y)],    (3)

where ΔX = X(t + Δt) − X(t) and ΔY = Y(t + Δt) − Y(t). In the DTMC, Δt is assumed to be small enough that at most one change occurs in [t, t + Δt]: either the birth of a single individual in the active phase, the death of a single individual in the active phase, or a shift of one individual from one phase to the other. Thus, following [3], the transition probabilities for the model (1) are defined using the rates in (1) (multiplied by Δt):

    Prob[(X(t + Δt), Y(t + Δt)) = (x + 1, y) | (X(t), Y(t)) = (x, y)] = mxΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x − 1, y) | (X(t), Y(t)) = (x, y)] = δxΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x + 1, y − 1) | (X(t), Y(t)) = (x, y)] = αyΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x − 1, y + 1) | (X(t), Y(t)) = (x, y)] = βxΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x, y) | (X(t), Y(t)) = (x, y)] = 1 − [(m + δ + β)x + αy]Δt.

In order for the probabilities to be non-negative and bounded by 1, we assume:

    (m + δ + α + β)M Δt ≤ 1.    (4)

Since all parameters are positive, assumption (4) is stronger than (2), and it is satisfied if Δt is sufficiently small. It should be mentioned that a set of less restrictive assumptions, namely mMΔt ≤ 1, δMΔt ≤ 1, αMΔt ≤ 1, βMΔt ≤ 1, could be invoked if the stochastic model did not allow for the possibility of no change in the state in time Δt. Further assuming that the process is time homogeneous, p_(x+k,y+j),(x,y)(t + Δt, t) = p_(x+k,y+j),(x,y)(Δt), where

    p_(x+k,y+j),(x,y)(Δt) =
        mxΔt,                          (k, j) = (1, 0);
        δxΔt,                          (k, j) = (−1, 0);
        αyΔt,                          (k, j) = (1, −1);
        βxΔt,                          (k, j) = (−1, 1);
        1 − [(m + δ + β)x + αy]Δt,     (k, j) = (0, 0);
        0,                             otherwise.    (5)

The sum of the transition probabilities is 1, as all possible changes in the state (x, y) are accounted for. The transition matrix is not expressible in a simple form; it depends on the ordering of k and j [3]. (The transition matrix for a special case is constructed in section 3.1.)
It is finally assumed that the process satisfies the Markov property:

    Prob[(X(t + Δt), Y(t + Δt)) | (X(0), Y(0)), (X(Δt), Y(Δt)), ..., (X(t), Y(t))]
        = Prob[(X(t + Δt), Y(t + Δt)) | (X(t), Y(t))].    (6)

Applying (5) and (6), the probabilities at time t + Δt are related to those at time t.
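For illustration (not part of the original analysis), the transition probabilities (5) can be simulated directly to generate sample paths of the bivariate chain. The following is a minimal Python sketch; the function names and parameter values are ours and purely illustrative:

```python
import random

def step(x, y, m=0.2, d=0.1, a=0.1, b=0.1, dt=0.01):
    """One Δt-step of the bivariate DTMC, sampling one event per (5).
    m = birth rate, d = death rate, a = α (rest -> active), b = β (active -> rest)."""
    u = random.random()
    if u < m * x * dt:                      # birth: (x, y) -> (x+1, y)
        return x + 1, y
    u -= m * x * dt
    if u < d * x * dt:                      # death: (x, y) -> (x-1, y)
        return x - 1, y
    u -= d * x * dt
    if u < a * y * dt:                      # rest -> active: (x, y) -> (x+1, y-1)
        return x + 1, y - 1
    u -= a * y * dt
    if u < b * x * dt:                      # active -> rest: (x, y) -> (x-1, y+1)
        return x - 1, y + 1
    return x, y                             # no change in time Δt

def sample_path(x0, y0, steps):
    """Collect a sample path of the chain starting from (x0, y0)."""
    path = [(x0, y0)]
    for _ in range(steps):
        path.append(step(*path[-1]))
    return path

random.seed(1)
path = sample_path(10, 0, 2000)
```

Note that step(0, 0) always returns (0, 0), reflecting the fact that (0, 0) is absorbing.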

Proposition 1. The state probabilities satisfy the difference equations

    p_(x,y)(t + Δt) = p_(x−1,y)(t)m(x − 1)Δt + p_(x+1,y)(t)δ(x + 1)Δt
                    + p_(x+1,y−1)(t)β(x + 1)Δt + p_(x−1,y+1)(t)α(y + 1)Δt
                    + p_(x,y)(t)[1 − (m + δ + β)xΔt − αyΔt],
                    x = 2, ..., M − 2,  y = 1, ..., M − 3,  x + y ≤ M − 1,

    p_(x,y)(t + Δt) = p_(x−1,y)(t)m(x − 1)Δt + p_(x+1,y−1)(t)β(x + 1)Δt
                    + p_(x−1,y+1)(t)α(y + 1)Δt + p_(x,y)(t)[1 − (δ + β)xΔt − αyΔt],
                    x = 2, ..., M − 1,  y = 1, ..., M − 2,  x + y = M,

    p_(x,0)(t + Δt) = p_(x−1,0)(t)m(x − 1)Δt + p_(x+1,0)(t)δ(x + 1)Δt + p_(x−1,1)(t)αΔt
                    + p_(x,0)(t)[1 − (m + δ + β)xΔt],    x = 2, ..., M − 1,

    p_(0,y)(t + Δt) = p_(1,y)(t)δΔt + p_(1,y−1)(t)βΔt + p_(0,y)(t)(1 − αyΔt),    y = 1, ..., M − 1,

    p_(1,y)(t + Δt) = p_(2,y)(t)δ2Δt + p_(0,y+1)(t)α(y + 1)Δt + p_(2,y−1)(t)β2Δt
                    + p_(1,y)(t)[1 − (m + δ + β)Δt − αyΔt],    y = 1, ..., M − 2,    (7)

    p_(1,0)(t + Δt) = p_(2,0)(t)δ2Δt + p_(0,1)(t)αΔt + p_(1,0)(t)[1 − (m + δ + β)Δt],

    p_(M,0)(t + Δt) = p_(M−1,0)(t)m(M − 1)Δt + p_(M−1,1)(t)αΔt + p_(M,0)(t)[1 − (β + δ)MΔt],

    p_(0,M)(t + Δt) = p_(1,M−1)(t)βΔt + p_(0,M)(t)(1 − αMΔt),

    p_(1,M−1)(t + Δt) = p_(2,M−2)(t)β2Δt + p_(0,M)(t)αMΔt
                      + p_(1,M−1)(t)[1 − (δ + β)Δt − α(M − 1)Δt],

    p_(0,0)(t + Δt) = p_(1,0)(t)δΔt + p_(0,0)(t).

The associated directed graph showing the positive transition probabilities lies on a two-dimensional lattice, shown in Figure 1. The chain is reducible, consisting of two communication classes [2], namely {(0, 0)} and {(x, y) : x, y = 0, 1, 2, ..., M, x + y ≤ M} \ {(0, 0)}. The latter class is not closed, since p_(0,0),(1,0)(Δt) > 0. From (5) it is clear that p_(0,0),(0,0)(Δt) = 1, so (0, 0) is an absorbing state. The class {(0, 0)} is closed, since no other state can be reached from (0, 0). All other states are transient, and asymptotically all sample paths are absorbed into the state (0, 0). On the other hand, (0, 0) is an attracting equilibrium of the deterministic model (1) if m < δ, but it is an unstable steady state if m > δ. The following result is established:

Theorem 2. Assume m ≠ δ. Then the DTMC is reducible with two communication classes.
(0, 0) is an absorbing state and all other states are transient.

This result implies that all sample paths are eventually absorbed into the trivial steady state. The absorption may be slow, depending on the parameter values, particularly when m > δ. For the deterministic model (1), of course, this absorption is impossible when m > δ and X(0) + Y(0) > 0. This result, based on the stochastic model, is thus in contrast with that of its deterministic counterpart (Theorem 1(a)).

Figure 1: Directed graph of the model (1).

Figure 2 shows three sample paths of the DTMC model with m ≠ δ. Although the deterministic solution rises exponentially when m > δ, owing to the positive net growth, one of the sample paths of the stochastic model is absorbed within the first few time steps. The resulting probability distribution is bimodal (not shown in this work).

3.1 Case m = δ

The corresponding dynamical system {X(t)} with constant total population size N (due to zero net growth) is a univariate stochastic process with a single random variable X(t), since Y(t) = N − X(t). We investigate the probability distribution as a function of time by formulating a DTMC for the single random variable X(t) ∈ {0, 1, 2, ..., N}. Consider the probability vector p(t) = (p_0(t), p_1(t), ..., p_N(t))ᵀ associated with X(t), where the associated probability function is given by p_x(t) = Prob[X(t) = x] for x = 0, 1, 2, ..., N, t = 0, Δt, 2Δt, ...,

Figure 2: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m < δ (left) and m > δ (right). The parameter values are, Left: X(0) = 20, Prob[X(0) = 20] = 1, Y(0) = 10, Prob[Y(0) = 10] = 1, m = 0.1, δ = 0.3, α = β = 0.1; Right: Δt = 0.01, X(0) = 10, Prob[X(0) = 10] = 1, Y(0) = 0, Prob[Y(0) = 0] = 1, m = 0.2, δ = 0.1, α = 0.1, β = 0.1.

with Σ_{x=0}^{N} p_x(t) = 1. Defining the probability of a transition from the state X(t) = x to the state X(t + Δt) = w in time Δt, and assuming the Markov property and time homogeneity,

    p_wx(Δt) = Prob[X(t + Δt) = w | X(t) = x]

(where Δt is small enough that X(t) can change by at most one in time Δt; w ∈ {x + 1, x − 1, x}), one gets

    p_wx(Δt) =
        α(N − x)Δt,                 w = x + 1;
        βxΔt,                       w = x − 1;
        1 − [α(N − x) + βx]Δt,      w = x;
        0,                          otherwise.    (8)

In order to describe the process as a birth-and-death process, we denote

    b(x) = α(N − x),    d(x) = βx,    (9)

so that (8) is rewritten as

    p_wx(Δt) =
        b(x)Δt,                     w = x + 1;
        d(x)Δt,                     w = x − 1;
        1 − [b(x) + d(x)]Δt,        w = x;
        0,                          otherwise.    (10)

It should be noted from (9) that b(0) = αN and b(N) = d(0) = 0. Indeed, if no individual is present in class X at time t, no shift out of X can occur in time [t, t + Δt], and the only addition to this class in time [t, t + Δt] is an individual moving in from class Y at the rate α (note that Y(t) = N in this case, so that the probability of this shift is αY(t)Δt = αNΔt). On the other hand, if all individuals are in class X at time t, there will be no further shift into X in time t + Δt. Since all possible changes in x during time Δt are accounted for, the sum of all the probabilities is 1. Furthermore, we choose Δt small enough that no transition probability exceeds 1:

    max_{x ∈ {0,...,N}} [b(x) + d(x)]Δt ≤ 1.

Applying the definition of the transition probability p_wx and the Markov property, the probabilities at time t + Δt can be expressed in terms of the probabilities at time t:

Proposition 2. The state probabilities p_x(t) satisfy the following difference equations:

    p_x(t + Δt) = p_{x−1}(t)b(x − 1)Δt + p_{x+1}(t)d(x + 1)Δt + p_x(t)(1 − [b(x) + d(x)]Δt)
                = p_{x−1}(t)α(N − x + 1)Δt + p_{x+1}(t)β(x + 1)Δt
                  + p_x(t)(1 − [αN − (α − β)x]Δt),    x = 1, 2, ..., N − 1,
    p_0(t + Δt) = p_1(t)βΔt + p_0(t)(1 − αNΔt),    (11)
    p_N(t + Δt) = p_{N−1}(t)αΔt + p_N(t)(1 − βNΔt).

Ordering the states from 0 to N, a transition matrix T(Δt) can now be constructed. Let t_ij denote the (i, j)th element of T(Δt). Then t_ij := p_ij(Δt). Hence it follows that

    t_ii = p_ii(Δt) = 1 − [α(N − i) + βi]Δt = 1 − [αN − i(α − β)]Δt,    i = 0, ..., N,
    t_{i,i+1} = p_{i,i+1}(Δt) = β(i + 1)Δt,    i = 0, ..., N − 1,
    t_{i+1,i} = p_{i+1,i}(Δt) = α(N − i)Δt,    i = 0, ..., N − 1,
    t_ij = 0 otherwise.

The DTMC has the following transition matrix:

             ⎛ 1 − αNΔt    βΔt                                                      ⎞
             ⎜ αNΔt        1 − [αN − (α−β)]Δt    2βΔt                               ⎟
    T(Δt) =  ⎜             α(N−1)Δt             1 − [αN − 2(α−β)]Δt    ⋱           ⎟
             ⎜                        ⋱            ⋱    1 − [αN − (N−1)(α−β)]Δt    NβΔt     ⎟
             ⎝                                          αΔt                        1 − βNΔt ⎠

T(Δt) is a tridiagonal stochastic matrix (unit column sums). Let p(0) be the initial probability vector. Then the probability vector at time Δt is given by p(Δt) = T(Δt)p(0). Similarly, p(2Δt) = T(Δt)p(Δt) = T²(Δt)p(0), and so on. At time t = nΔt,

    p(t) = Tⁿ(Δt)p(0).    (12)
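The construction of T(Δt) and the iteration p(nΔt) = Tⁿ(Δt)p(0) are straightforward to implement; the following Python/NumPy sketch is ours, with illustrative parameter values chosen to satisfy the constraint on Δt:

```python
import numpy as np

def transition_matrix(N, alpha, beta, dt):
    """Tridiagonal stochastic matrix T(Δt) for the univariate chain (m = δ):
    t[i+1, i] = α(N−i)Δt, t[i, i+1] = β(i+1)Δt, unit column sums."""
    T = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        T[i, i] = 1 - (alpha * (N - i) + beta * i) * dt   # stay put
    for i in range(N):
        T[i, i + 1] = beta * (i + 1) * dt                 # move x+1 -> x
        T[i + 1, i] = alpha * (N - i) * dt                # move x -> x+1
    return T

N, alpha, beta, dt = 100, 0.1, 0.1, 0.01
T = transition_matrix(N, alpha, beta, dt)

p = np.zeros(N + 1)
p[20] = 1.0                        # Prob[X(0) = 20] = 1
for _ in range(5000):              # evolve p(t) = T^n(Δt) p(0) up to t = 50
    p = T @ p
mean = np.arange(N + 1) @ p        # E(X(t)), drifting toward αN/(α+β) = 50
```

Here max[b(x) + d(x)]Δt = αNΔt = 0.1, so no transition probability exceeds 1.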

The probability vector p(t) gives the probability distribution of the random variable X(t) at time t. The asymptotic dynamics of the DTMC are given by

    lim_{t→∞} p(t) = lim_{n→∞} Tⁿ(Δt)p(0).

Figure 3: Directed graph of the model (1) with m = δ.

Figure 3 shows the directed graph of the DTMC. The chain consists of a single communication class and is therefore irreducible, with all states persistent (recurrent). The chain is aperiodic; hence there exists a unique stationary probability distribution (Corollary 2.4 of [2]), illustrated in Figures 4-6. For the limiting cases in which switching in one direction is very slow, the DTMC has an absorbing state. If the transition into (respectively, out of) the active state is slow, then the directed graph takes the form depicted in Figure 7(a) (respectively, 7(b)). The state 0 (respectively, N) is absorbing, and all other states are transient. Indeed, for α → 0⁺ (respectively, β → 0⁺) it can be shown, since Tⁿ(Δt) is also a stochastic matrix [4], that

                     ⎛ 1 1 ⋯ 1 ⎞                          ⎛ 0 0 ⋯ 0 ⎞
    lim Tⁿ(Δt) =     ⎜ 0 0 ⋯ 0 ⎟,  α → 0⁺;   lim Tⁿ(Δt) = ⎜ ⋮ ⋮   ⋮ ⎟,  β → 0⁺.
    n→∞              ⎜ ⋮ ⋮   ⋮ ⎟             n→∞          ⎜ 0 0 ⋯ 0 ⎟
                     ⎝ 0 0 ⋯ 0 ⎠                          ⎝ 1 1 ⋯ 1 ⎠

The following result sums up the above discussion:

Theorem 3. Assume m = δ. Then the probability distribution of the random variable X(t) at time t is determined by the probability vector p(t), given by (12). Furthermore,

1. All states of the DTMC are recurrent, and a unique stationary probability distribution exists.

2. 0 is an absorbing state of the DTMC, and all other states are transient, in the limiting case α → 0⁺.

Figure 4: Left: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m = δ, when α = β. The steady state of the deterministic model contains X̄ = 50 individuals. Right: The corresponding probability distribution of the DTMC is plotted as a function of time. The stationary probability distribution follows a path similar to the equilibrium solution of the deterministic model. The parameter values are Δt = 0.01, α = β = 0.1, N = 100, X(0) = 20, Prob[X(0) = 20] = 1.

Figure 5: Left: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m = δ, when α > β. Right: The corresponding probability distribution of the DTMC is plotted as a function of time. The stationary probability distribution follows a path similar to the equilibrium solution of the deterministic model. The parameter values are Δt = 0.01, α = 0.1, β = 0.01, N = 100, X(0) = 20, Prob[X(0) = 20] = 1.

3. N is an absorbing state of the DTMC, and all other states are transient, in the limiting case β → 0⁺.
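For this birth-and-death structure, the stationary distribution asserted by Theorem 3 can in fact be identified in closed form (a standard birth-death computation, not stated explicitly in the text): the detailed-balance relation α(N − x)π_x = β(x + 1)π_{x+1} yields π_x = C(N, x)pˣ(1 − p)^{N−x} with p = α/(α + β), i.e. a binomial distribution. A Python sketch verifying T(Δt)π = π, with illustrative parameters of our choosing:

```python
import numpy as np
from math import comb

N, alpha, beta, dt = 50, 0.15, 0.05, 0.01

# Tridiagonal T(Δt) as in section 3.1 (columns sum to one).
T = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    T[i, i] = 1 - (alpha * (N - i) + beta * i) * dt
for i in range(N):
    T[i, i + 1] = beta * (i + 1) * dt
    T[i + 1, i] = alpha * (N - i) * dt

p = alpha / (alpha + beta)         # binomial success probability α/(α+β)
pi = np.array([comb(N, x) * p**x * (1 - p)**(N - x) for x in range(N + 1)])
residual = np.max(np.abs(T @ pi - pi))   # π is stationary: T(Δt)π = π
```

The mean of this binomial, Np = αN/(α + β), coincides with the non-zero equilibrium of the deterministic model on the invariant set X + Y = N.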

Figure 6: Left: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m = δ, when α < β. Right: The corresponding probability distribution of the DTMC is plotted as a function of time. The stationary probability distribution follows a path similar to the equilibrium solution of the deterministic model. The parameter values are Δt = 0.01, α = 0.01, β = 0.1, N = 100, X(0) = 20, Prob[X(0) = 20] = 1.

Figure 7: Directed graph of the model (1) with m = δ and α → 0⁺ (a) and β → 0⁺ (b).

While Theorem 1 provides the precise, locally stable steady states for the special case with no net growth, Theorem 3 asserts no steady states in this case. Figures 4-6 demonstrate the existence of stationary probability distributions, in line with Theorem 3. Based on the transition probabilities (8), three stochastic realizations of the stochastic process {X(t)} for t ∈ {0, Δt, 2Δt, ...} are plotted in Figure 4 (equal switch rates in and out of the rest phase), Figure 5 (faster switching out of the reservoir) and Figure 6 (faster switching into the rest state), together with the solution of the corresponding deterministic model, for the case m = δ. The initial population size is 20 in the active class. For illustration, a total (constant) population size of N = 100 is chosen. The corresponding stationary probability distribution is also plotted in each case, based on the probability vector p(t) given by (12). For α, β > 0, (0, 0) is an unstable steady state of the two models; every increase in X(t), in either model, is accompanied by a decrease in Y(t), and vice versa. Therefore, as t → ∞, the solution (X(t), Y(t)) of the deterministic model approaches a non-zero equilibrium, whose value depends on the parameters α and β. The solution of the stochastic model approaches a probability distribution centered around the solution of the deterministic model. The two models agree on the asymptotic dynamics corresponding to the limiting cases (slow switching to or from the rest phase).

3.1.1 Mean of the Random Variable, with m = δ

The difference equation for the mean can be derived from (11). The mean of X(t) is given by E(X(t)) = Σ_{x=0}^{N} x p_x(t).

Corollary 1. Assume m = δ. Then the mean of the random variable X(t) is equal to the solution X(t) of the deterministic model (1).

A proof is given in the Appendix.

Figure 8: Probability distribution of the DTMC with m = δ. Parameter values are Δt = 0.01, N = 100, X(0) = 20, Prob[X(0) = 20] = 1; α = 10⁻⁵, β = 0.1 (left) and α = 0.2, β = 10⁻⁵ (right).
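Corollary 1 can be checked numerically: evolving p(t) by the transition matrix of section 3.1 and iterating the scalar recursion (14) of the appendix from the same initial data should give E(X(t)) = X(t) at every step, up to rounding. A Python sketch (ours; parameter values are illustrative):

```python
import numpy as np

N, alpha, beta, dt = 60, 0.1, 0.05, 0.01

# Tridiagonal T(Δt) for the case m = δ.
T = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    T[i, i] = 1 - (alpha * (N - i) + beta * i) * dt
for i in range(N):
    T[i, i + 1] = beta * (i + 1) * dt
    T[i + 1, i] = alpha * (N - i) * dt

states = np.arange(N + 1)
p = np.zeros(N + 1); p[10] = 1.0   # Prob[X(0) = 10] = 1
x_det = 10.0                       # deterministic X(0)
for _ in range(1000):
    p = T @ p                                                    # stochastic step
    x_det = (1 - beta * dt) * x_det + alpha * dt * (N - x_det)   # recursion (14)
gap = abs(states @ p - x_det)      # should vanish, per Corollary 1
```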

3.2 Expected Time to Extinction: Monte Carlo Simulations

The expected extinction time of a population with a rest phase, as modeled by the stochastic model, is investigated numerically using Monte Carlo simulations. With positive net growth, the extinction times may be extremely long, since the probability of extinction, although positive, is small. The comparison is therefore presented for populations with negative net growth (m < δ), for which all solutions are known to be absorbed within a reasonable amount of time, regardless of the switch rates α and β. The time to extinction of the two types of populations is computed using Monte Carlo simulations: one with the possibility of escape into the reservoir (rest phase), and the other without a reservoir state. To carry out the comparison, three sample paths for each of the two populations are plotted, based on the transition probabilities (5), in Figure 9 (left), to indicate the advantage of a reservoir for longer survival. Furthermore, 10,000 stochastic realizations of each population type were executed using Matlab 2010b to construct histograms showing the distribution of the time to extinction for each population type (Figure 9, right). The histograms make evident the clear advantage of the rest phase, which more than doubles the average extinction time (over 39 time units, as compared to 17).

Figure 9: Left: Three sample paths of the DTMC model are graphed for each of the systems without rest states (α = β = 0, thick lines) and with rest states (α = β = 0.1, thin lines). Parameter values are m = 0.1, δ = 0.2, Δt = 0.01, Prob[X(0) = 10] = 1, Prob[Y(0) = 10] = 1. Right: Histograms based on 10,000 stochastic realizations showing the time to extinction for a system with resting states, with an average extinction time of over 39 time units, and 10,000 realizations for the system without a resting state (α = β = 0), with an average extinction time of over 17 time units. The last bar, at time 100, contains the number of simulations (out of 10,000 for each) that took over 100 time units to reach extinction: 423 sample paths for the model with rest states and 74 without. The parameter values are m = 0.1, δ = 0.3, α = 0.1, β = 0.1, Δt = 0.01, Prob[X(0) = 10] = 1, Prob[Y(0) = 10] = 1.
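The Monte Carlo comparison can be reproduced in outline as follows. This Python sketch is ours rather than the paper's Matlab code; the parameter values, the truncation time, and the choice of initial states for the no-reservoir system are illustrative, and far fewer realizations are used than reported in the text:

```python
import random

def extinction_time(m, d, alpha, beta, x0, y0, dt=0.01, t_max=200.0):
    """Simulate the bivariate DTMC of (5) until absorption at (0, 0),
    returning the absorption time (truncated at t_max)."""
    x, y, t = x0, y0, 0.0
    while (x > 0 or y > 0) and t < t_max:
        u = random.random()
        if u < m * x * dt:                                  # birth
            x += 1
        elif u < (m + d) * x * dt:                          # death
            x -= 1
        elif u < ((m + d) * x + alpha * y) * dt:            # rest -> active
            x, y = x + 1, y - 1
        elif u < ((m + d + beta) * x + alpha * y) * dt:     # active -> rest
            x, y = x - 1, y + 1
        t += dt
    return t

random.seed(3)
runs = 200
# With a rest phase (α = β = 0.1) versus without one (α = β = 0, all active).
t_rest = sum(extinction_time(0.1, 0.3, 0.1, 0.1, 10, 10) for _ in range(runs)) / runs
t_none = sum(extinction_time(0.1, 0.3, 0.0, 0.0, 20, 0) for _ in range(runs)) / runs
```

With these (illustrative) values the rest phase markedly lengthens the mean time to absorption, in line with Figure 9.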

Discussion

A stochastic model, in the form of a discrete time Markov chain, is constructed for a dynamical system describing a population whose members can switch between an active state, in which they can multiply through cell division or die, and a rest or quiescent phase, which serves only as a reservoir, without growth or decline of the total population. It is shown that although the deterministic model exhibits a monotonically increasing population under positive net growth, the corresponding stochastic model associates a positive probability to extinction. A unique stationary probability distribution is shown to exist in the case of no net population growth. A transition matrix is constructed for this case and used to plot stationary probability distributions where the deterministic model gives precise, locally asymptotically stable steady states. Using Monte Carlo simulations, the expected time to extinction is shown to increase with a rest state.

4 Appendix

Proof of Theorem 1. Suppose (x, y) is a fixed point of (1). Then

    x = (1 + mΔt − δΔt − βΔt)x + αΔt y,
    y = (1 − αΔt)y + βΔt x,

or

    (m − δ)x − βx + αy = 0,
    βx − αy = 0.    (13)

If m ≠ δ, then (0, 0) is the only solution of (13). To discuss the stability of this fixed point, note that (1) reduces to

    (X + Y)(t + Δt) = (1 + (m − δ)Δt)(X + Y)(t),

so that for X(0) + Y(0) > 0 the solutions of (1) diverge to infinity when m > δ, and converge to (0, 0) when m < δ.

To prove (b), consider the case m = δ. Then the solutions of (13) are given by y = (β/α)x. To discuss the stability of these fixed points, note that (1) gives Δ(X(t) + Y(t)) = 0, so that X(t) + Y(t) = X(0) + Y(0) = N (i.e., constant). Therefore (1) reduces to the single equation

    X(t + Δt) = (1 − βΔt)X(t) + αΔt(N − X(t)).    (14)

If x̄ is a fixed point of (14), then

    x̄ = (1 − βΔt)x̄ + αΔt(N − x̄).    (15)

Denote the right side of (15) by g(x). Then g′(x) = 1 − βΔt − αΔt, so |g′(x)| < 1 using (2). This ensures local convergence to the fixed point x̄ [5]. Now (15) gives x̄ → 0⁺ if α → 0⁺ and x̄ → N if β → 0⁺.
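The contraction argument above is easy to check numerically: iterating (14) from any starting point converges to the fixed point x̄ = αN/(α + β) obtained by solving (15). A brief sketch with illustrative values:

```python
# Iterate the scalar recursion (14); since |g'(x)| = |1 - (α+β)Δt| < 1,
# the iteration contracts toward the unique fixed point x̄ = αN/(α+β) of (15).
N, alpha, beta, dt = 100.0, 0.1, 0.05, 0.01
x = 5.0
for _ in range(20000):
    x = (1 - beta * dt) * x + alpha * dt * (N - x)
x_bar = alpha * N / (alpha + beta)    # ≈ 66.67 for these values
```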

Proof of Corollary 1. Multiplying (11) by x and summing over x gives

    E(X(t + Δt)) = Σ_{x=0}^{N} x p_x(t + Δt)
        = Σ_{x=1}^{N} x p_{x−1}(t)α(N − x + 1)Δt + Σ_{x=0}^{N−1} x p_{x+1}(t)β(x + 1)Δt
          + Σ_{x=0}^{N} x p_x(t) − Σ_{x=0}^{N} x p_x(t)[α(N − x) + βx]Δt
        = Σ_{x=0}^{N−1} (x + 1)p_x(t)α(N − x)Δt + Σ_{x=1}^{N} (x − 1)p_x(t)βxΔt
          + E(X(t)) − Σ_{x=0}^{N} x p_x(t)α(N − x)Δt − Σ_{x=0}^{N} p_x(t)βx²Δt
        = E(X(t)) + Σ_{x=0}^{N} p_x(t)α(N − x)Δt − Σ_{x=0}^{N} p_x(t)βxΔt,

so that

    E(X(t + Δt)) = E(X(t)) − βΔt E(X(t)) + αΔt E(N − X(t)).

This is precisely the deterministic recursion (14); hence, if E(X(0)) = X(0), then E(X(t)) = X(t) for all t.

References

[1] L.J.S. Allen, A.M. Burgin (2000). Comparison of deterministic and stochastic SIS and SIR models in discrete time. Math. Biosci. 163:1-33.

[2] L.J.S. Allen (2003). An Introduction to Stochastic Processes with Applications to Biology. Pearson Education, Inc.

[3] L.J.S. Allen (2008). An Introduction to Stochastic Epidemic Models. In: Mathematical Epidemiology, Springer-Verlag, 81-130.

[4] N.T.J. Bailey (1964). The Elements of Stochastic Processes with Applications to the Natural Sciences. Wiley, New York.

[5] S.D. Conte, C. de Boor (1980). Elementary Numerical Analysis: An Algorithmic Approach. McGraw-Hill Book Company.

[6] B.A. Craig, P.P. Sendi (2002). Estimation of the transition matrix of a discrete-time Markov chain. Health Econ. 11:33-42.

[7] K.P. Hadeler, M.A. Lewis (2002). Spatial dynamics of the diffusive logistic equation with a sedentary compartment. Can. Appl. Math. Quart. 10:473-499.

[8] K.P. Hadeler (2009). Epidemic models with reservoirs. In: Modeling and Dynamics of Infectious Diseases. 11:253-267.

[9] T. Hillen (2003). Transport equations with resting phases. Europ. J. Appl. Math. 14:613-636.

[10] W. Jäger, S. Krömker, B. Tang (1994). Quiescence and transient growth dynamics in chemostat models. Math. Biosci. 119:225-239.

[11] N.L. Komarova, D. Wodarz (2007). Stochastic modeling of cellular colonies with quiescence: An application to drug resistance in cancer. Theor. Popul. Biol. 72:523-538.

[12] M.A. Lewis, G. Schmitz (1996). Biological invasion of an organism with separate mobile and stationary states: Modeling and analysis. Forma 11:1-25.

[13] T. Malik, H.L. Smith (2006). A resource-based model of microbial quiescence. J. Math. Biol. 53(2):231-252.

[14] M.G. Neubert, P. Klepac, P. van den Driessche (2002). Stabilizing dispersal delays in predator-prey metapopulation models. Theor. Popul. Biol. 61:339-347.

[15] P. Olofsson (2008). A stochastic model of a cell population with quiescence. J. Biol. Dyn. 2(4):386-391.

[16] N. Ye (2000). A Markov chain model of temporal behavior for anomaly detection. In: Proceedings of the 2000 IEEE Workshop on Information Assurance and Security, United States Military Academy, West Point, NY, 6-7 June 2000.