Discrete Time Markov Chain of a Dynamical System with a Rest Phase
Abstract

A stochastic model, in the form of a discrete time Markov chain, is constructed to describe the dynamics of a population that grows through cell division and has a rest state (no growth or death). Transition probabilities are described, and the asymptotic dynamics are compared with those of a deterministic discrete dynamical system. In contrast with the deterministic model, the stochastic model predicts extinction even with positive net growth. With zero net growth, the deterministic model has a locally asymptotically stable steady state, but the stochastic model has no absorbing states (except in certain limiting cases). The existence of a unique stationary probability distribution is established in this case, and a transition probability matrix is constructed to plot such distributions. Expected extinction times of populations with and without a rest phase are compared numerically, using Monte Carlo simulations.

Keywords: Stochastic model; discrete time Markov chain; population dynamics.

1 Introduction

A distinguishing feature of a stochastic model is that it associates a probability with each possible outcome, while its deterministic counterpart predicts an outcome with absolute certainty, given a set of parameter values. Even though continuous time Markov chain models have received the most attention in biological applications such as predation, competition and epidemic processes [2], discrete time Markov chain (DTMC) models have also been used abundantly; see, for example, [1, 2, 3, 6, 16]. The present paper focuses on the construction of a discrete time stochastic model, in the form of a Markov chain, describing the dynamics of, for example, microbial cells, whose members can switch between an active phase, where they can divide and also die, and a rest state (reservoir, quiescent, or dormant phase) where they neither reproduce nor die.
Members of the active class can switch to the rest phase at a constant positive rate, and can switch back to the active state at a (generally different) constant positive rate, in response to certain environmental or demographic factors. Such dynamical systems have been used to deterministically model a variety of biological processes. Lewis et al. [12] study populations in which individuals switch between a mobile state (with positive mortality) and a sedentary subpopulation (which reproduces). A similar model is studied in [7]. In [9], transport equations with quiescent phases are studied. Chemostat models are extended by a quiescent phase in [10, 13]. SIR models with a reservoir state are examined in [8]. A predator-prey system is studied in [14], where the predator can leave the habitat and return; as seen from the prey, the predator would enter a quiescent phase. A stochastic model of cellular quiescence, in the form of a branching process, is presented in [15]. The effects of cellular quiescence on cancer drug treatment are studied in a continuous-time stochastic model in [11]. The purpose of the current paper is to present a more fundamental framework, constructing a discrete time stochastic model that incorporates a quiescent state in the dynamics of a population, and to compare its asymptotic dynamics with those of the corresponding deterministic model. The approach is similar to that of [1], which compares the dynamics of deterministic and stochastic epidemic models.
In the current paper, (i) the transition probabilities are described that compute the population levels at the next time step from the current population levels; (ii) it is shown that the population modeled by the DTMC is predicted to be washed out asymptotically, with positive probability, despite positive net growth (in contrast with the monotonically increasing population predicted by the deterministic model); (iii) the stochastic transition matrix is constructed for the special case of zero net population growth, and is then used to establish the existence of, and to plot, the stationary probability distribution when the corresponding deterministic model has a steady state; (iv) the expected time to extinction of a population with a rest phase is compared, through Monte Carlo simulations, with the average time to extinction of an otherwise identical population without a rest phase.

The paper is organized as follows. In section 2, the asymptotic behavior of the population (humans, microbial cells) is examined by modeling it as a deterministic system of difference equations. In section 3, a stochastic model is constructed in the form of a DTMC of the associated stochastic process. Section 3.1 contains the construction of a transition matrix, and plots of stationary probability distributions, for the special case of zero net growth. In section 3.2, the expected time to absorption of solutions of the two systems, with and without a reservoir, is compared using Monte Carlo simulations.
2 The Deterministic Model

A deterministic model with a reservoir state will serve as a reference. Denote the concentrations of individuals in the active and reservoir phases by X and Y, respectively, the constant switching rate from X to Y by β > 0, and that from Y to X by α > 0. The discrete-time deterministic model has the form

    X(t + Δt) = (1 + mΔt − δΔt − βΔt)X(t) + αΔt Y(t),
    Y(t + Δt) = (1 − αΔt)Y(t) + βΔt X(t),                                  (1)

where mΔt and δΔt are, respectively, the number of new recruitments (births) and the number of deaths per individual in the active phase, in time Δt; αΔt and βΔt are, respectively, the number of moves per individual from the reservoir (rest phase) to the active phase, and back, in time Δt. To guarantee the existence of non-negative solutions and their asymptotic convergence to equilibria, the following assumptions are sufficient (they assert that, for any given individual, the net number of movements out of each class in time Δt cannot exceed one):

    0 < αΔt < 1,    [β − (m − δ)]Δt ≤ 1.                                   (2)

Then the following result can be established.

Theorem 1. (a) If m ≠ δ, then (0, 0) is the unique fixed point of (1); it is locally asymptotically stable if and only if m < δ. (b) If m = δ, then (1) has locally asymptotically stable fixed points of the form (x̄, (β/α)x̄). Furthermore, with X(0) + Y(0) = N, (i) (0, N) is a locally asymptotically stable state in the limiting case α → 0+; (ii) (N, 0) is a locally asymptotically stable state in the limiting case β → 0+.

A simple proof is given in the appendix. Theorem 1 establishes the asymptotic dynamics of the deterministic model (1), which will be used for comparison with the corresponding results for the counterpart stochastic model.

3 Formulation of a Discrete Time Markov Chain

Following [1], suppose the population size is bounded by M. Let X(t) and Y(t) denote discrete random variables for the numbers of individuals in the active and resting classes, respectively. Let t ∈ {0, Δt, 2Δt, ...} and X(t), Y(t) ∈ {0, 1, 2, ..., M}, with X(t) + Y(t) ≤ M.
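As a minimal numerical sketch (not part of the original analysis), the recursion (1) can be iterated directly; the parameter values below are illustrative, chosen with m = δ so that the total X + Y is conserved:

```python
def iterate_model(X0, Y0, m, delta, alpha, beta, dt, steps):
    """Iterate the deterministic difference equations (1)."""
    X, Y = float(X0), float(Y0)
    path = [(X, Y)]
    for _ in range(steps):
        # Simultaneous update: both right-hand sides use the old (X, Y).
        X, Y = ((1 + (m - delta - beta) * dt) * X + alpha * dt * Y,
                (1 - alpha * dt) * Y + beta * dt * X)
        path.append((X, Y))
    return path

# With m = delta the total X + Y is conserved and X(t) approaches the
# fixed point alpha*N/(alpha + beta); here N = 100 and the limit is 50.
path = iterate_model(20, 80, m=0.1, delta=0.1, alpha=0.1, beta=0.1,
                     dt=0.1, steps=5000)
```

The simultaneous tuple assignment matters: updating X first and then using the new X in the Y-equation would silently change the model.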
The discrete-time stochastic dynamical system {X(t), Y(t)}, t = 0, Δt, 2Δt, ..., is a bivariate process [2] with joint probability function

    p_(x,y)(t) := Prob[(X(t), Y(t)) = (x, y)],

where x, y ∈ {0, 1, 2, ..., M} and x + y ≤ M. The probability of a transition from the state (x, y) to (x + k, y + j) is defined by (using the notation of [1])

    p_(x+k,y+j),(x,y)(Δt) := Prob[(ΔX, ΔY) = (k, j) | (X(t), Y(t)) = (x, y)],        (3)

where ΔX = X(t + Δt) − X(t) and ΔY = Y(t + Δt) − Y(t). In the DTMC, Δt is assumed to be small enough that at most one change occurs in [t, t + Δt]: either the birth of a single individual in the active phase, the death of a single individual in the active phase, or a shift of one individual from one phase to the other. Thus, following [3], the transition probabilities for the model (1) are defined using the rates in (1) (multiplied by Δt):

    Prob[(X(t + Δt), Y(t + Δt)) = (x + 1, y)     | (X(t), Y(t)) = (x, y)] = mxΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x − 1, y)     | (X(t), Y(t)) = (x, y)] = δxΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x + 1, y − 1) | (X(t), Y(t)) = (x, y)] = αyΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x − 1, y + 1) | (X(t), Y(t)) = (x, y)] = βxΔt,
    Prob[(X(t + Δt), Y(t + Δt)) = (x, y)         | (X(t), Y(t)) = (x, y)] = 1 − [(m + δ + β)x + αy]Δt.

In order for the probabilities to be non-negative and bounded by 1, we assume

    (m + δ + α + β)MΔt ≤ 1.                                                          (4)

Since all parameters are positive, assumption (4) is stronger than (2), and is satisfied if Δt is sufficiently small. It should be mentioned that a less restrictive set of assumptions, namely mMΔt ≤ 1, δMΔt ≤ 1, αMΔt ≤ 1, βMΔt ≤ 1, could be invoked if the stochastic model did not allow for the possibility of no change of state in time Δt. Further assuming that the process is time homogeneous,

    p_(x+k,y+j),(x,y)(t + Δt, t) = p_(x+k,y+j),(x,y)(Δt)
        = { mxΔt,                        (k, j) = (1, 0);
            δxΔt,                        (k, j) = (−1, 0);
            αyΔt,                        (k, j) = (1, −1);
            βxΔt,                        (k, j) = (−1, 1);
            1 − [(m + δ + β)x + αy]Δt,   (k, j) = (0, 0);
            0,                           otherwise.                                  (5)

The sum of the transition probabilities is 1, as all possible changes of the state (x, y) are accounted for. The transition matrix is not expressible in a simple form; it depends on the ordering of k and j [3]. (The transition matrix for a special case is constructed in section 3.1.)
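The transition probabilities (5) can be sampled directly with a single uniform draw. The following Python sketch (the function name and structure are implementation choices, not from the paper) advances the bivariate chain by one time step:

```python
import random

def dtmc_step(x, y, m, delta, alpha, beta, dt):
    """Sample one transition of the bivariate DTMC from state (x, y),
    using the five cases of the transition probabilities (5)."""
    u = random.random()
    p1 = m * x * dt             # birth:          (x, y) -> (x+1, y)
    p2 = p1 + delta * x * dt    # death:          (x, y) -> (x-1, y)
    p3 = p2 + alpha * y * dt    # rest -> active: (x, y) -> (x+1, y-1)
    p4 = p3 + beta * x * dt     # active -> rest: (x, y) -> (x-1, y+1)
    if u < p1:
        return x + 1, y
    if u < p2:
        return x - 1, y
    if u < p3:
        return x + 1, y - 1
    if u < p4:
        return x - 1, y + 1
    return x, y                 # no change, with the remaining probability
```

Note that from (0, 0) every jump probability vanishes, so the sampler can never leave that state, mirroring the absorbing-state argument below.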
It is finally assumed that the process satisfies the Markov property:

    Prob[(X(t + Δt), Y(t + Δt)) | (X(0), Y(0)), (X(Δt), Y(Δt)), ..., (X(t), Y(t))]
        = Prob[(X(t + Δt), Y(t + Δt)) | (X(t), Y(t))].                               (6)

Applying (5) and (6), the probabilities at time t + Δt are related to those at time t.
Proposition 1. The state probabilities satisfy the difference equations:

    p_(x,y)(t + Δt) = p_(x−1,y)(t)m(x − 1)Δt + p_(x+1,y)(t)δ(x + 1)Δt
                      + p_(x+1,y−1)(t)β(x + 1)Δt + p_(x−1,y+1)(t)α(y + 1)Δt
                      + p_(x,y)(t)[1 − (m + δ + β)xΔt − αyΔt],
                      x = 2, ..., M − 2,  y = 1, ..., M − 3,  x + y ≤ M − 1,

    p_(x,y)(t + Δt) = p_(x−1,y)(t)m(x − 1)Δt + p_(x+1,y−1)(t)β(x + 1)Δt
                      + p_(x−1,y+1)(t)α(y + 1)Δt + p_(x,y)(t)[1 − (δ + β)xΔt − αyΔt],
                      x = 2, ..., M − 1,  y = 1, ..., M − 2,  x + y = M,

    p_(x,0)(t + Δt) = p_(x−1,0)(t)m(x − 1)Δt + p_(x+1,0)(t)δ(x + 1)Δt + p_(x−1,1)(t)αΔt
                      + p_(x,0)(t)[1 − (m + δ + β)xΔt],  x = 2, ..., M − 1,

    p_(0,y)(t + Δt) = p_(1,y)(t)δΔt + p_(1,y−1)(t)βΔt + p_(0,y)(t)(1 − αyΔt),
                      y = 1, ..., M − 1,

    p_(1,y)(t + Δt) = p_(2,y)(t)2δΔt + p_(0,y+1)(t)α(y + 1)Δt + p_(2,y−1)(t)2βΔt
                      + p_(1,y)(t)[1 − (m + δ + β)Δt − αyΔt],  y = 1, ..., M − 2,    (7)

    p_(1,0)(t + Δt) = p_(2,0)(t)2δΔt + p_(0,1)(t)αΔt + p_(1,0)(t)[1 − (m + δ + β)Δt],

    p_(M,0)(t + Δt) = p_(M−1,0)(t)m(M − 1)Δt + p_(M−1,1)(t)αΔt + p_(M,0)(t)[1 − (β + δ)MΔt],

    p_(0,M)(t + Δt) = p_(1,M−1)(t)βΔt + p_(0,M)(t)(1 − αMΔt),

    p_(1,M−1)(t + Δt) = p_(2,M−2)(t)2βΔt + p_(0,M)(t)αMΔt
                        + p_(1,M−1)(t)[1 − (δ + β)Δt − α(M − 1)Δt],

    p_(0,0)(t + Δt) = p_(1,0)(t)δΔt + p_(0,0)(t).

The associated directed graph showing the positive transition probabilities lies on a two-dimensional lattice, shown in Figure 1. The chain is reducible, consisting of two communication classes [2], namely {(0, 0)} and {(x, y) : x, y = 0, 1, 2, ..., M, x + y ≤ M} \ {(0, 0)}. The latter class is not closed, since p_(0,0),(1,0)(Δt) > 0. From (5) it is clear that p_(0,0),(0,0)(Δt) = 1, so (0, 0) is an absorbing state, and the class {(0, 0)} is closed, since no other state can be reached from (0, 0). All other states are transient, and asymptotically all sample paths are absorbed into the state (0, 0). On the other hand, (0, 0) is an attracting equilibrium of the deterministic model (1) if m < δ, but an unstable steady state if m > δ. The following result is established:

Theorem 2. Assume m ≠ δ. Then the DTMC is reducible with two communication classes.
(0, 0) is an absorbing state, and all other states are transient.

This result implies that all sample paths are eventually absorbed into the trivial steady state. The absorption may be slow, depending on the parameter values, particularly when m > δ. Of course, for the deterministic model (1) this absorption is impossible when m > δ and X(0) + Y(0) > 0. This result for the stochastic model thus stands in contrast with that of its deterministic counterpart (Theorem 1(a)).
Figure 1: Directed graph of the model (1).

Figure 2 shows three sample paths of the DTMC model with m ≠ δ. Although the deterministic solution rises exponentially, due to the positive net growth, when m > δ, one of the sample paths of the stochastic model is absorbed within the first few time steps. The resulting probability distribution will be bimodal (not shown in this work).

3.1 Case m = δ

With constant total population size N (due to zero net growth), the corresponding dynamical system {X(t)}, t = 0, Δt, 2Δt, ..., is a univariate stochastic process with the single random variable X(t), since Y(t) = N − X(t). We investigate the probability distribution as a function of time by formulating a DTMC for the single random variable X(t) ∈ {0, 1, 2, ..., N}. Consider the probability vector p(t) = (p_0(t), p_1(t), ..., p_N(t))^T associated with X(t), where the associated probability function is given by

    p_x(t) = Prob[X(t) = x],  x = 0, 1, 2, ..., N,  t = 0, Δt, 2Δt, ...,
Figure 2: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m < δ (left) and m > δ (right). The parameter values are, Left: X(0) = 20, Prob[X(0) = 20] = 1, Y(0) = 10, Prob[Y(0) = 10] = 1, m = 0.1, δ = 0.3, α = β = 0.1; Right: Δt = 0.1, X(0) = 10, Prob[X(0) = 10] = 1, Y(0) = 0, Prob[Y(0) = 0] = 1, m = 0.2, δ = 0.1, α = 0.1, β = 0.1.

with Σ_{x=0}^{N} p_x(t) = 1. Defining the probability of a transition from the state X(t) = x to the state X(t + Δt) = w in time Δt, and assuming the Markov property and time homogeneity,

    p_wx(Δt) = Prob[X(t + Δt) = w | X(t) = x]

(where Δt is small enough that X(t) can change by at most one in time Δt, so w ∈ {x + 1, x − 1, x}), one gets

    p_wx(Δt) = { α(N − x)Δt,              w = x + 1;
                 βxΔt,                    w = x − 1;
                 1 − [α(N − x) + βx]Δt,   w = x;
                 0,                       otherwise.                                 (8)

In order to describe the process as a birth-and-death process, we denote

    b(x) = α(N − x),   d(x) = βx,                                                    (9)

so that (8) is rewritten as

    p_wx(Δt) = { b(x)Δt,                 w = x + 1;
                 d(x)Δt,                 w = x − 1;
                 1 − [b(x) + d(x)]Δt,    w = x;
                 0,                      otherwise.                                  (10)

It should be noted from (9) that b(0) = αN and b(N) = d(0) = 0. Indeed, if no individual is present in class X at time t, no shift out of X can occur in [t, t + Δt], and the only addition to this class in [t, t + Δt] is an individual moving in from class Y at
the rate α (note that Y(t) = N, so that the probability of this shift is αY(t)Δt = αNΔt). On the other hand, if all individuals are in class X at time t, there can be no further shift into X in time [t, t + Δt]. Since all possible changes in x during time Δt are accounted for, the sum of all the probabilities is 1. Furthermore, we choose Δt small enough that no transition probability exceeds 1:

    max_{x ∈ {0, ..., N}} [b(x) + d(x)]Δt ≤ 1.

Applying the definition of the transition probability p_wx and the Markov property, the probabilities at time t + Δt can be expressed in terms of the probabilities at time t:

Proposition 2. The state probabilities p_x(t) satisfy the following difference equations:

    p_x(t + Δt) = p_{x−1}(t)b(x − 1)Δt + p_{x+1}(t)d(x + 1)Δt + p_x(t)(1 − [b(x) + d(x)]Δt)
                = p_{x−1}(t)α(N − x + 1)Δt + p_{x+1}(t)β(x + 1)Δt
                  + p_x(t)(1 − [αN − (α − β)x]Δt),   x = 1, 2, ..., N − 1,           (11)

    p_0(t + Δt) = p_1(t)βΔt + p_0(t)(1 − αNΔt),

    p_N(t + Δt) = p_{N−1}(t)αΔt + p_N(t)(1 − βNΔt).

Ordering the states from 0 to N, a transition matrix T(Δt) can now be constructed. Let t_ij denote the (i, j)th element of T(Δt); then t_ij := p_ij(Δt). Hence it follows that

    t_ii = p_ii(Δt) = 1 − [α(N − i) + βi]Δt = 1 − [αN − i(α − β)]Δt,  i = 0, ..., N,
    t_{i,i+1} = p_{i,i+1}(Δt) = β(i + 1)Δt,   i = 0, ..., N − 1,
    t_{i+1,i} = p_{i+1,i}(Δt) = α(N − i)Δt,   i = 0, ..., N − 1,
    t_ij = 0 otherwise.

The DTMC has the following transition matrix:

    T(Δt) =
    | 1 − αNΔt   βΔt                                                               |
    | αNΔt       1 − [αN − (α − β)]Δt   2βΔt                                       |
    |            α(N − 1)Δt             1 − [αN − 2(α − β)]Δt   ⋱                  |
    |                                   α(N − 2)Δt              ⋱   (N − 1)βΔt     |
    |                                          ⋱   1 − [αN − (N − 1)(α − β)]Δt  NβΔt |
    |                                              αΔt                    1 − NβΔt |

T(Δt) is a tridiagonal stochastic matrix (unit column sums). Let p(0) be the initial probability vector. Then the probability vector at time Δt is given by p(Δt) = T(Δt)p(0). Similarly, p(2Δt) = T(Δt)p(Δt) = T²(Δt)p(0), and so on. At time t = nΔt,

    p(t) = Tⁿ(Δt)p(0).                                                               (12)
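The tridiagonal matrix and the iteration (12) are straightforward to implement. The following Python sketch (illustrative values, with α = β so the chain is symmetric) approximates the long-run probability vector by repeated multiplication:

```python
import numpy as np

def transition_matrix(N, alpha, beta, dt):
    """Tridiagonal column-stochastic matrix T(dt) for the case m = delta."""
    T = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        T[i, i] = 1 - (alpha * (N - i) + beta * i) * dt   # stay in state i
    for i in range(N):
        T[i, i + 1] = beta * (i + 1) * dt    # down-jump from state i+1, rate d
        T[i + 1, i] = alpha * (N - i) * dt   # up-jump from state i, rate b
    return T

N, alpha, beta, dt = 100, 0.1, 0.1, 0.1      # illustrative parameter values
T = transition_matrix(N, alpha, beta, dt)
p = np.zeros(N + 1)
p[20] = 1.0                                   # all initial mass on X(0) = 20
for _ in range(20000):                        # p(t) = T^n p(0), as in (12)
    p = T @ p
# For alpha = beta the stationary distribution is symmetric in x and N - x
# and peaks at N/2.
```

Because the columns of T sum to one, each multiplication preserves total probability, so p remains a probability vector throughout the iteration.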
The probability vector p(t) gives the probability distribution of the random variable X(t) at time t. The asymptotic dynamics of the DTMC are given by

    lim_{t→∞} p(t) = lim_{n→∞} Tⁿ(Δt)p(0).

Figure 3: Directed graph of the model (1) with m = δ.

Figure 3 shows the directed graph of the DTMC. The chain consists of a single communication class and is therefore irreducible, with all states persistent (recurrent). The chain is aperiodic; hence there exists a unique stationary probability distribution (Corollary 2.4 of [2]), illustrated in Figures 4-6. In the limiting cases in which switching in one direction is very slow, the DTMC has an absorbing state. If transition into (respectively, out of) the active state is slow, then the directed graph takes the form depicted in Figure 7(a) (respectively, 7(b)). The state 0 (respectively, N) is absorbing, and all other states are transient. Indeed, for α → 0+ (respectively, β → 0+), since Tⁿ(Δt) is also a stochastic matrix [4], it can be shown that

    lim_{n→∞} Tⁿ(Δt) = | 1 1 ⋯ 1 |        (α → 0+),      lim_{n→∞} Tⁿ(Δt) = | 0 0 ⋯ 0 |        (β → 0+),
                       | 0 0 ⋯ 0 |                                          | ⋮        |
                       | ⋮        |                                          | 0 0 ⋯ 0 |
                       | 0 0 ⋯ 0 |                                          | 1 1 ⋯ 1 |

so that all probability mass is eventually placed on state 0 (respectively, state N). The following result sums up the above discussion:

Theorem 3. Assume m = δ. Then the probability distribution of the random variable X(t) at time t is determined by the probability vector p(t), given by (12). Furthermore,

1. All states of the DTMC are recurrent, and a unique stationary probability distribution exists.

2. 0 is an absorbing state of the DTMC, and all other states are transient, in the limiting case α → 0+.
Figure 4: Left: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m = δ, when α = β. The steady state of the deterministic model contains X = 50 individuals. Right: The corresponding probability distribution of the DTMC, plotted as a function of time. The stationary probability distribution follows a path similar to the equilibrium solution of the deterministic model. The parameter values are Δt = 0.1, α = β = 0.1, N = 100, X(0) = 20, Prob[X(0) = 20] = 1.

Figure 5: Left: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m = δ, when α > β. Right: The corresponding probability distribution of the DTMC, plotted as a function of time. The stationary probability distribution follows a path similar to the equilibrium solution of the deterministic model. The parameter values are Δt = 0.1, α = 0.1, β = 0.01, N = 100, X(0) = 20, Prob[X(0) = 20] = 1.

3. N is an absorbing state of the DTMC, and all other states are transient, in the limiting case β → 0+.
Figure 6: Left: Three sample paths of the DTMC model are graphed with the deterministic solution (dashed line) for the case m = δ, when α < β. Right: The corresponding probability distribution of the DTMC, plotted as a function of time. The stationary probability distribution follows a path similar to the equilibrium solution of the deterministic model. The parameter values are Δt = 0.1, α = 0.01, β = 0.1, N = 100, X(0) = 20, Prob[X(0) = 20] = 1.

Figure 7: Directed graph of the model (1) with m = δ and α → 0+ (a) or β → 0+ (b).

While Theorem 1 provides the precise, locally stable steady states for the special case with no net growth, Theorem 3 asserts no such steady states; instead, Figures 4-6 demonstrate the existence of stationary probability distributions, in line with Theorem 3. Based on the transition probabilities (8), three stochastic realizations of the stochastic process {X(t)}, t ∈ {0, Δt, 2Δt, ...}, are plotted in Figure 4 (equal
switch rates in and out of the rest phase), Figure 5 (faster switching out of the reservoir) and Figure 6 (faster switching into the rest state), together with the solution of the corresponding deterministic model, for the case m = δ. The initial population size is 20 in the active class. For illustration, a total (constant) population size of N = 100 is chosen. The corresponding stationary probability distribution is also plotted in each case, based on the probability vector p(t) given by (12). For α, β > 0, (0, 0) is an unstable steady state of the two models; every increase in X(t) (in either model) is accompanied by a decrease in Y(t), and vice versa. Therefore, as t → ∞, the solution (X(t), Y(t)) of the deterministic model approaches a non-zero equilibrium, whose value depends on the parameters α and β, while the solution of the stochastic model approaches a probability distribution centered around the solution of the deterministic model. The two models agree on the asymptotic dynamics corresponding to the limiting cases (slow switching to or from the rest phase).

3.1.1 Mean of the Random Variable, with m = δ

The difference equation for the mean can be derived from (11). The mean of X(t) is given by E(X(t)) = Σ_{x=0}^{N} x p_x(t).

Corollary 1. Assume m = δ. Then the mean of the random variable X(t) is equal to the solution X(t) of the deterministic model (1).

A proof is given in the Appendix.

Figure 8: Probability distribution of the DTMC with m = δ. Parameter values are: Δt = 0.1, N = 100, X(0) = 20, Prob[X(0) = 20] = 1; α = 10⁻⁵, β = 0.1 (left) and α = 0.2, β = 10⁻⁵ (right).
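Corollary 1 can be checked numerically: advancing the probability vector with the difference equations of Proposition 2 alongside the scalar recursion (14) from the same initial data, the mean Σ x p_x(t) and the deterministic solution coincide up to round-off. A Python sketch with illustrative values (α > β):

```python
import numpy as np

N, alpha, beta, dt = 100, 0.1, 0.01, 0.1
p = np.zeros(N + 1)
p[20] = 1.0                      # Prob[X(0) = 20] = 1 (illustrative)
x_det = 20.0                     # deterministic solution of (14)
x = np.arange(N + 1)
b = alpha * (N - x)              # birth rates b(x) = alpha*(N - x)
d = beta * x                     # death rates d(x) = beta*x
for _ in range(1000):
    up = np.zeros(N + 1)
    up[1:] = p[:-1] * b[:-1] * dt        # inflow from x-1 via b(x-1)
    down = np.zeros(N + 1)
    down[:-1] = p[1:] * d[1:] * dt       # inflow from x+1 via d(x+1)
    p = up + down + p * (1 - (b + d) * dt)   # Proposition 2 update
    x_det = (1 - beta * dt) * x_det + alpha * dt * (N - x_det)
mean_X = float(x @ p)            # E(X(t)) = sum_x x p_x(t)
# mean_X and x_det satisfy the same linear recursion and so agree.
```

Both quantities obey the same difference equation with the same initial value, so any discrepancy is purely floating-point error.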
3.2 Expected Time to Extinction: Monte Carlo Simulations

The expected extinction time of a population with a rest phase (modeled by the stochastic model) is investigated numerically, using Monte Carlo simulations. With positive net growth, extinction times may be extremely long, since the probability of extinction, though positive, is small. The comparison is therefore presented for populations with negative net growth (m < δ), for which it is known that all solutions are absorbed within a reasonable amount of time, regardless of the switch rates α and β. The times to extinction of the two types of population are computed using Monte Carlo simulations: one population with the possibility of escape into the reservoir (rest phase), and the other without a reservoir state. To carry out the comparison, three sample paths for each of the two populations are plotted, based on the transition probabilities (5), in Figure 9 (left), indicating the survival advantage conferred by a reservoir. Furthermore, 10,000 stochastic realizations of each population type were executed in Matlab to construct histograms showing the distribution of extinction times for each population type (Figure 9, right). The histograms show a clear advantage of the rest phase, which more than doubles the mean extinction time (over 39 time units, as compared to over 17).

Figure 9: Left: Three sample paths of the DTMC model are graphed for each of the systems without rest states (α = β = 0, thick lines) and with rest states (α = β = 0.1, thin lines). Parameter values are m = 0.1, δ = 0.2, Δt = 0.1, Prob[X(0) = 10] = 1, Prob[Y(0) = 10] = 1. Right: Histograms based on 10,000 stochastic realizations showing the time to extinction for a system with resting states, with an average extinction time over 39 time units, and 10,000 realizations for the system without a resting state (α = β = 0), with an average extinction time over 17 time units.
The last bar, at time 100, contains the number of simulations (out of 10,000 for each) that took over 100 time units to reach extinction: 423 sample paths for the model with rest states and 74 for the model without. The parameter values are m = 0.1, δ = 0.3, α = 0.1, β = 0.1, Δt = 0.1, Prob[X(0) = 10] = 1, Prob[Y(0) = 10] = 1.
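The Monte Carlo comparison can be sketched as follows. This Python fragment uses fewer realizations than the paper's histograms, and it assumes (as an illustrative modeling choice, not taken from the paper) that the no-reservoir population starts with all of its individuals active; sample paths are capped at 100 time units:

```python
import random

def extinction_time(x, y, m, delta, alpha, beta, dt, t_max=100.0):
    """Run one sample path of the DTMC until absorption at (0, 0), capped at t_max."""
    t = 0.0
    while x + y > 0 and t < t_max:
        u = random.random()
        p1 = m * x * dt                 # birth
        p2 = p1 + delta * x * dt        # death
        p3 = p2 + alpha * y * dt        # rest -> active
        p4 = p3 + beta * x * dt         # active -> rest
        if u < p1:
            x += 1
        elif u < p2:
            x -= 1
        elif u < p3:
            x, y = x + 1, y - 1
        elif u < p4:
            x, y = x - 1, y + 1
        t += dt
    return t

random.seed(0)
m, delta, dt, runs = 0.1, 0.3, 0.1, 2000
# With a rest phase: 10 active and 10 resting individuals initially.
with_rest = sum(extinction_time(10, 10, m, delta, 0.1, 0.1, dt)
                for _ in range(runs)) / runs
# Without a rest phase: all 20 individuals active, alpha = beta = 0.
without_rest = sum(extinction_time(20, 0, m, delta, 0.0, 0.0, dt)
                   for _ in range(runs)) / runs
# The reservoir delays extinction, so with_rest exceeds without_rest.
```

Capping paths at t_max censors the slowest realizations, as the histogram's last bar does, so the two averages are censored means rather than exact expectations.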
Discussion

A stochastic model, in the form of a discrete time Markov chain, is constructed for a dynamical system describing a population whose members can switch between an active state, where they can multiply through cell division or die, and a rest or quiescent phase, which serves only as a reservoir, without growth or decline. It is shown that, although the deterministic model exhibits a monotonically increasing population with positive net growth, the corresponding stochastic model associates a positive probability with extinction. A unique stationary probability distribution is shown to exist in the case of no net population growth. A transition matrix is constructed for this case and is used to plot stationary probability distributions when the deterministic model gives precise, locally asymptotically stable steady states. Using Monte Carlo simulations, the expected time to extinction is shown to increase with a rest state.

4 Appendix

Proof of Theorem 1. Suppose (x, y) is a fixed point of (1). Then

    x = (1 + mΔt − δΔt − βΔt)x + αΔt y,
    y = (1 − αΔt)y + βΔt x,

or

    (m − δ)x − βx + αy = 0,
    βx − αy = 0.                                                                     (13)

If m ≠ δ, then (0, 0) is the only solution of (13). To discuss the stability of this fixed point, note that (1) gives

    (X + Y)(t + Δt) = (1 + (m − δ)Δt)(X + Y)(t),

so that, for X(0) + Y(0) > 0, the solutions of (1) diverge to infinity when m > δ and converge to (0, 0) when m < δ.

To prove (b), consider the case m = δ. Then the solutions of (13) are given by y = (β/α)x. To discuss the stability of these fixed points, note that (1) gives Δ(X(t) + Y(t)) = 0, so that X(t) + Y(t) = X(0) + Y(0) = N (i.e., constant). Therefore (1) reduces to the single equation

    X(t + Δt) = (1 − βΔt)X(t) + αΔt(N − X(t)).                                       (14)

If x̄ is a fixed point of (14), then

    x̄ = (1 − βΔt)x̄ + αΔt(N − x̄).                                                    (15)

Denote the right side of (15) by g(x). Then g′(x) = 1 − βΔt − αΔt, and |g′(x)| < 1 by (2). This ensures local convergence to the fixed point x̄ [5].
Now, solving (15) gives x̄ = αN/(α + β), so that x̄ → 0 as α → 0+ and x̄ → N as β → 0+.
Proof of Corollary 1. Multiplying (11) by x and summing over x gives

    E(X(t + Δt)) = Σ_{x=0}^{N} x p_x(t + Δt)
                 = Σ_{x=1}^{N} x p_{x−1}(t)α(N − x + 1)Δt + Σ_{x=0}^{N−1} x p_{x+1}(t)β(x + 1)Δt
                   + Σ_{x=0}^{N} x p_x(t) − Σ_{x=0}^{N} x p_x(t)α(N − x)Δt − Σ_{x=0}^{N} x p_x(t)βxΔt.

Shifting the index in the first two sums (x → x + 1 and x → x − 1, respectively) gives

    E(X(t + Δt)) = E(X(t)) + Σ_{x=0}^{N−1} (x + 1)p_x(t)α(N − x)Δt + Σ_{x=1}^{N} (x − 1)p_x(t)βxΔt
                   − Σ_{x=0}^{N} x p_x(t)α(N − x)Δt − Σ_{x=0}^{N} x p_x(t)βxΔt
                 = E(X(t)) + αΔt Σ_{x=0}^{N} (N − x)p_x(t) − βΔt Σ_{x=0}^{N} x p_x(t),

where the last step uses (N − x)p_x(t) = 0 at x = N and x p_x(t) = 0 at x = 0, so that

    E(X(t + Δt)) = E(X(t)) − βΔt E(X(t)) + αΔt E(N − X(t)).

This is the same difference equation (14) satisfied by the deterministic solution, so with the same initial condition E(X(t)) = X(t) for all t.

References

[1] L.J.S. Allen, A.M. Burgin (2000). Comparison of deterministic and stochastic SIS and SIR models in discrete time. Math. Biosci. 163.

[2] L.J.S. Allen (2003). An Introduction to Stochastic Processes with Applications to Biology. Pearson Education, Inc.

[3] L.J.S. Allen (2008). An Introduction to Stochastic Epidemic Models. In: Mathematical Epidemiology, Springer-Verlag.

[4] N.T.J. Bailey (1964). The Elements of Stochastic Processes with Applications to the Natural Sciences. Wiley, New York.
[5] S.D. Conte, C. de Boor (1980). Elementary Numerical Analysis: An Algorithmic Approach. McGraw-Hill Book Company.

[6] B.A. Craig, P.P. Sendi (2002). Estimation of the transition matrix of a discrete-time Markov chain. Health Econ. 11.

[7] K.P. Hadeler, M.A. Lewis (2002). Spatial dynamics of the diffusive logistic equation with a sedentary compartment. Can. Appl. Math. Quart. 10.

[8] K.P. Hadeler (2009). Epidemic models with reservoirs. In: Modeling and Dynamics of Infectious Diseases. 11.

[9] T. Hillen (2003). Transport equations with resting phases. Europ. J. Appl. Math. 14.

[10] W. Jäger, S. Krömker, B. Tang (1994). Quiescence and transient growth dynamics in chemostat models. Math. Biosci. 119.

[11] N.L. Komarova, D. Wodarz (2007). Stochastic modeling of cellular colonies with quiescence: An application to drug resistance in cancer. Theor. Popul. Biol. 72.

[12] M.A. Lewis, G. Schmitz (1996). Biological invasion of an organism with separate mobile and stationary states: Modeling and analysis. Forma 11:1-25.

[13] T. Malik, H.L. Smith (2006). A resource-based model of microbial quiescence. J. Math. Biol. 53.

[14] M.G. Neubert, P. Klepac, P. van den Driessche (2002). Stabilizing dispersal delays in predator-prey metapopulation models. Theor. Popul. Biol. 61.

[15] P. Olofsson (2008). A stochastic model of a cell population with quiescence. J. Biol. Dyn. 2.

[16] N. Ye (2000). A Markov chain model of temporal behavior for anomaly detection. In: Proceedings of the 2000 IEEE Workshop on Information Assurance and Security, United States Military Academy, West Point, NY, 6-7 June 2000.
Introduction to Stochastic Processes with Applications in the Biosciences David F. Anderson University of Wisconsin at Madison Copyright c 213 by David F. Anderson. Contents 1 Introduction 4 1.1 Why study
More informationLECTURE #6 BIRTH-DEATH PROCESS
LECTURE #6 BIRTH-DEATH PROCESS 204528 Queueing Theory and Applications in Networks Assoc. Prof., Ph.D. (รศ.ดร. อน นต ผลเพ ม) Computer Engineering Department, Kasetsart University Outline 2 Birth-Death
More informationCS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions
CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some
More informationMarkov-modulated interactions in SIR epidemics
Markov-modulated interactions in SIR epidemics E. Almaraz 1, A. Gómez-Corral 2 (1)Departamento de Estadística e Investigación Operativa, Facultad de Ciencias Matemáticas (UCM), (2)Instituto de Ciencias
More informationAn Overview of Methods for Applying Semi-Markov Processes in Biostatistics.
An Overview of Methods for Applying Semi-Markov Processes in Biostatistics. Charles J. Mode Department of Mathematics and Computer Science Drexel University Philadelphia, PA 19104 Overview of Topics. I.
More informationMATH3203 Lecture 1 Mathematical Modelling and ODEs
MATH3203 Lecture 1 Mathematical Modelling and ODEs Dion Weatherley Earth Systems Science Computational Centre, University of Queensland February 27, 2006 Abstract Contents 1 Mathematical Modelling 2 1.1
More informationNonstandard Finite Difference Methods For Predator-Prey Models With General Functional Response
Nonstandard Finite Difference Methods For Predator-Prey Models With General Functional Response Dobromir T. Dimitrov Hristo V. Kojouharov Technical Report 2007-0 http://www.uta.edu/math/preprint/ NONSTANDARD
More informationEXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS
J. Appl. Prob. 41, 1211 1218 (2004) Printed in Israel Applied Probability Trust 2004 EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS BEN CAIRNS and P. K. POLLETT, University of Queensland
More informationLecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321
Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process
More informationExample: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected
4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X
More informationApplied Mathematics Letters. Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system
Applied Mathematics Letters 5 (1) 198 1985 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Stationary distribution, ergodicity
More information2 Discrete-Time Markov Chains
2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,
More informationThe SIS and SIR stochastic epidemic models revisited
The SIS and SIR stochastic epidemic models revisited Jesús Artalejo Faculty of Mathematics, University Complutense of Madrid Madrid, Spain jesus_artalejomat.ucm.es BCAM Talk, June 16, 2011 Talk Schedule
More informationThursday. Threshold and Sensitivity Analysis
Thursday Threshold and Sensitivity Analysis SIR Model without Demography ds dt di dt dr dt = βsi (2.1) = βsi γi (2.2) = γi (2.3) With initial conditions S(0) > 0, I(0) > 0, and R(0) = 0. This model can
More informationBirth and Death Processes. Birth and Death Processes. Linear Growth with Immigration. Limiting Behaviour for Birth and Death Processes
DTU Informatics 247 Stochastic Processes 6, October 27 Today: Limiting behaviour of birth and death processes Birth and death processes with absorbing states Finite state continuous time Markov chains
More informationFeedback-mediated oscillatory coexistence in the chemostat
Feedback-mediated oscillatory coexistence in the chemostat Patrick De Leenheer and Sergei S. Pilyugin Department of Mathematics, University of Florida deleenhe,pilyugin@math.ufl.edu 1 Introduction We study
More informationName Student ID. Good luck and impress us with your toolkit of ecological knowledge and concepts!
Page 1 BIOLOGY 150 Final Exam Winter Quarter 2000 Before starting be sure to put your name and student number on the top of each page. MINUS 3 POINTS IF YOU DO NOT WRITE YOUR NAME ON EACH PAGE! You have
More informationON THE COMPLETE LIFE CAREER OF POPULATIONS IN ENVIRONMENTS WITH A FINITE CARRYING CAPACITY. P. Jagers
Pliska Stud. Math. 24 (2015), 55 60 STUDIA MATHEMATICA ON THE COMPLETE LIFE CAREER OF POPULATIONS IN ENVIRONMENTS WITH A FINITE CARRYING CAPACITY P. Jagers If a general branching process evolves in a habitat
More informationDynamical Systems and Chaos Part II: Biology Applications. Lecture 6: Population dynamics. Ilya Potapov Mathematics Department, TUT Room TD325
Dynamical Systems and Chaos Part II: Biology Applications Lecture 6: Population dynamics Ilya Potapov Mathematics Department, TUT Room TD325 Living things are dynamical systems Dynamical systems theory
More informationLinear-fractional branching processes with countably many types
Branching processes and and their applications Badajoz, April 11-13, 2012 Serik Sagitov Chalmers University and University of Gothenburg Linear-fractional branching processes with countably many types
More informationA BINOMIAL MOMENT APPROXIMATION SCHEME FOR EPIDEMIC SPREADING IN NETWORKS
U.P.B. Sci. Bull., Series A, Vol. 76, Iss. 2, 2014 ISSN 1223-7027 A BINOMIAL MOMENT APPROXIMATION SCHEME FOR EPIDEMIC SPREADING IN NETWORKS Yilun SHANG 1 Epidemiological network models study the spread
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationAsynchronous and Synchronous Dispersals in Spatially Discrete Population Models
SIAM J. APPLIED DYNAMICAL SYSTEMS Vol. 7, No. 2, pp. 284 310 c 2008 Society for Industrial and Applied Mathematics Asynchronous and Synchronous Dispersals in Spatially Discrete Population Models Abdul-Aziz
More informationPERSISTENCE AND PERMANENCE OF DELAY DIFFERENTIAL EQUATIONS IN BIOMATHEMATICS
PERSISTENCE AND PERMANENCE OF DELAY DIFFERENTIAL EQUATIONS IN BIOMATHEMATICS PhD Thesis by Nahed Abdelfattah Mohamady Abdelfattah Supervisors: Prof. István Győri Prof. Ferenc Hartung UNIVERSITY OF PANNONIA
More informationLecture Notes: Markov chains
Computational Genomics and Molecular Biology, Fall 5 Lecture Notes: Markov chains Dannie Durand At the beginning of the semester, we introduced two simple scoring functions for pairwise alignments: a similarity
More information88 CONTINUOUS MARKOV CHAINS
88 CONTINUOUS MARKOV CHAINS 3.4. birth-death. Continuous birth-death Markov chains are very similar to countable Markov chains. One new concept is explosion which means that an infinite number of state
More information(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes?
IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2006, Professor Whitt SOLUTIONS to Final Exam Chapters 4-7 and 10 in Ross, Tuesday, December 19, 4:10pm-7:00pm Open Book: but only
More informationProbability Distributions
Lecture : Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function (mgf)
More informationSensitivity and Stability Analysis of Hepatitis B Virus Model with Non-Cytolytic Cure Process and Logistic Hepatocyte Growth
Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 12, Number 3 2016), pp. 2297 2312 Research India Publications http://www.ripublication.com/gjpam.htm Sensitivity and Stability Analysis
More informationStochastic2010 Page 1
Stochastic2010 Page 1 Extinction Probability for Branching Processes Friday, April 02, 2010 2:03 PM Long-time properties for branching processes Clearly state 0 is an absorbing state, forming its own recurrent
More informationHANDBOOK OF APPLICABLE MATHEMATICS
HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume II: Probability Emlyn Lloyd University oflancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester - New York - Brisbane
More informationMonte Carlo and cold gases. Lode Pollet.
Monte Carlo and cold gases Lode Pollet lpollet@physics.harvard.edu 1 Outline Classical Monte Carlo The Monte Carlo trick Markov chains Metropolis algorithm Ising model critical slowing down Quantum Monte
More informationMath 142-2, Homework 2
Math 142-2, Homework 2 Your name here April 7, 2014 Problem 35.3 Consider a species in which both no individuals live to three years old and only one-year olds reproduce. (a) Show that b 0 = 0, b 2 = 0,
More informationA NUMERICAL STUDY ON PREDATOR PREY MODEL
International Conference Mathematical and Computational Biology 2011 International Journal of Modern Physics: Conference Series Vol. 9 (2012) 347 353 World Scientific Publishing Company DOI: 10.1142/S2010194512005417
More informationProbability Distributions
Lecture 1: Background in Probability Theory Probability Distributions The probability mass function (pmf) or probability density functions (pdf), mean, µ, variance, σ 2, and moment generating function
More informationSIR Epidemic Model with total Population size
Advances in Applied Mathematical Biosciences. ISSN 2248-9983 Volume 7, Number 1 (2016), pp. 33-39 International Research Publication House http://www.irphouse.com SIR Epidemic Model with total Population
More informationModeling with differential equations
Mathematical Modeling Lia Vas Modeling with differential equations When trying to predict the future value, one follows the following basic idea. Future value = present value + change. From this idea,
More informationMODELING AND ANALYSIS OF THE SPREAD OF CARRIER DEPENDENT INFECTIOUS DISEASES WITH ENVIRONMENTAL EFFECTS
Journal of Biological Systems, Vol. 11, No. 3 2003 325 335 c World Scientific Publishing Company MODELING AND ANALYSIS OF THE SPREAD OF CARRIER DEPENDENT INFECTIOUS DISEASES WITH ENVIRONMENTAL EFFECTS
More informationMarkov Chains Handout for Stat 110
Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of
More informationLecture Notes: Markov chains Tuesday, September 16 Dannie Durand
Computational Genomics and Molecular Biology, Lecture Notes: Markov chains Tuesday, September 6 Dannie Durand In the last lecture, we introduced Markov chains, a mathematical formalism for modeling how
More information2D-Volterra-Lotka Modeling For 2 Species
Majalat Al-Ulum Al-Insaniya wat - Tatbiqiya 2D-Volterra-Lotka Modeling For 2 Species Alhashmi Darah 1 University of Almergeb Department of Mathematics Faculty of Science Zliten Libya. Abstract The purpose
More informationGerardo Zavala. Math 388. Predator-Prey Models
Gerardo Zavala Math 388 Predator-Prey Models Spring 2013 1 History In the 1920s A. J. Lotka developed a mathematical model for the interaction between two species. The mathematician Vito Volterra worked
More informationModelling the spread of bacterial infectious disease with environmental effect in a logistically growing human population
Nonlinear Analysis: Real World Applications 7 2006) 341 363 www.elsevier.com/locate/na Modelling the spread of bacterial infectious disease with environmental effect in a logistically growing human population
More informationBayesian Methods with Monte Carlo Markov Chains II
Bayesian Methods with Monte Carlo Markov Chains II Henry Horng-Shing Lu Institute of Statistics National Chiao Tung University hslu@stat.nctu.edu.tw http://tigpbp.iis.sinica.edu.tw/courses.htm 1 Part 3
More informationQuantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC)
Quantitative Evaluation of Emedded Systems Solution 2: Discrete Time Markov Chains (DTMC) 2.1 Classification of DTMC states Prof. Dr. Anne Remke Design and Analysis of Communication Systems University
More information4452 Mathematical Modeling Lecture 16: Markov Processes
Math Modeling Lecture 16: Markov Processes Page 1 4452 Mathematical Modeling Lecture 16: Markov Processes Introduction A stochastic model is one in which random effects are incorporated into the model.
More informationPopulation Model. Abstract. We consider the harvest of a certain proportion of a population that is modeled
Optimal Harvesting in an Integro-difference Population Model Hem Raj Joshi Suzanne Lenhart Holly Gaff Abstract We consider the harvest of a certain proportion of a population that is modeled by an integro-difference
More informationStochastic processes and Markov chains (part II)
Stochastic processes and Markov chains (part II) Wessel van Wieringen w.n.van.wieringen@vu.nl Department of Epidemiology and Biostatistics, VUmc & Department of Mathematics, VU University Amsterdam, The
More informationPhysics: spring-mass system, planet motion, pendulum. Biology: ecology problem, neural conduction, epidemics
Applications of nonlinear ODE systems: Physics: spring-mass system, planet motion, pendulum Chemistry: mixing problems, chemical reactions Biology: ecology problem, neural conduction, epidemics Economy:
More informationContinuing the calculation of the absorption probability for a branching process. Homework 3 is due Tuesday, November 29.
Extinction Probability for Branching Processes Friday, November 11, 2011 2:05 PM Continuing the calculation of the absorption probability for a branching process. Homework 3 is due Tuesday, November 29.
More informationEndemic persistence or disease extinction: the effect of separation into subcommunities
Mathematical Statistics Stockholm University Endemic persistence or disease extinction: the effect of separation into subcommunities Mathias Lindholm Tom Britton Research Report 2006:6 ISSN 1650-0377 Postal
More informationData analysis and stochastic modeling
Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt
More informationBudding Yeast, Branching Processes, and Generalized Fibonacci Numbers
Integre Technical Publishing Co., Inc. Mathematics Magazine 84:3 April 2, 211 11:36 a.m. olofsson.tex page 163 VOL. 84, NO. 3, JUNE 211 163 Budding Yeast, Branching Processes, and Generalized Fibonacci
More informationIntroduction to Dynamical Systems
Introduction to Dynamical Systems Autonomous Planar Systems Vector form of a Dynamical System Trajectories Trajectories Don t Cross Equilibria Population Biology Rabbit-Fox System Trout System Trout System
More informationIntroduction to SEIR Models
Department of Epidemiology and Public Health Health Systems Research and Dynamical Modelling Unit Introduction to SEIR Models Nakul Chitnis Workshop on Mathematical Models of Climate Variability, Environmental
More informationCOMPETITION OF FAST AND SLOW MOVERS FOR RENEWABLE AND DIFFUSIVE RESOURCE
CANADIAN APPLIED MATHEMATICS QUARTERLY Volume 2, Number, Spring 22 COMPETITION OF FAST AND SLOW MOVERS FOR RENEWABLE AND DIFFUSIVE RESOURCE SILOGINI THANARAJAH AND HAO WANG ABSTRACT. In many studies of
More informationLOTKA-VOLTERRA SYSTEMS WITH DELAY
870 1994 133-140 133 LOTKA-VOLTERRA SYSTEMS WITH DELAY Zhengyi LU and Yasuhiro TAKEUCHI Department of Applied Mathematics, Faculty of Engineering, Shizuoka University, Hamamatsu 432, JAPAN ABSTRACT Sufftcient
More informationAn Introduction to Evolutionary Game Theory: Lecture 2
An Introduction to Evolutionary Game Theory: Lecture 2 Mauro Mobilia Lectures delivered at the Graduate School on Nonlinear and Stochastic Systems in Biology held in the Department of Applied Mathematics,
More informationLIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE
International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION
More informationMathematics, Box F, Brown University, Providence RI 02912, Web site:
April 24, 2012 Jerome L, Stein 1 In their article Dynamics of cancer recurrence, J. Foo and K. Leder (F-L, 2012), were concerned with the timing of cancer recurrence. The cancer cell population consists
More informationAn Application of Graph Theory in Markov Chains Reliability Analysis
An Application of Graph Theory in Markov Chains Reliability Analysis Pavel SKALNY Department of Applied Mathematics, Faculty of Electrical Engineering and Computer Science, VSB Technical University of
More informationMarkov Chains and MCMC
Markov Chains and MCMC Markov chains Let S = {1, 2,..., N} be a finite set consisting of N states. A Markov chain Y 0, Y 1, Y 2,... is a sequence of random variables, with Y t S for all points in time
More informationMathematical Modeling and Analysis of Infectious Disease Dynamics
Mathematical Modeling and Analysis of Infectious Disease Dynamics V. A. Bokil Department of Mathematics Oregon State University Corvallis, OR MTH 323: Mathematical Modeling May 22, 2017 V. A. Bokil (OSU-Math)
More informationStochastic process. X, a series of random variables indexed by t
Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,
More informationLecture 4a: Continuous-Time Markov Chain Models
Lecture 4a: Continuous-Time Markov Chain Models Continuous-time Markov chains are stochastic processes whose time is continuous, t [0, ), but the random variables are discrete. Prominent examples of continuous-time
More informationSpotlight on Modeling: The Possum Plague
70 Spotlight on Modeling: The Possum Plague Reference: Sections 2.6, 7.2 and 7.3. The ecological balance in New Zealand has been disturbed by the introduction of the Australian possum, a marsupial the
More informationMarkov Chains, Stochastic Processes, and Matrix Decompositions
Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral
More informationLattice models of habitat destruction in a prey-predator system
22nd International Congress on Modelling and Simulation, Hobart, Tasmania, Australia, 3 to 8 December 2017 mssanz.org.au/modsim2017 Lattice models of habitat destruction in a prey-predator system Nariiyuki
More information1 An Introduction to Stochastic Population Models
1 An Introduction to Stochastic Population Models References [1] J. H. Matis and T. R. Kiffe. Stochastic Population Models, a Compartmental Perective. Springer-Verlag, New York, 2000. [2] E. Renshaw. Modelling
More informationSystems of Ordinary Differential Equations
Systems of Ordinary Differential Equations Scott A. McKinley October 22, 2013 In these notes, which replace the material in your textbook, we will learn a modern view of analyzing systems of differential
More informationContinuous-time predator prey models with parasites
Journal of Biological Dynamics ISSN: 1751-3758 Print) 1751-3766 Online) Journal homepage: http://www.tandfonline.com/loi/tjbd20 Continuous-time predator prey models with parasites Sophia R.-J. Jang & James
More informationThe Leslie Matrix. The Leslie Matrix (/2)
The Leslie Matrix The Leslie matrix is a generalization of the above. It describes annual increases in various age categories of a population. As above we write p n+1 = Ap n where p n, A are given by:
More informationEcology Regulation, Fluctuations and Metapopulations
Ecology Regulation, Fluctuations and Metapopulations The Influence of Density on Population Growth and Consideration of Geographic Structure in Populations Predictions of Logistic Growth The reality of
More informationMATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015
ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which
More informationPERMANENCE IN LOGISTIC AND LOTKA-VOLTERRA SYSTEMS WITH DISPERSAL AND TIME DELAYS
Electronic Journal of Differential Equations, Vol. 2005(2005), No. 60, pp. 1 11. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu (login: ftp) PERMANENCE
More informationIntroduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov
More informationMath 456: Mathematical Modeling. Tuesday, March 6th, 2018
Math 456: Mathematical Modeling Tuesday, March 6th, 2018 Markov Chains: Exit distributions and the Strong Markov Property Tuesday, March 6th, 2018 Last time 1. Weighted graphs. 2. Existence of stationary
More informationStabilization through spatial pattern formation in metapopulations with long-range dispersal
Stabilization through spatial pattern formation in metapopulations with long-range dispersal Michael Doebeli 1 and Graeme D. Ruxton 2 1 Zoology Institute, University of Basel, Rheinsprung 9, CH^4051 Basel,
More information