1 Types of stochastic models
The models discussed so far are all deterministic: if the present state were perfectly known, it would be possible to predict exactly all future states. We have seen instances (like the discrete logistic) of so-called chaotic systems where determinism becomes weaker, in the sense that any difference, however small, in the initial state leads to big changes in future states, making long-term prediction essentially impossible: since any measurement of the present state entails some error, we cannot know the state exactly, and the uncertainty of the prediction grows as the prediction horizon gets longer. Still, even though long-term prediction may be impossible, in principle such systems follow the paradigm of determinism.

A stochastic model, instead, assigns only a probability distribution to future states. Even if we knew the present state perfectly, we could not predict future states (except in trivial cases; for instance, if a species is extinct, a stochastic model will generally predict that it will remain extinct in the future). Sometimes the uncertainty arises from ignorance of the many other factors that influence the population dynamics: such factors could be the densities of other species in the ecosystem, which we choose not to model because there are too many of them, or the genetic composition of a population that we treat as homogeneous. In other cases we may understand the effect of certain factors, such as meteorological conditions, but they remain essentially unpredictable from one year to the next. Or we may dwell on the difference between the statistical regularity that a certain proportion of 50-year-old males die every year, and the unpredictability of whether any particular one of them will die in the current year. Whatever the fundamental reason, it is clear that in several cases the best we can do is to quantify the uncertainty.
In population biology, two different forms of stochasticity have been extensively examined in models. In so-called environmental stochasticity, the demographic rates are allowed to vary unpredictably in time, representing the uncertainty arising, for instance, from climatic factors or from variations in the density of species outside the model. Models with environmental stochasticity are generally written in terms of stochastic differential equations, typically built on differential equation models such as those examined so far in the book, or that will be seen later. We do not discuss this modelling framework at all in this book, not because we believe there is something wrong with the approach, but because we have not seen anything as interesting arise from the use of stochastic differential equations in ecology.
The other form of stochasticity is so-called demographic stochasticity, where the stress is on the fact that biological populations are finite and discrete: they can vary only if one (or more) individuals are born, die, immigrate or emigrate. Hence, while demographic rates represent statistical averages, all demographic events are intrinsically stochastic. This chapter is devoted to this type of modelling, and more particularly to birth-and-death processes: (stationary) Markov processes whose state space is the natural numbers (representing the number of individuals in a population but also, in other contexts, people in a queue or machines in a production line), and where the only possible transitions (representing births or deaths) are one number up or one number down. Markov processes can be considered the counterpart of ordinary (or partial) differential equations, in the sense that the probabilities of future states depend only on the present state, and not on past history. A short introduction to the essential properties of Markov processes with countable state space is presented in the Appendix, where some references are also given. Another example of a Markov process is examined in the section about stochastic epidemic models.

2 Birth and death models

In a birth-and-death process, the only instantaneous transitions that are allowed are those from a state $i$ to $i+1$ (a birth) and those from a state $i$ to $i-1$ (a death). Precisely, but using a colloquial style, we will assume that

$$P(X(t+h) = i+1 \mid X(t) = i) = \lambda_i h + o(h)$$
$$P(X(t+h) = i-1 \mid X(t) = i) = \mu_i h + o(h)$$
$$P(X(t+h) = j \mid X(t) = i) = o(h) \quad \text{for } j : |j - i| > 1 \qquad (1)$$

where $\{\lambda_i\}$ and $\{\mu_i\}$ are sequences of nonnegative coefficients, while $o(h)$ represents any function such that $\lim_{h \to 0} o(h)/h = 0$. As the state space generally is $\mathbb{N} = \{0, 1, \ldots\}$, it is necessary to assume that $\mu_0 = 0$.
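The rates in (1) translate directly into a simulation scheme: from state $i$ the process waits an exponential time with rate $\lambda_i + \mu_i$, and then jumps up with probability $\lambda_i/(\lambda_i+\mu_i)$, down otherwise (see the Appendix for these properties of Markov processes). A minimal sketch, assuming for illustration the linear rates $\lambda_i = \lambda i$, $\mu_i = \mu i$; the function name and parameters are ours, not from the text:

```python
import random

def simulate_birth_death(lam, mu, x0, t_max, seed=0):
    """Simulate a birth-and-death process with linear rates
    lambda_i = lam*i and mu_i = mu*i up to time t_max."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while x > 0:
        birth, death = lam * x, mu * x
        total = birth + death
        # Waiting time to the next event is exponential with rate `total`.
        t += rng.expovariate(total)
        if t >= t_max:
            break
        # The event is a birth with probability birth/total, else a death.
        x += 1 if rng.random() < birth / total else -1
        path.append((t, x))
    return path

path = simulate_birth_death(lam=1.0, mu=0.5, x0=5, t_max=10.0)
```

Once the population hits 0 the loop stops, reflecting the absorbing nature of extinction when $\lambda_0 = 0$.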
The other coefficients are free to take any value, though we will discuss how to assign them, similarly to what was accomplished in deterministic models, in a way that reproduces biological features. Results will be fundamentally different according to whether it is assumed that $\lambda_0 = 0$ (when the population reaches size 0, no births are possible; hence the population will be extinct forever) or $\lambda_0 > 0$ (which may seem odd if one thinks of actual births, but may be sensible if immigration from outside is considered).

From assumptions (1), one can obtain (infinite) linear systems of differential equations for the matrices $P(t)$. The derivation is based on the so-called Chapman-Kolmogorov equations:

$$P_{ij}(t+s) = \sum_{k \in E} P_{ik}(t) P_{kj}(s) \quad \text{for all } i, j \in \mathbb{N},\ t, s \ge 0. \qquad (2)$$

Intuitively, the equations state that, in order to go from $i$ to $j$ in time $t+s$, the process will move to some $k$ in time $t$, and then from $k$ to $j$ in time $s$. After some steps (outlined in the Appendix) one obtains the backward Kolmogorov equations:

$$P'_{ij}(t) = \lambda_i P_{i+1,j}(t) + \mu_i P_{i-1,j}(t) - (\lambda_i + \mu_i) P_{i,j}(t), \quad i, j \in \mathbb{N}. \qquad (3)$$

(3) can be seen, for each $j$ (which can be considered a parameter), as a (finite or infinite) system of equations in the unknowns $P_{ij}$, $i \in E$. Formally it can be written in matrix notation as $P'(t) = Q P(t)$, with $Q$ a tri-diagonal matrix with $Q_{i,i+1} = \lambda_i$, $Q_{i,i-1} = \mu_i$, $Q_{i,i} = -(\lambda_i + \mu_i)$.

In an analogous way to the backward equations, one can derive a different system of equations, known as the Kolmogorov forward equations:

$$P'_{ij}(t) = \lambda_{j-1} P_{i,j-1}(t) + \mu_{j+1} P_{i,j+1}(t) - (\lambda_j + \mu_j) P_{i,j}(t), \quad i, j \in \mathbb{N} \qquad (4)$$

or, in matrix notation, $P'(t) = P(t) Q$. Now $i$ can be taken as a parameter and the unknowns are $P_{ij}$, $j \in \mathbb{N}$.

However, at this level of generality, it is not possible to prove that (4) actually holds. For instance, it is shown in the Appendix that in the pure birth process ($\mu_i \equiv 0$) with $\lambda_i = i^2$, realizations of the process reach infinity in a finite time with probability 1, and one can construct different processes having the same infinitesimal transitions; only one of them satisfies (4). The validity of (4), as well as the existence and uniqueness (for each initial condition) of solutions of (3) and (4), can be rigorously proved only under some conditions on the parameters.
The condition

$$\lambda_i \le a + b i, \quad i \in \mathbb{N},$$

where $a$ and $b$ are nonnegative constants, avoids explosions in finite time and guarantees well-posedness for (3) and (4).
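Truncating the state space at some level $M$ turns (4) into a finite linear system that can be integrated numerically. A sketch with numpy; the linear rates and the simple Euler scheme are our illustrative choices, not a method from the text (tiny negative entries from the discretization are possible in principle):

```python
import numpy as np

def generator(lam, mu, M):
    """Tridiagonal generator Q of a linear birth-death chain, truncated
    at M (births out of the top state are suppressed)."""
    Q = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        b = lam * i if i < M else 0.0
        d = mu * i
        if i < M:
            Q[i, i + 1] = b
        if i > 0:
            Q[i, i - 1] = d
        Q[i, i] = -(b + d)
    return Q

def transition_matrix(Q, t, steps=5000):
    """Euler integration of the forward equations P'(t) = P(t) Q, P(0) = I."""
    P = np.eye(Q.shape[0])
    dt = t / steps
    for _ in range(steps):
        P = P + dt * (P @ Q)
    return P

Q = generator(lam=1.0, mu=0.9, M=60)
P = transition_matrix(Q, t=1.0)
```

Since each row of $Q$ sums to zero, the scheme preserves row sums, so every row of $P$ remains a probability vector; row 0 stays concentrated on state 0, which is absorbing here.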
3 Stationary distribution

A fundamental difference in the behaviour of birth-and-death processes hinges on whether $\lambda_0 > 0$ or $\lambda_0 = 0$. In the first case, which implies that immigration from outside occurs, all states are in the same communicating class and it is possible that a non-trivial stationary distribution exists. In the second case, once the population is extinct, it will be extinct forever: in mathematical terms, 0 is an absorbing state. It can be proved that all other states are in the same transient class, and an important question is determining the probability of extinction, and the mean time before extinction.

Looking for stationary solutions, we assume $\lambda_i > 0$ for all $i \ge 0$ and $\mu_i > 0$ for all $i \ge 1$. As shown in the Appendix, the most convenient condition to check whether, for a Markov process with states in $E$, $\{\pi_i\}_{i \in E}$ is a stationary solution is

$$\sum_{i \in E} \pi_i q_{ij} = 0 \quad \text{for all } j \in E. \qquad (5)$$

For a birth-and-death process, equations (5) become

$$\lambda_{i-1} \pi_{i-1} - (\lambda_i + \mu_i) \pi_i + \mu_{i+1} \pi_{i+1} = 0, \quad i \ge 1$$
$$-\lambda_0 \pi_0 + \mu_1 \pi_1 = 0. \qquad (6)$$

It is easier to find the solution of (6) through the so-called detailed balance equations (7):

Lemma 1. A solution of (6) satisfies the equations

$$\lambda_i \pi_i = \mu_{i+1} \pi_{i+1}. \qquad (7)$$

Proof. By induction. For $i = 0$, (7) is the last of (6). Now assume that (7) holds for $i-1$. From the first of (6), we obtain

$$\mu_{i+1} \pi_{i+1} = (\lambda_i + \mu_i) \pi_i - \lambda_{i-1} \pi_{i-1} = \lambda_i \pi_i$$

where the last equality comes from the inductive hypothesis.

Note that the detailed balance equations (7) can be interpreted as saying that, at the stationary distribution, the rate at which the process moves (through births) from $i$ to $i+1$ must be equal to the rate at which the process moves (through deaths) from $i+1$ to $i$.
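Detailed balance also gives a practical recipe: iterate $\pi_{i+1} = (\lambda_i/\mu_{i+1})\,\pi_i$ and normalize. A sketch, with an illustrative choice of rates of our own (constant immigration $\lambda_i \equiv \lambda$ and linear deaths $\mu_i = \mu i$, under which the result is the Poisson distribution with mean $\lambda/\mu$):

```python
import math

def stationary_distribution(lam, mu, n_max):
    """Stationary distribution via detailed balance (7):
    pi_{i+1} = lam(i) / mu(i+1) * pi_i, truncated at n_max and normalized.
    `lam` and `mu` are callables giving lambda_i and mu_i."""
    pi = [1.0]
    for i in range(n_max):
        pi.append(pi[-1] * lam(i) / mu(i + 1))
    total = sum(pi)
    return [p / total for p in pi]

# Illustrative rates: constant immigration, deaths proportional to size.
LAM, MU = 2.0, 1.0
pi = stationary_distribution(lambda i: LAM, lambda i: MU * i, n_max=60)
```

The truncation at `n_max` stands in for the infinite normalizing series, so this is only accurate when the tail of that series is negligible.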
From (7), one immediately has $\pi_1 = \frac{\lambda_0}{\mu_1} \pi_0$ and, iteratively,

$$\pi_n = \frac{\lambda_0 \cdots \lambda_{n-1}}{\mu_1 \cdots \mu_n}\, \pi_0.$$

One can then find $\pi_0$ from the condition that $\sum_{n=0}^{\infty} \pi_n = 1$. Setting

$$\rho_n = \frac{\lambda_0 \cdots \lambda_{n-1}}{\mu_1 \cdots \mu_n}, \quad \rho_0 = 1,$$

the condition becomes $\pi_0 \sum_{n=0}^{\infty} \rho_n = 1$. Hence there are two possibilities: if $\sum_{n=0}^{\infty} \rho_n < \infty$, then

$$\pi_0 = \frac{1}{\sum_{n=0}^{\infty} \rho_n}, \qquad \pi_i = \frac{\rho_i}{\sum_{n=0}^{\infty} \rho_n}$$

is the unique stationary distribution; if $\sum_{n=0}^{\infty} \rho_n = \infty$, there are no stationary distributions.

4 Probability of extinction

We assume $\lambda_0 = 0$, so that if $X(T) = 0$, then $X(t) = 0$ for all $t \ge T$. In other words, 0 is an absorbing state; we assume $\lambda_i + \mu_i > 0$ for all $i \ge 1$, so that there are no other absorbing states. We want to compute

$$u_i = P(X(T) = 0 \text{ for some } T > 0 \mid X(0) = i).$$

We compute this through the jump Markov chain $Z_n$, i.e.

$$u_i = P(Z_n = 0 \text{ for some } n > 0 \mid Z_0 = i).$$

When $X(t)$ is a birth-and-death process, $Z_n$ can jump only 1 upwards or downwards; in other words, $Z_n$ is a random walk with

$$P(Z_{n+1} = i+1 \mid Z_n = i) = \frac{\lambda_i}{\lambda_i + \mu_i}, \qquad P(Z_{n+1} = i-1 \mid Z_n = i) = \frac{\mu_i}{\lambda_i + \mu_i}, \qquad i \ge 1.$$
4.0.1 Hitting probabilities for random walks

Let us consider a random walk $Z_n$ such that

$$P(Z_{n+1} = i+1 \mid Z_n = i) = p_i, \qquad P(Z_{n+1} = i-1 \mid Z_n = i) = q_i = 1 - p_i, \qquad i \ge 1.$$

The classical random walk occurs when $p_i \equiv p$ and, necessarily, $q_i \equiv q = 1 - p$. Let us first compute

$$u_i^m = P(Z_n \text{ hits } 0 \text{ before } m \mid Z_0 = i) = P(\exists\, n \ge 0 \text{ s.t. } Z_n = 0 \text{ and } 0 < Z_k < m \text{ for } 0 \le k < n \mid Z_0 = i).$$

By conditioning on the first step, and using the Markov property, one has

$$u_i^m = p_i\, P(Z_n \text{ hits } 0 \text{ before } m \mid Z_1 = i+1) + q_i\, P(Z_n \text{ hits } 0 \text{ before } m \mid Z_1 = i-1).$$

By shifting the time origin, it is clear that $P(Z_n \text{ hits } 0 \text{ before } m \mid Z_1 = i+1) = u_{i+1}^m$, so that the equation reduces to

$$u_i^m = p_i\, u_{i+1}^m + q_i\, u_{i-1}^m, \quad i = 1, \ldots, m-1. \qquad (8)$$

These are $m-1$ equations, to which the obvious conditions

$$u_0^m = 1, \qquad u_m^m = 0$$

have to be added, leaving $m-1$ unknowns. The resulting linear system (8) has a unique solution, which we now compute: set $w_i = u_i^m - u_{i+1}^m$, $i = 0, \ldots, m-1$. Then (8) can be rewritten as

$$p_i w_i = q_i w_{i-1} \implies w_i = \frac{q_i}{p_i} \cdots \frac{q_1}{p_1}\, w_0 = \prod_{j=1}^{i} \frac{q_j}{p_j}\, w_0 \qquad (9)$$

(with the convention that the empty product, for $i = 0$, equals 1). From the $w_j$'s, we can compute

$$1 - u_i^m = (u_0^m - u_1^m) + (u_1^m - u_2^m) + \cdots + (u_{i-2}^m - u_{i-1}^m) + (u_{i-1}^m - u_i^m) = \sum_{j=0}^{i-1} w_j.$$

Setting $i = m$ and using $u_m^m = 0$, we get

$$1 = \sum_{j=0}^{m-1} w_j = \sum_{i=0}^{m-1} \prod_{j=1}^{i} \frac{q_j}{p_j}\, w_0$$

where the final step is due to (9). This gives

$$1 - u_1^m = w_0 = \frac{1}{\displaystyle\sum_{i=0}^{m-1} \prod_{j=1}^{i} \frac{q_j}{p_j}} \qquad (10)$$

so that one obtains

$$u_i^m = 1 - \sum_{j=0}^{i-1} w_j = 1 - w_0 \sum_{j=0}^{i-1} \prod_{k=1}^{j} \frac{q_k}{p_k} = \frac{\displaystyle\sum_{j=i}^{m-1} \prod_{k=1}^{j} \frac{q_k}{p_k}}{\displaystyle\sum_{j=0}^{m-1} \prod_{k=1}^{j} \frac{q_k}{p_k}}. \qquad (11)$$

Expressions (10) and (11) appear very cumbersome, but allow for an explicit computation of the probability of hitting 0 before $m$ for an arbitrary random walk. The simplest case is that of a standard random walk, $p_i \equiv p$, in which case

$$u_1^m = \frac{\frac{q}{p} - \left(\frac{q}{p}\right)^m}{1 - \left(\frac{q}{p}\right)^m}, \qquad u_i^m = \frac{\left(\frac{q}{p}\right)^i - \left(\frac{q}{p}\right)^m}{1 - \left(\frac{q}{p}\right)^m}. \qquad (12)$$

These expressions hold as long as $q \ne p$. If instead $p = q = 1/2$, then $u_i^m = \frac{m-i}{m}$.

We are not so interested in $u_i^m$ as in its limit as $m \to \infty$, which (using the rules of $\sigma$-additivity) can be interpreted as

$$\lim_{m \to \infty} u_i^m = u_i = P(Z_n = 0 \text{ for some } n > 0 \mid Z_0 = i).$$

From (12), one sees that

$$u_i = \begin{cases} 1 & \text{if } q \ge p \\ \left(\dfrac{q}{p}\right)^i & \text{if } q < p. \end{cases}$$

In words, the probability of ever hitting 0 is 1 if the probability of moving to the left is greater than or equal to the probability of moving to the right.
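Formula (12) can be cross-checked against a direct numerical solution of the linear system (8) with its boundary conditions; a sketch (the function names are our own):

```python
import numpy as np

def u_formula(p, i, m):
    """u_i^m for the standard random walk, formula (12)."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:
        return (m - i) / m
    r = q / p
    return (r**i - r**m) / (1.0 - r**m)

def u_system(p, m):
    """Solve u_i = p u_{i+1} + q u_{i-1}, u_0 = 1, u_m = 0, as in (8)."""
    q = 1.0 - p
    A = np.zeros((m + 1, m + 1))
    rhs = np.zeros(m + 1)
    A[0, 0], rhs[0] = 1.0, 1.0        # boundary: u_0 = 1
    A[m, m], rhs[m] = 1.0, 0.0        # boundary: u_m = 0
    for i in range(1, m):
        A[i, i - 1], A[i, i], A[i, i + 1] = -q, 1.0, -p
    return np.linalg.solve(A, rhs)

u = u_system(p=0.6, m=10)
```

The same routine works for state-dependent $p_i$, $q_i$ by filling the rows of `A` with the corresponding coefficients.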
Instead, if the probability of moving to the right is greater than the probability of moving to the left, there is a positive probability that the random walk will drift to infinity without ever hitting 0. For a general random walk, one can use the same idea as in (11), obtaining the following

Proposition 1. If

$$\sum_{i=0}^{\infty} \prod_{j=1}^{i} \frac{q_j}{p_j} = +\infty \qquad (13)$$

then $u_i = 1$ for all $i \in \mathbb{N}$. If the sum in (13) is finite, then

$$u_i = \frac{\displaystyle\sum_{j=i}^{\infty} \prod_{k=1}^{j} \frac{q_k}{p_k}}{\displaystyle\sum_{j=0}^{\infty} \prod_{k=1}^{j} \frac{q_k}{p_k}}.$$

Application to birth-and-death processes

Consider the linear birth-and-death process

$$\lambda_i = \lambda i, \qquad \mu_i = \mu i \qquad (14)$$

corresponding to Malthus's deterministic model. As seen above, $Z_n$ is a random walk with

$$p_i = \frac{\lambda_i}{\lambda_i + \mu_i} = \frac{\lambda}{\lambda + \mu}, \qquad q_i = \frac{\mu_i}{\lambda_i + \mu_i} = \frac{\mu}{\lambda + \mu}.$$

Hence, from the previous subsection, we see that the probability of extinction is

$$u_i = \begin{cases} 1 & \text{if } \lambda \le \mu \\ \left(\dfrac{\mu}{\lambda}\right)^i & \text{if } \mu < \lambda. \end{cases} \qquad (15)$$

The property that $u_i = u_1^i$ can be justified intuitively: since in this model the birth and death rates of each individual are not influenced by how many other individuals are present, the probability $u_i$ that a population starting from $i$ individuals gets extinct is equal to the product of the probabilities that each family tree descending from one of the initial individuals gets extinct, i.e. $u_1^i$.
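For the linear process, (15) can be checked against the general expression in Proposition 1 by truncating the infinite sums; a sketch, with the truncation length chosen so that the neglected tail is negligible:

```python
def extinction_prob(ratios):
    """u_1 from Proposition 1: ratios[j-1] = q_j / p_j for j = 1, 2, ...
    (a long but finite list standing in for the infinite series)."""
    prods, prod = [1.0], 1.0
    for r in ratios:
        prod *= r
        prods.append(prod)
    return sum(prods[1:]) / sum(prods)

# Linear birth-and-death process: q_j / p_j = mu / lam for every j.
lam, mu = 1.0, 0.5
u1 = extinction_prob([mu / lam] * 2000)
# (15) predicts u_1 = mu / lam when mu < lam.
```

For $\mu \ge \lambda$ the truncated ratio tends to 1 as the truncation grows, consistently with certain extinction.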
Consider now an equivalent of the logistic model:

$$\lambda_i = \lambda i, \qquad \mu_i = \mu i + \nu i^2$$

for some constants $\lambda, \mu, \nu > 0$. Then $Z_n$ has probabilities

$$p_i = \frac{\lambda}{\lambda + \mu + \nu i}, \qquad q_i = \frac{\mu + \nu i}{\lambda + \mu + \nu i}.$$

It is then easy to see that condition (13) holds, so that extinction is certain starting from any initial population $i$. A similar property holds for any model in which the growth rate becomes negative at high densities: a sufficient condition for this is $\lambda_i \le \eta\, \mu_i$ for $i$ large enough, with $\eta < 1$.

4.1 Time to extinction

For processes in which extinction is certain, one can compute the mean time to extinction. This can be obtained following the same arguments as in the previous section, but the time between transitions has to be taken into account.

Let $\tau$ be the random variable denoting the time to extinction, i.e. $\tau = \inf\{t : X(t) = 0\}$, and let $W_i = E(\tau \mid X(0) = i)$. Consider now $T_1$, defined previously as the time of the first transition, and condition on the value of $X(T_1)$. We obtain

$$W_i = E(T_1 \mid X(0) = i) + p_i\, E(\tau - T_1 \mid X(0) = i, X(T_1) = i+1) + q_i\, E(\tau - T_1 \mid X(0) = i, X(T_1) = i-1)$$

where $p_i$ is the probability (already computed) that the first transition is to the right, while $q_i = 1 - p_i$ is the probability that the first transition is to the left. By the Markov property and the time-homogeneity of transitions, $X(t - T_1)$ is distributed like $X(t)$ conditional on the initial condition; hence

$$E(\tau - T_1 \mid X(0) = i, X(T_1) = i+1) = E(\tau \mid X(0) = i+1) = W_{i+1}.$$

The previous equation can then be written as

$$W_i = \frac{1}{\lambda_i + \mu_i} + \frac{\lambda_i}{\lambda_i + \mu_i} W_{i+1} + \frac{\mu_i}{\lambda_i + \mu_i} W_{i-1}, \quad i \ge 1. \qquad (16)$$

A boundary condition is clearly $W_0 = 0$, but we are now left with an infinite system of linear equations. A way out, similar to the analysis of the previous section, could be to define the expected time to reach 0 or $m$, and then consider the limit for $m \to \infty$. Since this would lead to rather lengthy computations, I prefer to refer to the following

Theorem 2 (Lemma in Anderson (1991)). $W_i$ is the minimal nonnegative solution of (16).

Let us apply this result to the Malthusian case where $\lambda_i = \lambda i$ and $\mu_i = \mu i$, with $\lambda \le \mu$ (we saw before that for $\lambda > \mu$ extinction is not certain). Equation (16) becomes

$$(\lambda + \mu) W_i = \frac{1}{i} + \lambda W_{i+1} + \mu W_{i-1}.$$

Introducing $U_i = W_{i+1} - W_i$, this becomes

$$U_i = \frac{\mu}{\lambda} U_{i-1} - \frac{1}{\lambda i} \qquad (17)$$

from which one recursively obtains

$$U_i = \left(\frac{\mu}{\lambda}\right)^i U_0 - \sum_{j=1}^{i} \left(\frac{\mu}{\lambda}\right)^{i-j} \frac{1}{\lambda j} = \left(\frac{\mu}{\lambda}\right)^i \left( U_0 - \frac{1}{\lambda} \sum_{j=1}^{i} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} \right), \quad i \ge 1. \qquad (18)$$

We still do not know $U_0 = W_1$. Letting $i$ go to infinity in the term in brackets in the rightmost expression of (18), we see the following.

If $\sum_{j=1}^{\infty} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} = +\infty$, then for any choice of $U_0 > 0$ we would obtain $\frac{1}{\lambda} \sum_{j=1}^{i} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} > U_0$ for $i$ large enough, and thus $U_i < 0$ for large $i$. This is inconsistent probabilistically (it cannot be that $E(\tau \mid X(0) = i+1) < E(\tau \mid X(0) = i)$), and it would also lead to $W_k < 0$ for $k$ large enough. The only possibility is that $U_0 = +\infty$, i.e. $W_1 = +\infty$. This means that the expected time to extinction starting with 1 individual (and thus with more than 1 individual) is infinite. The series diverges for $\lambda \ge \mu$; since this analysis assumes $\lambda \le \mu$, this case occurs for $\lambda = \mu$.

If $\sum_{j=1}^{\infty} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} < +\infty$, any choice of $U_0 \ge \frac{1}{\lambda} \sum_{j=1}^{\infty} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j}$ would lead to a positive value of $U_i$, and thus of $W_i$, for all $i$. However, we stated before that the expected time is given by the minimal nonnegative solution of (16), which is attained when $U_0 = \frac{1}{\lambda} \sum_{j=1}^{\infty} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j}$.
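When $\lambda < \mu$, the series defining $U_0 = W_1$ converges and can be summed numerically; a sketch (the truncation length is our choice):

```python
import math

def mean_extinction_time(lam, mu, terms=500):
    """U_0 = W_1 = (1/lam) * sum_{j>=1} (lam/mu)^j / j, truncated.
    Requires lam < mu for convergence."""
    x = lam / mu
    return sum(x**j / j for j in range(1, terms + 1)) / lam

w1 = mean_extinction_time(lam=0.5, mu=1.0)
# By the logarithmic series, the full sum equals -log(1 - lam/mu) / lam.
```

The geometric decay of the terms makes the truncation error negligible for moderate truncation lengths.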
In this case ($\lambda < \mu$) we thus have

$$E(\tau \mid X(0) = 1) = \frac{1}{\lambda} \sum_{j=1}^{\infty} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} = \frac{1}{\lambda} \sum_{j=1}^{\infty} \int_0^{\lambda/\mu} x^{j-1}\, dx = \frac{1}{\lambda} \int_0^{\lambda/\mu} \frac{1}{1-x}\, dx = -\frac{1}{\lambda} \log\!\left(1 - \frac{\lambda}{\mu}\right) = \frac{1}{\lambda} \log\!\left(\frac{\mu}{\mu - \lambda}\right). \qquad (19)$$

In the book by Karlin and Taylor, one can find similar computations in the case of a general birth-and-death process.

In principle, one could compute the mean extinction time even when extinction is not certain. In that case, in order to obtain relevant results, one would have to condition on the fact that extinction does occur, i.e. compute $E(\tau \mid X(0) = i, \tau < \infty)$. The results can be obtained in a way similar to the above, but keeping the conditioning into account.

4.1.1 Extinction time with a bound on population size

Finally, let us consider a variation of the Malthusian case where there exists an upper barrier that cannot be passed, i.e.

$$\lambda_i = \begin{cases} \lambda i & \text{if } i < K \\ 0 & \text{if } i \ge K \end{cases} \qquad \mu_i = \mu i.$$

Once the process reaches state $K$, it will stay there until a transition brings it back to $K-1$. Now (16) becomes

$$(\lambda + \mu) W_i = \frac{1}{i} + \lambda W_{i+1} + \mu W_{i-1}, \quad 1 \le i \le K-1,$$

to which one must add $W_0 = 0$ and $W_K = \frac{1}{\mu K} + W_{K-1}$. Passing to the variables $U_i$, the last condition means $U_{K-1} = \frac{1}{\mu K}$. Since (18) is still valid for $1 \le i \le K-1$, one obtains

$$\frac{1}{\mu K} = \left(\frac{\mu}{\lambda}\right)^{K-1} \left( U_0 - \frac{1}{\lambda} \sum_{j=1}^{K-1} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} \right)$$

i.e.

$$W_1 = U_0 = \frac{1}{\lambda} \sum_{j=1}^{K-1} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} + \left(\frac{\lambda}{\mu}\right)^{K-1} \frac{1}{\mu K} = \frac{1}{\lambda} \sum_{j=1}^{K} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j}. \qquad (20)$$

Thus $E(\tau \mid X(0) = 1) = W_1 = U_0$ is given by the first $K$ terms of a series that is convergent if $\lambda < \mu$ and divergent for $\lambda \ge \mu$; this observation means that, when $\lambda \ge \mu$, $W_1$ grows to infinity as $K$ is increased. An asymptotic expansion makes it possible to quantify this statement. In fact, when $\lambda > \mu$, one obtains

$$W_1 = \frac{1}{\lambda} \sum_{j=1}^{K} \left(\frac{\lambda}{\mu}\right)^j \frac{1}{j} = \frac{1}{\lambda} \left(\frac{\lambda}{\mu}\right)^K \sum_{l=0}^{K-1} \left(\frac{\mu}{\lambda}\right)^l \frac{1}{K-l} \approx \frac{1}{\lambda K} \left(\frac{\lambda}{\mu}\right)^K \sum_{l=0}^{\infty} \left(\frac{\mu}{\lambda}\right)^l = \frac{1}{(\lambda - \mu) K} \left(\frac{\lambda}{\mu}\right)^K. \qquad (21)$$

This can be summarized by saying that the mean extinction time grows exponentially with $K$: precisely, $W_1$ grows like $e^{\alpha K}/K$ where $\alpha = \log(\lambda/\mu)$. Thus it becomes astronomically large already for moderate $K$. On the other hand, for $\lambda = \mu$, $W_1$ is given by the first terms of the harmonic series, whose asymptotic expansion is well known:

$$W_1 \approx \frac{1}{\lambda} \left( \log K + \gamma \right) \qquad (22)$$

where $\gamma$ is Euler's constant. Finally, for $\lambda < \mu$, (20) gives the first terms of a convergent series, so that $W_1$ can be approximated by its sum, i.e. (19).

4.2 Relations with deterministic processes

In the case of the linear birth-and-death process (14), consider

$$m_i(t) = E(X(t) \mid X(0) = i) = \sum_{j=0}^{\infty} j\, P_{ij}(t). \qquad (23)$$

Using the Kolmogorov forward equations (4), and interchanging derivative and (infinite) sums (this could be rigorously justified by first showing that the series converge absolutely), one obtains

$$m_i'(t) = \sum_{j=0}^{\infty} j\, P'_{ij}(t) = \sum_{j=0}^{\infty} j \left[ \lambda (j-1) P_{i,j-1}(t) + \mu (j+1) P_{i,j+1}(t) - (\lambda + \mu)\, j\, P_{i,j}(t) \right]$$
$$= \lambda \sum_{k=0}^{\infty} (k+1) k\, P_{i,k}(t) + \mu \sum_{k=1}^{\infty} (k-1) k\, P_{i,k}(t) - (\lambda + \mu) \sum_{j=0}^{\infty} j^2 P_{i,j}(t) = (\lambda - \mu) \sum_{k=1}^{\infty} k\, P_{i,k}(t) = (\lambda - \mu)\, m_i(t). \qquad (24)$$

(24) shows that the expected value of the process follows the Malthus equation with parameter $r = \lambda - \mu$, so that $m_i(t) = i\, e^{(\lambda - \mu) t}$.

This formula can be contrasted with (15). If $\lambda > \mu$, the expected value of the population grows exponentially; still, extinction may be likely. To give a numerical example, if $\lambda = 1$ and $\mu = 0.9$, and the initial value is 1 individual, the expected value of the population at time $t = 100$ is $e^{10} \approx 22{,}000$, but the probability of extinction is $9/10$, and we can be quite sure that, if extinction occurs, it has occurred well before then. Thus, there is a 90% probability that the population will be extinct, but its expected value is very large, meaning that, conditional on non-extinction, we expect it to be around $10\, e^{10} \approx 220{,}000$. Note that the Malthusian growth rate depends on the difference $\lambda - \mu$, while the extinction probability depends on the ratio $\mu/\lambda$. Thus populations with the same expected growth rate may have very different extinction probabilities.

The fact that the expected value of the stochastic model coincides with the value of the deterministic model holds only for the linear case (14). In general they differ, and the fundamental reason for this is that $E(f(X)) \ne f(E(X))$ unless $f$ is linear.

One may wonder whether there is then any relation between stochastic and deterministic models. Indeed there is one, and basically it follows from the law of large numbers: as the number of trials grows to infinity, the fraction of successes converges to the expected value. In this context, the problem requires mathematical techniques much beyond the level of this text. However, it is possible to quote a result due especially to Kurtz and his co-workers, who have actually handled much more general cases and obtained much more detailed results.

We assume that there exists a typical scale $N$ of the population (it may represent habitat size), and that the parameters of the birth-and-death process depend on $N$ as
$$\lambda_i^{(N)} = N\, b\!\left(\frac{i}{N}\right), \qquad \mu_i^{(N)} = N\, m\!\left(\frac{i}{N}\right) \qquad (25)$$
where $b$ and $m$ are given functions. For instance, logistic growth could be represented by

$$\lambda_i^{(N)} = \lambda i, \qquad \mu_i^{(N)} = i \left( \mu + \nu \frac{i}{N} \right) \qquad (26)$$

where $\nu \frac{i}{N}$ represents the extra mortality due to crowding. In this case, we would have $b(x) = \lambda x$ and $m(x) = x(\mu + \nu x)$. The following theorem (a special case of a theorem from Kurtz, 1981) represents a law of large numbers for this case.

Theorem 3. Let $X^{(N)}(t)$ be a family of birth-and-death processes with rates given by (25). Let $F(x) = b(x) - m(x)$ be a Lipschitz function on $\mathbb{R}_+$, and let

$$\lim_{N \to \infty} \frac{X^{(N)}(0)}{N} = x_0 \ge 0 \quad \text{w.p. } 1.$$

Then for all $T > 0$,

$$\lim_{N \to \infty} \sup_{t \in [0,T]} \left| \frac{X^{(N)}(t)}{N} - x(t) \right| = 0 \quad \text{w.p. } 1 \qquad (27)$$

where $x(t)$ is the solution of the Cauchy problem

$$x'(t) = F(x(t)), \qquad x(0) = x_0. \qquad (28)$$

In words: as the scale of the population grows to infinity, the stochastic model converges (technically, it is an almost-everywhere convergence) to the deterministic model, uniformly on any finite interval $[0, T]$.

Several aspects of (27) can be noted. First of all, if $x_0 > 0$, the initial condition satisfies $X^{(N)}(0) = O(N)$, i.e. it is very large. The approximation of the stochastic model with equation (28) works well when $N$ is large, and the initial condition $X^{(N)}(0)$ is as well. If instead $X^{(N)}(0)$ is kept fixed at some (small) value as $N$ increases (the situation considered when looking at extinction probabilities), we have $\lim_{N \to \infty} X^{(N)}(0)/N = 0$. Then, if we assume $\lambda_0^{(N)} = 0$ (extinction is possible), $x(t) \equiv 0$. This is not useful information, because it does not allow us to distinguish between the cases in which $X^{(N)}(T) = 0$ and those in which $X^{(N)}(T) > 0$ and the population will continue growing. The techniques of Section 4 are required to study the probability of extinction.

Moreover, in that section we have shown that, when birth and death rates follow the logistic-like rule (26), then $\lim_{t \to +\infty} X^{(N)}(t) = 0$, however large $N$ is. On the other hand, it is easy to show that (if $\lambda > \mu$) $\lim_{t \to +\infty} x(t) = \frac{\lambda - \mu}{\nu}$, which represents the carrying capacity of the deterministic equation. This shows that we cannot interchange the limits:

$$0 = \lim_{N \to \infty} \lim_{t \to +\infty} \frac{X^{(N)}(t)}{N} \ne \lim_{t \to +\infty} \lim_{N \to \infty} \frac{X^{(N)}(t)}{N} = \frac{\lambda - \mu}{\nu}.$$

Equation (28) describes the stochastic model accurately (under the assumptions of the previous theorem) only over finite intervals of time, while it is certain that sooner or later the stochastic model will randomly drift to extinction. However, the time scale of these fluctuations may be so large (see the computation in Subsection 4.1.1) as to be irrelevant from a practical point of view.

Figure 1: Some simulations of a birth-and-death process $X(t)$ corresponding to the logistic differential equation: $\lambda_i = b i$, $\mu_i = (d + i(b-d)/K)\, i$. Here $b = 1$, $d = 0.5$, $K = 30$, $X(0) = 1$. In the left panel, 10 simulations on a short interval: 4 of them undergo early extinction. In the right panel, 2 of the simulations that do not go extinct, on a longer time interval. In both cases, the solid line represents the solution of the differential equation $X'(t) = r X(t)(1 - X(t)/K)$ with $r = 0.5$, $K = 30$, $X(0) = 1$.

Fig. 1 shows some simulations of the model that exhibit some typical features. Some simulations undergo early extinction; the probability of this event can be approximated very well by (15): for the parameter values used in the figure, this amounts to 0.5. Those that do not undergo early extinction fluctuate around the equilibrium of the differential equation. The simulations are not very close to the
solution of the differential equation; as the previous theorem suggests, that is a limit result as a scale parameter (which in this case could be $K$) goes to infinity, while here $K = 30$ is definitely not very large.

Other results obtained by Kurtz and co-workers provide central limit theorems for the convergence of $X^{(N)}(t)$. These can be used to analyse the fluctuations of the trajectories around the equilibrium, but this is definitely beyond the level of this book. Finally, as discussed above, all these realizations of the process will eventually reach 0, leading to population extinction, but this is very difficult to see on the time scale at which the simulations are run.

Exercises

1. Let us consider a birth-and-death process in which the rate of transitions from $m$ to $m+1$ is $\beta(m+1)$, $m \ge 0$, and the rate of transitions from $m$ to $m-1$ is $\gamma m$, $m \ge 1$.

(a) Write down the Kolmogorov backward and forward equations for this model.

(b) Show that there are no absorbing states.

(c) Find under which conditions on the parameters $\beta$ and $\gamma$ there exists a stationary probability distribution.

(d) Compute the stationary probability distribution (when it exists); is it one of the distributions that are considered in introductory courses in probability theory?

(e) Intuitively, what will the trajectories of the birth-and-death process do when there is no stationary probability distribution?

2. The release of sterile males is a technique that has sometimes been applied in the attempt to eradicate pests. The idea is that a certain proportion of females will mate with the released sterile males and will not produce offspring, leading to a reduction of the population. Clearly, this can be effective only if sterile males are quite abundant compared to normal males. Repeating this process for a few generations (while normal males become less and less abundant) could lead to a strong reduction of the population, and possibly to extinction.
We make extreme assumptions, in order to be able to build a very simplified model of this mechanism in the form of a birth-and-death process.
First, assume that the numbers of females and of normal males are at all times equal: a male dies when and only when a female dies (at rate $\mu$, independently of population size), and offspring are born in pairs (one male and one female). Second, assume that the number of sterile males is kept constant at the value $S$ (as soon as one dies, it is replaced by a newly released one). Finally, assume that each female mates at rate $\lambda$ (independently of population size) with a male chosen at random among the normal and sterile ones present in the population: if the male chosen is normal, she produces one female and one male; if it is sterile, she produces no offspring.

(a) Write the infinitesimal transition rates for this process (i.e., the rates at which the number of females changes from $j$ to a different value).

(b) Write down the corresponding Kolmogorov differential equations.

(c) Noting that 0 is an absorbing state for the process, write down a system for the probabilities that the population becomes extinct sooner or later, conditional on the initial number of females (and males) being equal to $j$. Intuitively, will these probabilities always be equal to 1?

(d) Modify the model by assuming that there exists a level $K > 0$ such that, when the number of females reaches $K$, the mating rate drops to 0, while it is given by the model above for $j < K$. Write down a system of equations for the mean time to extinction, conditional on the number of females (and males) at time 0.

(e) Assume $K = 3$, $\lambda = 1.2$, $\mu = 1$. Find the value of $T_1$, the mean time to extinction, conditional on 1 being the number of females (and males) at time 0. [I believe that a simple expression can be obtained using a generic value for $S$; if this seems too difficult, use $S = 2$.]
More informationLectures on Probability and Statistical Models
Lectures on Probability and Statistical Models Phil Pollett Professor of Mathematics The University of Queensland c These materials can be used for any educational purpose provided they are are not altered
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca
More informationSTOCHASTIC PROCESSES Basic notions
J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving
More informationContinuous Time Markov Chains
Continuous Time Markov Chains Stochastic Processes - Lecture Notes Fatih Cavdur to accompany Introduction to Probability Models by Sheldon M. Ross Fall 2015 Outline Introduction Continuous-Time Markov
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More information6 Continuous-Time Birth and Death Chains
6 Continuous-Time Birth and Death Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology.
More informationEXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS
J. Appl. Prob. 41, 1211 1218 (2004) Printed in Israel Applied Probability Trust 2004 EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS BEN CAIRNS and P. K. POLLETT, University of Queensland
More informationIntegrals for Continuous-time Markov chains
Integrals for Continuous-time Markov chains P.K. Pollett Abstract This paper presents a method of evaluating the expected value of a path integral for a general Markov chain on a countable state space.
More informationP i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=
2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model
More informationAARMS Homework Exercises
1 For the gamma distribution, AARMS Homework Exercises (a) Show that the mgf is M(t) = (1 βt) α for t < 1/β (b) Use the mgf to find the mean and variance of the gamma distribution 2 A well-known inequality
More information14 Branching processes
4 BRANCHING PROCESSES 6 4 Branching processes In this chapter we will consider a rom model for population growth in the absence of spatial or any other resource constraints. So, consider a population of
More informationIntroduction to Stochastic Processes with Applications in the Biosciences
Introduction to Stochastic Processes with Applications in the Biosciences David F. Anderson University of Wisconsin at Madison Copyright c 213 by David F. Anderson. Contents 1 Introduction 4 1.1 Why study
More information2 Discrete-Time Markov Chains
2 Discrete-Time Markov Chains Angela Peace Biomathematics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,
More informationhttp://www.math.uah.edu/stat/markov/.xhtml 1 of 9 7/16/2009 7:20 AM Virtual Laboratories > 16. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 1. A Markov process is a random process in which the future is
More informationAt the boundary states, we take the same rules except we forbid leaving the state space, so,.
Birth-death chains Monday, October 19, 2015 2:22 PM Example: Birth-Death Chain State space From any state we allow the following transitions: with probability (birth) with probability (death) with probability
More informationHitting Probabilities
Stat25B: Probability Theory (Spring 23) Lecture: 2 Hitting Probabilities Lecturer: James W. Pitman Scribe: Brian Milch 2. Hitting probabilities Consider a Markov chain with a countable state space S and
More informationLast Update: March 1 2, 201 0
M ath 2 0 1 E S 1 W inter 2 0 1 0 Last Update: March 1 2, 201 0 S eries S olutions of Differential Equations Disclaimer: This lecture note tries to provide an alternative approach to the material in Sections
More informationFinite-Horizon Statistics for Markov chains
Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update
More informationAPM 541: Stochastic Modelling in Biology Discrete-time Markov Chains. Jay Taylor Fall Jay Taylor (ASU) APM 541 Fall / 92
APM 541: Stochastic Modelling in Biology Discrete-time Markov Chains Jay Taylor Fall 2013 Jay Taylor (ASU) APM 541 Fall 2013 1 / 92 Outline 1 Motivation 2 Markov Processes 3 Markov Chains: Basic Properties
More informationCDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes
CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv
More informationRecursive Sequences in the Life Sciences
Recursive Sequences in the Life Sciences Recursive sequences (or difference equations) are often used in biology to model, for example, cell division and insect populations. In this biological context
More informationSection 9.2 introduces the description of Markov processes in terms of their transition probabilities and proves the existence of such processes.
Chapter 9 Markov Processes This lecture begins our study of Markov processes. Section 9.1 is mainly ideological : it formally defines the Markov property for one-parameter processes, and explains why it
More informationDerivation of Itô SDE and Relationship to ODE and CTMC Models
Derivation of Itô SDE and Relationship to ODE and CTMC Models Biomathematics II April 23, 2015 Linda J. S. Allen Texas Tech University TTU 1 Euler-Maruyama Method for Numerical Solution of an Itô SDE dx(t)
More information25.1 Ergodicity and Metric Transitivity
Chapter 25 Ergodicity This lecture explains what it means for a process to be ergodic or metrically transitive, gives a few characterizes of these properties (especially for AMS processes), and deduces
More informationThe Wright-Fisher Model and Genetic Drift
The Wright-Fisher Model and Genetic Drift January 22, 2015 1 1 Hardy-Weinberg Equilibrium Our goal is to understand the dynamics of allele and genotype frequencies in an infinite, randomlymating population
More informationLatent voter model on random regular graphs
Latent voter model on random regular graphs Shirshendu Chatterjee Cornell University (visiting Duke U.) Work in progress with Rick Durrett April 25, 2011 Outline Definition of voter model and duality with
More informationQueueing Networks and Insensitivity
Lukáš Adam 29. 10. 2012 1 / 40 Table of contents 1 Jackson networks 2 Insensitivity in Erlang s Loss System 3 Quasi-Reversibility and Single-Node Symmetric Queues 4 Quasi-Reversibility in Networks 5 The
More informationA REVIEW AND APPLICATION OF HIDDEN MARKOV MODELS AND DOUBLE CHAIN MARKOV MODELS
A REVIEW AND APPLICATION OF HIDDEN MARKOV MODELS AND DOUBLE CHAIN MARKOV MODELS Michael Ryan Hoff A Dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment
More informationClassification of Countable State Markov Chains
Classification of Countable State Markov Chains Friday, March 21, 2014 2:01 PM How can we determine whether a communication class in a countable state Markov chain is: transient null recurrent positive
More informationIEOR 6711: Stochastic Models I, Fall 2003, Professor Whitt. Solutions to Final Exam: Thursday, December 18.
IEOR 6711: Stochastic Models I, Fall 23, Professor Whitt Solutions to Final Exam: Thursday, December 18. Below are six questions with several parts. Do as much as you can. Show your work. 1. Two-Pump Gas
More informationStochastic process. X, a series of random variables indexed by t
Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,
More informationIrreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1
Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate
More informationMarkov processes and queueing networks
Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution
More informationMARKOV PROCESSES. Valerio Di Valerio
MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some
More informationChapter 11 Advanced Topic Stochastic Processes
Chapter 11 Advanced Topic Stochastic Processes CHAPTER OUTLINE Section 1 Simple Random Walk Section 2 Markov Chains Section 3 Markov Chain Monte Carlo Section 4 Martingales Section 5 Brownian Motion Section
More informationThe SIS and SIR stochastic epidemic models revisited
The SIS and SIR stochastic epidemic models revisited Jesús Artalejo Faculty of Mathematics, University Complutense of Madrid Madrid, Spain jesus_artalejomat.ucm.es BCAM Talk, June 16, 2011 Talk Schedule
More informationStochastic Modelling Unit 1: Markov chain models
Stochastic Modelling Unit 1: Markov chain models Russell Gerrard and Douglas Wright Cass Business School, City University, London June 2004 Contents of Unit 1 1 Stochastic Processes 2 Markov Chains 3 Poisson
More informationContinuing the calculation of the absorption probability for a branching process. Homework 3 is due Tuesday, November 29.
Extinction Probability for Branching Processes Friday, November 11, 2011 2:05 PM Continuing the calculation of the absorption probability for a branching process. Homework 3 is due Tuesday, November 29.
More informationIEOR 6711, HMWK 5, Professor Sigman
IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.
More informationMarkov Chains. Arnoldo Frigessi Bernd Heidergott November 4, 2015
Markov Chains Arnoldo Frigessi Bernd Heidergott November 4, 2015 1 Introduction Markov chains are stochastic models which play an important role in many applications in areas as diverse as biology, finance,
More informationTMA4265 Stochastic processes ST2101 Stochastic simulation and modelling
Norwegian University of Science and Technology Department of Mathematical Sciences Page of 7 English Contact during examination: Øyvind Bakke Telephone: 73 9 8 26, 99 4 673 TMA426 Stochastic processes
More informationCDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical
CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one
More informationStochastic Shortest Path Problems
Chapter 8 Stochastic Shortest Path Problems 1 In this chapter, we study a stochastic version of the shortest path problem of chapter 2, where only probabilities of transitions along different arcs can
More informationLTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather
1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in
More informationLet (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t
2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition
More informationSIMILAR MARKOV CHAINS
SIMILAR MARKOV CHAINS by Phil Pollett The University of Queensland MAIN REFERENCES Convergence of Markov transition probabilities and their spectral properties 1. Vere-Jones, D. Geometric ergodicity in
More informationMarkov Chains. X(t) is a Markov Process if, for arbitrary times t 1 < t 2 <... < t k < t k+1. If X(t) is discrete-valued. If X(t) is continuous-valued
Markov Chains X(t) is a Markov Process if, for arbitrary times t 1 < t 2
More informationConvergence of Feller Processes
Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes
More informationReinforcement Learning
Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha
More informationLinear-Quadratic Optimal Control: Full-State Feedback
Chapter 4 Linear-Quadratic Optimal Control: Full-State Feedback 1 Linear quadratic optimization is a basic method for designing controllers for linear (and often nonlinear) dynamical systems and is actually
More informationBRANCHING PROCESSES 1. GALTON-WATSON PROCESSES
BRANCHING PROCESSES 1. GALTON-WATSON PROCESSES Galton-Watson processes were introduced by Francis Galton in 1889 as a simple mathematical model for the propagation of family names. They were reinvented
More information1 (t + 4)(t 1) dt. Solution: The denominator of the integrand is already factored with the factors being distinct, so 1 (t + 4)(t 1) = A
Calculus Topic: Integration of Rational Functions Section 8. # 0: Evaluate the integral (t + )(t ) Solution: The denominator of the integrand is already factored with the factors being distinct, so (t
More information2. Transience and Recurrence
Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times
More informationMATH 215/255 Solutions to Additional Practice Problems April dy dt
. For the nonlinear system MATH 5/55 Solutions to Additional Practice Problems April 08 dx dt = x( x y, dy dt = y(.5 y x, x 0, y 0, (a Show that if x(0 > 0 and y(0 = 0, then the solution (x(t, y(t of the
More informationBudapest University of Tecnology and Economics. AndrásVetier Q U E U I N G. January 25, Supported by. Pro Renovanda Cultura Hunariae Alapítvány
Budapest University of Tecnology and Economics AndrásVetier Q U E U I N G January 25, 2000 Supported by Pro Renovanda Cultura Hunariae Alapítvány Klebelsberg Kunó Emlékére Szakalapitvány 2000 Table of
More informationEXTINCTION AND QUASI-STATIONARITY IN THE VERHULST LOGISTIC MODEL: WITH DERIVATIONS OF MATHEMATICAL RESULTS
EXTINCTION AND QUASI-STATIONARITY IN THE VERHULST LOGISTIC MODEL: WITH DERIVATIONS OF MATHEMATICAL RESULTS INGEMAR NÅSELL Abstract. We formulate and analyze a stochastic version of the Verhulst deterministic
More informationIntroduction to self-similar growth-fragmentations
Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation
More informationApproximating diffusions by piecewise constant parameters
Approximating diffusions by piecewise constant parameters Lothar Breuer Institute of Mathematics Statistics, University of Kent, Canterbury CT2 7NF, UK Abstract We approximate the resolvent of a one-dimensional
More information6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )
6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined
More informationThe Leslie Matrix. The Leslie Matrix (/2)
The Leslie Matrix The Leslie matrix is a generalization of the above. It describes annual increases in various age categories of a population. As above we write p n+1 = Ap n where p n, A are given by:
More informationMath 6810 (Probability) Fall Lecture notes
Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),
More informationSTATS 3U03. Sang Woo Park. March 29, Textbook: Inroduction to stochastic processes. Requirement: 5 assignments, 2 tests, and 1 final
STATS 3U03 Sang Woo Park March 29, 2017 Course Outline Textbook: Inroduction to stochastic processes Requirement: 5 assignments, 2 tests, and 1 final Test 1: Friday, February 10th Test 2: Friday, March
More informationMATH 56A: STOCHASTIC PROCESSES CHAPTER 6
MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and
More informationExample: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected
4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X
More informationModeling Prey and Predator Populations
Modeling Prey and Predator Populations Alison Pool and Lydia Silva December 15, 2006 Abstract In this document, we will explore the modeling of two populations based on their relation to one another. Specifically
More informationpopulation size at time t, then in continuous time this assumption translates into the equation for exponential growth dn dt = rn N(0)
Appendix S1: Classic models of population dynamics in ecology and fisheries science Populations do not grow indefinitely. No concept is more fundamental to ecology and evolution. Malthus hypothesized that
More informationM4A42 APPLIED STOCHASTIC PROCESSES
M4A42 APPLIED STOCHASTIC PROCESSES G.A. Pavliotis Department of Mathematics Imperial College London, UK LECTURE 1 12/10/2009 Lectures: Mondays 09:00-11:00, Huxley 139, Tuesdays 09:00-10:00, Huxley 144.
More informationLecture 22 Girsanov s Theorem
Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n
More informationOn Backward Product of Stochastic Matrices
On Backward Product of Stochastic Matrices Behrouz Touri and Angelia Nedić 1 Abstract We study the ergodicity of backward product of stochastic and doubly stochastic matrices by introducing the concept
More information12 Markov chains The Markov property
12 Markov chains Summary. The chapter begins with an introduction to discrete-time Markov chains, and to the use of matrix products and linear algebra in their study. The concepts of recurrence and transience
More informationMATH 415, WEEKS 14 & 15: 1 Recurrence Relations / Difference Equations
MATH 415, WEEKS 14 & 15: Recurrence Relations / Difference Equations 1 Recurrence Relations / Difference Equations In many applications, the systems are updated in discrete jumps rather than continuous
More informationTOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology
TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology DONALD M. DAVIS Abstract. We use ku-cohomology to determine lower bounds for the topological complexity of mod-2 e lens spaces. In the
More informationprocess on the hierarchical group
Intertwining of Markov processes and the contact process on the hierarchical group April 27, 2010 Outline Intertwining of Markov processes Outline Intertwining of Markov processes First passage times of
More informationEXAM IN COURSE TMA4265 STOCHASTIC PROCESSES Wednesday 7. August, 2013 Time: 9:00 13:00
Norges teknisk naturvitenskapelige universitet Institutt for matematiske fag Page 1 of 7 English Contact: Håkon Tjelmeland 48 22 18 96 EXAM IN COURSE TMA4265 STOCHASTIC PROCESSES Wednesday 7. August, 2013
More informationLECTURE #6 BIRTH-DEATH PROCESS
LECTURE #6 BIRTH-DEATH PROCESS 204528 Queueing Theory and Applications in Networks Assoc. Prof., Ph.D. (รศ.ดร. อน นต ผลเพ ม) Computer Engineering Department, Kasetsart University Outline 2 Birth-Death
More information1 Stochastic Dynamic Programming
1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future
More informationIntroductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19
Introductory Analysis I Fall 204 Homework #9 Due: Wednesday, November 9 Here is an easy one, to serve as warmup Assume M is a compact metric space and N is a metric space Assume that f n : M N for each
More informationMeasure and integration
Chapter 5 Measure and integration In calculus you have learned how to calculate the size of different kinds of sets: the length of a curve, the area of a region or a surface, the volume or mass of a solid.
More information