Chapter 3: Birth and Death processes


Thus far, we have ignored the random element of population behaviour. Of course, this prevents us from finding the relative likelihoods of various events that might be of interest, for example extinction. In this chapter, we focus on demographic stochasticity. This component arises from the intrinsically stochastic nature of birth and death processes. Even without external noise, one cannot predict future population numbers with certainty. Stochastic models often predict population behaviour that differs significantly from that of their deterministic equivalents. In these cases, the randomness itself is central to the population dynamics; under some nonlinearities it can have a systematic influence on the population. Such effects cannot be captured by a deterministic model.

3.1) Introduction to Birth and Death processes

A birth and death process is defined as any continuous-time Markov chain whose state space is the set of all non-negative integers and whose transition rates from state i to state j, r_{i,j}, are equal to zero whenever |i - j| > 1. That is, a birth and death process that is currently in state i can only go to either state i - 1 or state i + 1. When the state increases by one, we say that a birth has occurred, and when the process decreases by one, we say that a death has occurred. We thus have:

    B(N) = r_{N,N+1},    D(N) = r_{N,N-1}    (3.1.1)

    p_N(t + Δt) = p_{N-1}(t) B(N-1) Δt + p_{N+1}(t) D(N+1) Δt
                  + p_N(t) [1 - B(N) Δt - D(N) Δt] + o(Δt),    for N ≥ 1    (3.1.2)

where p_N(t) is the probability that the population size at time t is N. The Markov assumption states that only the current population size is of use in predicting future population behaviour. Thus, by definition, other possible transition-rate predictors, such as the environmental condition, are ignored.
A population whose size at time t + Δt is N could only have had one of four things occur in the preceding interval [t, t + Δt]: a member of the population could have given birth; a member of the population could have died; no births or deaths could have occurred; or, finally, there could have been more than one event (whether birth or death or both) in the interval. This gives equation (3.1.2): the probabilities of the first three events are given by the first three terms on the RHS,

and o(Δt) is the (negligible) probability of more than one event occurring within the interval [t, t + Δt] while still resulting in population size N at time t + Δt. The case N = 0 needs to be considered separately, as such a circumstance will only arise if there is a death when N = 1. In this case:

    p_0(t + Δt) = D(1) p_1(t) Δt + p_0(t) + o(Δt)    (3.1.3)

By subtracting p_N(t) from both sides of equations (3.1.2) and (3.1.3), and dividing by Δt, we obtain the so-called Kolmogorov forward equations. As Δt → 0:

    p_N'(t) = p_{N-1}(t) B(N-1) + p_{N+1}(t) D(N+1) - p_N(t) [B(N) + D(N)]    for N ≥ 1
    p_0'(t) = p_1(t) D(1)    for N = 0    (3.1.4)

The Kolmogorov equations are an infinite system of differential equations. They can be written in matrix form as:

    p'(t) = p(t) R    (3.1.5)

where p(t) = (p_0(t), p_1(t), p_2(t), ...) and R = (r_{i,j}), i, j = 0, 1, 2, ..., with

    r_{i,i+1} = B(i)
    r_{i,i-1} = D(i)
    r_{i,i}   = -(B(i) + D(i))
    r_{i,j}   = 0    for |i - j| > 1

The Kolmogorov equations are the primary means of defining a time-homogeneous Markov process. By solving these equations, we can find the probability distribution of N as a function of time.

3.2) Solving the Kolmogorov equations

The Kolmogorov equations can be solved for a linear birth-only model. Let N_0 denote the initial population size. For a birth-only model (i.e. a model that assumes that the members of the population cannot die), the Kolmogorov equations are as follows:

    p_N'(t) = B(N-1) p_{N-1}(t) - B(N) p_N(t)    for N > N_0, with boundary condition p_{N_0}(0) = 1    (3.2.1)

The differential equation for p_{N_0}(t) is:

    p_{N_0}'(t) = -B(N_0) p_{N_0}(t)    (3.2.2)
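The forward equations above can be integrated numerically on a truncated state space. The sketch below is illustrative only: the logistic-type rates B(N) = 0.5N - 0.01N² (taken as zero when negative) and D(N) = 0.1N + 0.005N² are assumptions for demonstration, not the rates used later in this chapter.

```python
import numpy as np

# Euler integration of the Kolmogorov forward equations p'(t) = p(t) R on the
# truncated state space {0, 1, ..., N_MAX}.  Rates are illustrative assumptions.
N_MAX = 60

def B(n):
    return max(0.5 * n - 0.01 * n * n, 0.0)   # birth rate, zero above n = 50

def D(n):
    return 0.1 * n + 0.005 * n * n            # death rate

R = np.zeros((N_MAX + 1, N_MAX + 1))
for i in range(N_MAX + 1):
    if i < N_MAX:
        R[i, i + 1] = B(i)                    # r_{i,i+1} = B(i)
    if i > 0:
        R[i, i - 1] = D(i)                    # r_{i,i-1} = D(i)
R -= np.diag(R.sum(axis=1))                   # r_{i,i} = -(B(i) + D(i)), rows sum to zero

p = np.zeros(N_MAX + 1)
p[5] = 1.0                                    # initial population size N0 = 5
dt = 0.001
for _ in range(int(10.0 / dt)):               # integrate forward to t = 10
    p = p + dt * (p @ R)

mean = float(np.arange(N_MAX + 1) @ p)
assert abs(p.sum() - 1.0) < 1e-8              # probability mass is conserved
```

The truncation at N_MAX simply drops the birth transition out of the top state; this is harmless provided the probability mass near the boundary is negligible.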

The above equations can be solved directly. Suppose that B(N) = λN, i.e. the per capita birth rate is not density-dependent. Upon substituting this transitional form into (3.2.2), we get:

    p_{N_0}'(t) = -λN_0 p_{N_0}(t)  =>  d ln p_{N_0}(t)/dt = -λN_0  =>  ln p_{N_0}(t) = -λN_0 t + C

where C is a constant, so p_{N_0}(t) ∝ e^{-λN_0 t}. Using the boundary condition that p_{N_0}(0) = 1, we then have:

    p_{N_0}(t) = e^{-λN_0 t}

This expression derived for p_{N_0}(t) can then be substituted into the Kolmogorov differential equation for p_{N_0+1}(t), thus allowing us to derive an expression for p_{N_0+1}(t):

    p_{N_0+1}'(t) + λ(N_0 + 1) p_{N_0+1}(t) = λN_0 e^{-λN_0 t}

Multiplying by the integrating factor e^{λ(N_0+1)t}, integrating, and using the boundary condition p_{N_0+1}(0) = 0 gives:

    p_{N_0+1}(t) = N_0 e^{-λN_0 t} (1 - e^{-λt})

Repeating the procedure with the boundary condition p_{N_0+2}(0) = 0, it can be shown that:

    p_{N_0+2}(t) = [N_0(N_0 + 1)/2] e^{-λN_0 t} (1 - e^{-λt})²

The first three terms suggest that the probability mass function for the population number is:

    p_N(t) = C(N-1, N_0-1) (e^{-λt})^{N_0} (1 - e^{-λt})^{N-N_0},    N = N_0, N_0+1, ...    (3.2.3)

This is a negative binomial distribution with parameters N_0 and e^{-λt}. If p_N(t) does indeed satisfy equation (3.2.3), then the differential equation for p_{N+1}(t) would be:

    p_{N+1}'(t) + λ(N + 1) p_{N+1}(t) = λN p_N(t)    (3.2.4)

Equation (3.2.4) is a first-order differential equation. We can thus apply a result given by Jaeger et al. (1974) to equation (3.2.4), with the boundary condition p_{N+1}(0) = 0, to get:

    p_{N+1}(t) = C(N, N_0-1) (e^{-λt})^{N_0} (1 - e^{-λt})^{N+1-N_0}    (3.2.5)
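The negative binomial result can be checked by Monte Carlo. A small sketch simulating the linear birth-only (Yule) process via exponential inter-birth times; the values of λ, N_0 and the horizon are arbitrary illustrative choices. The negative binomial mean is N_0 / e^{-λt} = N_0 e^{λt}.

```python
import random
import math

# Monte Carlo check for the linear birth-only (Yule) process:
# N(t) is negative binomial, so E[N(t)] = N0 * exp(lam * t).
random.seed(1)
lam, N0, t_end, runs = 0.7, 3, 2.0, 20000

def yule(lam, N0, t_end):
    n, t = N0, 0.0
    while True:
        t += random.expovariate(lam * n)   # waiting time to next birth at rate lam*n
        if t > t_end:
            return n
        n += 1

mean_sim = sum(yule(lam, N0, t_end) for _ in range(runs)) / runs
mean_exact = N0 * math.exp(lam * t_end)    # negative binomial mean
assert abs(mean_sim - mean_exact) / mean_exact < 0.05
```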

We have thus shown that if p_N(t) satisfies equation (3.2.3), then p_{N+1}(t) also satisfies equation (3.2.3). We also know that equation (3.2.3) is true when N = N_0. Hence, by the induction principle, we know that equation (3.2.3) is always true. We have thus proved that the probability mass function for a linear birth-only process is a Negative Binomial distribution with parameters N_0 and e^{-λt}.

Similarly, one can directly derive the probability distribution function for a death-only process. The Kolmogorov equations for the death process are:

    p_N'(t) = D(N+1) p_{N+1}(t) - D(N) p_N(t)    for N < N_0, with boundary condition p_{N_0}(0) = 1    (3.2.6)

with the differential equation for p_{N_0}(t) being:

    p_{N_0}'(t) = -D(N_0) p_{N_0}(t)    (3.2.7)

As before, we start off by solving for p_{N_0}(t) using the boundary condition that p_{N_0}(0) = 1, after which we can solve for p_{N_0-1}(t), p_{N_0-2}(t), ..., p_0(t) using the boundary condition that p_N(0) = 0 for N ≠ N_0. When D(N) = bN (i.e. the death rate is proportional to the population size), the resulting probability mass function for the death-only process has the form:

    p_N(t) = C(N_0, N) (e^{-bt})^N (1 - e^{-bt})^{N_0-N},    N = 0, 1, ..., N_0    (3.2.8)

Unlike the birth-only process, the death-only process has a finite set of outcomes. Thus, for a linear death-only process, the population size is binomially distributed with parameters N_0 and e^{-bt}.

3.3) Alternatives to the Kolmogorov Equations

The direct method of solving the differential equations is quite laborious even for birth-only and death-only models. This is due to the necessity of deriving the probabilities for the various possible population sizes (or at least the first few probabilities) separately before one can derive the general expression for p_N(t). The direct method is even less practical when dealing with a model that allows for both births and deaths. Due to the dependence of p_N'(t) on both p_{N-1}(t) and p_{N+1}(t) in such models, one must solve the differential equations simultaneously; contrast this with the successive solution of the equations in the two earlier models.
This makes the Kolmogorov equations unwieldy for large populations as, in such cases, obtaining even

a numerical solution is often difficult. The following alternatives to the Kolmogorov equations have proved useful.

A. The Continuous Approximation

By its very nature the population size, N, is a discrete random variable. By treating N as a continuous random variable and re-interpreting p_N(t) as N's probability density function (which we shall denote by p(N, t) to avoid confusion), one can derive an approximate probability distribution for the population. Nisbet et al. (1982) derived a single, approximate differential equation for p(N, t). By performing a Taylor expansion on p(N, t) and discarding terms that are of third order and higher, the authors showed that:

    ∂p(N, t)/∂t = -∂/∂N {[B(N) - D(N)] p(N, t)} + (1/2) ∂²/∂N² {[B(N) + D(N)] p(N, t)}    (3.3.1)

A slightly modified version of the proof given by Nisbet and Gurney (1982) is given in Appendix A (some of the elements of the proof have been re-ordered in an attempt to make the proof more comprehensible). The boundary condition for equation (3.3.1) when the initial population size is N_0 is:

    p(N, 0) = δ(N - N_0),    where δ is the Dirac delta function    (3.3.2)

Unfortunately, due to its non-linear form, equation (3.3.1) is analytically intractable. Even a numerical solution to the differential equation cannot be found, due to the discontinuous nature of the boundary condition given in (3.3.2). This implies that the continuous approximation cannot be used to derive an approximate probability distribution for N. However, equation (3.3.1) can be used to generate an approximate quasi-equilibrium distribution (covered in Section 3.6) which, in turn, can be used to derive an approximate analytical expression for the mean time to extinction.

B. Stochastic Differential Equations

Both the Kolmogorov equations and their continuous approximation model the population probability distribution through time. The following model is based on the population size itself and can thus be readily compared with the deterministic models covered in the preceding chapter.
Stochastic differential equations are often also used to model environmental stochasticity. It can be shown (see Appendix A) that:

    E[dN] = [B(N) - D(N)] dt
    E[(dN)²] = [B(N) + D(N)] dt + O(dt²)

Thus, provided dt is sufficiently small for terms of order dt² and higher to be ignored, we have:

    Var[dN] = E[(dN)²] - (E[dN])² = [B(N) + D(N)] dt - {[B(N) - D(N)] dt}²
            ≈ [B(N) + D(N)] dt,    provided dt is sufficiently small    (3.3.3)

We can thus write:

    dN = [B(N) - D(N)] dt + η(t) sqrt([B(N) + D(N)] dt)    (3.3.4)

where η(t) is a random variable with zero mean and unit variance. Equation (3.3.4) is merely a cumbersome restatement of the Kolmogorov equations: η(t) has an unusual, discrete probability distribution to accommodate the fact that N can only take on integer values. However, a tractable approximation to this equation is possible. Nisbet et al. (1982) stated that, for all but the smallest populations, any change in the population size which is large enough to affect the transition probabilities must be the result of a large number of statistically independent births and deaths. Thus, dN can be taken over a relatively long time increment dt; η(t) will then have an approximately normal probability distribution (by the Central Limit Theorem). By regarding N as a continuous variable (which implies that dt is small; see the proof of equation (3.3.1) in Appendix A) and η(t) as being normally distributed (implying that dt is large), we have:

    dN = [B(N) - D(N)] dt + sqrt(B(N) + D(N)) dω(t),    where ω(t) is a Wiener process    (3.3.5)

The Wiener process, ω(t), is a continuous random process with independent increments which is also time-homogeneous (i.e. ω(t + s) - ω(s) has the same distribution for all s, and ω(0) = 0). In addition, ω(t) is Normally distributed with mean 0 and variance σ²t (where σ > 0). Nisbet et al. (1982) did not offer a resolution of the requirement that dt be both small and large. However, from the empirical evidence in Section 3.5, equation (3.3.5) does seem to provide a good approximation to the Kolmogorov equations.
The stochastic differential equation can alternatively be written as:

    dN/dt = B(N) - D(N) + sqrt(B(N) + D(N)) γ(t),    where γ(t) = dω(t)/dt is white noise    (3.3.6)
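The SDE above can be simulated with a standard Euler-Maruyama scheme. A minimal sketch, assuming the same illustrative logistic-type rates used earlier (not the chapter's example rates):

```python
import random
import math

# Euler-Maruyama discretisation of dN = [B(N) - D(N)] dt + sqrt(B(N) + D(N)) dw(t).
# Rates are illustrative assumptions; deterministic equilibrium n* = 80/3 ~ 26.7.
random.seed(2)

def B(n):
    return max(0.5 * n - 0.01 * n * n, 0.0)

def D(n):
    return 0.1 * n + 0.005 * n * n

def sde_path(N0, T, dt):
    n = float(N0)
    for _ in range(int(T / dt)):
        drift = B(n) - D(n)
        diff = math.sqrt(B(n) + D(n))
        n += drift * dt + diff * random.gauss(0.0, math.sqrt(dt))  # dw ~ Normal(0, dt)
        n = max(n, 0.0)               # population size cannot go negative
    return n

final = [sde_path(10, 30.0, 0.01) for _ in range(500)]
mean = sum(final) / len(final)
assert 20 < mean < 33                 # settles near the deterministic equilibrium
```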

Equation (3.3.6) can be interpreted as the sum of deterministic and stochastic contributions to dN/dt. Care must be taken with this interpretation, since white noise is not well behaved (e.g. E[γ(t)²] = ∞). Stochastic differential equations prove to be especially useful for deriving the gross fluctuation characteristics of a population.

C. Generating Functions

We now derive a single differential equation for the population's moment generating function. The moment generating function of any random variable characterises that random variable's probability distribution. Hence such an equation implicitly models the population's probability distribution through time. Consider the random variables N(t) and N(t + Δt) - N(t). The variables represent the population size at time t and the net change in the population size over the interval [t, t + Δt] respectively. If we let f_j(N) represent the continuous transition rate from population size N to size N + j, then:

    P[N(t + Δt) - N(t) = j | N(t) = N] = f_j(N) Δt + o(Δt)    for j ≠ 0
    P[N(t + Δt) - N(t) = 0 | N(t) = N] = 1 - Σ_{j≠0} f_j(N) Δt + o(Δt)    (3.3.7)

For the birth and death model, f_1(N) = B(N), f_{-1}(N) = D(N) and f_j(N) = 0 for j ≠ -1, 1. Also, f_j(N) = r_{N,N+j} using the notation for the transition rates introduced in Section 3.1. Bailey (1964) showed that the following differential equation for the moment generating function, M(θ, t), corresponds to the set of probability differential equations (as shown in (3.1.5)):

    ∂M(θ, t)/∂t = Σ_j (e^{jθ} - 1) f_j(∂/∂θ) M(θ, t)    (3.3.8)

Note that the ∂/∂θ operator acts only on M(θ, t). So, for example, if f(N) = aN - bN², then f(∂/∂θ) M(θ, t) = a ∂M(θ, t)/∂θ - b ∂²M(θ, t)/∂θ². A proof of equation (3.3.8) is given in Appendix B.

The birth and death process assumes that the population size cannot change by more than one unit in the interval Δt. Hence we have:

    ∂M(θ, t)/∂t = (e^θ - 1) B(∂/∂θ) M(θ, t) + (e^{-θ} - 1) D(∂/∂θ) M(θ, t)    (3.3.9)

The advantage of the above equation is easy to see: instead of having a possibly infinite set of differential equations to solve simultaneously, we only need to solve a single

differential equation. The moment generating function characterises the probability distribution of N, so the solution of equation (3.3.9) helps to identify the correct probability distribution of N. One can come up with corresponding differential equations for the probability generating function, P(φ, t), and the cumulant generating function, K(θ, t). For K(θ, t), we use the relationship K(θ, t) = log M(θ, t) (equation (3.4.3) below gives the differential equation of K(θ, t) for a birth and death process). If we substitute e^θ = φ, so that ∂/∂θ becomes φ ∂/∂φ, in (3.3.9), we then have the following differential equation for the probability generating function, P(φ, t):

    ∂P(φ, t)/∂t = (φ - 1) B(φ ∂/∂φ) P(φ, t) + (φ^{-1} - 1) D(φ ∂/∂φ) P(φ, t)    (3.3.10)

Consider the simple case where the transition rates are proportional to the population size:

    B(N) = aN;    D(N) = bN,    where a and b are constants    (3.3.11)

In this case, the differential equation for M(θ, t) becomes:

    ∂M(θ, t)/∂t = [a(e^θ - 1) + b(e^{-θ} - 1)] ∂M(θ, t)/∂θ    (3.3.12)

Equation (3.3.12) is a linear differential equation. Hence an analytical solution for M(θ, t) is easily obtainable. Bailey (1964) showed that the solution of equation (3.3.12) with boundary condition M(θ, 0) = e^{N_0 θ} (i.e. the initial population size is N_0) is:

    M(θ, t) = [ (b(e^θ - 1) e^{(a-b)t} - (a e^θ - b)) / (a(e^θ - 1) e^{(a-b)t} - (a e^θ - b)) ]^{N_0}    (3.3.13)

Since the birth and death rates were simply proportional to the population size, it was easy to derive this analytical solution for M(θ, t). If the birth and death rates are nonlinear, differential equations (3.3.9) and (3.3.10) can become intractable. In such cases, we unfortunately cannot derive the exact probability distribution of N, and thus we need to look at ways to approximate the true probability distribution or to solve the equation numerically.
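Differentiating the MGF solution for linear rates at θ = 0 gives the well-known mean E[N(t)] = N_0 e^{(a-b)t}, which can be verified by direct simulation. The parameter values below are illustrative assumptions:

```python
import random
import math

# Event-by-event simulation of the linear birth-death process B(N) = aN,
# D(N) = bN, checked against the mean implied by the MGF: N0 * exp((a-b)t).
random.seed(3)
a, b, N0, t_end, runs = 0.6, 0.4, 10, 3.0, 20000

def simulate(a, b, N0, t_end):
    n, t = N0, 0.0
    while n > 0:
        t += random.expovariate((a + b) * n)          # time to the next event
        if t > t_end:
            break
        n += 1 if random.random() < a / (a + b) else -1  # birth w.p. a/(a+b)
    return n

mean_sim = sum(simulate(a, b, N0, t_end) for _ in range(runs)) / runs
mean_exact = N0 * math.exp((a - b) * t_end)
assert abs(mean_sim - mean_exact) / mean_exact < 0.05
```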

3.4) Approximate solutions to the Kolmogorov equations

The difficulties encountered in deriving an analytical solution to the Kolmogorov equations, particularly when the birth and death rates are nonlinear, force one to look at various, more tractable, models that approximate a population's probability distribution. This section is based on the work of Matis et al. Consider transition rates with the following mathematical form:

    B(N) = f_1(N) = a_1 N - b_1 N²    for N ≤ a_1/b_1, and B(N) = 0 otherwise
    D(N) = f_{-1}(N) = a_2 N + b_2 N²    where a_i, b_i > 0; i = 1, 2    (3.4.1)

We expect a_i >> b_i. Consequently, the per capita birth rate a_1 dominates when N is small and the term b_1 N dominates when N is large; b_1 can thus be interpreted as the effect of crowding on the population. A similar interpretation also holds for the death rates. By applying equation (3.3.9) to the above transition rates, we obtain:

    ∂M/∂t = (e^θ - 1) [a_1 ∂M/∂θ - b_1 ∂²M/∂θ²] + (e^{-θ} - 1) [a_2 ∂M/∂θ + b_2 ∂²M/∂θ²]    (3.4.2)

This implies that the differential equation for the cumulant generating function, K(θ, t), is:

    ∂K/∂t = (e^θ - 1) {a_1 ∂K/∂θ - b_1 [∂²K/∂θ² + (∂K/∂θ)²]}
            + (e^{-θ} - 1) {a_2 ∂K/∂θ + b_2 [∂²K/∂θ² + (∂K/∂θ)²]}    (3.4.3)

A derivation of equation (3.4.3) is given in Appendix E. Neither of the above two differential equations is analytically tractable. We are thus unable to find an exact analytical expression for the probability mass function, and need to look at ways to derive an approximate expression for it as it evolves through time. One alternative is to try to derive expressions for the first few cumulants of N from equation (3.4.3). In deriving such expressions, we make use of the following relationship:

    K(θ, t) = θ κ_1(t) + (θ²/2!) κ_2(t) + (θ³/3!) κ_3(t) + ...,    where κ_i is the i-th cumulant    (3.4.4)

Equation (3.4.4) implies that:

    ∂K/∂t = θ κ_1'(t) + (θ²/2!) κ_2'(t) + (θ³/3!) κ_3'(t) + ...    (3.4.5)

Furthermore, we have:

    ∂K/∂θ = κ_1(t) + θ κ_2(t) + (θ²/2!) κ_3(t) + ...    (3.4.6)
    ∂²K/∂θ² = κ_2(t) + θ κ_3(t) + ...    (3.4.7)

We also need to use the series expansion of e^θ:

    e^θ = 1 + θ + θ²/2! + θ³/3! + ...    (3.4.8)

Substituting equations (3.4.5)-(3.4.8) into equation (3.4.3) gives a power series in θ on each side (3.4.9). Expanding the products and collecting terms (3.4.10), then equating the coefficients of the various powers of θ, we obtain the following system of differential equations for the cumulants:

    κ_1'(t) = (a_1 - a_2) κ_1 - (b_1 + b_2) κ_1² - (b_1 + b_2) κ_2
    κ_2'(t) = (a_1 + a_2) κ_1 - (b_1 - b_2) κ_1² + [2(a_1 - a_2) - (b_1 - b_2)] κ_2
              - 4(b_1 + b_2) κ_1 κ_2 - 2(b_1 + b_2) κ_3
    κ_3'(t) = (a_1 - a_2) κ_1 - (b_1 + b_2) κ_1² + [3(a_1 + a_2) - (b_1 + b_2)] κ_2
              - 6(b_1 - b_2) κ_1 κ_2 - 6(b_1 + b_2) κ_2²
              + [3(a_1 - a_2) - 3(b_1 - b_2) - 6(b_1 + b_2) κ_1] κ_3 - 3(b_1 + b_2) κ_4
    etc.    (3.4.11)

It can be seen that the differential equation for the i-th cumulant has terms up to the (i+1)-th cumulant; this is due to the non-linear birth and death rates. The presence of these higher-order cumulants prevents us from solving the differential

equations in (3.4.11) directly. Matis et al. proposed using a cumulant truncation procedure. Here, one approximates the first i cumulants by setting all the cumulants of order i+1 or higher to zero. If we set all the cumulants of order 4 and above to zero, we can then solve the resulting three differential equations in (3.4.11) numerically to find values for κ_1(t), κ_2(t) and κ_3(t) at various values of t. The cumulant values obtained can then be used to create a saddle-point approximation of the probability distribution.

The saddle-point is a density function that takes as its parameters the values of N's cumulants up to a specified order and forces the values of the density function's cumulants to match those of N. It is this matching of the cumulants that makes the saddle-point an approximation of the true distribution of N: one would expect the approximation to be more accurate if more cumulants are being matched (however, in some cases, this is not true). The values for the first three cumulants (derived from equation (3.4.11)) can be substituted into the following saddle-point approximation derived by Renshaw (1998):

    p_n(t) ≈ (4π²ψ)^{-1/4} exp{ [3κ_2 ψ - κ_2³ - 2ψ^{3/2}] / (6κ_3²) },
    where ψ = κ_2² + 2κ_3(n - κ_1)    (3.4.12)

Matis et al. have stated that the investigations which they performed into the accuracy of the above approximation yielded results that were very encouraging.

3.5) Example

We again consider the transition rates introduced in the preceding chapter; they are of the form (3.4.1):

    B(N) = a_1 N - b_1 N²    for N below the threshold a_1/b_1, and 0 otherwise
    D(N) = a_2 N + b_2 N²    (3.5.1)

By solving the equation B(N) - D(N) = 0, we can see that the equilibrium population size is 7.5. Since this is a stable equilibrium state, more births tend to occur when N < 7.5 and more deaths tend to occur when N > 7.5. Three simulations of the continuous-time Markov model were performed using the above transition rates so as to get a feel for what a typical population trajectory might look like.
In order to execute the simulations, we needed to divide the timeline into 9

intervals of sufficiently small length Δt, so as to make the probability of more than one birth or death occurring within an interval negligible. For each interval, we thus know that the probability that a birth occurs is B(N)Δt, the probability that a death occurs is D(N)Δt, and the probability that nothing happens is 1 - [B(N) + D(N)]Δt. One can then simulate which of the three possible events occurs in each interval and hence replicate the population trajectory over the period of interest. In the simulations performed, the initial population size was set to N_0. The three simulated population trajectories are shown in Figure 3.1.

    Figure 3.1  Simulated runs of the population

Similarly, one can simulate various trajectories for the stochastic differential equation (SDE) approximate representation of the birth and death process. The SDE for the transitions given in (3.5.1) is:

    dN = [B(N) - D(N)] dt + sqrt(B(N) + D(N)) dω(t),    where ω(t) is a Wiener process    (3.5.2)

As with the earlier model, we assume that the initial population size is N_0, and we use small time increments. Thus, by simulating the values that the normal random variable dω(t) takes over each time increment, one can derive the population increments through time. By adding these increments to the initial population size, one can derive the population trajectory.
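The time-slicing scheme described above takes only a few lines to implement. A sketch, again assuming illustrative logistic-type rates rather than the chapter's example rates:

```python
import random

# Time-slicing simulation: in each step of length dt, a birth occurs with
# probability B(N) dt, a death with probability D(N) dt, otherwise nothing.
# Rates are illustrative assumptions; deterministic equilibrium n* ~ 26.7.
random.seed(4)

def B(n):
    return max(0.5 * n - 0.01 * n * n, 0.0)

def D(n):
    return 0.1 * n + 0.005 * n * n

def trajectory(N0, T, dt):
    n = N0
    for _ in range(int(T / dt)):
        u = random.random()
        if u < B(n) * dt:
            n += 1                                  # birth
        elif u < (B(n) + D(n)) * dt:
            n -= 1                                  # death
    return n

finals = [trajectory(10, 30.0, 0.01) for _ in range(300)]
mean = sum(finals) / len(finals)
assert all(n >= 0 for n in finals)                  # state stays non-negative
```

Note that dt must be small enough that [B(N) + D(N)] dt stays well below one over the whole range of N visited.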

Figure 3.2 shows three typical trajectories for (3.5.2).

    Figure 3.2  Simulated SDE runs of the population

In Figure 3.1, one can clearly see that the population size never moves by more than one unit in any instant (since dt is made sufficiently small to exclude the possibility of multiple births and deaths within any time increment). This serves to highlight that the birth-and-death process is a discrete-state process in continuous time. In Figure 3.2, however, the population size can change to any value within an instant. This is to be expected, as the SDE treats the population size as a continuous variable. Of course, if one were to repeat the simulations, one would, in all likelihood, obtain appreciably different population trajectories from the ones shown in Figures 3.1 and 3.2, since the population movements depend on the occurrence of random events.

Since the birth rates are higher than the death rates at the initial population size N_0, we would expect an upward trend in the population numbers initially. Such a trend is clearly evident at the outset of all three population simulations in both Figures 3.1 and 3.2. Once the equilibrium state is reached, one can see from the figures that the population then tends to vacillate around this point. This is to be expected, as this is a stable equilibrium state.

A million simulations were then run for the birth-and-death process, and these were compared with ten thousand simulations of the SDE. Both sets of simulations took roughly an hour to run in Microsoft Excel on a Pentium 4 machine. The SDE simulations are relatively more time-consuming, as random numbers from a Normal distribution must be generated to carry out these simulations. This takes a considerably longer period of time to complete than the Uniform random number generation required for the birth-and-death process, as the programming language used

to execute the simulations required one of Microsoft Excel's statistical functions to generate the Normal random numbers. For both types of simulation, the initial population size was set to N_0, and the resulting population size after each simulation run was recorded at a fixed observation time. These results were then grouped into a frequency distribution (values for the SDE were rounded to the nearest integer), and consequently the probability distribution at the observation time, for both the birth-and-death process and the stochastic differential equation, could be estimated as the number of times that a particular population value occurred divided by the number of simulations undertaken. The estimated probabilities for the two models are compared in Figure 3.3.

    Figure 3.3  Simulated probability distributions for the two processes

One can clearly see that the probability distribution for both processes is negatively skewed at the observation time. This is due, in part, to the population starting below the equilibrium size. The probability distribution derived using the SDE is a good approximation to the true distribution obtained by simulating the birth and death process directly. This is despite the fact that the population size is quite small, so that the normality assumption implicit in the SDE model is contentious.

The above transition rates are nonlinear, and so we need to use the cumulant truncation method in order to obtain approximate values for the cumulants. If we choose to truncate all cumulants of order four and higher, we obtain the system of three differential equations in (3.4.11) with κ_4 set to zero and with the coefficients determined by the rates in (3.5.1):

    κ_1'(t), κ_2'(t), κ_3'(t) as in (3.4.11), with κ_4 = 0    (3.5.3)

with boundary conditions κ_1(0) = N_0 and κ_2(0) = κ_3(0) = 0.
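The truncated cumulant system can be integrated with simple Euler steps. The sketch below uses the general cumulant equations of Section 3.4 with κ_4 = 0 and assumed illustrative parameters a_1 = 0.5, b_1 = 0.01, a_2 = 0.1, b_2 = 0.005 (not the chapter's example coefficients):

```python
# Euler integration of the first three cumulant equations with kappa_4 = 0.
# Parameter values are illustrative assumptions.
a1, b1, a2, b2 = 0.5, 0.01, 0.1, 0.005
s, d = b1 + b2, b1 - b2                  # shorthand: s = b1 + b2, d = b1 - b2
k1, k2, k3 = 10.0, 0.0, 0.0              # N0 = 10, so kappa_1(0) = 10
dt = 0.001
for _ in range(int(30.0 / dt)):          # integrate to t = 30
    dk1 = (a1 - a2) * k1 - s * k1 * k1 - s * k2
    dk2 = ((a1 + a2) * k1 - d * k1 * k1 + (2 * (a1 - a2) - d) * k2
           - 4 * s * k1 * k2 - 2 * s * k3)
    dk3 = ((a1 - a2) * k1 - s * k1 * k1 + (3 * (a1 + a2) - s) * k2
           - 6 * d * k1 * k2 - 6 * s * k2 * k2
           + (3 * (a1 - a2) - 3 * d - 6 * s * k1) * k3)   # kappa_4 term dropped
    k1, k2, k3 = k1 + dt * dk1, k2 + dt * dk2, k3 + dt * dk3

# kappa_1 settles near, but slightly below, the deterministic equilibrium
# n* = (a1 - a2) / (b1 + b2) ~ 26.7 for these parameters.
assert 15 < k1 < 27 and k2 > 0
```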

The solution for the first three cumulants at the observation time was obtained numerically, and the resulting values compare favourably with the first three sample cumulants observed across the one million simulations of the birth-and-death process. The estimates of the cumulants can then be substituted into the saddle-point approximation given in (3.4.12) so as to get an approximate probability mass function for N. Figure 3.4 compares the saddle-point probabilities with the simulated relative frequencies from the birth and death process.

    Figure 3.4  Comparison of the saddle-point and simulated probabilities

From the figure, one can see that the saddle-point approximation deviates substantially from the true birth and death probabilities. The saddle-point approximation does not seem to be valid for the three cumulant values obtained: the probability density function becomes a complex number for large n, because ψ (as defined in (3.4.12)) is negative over that range. This means that the saddle-point approximation, p_n(t), is not defined beyond that point. Matis et al. applied the saddle-point approximation successfully to various other transitional forms; however, they did not apply it to the transition rates considered here. Further research needs to be done to ascertain the reason for the failure of the saddle-point approximation for these transition rates.
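The failure mode is easy to exhibit: when κ_3 < 0, ψ = κ_2² + 2κ_3(n - κ_1) turns negative for large n and the density is undefined there. A sketch of the three-cumulant saddle-point density with assumed cumulant values (not the chapter's):

```python
import math

# Three-cumulant saddle-point density; returns None where psi < 0 and the
# expression would be complex.  Cumulant values below are assumptions.
def saddlepoint(n, k1, k2, k3):
    psi = k2 * k2 + 2.0 * k3 * (n - k1)
    if psi < 0:
        return None                       # approximation undefined here
    num = 3.0 * k2 * psi - k2 ** 3 - 2.0 * psi ** 1.5
    return (4.0 * math.pi ** 2 * psi) ** -0.25 * math.exp(num / (6.0 * k3 * k3))

k1, k2, k3 = 26.0, 15.6, -5.0             # assumed cumulants with kappa_3 < 0
# psi = 15.6^2 - 10 (n - 26) < 0 whenever n > 50.3, so p_n is undefined there
assert saddlepoint(60, k1, k2, k3) is None
# at n = kappa_1 the density reduces to the Normal peak 1 / sqrt(2 pi kappa_2)
assert abs(saddlepoint(26, k1, k2, k3) - (2 * math.pi * k2) ** -0.5) < 1e-9
```

As a consistency check, note that (4π²ψ)^{-1/4} with ψ = κ_2² equals (2πκ_2)^{-1/2}, the peak of a Normal density with variance κ_2.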

To see whether the saddle-point approximation worked better after a longer time interval, the population size at a later time was also studied, using a hundred thousand simulation runs. The observed values of the first three cumulants again left the saddle-point density complex over part of the range of n, and there were minimal changes in the values of the cumulants between the two times. This seems to suggest that the population is close to equilibrium by the first observation time (the concept of a population being in equilibrium is considered in more detail in Section 3.6).

A process X(t) is said to be ergodic if all its cumulants (e.g. μ = E[X(t)]) are equal to the matching time averages of the process (e.g. lim_{T→∞} (1/T) ∫_0^T X(t) dt). One of the conditions of ergodicity, which is satisfied by all the birth-and-death models considered in this research report, is that the population should forget its initial population size after a suitably long period of time (see Nisbet et al. (1982) and Section 3.7). Thus we expect the values of the cumulants at the observation time to be independent of the starting population size. Hence, for the transition rates considered in this example, the saddle-point approximation is inappropriate irrespective of the initial value of the population.

3.6) Quasi-Equilibrium Distribution

Nisbet et al. (1982) stated that a population with a true equilibrium distribution, p_N, satisfies:

    B(N) p_N = D(N+1) p_{N+1}    for N = 0, 1, ...    (3.6.1)

Intuitively, equation (3.6.1) signifies that a population at equilibrium has an equal probability of increasing from size N to size N+1 as it has of decreasing from size N+1 to size N. By repeatedly applying the above recurrence relationship, one can show that:

    p_N = p_0 [B(0) B(1) ... B(N-1)] / [D(1) D(2) ... D(N)],    N > 0    (3.6.2)

Since we are ignoring migration, B(0) is zero. This implies that p_N = 0 for N > 0, and since Σ_i p_i = 1, this means that p_0 = 1. This is to be expected, since extinction is an absorbing state when migration is ignored. However, this distribution is of limited interest. Consequently, the concept of the quasi-equilibrium distribution is considered instead. Quasi-equilibrium is defined as the equilibrium probability distribution that the population would ultimately be subject to if it were never to become extinct. We now look at two possible methods of deriving the quasi-equilibrium distribution.

A. The Modified Markov Process

Matis et al. modified the original birth and death process to create a new Markov process whose probability distribution does not degenerate at equilibrium to an extinction probability of one. The coefficient matrix for this new process, R_m, is based on the coefficient matrix R for the birth and death process (see equation (3.1.5)). By deleting the first row and column of R (thus excluding the state N = 0) and by assuming that D(1) = 0 (which removes the only transition to the state N = 0), one obtains the coefficient matrix for the new process, R_m. Let p_N^m(t) be the probability that this modified process is equal to N at time t, and let p^m(t) = (p_1^m(t), p_2^m(t), ...). We then have:

    p^m'(t) = p^m(t) R_m    (3.6.3)

At equilibrium, we would expect p^m'(t) = 0. So if we let Π = (π_1, π_2, ...) be the equilibrium distribution for the modified process (and the quasi-equilibrium distribution for the birth and death process), we then have:

    0 = Π R_m    (3.6.4)

By solving equation (3.6.4) for Π, we get the quasi-equilibrium distribution. However, the algebra may become tedious when the population is large. Note that R_m must be a singular matrix, otherwise Π = 0.

B. Locally Linear Approximations

In the preceding chapter, a locally linear approximation in the vicinity of the population's equilibrium state was used to derive an approximate population model.
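Method A is straightforward to carry out numerically: build R on a truncated state space, delete the N = 0 row and column, set D(1) = 0, and take the left null vector of R_m. A sketch with assumed illustrative rates:

```python
import numpy as np

# Quasi-equilibrium via the modified-process method: exclude N = 0, remove
# the transition D(1), and solve 0 = Pi R_m.  Rates are illustrative assumptions.
N_MAX = 80

def B(n):
    return max(0.5 * n - 0.01 * n * n, 0.0)

def D(n):
    return 0.1 * n + 0.005 * n * n

states = np.arange(1, N_MAX + 1)              # states 1, ..., N_MAX
Rm = np.zeros((N_MAX, N_MAX))
for idx, n in enumerate(states):
    if idx + 1 < N_MAX:
        Rm[idx, idx + 1] = B(n)
    if idx > 0:                               # D(1) = 0: no route back to N = 0
        Rm[idx, idx - 1] = D(n)
Rm -= np.diag(Rm.sum(axis=1))                 # generator rows sum to zero

# Pi R_m = 0: Pi is the left eigenvector of R_m for the zero eigenvalue.
vals, vecs = np.linalg.eig(Rm.T)
k = int(np.argmin(np.abs(vals)))
pi = np.abs(np.real(vecs[:, k]))
pi /= pi.sum()                                # normalise to a distribution

mean = float(states @ pi)
assert abs(vals[k]) < 1e-8                    # R_m is indeed singular
```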
An approximation around the population's deterministic equilibrium state, N*, can also be used to derive an

approximate quasi-equilibrium distribution. By definition, the deterministic equilibrium state satisfies the following relationship:

    B(N*) = D(N*)    (3.6.5)

Nisbet et al. (1982) defined the three functions f(N), g(N) and n(t):

    f(N) = B(N) - D(N)
    g(N) = B(N) + D(N)
    n(t) = N(t) - N*    (3.6.6)

By performing a Taylor expansion of f(N) around N* and retaining only the leading term in the expansion, we have:

    f(N* + n) ≈ λn,    where λ = dB/dN - dD/dN evaluated at N*    (3.6.7)

A similar procedure on g(N) gives:

    g(N* + n) ≈ Q,    where Q = B(N*) + D(N*)    (3.6.8)

Equation (3.6.7) approximates f(N) to the first order, whilst equation (3.6.8) approximates g(N) to the zeroth order. These expressions can then be substituted into the continuous approximation given by equation (3.3.1) to obtain:

    ∂p(n, t)/∂t = -λ ∂/∂n [n p(n, t)] + (Q/2) ∂²p(n, t)/∂n²    (3.6.9)

By setting ∂p(n, t)/∂t = 0, one can solve the resulting differential equation to derive an approximate expression for the quasi-equilibrium distribution:

    p(n) ∝ exp(λn²/Q)    (3.6.10)

Note that λ < 0 for a stable equilibrium state (see the stability condition in the preceding chapter), and thus the variance -Q/(2λ) is positive. The function is clearly Gaussian in form. Nisbet et al. stated that, with this locally linear approximation, the population size has the following approximate Normal distribution:

    N ~ Normal(N*, -Q/(2λ))    (3.6.11)

3.6.A) Example (continued)

For the transition rates considered in Section 3.5, we have N* = 7.5. Furthermore, we know that:

    λ = -0.8
    Q = B(7.5) + D(7.5) = 35

Thus, we have that N is approximately Normal(7.5, -Q/(2λ)).
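The quantities N*, λ and Q are easy to evaluate numerically for any given rate pair. A sketch with assumed illustrative rates, for which N* = 80/3, λ = -0.4 and Q = 112/9:

```python
# Locally linear approximation: lambda = B'(n*) - D'(n*), Q = B(n*) + D(n*),
# N approximately Normal(n*, -Q / (2 lambda)).  Rates are illustrative
# assumptions; derivatives are taken by central finite differences.
def B(n):
    return max(0.5 * n - 0.01 * n * n, 0.0)

def D(n):
    return 0.1 * n + 0.005 * n * n

# deterministic equilibrium B(n*) = D(n*): bisection on (1, 100)
lo, hi = 1.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if B(mid) - D(mid) > 0:
        lo = mid
    else:
        hi = mid
n_star = 0.5 * (lo + hi)

h = 1e-6
lam = ((B(n_star + h) - B(n_star - h)) - (D(n_star + h) - D(n_star - h))) / (2 * h)
Q = B(n_star) + D(n_star)
variance = -Q / (2 * lam)

assert lam < 0          # stable equilibrium implies lambda < 0
assert variance > 0     # so the approximate Normal variance is positive
```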

The corresponding modified coefficient matrix is obtained using the transition rates in Section 3.5. By solving equation (3.6.4), one can calculate the quasi-equilibrium distribution. The quasi-equilibrium distribution and its approximation are independent of the initial population size, N(0). The diagram below shows the quasi-equilibrium distribution; the probability distribution at a fixed time t (obtained by simulation); and the locally linear Normal approximation:

[Figure: probability plotted against population size for the quasi-equilibrium distribution, the simulated probability distribution, and the locally linear Normal approximation.]

Figure 3.5 The population at equilibrium

From the diagram, one can see that the locally linear approximation provides a relatively good fit to both the population's probability distribution at that time and the quasi-equilibrium distribution. Thus, after a long enough time interval, the population assumes an approximately normal distribution.
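The solution of equation (3.6.4) is easy to carry out numerically. A minimal sketch, again with hypothetical rates rather than those of Section 3.5: the function builds R^m on the states 1..K (with D(1) forced to zero) and extracts Π as the left null vector of R^m.

```python
import numpy as np

def quasi_equilibrium(B, D, K):
    """Quasi-equilibrium distribution via the modified Markov process:
    build the coefficient matrix R^m on states 1..K with D(1) set to 0,
    then solve Pi R^m = 0 with the entries of Pi summing to one."""
    b = np.array([B(i) for i in range(1, K + 1)], dtype=float)
    d = np.array([D(i) for i in range(1, K + 1)], dtype=float)
    d[0] = 0.0       # remove the only transition into the extinct state N = 0
    b[-1] = 0.0      # ceiling: no births out of the largest state K
    Rm = np.zeros((K, K))
    for i in range(K):
        if i + 1 < K:
            Rm[i, i + 1] = b[i]          # birth:  N -> N + 1
        if i > 0:
            Rm[i, i - 1] = d[i]          # death:  N -> N - 1
        Rm[i, i] = -(b[i] + d[i])
    # Pi R^m = 0 means Pi is a left null vector of R^m,
    # i.e. a null vector of (R^m) transposed.
    w, v = np.linalg.eig(Rm.T)
    pi = np.real(v[:, np.argmin(np.abs(w))])
    return pi / pi.sum()

# Hypothetical illustrative rates: B(N) = 0.5 N (1 - N/20), D(N) = 0.1 N.
pi = quasi_equilibrium(lambda N: max(0.0, 0.5 * N * (1 - N / 20.0)),
                       lambda N: 0.1 * N, 20)
```

For these rates the resulting Π is centred near the deterministic equilibrium N* = 16, in line with the Normal approximation of Section 3.6.B.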

3.7) Gross Fluctuation Characteristics of a Population

The population's probability distribution is usually a means to an end rather than the end itself, since it is almost impossible to estimate for any natural population. This is due to both the unreliability of most ecological population data and the difficulty involved in setting up replicate populations so that the various probabilities may be estimated. However, gross fluctuation characteristics of a population, such as the mean, the variance and the autocovariance function, can usually be observed over time for a population. Thus such characteristics prove to be invaluable in calibrating any population model.

The gross fluctuation characteristics should describe the properties of a population at equilibrium. However, since extinction is an absorbing state, the equilibrium state of a population is extinction. The characteristics of such a state are not very interesting, and so we would rather base the gross fluctuation characteristics on a population in quasi-equilibrium. This section is based on the work of Nisbet et al. (1982).

As before, we let p_N be the probability that a population in quasi-equilibrium has size N. We then have:

    µ = Σ_N N p_N                                              (3.7.1)

    σ² = Σ_N (N − µ)² p_N                                      (3.7.2)

Unfortunately, the above equations cannot be used to calculate the mean and the variance, as the quasi-equilibrium distribution of a population is seldom estimable. A good way to relate the gross fluctuation characteristics to a measurable quantity is to equate the above statistical expectations to the corresponding time averages of a single population. That is, we assume the population is ergodic (see Section 3.5 for a definition of ergodicity). In order for such a procedure to be valid, the following conditions for ergodicity must hold:

i. After a suitably long period of time, the population should forget its initial value.
ii. A population starting from a particular value should, in principle, be able to reach any other value.

Nisbet et al. stated that the above conditions are satisfied by all birth and death models. The time averages of a single population (which we denote by the subscript "tim") are defined as:

    µ_tim = lim_{T→∞} (1/T) ∫₀^T N(t) dt                       (3.7.3)

    σ²_tim = lim_{T→∞} (1/T) ∫₀^T (N(t) − µ_tim)² dt           (3.7.4)
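Under the ergodic assumption, the time averages (3.7.3) and (3.7.4) can be estimated from a single simulated sample path. A minimal sketch, again with hypothetical rates, generating the path by the Gillespie algorithm and weighting each state by its sojourn time:

```python
import random

def time_averages(B, D, N0, T, seed=1):
    """Time-averaged mean and variance of one sample path, as in equations
    (3.7.3)-(3.7.4): each visited state is weighted by its sojourn time.
    The path itself is generated with the Gillespie algorithm."""
    rng = random.Random(seed)
    t, N, s1, s2 = 0.0, N0, 0.0, 0.0
    while t < T:
        rate = B(N) + D(N)
        if rate == 0.0:                  # absorbed (extinct): N stays put
            s1 += N * (T - t)
            s2 += N * N * (T - t)
            break
        dt = min(rng.expovariate(rate), T - t)
        s1 += N * dt
        s2 += N * N * dt
        t += dt
        if t < T:
            N += 1 if rng.random() * rate < B(N) else -1
    mu = s1 / T
    return mu, s2 / T - mu * mu

# Hypothetical rates with N* = 16, lambda = -0.4 and Q = 3.2:
mu_tim, var_tim = time_averages(lambda N: max(0.0, 0.5 * N * (1 - N / 20.0)),
                                lambda N: 0.1 * N, 16, 5000.0)
```

For these rates the locally linear theory of Section 3.6 predicts µ_tim ≈ N* = 16 and σ²_tim ≈ −Q/2λ = 4, which the time averages reproduce to within sampling error.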

The time averages µ_tim and σ²_tim should equal the mean and the variance of the quasi-equilibrium distribution, as the population should spend most of its time in the quasi-equilibrium state. Thus, from the Normal approximation of Section 3.6, we get:

    µ_tim = N* ;   σ²_tim = −Q/2λ                              (3.7.5)

One can also define the autocovariance function C(τ) using time averages:

    C(τ) = lim_{T→∞} (1/T) ∫₀^T (N(t) − µ_tim)(N(t + τ) − µ_tim) dt    (3.7.6)

The autocovariance function gives one an indication of the time it takes for a population to forget its initial value.

An alternative method of deriving the gross fluctuation characteristics is to use the SDE formulation. The stochastic differential equation used to model the population (see equation (3.3.5)) was:

    dN = [B(N) − D(N)] dt + √(B(N) + D(N)) dW(t)               (3.7.7)

Retaining the usage of the functions f(N) and g(N), as defined in Section 3.6, we have:

    dN = f(N) dt + √(g(N)) dW(t)                               (3.7.8)

If, consequently, one were to approximate the functions f(N) and g(N) around N* by the equations (3.6.7) and (3.6.8), we would obtain the following linear SDE:

    dn/dt = λn + √Q γ(t) ,  where n = N − N* and γ(t) = dW(t)/dt is white noise    (3.7.9)

One can easily derive the gross fluctuation characteristics for a linear SDE using Fourier methods. Consequently, a brief description of Fourier analysis is given below. (It is also advisable to consult Appendix C, as it gives proofs of some key Fourier theorems.) For any function x(t), its Fourier transform x̃(ω) is defined over the interval [−T/2, T/2] as:

    x̃(ω) = ∫_{−T/2}^{T/2} x(t) e^{−iωt} dt                     (3.7.10)

For a linear process x(t), we have:

    (dx/dt)~(ω) = iω x̃(ω)                                      (3.7.11)

It is this result in particular which makes a linear SDE amenable to Fourier analysis.
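The linear SDE (3.7.9) is also easy to simulate directly, which provides an independent check on the gross fluctuation characteristics obtained by Fourier methods. A minimal Euler–Maruyama sketch, with hypothetical values of λ and Q:

```python
import math, random

def simulate_linear_sde(lam, Q, dt, steps, seed=7):
    """Euler-Maruyama discretisation of the linear SDE (3.7.9),
    dn = lam * n dt + sqrt(Q) dW.  Returns the sampled path of n."""
    rng = random.Random(seed)
    n, path = 0.0, []
    for _ in range(steps):
        n += lam * n * dt + math.sqrt(Q * dt) * rng.gauss(0.0, 1.0)
        path.append(n)
    return path

# lam = -0.4 and Q = 3.2 are hypothetical values; the theory predicts a
# stationary mean of 0 and a stationary variance of -Q/(2*lam) = 4.
path = simulate_linear_sde(-0.4, 3.2, 0.01, 400_000)
burn = path[50_000:]                     # discard the initial transient
var = sum(x * x for x in burn) / len(burn)
```

The empirical variance of the path settles, up to sampling error, at −Q/2λ, as derived below from the spectral density.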

The spectral density S_x(ω) of the function x(t) is defined as:

    S_x(ω) = lim_{T→∞} |x̃(ω)|² / T                             (3.7.12)

If the population's equilibrium state is stable, the transient initial-condition-dependent term will decay to zero and the persisting term becomes dominant. Since equation (3.7.9) is linear, we know by Fourier transforming equation (3.7.9) (and applying equation (3.7.11)) that:

    iω ñ(ω) = λ ñ(ω) + √Q γ̃(ω)                                 (3.7.13)

Rearranging the terms, we thus have:

    ñ(ω) = √Q γ̃(ω) / (iω − λ)                                  (3.7.14)

The spectral density of the population is, upon substituting equation (3.7.14) into equation (3.7.12), given by:

    S_n(ω) = Q S_γ(ω) / (λ² + ω²)                               (3.7.15)

The above relationships are useful, as we know that white noise has the following properties:

    E[γ̃(ω)] = 0 ;   E[|γ̃(ω)|²] = T ;   S_γ(ω) = 1              (3.7.16)

By applying some results proved in Appendix C to the population, we obtain:

    ⟨n(t)⟩ = ñ(0) / T                                           (3.7.17)

    ⟨n²(t)⟩ = (1/2π) ∫_{−∞}^{∞} S_n(ω) dω                       (3.7.18)

By substituting equation (3.7.14) into (3.7.17) and taking expectations, we have:

    E[⟨n(t)⟩] = (√Q / (−λ)) E[γ̃(0)] / T = 0 ,  so that  E[N(t)] = N*    (3.7.19)

In addition, by substituting (3.7.15) into (3.7.18) and applying the spectral property of white noise (given in (3.7.16)), we have:

    σ²_tim = σ² = E[⟨n²(t)⟩] = (1/2π) ∫_{−∞}^{∞} Q/(λ² + ω²) dω = −Q/2λ    (3.7.20)

The time averages given above agree with the gross fluctuation characteristics derived via the continuous approximation (see equation (3.7.5)). Unlike the continuous approximation, however, the stochastic differential equation formulation allows us to easily derive the autocovariance function C(τ) for the population. It can be proved (see Appendix C) that:

    C(τ) = (1/2π) ∫_{−∞}^{∞} S_n(ω) cos(ωτ) dω                 (3.7.21)

Substituting equation (3.7.15) into (3.7.21), we have:

    C(τ) = (−Q/2λ) e^{λτ}   (for τ ≥ 0)                         (3.7.22)

Thus, the autocorrelation function ρ(τ) is:

    ρ(τ) = e^{λτ}                                               (3.7.23)

The functional form of (3.7.23) implies that the population sizes at two distinct points in time can never be negatively correlated.

3.7.A) Example (continued)

Using equation (3.7.23), we find that the autocorrelation function for the example in Section 3.5 (where λ = −0.8) is ρ(τ) = e^{−0.8τ}.

[Figure: the autocorrelation function, plotting correlation against time lag.]

Figure 3.6 Correlation between two points a distance τ apart

One can see that the population size in the near future is closely correlated to the population size now. This is to be expected, as the time interval is too small to allow for any more than a few births and deaths to occur. One can also see that population sizes at times more than a few units apart are virtually uncorrelated. Thus we would expect the initial population size to have virtually no impact on the population size after a long enough time.

3.8) Extinction

The extinction of a species is of particular interest in all population studies. Society is rarely indifferent to the prospect of extinction of a species (whether it be the rhino or smallpox!). Extinction is an absorbing state. The finality of the extinction state is, in large part, the justification for any interest in this state.

The probability of a population being extinct at time t is p_0(t). Since extinction is an absorbing state, p_0(t) is always an increasing function of time. Let T_N denote the time to extinction when the current population size is N. Also let f_N(t) be the density function and F_N(t) the cumulative distribution function of T_N. We thus have:

    F_N(t) = Pr(T_N ≤ t) = Pr(N(t) = 0 | N(0) = N) = p_0(t)     (3.8.1)

By taking the derivatives of both sides, we have:

    f_N(t) = dp_0(t)/dt ,  with N(0) = N                        (3.8.2)

Let E_N denote the mean of T_N. Then:

    E_N = ∫₀^∞ t f_N(t) dt = ∫₀^∞ t (dp_0(t)/dt) dt             (3.8.3)

So for the deaths-only process considered earlier (where D(N) = bN), using the functional form for p_0(t) derived for that process, we have:

    E_N = ∫₀^∞ t N b e^{−bt} (1 − e^{−bt})^{N−1} dt = (1/b) Σ_{i=1}^{N} (1/i)    (3.8.4)

For the example in Section 3.5, one cannot obtain an analytical expression for p_0(t), and hence we will only be able to derive the mean time to extinction numerically using the above method. Alternatively, one could try to derive an approximate analytical expression. An alternative expression for (3.8.3) is:

    E_N = ∫₀^∞ S(t) dt = ∫₀^∞ (1 − p_0(t)) dt ,  where S(t) = P[T_N > t]    (3.8.5)

Nisbet et al. (1982) showed that when a population is close to its quasi-equilibrium state, then:

    p_N(t) ≈ π_N (1 − p_0(t)) ,  where π_N is the quasi-equilibrium probability    (3.8.6)

The Kolmogorov equation when N = 0 is:

    dp_0(t)/dt = D(1) p_1(t)                                    (3.8.7)

Substituting equation (3.8.6) into (3.8.7), we have:

    dp_0(t)/dt = D(1) π_1 (1 − p_0(t))                          (3.8.8)

The solution to equation (3.8.8) is:

    p_0(t) = 1 − exp(−D(1) π_1 t)                               (3.8.9)
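The closed form (3.8.4) for the deaths-only process can be checked by direct simulation; from size n, the next death occurs after an Exponential(bn) waiting time. The values of b and N below are arbitrary illustrative choices:

```python
import random

def mean_extinction_time_mc(b, N, runs=20000, seed=0):
    """Monte-Carlo estimate of E_N for the deaths-only process D(N) = bN:
    simulate the successive exponential waiting times until size 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, n = 0.0, N
        while n > 0:
            t += rng.expovariate(b * n)
            n -= 1
        total += t
    return total / runs

# Hypothetical parameters b = 0.5, N = 5; equation (3.8.4) gives
# E_5 = (1/0.5) * (1 + 1/2 + 1/3 + 1/4 + 1/5).
exact = 2.0 * sum(1.0 / i for i in range(1, 6))
estimate = mean_extinction_time_mc(0.5, 5)
```

With 20000 replicates the Monte-Carlo estimate agrees with the harmonic-sum formula to within a few hundredths of a time unit.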

By substituting equation (3.8.9) into equation (3.8.5), one has:

    E_N = ∫₀^∞ exp(−D(1) π_1 t) dt = 1 / (D(1) π_1)             (3.8.10)

This expression is independent of the initial population size, N(0). This is because we are assuming that the population has reached the quasi-equilibrium state, which is independent of the initial population size. For the example in Section 3.5, substituting the value of D(1) and the value of π_1 (calculated in Section 3.6.A by solving equation (3.6.4)) into equation (3.8.10) gives the model's estimate of the mean time to extinction, and that estimate is the same for any initial population size.

One can obtain an exact result for the example in Section 3.5 using the fact that we are modelling the population as a Markov process. Let R₊ be a modified coefficient matrix of R, obtained by deleting the first row and the first column of R. (R₊ is not quite the same as the matrix R^m used in equation (3.6.3), as we do not set D(1) = 0. As such, R₊ is the coefficient matrix of the Kolmogorov equations amongst the transient states.) Let M = (m_ij) be the matrix of so-called mean residence times. The element m_ij is defined as the expected value of the total elapsed time that a population, which starts at size N(0) = i, will be of size j prior to the population becoming extinct. Matis et al. stated that:

    M = −R₊⁻¹                                                   (3.8.11)

The mean time to extinction given that N(0) = N, denoted by E_N, is:

    E_N = Σ_j m_{N,j}                                           (3.8.12)

For the example in Section 3.5, R is a finite matrix, as the population size cannot increase above its ceiling. The matrix R₊ is thus invertible, and consequently one can derive the expected time to extinction using equation (3.8.12) for any initial population size. The matrix R₊ is the same as the matrix shown in Section 3.6.A, except that its top-left entry r_{1,1} differs, since D(1) is not set to zero in R₊. By applying equations (3.8.11) and (3.8.12) in turn, we thus find the mean time to extinction for each initial population size.

The mean time to extinction increases as the population size increases. This is fairly intuitive, as the population has to suffer the loss of an additional member of the population in order to become extinct. However, the mean time to extinction E_N is effectively the same for initial population sizes of four and above. It thus seems that the fact that the approximate expression for E (equation (3.8.10)) is independent of the initial population size is not all that unreasonable. Indeed, the estimated time to extinction using equation (3.8.10) is not far from any of the true mean times to extinction.

Unfortunately, equations (3.8.11) and (3.8.12) cannot be used when the population sizes are very large, as the resulting matrices are also large and hence difficult to manipulate. This is when the approximate analytical expression derived by Nisbet et al. (1982) becomes especially useful.

The Birth-and-Death model is one of the most widely-used stochastic representations of a population. In addition to its flexibility, the Birth-and-Death model is also appealing due to the simplicity of its underlying principle: the population number can only change when a member of the population gives birth or dies. In the following chapter, we look at an alternative method of modelling demographic stochasticity.
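As a closing illustration, the exact matrix method via M = −R₊⁻¹ is straightforward to implement: the vector of mean extinction times is the vector of row sums of M, i.e. the solution of R₊E = −1. The rates below are again hypothetical placeholders; as a check, applying the function to a deaths-only process recovers the harmonic-sum result (3.8.4).

```python
import numpy as np

def mean_extinction_times(B, D, K):
    """Mean times to extinction E_1..E_K for a birth and death process with
    ceiling K, via M = -R_+^{-1}: solve R_+ E = -1 for the row sums of M."""
    b = np.array([B(i) for i in range(1, K + 1)], dtype=float)
    d = np.array([D(i) for i in range(1, K + 1)], dtype=float)
    b[-1] = 0.0                        # ceiling: no births above size K
    Rp = np.zeros((K, K))
    for i in range(K):
        if i + 1 < K:
            Rp[i, i + 1] = b[i]        # birth:  N -> N + 1
        if i > 0:
            Rp[i, i - 1] = d[i]        # death:  N -> N - 1
        Rp[i, i] = -(b[i] + d[i])
    return np.linalg.solve(Rp, -np.ones(K))

# Check against the deaths-only process with hypothetical b = 0.5:
# equation (3.8.4) gives E_N = (1/b) * (1 + 1/2 + ... + 1/N).
E = mean_extinction_times(lambda N: 0.0, lambda N: 0.5 * N, 5)
```

The computed E_N increase with N, exactly as observed for the example in Section 3.5, and match the closed form to machine precision.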


More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Chapter 1 Statistical Reasoning Why statistics? Section 1.1 Basics of Probability Theory

Chapter 1 Statistical Reasoning Why statistics? Section 1.1 Basics of Probability Theory Chapter 1 Statistical Reasoning Why statistics? Uncertainty of nature (weather, earth movement, etc. ) Uncertainty in observation/sampling/measurement Variability of human operation/error imperfection

More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

Brownian motion and the Central Limit Theorem

Brownian motion and the Central Limit Theorem Brownian motion and the Central Limit Theorem Amir Bar January 4, 3 Based on Shang-Keng Ma, Statistical Mechanics, sections.,.7 and the course s notes section 6. Introduction In this tutorial we shall

More information

Lecture 2: Univariate Time Series

Lecture 2: Univariate Time Series Lecture 2: Univariate Time Series Analysis: Conditional and Unconditional Densities, Stationarity, ARMA Processes Prof. Massimo Guidolin 20192 Financial Econometrics Spring/Winter 2017 Overview Motivation:

More information

stochnotes Page 1

stochnotes Page 1 stochnotes110308 Page 1 Kolmogorov forward and backward equations and Poisson process Monday, November 03, 2008 11:58 AM How can we apply the Kolmogorov equations to calculate various statistics of interest?

More information

Gillespie s Algorithm and its Approximations. Des Higham Department of Mathematics and Statistics University of Strathclyde

Gillespie s Algorithm and its Approximations. Des Higham Department of Mathematics and Statistics University of Strathclyde Gillespie s Algorithm and its Approximations Des Higham Department of Mathematics and Statistics University of Strathclyde djh@maths.strath.ac.uk The Three Lectures 1 Gillespie s algorithm and its relation

More information

Switching Regime Estimation

Switching Regime Estimation Switching Regime Estimation Series de Tiempo BIrkbeck March 2013 Martin Sola (FE) Markov Switching models 01/13 1 / 52 The economy (the time series) often behaves very different in periods such as booms

More information

LTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather

LTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather 1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

A&S 320: Mathematical Modeling in Biology

A&S 320: Mathematical Modeling in Biology A&S 320: Mathematical Modeling in Biology David Murrugarra Department of Mathematics, University of Kentucky http://www.ms.uky.edu/~dmu228/as320/ Spring 2016 David Murrugarra (University of Kentucky) A&S

More information

Recall the Basics of Hypothesis Testing

Recall the Basics of Hypothesis Testing Recall the Basics of Hypothesis Testing The level of significance α, (size of test) is defined as the probability of X falling in w (rejecting H 0 ) when H 0 is true: P(X w H 0 ) = α. H 0 TRUE H 1 TRUE

More information

The effect of emigration and immigration on the dynamics of a discrete-generation population

The effect of emigration and immigration on the dynamics of a discrete-generation population J. Biosci., Vol. 20. Number 3, June 1995, pp 397 407. Printed in India. The effect of emigration and immigration on the dynamics of a discrete-generation population G D RUXTON Biomathematics and Statistics

More information

Lecture 21: Spectral Learning for Graphical Models

Lecture 21: Spectral Learning for Graphical Models 10-708: Probabilistic Graphical Models 10-708, Spring 2016 Lecture 21: Spectral Learning for Graphical Models Lecturer: Eric P. Xing Scribes: Maruan Al-Shedivat, Wei-Cheng Chang, Frederick Liu 1 Motivation

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

CHAPTER 3 Further properties of splines and B-splines

CHAPTER 3 Further properties of splines and B-splines CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 7 Approximate

More information

Note the diverse scales of eddy motion and self-similar appearance at different lengthscales of the turbulence in this water jet. Only eddies of size

Note the diverse scales of eddy motion and self-similar appearance at different lengthscales of the turbulence in this water jet. Only eddies of size L Note the diverse scales of eddy motion and self-similar appearance at different lengthscales of the turbulence in this water jet. Only eddies of size 0.01L or smaller are subject to substantial viscous

More information

Table of Contents [ntc]

Table of Contents [ntc] Table of Contents [ntc] 1. Introduction: Contents and Maps Table of contents [ntc] Equilibrium thermodynamics overview [nln6] Thermal equilibrium and nonequilibrium [nln1] Levels of description in statistical

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

Session-Based Queueing Systems

Session-Based Queueing Systems Session-Based Queueing Systems Modelling, Simulation, and Approximation Jeroen Horters Supervisor VU: Sandjai Bhulai Executive Summary Companies often offer services that require multiple steps on the

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

1 Types of stochastic models

1 Types of stochastic models 1 Types of stochastic models Models so far discussed are all deterministic, meaning that, if the present state were perfectly known, it would be possible to predict exactly all future states. We have seen

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

HANDBOOK OF APPLICABLE MATHEMATICS

HANDBOOK OF APPLICABLE MATHEMATICS HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume II: Probability Emlyn Lloyd University oflancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester - New York - Brisbane

More information

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University Topic 4 Unit Roots Gerald P. Dwyer Clemson University February 2016 Outline 1 Unit Roots Introduction Trend and Difference Stationary Autocorrelations of Series That Have Deterministic or Stochastic Trends

More information

An Introduction to Stochastic Epidemic Models

An Introduction to Stochastic Epidemic Models An Introduction to Stochastic Epidemic Models Linda J. S. Allen Department of Mathematics and Statistics Texas Tech University Lubbock, Texas 79409-1042, U.S.A. linda.j.allen@ttu.edu 1 Introduction The

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

A Short Introduction to Diffusion Processes and Ito Calculus

A Short Introduction to Diffusion Processes and Ito Calculus A Short Introduction to Diffusion Processes and Ito Calculus Cédric Archambeau University College, London Center for Computational Statistics and Machine Learning c.archambeau@cs.ucl.ac.uk January 24,

More information

Department of Applied Mathematics Faculty of EEMCS. University of Twente. Memorandum No Birth-death processes with killing

Department of Applied Mathematics Faculty of EEMCS. University of Twente. Memorandum No Birth-death processes with killing Department of Applied Mathematics Faculty of EEMCS t University of Twente The Netherlands P.O. Box 27 75 AE Enschede The Netherlands Phone: +3-53-48934 Fax: +3-53-48934 Email: memo@math.utwente.nl www.math.utwente.nl/publications

More information

AARMS Homework Exercises

AARMS Homework Exercises 1 For the gamma distribution, AARMS Homework Exercises (a) Show that the mgf is M(t) = (1 βt) α for t < 1/β (b) Use the mgf to find the mean and variance of the gamma distribution 2 A well-known inequality

More information

ENGI 9420 Lecture Notes 1 - ODEs Page 1.01

ENGI 9420 Lecture Notes 1 - ODEs Page 1.01 ENGI 940 Lecture Notes - ODEs Page.0. Ordinary Differential Equations An equation involving a function of one independent variable and the derivative(s) of that function is an ordinary differential equation

More information

Stochastic Processes. A stochastic process is a function of two variables:

Stochastic Processes. A stochastic process is a function of two variables: Stochastic Processes Stochastic: from Greek stochastikos, proceeding by guesswork, literally, skillful in aiming. A stochastic process is simply a collection of random variables labelled by some parameter:

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Lycka till!

Lycka till! Avd. Matematisk statistik TENTAMEN I SF294 SANNOLIKHETSTEORI/EXAM IN SF294 PROBABILITY THE- ORY MONDAY THE 14 T H OF JANUARY 28 14. p.m. 19. p.m. Examinator : Timo Koski, tel. 79 71 34, e-post: timo@math.kth.se

More information

Chapter 3 - Temporal processes

Chapter 3 - Temporal processes STK4150 - Intro 1 Chapter 3 - Temporal processes Odd Kolbjørnsen and Geir Storvik January 23 2017 STK4150 - Intro 2 Temporal processes Data collected over time Past, present, future, change Temporal aspect

More information