Generalization in prediction of predator-prey populations modelled by perturbed ODEs


Journal of Mathematical Biology manuscript No. (will be inserted by the editor)

Sévérien Nkurunziza

Generalization in prediction of predator-prey populations modelled by perturbed ODEs

Received: date / Revised: date

Abstract In this paper, we are interested in the estimation and prediction problems for some cyclic predator-prey populations. Briefly, we generalize (and refine) the Froda and Nkurunziza (2007) method to the case where the parameters of an ordinary differential equations (ODE) system are not constant. In the quoted paper, the authors consider a solution to the classical Lotka-Volterra ODE system, but they view the actual population sizes as random perturbations of the solutions to this ODE system, where these perturbations follow correlated Ornstein-Uhlenbeck processes. In this paper, we establish a re-parametrization result which proves that a similar approach is useful when the parameters of the ODE are functions of time. Further, we show that our method generalizes and improves the Froda and Nkurunziza (2007) method.

This research has received the financial support of NSERC.

Sévérien Nkurunziza
Department of Mathematics and Statistics, University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4
Tel.: ext. / Fax: / E-mail: severien@uwindsor.ca

Keywords Lotka-Volterra ODE · Wiener process · interacting predator-prey species · Ornstein-Uhlenbeck process · adaptation factor

Mathematics Subject Classification (2000) 62M10 · 92D25 · 62P10

1 Introduction

Over the years, ecologists have observed an oscillatory behaviour in many animal populations, and an extensive literature has evolved on the subject; it would be impossible to summarize it here. Some references are Kendall et al. (1999), Ginzburg and Taneyhill (1994), Royama (1992), Renshaw (1991), Brillinger (1981) and Boyce (2000). Under some assumptions, the interaction between predator and prey species is described by the classical Lotka-Volterra ODE system (see Lotka, 1925, or Volterra, 1931)

u̇(t) = (α − βv(t)) u(t),   v̇(t) = (γu(t) − δ) v(t),   (u(0), v(0)) = (x₀, y₀),   (1)

where x₀ > 0, y₀ > 0 and (α > 0, β > 0, γ > 0, δ > 0). For the interpretation of the trajectory of the ODE (1), u(t) represents the size of the prey population and u̇(t) its derivative with respect to time, while v(t) represents the size of the predator population and v̇(t) its derivative with respect to time. The parameter δ is the death rate of the predator when the prey is absent, while α is the birth rate of the prey when the predator is absent. (In ecology, α is called the intrinsic growth rate.) The parameters γ and β express the interaction between the two populations.
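As a point of reference for what follows, the oscillatory solutions of (1) can be inspected numerically. The sketch below integrates (1) with SciPy; the parameter values, initial condition and time horizon are illustrative assumptions, not values taken from the data analysed later in the paper.

```python
# Minimal sketch: numerical integration of the classical Lotka-Volterra ODE (1).
# The parameters, initial value and horizon are hypothetical choices for illustration.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 0.5, 0.4, 1.2    # assumed positive parameters

def lotka_volterra(t, z):
    u, v = z
    return [(alpha - beta * v) * u, (gamma * u - delta) * v]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [2.0, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-9)
t = np.linspace(0.0, 50.0, 2000)
u, v = sol.sol(t)
# Both components oscillate around the equilibrium (delta/gamma, alpha/beta);
# time averages over whole periods approach that point.
print(u.mean(), delta / gamma, v.mean(), alpha / beta)
```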

Many ecologists agree that the model described in (1) is too simple to capture all the complexity present in nature, because it neglects many other factors that can affect the interaction, even when the prey is the main source of food for the predator. Moreover, it assumes exponential growth for each population in the absence of the other one, i.e. when β = 0. Also, in mathematical modelling, when the number of parameters increases the goodness-of-fit increases, but the model becomes too complex and inconvenient for practitioners. In order to take into account the oscillatory behaviour and the roughness observed in real data, Froda and Colavita (2005) introduced a stochastic model in continuous time in which the observed population sizes are viewed as the deterministic solution of the ordinary differential equations (ODE) system plus a second-order, independent and identically distributed (i.i.d.) error process. More recently, Froda and Nkurunziza (2007) suggested a more general stochastic model in continuous time in which the observed population sizes are viewed as the deterministic solution of the ODE (1) plus correlated Ornstein-Uhlenbeck processes. It can be verified that the model of Froda and Colavita (2005) is a limit case of that suggested by Froda and Nkurunziza (2007, Proposition 2.5). The common point of both quoted stochastic models is that the expected value of the observed population sizes is the solution of the Lotka-Volterra ODE system (1). However, the deterministic ODE model (1) neglects the fact that in practice the prey can adapt its hiding strategy in order to escape the predator's attacks, and this affects the death rate and the interaction parameters. In a similar way, over time, the predator will have to adapt its attack technique in order to capture the prey. Also, the prey can adapt itself to some ecological and/or environmental changes that affect the birth rate. In this paper, we incorporate this adaptation factor through a version of the model (1) where the parameters are positive and continuous functions of time t.

In particular, we consider the deterministic ordinary differential equations (ODE) system

ẋ(t) = κ(t)(α − βy(t)) x(t),   ẏ(t) = κ(t)(γx(t) − δ) y(t),   (x(0), y(0)) = (x₀, y₀),   (2)

where κ(t) is a strictly positive and continuous function of time t. Note that the ODE (1) is a particular case of the ODE (2) with κ(t) = 1 for all t ≥ 0. Further, let {(X(t), Y(t)), t ≥ 0} be the observed population sizes process. As mentioned above, in this paper we view the observed population sizes (X(t), Y(t)) as the deterministic solution (x(t), y(t)) of (2) plus an error which follows a pair of two correlated Ornstein-Uhlenbeck processes. From a statistical point of view, our final goal is to determine a predictor of the population sizes at time t based on the past observed values of the process {(X(s), Y(s)), 0 ≤ s < t}. In Section 3, we establish a re-parametrization result (Corollary 3.2 or Corollary 3.3) which shows that the solution (x(t), y(t)) of (2) stays on the same closed curve as (u(t), v(t)), the solution of the ODE (1). Our method is based on the fact that the ODE system (2) admits closed orbits, i.e., for any non-equilibrium initial value (x₀, y₀) there exists a constant η such that any solution (x(t), y(t)) stays on the closed curve

H(x(t), y(t)) = H(u(t), v(t)) = η   (3)

with

H(u(t), v(t)) = (γu(t) − δ log u(t)) + (βv(t) − α log v(t)),   (4)

for all t ≥ 0.
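To make the closed-orbit property concrete, the following sketch integrates the time-varying system (2) for one arbitrary positive κ(t) and checks numerically that the quantity H in (4) stays essentially constant along the trajectory. The parameter values, κ and initial condition are illustrative assumptions.

```python
# Minimal sketch: the time-varying system (2) and the conserved quantity H of (3)-(4).
# kappa, the parameters and the initial value are arbitrary illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 0.5, 0.4, 1.2

def kappa(t):
    return 0.8 + 0.3 * np.cos(t)            # any strictly positive continuous function

def ode2(t, z):
    x, y = z
    return [kappa(t) * (alpha - beta * y) * x, kappa(t) * (gamma * x - delta) * y]

def H(x, y):
    return (gamma * x - delta * np.log(x)) + (beta * y - alpha * np.log(y))

sol = solve_ivp(ode2, (0.0, 40.0), [2.0, 1.0],
                dense_output=True, rtol=1e-10, atol=1e-10)
t = np.linspace(0.0, 40.0, 500)
x, y = sol.sol(t)
print(np.ptp(H(x, y)))                       # near 0: the orbit stays on the curve H = eta
```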

Also, it should be noted that (ū, v̄) = (δ/γ, α/β) is an equilibrium point of the ODE system (1) and is a center (see Hirsch and Smale, 1974, Chapter 12). Moreover, any solution (u(t), v(t)) of the ODE system (1) satisfies (3). Furthermore, a solution of the ODE (1) that starts at a value (x₀, y₀) in a neighborhood of (ū, v̄) is an ω-periodic function. Moreover, we have

(ū, v̄) = (1/ω) ( ∫₀^ω u(t) dt, ∫₀^ω v(t) dt ) = (δ/γ, α/β),

and, under some conditions given in Section 3, (x(t), y(t)) is a ϖ_d-periodic function and

(x̄, ȳ) = (1/K(ϖ_d)) ( ∫₀^{ϖ_d} κ(t)x(t) dt, ∫₀^{ϖ_d} κ(t)y(t) dt ) = (δ/γ, α/β),   (5)

where K(t) = ∫₀^t κ(s) ds, for t ≥ 0. That means that (x̄, ȳ) are the weighted average population sizes over one period. The ratio x̄ = δ/γ is the amount of prey needed to sustain the predator population, and below ȳ = α/β the prey starts to increase. An important feature of this model is that it assumes that the prey is the main source of food for the predator. For a detailed qualitative study of the classical Lotka-Volterra ODE system we refer to Hirsch and Smale (1974, Chapter 12). Further, we refine the model (2) by using the stochastic model introduced in Froda and Nkurunziza (2007). In short, the observed population sizes are viewed as random perturbations of the solutions to the deterministic ODE system (2). Namely, let {X(t), t ≥ 0} be the size of the prey population and {Y(t), t ≥ 0} be the size of the predator population, and assume

log X(t) = log x(t) + e^X_t,   log Y(t) = log y(t) + e^Y_t,   (6)

where (x(t), y(t)) are solutions to (2) and e^X_t, e^Y_t, t ≥ 0, are two correlated Ornstein-Uhlenbeck processes, as described in Section 2. For more detail about this stochastic model, the reader is referred to Froda and Nkurunziza (2007). Our first goal consists in using the model (6) in order to derive estimates of the four parameters γ, β, δ, α. Furthermore, our second goal consists in using the model (6) in order to predict future values of the process (X(t), Y(t)), t ≥ 0. It should be noted that, for the case where κ(t) is constant, this problem has been solved by Froda and Nkurunziza (2007). Thus, the main contribution of this paper consists in generalizing the method described in Froda and Nkurunziza (2007) to the case where κ(t) is any positive and continuous function. Also, in Section 5, we prove that our method improves the Froda and Nkurunziza (2007) estimate. Further, by incorporating the adaptation factor of the predator-prey species, one of our interests is to consider a more realistic model that accounts for different and important sources of variability. Indeed, let x(t), y(t) denote the deterministic population sizes at time t (the solution of the ODE (2)). Assume that we sample N observations from (6), X(t_i), Y(t_i), i = 1, ..., N, but γ, β, δ, α are unknown. We first estimate the parameters γ, β, δ, α and then estimate the deterministic population sizes x(t), y(t) at time t. As a final goal, we predict values X(t*), Y(t*) at times t* > t_i, i = 1, ..., N. It should be noted that the main steps are essentially the same as those presented in Froda and Nkurunziza (2007). In fact, we establish (in Section 3) a re-parametrization result that allows us to transform the estimation problem based on the ODE (2) into the estimation problem based on the ODE (1); the other steps then become the same as those given in Froda and Nkurunziza (2007). In summary, the main steps are:

(i) estimate the expectation (log x(t), log y(t)) of (log X(t), log Y(t)) by (log x̂_t, log ŷ_t);
(ii) estimate the parameters of the noise process (e^X_t, e^Y_t), and let (ê^X_t, ê^Y_t) be the resulting estimates of this noise process at time t; and finally
(iii) predict X_t, Y_t at t > t_i, i = 1, ..., N, from

log X̂_t = log x̂_t + ê^X_t,   log Ŷ_t = log ŷ_t + ê^Y_t.   (7)

In Section 4, we give more details on these main steps. In particular, we concentrate on the steps which are slightly different from those given in Froda and Nkurunziza (2007). The remainder of the paper is organized as follows. Section 2 gives the statistical model and preliminary results. Section 3 gives some qualitative properties of the ODE and the main idea of our estimation and prediction method. In Section 4, we present the main steps of the general framework of estimation and prediction. Section 5 presents asymptotic results. Section 6 illustrates our method as applied to the Canadian mink-muskrat data as well as to the paramecium-didinium data sets. Finally, Section 7 is the conclusion, and technical proofs are given in the Appendix.

2 Statistical model and preliminary results

We assume that (X(t), Y(t)), the prey and predator population sizes observed at time t ∈ [0, ∞), satisfy the system (6), i.e.

log X(t) = log x(t) + e^X_t,   log Y(t) = log y(t) + e^Y_t,   (8)

where (x(t), y(t)) is a solution to (2). Also, we assume that each noise component of {(e^X_t, e^Y_t), t ≥ 0} is an Ornstein-Uhlenbeck process (see Kutoyants, 2004, p. 51, or

Steele, 2001, p. 140), with a particular dependence structure defined below. More precisely, we have

de^X_t = −c e^X_t dt + τ dW^X_t,   de^Y_t = −c e^Y_t dt + τ dW^Y_t,   (9)

where τ > 0, c > 0, and {W^X_t, t ≥ 0} as well as {W^Y_t, t ≥ 0} are Wiener processes (see Steele, 2001, p. 9) which satisfy the following Assumption (C₁).

Assumption (C₁) The Wiener processes {W^X_t, t ≥ 0} and {W^Y_t, t ≥ 0} are jointly Gaussian and such that, for all i, j = 1, 2, 3, ...,

Cov(W^X_{t_i}, W^Y_{t_j}) = ρ min(t_i, t_j),   where |ρ| < 1.

The existence of Wiener processes which satisfy Assumption (C₁) follows from standard stochastic calculus. The choice of such an error process with continuous paths is in agreement with the continuity in time of the solutions of the ODEs (1) and (2). Also, note that the error processes {e^X_t, t ≥ 0}, {e^Y_t, t ≥ 0} are ergodic Markov processes and, under Assumption (C₂) given below, they are stationary Gaussian. Further, the model (8) has the advantage that it captures the irregularities (i.e. roughness) and the periodic behaviour of the paths observed in practice. In contrast, for the stochastic differential equations corresponding to (1), extinction is possible (see, e.g., Gard and Kannan, 1976, Gard, 1988, or Zengjing and Kulperger, 2005) and hence cyclic behaviour is not present. In order to make the processes {e^X_t, t ≥ 0} and {e^Y_t, t ≥ 0} stationary, we impose some conditions on the initial value of the system (9). Here, we introduce the second assumption of our model.

Assumption (C₂) The random variables e^X_0 and e^Y_0 are independent and Gaussian with mean 0 and variance σ² = τ²/(2c). Moreover, (e^X_0, e^Y_0) is independent of {(W^X_t, W^Y_t), t ≥ 0}.

Our method requires the covariance structure of (log X(t), log Y(t)), t ≥ 0, the processes of interest. This is given in the following proposition, which is established, for example, in Froda and Nkurunziza (2007, Proposition 2.3).

Proposition 2.1. Let (e^X_t, e^Y_s) be a solution to the system (9), where (W^X_t, W^Y_t) satisfy Assumptions (C₁) and (C₂), and let, for simplicity, s ≤ t. Then
(i) E(e^X_t, e^Y_s) = (0, 0);
(ii) Var(e^X_t, e^Y_s) = σ² [[1, ρ(e^{−c(t−s)} − e^{−c(t+s)})], [ρ(e^{−c(t−s)} − e^{−c(t+s)}), 1]];
(iii) the vector (e^X_t, e^Y_s) is bivariate normal with mean 0 and covariance matrix defined in (ii).

For simplicity and clarity, we deal with realizations of these processes at discrete times t₁, ..., t_N and take t₁ = 1, ..., t_N = N. A similar result can be proved for observations at any sequence 0 < t₁ < ⋯ < t_N (see Nkurunziza, 2005, Corollary 1.6).

Corollary 2.2. Let φ = exp(−c) and let (e^X_t, e^Y_s) be a solution to the system (9), where (W^X_t, W^Y_t) satisfy Assumption (C₁). Then:
(i) The processes in discrete time (e^X_i, e^Y_j), i, j = 1, ..., n, ..., satisfy the equations

e^X_i = φ e^X_{i−1} + ε^X_i,   e^Y_i = φ e^Y_{i−1} + ε^Y_i,   (10)

where the vector (ε^X_i, ε^Y_i) follows a bivariate normal distribution with mean 0 and covariance matrix σ²(1 − φ²) [[1, ρ], [ρ, 1]].

Moreover, for i ≠ j, the random vector (ε^X_i, ε^Y_i) is independent of (ε^X_j, ε^Y_j).
(ii) In particular, under Assumptions (C₁) and (C₂), the vector (e^X_i, e^Y_i) is bivariate normal with mean 0 and covariance matrix σ² [[1, ρ(1 − φ^{2i})], [ρ(1 − φ^{2i}), 1]].

The proof of Corollary 2.2 follows from standard stochastic calculus; as a recent reference, we quote, for example, Nkurunziza (2005). Corollary 2.2 emphasizes that, in discrete time (and for evenly spaced times), our bivariate process is AR(1). Moreover, for the case where the observation times 0 < t₁ < ⋯ < t_N are unevenly spaced, Nkurunziza (2005, Corollary 1.6) proves a statement equivalent to Corollary 2.2. Indeed, the statistical model which is commonly used by ecologists for population cycles is the linear autoregressive (AR) model (see Kendall et al., 1999, Berryman, 1995, or Royama, 1992), with an order less than or equal to 2. In our case, the periodicity is captured by the solution of the ODE (1) and then, to simplify some computations, we can reduce the order by considering an AR(1) model.
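The AR(1) representation (10) also gives a convenient way to simulate the noise exactly at integer times. The sketch below draws a long bivariate path from (10) under Assumptions (C₁)-(C₂) and compares the empirical moments with their stationary values; the values of c, τ and ρ are illustrative assumptions.

```python
# Minimal sketch: exact simulation of the correlated noise (e^X_i, e^Y_i) at integer times
# through the AR(1) representation (10); c, tau and rho are hypothetical values.
import numpy as np

c, tau, rho, N = 0.7, 0.3, 0.5, 20_000
sigma2 = tau**2 / (2.0 * c)                       # stationary variance under (C2)
phi = np.exp(-c)
innov_cov = sigma2 * (1.0 - phi**2) * np.array([[1.0, rho], [rho, 1.0]])

rng = np.random.default_rng(0)
e = np.zeros((N + 1, 2))
e[0] = rng.normal(0.0, np.sqrt(sigma2), size=2)   # independent Gaussian start, as in (C2)
eps = rng.multivariate_normal([0.0, 0.0], innov_cov, size=N)
for i in range(1, N + 1):
    e[i] = phi * e[i - 1] + eps[i - 1]            # e_i = phi e_{i-1} + eps_i, cf. (10)

print(e[:, 0].var(), sigma2)                      # empirical variance close to sigma^2
print(np.corrcoef(e[:, 0], e[:, 1])[0, 1])        # close to rho for large i, cf. part (ii)
```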

Finally, as stated before, the function H(x, y) plays a central part in our estimation procedure; in particular, we need to evaluate its moments. As a consequence of Proposition 2.1 and Corollary 2.2, we obtain the following Corollary 2.3, which gives the conditional expectation and the conditional variance of H(X_{n+1}, Y_{n+1}) given the past. More precisely, let F_n denote the sigma field generated by {(X_j, Y_j), 0 ≤ j ≤ n} and all null sets (with respect to the Gaussian measure). Also, let (x(n), y(n)) = (x_n, y_n), (X(n), Y(n)) = (X_n, Y_n) and let

P_{n,X} = log(x_n) + φ (log(X_{n−1}) − log(x_{n−1})),   P_{n,Y} = log(y_n) + φ (log(Y_{n−1}) − log(y_{n−1})).

Further, let

g₁(σ, φ) = exp[2σ²(1 − φ²)] − exp[σ²(1 − φ²)],
g₂(σ, φ, ρ) = exp[σ²(1 − φ²)(1 + ρ)] − exp[σ²(1 − φ²)],
g₃(σ, φ) = σ²(1 − φ²) exp[σ²(1 − φ²)/2].

Corollary 2.3 (Froda and Nkurunziza, 2007). Assume (C₁) and (C₂) hold. Then, for all n = 0, 1, ...
(i) The conditional expectation is

E[H(X_{n+1}, Y_{n+1}) | F_n] = [γ e^{P_{n+1,X}} + β e^{P_{n+1,Y}}] e^{σ²(1−φ²)/2} − (δ P_{n+1,X} + α P_{n+1,Y}).   (11)

(ii) The conditional variance is

Var[H(X_{n+1}, Y_{n+1}) | F_n] = (α² + δ² + 2αδρ) σ²(1 − φ²)
  + [γ² exp(2P_{n+1,X}) + β² exp(2P_{n+1,Y})] g₁(σ, φ)
  + 2γβ exp(P_{n+1,X} + P_{n+1,Y}) g₂(σ, φ, ρ)
  − 2{(γδ + γαρ) exp(P_{n+1,X}) + (αβ + δβρ) exp(P_{n+1,Y})} g₃(σ, φ);   (12)

(iii) Consider, in particular, ρ = 0 and φ → 0. On one hand, ρ = 0 means that the components of the bivariate noise process {(e^X_t, e^Y_t), t ≥ 0} are independent Ornstein-Uhlenbeck processes. On the other hand, when φ → 0, the process {(e^X_t, e^Y_t), t ≥ 0} converges (in distribution) to an i.i.d. process. Moreover, in the limit we obtain the unconditional versions

E[H(X_{n+1}, Y_{n+1})] = e^{σ²/2} (γ x_{n+1} + β y_{n+1}) − δ log(x_{n+1}) − α log(y_{n+1})

and

Var[H(X_{n+1}, Y_{n+1})] = [γ² exp(2 log(x_{n+1})) + β² exp(2 log(y_{n+1}))] g₁(σ, 0)
  − 2{γδ exp(log(x_{n+1})) + αβ exp(log(y_{n+1}))} g₃(σ, 0) + (α² + δ²) σ².

Before closing this section, note that Corollary 2.3 is important in our estimation and prediction method; this is proved in Froda and Nkurunziza (2007, Proposition 2.5).

3 Estimation

3.1 Some qualitative properties of our ODE

Recall that our main goal consists in generalizing and improving the estimation procedure presented in Froda and Nkurunziza (2007). The fundamental result of our procedure is based on a re-parametrization of a certain class of ODE systems. Namely, the following lemma gives a relationship between the trajectories ψ₁(t), ψ₂(t), t ≥ 0, solutions of the following ODE systems:

ψ̇₁(t) = f(ψ₁(t), θ), t ≥ 0, with ψ₁(0) = x₀ fixed;   (13)
ψ̇₂(t) = κ(t) f(ψ₂(t), θ), t ≥ 0, with ψ₂(0) = x₀ fixed,   (14)

where f is a continuous function and κ is a positive and continuous function. Let

K(t) = ∫₀^t κ(s) ds for all t ≥ 0.   (15)

Lemma 3.1. Assume that f is a continuous function that satisfies the uniqueness conditions for the solution of the ODE system (13), and assume that (15) holds. Further,

let ψ₁(t) and ψ₂(t) be the solutions of the ODE systems (13) and (14), respectively. Then ψ₂(t) = ψ₁(K(t)) for all t ≥ 0.

Corollaries 3.2 and 3.3 follow from Lemma 3.1; for the proofs, we refer the reader to the Appendix.

Corollary 3.2. Assume that (u(t), v(t)) and (x(t), y(t)) are the solutions of the ODE systems (1) and (2), respectively. Also, let κ(t) be a function that satisfies the conditions of Lemma 3.1 and let K(t) be the function given by (15). Then, for all t ≥ 0,

x(t) = u(K(t)) and y(t) = v(K(t)).   (16)

Corollary 3.3. Assume that Corollary 3.2 holds. Then the trajectories (u(t), v(t)) and (x(t), y(t)), t ≥ 0, stay on the same closed curve, i.e. for all t ≥ 0,

H(x(t), y(t)) = H(u(t), v(t)) = η,   (17)

where H and η are given by (4) and (3).

Note that Corollary 3.2 gives a re-parametrization result which allows us to generalize and refine the estimation method suggested by Froda and Nkurunziza (2007). Also, note that Corollary 3.2 generalizes the first part of Lemma 1 given in Froda and Colavita (2005), which plays a central role in Froda and Nkurunziza (2007). The common point with these previous estimation methods is that we exploit the periodicity of the observed data.
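The content of Lemma 3.1 and Corollary 3.2 is easy to confirm numerically: solving (2) directly and evaluating the solution of (1) at the re-parametrized times K(t) should give the same trajectory. The sketch below does this comparison for one illustrative choice of parameters and of κ; all numerical values are assumptions.

```python
# Minimal sketch: numerical check of Lemma 3.1 / Corollary 3.2, i.e. x(t) = u(K(t)).
# The parameters, initial value and kappa are hypothetical illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp, quad

alpha, beta, gamma, delta = 1.0, 0.5, 0.4, 1.2

def kappa(t):
    return 0.8 + 0.3 * np.cos(t)                 # positive continuous time-change rate

def K(t):
    return quad(kappa, 0.0, t)[0]                # K(t) = int_0^t kappa(s) ds, cf. (15)

def f(t, z):                                     # right-hand side of (1)
    u, v = z
    return [(alpha - beta * v) * u, (gamma * u - delta) * v]

def g(t, z):                                     # right-hand side of (2)
    return [kappa(t) * r for r in f(t, z)]

z0, T = [2.0, 1.0], 20.0
sol1 = solve_ivp(f, (0.0, K(T) + 1.0), z0, dense_output=True, rtol=1e-10, atol=1e-10)
sol2 = solve_ivp(g, (0.0, T), z0, dense_output=True, rtol=1e-10, atol=1e-10)

grid = np.linspace(0.0, T, 200)
err = max(np.max(np.abs(sol2.sol(s) - sol1.sol(K(s)))) for s in grid)
print(err)                                       # numerically close to 0
```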

Under some conditions on the function K(t) defined in (15), there exists a relationship between the period of (x(t), y(t)) and ω, the period of (u(t), v(t)). The following assumption is a sufficient condition for such a relationship.

Assumption (C₃): The continuous and positive function κ(t) satisfies (15) and, for all t ≥ 0,

K(t + (ω − b)/λ) = K(t) + ω,   with λ > 0, 0 ≤ b < ω.

In order to illustrate the idea behind Assumption (C₃), let us give some examples of functions κ(t) which satisfy this assumption. We have κ(t) = λ > 0, or

κ(t) = λω/(ω − b) + a cos(2πλt/(ω − b)),   with 0 ≤ b < ω, |a| < λω/(ω − b).

Also, note that we illustrate our estimation method by taking

κ(t) = λ + (1 − λ) cos(2πλt/ω) for all t ∈ ℝ, with 0 < λ ≤ 2.

Note that for this function b = 0, and from Section 4 onward we consider this simple case where b = 0. Further, under Assumption (C₃), we establish in the Appendix the following proposition, which gives a period of (x(t), y(t)).

Proposition 3.4. Suppose that Assumption (C₃) holds. Further, assume that the conditions of Lemma 3.1 hold and suppose that ψ₁(t) is an ω-periodic function. Then ψ₂(t) is a ϖ_d-periodic function with

ϖ_d = (ω − b)/λ.   (18)

From Proposition 3.4, we establish Corollary 3.5.

Corollary 3.5. Let (x(t), y(t)) be the solution of the ODE system (2). Further, assume that the conditions of Lemma 3.1 and Assumption (C₃) hold. Then (x(t), y(t)) is a ϖ_d-periodic function, where ϖ_d is given in (18).

Proof. Since (u(t), v(t)) = ψ₁(t) is an ω-periodic function, the proof follows from Proposition 3.4 by taking ψ₂(t) = (x(t), y(t)).
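For the cosine choice of κ above (with b = 0), Assumption (C₃) can be checked directly: K(t + ω/λ) − K(t) = ω for every t, so that by Proposition 3.4 the observed period is ϖ_d = ω/λ. The sketch below verifies this numerically; λ and ω are illustrative values.

```python
# Minimal sketch: checking Assumption (C3) with b = 0 for
# kappa(t) = lambda + (1 - lambda) cos(2 pi lambda t / omega); lambda, omega are assumed.
import numpy as np
from scipy.integrate import quad

lam, omega = 0.765, 9.0

def kappa(t):
    return lam + (1.0 - lam) * np.cos(2.0 * np.pi * lam * t / omega)

def K(t):
    return quad(kappa, 0.0, t)[0]

for t in (0.0, 1.3, 4.7):
    print(K(t + omega / lam) - K(t))    # equals omega for every t, hence varpi_d = omega/lam
```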

It should be noted that Corollary 3.2 and Corollary 3.5 together generalize Lemma 1 of Froda and Colavita (2005). In fact, taking κ(t) = λ > 0 for all t ≥ 0, we get K(t) = λt and Assumption (C₃) is satisfied with b = 0.

Corollary 3.6. Let (x(t), y(t)) be the solution of the ODE system (2). Further, assume that the conditions of Lemma 3.1 and Assumption (C₃) hold. Then the center of the ODE (2) corresponds to the weighted average of (x(t), y(t)) over one period, i.e.

(x̄, ȳ) = (1/K(ϖ_d)) ( ∫₀^{ϖ_d} κ(t)x(t) dt, ∫₀^{ϖ_d} κ(t)y(t) dt ) = (δ/γ, α/β),   (19)

where ϖ_d is given in (18). In particular, if b = 0 and λ = 1, we have

(x̄, ȳ) = ( (1/ω) ∫₀^ω κ(t)x(t) dt, (1/ω) ∫₀^ω κ(t)y(t) dt ) = (δ/γ, α/β).

Again, the reader is referred to the Appendix for the proof. In the sequel, we consider the simple case where b = 0, which by Proposition 3.4 and Corollary 3.5 implies that ϖ_d = ω/λ. We note that ϖ_d corresponds to the observed period of the data. In this paper, we assume that λ and ϖ_d = ω/λ are known and/or estimated from previous studies. For the estimation of ϖ_d, the reader is referred to, for example, Whittle (1954), Spanjaard and White (1995), Bulmer (1974) and references therein. Moreover, the estimated value of λ can be obtained iteratively (taking λ = 1 as the initial value), so that the mean prediction error is minimized. Our approach for refining the method of Froda and Nkurunziza (2007) is based on Corollaries 3.2 and 3.5, and this approach produces the generalizations we seek.

Our work requires some qualitative properties discussed in Froda and Nkurunziza (2007) and in Hirsch and Smale (1974, Chapter 12). First, note that the ODE (2) can be rewritten as

ẋ(t) = −βκ(t) x(t) (y(t) − α/β),   ẏ(t) = γκ(t) y(t) (x(t) − δ/γ),   (20)

with (α > 0, β > 0, γ > 0, δ > 0) and κ(t) a continuous and strictly positive function. As mentioned above, the point (x̄, ȳ) = (δ/γ, α/β) is an equilibrium of the system and is a center (see Hirsch and Smale, 1974, Chapter 12). Furthermore, x̄ = δ/γ is the amount of prey needed to sustain the predator population, and below ȳ = α/β the prey starts to grow. In our estimation, a central part is played by the closed curve defined by (3), which is based on the function of two variables

H(x, y) = (γx − δ log x) + (βy − α log y),   x > 0, y > 0, γ, β, δ, α ∈ ℝ.

In particular, we use the fact that if all the parameters γ, β, δ, α are of the same sign, the value (x̄, ȳ) = (δ/γ, α/β) is an absolute minimum of H(x, y). Moreover, by continuity, in a neighborhood of (x̄, ȳ) the sign of H(x, y) is the same as the sign of H(x̄, ȳ), and we expect the constant η defined in (3) to have the same sign as H(x̄, ȳ); note that η can be either positive or negative, while γ, β, δ, α are all assumed positive. The following corollary suggests how one can estimate η based on the periods of the ODE systems (1) and (2).

Corollary 3.7. Let (u*(t), v*(t)) be the solution of the ODE (1) where γ, β, δ, α are replaced with

(γ*, β*, δ*, α*) = (γ/|η|, β/|η|, δ/|η|, α/|η|),

where η ≠ 0. Then (u*(t), v*(t)) is an ω*-periodic function and, under (C₃), we have

ω*/(λϖ_d + b) = |η|,   (21)

where ϖ_d is given in (18). In particular, if b = 0, we have

ω*/(λϖ_d) = |η|.   (22)

Proof. Let ω and ω* be the periods of (u(t), v(t)) and (u*(t), v*(t)), respectively. Then, by Lemma 3.1 and Corollary 3.3, ω* = ω/(1/|η|), and from (18), ϖ_d = (ω − b)/λ. Therefore ω* = |η|(λϖ_d + b), which completes the proof.

Corollary 3.7 plays a central role in our estimation method. Namely, taking η as the right hand side of the closed curve (3), we start by estimating (γ*, β*, δ*, α*), which corresponds to the normalized closed curve. Further, we estimate ω* and, taking ϖ_d as the observed period of the data, we deduce an estimate of |η|. Then, multiplying the estimates of (γ*, β*, δ*, α*) and |η|, we obtain an estimate of (γ, β, δ, α). In the sequel, we use the vector notations Λ = (γ, β, δ, α) and Λ* = (γ/|η|, β/|η|, δ/|η|, α/|η|). The first two parameters in Λ or Λ* are interaction parameters, while the last two are a death rate and a birth rate, respectively.
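The period-matching idea of Corollary 3.7 (with b = 0) translates directly into a computation: solve (1) with a candidate normalized parameter vector Λ*, read off its period ω*, and recover |η| = ω*/(λϖ_d). The sketch below does this with a crude peak-to-peak period estimate; all numerical values (Λ*, λ, ϖ_d, the initial value) are illustrative assumptions.

```python
# Minimal sketch of the estimation idea behind Corollary 3.7 and relation (22):
# |eta| = omega* / (lambda * varpi_d). All numerical inputs are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

gam_s, bet_s, delt_s, alph_s = 0.2, 0.25, 0.6, 0.5   # candidate Lambda* = (gamma*, beta*, delta*, alpha*)
lam, varpi_d = 0.8, 12.0                             # assumed lambda and observed period

def ode1_star(t, z):
    u, v = z
    return [(alph_s - bet_s * v) * u, (gam_s * u - delt_s) * v]

sol = solve_ivp(ode1_star, (0.0, 300.0), [4.0, 1.5],
                dense_output=True, rtol=1e-9, atol=1e-9)
t = np.linspace(0.0, 300.0, 30_000)
u = sol.sol(t)[0]

peaks = t[1:-1][(u[1:-1] > u[:-2]) & (u[1:-1] > u[2:])]  # crude local-maximum detection
omega_star = np.mean(np.diff(peaks))                     # estimated period of (u*, v*)
print(omega_star, omega_star / (lam * varpi_d))          # the ratio estimates |eta|
```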

3.2 Estimation and prediction: main idea

In this subsection, we present a general framework for the estimation and prediction based on the model (9). By using the re-parametrization introduced in Corollary 3.2, we transform the ODE (2) (where the parameters are deterministic functions of time) into the classical Lotka-Volterra ODE system (1), which is considered in Froda and Nkurunziza (2007). Because of that, we emphasize here the steps of the algorithm where we use Corollaries 3.2, 3.5 and 3.7. For more details, we refer the reader to Froda and Nkurunziza (2007, Section 3 and Section 4). Let (x(n), y(n)) = (x_n, y_n) and let (X(n), Y(n)) = (X_n, Y_n). By applying standard least squares theory to the statistical model (9) (see Section 2), one can derive the predictors of log(X(t_{i+n})) and log(Y(t_{i+n})), n = 1, 2, ..., i = 0, ..., N, from the past observations (X(t_i), Y(t_i)), i = 0, ..., N. Namely, by the least squares method, the best predictors of log(X(t_n)) and log(Y(t_n)) are given by

P_{t_n,X} = log(x_{t_n}) + φ (log(X_{t_{n−1}}) − log(x_{t_{n−1}})),   P_{t_n,Y} = log(y_{t_n}) + φ (log(Y_{t_{n−1}}) − log(y_{t_{n−1}})).   (23)

In particular, if for simplicity's sake we consider only equally spaced times (t_i = i), then we get

P_{n,X} = log(x_n) + φ (log(X_{n−1}) − log(x_{n−1})),   P_{n,Y} = log(y_n) + φ (log(Y_{n−1}) − log(y_{n−1})).   (24)

However, these predictors contain the unknown parameters γ, β, δ, α, x₀, y₀ and c, τ, ρ. Note that κ(t) is assumed to be known, since it depends only on t, λ and the period ϖ_d, and these last two parameters are supposed to be known here. In summary, we have two sets of parameters. On one hand, we have the deterministic functions (x(t), y(t)), t ∈ [0, ∞), which, in turn, depend on the initial value (x₀, y₀) and on the four parameters γ, β, δ, α. On the other hand, we have the parameters of the noise process, i.e., φ = exp(−c) and σ² = τ²/(2c) (or, equivalently, c and τ), and ρ.
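For equally spaced times, the one-step predictors (24) reduce to a single vectorized formula once the deterministic solution, the observations and φ are available. The sketch below implements exactly that map; the toy inputs at the bottom are arbitrary illustrative numbers.

```python
# Minimal sketch: the one-step predictors (24), P_{n,X} and P_{n,Y}, for n = 1, ..., N.
# The inputs (log-solution, log-observations, phi) are assumed to be available.
import numpy as np

def one_step_predictors(logX, logY, logx, logy, phi):
    """Arrays are indexed 0..N; returns the predictors for indices 1..N, cf. (24)."""
    P_X = logx[1:] + phi * (logX[:-1] - logx[:-1])
    P_Y = logy[1:] + phi * (logY[:-1] - logy[:-1])
    return P_X, P_Y

# toy usage with arbitrary numbers
rng = np.random.default_rng(1)
n = np.arange(11.0)
logx = np.log(2.0 + np.abs(np.sin(n)))
logy = np.log(1.0 + np.abs(np.cos(n)))
logX = logx + 0.1 * rng.standard_normal(n.size)
logY = logy + 0.1 * rng.standard_normal(n.size)
P_X, P_Y = one_step_predictors(logX, logY, logx, logy, phi=0.6)
print(P_X[:3], P_Y[:3])
```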

Once all these parameters are estimated, we replace the parameters in (24) (or in (23)) with their estimates and obtain the predictors P̂_{n,X} and P̂_{n,Y} if we deal with equally spaced times, or P̂_{t_n,X} and P̂_{t_n,Y} in the general case. It should be noted that relation (23) is similar to that given in Froda and Nkurunziza (2007, Proposition 2.5 and Section 3.2). The difference consists in the fact that here (x(t), y(t)) is the solution of the ODE (2), while in the quoted paper (x(t), y(t)) is the solution of the ODE (1). Below, we give the main steps of the algorithm of our estimation procedure. For a detailed description of this procedure, we refer the reader to Froda and Nkurunziza (2006, Section 4). Before presenting these steps, we note that, for simplicity, we assume only equally spaced times, i.e. we consider t_i = i.

4 Algorithm

4.1 Main steps of the algorithm

Step I-a) (estimation): We obtain preliminary estimates of the parameters γ, β, δ, α and of the solution (x(t), y(t)), t > 0. At this step we ignore the dependence structure given in (6), as the i.i.d. case is a limit case of the present model (see Froda and Nkurunziza, 2007, Proposition 2.5). Thus, this step is essentially done as in Froda and Colavita (2005), where the errors in (6) are taken i.i.d. and one solves an ordinary least squares (OLS) problem involving only H(x, y). In particular, from relation (22), here we refine the estimation of η and, accordingly, we refine the preliminary estimates of γ, β, δ, α and of the solution (x(t), y(t)), t > 0. Let γ̂^(0), β̂^(0), δ̂^(0), α̂^(0) be these preliminary estimates, and solve the ODE system (2) with these estimated coefficients to obtain the preliminary estimates

(x̂^(0)(n), ŷ^(0)(n)) at times n = 1, ..., N. It should be noted that in Froda and Nkurunziza (2007), at this step, instead of solving the ODE (2) one solves the ODE (1).

Step I-b) (prediction): By using the preliminary estimates γ̂^(0), β̂^(0), δ̂^(0), α̂^(0) and (x̂^(0)(n), ŷ^(0)(n)) obtained at Step I-a), we compute the prediction errors

ê^(0)_{n,X} = log(X(n)) − log(x̂^(0)(n)),   ê^(0)_{n,Y} = log(Y(n)) − log(ŷ^(0)(n)).   (25)

Then, we get estimates of the noise parameters; let these estimates be φ̂₀, σ̂₀², ρ̂₀. Further, we compute preliminary predictions by replacing the theoretical values in (24) with their estimates; let these predictions be P̂^(0)_{n,X} and P̂^(0)_{n,Y}, n = 1, ..., N.

Step II-a) (estimation): Using the preliminary predictions P̂^(0)_{n,X} and P̂^(0)_{n,Y} and the preliminary estimates (x̂^(0)(n), ŷ^(0)(n)) and φ̂₀, σ̂₀², ρ̂₀ obtained in Step I, we re-estimate the four parameters γ, β, δ, α. This is done by least squares, where we take into account the dependence structure given in (6). Let γ̂, β̂, δ̂, α̂ be these (final) parameter estimates. Further, we solve (1) at times K(t), for t ≥ 0, with the original parameters replaced with these final parameter estimates, and obtain (x̂(t), ŷ(t)), t > 0.

Step II-b) (prediction): By using the final estimates γ̂, β̂, δ̂, α̂ and (x̂(n), ŷ(n)), n = 1, 2, ..., N, obtained in Step II-a), we repeat Step I-b). Namely, we compute the final estimates φ̂, σ̂², ρ̂ from the new residuals

ê_{n,X} = log(X(n)) − log(x̂(n)),   ê_{n,Y} = log(Y(n)) − log(ŷ(n)),   (26)

and thus we obtain the final predictions P̂_{n,X} and P̂_{n,Y} from (24). In Subsection 4.2, we give the mathematical expressions for some of the steps of the algorithm. Once again, for more details, the reader is referred to Froda and Nkurunziza (2007).

4.2 Closed curve and estimation criteria

Our estimation method is based on the fact that the ODE system (2) admits closed orbits, i.e., for any non-equilibrium initial value (x₀, y₀) there exists a constant η such that any solution (x(t), y(t)) stays on the closed curve

H(x(t), y(t)) = η   (27)

with H(x(t), y(t)) = (γx(t) − δ log x(t)) + (βy(t) − α log y(t)), for all t ≥ 0. If η ≠ 0, let (γ*, β*, δ*, α*) = (γ/|η|, β/|η|, δ/|η|, α/|η|) and let (u*(t), v*(t)) be the solution of the ODE (1) in which (γ, β, δ, α) is replaced with (γ*, β*, δ*, α*). Also, let ω and ω* be the periods of (u(t), v(t)) and (u*(t), v*(t)), respectively. Then, by Corollary 3.7, we have

ω*/(λϖ_d) = |η|,   (28)

where η is the right hand side of equation (27). It should be noted that when λ = 1, relation (28) corresponds to that given in Froda and Colavita (2005) or in Froda and Nkurunziza (2007). Further, let Λ = (γ, β, δ, α), Λ* = (γ*, β*, δ*, α*) and let

H*(x, y) = e^{−σ²/2}(γ* x + β* y) − (δ* log x + α* log y) = (1/|η|) { e^{−σ²/2}(γx + βy) − (δ log x + α log y) }.

From Corollary 2.3, it can be verified that

E[H*(X_{n+1}, Y_{n+1})] = (1/|η|) H(x_{n+1}, y_{n+1}) = η/|η|.

For simplicity, we assume that η > 0, and then the unconditional expectation of H*(X_{n+1}, Y_{n+1}) becomes 1. This suggests that we consider the least squares (LS) minimization of

Σ_{n=0}^{N−1} { H*(X_{n+1}, Y_{n+1}) − 1 }²,   (29)

but since the parameter σ² is unknown, and for asymptotic theory reasons, we need to consider a slightly different least squares criterion. First, as in Froda and Nkurunziza (2007), we find a preliminary estimate σ̂₀² of σ². In finding this estimate, we exploit the periodicity present in the data, which allows us to partition the data into subsets such that in each subset the observations are separated by a multiple of ϖ_d, with ϖ_d the period observed in the data. As mentioned above, the estimation of the period ϖ_d is itself a statistical problem; this problem is beyond the scope of this paper, and the reader is referred to, for example, Whittle (1954), Spanjaard and White (1995), Bulmer (1974) and references therein. In this paper, we take the period of the data, ϖ_d, as known (fixed) or estimated from previous studies; thus, in the sequel, for simplicity, we use ϖ_d instead of its estimate ϖ̂_d. Then, we can resample N data points with replacement m times, and average over the data in each subset. More specifically, let I(r) be the set of indices of the variables present in the r-th sample, r = 1, ..., m; then we set

σ̂₀² = (1/(2m)) Σ_{r=1}^{m} ( Q²_{r,X} + Q²_{r,Y} ),   (30)

where, for r = 1, ..., m and i₀ ∈ {1, 2, ..., ϖ_d}, we have, e.g.,

Q²_{r,X} = (1/ϖ_d) Σ_{i₀ ∈ {1,2,...,ϖ_d}} Q²_{i₀,X}(r),

with

Q²_{i₀,X}(r) = (1/m_{i₀}(r)) Σ_{l ∈ {i ∈ I(r): 1 ≤ i₀+iϖ_d ≤ N}} { log(X_{i₀+lϖ_d}) − log(X̄_{i₀,r}) }²,   (31)

and

m_{i₀}(r) = card({i ∈ I(r) : 1 ≤ i₀ + iϖ_d ≤ N}),   Σ_{i₀} m_{i₀}(r) = N,
log(X̄_{i₀,r}) = (1/m_{i₀}(r)) Σ_{l ∈ {i ∈ I(r): 1 ≤ i₀+iϖ_d ≤ N}} log(X_{i₀+lϖ_d}).

Q²_{i₀,Y}(r) is computed in a similar way. Note that, for these estimators, we assume that m_{i₀}(r) ≥ 2, i.e. the process has been observed over more than one period. (The original sample corresponds to r = 1.)
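The resampled variance estimator (30)-(31) amounts to grouping observations by their phase within the period, averaging within each phase group, and pooling the squared deviations. The sketch below implements this computation for an integer period; the synthetic data, the number of resamples m and the handling of sparse phase groups are illustrative simplifications, not the paper's exact implementation.

```python
# Minimal sketch of the preliminary variance estimator (30)-(31) for an integer period.
# The synthetic log-observations and the number of resamples are illustrative choices.
import numpy as np

def sigma2_hat0(logX, logY, period, m=50, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    N = logX.size
    total = 0.0
    for r in range(m):
        # first sample is the original one (r = 1 in the text), the others are bootstrap draws
        idx = np.arange(N) if r == 0 else np.sort(rng.integers(0, N, size=N))
        for series in (logX, logY):
            q2 = []
            for i0 in range(period):                  # phase i0 within one period
                grp = series[idx[(idx % period) == i0]]
                if grp.size >= 2:                     # requires m_{i0}(r) >= 2
                    q2.append(np.mean((grp - grp.mean()) ** 2))
            total += np.mean(q2)                      # Q^2_{r,X} or Q^2_{r,Y}
    return total / (2.0 * m)                          # cf. (30)

rng = np.random.default_rng(3)
period, N = 9, 63
signal = 1.5 + np.sin(2 * np.pi * np.arange(N) / period)  # perfectly periodic toy signal
logX = signal + 0.2 * rng.standard_normal(N)
logY = signal + 0.2 * rng.standard_normal(N)
print(sigma2_hat0(logX, logY, period))                # rough estimate (true value 0.04 here)
```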

We close this subsection by noting that, for asymptotic theory reasons, instead of (29) it is convenient to consider a least squares criterion that relies on the periodicity of the data. We present this criterion in the next subsection.

4.3 Grouping data assumption and modified version of the algorithm

In this subsection, we present a modified version of the algorithm that relies on the periodicity of the data. The purpose of this modified version is to simplify the proofs of the asymptotic results. Before presenting the criterion on which this algorithm is based, we note that for N fixed (finite) the new criterion is equivalent to the OLS criterion given in (29). As mentioned in Froda and Nkurunziza (2007), we assume that the process has been observed over more than one period, and thus the data are assumed to have a special structure, which is quite natural and can be met in practice. Namely, if ϖ_d is the period, we suppose that observations have been collected repeatedly at times which correspond (approximately) to the same fractions of the period ϖ_d, e.g. they have been collected at (1/8)ϖ_d, (1/4)ϖ_d, (1/2)ϖ_d, (3/4)ϖ_d, etc. (plus a multiple of ϖ_d). For specific animal data one can indeed assume that given times of the year correspond to natural

life cycles, and therefore to specific fractions of one period. In such a design, the expectations of (X_i, Y_i), i = 1, ..., N, belong to a finite subset of distinct couples on the closed curve (which are well spread over one period). Let k be the number of such distinct couples (x_(1), y_(1)), ..., (x_(k), y_(k)), with k not depending on N. The formal setting is given in the following grouping assumption, (C₄). Let k ∈ ℕ, k > 4, be fixed and, for j = 1, 2, ..., k, let I_j be the set of indices of the observations in group j; N_j is the cardinality of I_j. Define λ_j > 0 by N_j/N = λ_j > 0, with Σ_{j=1}^k λ_j = 1. For each group j = 1, 2, ..., k, assume that the data are such that

E[log(X_i)] = log(x_(j)),   E[log(Y_i)] = log(y_(j)),   i ∈ I_j,   j = 1, 2, ..., k.   (32)

For such grouped data, we consider average versions of H(X_i, Y_i, Λ) over the data in the same group, i.e., for j = 1, 2, ..., k,

H̄_(j)(Λ) = (1/N_j) Σ_{i ∈ I_j} H(X_i, Y_i, Λ),   H̄ = (H̄_(1), H̄_(2), ..., H̄_(k)).   (33)

Correspondingly, let

H*(X_i, Y_i, Λ*) = e^{−σ²/2}(γ* X_i + β* Y_i) − (δ* log(X_i) + α* log(Y_i)),

and consider the averages

H̄*_(j)(Λ*) = (1/N_j) Σ_{i ∈ I_j} H*(X_i, Y_i, Λ*),   H̄* = (H̄*_(1), H̄*_(2), ..., H̄*_(k)).   (34)

Finally, we study the following weighted least squares (WLS) problem: minimize in Λ* = (γ*, β*, δ*, α*)

Σ_{j=1}^{k} N_j ( H̄*_(j)(Λ*) − 1 )²,   (35)

where, in H̄*_(j)(Λ*) as given in (34), we replace σ² with a preliminary estimate σ̂₀² and obtain the corresponding estimated version of H̄*_(j)(Λ*).
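The WLS problem (35) is a smooth four-parameter minimization and can be handled by any general-purpose optimizer. The sketch below codes the objective and minimizes it with scipy.optimize for grouped toy data; the group structure, the data and the preliminary σ̂₀² are all illustrative assumptions.

```python
# Minimal sketch: the weighted least squares criterion (35) for Lambda* = (g*, b*, d*, a*),
# minimized numerically. Groups, data and sigma2_0 are toy assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
k, Nj, sigma2_0 = 8, 12, 0.03
# toy grouped observations (X_i, Y_i), one row per group
X = 2.0 + rng.gamma(5.0, 0.3, size=(k, Nj))
Y = 1.0 + rng.gamma(4.0, 0.25, size=(k, Nj))

def H_star(X, Y, lam_star):
    g, b, d, a = lam_star
    return np.exp(-sigma2_0 / 2.0) * (g * X + b * Y) - (d * np.log(X) + a * np.log(Y))

def wls_objective(lam_star):
    Hbar = H_star(X, Y, lam_star).mean(axis=1)        # group averages, cf. (34)
    return np.sum(Nj * (Hbar - 1.0) ** 2)             # criterion (35)

res = minimize(wls_objective, x0=np.array([0.3, 0.3, 0.5, 0.5]), method="Nelder-Mead")
print(res.x, res.fun)                                  # candidate Lambda* and attained criterion
```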

It should be noted that, for the finite population case, by taking N_j = 1, j = 1, 2, ..., k, the criteria (29) and (35) are equivalent; thus, in practice, one can apply (29) for simplicity. Let Λ̂* be the estimate of Λ* obtained from (35). Further, in order to obtain an estimate of Λ, we use Corollary 3.7 (with b = 0) and set η̂ = ω̂*/(λϖ_d), where ϖ_d and ω̂* are the period observed in the data and the period of the solution to the ODE system (1) with parameters (γ̂*, β̂*, δ̂*, α̂*), respectively. To solve this system (and thus obtain its period) we need to estimate the initial value (x₀, y₀). As justified in Froda and Colavita (2005), we take an initial value (x̂₀, ŷ₀) on the estimated curve

(γ̂* x − δ̂* log x) + (β̂* y − α̂* log y) = 1.   (36)

Froda and Colavita (2005) prove that (x̂₀, ŷ₀) satisfying (36) is a strongly consistent estimator of (x₀, y₀), the initial value of the ODE (2). Then we obtain a preliminary estimate of Λ as

(γ̂₀, β̂₀, δ̂₀, α̂₀) = Λ̂^(0) = Λ̂* η̂ = Λ̂* ω̂*/(λϖ_d).   (37)

Since

(γ* x − δ* log x) + (β* y − α* log y) = 1 is equivalent to (γx − δ log x) + (βy − α log y) = η,

we can use the same estimated curve (36) to get an initial value for the system in γ, β, δ, α. Thus, at the end of this step we solve the ODE system (2) with estimated coefficients Λ̂^(0) and initial value on the estimated closed curve (36), and we obtain the preliminary estimates (x̂^(0)(n), ŷ^(0)(n)) of (x(n), y(n)) at times n = 1, ..., N. Then, we obtain (σ̂₀², φ̂₀, ρ̂₀), the preliminary estimates of the nuisance parameters

(σ², φ, ρ), by using the same formulas as in Froda and Nkurunziza (2007, p. 41). For completeness, let

ē_X = (1/N) Σ_{n=1}^{N} ê_{n,X},   ē_Y = (1/N) Σ_{n=1}^{N} ê_{n,Y},

and let

S²_X = (1/N) Σ_{i=1}^{N} (ê_{i,X} − ē_X)²,   S²_Y = (1/N) Σ_{i=1}^{N} (ê_{i,Y} − ē_Y)²,

and, as a common estimator of σ², take

σ̂₀² = (S²_X + S²_Y)/2.   (38)

Let I_A be the indicator function of the event A. Further, in order to estimate φ and c, let

φ̂_X = Σ_{i=0}^{N−1} ê_{i+1,X} ê_{i,X} / Σ_{i=0}^{N−1} (ê_{i,X})²,   φ̂_Y = Σ_{i=0}^{N−1} ê_{i+1,Y} ê_{i,Y} / Σ_{i=0}^{N−1} (ê_{i,Y})²,

and take

φ̂₀ = (φ̂_X + φ̂_Y)/2,   ĉ₀ = −log(φ̂₀) I_{φ̂₀ > 0},   (39)

because φ = exp(−c) for c > 0. Finally, we obtain an estimate of ρ,

ρ̂₀ = [σ̂₀² (1 − φ̂₀²)]^{−1} (1/N) Σ_{i=1}^{N} (ê_{i,X} − φ̂₀ ê_{i−1,X})(ê_{i,Y} − φ̂₀ ê_{i−1,Y}).   (40)

Further, from (24), we get the estimated predictions P̂^(0)_{n,X} and P̂^(0)_{n,Y}. The rest of the steps of the algorithm are done as in Froda and Nkurunziza (2007, pp. 43-44).
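Formulas (38)-(40) are straightforward moment computations on the residual series. A compact implementation is sketched below, operating on residual arrays ê_{n,X}, ê_{n,Y}; the residuals fed in are simulated AR(1) noise, purely for illustration, so the printed values are only meant to be near the simulation inputs.

```python
# Minimal sketch of the nuisance-parameter estimators (38)-(40) applied to residual arrays.
import numpy as np

def nuisance_estimates(eX, eY):
    """Return (sigma2_0, phi_0, c_0, rho_0) from residual series eX, eY of equal length."""
    sigma2_0 = 0.5 * (eX.var() + eY.var())            # (38)
    phiX = np.dot(eX[1:], eX[:-1]) / np.dot(eX[:-1], eX[:-1])
    phiY = np.dot(eY[1:], eY[:-1]) / np.dot(eY[:-1], eY[:-1])
    phi_0 = 0.5 * (phiX + phiY)                       # (39)
    c_0 = -np.log(phi_0) if phi_0 > 0 else 0.0        # c_0 = -log(phi_0) I{phi_0 > 0}
    num = np.mean((eX[1:] - phi_0 * eX[:-1]) * (eY[1:] - phi_0 * eY[:-1]))
    rho_0 = num / (sigma2_0 * (1.0 - phi_0 ** 2))     # (40)
    return sigma2_0, phi_0, c_0, rho_0

# toy residuals: correlated AR(1) noise with known parameters
rng = np.random.default_rng(5)
N, phi, s2, rho = 500, 0.6, 0.05, 0.4
eps = rng.multivariate_normal([0, 0], s2 * (1 - phi**2) * np.array([[1, rho], [rho, 1]]), N)
e = np.zeros((N + 1, 2))
for i in range(1, N + 1):
    e[i] = phi * e[i - 1] + eps[i - 1]
print(nuisance_estimates(e[:, 0], e[:, 1]))           # roughly (0.05, 0.6, 0.51, 0.4)
```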

Briefly, let ĥ^(0)_{n+1,n} and V̂ar^(0)_n be the right-hand sides of (11) and (12), respectively, with the parameters replaced by their corresponding preliminary estimates, for n = 1, 2, .... Also, let

h̄_(j) = (1/N_j) Σ_{i ∈ I_j} ĥ^(0)_{i+1,i}   and   V̄ar_(j) = (1/N_j) Σ_{i ∈ I_j} V̂ar^(0)_i.   (41)

Then, we apply the weighted least squares method in order to find Λ̂, the final estimate of Λ. More precisely, we minimize, in Λ,

Σ_{j=1}^{k} N_j { H̄_(j)(Λ) − h̄_(j) }² / V̄ar_(j).   (42)

Also, we solve (2) with the original parameters replaced with these final parameter estimates, and obtain (x̂(n), ŷ(n)), n = 1, ..., N. Further, in a similar way as in (38), (39) and (40), we get (σ̂², φ̂, ρ̂), the final estimates of (σ², φ, ρ). Finally, from (24), we obtain the final predictions of the population sizes.

5 Asymptotic results

In this section, we prove that, under the statistical model (8), if (x(t), y(t)) is the solution of (2), our estimator is strongly consistent when the sample size N is large (see Proposition 5.1). Also, we establish a result which shows that, asymptotically, our method refines the Froda and Nkurunziza (2007) method by |1 − 1/λ| × 100% (see Proposition 5.3 and Proposition 5.4). For the proofs of these propositions, we refer the reader to the Appendix.

Proposition 5.1. Assume that (C₁), (C₂), (C₃) and (C₄) hold. Then Λ̂ → Λ a.s. as N → ∞ (a.s. stands for almost surely).

Proposition 5.2. Assume that (C₁), (C₂), (C₃) and (C₄) hold. Then (σ̂², φ̂, ρ̂) → (σ², φ, ρ) almost surely as N → ∞.

The proof is similar to that given in Froda and Nkurunziza (2007, Proposition B.2.5). Before stating Proposition 5.3, let Λ̂^(0)_{Fr-Nk} (where Fr-Nk stands for Froda and Nkurunziza) be the preliminary estimator of Λ obtained from the Froda and Nkurunziza (2007) method. Also, let Λ̂_{Fr-Nk} be the final estimator obtained from the Froda and Nkurunziza (2007) method. Recall that Λ̂^(0) and Λ̂ denote, respectively, the preliminary and the final estimator of Λ obtained from the present method. For any column vector a ∈ ℝ⁴, let a′ denote the transpose of a and let ‖a‖ = √(a′a) (the Euclidean norm in ℝ⁴). We define the relative difference between Λ̂^(0) and Λ̂^(0)_{Fr-Nk} as

ϑ^(0)(N) = ‖Λ̂^(0) − Λ̂^(0)_{Fr-Nk}‖ / ‖Λ̂^(0)_{Fr-Nk}‖.

In other words, ϑ^(0)(N) measures how much the preliminary estimator from our method improves the preliminary estimator from the Froda and Nkurunziza (2007) method.

Proposition 5.3. Assume that (C₃) holds (with b = 0). Then

Λ̂^(0) = (1/λ) Λ̂^(0)_{Fr-Nk}   and   ϑ^(0)(N) = |λ − 1|/λ   for all N ≥ 4.   (43)

In a similar way, we define the relative difference between Λ̂ and Λ̂_{Fr-Nk} as

ϑ(N) = ‖Λ̂ − Λ̂_{Fr-Nk}‖ / ‖Λ̂_{Fr-Nk}‖.

It measures how much the final estimator from our method improves the final estimator from the Froda and Nkurunziza (2007) method. Proposition 5.4 proves that, when N is large, ϑ(N) tends to |1 − 1/λ| (i.e. an improvement of |1 − 1/λ| × 100%). Also, on one hand, from Proposition 5.3 and Proposition 5.4, it follows that, under the statistical model (8), if (x(t), y(t)) is the solution of the ODE (2) with λ ≠ 1, the estimator obtained from the Froda and Nkurunziza (2007) method is no longer consistent. This highlights an additional practical interest of our method, since, by Proposition 5.1, our estimator is strongly consistent. On the other hand, when λ = 1, we may have κ(t) ≡ 1 or κ(t) not constant, but in both cases our method becomes equivalent to the Froda and Nkurunziza (2007) method. This shows that, in addition to being more general, our approach is more flexible than the previously quoted method.

Proposition 5.4. Assume that (C₁), (C₂), (C₄) and (C₃) hold (with b = 0). Then,
(i) ‖Λ̂^(0) − Λ̂_{Fr-Nk}‖ / ‖Λ̂_{Fr-Nk}‖ → |1 − 1/λ| almost surely as N → ∞;
(ii) ϑ(N) → |1 − 1/λ| almost surely as N → ∞.   (44)

6 Data analysis: two examples

6.1 Estimation and prediction

In this section, we illustrate the application of our method on two real data sets. The first data set consists of 64 pairs of mink-muskrat observations as given in Brockwell and

Davis (1991). For the mink-muskrat data set, the prey is the muskrat and the predator is the mink. Note that these data correspond to fur sales of the Hudson's Bay Company. Also, it should be noted that the same data have been used in Froda and Colavita (2005), in Froda and Nkurunziza (2007), as well as in Nkurunziza (2005), to illustrate their estimation methods. Further, this data set has been studied by Bulmer (1974), who observed an approximate ten-year cycle and commented on the fact that the muskrat cycle is due to predation by mink. Thus, the mink-muskrat pair is a predator-prey couple which seems to satisfy the requirement that the muskrat (prey) is the main food supply for the mink (predator). The second data set is the paramecium-didinium one, as given in Luckinbill (1973); it consists of 62 observations of Paramecium caudatum and 62 of Didinium nasutum, observed over 30 days and measured every half day. For this data set, the prey is Paramecium caudatum while the predator is Didinium nasutum. This is a laboratory data set where the predator feeds on one prey only. Also, some conditions of coexistence between the predator and the prey were created by the experimenter. Namely, to sustain the prey population, both ciliates were placed in a special medium which rendered the paramecium poorer food for the didinium. This in turn reduced the predator's efficiency and allowed coexistence, with an approximate period of 4.5 days (or 9 half-days). In order to illustrate our method, we take

κ(t) = λ + (1 − λ) cos(2πt/ϖ_d)

for both data sets. Applying our method, we obtain the parameter estimates given in Table 1. Moreover, the final estimates of the nuisance parameters are given in Table 2.

TAB. 1 Parameter estimates: preliminary (Λ̂*^(0), Λ̂^(0)) and final (Λ̂*, Λ̂) estimates of Λ* and Λ, i.e. of γ, β, δ, α, for the mink-muskrat and didinium-paramecium data sets.

TAB. 2 Estimates of the nuisance parameters σ², φ, ρ for the mink-muskrat and didinium-paramecium data sets.

Comparing our parameter estimates given in Table 1 to the parameter estimates given in Froda and Nkurunziza (2007, Table 1, p. 49), these numerical values confirm the theoretical result established in Section 5. In fact, our preliminary estimates correspond to 1/0.765 (= 1/λ) times the preliminary estimates given in the quoted paper. Furthermore, the relative difference ϑ(N) (equal to 0.4 for the mink-muskrat data set, with a similar value for the paramecium-didinium data set) is relatively close to |1 − 1/λ|, which is the asymptotic improvement.

In order to assess the performance of our estimators, we compare the predicted population values X̂_n and Ŷ_n with their corresponding observed data points X_n and Y_n. Graphically, Figure 1 and Figure 2 highlight a good performance of our estimation method. Also, as we illustrate in the sequel, an out-of-sample prediction exercise confirms this good performance.

FIG. 1 The fitted and observed log-sizes (mink-muskrat); legend: observed and predicted log sizes for the muskrat and the mink.

FIG. 2 The fitted and observed log-sizes (paramecium-didinium); legend: observed and predicted log sizes for the paramecium and the didinium.

FIG. 3 The fitted and observed log-sizes and the estimated ODE solution (mink-muskrat); panels: muskrat, mink.

FIG. 4 The fitted and observed log-sizes and the estimated ODE solution (paramecium-didinium); panels: paramecium, didinium.

As mentioned above, we use out-of-sample prediction in order to evaluate the performance of our prediction model. For the mink-muskrat data we set aside the last 10 data points, while for the paramecium-didinium data we set aside the last 9 data points. Then, using the remaining data points (the first 54 and 53 for the mink-muskrat and paramecium-didinium data, respectively), we estimate the parameters and compute one-step-ahead forecasts corresponding to the out-of-sample data points. Comparing the one-step-ahead forecasts with their corresponding observed values, Figure 3 and Figure 4 illustrate that the forecasts are quite good.

FIG. 5 The estimated closed curve and the observed log-sizes (mink-muskrat).

FIG. 6 The estimated closed curve and the observed log-sizes (paramecium-didinium).

6.2 Quality of fit: improvement over the i.i.d. model

In this subsection, we show that the predictor from our method is better than that from the Froda and Colavita (2005) method. First, note that the ODE system (2) under consideration generalizes the classical Lotka-Volterra ODE system (1) that is considered in Froda and Nkurunziza (2007). In addition, from (21), we refine the estimation of the closed-curve constant η, and that refines the estimation of (γ, β, δ, α) and of (x(t), y(t)). Concerning this refinement, Proposition 5.4 proves that our method improves the Froda and Nkurunziza (2007) method by |1 − 1/λ| × 100%. Moreover, the stochastic model (8) generalizes the similar model presented in Froda and Colavita (2005), whose error is an i.i.d. process. Further, it can be verified that the statistical model (8) improves the quality of fit in comparison with the model with an i.i.d. error process. The idea of the proof consists in applying the re-parametrization result given in Corollary 3.2 to reduce the problem to the case studied in Froda and Nkurunziza (2007); after this re-parametrization, the proof follows from the statement made in Froda and Nkurunziza (2007, p. 49). For completeness, we outline below the way we assess the quality of fit (for more details, see Froda and Nkurunziza, 2007). First, for the stochastic model (8), whose error follows an Ornstein-Uhlenbeck process as given by (9), the prediction is

P_{i,X} = log(x_i) + φ (log(X_{i−1}) − log(x_{i−1}));   P_{i,Y} = log(y_i) + φ (log(Y_{i−1}) − log(y_{i−1})).

Second, for the stochastic model given in Froda and Colavita (2005), which is similar to (8) but with an i.i.d. error process, the prediction is the deterministic function, i.e. we have

P_{i,X} = log(x_i);   P_{i,Y} = log(y_i);   i = 1, 2, ..., N.

In order to assess the quality of fit, we define the average prediction error as

R(P) = (1/N) Σ_{i=1}^{N} [P_{i,X} − log(X_i)]².

Then, the respective average prediction errors are

R(P)_{O-U} = (1/N) Σ_{i=1}^{N} [log(X_i) − log(x_i) − φ (log(X_{i−1}) − log(x_{i−1}))]²,   (45)

and

R(P)_{ind} = (1/N) Σ_{i=1}^{N} [log(X_i) − log(x_i)]²,   (46)

where O-U stands for Ornstein-Uhlenbeck and ind stands for independent (i.e. the i.i.d. case). Note that R(P)_{O-U} and R(P)_{ind} depend on (x(t), y(t)) and on the nuisance parameter φ. In practice, we replace R(P)_{O-U} and R(P)_{ind} by their corresponding estimated versions R̂^(0)(P)_{O-U} and R̂^(0)(P)_{ind}. Then, in order to assess the quality of fit, we compute the estimated relative differences

R̂^(0)_r = [R̂^(0)(P)_{ind} − R̂^(0)(P)_{O-U}] / R̂^(0)(P)_{ind} = 1 − R̂^(0)(P)_{O-U} / R̂^(0)(P)_{ind}   (47)

and

R_r = 1 − R(P)_{O-U} / R(P)_{ind}.   (48)

Thus, in each column of Table 3, the values indicate how the present method improves over the Froda and Colavita (2005) method.
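Computing the two average prediction errors (45)-(46) and the relative difference (48) is a one-pass calculation over the log series. The sketch below implements it; the simulated inputs are illustrative, so the printed R_r is only meant to be positive (the O-U based predictor beating the i.i.d. one on autocorrelated residuals).

```python
# Minimal sketch of the quality-of-fit comparison (45)-(48) between the O-U based
# predictor and the i.i.d. based predictor; the toy data below are simulated.
import numpy as np

def relative_improvement(logX, logx, phi):
    """R_r = 1 - R(P)_{O-U} / R(P)_{ind}, computed over i = 1, ..., N, cf. (45)-(48)."""
    resid = logX - logx
    R_ou = np.mean((resid[1:] - phi * resid[:-1]) ** 2)    # (45)
    R_ind = np.mean(resid[1:] ** 2)                        # (46)
    return 1.0 - R_ou / R_ind                              # (48)

# toy example: AR(1) residuals around a smooth log trajectory
rng = np.random.default_rng(7)
N, phi = 200, 0.6
logx = np.log(2.0 + np.sin(2 * np.pi * np.arange(N + 1) / 25.0))
e = np.zeros(N + 1)
for i in range(1, N + 1):
    e[i] = phi * e[i - 1] + 0.1 * rng.standard_normal()
print(relative_improvement(logx + e, logx, phi))           # positive: the O-U model fits better
```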


More information

Stochastic Stokes drift of a flexible dumbbell

Stochastic Stokes drift of a flexible dumbbell Stochastic Stokes drift of a flexible dumbbell Kalvis M. Jansons arxiv:math/0607797v4 [math.pr] 22 Mar 2007 Department of Mathematics, University College London, Gower Street, London WC1E 6BT, UK January

More information

2D-Volterra-Lotka Modeling For 2 Species

2D-Volterra-Lotka Modeling For 2 Species Majalat Al-Ulum Al-Insaniya wat - Tatbiqiya 2D-Volterra-Lotka Modeling For 2 Species Alhashmi Darah 1 University of Almergeb Department of Mathematics Faculty of Science Zliten Libya. Abstract The purpose

More information

CHAPTER V. Brownian motion. V.1 Langevin dynamics

CHAPTER V. Brownian motion. V.1 Langevin dynamics CHAPTER V Brownian motion In this chapter, we study the very general paradigm provided by Brownian motion. Originally, this motion is that a heavy particle, called Brownian particle, immersed in a fluid

More information

Empirical Market Microstructure Analysis (EMMA)

Empirical Market Microstructure Analysis (EMMA) Empirical Market Microstructure Analysis (EMMA) Lecture 3: Statistical Building Blocks and Econometric Basics Prof. Dr. Michael Stein michael.stein@vwl.uni-freiburg.de Albert-Ludwigs-University of Freiburg

More information

1. Stochastic Processes and Stationarity

1. Stochastic Processes and Stationarity Massachusetts Institute of Technology Department of Economics Time Series 14.384 Guido Kuersteiner Lecture Note 1 - Introduction This course provides the basic tools needed to analyze data that is observed

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Stochastic Processes M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno 1 Outline Stochastic (random) processes. Autocorrelation. Crosscorrelation. Spectral density function.

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 12: Gaussian Belief Propagation, State Space Models and Kalman Filters Guest Kalman Filter Lecture by

More information

Motivation and Goals. Modelling with ODEs. Continuous Processes. Ordinary Differential Equations. dy = dt

Motivation and Goals. Modelling with ODEs. Continuous Processes. Ordinary Differential Equations. dy = dt Motivation and Goals Modelling with ODEs 24.10.01 Motivation: Ordinary Differential Equations (ODEs) are very important in all branches of Science and Engineering ODEs form the basis for the simulation

More information

Chapter 2: Unit Roots

Chapter 2: Unit Roots Chapter 2: Unit Roots 1 Contents: Lehrstuhl für Department Empirische of Wirtschaftsforschung Empirical Research and undeconometrics II. Unit Roots... 3 II.1 Integration Level... 3 II.2 Nonstationarity

More information

LANGEVIN THEORY OF BROWNIAN MOTION. Contents. 1 Langevin theory. 1 Langevin theory 1. 2 The Ornstein-Uhlenbeck process 8

LANGEVIN THEORY OF BROWNIAN MOTION. Contents. 1 Langevin theory. 1 Langevin theory 1. 2 The Ornstein-Uhlenbeck process 8 Contents LANGEVIN THEORY OF BROWNIAN MOTION 1 Langevin theory 1 2 The Ornstein-Uhlenbeck process 8 1 Langevin theory Einstein (as well as Smoluchowski) was well aware that the theory of Brownian motion

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Gause, Luckinbill, Veilleux, and What to Do. Christopher X Jon Jensen Stony Brook University

Gause, Luckinbill, Veilleux, and What to Do. Christopher X Jon Jensen Stony Brook University Gause, Luckinbill, Veilleux, and What to Do Christopher X Jon Jensen Stony Brook University Alternative Models of Predation: Functional Responses: f (N) Prey Dependent f (N/P) Ratio Dependent Possible

More information

Week 5 Quantitative Analysis of Financial Markets Characterizing Cycles

Week 5 Quantitative Analysis of Financial Markets Characterizing Cycles Week 5 Quantitative Analysis of Financial Markets Characterizing Cycles Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036

More information

Econ 623 Econometrics II Topic 2: Stationary Time Series

Econ 623 Econometrics II Topic 2: Stationary Time Series 1 Introduction Econ 623 Econometrics II Topic 2: Stationary Time Series In the regression model we can model the error term as an autoregression AR(1) process. That is, we can use the past value of the

More information

Some functional (Hölderian) limit theorems and their applications (II)

Some functional (Hölderian) limit theorems and their applications (II) Some functional (Hölderian) limit theorems and their applications (II) Alfredas Račkauskas Vilnius University Outils Statistiques et Probabilistes pour la Finance Université de Rouen June 1 5, Rouen (Rouen

More information

On Input Design for System Identification

On Input Design for System Identification On Input Design for System Identification Input Design Using Markov Chains CHIARA BRIGHENTI Masters Degree Project Stockholm, Sweden March 2009 XR-EE-RT 2009:002 Abstract When system identification methods

More information

Conditional Least Squares and Copulae in Claims Reserving for a Single Line of Business

Conditional Least Squares and Copulae in Claims Reserving for a Single Line of Business Conditional Least Squares and Copulae in Claims Reserving for a Single Line of Business Michal Pešta Charles University in Prague Faculty of Mathematics and Physics Ostap Okhrin Dresden University of Technology

More information

Statistical signal processing

Statistical signal processing Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable

More information

Ch. 15 Forecasting. 1.1 Forecasts Based on Conditional Expectations

Ch. 15 Forecasting. 1.1 Forecasts Based on Conditional Expectations Ch 15 Forecasting Having considered in Chapter 14 some of the properties of ARMA models, we now show how they may be used to forecast future values of an observed time series For the present we proceed

More information

8 Ecosystem stability

8 Ecosystem stability 8 Ecosystem stability References: May [47], Strogatz [48]. In these lectures we consider models of populations, with an emphasis on the conditions for stability and instability. 8.1 Dynamics of a single

More information

Principles of forecasting

Principles of forecasting 2.5 Forecasting Principles of forecasting Forecast based on conditional expectations Suppose we are interested in forecasting the value of y t+1 based on a set of variables X t (m 1 vector). Let y t+1

More information

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Chapter 3 - Temporal processes

Chapter 3 - Temporal processes STK4150 - Intro 1 Chapter 3 - Temporal processes Odd Kolbjørnsen and Geir Storvik January 23 2017 STK4150 - Intro 2 Temporal processes Data collected over time Past, present, future, change Temporal aspect

More information

Linear models and their mathematical foundations: Simple linear regression

Linear models and their mathematical foundations: Simple linear regression Linear models and their mathematical foundations: Simple linear regression Steffen Unkel Department of Medical Statistics University Medical Center Göttingen, Germany Winter term 2018/19 1/21 Introduction

More information

Class 1: Stationary Time Series Analysis

Class 1: Stationary Time Series Analysis Class 1: Stationary Time Series Analysis Macroeconometrics - Fall 2009 Jacek Suda, BdF and PSE February 28, 2011 Outline Outline: 1 Covariance-Stationary Processes 2 Wold Decomposition Theorem 3 ARMA Models

More information

ODE Final exam - Solutions

ODE Final exam - Solutions ODE Final exam - Solutions May 3, 018 1 Computational questions (30 For all the following ODE s with given initial condition, find the expression of the solution as a function of the time variable t You

More information

PREDICTION AND NONGAUSSIAN AUTOREGRESSIVE STATIONARY SEQUENCES 1. Murray Rosenblatt University of California, San Diego

PREDICTION AND NONGAUSSIAN AUTOREGRESSIVE STATIONARY SEQUENCES 1. Murray Rosenblatt University of California, San Diego PREDICTION AND NONGAUSSIAN AUTOREGRESSIVE STATIONARY SEQUENCES 1 Murray Rosenblatt University of California, San Diego Abstract The object of this paper is to show that under certain auxiliary assumptions

More information

Time Series Analysis -- An Introduction -- AMS 586

Time Series Analysis -- An Introduction -- AMS 586 Time Series Analysis -- An Introduction -- AMS 586 1 Objectives of time series analysis Data description Data interpretation Modeling Control Prediction & Forecasting 2 Time-Series Data Numerical data

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

Levinson Durbin Recursions: I

Levinson Durbin Recursions: I Levinson Durbin Recursions: I note: B&D and S&S say Durbin Levinson but Levinson Durbin is more commonly used (Levinson, 1947, and Durbin, 1960, are source articles sometimes just Levinson is used) recursions

More information

Forecasting with ARMA

Forecasting with ARMA Forecasting with ARMA Eduardo Rossi University of Pavia October 2013 Rossi Forecasting Financial Econometrics - 2013 1 / 32 Mean Squared Error Linear Projection Forecast of Y t+1 based on a set of variables

More information

Dynamical Systems. August 13, 2013

Dynamical Systems. August 13, 2013 Dynamical Systems Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 Dynamical Systems are systems, described by one or more equations, that evolve over time.

More information

Ch.10 Autocorrelated Disturbances (June 15, 2016)

Ch.10 Autocorrelated Disturbances (June 15, 2016) Ch10 Autocorrelated Disturbances (June 15, 2016) In a time-series linear regression model setting, Y t = x tβ + u t, t = 1, 2,, T, (10-1) a common problem is autocorrelation, or serial correlation of the

More information

Statistical Inference and Methods

Statistical Inference and Methods Department of Mathematics Imperial College London d.stephens@imperial.ac.uk http://stats.ma.ic.ac.uk/ das01/ 31st January 2006 Part VI Session 6: Filtering and Time to Event Data Session 6: Filtering and

More information

1 An Introduction to Stochastic Population Models

1 An Introduction to Stochastic Population Models 1 An Introduction to Stochastic Population Models References [1] J. H. Matis and T. R. Kiffe. Stochastic Population Models, a Compartmental Perective. Springer-Verlag, New York, 2000. [2] E. Renshaw. Modelling

More information

Models Involving Interactions between Predator and Prey Populations

Models Involving Interactions between Predator and Prey Populations Models Involving Interactions between Predator and Prey Populations Matthew Mitchell Georgia College and State University December 30, 2015 Abstract Predator-prey models are used to show the intricate

More information

ECE 636: Systems identification

ECE 636: Systems identification ECE 636: Systems identification Lectures 3 4 Random variables/signals (continued) Random/stochastic vectors Random signals and linear systems Random signals in the frequency domain υ ε x S z + y Experimental

More information

Stochastic Processes

Stochastic Processes Introduction and Techniques Lecture 4 in Financial Mathematics UiO-STK4510 Autumn 2015 Teacher: S. Ortiz-Latorre Stochastic Processes 1 Stochastic Processes De nition 1 Let (E; E) be a measurable space

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

Levinson Durbin Recursions: I

Levinson Durbin Recursions: I Levinson Durbin Recursions: I note: B&D and S&S say Durbin Levinson but Levinson Durbin is more commonly used (Levinson, 1947, and Durbin, 1960, are source articles sometimes just Levinson is used) recursions

More information

Lecture 1: Pragmatic Introduction to Stochastic Differential Equations

Lecture 1: Pragmatic Introduction to Stochastic Differential Equations Lecture 1: Pragmatic Introduction to Stochastic Differential Equations Simo Särkkä Aalto University, Finland (visiting at Oxford University, UK) November 13, 2013 Simo Särkkä (Aalto) Lecture 1: Pragmatic

More information

7. Forecasting with ARIMA models

7. Forecasting with ARIMA models 7. Forecasting with ARIMA models 309 Outline: Introduction The prediction equation of an ARIMA model Interpreting the predictions Variance of the predictions Forecast updating Measuring predictability

More information

Distributed Learning based on Entropy-Driven Game Dynamics

Distributed Learning based on Entropy-Driven Game Dynamics Distributed Learning based on Entropy-Driven Game Dynamics Bruno Gaujal joint work with Pierre Coucheney and Panayotis Mertikopoulos Inria Aug., 2014 Model Shared resource systems (network, processors)

More information

An Application of Perturbation Methods in Evolutionary Ecology

An Application of Perturbation Methods in Evolutionary Ecology Dynamics at the Horsetooth Volume 2A, 2010. Focused Issue: Asymptotics and Perturbations An Application of Perturbation Methods in Evolutionary Ecology Department of Mathematics Colorado State University

More information

Topic 6: Projected Dynamical Systems

Topic 6: Projected Dynamical Systems Topic 6: Projected Dynamical Systems John F. Smith Memorial Professor and Director Virtual Center for Supernetworks Isenberg School of Management University of Massachusetts Amherst, Massachusetts 01003

More information

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:

More information

Introduction to Dynamical Systems

Introduction to Dynamical Systems Introduction to Dynamical Systems Autonomous Planar Systems Vector form of a Dynamical System Trajectories Trajectories Don t Cross Equilibria Population Biology Rabbit-Fox System Trout System Trout System

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Lecture 6: Dynamic Models

Lecture 6: Dynamic Models Lecture 6: Dynamic Models R.G. Pierse 1 Introduction Up until now we have maintained the assumption that X values are fixed in repeated sampling (A4) In this lecture we look at dynamic models, where the

More information

Spectral representations and ergodic theorems for stationary stochastic processes

Spectral representations and ergodic theorems for stationary stochastic processes AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods

More information

A NUMERICAL STUDY ON PREDATOR PREY MODEL

A NUMERICAL STUDY ON PREDATOR PREY MODEL International Conference Mathematical and Computational Biology 2011 International Journal of Modern Physics: Conference Series Vol. 9 (2012) 347 353 World Scientific Publishing Company DOI: 10.1142/S2010194512005417

More information

Derivation of Itô SDE and Relationship to ODE and CTMC Models

Derivation of Itô SDE and Relationship to ODE and CTMC Models Derivation of Itô SDE and Relationship to ODE and CTMC Models Biomathematics II April 23, 2015 Linda J. S. Allen Texas Tech University TTU 1 Euler-Maruyama Method for Numerical Solution of an Itô SDE dx(t)

More information

Robust Backtesting Tests for Value-at-Risk Models

Robust Backtesting Tests for Value-at-Risk Models Robust Backtesting Tests for Value-at-Risk Models Jose Olmo City University London (joint work with Juan Carlos Escanciano, Indiana University) Far East and South Asia Meeting of the Econometric Society

More information

KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE ANTONY G AU T I E R (LILLE)

KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE ANTONY G AU T I E R (LILLE) PROBABILITY AND MATHEMATICAL STATISTICS Vol 29, Fasc 1 (29), pp 169 18 KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE BY ANTONY G AU T I E R (LILLE)

More information

BIFURCATION PHENOMENA Lecture 1: Qualitative theory of planar ODEs

BIFURCATION PHENOMENA Lecture 1: Qualitative theory of planar ODEs BIFURCATION PHENOMENA Lecture 1: Qualitative theory of planar ODEs Yuri A. Kuznetsov August, 2010 Contents 1. Solutions and orbits. 2. Equilibria. 3. Periodic orbits and limit cycles. 4. Homoclinic orbits.

More information

Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations.

Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Wenqing Hu. 1 (Joint work with Michael Salins 2, Konstantinos Spiliopoulos 3.) 1. Department of Mathematics

More information

7. MULTIVARATE STATIONARY PROCESSES

7. MULTIVARATE STATIONARY PROCESSES 7. MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalar-valued random variables on the same probability

More information

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Sílvia Gonçalves and Benoit Perron Département de sciences économiques,

More information

2.5 Forecasting and Impulse Response Functions

2.5 Forecasting and Impulse Response Functions 2.5 Forecasting and Impulse Response Functions Principles of forecasting Forecast based on conditional expectations Suppose we are interested in forecasting the value of y t+1 based on a set of variables

More information

SF2943: TIME SERIES ANALYSIS COMMENTS ON SPECTRAL DENSITIES

SF2943: TIME SERIES ANALYSIS COMMENTS ON SPECTRAL DENSITIES SF2943: TIME SERIES ANALYSIS COMMENTS ON SPECTRAL DENSITIES This document is meant as a complement to Chapter 4 in the textbook, the aim being to get a basic understanding of spectral densities through

More information

Workshop on Theoretical Ecology and Global Change March 2009

Workshop on Theoretical Ecology and Global Change March 2009 2022-3 Workshop on Theoretical Ecology and Global Change 2-18 March 2009 Stability Analysis of Food Webs: An Introduction to Local Stability of Dynamical Systems S. Allesina National Center for Ecological

More information

Ch 9. FORECASTING. Time Series Analysis

Ch 9. FORECASTING. Time Series Analysis In this chapter, we assume the model is known exactly, and consider the calculation of forecasts and their properties for both deterministic trend models and ARIMA models. 9.1 Minimum Mean Square Error

More information

Stochastic Processes. A stochastic process is a function of two variables:

Stochastic Processes. A stochastic process is a function of two variables: Stochastic Processes Stochastic: from Greek stochastikos, proceeding by guesswork, literally, skillful in aiming. A stochastic process is simply a collection of random variables labelled by some parameter:

More information

arxiv: v1 [math.oc] 22 Sep 2016

arxiv: v1 [math.oc] 22 Sep 2016 EUIVALENCE BETWEEN MINIMAL TIME AND MINIMAL NORM CONTROL PROBLEMS FOR THE HEAT EUATION SHULIN IN AND GENGSHENG WANG arxiv:1609.06860v1 [math.oc] 22 Sep 2016 Abstract. This paper presents the equivalence

More information

Chapter 9: Forecasting

Chapter 9: Forecasting Chapter 9: Forecasting One of the critical goals of time series analysis is to forecast (predict) the values of the time series at times in the future. When forecasting, we ideally should evaluate the

More information

Econ 423 Lecture Notes: Additional Topics in Time Series 1

Econ 423 Lecture Notes: Additional Topics in Time Series 1 Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes

More information

Multivariate Generalized Ornstein-Uhlenbeck Processes

Multivariate Generalized Ornstein-Uhlenbeck Processes Multivariate Generalized Ornstein-Uhlenbeck Processes Anita Behme TU München Alexander Lindner TU Braunschweig 7th International Conference on Lévy Processes: Theory and Applications Wroclaw, July 15 19,

More information

Stationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation

Stationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 6), 936 93 Research Article Stationary distribution and pathwise estimation of n-species mutualism system with stochastic perturbation Weiwei

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information