Residual analysis with bivariate INAR(1) models

Authors: Predrag M. Popović, Faculty of Civil Engineering and Architecture, University of Niš, Serbia; Aleksandar S. Nastić, Faculty of Sciences and Mathematics, University of Niš, Serbia; Miroslav M. Ristić, Faculty of Sciences and Mathematics, University of Niš, Serbia


Abstract: In this paper we analyze forecasting errors made by random coefficient bivariate integer-valued autoregressive models of order one. These models are based on the thinning operator to support the discreteness of the data. In order to achieve a comprehensive analysis, we introduce a model that implements a binomial as well as a negative binomial thinning operator. The model has two components: survival and innovation. The forecasting errors made by each of these two sources of uncertainty are not observable in the classic way. Thus, we derive predictive distributions from which we obtain the expected value of each component of the model. We provide an example of residual analysis on real data.

Key-Words: Bivariate INAR(1) model; residual analysis; predictive distribution; binomial thinning; negative binomial thinning; geometric marginal distribution.

AMS Subject Classification: 62M10.

1. Introduction

An integer-valued time series is a sequence of integer data points measured at uniform time intervals. Such series are present in many fields of science. For example, in medicine the number of infected persons represents such a series, in finance the number of defaults, in criminology the number of committed crimes, in biology the size of the population of a species, etc. Thus, modelling integer-valued time series and better understanding their nature is a point of interest for many researchers. Some of the first models for non-negative integer-valued time series were introduced by [7], [1] and [2]. These models have an autoregressive structure, where the autoregression is achieved through the thinning operator. Models with the full autoregressive-moving average structure were investigated in [8]. Following these ideas, many models have been developed. A survey on integer-valued autoregressive processes can be found in [15].
While these are models with constant coefficients, [17] defined an integer-valued random coefficient model. Using the approach proposed by [3], [12] introduced a bivariate integer-valued random coefficient model. The dependence between the processes that this model consists of is achieved through their autoregressive components, which are based on the negative binomial thinning operator. Some modifications of this model regarding the thinning operator and the marginal distribution are discussed in [9]. In this paper we focus on analyzing the prediction errors made by these types of models. Since these models are composed

of two components, survival and innovation, there are two sources of uncertainty. We try to estimate the portion of the prediction error made by the survival and by the innovation component separately. Since these residuals are unobservable, we derive the predictive distribution and calculate the expected values of these components. Some aspects of predictive distributions for univariate models were presented in [13] and [14]. Residual analysis for univariate models was discussed in [5] and [16]. We extend this research to bivariate models with random coefficients. In addition, to cover two types of thinning operators, we introduce a bivariate model whose survival components are generated by different thinning operators, namely binomial and negative binomial. This mix of thinning operators makes it possible to model two dependent processes whose survival parts have different properties. While the survival component generated by the negative binomial thinning operator does produce new members of the series, the one generated by the binomial thinning does not, and new members depend only on the innovation component. To motivate the model we consider two data series: the monthly count of motor vehicle thefts and the monthly count of drug dealing activities. The first series is characterized by the fact that offended persons are not provoked to commit the same criminal act, while the second series is to a large extent generated by itself, since some amount of drugs is resold many times. The paper is organized as follows. In Section 2 we discuss the general form of bivariate integer-valued autoregressive models of order one with random coefficients. Section 3 introduces a bivariate model with both binomial and negative binomial thinning operators. We discuss residual analysis in Section 4.
Real data modelling is considered in Section 5.

2. Bivariate INAR(1) models

In this section we state the general form of the random coefficient bivariate integer-valued autoregressive model of order one (BINAR(1)). In order to define a model suitable for time series of counts, we use thinning operators. The binomial thinning operator is defined as $\alpha \circ X = \sum_{i=1}^{X} B_i$, where $X$ is a non-negative integer-valued random variable and $\{B_i\}$ is a sequence of i.i.d. Bernoulli random variables with mean $\alpha$. The negative binomial thinning operator is defined as $\alpha * X = \sum_{i=1}^{X} G_i$, where $\{G_i\}$ is a sequence of i.i.d. geometric random variables with mean $\alpha$. For the time being we will not specify the thinning operator in the definition of the BINAR(1) model. Let us denote a non-negative bivariate time series of counts by $Z_n$ and introduce a random matrix $A_n = \begin{bmatrix} U_{1n} & U_{2n} \\ V_{1n} & V_{2n} \end{bmatrix}$, whose elements have the joint probability mass function defined by $P(U_{1n}=\alpha_1, U_{2n}=0) = p = 1 - P(U_{1n}=0, U_{2n}=\alpha_1)$ and $P(V_{1n}=\alpha_2, V_{2n}=0) = q = 1 - P(V_{1n}=0, V_{2n}=\alpha_2)$, where $\alpha_1, \alpha_2 \in (0,1)$

and $p, q \in [0,1]$. Then the structure of the BINAR(1) model is given by

(2.1)  $Z_n = A_n * Z_{n-1} + e_n$, $n \geq 1$,

where $\{e_n\}$ represents the innovation process, which is composed of two independent series. The thinning operator is denoted by $*$ and it acts as matrix multiplication. The two processes that figure in $Z_n$ are mutually dependent, and their dependence is achieved through the autoregressive components, which are named survival processes. The coefficients that figure in (2.1) are random variables, which makes this model significantly different from similar multivariate INAR models (such as the ones presented in [4] and [6]). Notice that

$E(A_n) = A = \begin{bmatrix} \alpha_1 p & \alpha_1 (1-p) \\ \alpha_2 q & \alpha_2 (1-q) \end{bmatrix}.$

It is easy to show that $E(A_n * Z_{n-1}) = A\, E(Z_{n-1})$. Following the discussion from [6], $I - A$ is a non-singular matrix if all eigenvalues of $A$ are inside the unit circle, which is proved for the matrix $A$ in [12]. All this implies that $E(Z_n) = (I-A)^{-1} E(e_n)$. Since $(I-A)^{-1}$ is a matrix of finite values, $E(Z_0) < \infty$ iff $E(e_1) < \infty$. The conditional expectation for process (2.1) is $E(Z_{n+k} \mid Z_n) = A^k Z_n + (I-A)^{-1}(I-A^k)(I-A)\mu$, where $\mu = E(Z_n)$. The correlation structure of the BINAR(1) model is given by $\mathrm{Cov}(Z_{n+k}, Z_n) = A^k \mathrm{Var}(Z_n)$, $k \geq 0$. Since the eigenvalues of the matrix $A$ are inside the unit circle, the covariance tends to zero and the conditional expectation tends to the unconditional one as $k$ tends to infinity. More details on the correlation structure can be found in [9].

3. Model with mixed thinning operators

In this section we introduce a new bivariate time series model $\{(X_{1,n}, X_{2,n})\}$, $n \in N_0$, where the two time series are dependent but evolve under different thinning operators. Let $\{X_{1,n}\}$ and $\{X_{2,n}\}$ be two non-negative integer-valued time series with probability mass function $P(X_{i,n} = k) = \mu^k/(1+\mu)^{k+1}$, $k \geq 0$, $\mu > 0$, $i \in \{1,2\}$. A mixed geometric bivariate autoregressive process of order one (BVMIXGINAR(1)) is given by the following equations

(3.1)  $X_{1,n} = \begin{cases} \alpha_1 \circ X_{1,n-1} + \varepsilon_{1,n}, & \text{w.p. } p, \\ \alpha_1 \circ X_{2,n-1} + \varepsilon_{1,n}, & \text{w.p. } 1-p, \end{cases}$

(3.2)  $X_{2,n} = \begin{cases} \alpha_2 * X_{1,n-1} + \varepsilon_{2,n}, & \text{w.p. } q, \\ \alpha_2 * X_{2,n-1} + \varepsilon_{2,n}, & \text{w.p. } 1-q, \end{cases}$

where $\{\varepsilon_{1,n}\}$ and $\{\varepsilon_{2,n}\}$ are i.i.d. sequences. The random vectors $(\varepsilon_{1,n}, \varepsilon_{2,n})$ and $(X_{1,m}, X_{2,m})$ are independent for all $m < n$. The thinning operators are defined in the previous section, and the counting series in $\alpha_1 \circ X_{1,n-1}$, $\alpha_1 \circ X_{2,n-1}$, $\alpha_2 * X_{1,n-1}$ and $\alpha_2 * X_{2,n-1}$ are mutually independent for all $n \in N_0$ and are also independent of the innovation processes $\{\varepsilon_{1,n}\}$ and $\{\varepsilon_{2,n}\}$. The distributions of the innovation processes are given in Theorem 3.1 below.
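As an illustrative sketch (not part of the original paper), the two thinning operators and one transition of equations (3.1)-(3.2) can be coded directly. The function names and parameter values below are hypothetical, and the innovations are passed in as given values:

```python
import random

def binomial_thinning(alpha, x, rng=random):
    # alpha ∘ x: sum of x i.i.d. Bernoulli(alpha) counting variables
    return sum(1 for _ in range(x) if rng.random() < alpha)

def negative_binomial_thinning(alpha, x, rng=random):
    # alpha * x: sum of x i.i.d. geometric variables with mean alpha,
    # i.e. P(G = k) = alpha^k / (1 + alpha)^(k + 1), k = 0, 1, 2, ...
    total = 0
    for _ in range(x):
        # count "successes" before the first failure, success prob alpha/(1+alpha)
        while rng.random() < alpha / (1 + alpha):
            total += 1
    return total

def bvmixginar_step(x1_prev, x2_prev, a1, a2, p, q, eps1, eps2, rng=random):
    # One transition of (3.1)-(3.2): with probability p (resp. q) the survival
    # part thins the own series' previous value, otherwise the other series'.
    s1 = x1_prev if rng.random() < p else x2_prev
    s2 = x1_prev if rng.random() < q else x2_prev
    x1 = binomial_thinning(a1, s1, rng) + eps1
    x2 = negative_binomial_thinning(a2, s2, rng) + eps2
    return x1, x2
```

Thinning a zero count always yields zero, so starting from $(0,0)$ the next state equals the innovations alone.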

Theorem 3.1. Let $X_{1,0}$ and $X_{2,0}$ have the $\mathrm{Geom}(\frac{\mu}{1+\mu})$ distribution, where $\mu > 0$. The stationary bivariate time series $\{(X_{1,n}, X_{2,n})\}_{n \in N_0}$ given by equations (3.1) and (3.2) has $\mathrm{Geom}(\frac{\mu}{1+\mu})$ marginal distributions if and only if the processes $\{\varepsilon_{1,n}\}$ and $\{\varepsilon_{2,n}\}$ are distributed as

(3.3)  $\varepsilon_{1,n} \stackrel{d}{=} \begin{cases} \mathrm{Geom}(\frac{\mu}{1+\mu}), & \text{w.p. } 1-\alpha_1, \\ 0, & \text{w.p. } \alpha_1, \end{cases}$

(3.4)  $\varepsilon_{2,n} \stackrel{d}{=} \begin{cases} \mathrm{Geom}(\frac{\mu}{1+\mu}), & \text{w.p. } \frac{\mu(1-\alpha_2)-\alpha_2}{\mu-\alpha_2}, \\ \mathrm{Geom}(\frac{\alpha_2}{1+\alpha_2}), & \text{w.p. } \frac{\alpha_2\mu}{\mu-\alpha_2}, \end{cases}$

where $\alpha_1 \in (0,1)$, $\alpha_2 \in (0, \frac{\mu}{1+\mu}]$ and $p, q \in [0,1]$.

Proof: Let us assume that the stationary time series $\{(X_{1,n}, X_{2,n})\}$ has the geometric marginal distribution $\mathrm{Geom}(\frac{\mu}{1+\mu})$, $\mu > 0$. Since the random variables $X_{1,n}$ and $X_{2,n}$ are equal in distribution, considering probability generating functions we obtain $\Phi_{X_{1,n}}(s) = \Phi_{\varepsilon_{1,n}}(s)\, \Phi_{X_{1,n-1}}(1-\alpha_1+\alpha_1 s)$, which follows from (3.1). From their geometric distribution we obtain

$\Phi_{\varepsilon_{1,n}}(s) = \frac{1+\mu\alpha_1(1-s)}{1+\mu(1-s)} = \alpha_1 + (1-\alpha_1)\frac{1}{1+\mu-\mu s},$

which proves equation (3.3). In a similar manner we derive the probability generating function of $\varepsilon_{2,n}$ and obtain

$\Phi_{\varepsilon_{2,n}}(s) = \frac{(1+\mu)(1+\alpha_2-\alpha_2 s)-\mu}{(1+\mu-\mu s)(1+\alpha_2-\alpha_2 s)} = \frac{\mu(1-\alpha_2)-\alpha_2}{\mu-\alpha_2}\cdot\frac{1}{1+\mu-\mu s} + \frac{\alpha_2\mu}{\mu-\alpha_2}\cdot\frac{1}{1+\alpha_2-\alpha_2 s}.$

Now, equation (3.4) follows under the constraints given in [11] for the NGINAR(1) model. Conversely, let us assume that the distributions of the random variables $\varepsilon_{1,n}$ and $\varepsilon_{2,n}$ are given by equations (3.3) and (3.4), respectively. Since $X_{1,0} \stackrel{d}{=} X_{2,0} \stackrel{d}{=} \mathrm{Geom}(\mu/(1+\mu))$, we obtain

$\Phi_{X_{1,1}}(s) = \Phi_{X_{1,0}}(1-\alpha_1+\alpha_1 s)\,\Phi_{\varepsilon_{1,1}}(s) = \frac{1}{1+\mu\alpha_1(1-s)}\cdot\frac{1+\mu\alpha_1(1-s)}{1+\mu(1-s)} = \frac{1}{1+\mu-\mu s}$

and

$\Phi_{X_{2,1}}(s) = \Phi_{X_{2,0}}\Big(\frac{1}{1+\alpha_2-\alpha_2 s}\Big)\,\Phi_{\varepsilon_{2,1}}(s) = \frac{1+\alpha_2-\alpha_2 s}{(1+\mu)(1+\alpha_2-\alpha_2 s)-\mu}\cdot\frac{(1+\mu)(1+\alpha_2-\alpha_2 s)-\mu}{(1+\mu-\mu s)(1+\alpha_2-\alpha_2 s)} = \frac{1}{1+\mu-\mu s}.$

Thus, $X_{1,1}$ and $X_{2,1}$ have the geometric distribution with parameter $\mu/(1+\mu)$. Using mathematical induction we can prove that $X_{1,n} \stackrel{d}{=} X_{2,n} \stackrel{d}{=} \mathrm{Geom}(\mu/(1+\mu))$ for any $n \in N_0$.
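To sanity-check Theorem 3.1, one can tabulate the mixture pmfs of the innovations and verify numerically that they are proper distributions with the means forced by stationarity, $E(\varepsilon_{i,n}) = (1-\alpha_i)\mu$. A sketch with hypothetical parameter values ($\alpha_2$ must satisfy $\alpha_2 \leq \mu/(1+\mu)$):

```python
def geom_pmf(mu, k):
    # P(G = k) = mu^k / (1 + mu)^(k + 1), k >= 0
    return mu**k / (1 + mu)**(k + 1)

def eps1_pmf(a1, mu, k):
    # equation (3.3): 0 w.p. alpha_1, Geom(mu/(1+mu)) w.p. 1 - alpha_1
    return (a1 if k == 0 else 0.0) + (1 - a1) * geom_pmf(mu, k)

def eps2_pmf(a2, mu, k):
    # equation (3.4): mixture of Geom(mu/(1+mu)) and Geom(alpha_2/(1+alpha_2))
    w = (mu * (1 - a2) - a2) / (mu - a2)   # weight of the Geom(mu/(1+mu)) part
    return w * geom_pmf(mu, k) + (1 - w) * geom_pmf(a2, k)

a1, a2, mu = 0.5, 0.3, 1.6                 # hypothetical, a2 <= mu/(1+mu)
mass1 = sum(eps1_pmf(a1, mu, k) for k in range(400))
mass2 = sum(eps2_pmf(a2, mu, k) for k in range(400))
mean1 = sum(k * eps1_pmf(a1, mu, k) for k in range(400))
mean2 = sum(k * eps2_pmf(a2, mu, k) for k in range(400))
# stationarity of (3.1)-(3.2) forces E(eps_i) = (1 - alpha_i) * mu
```

The truncation at 400 terms is far beyond where the geometric tails become negligible for these parameter values.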

Even if we assume that $X_{1,0}$ and $X_{2,0}$ have the same arbitrary distribution, $X_{1,n}$ as well as $X_{2,n}$ converges in distribution to the geometric distribution $\mathrm{Geom}(\mu/(1+\mu))$, as $n \to \infty$, if the random variables $\varepsilon_{1,n}$ and $\varepsilon_{2,n}$ have the distributions given by Theorem 3.1. This can be proved with the following two equations. The first one is the telescoping product

$\Phi_{X_{1,n}}(s) = \Phi_{X_{1,n-1}}(1-\alpha_1+\alpha_1 s)\,\Phi_{\varepsilon_{1,n}}(s) = \Phi_{X_{1,0}}(1-\alpha_1^n+\alpha_1^n s)\prod_{k=0}^{n-1}\frac{1+\mu\alpha_1^{k+1}(1-s)}{1+\mu\alpha_1^{k}(1-s)} = \Phi_{X_{1,0}}(1-\alpha_1^n+\alpha_1^n s)\,\frac{1+\mu\alpha_1^{n}(1-s)}{1+\mu(1-s)} \xrightarrow{n\to\infty} \frac{1}{1+\mu-\mu s}.$

The second equation is obtained analogously: iterating $\Phi_{X_{2,n}}(s) = \Phi_{X_{2,n-1}}\big(\frac{1}{1+\alpha_2-\alpha_2 s}\big)\Phi_{\varepsilon_{2,n}}(s)$, the $n$-fold composition of $s \mapsto 1/(1+\alpha_2-\alpha_2 s)$ converges to the fixed point $1$ (so the term involving $\Phi_{X_{2,0}}$ tends to $1$), the accompanying product of innovation generating functions telescopes in the same way, and $\Phi_{X_{2,n}}(s) \to 1/(1+\mu-\mu s)$ as $n \to \infty$.

The random variables $X_{1,n}$ and $X_{2,n}$ are independent for known $X_{1,n-1}$ and $X_{2,n-1}$. Thus, the conditional distribution of $(X_{1,n}, X_{2,n})$, given $(X_{1,n-1}, X_{2,n-1})$, is defined as

$P(X_{1,n}=x, X_{2,n}=y \mid X_{1,n-1}=u, X_{2,n-1}=v) = P(X_{1,n}=x \mid X_{1,n-1}=u, X_{2,n-1}=v)\, P(X_{2,n}=y \mid X_{1,n-1}=u, X_{2,n-1}=v).$

The conditional probability mass function of the random variable $X_{1,n}$ for given $X_{1,n-1}$ and $X_{2,n-1}$ has the form

(3.5)  $P(X_{1,n}=x \mid X_{1,n-1}=u, X_{2,n-1}=v) = p\sum_{k=0}^{\min(x,u)} P(\varepsilon_{1,n}=x-k)\, P(\alpha_1 \circ X_{1,n-1}=k \mid X_{1,n-1}=u) + (1-p)\sum_{k=0}^{\min(x,v)} P(\varepsilon_{1,n}=x-k)\, P(\alpha_1 \circ X_{2,n-1}=k \mid X_{2,n-1}=v).$

Similarly, for $X_{2,n}$ the form is

(3.6)  $P(X_{2,n}=y \mid X_{1,n-1}=u, X_{2,n-1}=v) = q\sum_{k=0}^{y} P(\varepsilon_{2,n}=y-k)\, P(\alpha_2 * X_{1,n-1}=k \mid X_{1,n-1}=u) + (1-q)\sum_{k=0}^{y} P(\varepsilon_{2,n}=y-k)\, P(\alpha_2 * X_{2,n-1}=k \mid X_{2,n-1}=v).$

The random variables $\alpha_1 \circ X$ and $\alpha_2 * X$ under the condition $X=u$ have the binomial and the negative binomial distribution with parameters $(u, \alpha_1)$ and $(u, \frac{\alpha_2}{1+\alpha_2})$, respectively (where the probability mass function of the negative binomial distribution is taken as $P(\alpha_2 * X = k \mid X=u) = \binom{u+k-1}{k}\frac{\alpha_2^k}{(1+\alpha_2)^{k+u}}$). Notice that the probability mass functions of the innovation processes are, respectively,

$P(\varepsilon_{1,n}=x-k) = 1_{\{x=k\}}\,\alpha_1 + (1-\alpha_1)\frac{\mu^{x-k}}{(1+\mu)^{x-k+1}},$

$P(\varepsilon_{2,n}=y-k) = \frac{\mu(1-\alpha_2)-\alpha_2}{\mu-\alpha_2}\cdot\frac{\mu^{y-k}}{(1+\mu)^{y-k+1}} + \frac{\alpha_2\mu}{\mu-\alpha_2}\cdot\frac{\alpha_2^{y-k}}{(1+\alpha_2)^{y-k+1}},$

where $1_A$ is the indicator function of a random event $A$.

The estimation of the unknown parameters of bivariate INAR(1) models with random coefficients is discussed in detail in [9]. We consider the conditional maximum likelihood method for the parameter estimation of the model presented in this paper. For the given values $\{(X_{1,k}, X_{2,k})\}_{k=0,\dots,n}$, we set the conditional log-likelihood function as

$L(\theta) = \sum_{i=1}^{n} \ln P(X_{1,i}=x_{1,i}, X_{2,i}=x_{2,i} \mid X_{1,i-1}=x_{1,i-1}, X_{2,i-1}=x_{2,i-1}, \theta),$

where $\theta = (\alpha_1, \alpha_2, p, q, \mu)$ is the vector of unknown parameters. The probability mass function is defined as a product of the functions (3.5) and (3.6). The maximization of the log-likelihood function is obtained by a numerical procedure, which, in our case, is conducted through the programming language R and the function nlm.

4. Residuals

The standard statistic used for determining goodness of fit is obtained by summing squared residuals. The residuals are obtained as the difference between

the value at time $n$ and the expected value of the process at time $n$ given the values at $n-1$, i.e.,

$r_{X_1,n} = X_{1,n} - \alpha_1 p X_{1,n-1} - \alpha_1(1-p)X_{2,n-1} - \mu_{\varepsilon_1},$
$r_{X_2,n} = X_{2,n} - \alpha_2 q X_{1,n-1} - \alpha_2(1-q)X_{2,n-1} - \mu_{\varepsilon_2},$

where $\mu_{\varepsilon_i}$ are the expected values of the random variables $\varepsilon_{i,n}$, $i \in \{1,2\}$. Since our process is composed of two sources of uncertainty (the survival process and the innovation process), it would be useful to track the residuals of each source separately. This idea for the one-dimensional INAR process was presented in [5] and [16], while we extend it to the bivariate case where the coefficients of the model are random variables. Residual analysis for a bivariate model was also investigated in [10], but that model has constant coefficients, independent survival processes and dependent innovation processes, which makes it significantly different from the BVMIXGINAR(1) model. If we introduce the two pairs of random variables $(U_{1n}, U_{2n})$ and $(V_{1n}, V_{2n})$ defined in Section 2, we can represent the BVMIXGINAR(1) model as

(4.1)  $X_{1,n} = U_{1n} \circ X_{1,n-1} + U_{2n} \circ X_{2,n-1} + \varepsilon_{1,n},$
(4.2)  $X_{2,n} = V_{1n} * X_{1,n-1} + V_{2n} * X_{2,n-1} + \varepsilon_{2,n}.$

The process in this form is more tractable in terms of the survival and innovation components. Therefore, we get two sets of residuals: $r^{sur}_{X_1,n} = U_{1n} \circ X_{1,n-1} + U_{2n} \circ X_{2,n-1} - \alpha_1 p X_{1,n-1} - \alpha_1(1-p)X_{2,n-1}$ and $r^{in}_{X_1,n} = \varepsilon_{1,n} - \mu_{\varepsilon_1}$ (analogously for the process $\{X_{2,n}\}$). The problem that arises here is that the thinning components and the innovation components are not observable. Thus, we have to consider their conditional expectations with respect to the $\sigma$-algebra generated by the vectors $(X_{1,n}, X_{2,n}), (X_{1,n-1}, X_{2,n-1}), \dots, (X_{1,0}, X_{2,0})$. Since the process $\{(X_{1,n}, X_{2,n})\}$ is lag-one dependent, we investigate conditional expectations with respect to the $\sigma$-algebra generated only by the random vectors at moments $n$ and $n-1$, $F_n = \sigma(X_{1,n}, X_{2,n}, X_{1,n-1}, X_{2,n-1})$. Let us introduce the notations $P_{u,v}(A) = P(A \mid X_{1,n-1}=u, X_{2,n-1}=v)$ and $P_{x_1,x_2,u,v}(A) = P(A \mid X_{1,n}=x_1, X_{2,n}=x_2, X_{1,n-1}=u, X_{2,n-1}=v)$.
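As an illustrative sketch (not the authors' code; the paper maximizes the likelihood with R's nlm), the conditional pmfs (3.5)-(3.6) and the conditional log-likelihood $L(\theta)$ can be written out directly. All function names and parameter values here are hypothetical:

```python
from math import comb, log

def geom_pmf(mu, k):
    return mu**k / (1 + mu)**(k + 1) if k >= 0 else 0.0

def eps1_pmf(a1, mu, k):
    # equation (3.3)
    if k < 0:
        return 0.0
    return (a1 if k == 0 else 0.0) + (1 - a1) * geom_pmf(mu, k)

def eps2_pmf(a2, mu, k):
    # equation (3.4); requires a2 <= mu / (1 + mu)
    if k < 0:
        return 0.0
    w = (mu * (1 - a2) - a2) / (mu - a2)
    return w * geom_pmf(mu, k) + (1 - w) * geom_pmf(a2, k)

def binom_pmf(n, a, k):
    # P(Bin(n, a) = k)
    return comb(n, k) * a**k * (1 - a)**(n - k) if 0 <= k <= n else 0.0

def nb_thin_pmf(u, a, k):
    # P(a * X = k | X = u) = C(u+k-1, k) a^k / (1+a)^(k+u)
    if u == 0:
        return 1.0 if k == 0 else 0.0
    return comb(u + k - 1, k) * a**k / (1 + a)**(k + u)

def cond_pmf_x1(x, u, v, a1, mu, p):
    # equation (3.5)
    s1 = sum(eps1_pmf(a1, mu, x - k) * binom_pmf(u, a1, k) for k in range(min(x, u) + 1))
    s2 = sum(eps1_pmf(a1, mu, x - k) * binom_pmf(v, a1, k) for k in range(min(x, v) + 1))
    return p * s1 + (1 - p) * s2

def cond_pmf_x2(y, u, v, a2, mu, q):
    # equation (3.6)
    s1 = sum(eps2_pmf(a2, mu, y - k) * nb_thin_pmf(u, a2, k) for k in range(y + 1))
    s2 = sum(eps2_pmf(a2, mu, y - k) * nb_thin_pmf(v, a2, k) for k in range(y + 1))
    return q * s1 + (1 - q) * s2

def cond_loglik(series, theta):
    # series: list of (x1, x2) pairs; theta = (a1, a2, p, q, mu)
    a1, a2, p, q, mu = theta
    total = 0.0
    for (u, v), (x1, x2) in zip(series[:-1], series[1:]):
        total += log(cond_pmf_x1(x1, u, v, a1, mu, p))
        total += log(cond_pmf_x2(x2, u, v, a2, mu, q))
    return total
```

Each conditional pmf sums to one over its support, and maximizing cond_loglik numerically gives the conditional maximum likelihood estimates.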
The conditional probability mass function of the first addend in equation (4.1) with respect to the $\sigma$-algebra $F_n$, for $m, x_1, x_2, u, v \in N_0$, is

$P_{x_1,x_2,u,v}(U_{1n} \circ X_{1,n-1} = m) = \frac{P_{u,v}(U_{1n} \circ X_{1,n-1} = m,\ U_{1n} \circ X_{1,n-1} + U_{2n} \circ X_{2,n-1} + \varepsilon_{1,n} = x_1)}{P_{u,v}(X_{1,n}=x_1)}$
$= \frac{1}{P_{u,v}(X_{1,n}=x_1)}\big[p\, P_{u,v}(\alpha_1 \circ X_{1,n-1} = m,\ 0 \circ X_{2,n-1} + \varepsilon_{1,n} = x_1 - m) + (1-p)\, P_{u,v}(0 \circ X_{1,n-1} = m,\ \alpha_1 \circ X_{2,n-1} + \varepsilon_{1,n} = x_1 - m)\big]$
$= \frac{1}{P_{u,v}(X_{1,n}=x_1)}\big[p\, P(\mathrm{Bin}(u,\alpha_1)=m)\, P(\varepsilon_{1,n}=x_1-m) + (1-p)\, 1_{\{m=0\}}\, P(\mathrm{Bin}(v,\alpha_1) + \varepsilon_{1,n} = x_1)\big],$

where $\mathrm{Bin}(u,\alpha)$ denotes a random variable with the binomial distribution with parameters $u$ and $\alpha$. In a similar manner, for $r, k, s \in N_0$, we obtain the following three equations:

$P_{x_1,x_2,u,v}(U_{2n} \circ X_{2,n-1} = r) = \frac{1}{P_{u,v}(X_{1,n}=x_1)}\big[p\, 1_{\{r=0\}}\, P(\mathrm{Bin}(u,\alpha_1)+\varepsilon_{1,n}=x_1) + (1-p)\, P(\mathrm{Bin}(v,\alpha_1)=r)\, P(\varepsilon_{1,n}=x_1-r)\big],$

$P_{x_1,x_2,u,v}(V_{1n} * X_{1,n-1} = k) = \frac{1}{P_{u,v}(X_{2,n}=x_2)}\big[q\, P(\mathrm{NB}(u,\alpha_2)=k)\, P(\varepsilon_{2,n}=x_2-k) + (1-q)\, 1_{\{k=0\}}\, P(\mathrm{NB}(v,\alpha_2)+\varepsilon_{2,n}=x_2)\big],$

$P_{x_1,x_2,u,v}(V_{2n} * X_{2,n-1} = s) = \frac{1}{P_{u,v}(X_{2,n}=x_2)}\big[q\, 1_{\{s=0\}}\, P(\mathrm{NB}(u,\alpha_2)+\varepsilon_{2,n}=x_2) + (1-q)\, P(\mathrm{NB}(v,\alpha_2)=s)\, P(\varepsilon_{2,n}=x_2-s)\big].$

In the last two equations the notation $\mathrm{NB}(u,\alpha)$ stands for a random variable with the negative binomial distribution with parameters $u$ and $\frac{\alpha}{1+\alpha}$. With these results in mind and applying some algebra, we obtain the following equations:

$E(U_{in} \circ X_{i,n-1} \mid F_n) = \sum_{j} j\, P_{x_1,x_2,u,v}(U_{in} \circ X_{i,n-1} = j)$
$= \frac{p_i}{P_{u,v}(X_{1,n}=x_1)} \sum_{j=1}^{\min(u_i,x_1)} j \binom{u_i}{j} \alpha_1^{j} (1-\alpha_1)^{u_i-j}\, P(\varepsilon_{1,n}=x_1-j)$
$= \frac{\alpha_1 p_i u_i}{P_{u,v}(X_{1,n}=x_1)} \sum_{j=0}^{\min(u_i,x_1)-1} \binom{u_i-1}{j} \alpha_1^{j} (1-\alpha_1)^{u_i-1-j}\, P(\varepsilon_{1,n}=x_1-1-j)$
$= \frac{\alpha_1 p_i u_i}{P_{u,v}(X_{1,n}=x_1)}\, P_{i,u_i-1}(\alpha_1 \circ X_{i,n-1} + \varepsilon_{1,n} = x_1 - 1)$

and

$E(V_{in} * X_{i,n-1} \mid F_n) = \sum_{j=1}^{x_2} j\, P_{x_1,x_2,u,v}(V_{in} * X_{i,n-1} = j)$
$= \frac{q_i}{P_{u,v}(X_{2,n}=x_2)} \sum_{j=1}^{x_2} j \binom{u_i+j-1}{j} \frac{\alpha_2^{j}}{(1+\alpha_2)^{u_i+j}}\, P(\varepsilon_{2,n}=x_2-j)$
$= \frac{\alpha_2 q_i u_i}{P_{u,v}(X_{2,n}=x_2)} \sum_{j=0}^{x_2-1} \binom{u_i+j}{j} \frac{\alpha_2^{j}}{(1+\alpha_2)^{u_i+1+j}}\, P(\varepsilon_{2,n}=x_2-1-j)$
$= \frac{\alpha_2 q_i u_i}{P_{u,v}(X_{2,n}=x_2)}\, P_{i,u_i+1}(\alpha_2 * X_{i,n-1} + \varepsilon_{2,n} = x_2 - 1),$

where we have introduced the notations $P_{i,x}(A) = P(A \mid X_{i,n-1}=x)$, $i=1,2$, $p_1=p$, $p_2=1-p$, $q_1=q$, $q_2=1-q$, $u_1=u$ and $u_2=v$. Thus, we can conclude that

the conditional expectation of the survival part of the process (4.1) is calculated as

$E(U_{1n} \circ X_{1,n-1} + U_{2n} \circ X_{2,n-1} \mid F_n) = p\, E(\alpha_1 \circ X_{1,n-1} \mid F_n) + (1-p)\, E(\alpha_1 \circ X_{2,n-1} \mid F_n)$
$= \frac{1}{P_{u,v}(X_{1,n}=x_1)}\big[\alpha_1 p u\, P_{1,u-1}(\alpha_1 \circ X_{1,n-1} + \varepsilon_{1,n} = x_1 - 1) + \alpha_1 (1-p) v\, P_{2,v-1}(\alpha_1 \circ X_{2,n-1} + \varepsilon_{1,n} = x_1 - 1)\big]$

and for the process (4.2) as

$E(V_{1n} * X_{1,n-1} + V_{2n} * X_{2,n-1} \mid F_n) = q\, E(\alpha_2 * X_{1,n-1} \mid F_n) + (1-q)\, E(\alpha_2 * X_{2,n-1} \mid F_n)$
$= \frac{1}{P_{u,v}(X_{2,n}=x_2)}\big[q u \alpha_2\, P_{1,u+1}(\alpha_2 * X_{1,n-1} + \varepsilon_{2,n} = x_2 - 1) + (1-q) v \alpha_2\, P_{2,v+1}(\alpha_2 * X_{2,n-1} + \varepsilon_{2,n} = x_2 - 1)\big].$

We have defined the innovation processes such that $(\varepsilon_{1,n}, \varepsilon_{2,n})$ is independent of $(X_{1,m}, X_{2,m})$ for $m < n$. Since we observe the conditional expectation at time $n$ with respect to the $\sigma$-algebra $F_n$, we need to pay special attention here. The conditional probability mass functions are

$P_{x_1,x_2,u,v}(\varepsilon_{1,n}=x_1-k) = \frac{P(\varepsilon_{1,n}=x_1-k)}{P_{u,v}(X_{1,n}=x_1)}\big[p\, P_{u,v}(\alpha_1 \circ X_{1,n-1}=k) + (1-p)\, P_{u,v}(\alpha_1 \circ X_{2,n-1}=k)\big]$

and

$P_{x_1,x_2,u,v}(\varepsilon_{2,n}=x_2-k) = \frac{P(\varepsilon_{2,n}=x_2-k)}{P_{u,v}(X_{2,n}=x_2)}\big[q\, P_{u,v}(\alpha_2 * X_{1,n-1}=k) + (1-q)\, P_{u,v}(\alpha_2 * X_{2,n-1}=k)\big].$

Hence, the corresponding conditional expectations are

$E(\varepsilon_{1,n} \mid F_n) = \frac{1}{P_{u,v}(X_{1,n}=x_1)}\big[p\big(x_1\, P_{1,u}(\alpha_1 \circ X_{1,n-1}+\varepsilon_{1,n}=x_1) - \alpha_1 u\, P_{1,u-1}(\alpha_1 \circ X_{1,n-1}+\varepsilon_{1,n}=x_1-1)\big) + (1-p)\big(x_1\, P_{2,v}(\alpha_1 \circ X_{2,n-1}+\varepsilon_{1,n}=x_1) - \alpha_1 v\, P_{2,v-1}(\alpha_1 \circ X_{2,n-1}+\varepsilon_{1,n}=x_1-1)\big)\big]$

and

$E(\varepsilon_{2,n} \mid F_n) = \frac{1}{P_{u,v}(X_{2,n}=x_2)}\big[q\big(x_2\, P_{1,u}(\alpha_2 * X_{1,n-1}+\varepsilon_{2,n}=x_2) - \alpha_2 u\, P_{1,u+1}(\alpha_2 * X_{1,n-1}+\varepsilon_{2,n}=x_2-1)\big) + (1-q)\big(x_2\, P_{2,v}(\alpha_2 * X_{2,n-1}+\varepsilon_{2,n}=x_2) - \alpha_2 v\, P_{2,v+1}(\alpha_2 * X_{2,n-1}+\varepsilon_{2,n}=x_2-1)\big)\big].$

Now we can distinguish between the error from the survival and the error from the innovation process. If we sum these two values we obtain the following

result:

$r^{sur}_{X_1,n} + r^{in}_{X_1,n} = E(U_{1n} \circ X_{1,n-1} + U_{2n} \circ X_{2,n-1} \mid F_n) - \alpha_1 p X_{1,n-1} - \alpha_1(1-p)X_{2,n-1} + E(\varepsilon_{1,n} \mid F_n) - \mu_{\varepsilon_1}$

(4.3)  $= E(U_{1n} \circ X_{1,n-1} + U_{2n} \circ X_{2,n-1} + \varepsilon_{1,n} \mid F_n) - \alpha_1 p X_{1,n-1} - \alpha_1(1-p)X_{2,n-1} - \mu_{\varepsilon_1}$
$= X_{1,n} - \alpha_1 p X_{1,n-1} - \alpha_1(1-p)X_{2,n-1} - \mu_{\varepsilon_1} = r_{X_1,n}.$

We can conclude that the sum of these two error terms is equal to the error term obtained by using the conditional expectation of the process $\{X_{1,n}\}$ with respect to the $\sigma$-algebra $F_n$. The conclusion is analogous for the process $\{X_{2,n}\}$.

5. Application

In this section we discuss the characteristics of data for which the BVMIXGINAR(1) model is the most adequate. We compare the results of the BVMIXGINAR(1) model with the results of some other bivariate models. At the end of the section we analyze the prediction errors of the BVMIXGINAR(1) model and suggest how the model can be improved. We analyze data from Pittsburgh police department number 407, available online. We focus on the number of stolen vehicles (MVTHEFT) and the number of reported drug activities (C_DRUG) per month from January 1990 to December 2001. The average values of these two series are 1.74 and 1.5, and the variances are 2.98 and 5.0, respectively. The correlation between the series is positive and high. The bar plots and correlograms are given in Figure 1. Both correlograms show the presence of lag-one autocorrelation. Although there are some autocorrelations at higher lags for the C_DRUG series, the value at the first lag is dominant. The high positive correlation between the series, the overdispersion and the first-lag autocorrelation imply that BVMIXGINAR(1) might be adequate. We compare the BVMIXGINAR(1) model with the models BVNGINAR(1), introduced in [12], and BVPOINAR(1), introduced in [9], since both of these models have random coefficients and a similar structure. For the three models, we compare the values of the log-likelihood functions and the root mean square errors (RMS) of the one-step-ahead predictions. The results are presented in Table 1.
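Before examining the empirical residuals, the decomposition $r^{sur}_{X_1,n} + r^{in}_{X_1,n} = r_{X_1,n}$ of equation (4.3) can be verified numerically from the conditional-expectation formulas of Section 4. The sketch below (hypothetical parameter values; the binomial-thinning process $\{X_{1,n}\}$ only) computes both components:

```python
from math import comb

def geom_pmf(mu, k):
    return mu**k / (1 + mu)**(k + 1) if k >= 0 else 0.0

def eps1_pmf(a1, mu, k):
    # equation (3.3)
    if k < 0:
        return 0.0
    return (a1 if k == 0 else 0.0) + (1 - a1) * geom_pmf(mu, k)

def binom_pmf(n, a, k):
    return comb(n, k) * a**k * (1 - a)**(n - k) if 0 <= k <= n else 0.0

def conv(w, a1, mu, m):
    # P_{i,w}(alpha_1 ∘ X_{i,n-1} + eps_{1,n} = m), the convolution of Section 4
    if m < 0 or w < 0:
        return 0.0
    return sum(binom_pmf(w, a1, k) * eps1_pmf(a1, mu, m - k)
               for k in range(min(w, m) + 1))

def residual_components(x1, u, v, a1, mu, p):
    # E(survival | F_n) and E(eps_1 | F_n), then the two residuals;
    # the unconditional innovation mean is mu_eps1 = (1 - a1) * mu.
    denom = p * conv(u, a1, mu, x1) + (1 - p) * conv(v, a1, mu, x1)
    e_surv = (a1 * p * u * conv(u - 1, a1, mu, x1 - 1)
              + a1 * (1 - p) * v * conv(v - 1, a1, mu, x1 - 1)) / denom
    e_eps = (p * (x1 * conv(u, a1, mu, x1)
                  - a1 * u * conv(u - 1, a1, mu, x1 - 1))
             + (1 - p) * (x1 * conv(v, a1, mu, x1)
                          - a1 * v * conv(v - 1, a1, mu, x1 - 1))) / denom
    mu_eps1 = (1 - a1) * mu
    r_sur = e_surv - a1 * p * u - a1 * (1 - p) * v
    r_in = e_eps - mu_eps1
    return r_sur, r_in
```

By construction the two components sum to the ordinary residual $r_{X_1,n} = x_1 - \alpha_1 p u - \alpha_1(1-p)v - \mu_{\varepsilon_1}$, which is exactly equation (4.3).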
According to these results, BVMIXGINAR(1) is the most adequate model for these data. Notice that the models with geometric marginal distribution obtain higher values of the likelihood function. BVMIXGINAR(1) achieves a slightly higher log-likelihood value than BVNGINAR(1) but a much lower RMS for the MVTHEFT series. Modelling the C_DRUG series with a geometric distribution where the survival process evolves under negative binomial thinning provides the best results. The RMS for C_DRUG is the same for BVMIXGINAR(1) and BVNGINAR(1). The improvement with

Figure 1: Data series and autocorrelation functions for the MVTHEFT and C_DRUG series.

Table 1: Conditional maximum likelihood estimates of the model parameters (standard errors in parentheses) for the MVTHEFT and C_DRUG data series.

  BVMIXGINAR(1):  $\hat\alpha_1 = 0.48\ (0.)$,  $\hat\alpha_2\ (0.095)$,  $\hat p = 0.469\ (0.325)$,  $\hat q = 0.02\ (0.08)$,  $\hat\mu = 1.598\ (0.64)$
  BVNGINAR(1):    $\hat\alpha_1 = 0.242\ (0.66)$,  $\hat\alpha_2\ (0.095)$,  $\hat p = 0.39\ (0.244)$,  $\hat q = 0.02\ (0.078)$,  $\hat\mu = 1.604\ (0.7)$
  BVPOINAR(1):    $\hat\alpha_1 = 0.27\ (0.066)$,  $\hat\alpha_2 = 0.4\ (0.055)$,  $\hat p = 0.328\ (0.72)$,  $\hat q = 0.068\ (0.066)$,  $\hat\lambda = 1.687\ (0.1)$

BVMIXGINAR(1) is in the RMS for the MVTHEFT series. The assumption that one survival process evolves under binomial and the other under negative binomial thinning improves the prediction performance. We need this mix of thinning operators when we model two series with different behaviour, as is the case here. Since drugs, once sold, are often resold, while a stolen vehicle cannot be stolen again, we have here one process that is self-generated and one that is not. The estimated parameters of BVMIXGINAR(1) indicate that drug activities influence the number of stolen vehicles in this area, while the converse does not hold, since the value of the parameter q is statistically equal to zero. We continue the prediction performance analysis by focusing on the prediction errors made by the survival and the innovation components separately. We calculate these residuals and plot them to assess the adequacy of each component. As given by equation (4.3), the sum of these two residuals is equal to the residuals obtained by the usual definition. The residuals are presented in Figure 2. It can be noticed that the residuals of the innovation processes are much higher than the residuals of the survival processes, apart from a few cases in the C_DRUG series. Further, the correlation between the two types of residuals was computed for MVTHEFT

Figure 2: The upper panel shows the residuals for the MVTHEFT series and the lower panel for the C_DRUG series.

and for the C_DRUG series. The correlation is positive but not as high as one might expect. These results also add value to the model, since an imprecise prediction of one component can be absorbed by the prediction of the other component. Another interesting point is the low correlation between the innovation processes of the two series, which supports the structural assumption that the innovation processes are independent. The higher residuals generated by the innovation processes indicate that future work should focus on improving the innovation processes.

REFERENCES

[1] Al-Osh, M.A., Alzaid, A.A. (1987). First-order integer-valued autoregressive (INAR(1)) process, Journal of Time Series Analysis, 8.
[2] Al-Osh, M.A., Aly, A.A.E. (1992). First order autoregressive time series with negative binomial and geometric marginals, Communications in Statistics - Theory and Methods, 21.
[3] Dewald, S.L., Lewis, P.A.W., McKenzie, E. (1989). A bivariate first-order autoregressive time series model in exponential variables (BEAR(1)), Management Science, 35.

[4] Franke, J., Rao, T.S. (1993). Multivariate first-order integer-valued autoregressions, Technical Report, Mathematics Department, UMIST.
[5] Freeland, R.K., McCabe, B.P.M. (2004). Analysis of low count time series data by Poisson autoregression, Journal of Time Series Analysis, 25.
[6] Latour, A. (1997). The multivariate GINAR(p) process, Advances in Applied Probability, 29.
[7] McKenzie, E. (1985). Some simple models for discrete variate time series, Journal of the American Water Resources Association, 21.
[8] McKenzie, E. (1988). Some ARMA models for dependent sequences of Poisson counts, Advances in Applied Probability, 20.
[9] Nastić, A.S., Ristić, M.M., Popović, P.M. (2016). Estimation in a bivariate integer-valued autoregressive process, Communications in Statistics - Theory and Methods, 45.
[10] Pedeli, X., Karlis, D. (2011). A bivariate INAR(1) process with application, Statistical Modelling, 11.
[11] Ristić, M.M., Nastić, A.S., Bakouch, H.S. (2009). A new geometric first-order integer-valued autoregressive (NGINAR(1)) process, Journal of Statistical Planning and Inference, 139.
[12] Ristić, M.M., Nastić, A.S., Jayakumar, K., Bakouch, H.S. (2012). A bivariate INAR(1) time series model with geometric marginals, Applied Mathematics Letters, 25.
[13] Silva, N., Pereira, I., Silva, M.E. (2008). Estimation and forecasting in SUINAR(1) model, REVSTAT, 6.
[14] Silva, N., Pereira, I., Silva, M.E. (2009). Forecasting in INAR(1) model, REVSTAT, 7.
[15] Weiß, C.H. (2008). Thinning operations for modeling time series of counts - a survey, Advances in Statistical Analysis, 92.
[16] Park, Y., Kim, H.Y. (2012). Diagnostic checks for integer-valued autoregressive models using expected residuals, Statistical Papers, 53.
[17] Zheng, H., Basawa, I.V., Datta, S. (2006). Inference for pth-order random coefficient integer-valued autoregressive processes, Journal of Time Series Analysis, 27.


More information

(a) (3 points) Construct a 95% confidence interval for β 2 in Equation 1.

(a) (3 points) Construct a 95% confidence interval for β 2 in Equation 1. Problem 1 (21 points) An economist runs the regression y i = β 0 + x 1i β 1 + x 2i β 2 + x 3i β 3 + ε i (1) The results are summarized in the following table: Equation 1. Variable Coefficient Std. Error

More information

Chapter 12: An introduction to Time Series Analysis. Chapter 12: An introduction to Time Series Analysis

Chapter 12: An introduction to Time Series Analysis. Chapter 12: An introduction to Time Series Analysis Chapter 12: An introduction to Time Series Analysis Introduction In this chapter, we will discuss forecasting with single-series (univariate) Box-Jenkins models. The common name of the models is Auto-Regressive

More information

Random products and product auto-regression

Random products and product auto-regression Filomat 7:7 (013), 1197 103 DOI 10.98/FIL1307197B Publishe by Faculty of Sciences an Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Ranom proucts an prouct auto-regression

More information

Efficient parameter estimation for independent and INAR(1) negative binomial samples

Efficient parameter estimation for independent and INAR(1) negative binomial samples Metrika manuscript No. will be inserted by the editor) Efficient parameter estimation for independent and INAR1) negative binomial samples V. SAVANI e-mail: SavaniV@Cardiff.ac.uk, A. A. ZHIGLJAVSKY e-mail:

More information

Statistical Methods in HYDROLOGY CHARLES T. HAAN. The Iowa State University Press / Ames

Statistical Methods in HYDROLOGY CHARLES T. HAAN. The Iowa State University Press / Ames Statistical Methods in HYDROLOGY CHARLES T. HAAN The Iowa State University Press / Ames Univariate BASIC Table of Contents PREFACE xiii ACKNOWLEDGEMENTS xv 1 INTRODUCTION 1 2 PROBABILITY AND PROBABILITY

More information

Chapter 2 Detection of Changes in INAR Models

Chapter 2 Detection of Changes in INAR Models Chapter 2 Detection of Changes in INAR Models Šárka Hudecová, Marie Hušková, and Simos Meintanis Abstract In the present paper we develop on-line procedures for detecting changes in the parameters of integer

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

Acta Universitatis Carolinae. Mathematica et Physica

Acta Universitatis Carolinae. Mathematica et Physica Acta Universitatis Carolinae. Mathematica et Physica Jitka Zichová Some applications of time series models to financial data Acta Universitatis Carolinae. Mathematica et Physica, Vol. 52 (2011), No. 1,

More information

COM336: Neural Computing

COM336: Neural Computing COM336: Neural Computing http://www.dcs.shef.ac.uk/ sjr/com336/ Lecture 2: Density Estimation Steve Renals Department of Computer Science University of Sheffield Sheffield S1 4DP UK email: s.renals@dcs.shef.ac.uk

More information

You must continuously work on this project over the course of four weeks.

You must continuously work on this project over the course of four weeks. The project Five project topics are described below. You should choose one the projects. Maximum of two people per project is allowed. If two people are working on a topic they are expected to do double

More information

arxiv: v1 [stat.me] 29 Nov 2017

arxiv: v1 [stat.me] 29 Nov 2017 Fractional approaches for the distribution of innovation sequence of INAR() processes arxiv:70645v [statme] 29 Nov 207 Josemar Rodrigues, Marcelo Bourguignon 2, Manoel Santos-Neto 3,4 N Balakrishnan 5

More information

MCMC for Integer Valued ARMA Processes

MCMC for Integer Valued ARMA Processes MCMC for Integer Valued ARMA Processes Peter Neal & T. Subba Rao First version: 26 September 2005 Research Report No. 10, 2005, Probability and Statistics Group School of Mathematics, The University of

More information

Gumbel Distribution: Generalizations and Applications

Gumbel Distribution: Generalizations and Applications CHAPTER 3 Gumbel Distribution: Generalizations and Applications 31 Introduction Extreme Value Theory is widely used by many researchers in applied sciences when faced with modeling extreme values of certain

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Probability and Distributions

Probability and Distributions Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated

More information

Algorithms for Uncertainty Quantification

Algorithms for Uncertainty Quantification Algorithms for Uncertainty Quantification Tobias Neckel, Ionuț-Gabriel Farcaș Lehrstuhl Informatik V Summer Semester 2017 Lecture 2: Repetition of probability theory and statistics Example: coin flip Example

More information

Chapter 3 - Temporal processes

Chapter 3 - Temporal processes STK4150 - Intro 1 Chapter 3 - Temporal processes Odd Kolbjørnsen and Geir Storvik January 23 2017 STK4150 - Intro 2 Temporal processes Data collected over time Past, present, future, change Temporal aspect

More information

Section 8.1. Vector Notation

Section 8.1. Vector Notation Section 8.1 Vector Notation Definition 8.1 Random Vector A random vector is a column vector X = [ X 1 ]. X n Each Xi is a random variable. Definition 8.2 Vector Sample Value A sample value of a random

More information

Research Article Integer-Valued Moving Average Models with Structural Changes

Research Article Integer-Valued Moving Average Models with Structural Changes Matheatical Probles in Engineering, Article ID 21592, 5 pages http://dx.doi.org/10.1155/2014/21592 Research Article Integer-Valued Moving Average Models with Structural Changes Kaizhi Yu, 1 Hong Zou, 2

More information

Thomas J. Fisher. Research Statement. Preliminary Results

Thomas J. Fisher. Research Statement. Preliminary Results Thomas J. Fisher Research Statement Preliminary Results Many applications of modern statistics involve a large number of measurements and can be considered in a linear algebra framework. In many of these

More information

STAT 302 Introduction to Probability Learning Outcomes. Textbook: A First Course in Probability by Sheldon Ross, 8 th ed.

STAT 302 Introduction to Probability Learning Outcomes. Textbook: A First Course in Probability by Sheldon Ross, 8 th ed. STAT 302 Introduction to Probability Learning Outcomes Textbook: A First Course in Probability by Sheldon Ross, 8 th ed. Chapter 1: Combinatorial Analysis Demonstrate the ability to solve combinatorial

More information

Autoregressive Negative Binomial Processes

Autoregressive Negative Binomial Processes Autoregressive Negative Binomial Processes N. N. LEONENKO, V. SAVANI AND A. A. ZHIGLJAVSKY School of Mathematics, Cardiff University, Senghennydd Road, Cardiff, CF24 4AG, U.K. Abstract We start by studying

More information

2. Multivariate ARMA

2. Multivariate ARMA 2. Multivariate ARMA JEM 140: Quantitative Multivariate Finance IES, Charles University, Prague Summer 2018 JEM 140 () 2. Multivariate ARMA Summer 2018 1 / 19 Multivariate AR I Let r t = (r 1t,..., r kt

More information

01 Probability Theory and Statistics Review

01 Probability Theory and Statistics Review NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Advances on time series for multivariate counts

Advances on time series for multivariate counts Advances on time series for multivariate counts Dimitris Karlis Department of Statistics Athens University of Economics Besançon, July 2018 Outline 1 Introduction 2 A bivariate INAR(1) process The BINAR(1)

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

Let X and Y denote two random variables. The joint distribution of these random

Let X and Y denote two random variables. The joint distribution of these random EE385 Class Notes 9/7/0 John Stensby Chapter 3: Multiple Random Variables Let X and Y denote two random variables. The joint distribution of these random variables is defined as F XY(x,y) = [X x,y y] P.

More information

A bivariate INAR(1) process with application

A bivariate INAR(1) process with application A bivariate INAR1 process wit application Xanti Pedeli and Dimitris Karlis Department of Statistics Atens University of Economics and Business Abstract Te study of time series models for count data as

More information

Lecture 2: Repetition of probability theory and statistics

Lecture 2: Repetition of probability theory and statistics Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:

More information

Markov Chains and Hidden Markov Models

Markov Chains and Hidden Markov Models Chapter 1 Markov Chains and Hidden Markov Models In this chapter, we will introduce the concept of Markov chains, and show how Markov chains can be used to model signals using structures such as hidden

More information

Probability and Statistics

Probability and Statistics Probability and Statistics 1 Contents some stochastic processes Stationary Stochastic Processes 2 4. Some Stochastic Processes 4.1 Bernoulli process 4.2 Binomial process 4.3 Sine wave process 4.4 Random-telegraph

More information

Stat 516, Homework 1

Stat 516, Homework 1 Stat 516, Homework 1 Due date: October 7 1. Consider an urn with n distinct balls numbered 1,..., n. We sample balls from the urn with replacement. Let N be the number of draws until we encounter a ball

More information

Central Limit Theorem ( 5.3)

Central Limit Theorem ( 5.3) Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately

More information

A Guide to Modern Econometric:

A Guide to Modern Econometric: A Guide to Modern Econometric: 4th edition Marno Verbeek Rotterdam School of Management, Erasmus University, Rotterdam B 379887 )WILEY A John Wiley & Sons, Ltd., Publication Contents Preface xiii 1 Introduction

More information

Chapter 6: Random Processes 1

Chapter 6: Random Processes 1 Chapter 6: Random Processes 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

Final Exam # 3. Sta 230: Probability. December 16, 2012

Final Exam # 3. Sta 230: Probability. December 16, 2012 Final Exam # 3 Sta 230: Probability December 16, 2012 This is a closed-book exam so do not refer to your notes, the text, or any other books (please put them on the floor). You may use the extra sheets

More information

7. Forecasting with ARIMA models

7. Forecasting with ARIMA models 7. Forecasting with ARIMA models 309 Outline: Introduction The prediction equation of an ARIMA model Interpreting the predictions Variance of the predictions Forecast updating Measuring predictability

More information

Basics on Probability. Jingrui He 09/11/2007

Basics on Probability. Jingrui He 09/11/2007 Basics on Probability Jingrui He 09/11/2007 Coin Flips You flip a coin Head with probability 0.5 You flip 100 coins How many heads would you expect Coin Flips cont. You flip a coin Head with probability

More information

ARIMA Modelling and Forecasting

ARIMA Modelling and Forecasting ARIMA Modelling and Forecasting Economic time series often appear nonstationary, because of trends, seasonal patterns, cycles, etc. However, the differences may appear stationary. Δx t x t x t 1 (first

More information

PROBABILITY DISTRIBUTIONS. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception

PROBABILITY DISTRIBUTIONS. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception PROBABILITY DISTRIBUTIONS Credits 2 These slides were sourced and/or modified from: Christopher Bishop, Microsoft UK Parametric Distributions 3 Basic building blocks: Need to determine given Representation:

More information

arxiv: v1 [stat.me] 29 Nov 2017

arxiv: v1 [stat.me] 29 Nov 2017 Extended Poisson INAR(1) processes with equidispersion, underdispersion and overdispersion arxiv:1711.10940v1 [stat.me] 29 Nov 2017 Marcelo Bourguignon 1, Josemar Rodrigues 2,, and Manoel Santos-Neto 3,4,

More information

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as L30-1 EEL 5544 Noise in Linear Systems Lecture 30 OTHER TRANSFORMS For a continuous, nonnegative RV X, the Laplace transform of X is X (s) = E [ e sx] = 0 f X (x)e sx dx. For a nonnegative RV, the Laplace

More information

MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems

MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems Review of Basic Probability The fundamentals, random variables, probability distributions Probability mass/density functions

More information

5. Conditional Distributions

5. Conditional Distributions 1 of 12 7/16/2009 5:36 AM Virtual Laboratories > 3. Distributions > 1 2 3 4 5 6 7 8 5. Conditional Distributions Basic Theory As usual, we start with a random experiment with probability measure P on an

More information

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process.

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. Stationary independent increments 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. 2. If each set of increments, corresponding to non-overlapping collection of

More information

1 Review of Probability and Distributions

1 Review of Probability and Distributions Random variables. A numerically valued function X of an outcome ω from a sample space Ω X : Ω R : ω X(ω) is called a random variable (r.v.), and usually determined by an experiment. We conventionally denote

More information

EE4601 Communication Systems

EE4601 Communication Systems EE4601 Communication Systems Week 2 Review of Probability, Important Distributions 0 c 2011, Georgia Institute of Technology (lect2 1) Conditional Probability Consider a sample space that consists of two

More information

STAT 520: Forecasting and Time Series. David B. Hitchcock University of South Carolina Department of Statistics

STAT 520: Forecasting and Time Series. David B. Hitchcock University of South Carolina Department of Statistics David B. University of South Carolina Department of Statistics What are Time Series Data? Time series data are collected sequentially over time. Some common examples include: 1. Meteorological data (temperatures,

More information

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Fin. Econometrics / 53

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Fin. Econometrics / 53 State-space Model Eduardo Rossi University of Pavia November 2014 Rossi State-space Model Fin. Econometrics - 2014 1 / 53 Outline 1 Motivation 2 Introduction 3 The Kalman filter 4 Forecast errors 5 State

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables This Version: July 30, 2015 Multiple Random Variables 2 Now we consider models with more than one r.v. These are called multivariate models For instance: height and weight An

More information

Probability Lecture III (August, 2006)

Probability Lecture III (August, 2006) robability Lecture III (August, 2006) 1 Some roperties of Random Vectors and Matrices We generalize univariate notions in this section. Definition 1 Let U = U ij k l, a matrix of random variables. Suppose

More information

Preliminary statistics

Preliminary statistics 1 Preliminary statistics The solution of a geophysical inverse problem can be obtained by a combination of information from observed data, the theoretical relation between data and earth parameters (models),

More information

Pattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions

Pattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions Pattern Recognition and Machine Learning Chapter 2: Probability Distributions Cécile Amblard Alex Kläser Jakob Verbeek October 11, 27 Probability Distributions: General Density Estimation: given a finite

More information

Multivariate integer-valued autoregressive models applied to earthquake counts

Multivariate integer-valued autoregressive models applied to earthquake counts Multivariate integer-valued autoregressive models applied to earthquake counts Mathieu Boudreault, Arthur Charpentier To cite this version: Mathieu Boudreault, Arthur Charpentier. Multivariate integer-valued

More information

Gauge Plots. Gauge Plots JAPANESE BEETLE DATA MAXIMUM LIKELIHOOD FOR SPATIALLY CORRELATED DISCRETE DATA JAPANESE BEETLE DATA

Gauge Plots. Gauge Plots JAPANESE BEETLE DATA MAXIMUM LIKELIHOOD FOR SPATIALLY CORRELATED DISCRETE DATA JAPANESE BEETLE DATA JAPANESE BEETLE DATA 6 MAXIMUM LIKELIHOOD FOR SPATIALLY CORRELATED DISCRETE DATA Gauge Plots TuscaroraLisa Central Madsen Fairways, 996 January 9, 7 Grubs Adult Activity Grub Counts 6 8 Organic Matter

More information

Chapter 2. Discrete Distributions

Chapter 2. Discrete Distributions Chapter. Discrete Distributions Objectives ˆ Basic Concepts & Epectations ˆ Binomial, Poisson, Geometric, Negative Binomial, and Hypergeometric Distributions ˆ Introduction to the Maimum Likelihood Estimation

More information

THE ROYAL STATISTICAL SOCIETY 2009 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES

THE ROYAL STATISTICAL SOCIETY 2009 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES THE ROYAL STATISTICAL SOCIETY 9 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA MODULAR FORMAT MODULE 3 STOCHASTIC PROCESSES AND TIME SERIES The Society provides these solutions to assist candidates preparing

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Multivariate Random Variable

Multivariate Random Variable Multivariate Random Variable Author: Author: Andrés Hincapié and Linyi Cao This Version: August 7, 2016 Multivariate Random Variable 3 Now we consider models with more than one r.v. These are called multivariate

More information

Final Examination Statistics 200C. T. Ferguson June 11, 2009

Final Examination Statistics 200C. T. Ferguson June 11, 2009 Final Examination Statistics 00C T. Ferguson June, 009. (a) Define: X n converges in probability to X. (b) Define: X m converges in quadratic mean to X. (c) Show that if X n converges in quadratic mean

More information

Midterm Examination. STA 215: Statistical Inference. Due Wednesday, 2006 Mar 8, 1:15 pm

Midterm Examination. STA 215: Statistical Inference. Due Wednesday, 2006 Mar 8, 1:15 pm Midterm Examination STA 215: Statistical Inference Due Wednesday, 2006 Mar 8, 1:15 pm This is an open-book take-home examination. You may work on it during any consecutive 24-hour period you like; please

More information

Introduction to Probability and Statistics (Continued)

Introduction to Probability and Statistics (Continued) Introduction to Probability and Statistics (Continued) Prof. icholas Zabaras Center for Informatics and Computational Science https://cics.nd.edu/ University of otre Dame otre Dame, Indiana, USA Email:

More information

Review (Probability & Linear Algebra)

Review (Probability & Linear Algebra) Review (Probability & Linear Algebra) CE-725 : Statistical Pattern Recognition Sharif University of Technology Spring 2013 M. Soleymani Outline Axioms of probability theory Conditional probability, Joint

More information

Fundamentals of Applied Probability and Random Processes

Fundamentals of Applied Probability and Random Processes Fundamentals of Applied Probability and Random Processes,nd 2 na Edition Oliver C. Ibe University of Massachusetts, LoweLL, Massachusetts ip^ W >!^ AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS

More information

ECE534, Spring 2018: Solutions for Problem Set #5

ECE534, Spring 2018: Solutions for Problem Set #5 ECE534, Spring 08: s for Problem Set #5 Mean Value and Autocorrelation Functions Consider a random process X(t) such that (i) X(t) ± (ii) The number of zero crossings, N(t), in the interval (0, t) is described

More information

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection

SGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection SG 21006 Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 28

More information

Applied time-series analysis

Applied time-series analysis Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna October 18, 2011 Outline Introduction and overview Econometric Time-Series Analysis In principle,

More information

Gaussian processes. Basic Properties VAG002-

Gaussian processes. Basic Properties VAG002- Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity

More information

Statistics 3858 : Maximum Likelihood Estimators

Statistics 3858 : Maximum Likelihood Estimators Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

Autoregressive and Moving-Average Models

Autoregressive and Moving-Average Models Chapter 3 Autoregressive and Moving-Average Models 3.1 Introduction Let y be a random variable. We consider the elements of an observed time series {y 0,y 1,y2,...,y t } as being realizations of this randoms

More information

Extreme Value Analysis and Spatial Extremes

Extreme Value Analysis and Spatial Extremes Extreme Value Analysis and Department of Statistics Purdue University 11/07/2013 Outline Motivation 1 Motivation 2 Extreme Value Theorem and 3 Bayesian Hierarchical Models Copula Models Max-stable Models

More information

Data Mining and Analysis: Fundamental Concepts and Algorithms

Data Mining and Analysis: Fundamental Concepts and Algorithms Data Mining and Analysis: Fundamental Concepts and Algorithms dataminingbook.info Mohammed J. Zaki 1 Wagner Meira Jr. 1 Department of Computer Science Rensselaer Polytechnic Institute, Troy, NY, USA Department

More information

Delta Method. Example : Method of Moments for Exponential Distribution. f(x; λ) = λe λx I(x > 0)

Delta Method. Example : Method of Moments for Exponential Distribution. f(x; λ) = λe λx I(x > 0) Delta Method Often estimators are functions of other random variables, for example in the method of moments. These functions of random variables can sometimes inherit a normal approximation from the underlying

More information