Time Series Analysis. Asymptotic Results for Spatial ARMA Models


Communications in Statistics, Theory and Methods, 35: 671-688, 2006
Copyright Taylor & Francis Group, LLC
ISSN: 0361-0926 print / 1532-415X online
DOI: 10.1080/

Time Series Analysis

Asymptotic Results for Spatial ARMA Models

AUDE ILLIG AND BENOÎT TRUONG-VAN
Laboratoire de Statistique et Probabilités, Université Paul Sabatier, Toulouse, France

Causal quadrantal-type spatial ARMA(p, q) models with independent and identically distributed innovations are considered. In order to select the orders (p, q) of these models and to estimate their autoregressive parameters, estimators of the autoregressive coefficients derived from the extended Yule-Walker equations are defined. Consistency and asymptotic normality are obtained for these estimators. Then, spatial ARMA model identification is considered and a simulation study is given.

Keywords: Asymptotic properties; Causality; Estimation; Order selection; Spatial autoregressive moving-average models.

Mathematics Subject Classification: Primary 62M30, 62M10; Secondary 60F05.

1. Introduction

Unlike the time series autoregressive moving-average (ARMA) case, several kinds of spatial ARMA models may be defined (cf. Guyon, 1995; Tjøstheim, 1978, 1983). Most of the different ARMA representations depend on the order chosen on the lattice ℤ^d. For spatial autoregressive (AR) models having a quadrantal representation, Tjøstheim (1983) considered Yule-Walker and least squares (LS) estimators of the parameters and proved the strong consistency of both estimators and the asymptotic normality of the LS estimator, even when the innovations of these AR models are strong martingale differences. Ha and Newton (1993) proved that the Yule-Walker estimator considered in Tjøstheim (1983) is in fact biased for the asymptotic normality. They gave the explicit expression of this bias for a causal AR random field indexed by a two-dimensional lattice and proposed a new estimator, called the unbiased Yule-Walker estimator, which is asymptotically normal.

Received August 6, 2004; Accepted September 23, 2005
Address correspondence to Aude Illig, Laboratoire de Statistique et Probabilités, UMR 5583, Université Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse Cedex 4, France; E-mail: Aude.Illig@math.ups-tlse.fr

Tjøstheim (1983) used a central limit theorem for strong martingale differences, relative to the partial ordering on ℤ^d, to obtain the asymptotic normality of the LS estimators. For weaker martingale differences, defined with the lexicographic order, Huang (1992) also obtained a central limit theorem. With this total ordering, non-symmetric half-space ARMA models can be defined. Huang and Anh (1992) considered such models on a two-dimensional lattice. They gave a method, based on an inverse model associated with the original one, to estimate the orders and the parameters of the model, thereby avoiding the estimation of the innovations.

Other kinds of spatial ARMA processes, not depending on the order on ℤ^d, can also be defined. Etchison et al. (1994) introduced two-dimensional lattice separable ARMA models, which are in fact products of two ARMA processes indexed over a one-dimensional lattice set. They studied the properties of the sample partial autocorrelation function in order to identify the orders of AR models. Furthermore, Shitan and Brockwell (1995) proposed an asymptotic test of separability using the properties of the autocovariance function of a spatial separable AR model.

In this article, we are interested in model order selection and in AR parameter estimation for causal quadrantal ARMA random fields, as defined in Tjøstheim (1978), with independent and identically distributed innovations. For this purpose, we introduce estimators of the AR coefficients based on a derivation of extended Yule-Walker equations. These estimators correspond in fact to an extension to the spatial case of the sample generalized partial autocorrelation (GPAC) function known in time series (Woodward and Gray, 1981). Unlike the one-dimensional lattice case, several estimators of the GPAC function that are unbiased for asymptotic normality could be defined. Our estimators differ from those retained in Ha and Newton (1993). We establish that they are consistent and asymptotically normal. To this end, we prove in the first sections some asymptotic properties of sample autocovariances and sample crosscovariances of two linear random fields. These results extend those in Choi (2000) for moving-average (MA) random fields and also generalize the known corresponding results for time series (Brockwell and Davis, 1987). In the last section, we apply some of the obtained asymptotic results to the order selection of a spatial ARMA model and to the estimation of its AR parameters. A simulation study is also given.

2. Preliminaries and Assumptions

In this section, we recall some main notations on random fields indexed over ℤ^d. In all the following, all random fields are indexed over ℤ^d, with d ≥ 2, and ℤ^d is endowed with the usual partial order, that is, for s = (s_1, ..., s_d) and t = (t_1, ..., t_d) in ℤ^d, we write s ≤ t if s_i ≤ t_i for all i = 1, ..., d. For a, b ∈ ℤ^d such that a ≤ b and a ≠ b, the following indexing subsets of ℤ^d will be considered:

S[a, b] = {x ∈ ℤ^d : a ≤ x ≤ b},   S[a, ∞) = {x ∈ ℤ^d : a ≤ x},
S(a, b] = S[a, b] \ {a},   S(a, ∞) = S[a, ∞) \ {a}.

Let (X_t)_{t∈ℤ^d} be a real-valued, square integrable random field. Its autocovariance function is defined by

γ(u, v) = E[(X_u - E X_u)(X_v - E X_v)]   for u, v in ℤ^d.

The random field (X_t) is said to be stationary if E X_t = m for all t ∈ ℤ^d and γ(u, v) = γ(u + h, v + h) for all u, v, h ∈ ℤ^d, and strictly stationary if for all h, a, b ∈ ℤ^d, (X_j)_{j∈S[a,b]} and (X_{j+h})_{j∈S[a,b]} have the same joint distributions. Since γ(u, v) = γ(u - v, 0) for all u, v ∈ ℤ^d when (X_t)_{t∈ℤ^d} is a stationary random field, it is convenient to redefine the autocovariance function as a function of one argument as follows:

γ(h) = γ(h, 0).

Throughout this work, (ε_t)_{t∈ℤ^d} denotes a family of independent and identically distributed (i.i.d.) centered random variables with variance σ² > 0. We say that a random field (X_t)_{t∈ℤ^d} is linear if, for any t,

X_t = Σ_{j∈ℤ^d} ψ_j ε_{t-j}   (1)

with Σ_{j∈ℤ^d} |ψ_j| < ∞.

Remark 2.1. Linear random fields as defined above are strictly stationary. They are called MA(∞) random fields in Choi (2000).

For two linear random fields (X_t)_{t∈ℤ^d} and (Y_t)_{t∈ℤ^d}, we define their crosscovariance function by γ_{xy}(h) = E[X_t Y_{t+h}] for every h in ℤ^d. Given two samples (X_t)_{t∈S_N} and (Y_t)_{t∈S_N}, with S_N = S[1, N], we define the sample crosscovariance function as follows:

γ̂_{xy}(h) = (1/N(h)) Σ_{t, t+h ∈ S_N} X_t Y_{t+h}   (2)

with N(h) = Π_{i=1}^{d} (N - |h_i|), where, for N ∈ ℕ\{0}, we denote by N the element of ℤ^d whose components all equal N. In the particular case where X_t = Y_t, γ_{xy} and γ̂_{xy} are denoted γ and γ̂, respectively.

Following Tjøstheim (1978) and Guyon (1995), we say that a random field (X_t)_{t∈ℤ^d} is a spatial ARMA(p, q) with parameters p, q ∈ ℤ^d if it is stationary and satisfies the equation

X_t - Σ_{j∈S(0,p]} φ_j X_{t-j} = ε_t + Σ_{k∈S(0,q]} θ_k ε_{t-k}   (3)

where (φ_j)_{j∈S(0,p]} and (θ_k)_{k∈S(0,q]} denote, respectively, the autoregressive and the moving-average parameters, with the convention that φ_0 = θ_0 = 1. If p (respectively, q) equals 0, the sum over S(0,p] (respectively, S(0,q]) is taken to be 0 and the process is called an AR(p) (respectively, MA(q)) random field.
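To make the sample crosscovariance (2) concrete, here is a minimal Python/NumPy sketch for the case d = 2, assuming centered fields observed on an N x N grid and a componentwise nonnegative lag h; the function name and the array layout are ours, not the authors'.

```python
import numpy as np

def sample_crosscov(x, y, h1, h2):
    """Sample crosscovariance gamma_hat_xy(h) of eq. (2) for d = 2.

    x, y   : (N, N) arrays holding two centered fields on the grid S_N.
    h1, h2 : components of a nonnegative lag h.
    The sum runs over all t such that t and t + h both lie in S_N,
    and is divided by N(h) = (N - h1) * (N - h2).
    """
    N = x.shape[0]
    n_h = (N - h1) * (N - h2)
    # pair X_t with Y_{t+h}: t ranges over the (N - h1) x (N - h2) corner
    return np.sum(x[: N - h1, : N - h2] * y[h1:, h2:]) / n_h

# toy check on i.i.d. noise: the crosscovariance should be near 0 for h != 0
rng = np.random.default_rng(0)
z = rng.standard_normal((50, 50))
print(sample_crosscov(z, z, 0, 0))   # close to Var = 1
print(sample_crosscov(z, z, 1, 2))   # close to 0
```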

The ARMA random field is called causal if it has the unilateral expansion

X_t = Σ_{j∈S[0,∞)} ψ_j ε_{t-j}   (4)

with Σ_{j∈S[0,∞)} |ψ_j| < ∞.

Remark 2.2. Let φ(z) = 1 - Σ_{j∈S(0,p]} φ_j z^j and θ(z) = 1 + Σ_{j∈S(0,q]} θ_j z^j, where z^j = z_1^{j_1} ... z_d^{j_d} for z = (z_1, ..., z_d). Then a sufficient condition (cf. Tjøstheim, 1978) for the random field to be causal is that the autoregressive polynomial φ(z) has no zeroes in the closure of the open unit polydisc D^d of ℂ^d. For the identifiability of model (3), we assume in addition that these two polynomials have no common irreducible factors in the factorial ring ℂ[z_1, ..., z_d].

The ARMA order selection procedure that we consider herein is based on extended Yule-Walker equations. More precisely, for a spatial ARMA(p, q) random field (X_t)_{t∈ℤ^d} with p ≠ 0, we define, for λ ∈ S(0, ∞) and μ ∈ S[0, ∞), the coefficients φ^{(λμ)} = (φ_j^{(λμ)})_{j∈S(0,λ]} as the solution, when it exists, of the following extended Yule-Walker equations, where Y_t = X_t - Σ_{j∈S(0,λ]} φ_j^{(λμ)} X_{t-j}:

E[Y_t X_{t-μ-j}] = 0,   j ∈ S(0,λ].   (5)

If we arrange vectors and matrices in the lexicographic order, these equations can be rewritten as the single matrix equation

Γ_{λμ} φ^{(λμ)} = γ_{λμ}   (6)

with, for all i, j ∈ S(0,λ],

Γ_{λμ}(j, i) = γ(μ + j - i),   γ_{λμ}(j) = γ(μ + j).

Given a sample (X_t)_{t∈S_N}, we could have considered the extended Yule-Walker estimator of φ^{(λμ)} defined as the solution of the matrix equation

Γ̃_{λμ} φ̃^{(λμ)} = γ̃_{λμ}

where, for i, j ∈ S(0,λ],

Γ̃_{λμ}(j, i) = R(μ + j - i),   γ̃_{λμ}(j) = R(μ + j)

with, for h ∈ S[0, ∞),

R(h) = (1/N^d) Σ_{t, t-h ∈ S_N} X_t X_{t-h}.

As mentioned above, Ha and Newton (1993) have proved, for an AR(p) random field indexed over ℤ², that the estimator φ̃^{(λμ)} is biased for the asymptotic normality. The bias appears because, in the terms R(h), the summation set {t ∈ S_N : t - h ∈ S_N} depends on h while, simultaneously, the normalization coefficient does not match its cardinality. By modifying the normalization coefficient, they then proposed an unbiased estimator, called the unbiased Yule-Walker estimator, for which they proved the asymptotic normality.

Here we take another approach to obtain unbiased estimators from the extended Yule-Walker equations (5). More precisely, we define the estimators φ̂^{(λμ)} = (φ̂_j^{(λμ)})_{j∈S(0,λ]} of the coefficients (φ_j^{(λμ)})_{j∈S(0,λ]} by the equations

Σ_{t∈S_N} Ŷ_t X_{t-μ-j} = 0,   j ∈ S(0,λ],   (7)

where Ŷ_t = X_t - Σ_{j∈S(0,λ]} φ̂_j^{(λμ)} X_{t-j}. These equations can also be rewritten in the matrix form

Γ̂_{λμ} φ̂^{(λμ)} = γ̂_{λμ}   (8)

with, for all i, j ∈ S(0,λ],

Γ̂_{λμ}(j, i) = (1/N^d) Σ_{t∈S_N} X_{t-i} X_{t-μ-j},   γ̂_{λμ}(j) = (1/N^d) Σ_{t∈S_N} X_t X_{t-μ-j}.

For the convenience of the reader, we list below the assumptions that will be considered.

Assumption A1. (X_t)_{t∈ℤ^d} is a spatial ARMA(p, q) random field with p ≠ 0.

Assumption A2. (X_t)_{t∈ℤ^d} is causal.

Assumption A3. (ε_t)_{t∈ℤ^d} is a family of i.i.d. centered random variables with variance σ² > 0.

Assumption A4. E[ε_t⁴] = η σ⁴ with η being some positive constant.
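The matrix form (8) translates directly into a small linear solve. The following sketch, again for d = 2 and under our reading of (7)-(8), enumerates S(0,λ] in lexicographic order and restricts the sum to grid points where every required lagged value is observed; the helper names are hypothetical, and the normalization by the number of terms cancels in the solve.

```python
import numpy as np
from itertools import product

def extended_yw_estimate(x, lam, mu):
    """Sketch of the estimator phi_hat^(lambda, mu) of eq. (8) for d = 2.

    x        : (N, N) array, one realization of a centered field on S_N.
    lam, mu  : pairs (lam1, lam2), (mu1, mu2) with lam nonzero.
    Indices j in S(0, lam] are taken in lexicographic order, (0, 0) excluded.
    """
    lam1, lam2 = lam
    mu1, mu2 = mu
    J = [j for j in product(range(lam1 + 1), range(lam2 + 1)) if j != (0, 0)]
    # keep only grid points t for which every lag t - i and t - mu - j is observed
    off1, off2 = lam1 + mu1, lam2 + mu2
    N = x.shape[0]
    base = x[off1:, off2:]                      # X_t on the restricted grid

    def lagged(a1, a2):
        return x[off1 - a1 : N - a1, off2 - a2 : N - a2]   # X_{t-(a1,a2)}

    G = np.empty((len(J), len(J)))
    g = np.empty(len(J))
    for r, j in enumerate(J):
        xj = lagged(mu1 + j[0], mu2 + j[1])     # X_{t-mu-j}
        g[r] = np.mean(base * xj)
        for c, i in enumerate(J):
            G[r, c] = np.mean(lagged(*i) * xj)  # moment of X_{t-i} X_{t-mu-j}
    return dict(zip(J, np.linalg.solve(G, g)))  # Gamma_hat * phi_hat = gamma_hat
```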

3. Some Asymptotic Results for Linear Random Fields

We now consider two linear random fields (X_t)_{t∈ℤ^d} and (Y_t)_{t∈ℤ^d} of the form

X_t = Σ_{j∈ℤ^d} α_j ε_{t-j},   Y_t = Σ_{j∈ℤ^d} β_j ε_{t-j}   (9)

with crosscovariance function γ_{xy}. We first establish the consistency of their sample crosscovariance function γ̂_{xy} as defined in (2).

Proposition 3.1. Let (X_t)_{t∈ℤ^d} and (Y_t)_{t∈ℤ^d} be two linear random fields as defined in (9). Under Assumptions A3 and A4, for all h ∈ ℤ^d,

γ̂_{xy}(h) →_P γ_{xy}(h),   as N → ∞,

where →_P means convergence in probability as N tends to infinity.

Proof. We first note that

γ_{xy}(h) = σ² Σ_{i∈ℤ^d} α_i β_{i+h}   and   γ̂_{xy}(h) = Σ_{i∈ℤ^d} Σ_{j∈ℤ^d} α_i β_j (1/N(h)) Σ_{t, t+h∈S_N} ε_{t-i} ε_{t+h-j}.

For i = j - h, the weak law of large numbers gives

(1/N(h)) Σ_{t, t+h∈S_N} ε²_{t-i} →_P σ².

Now, let us prove that, for i ≠ j - h,

(1/N(h)) Σ_{t, t+h∈S_N} ε_{t-i} ε_{t+h-j} →_P 0.   (10)

We have

E[((1/N(h)) Σ_{t, t+h∈S_N} ε_{t-i} ε_{t+h-j})²] = (1/N(h)²) Σ_{t_1, t_1+h∈S_N} Σ_{t_2, t_2+h∈S_N} E[ε_{t_1-i} ε_{t_1+h-j} ε_{t_2-i} ε_{t_2+h-j}].

If t_1 ≠ t_2 and t_1 - i ≠ t_2 + h - j, then E[ε_{t_1-i} ε_{t_1+h-j} ε_{t_2-i} ε_{t_2+h-j}] = 0 by the independence property. If t_1 ≠ t_2 and t_1 - i = t_2 + h - j, then E[ε_{t_1-i} ε_{t_1+h-j} ε_{t_2-i} ε_{t_2+h-j}] = σ² E[ε_{t_2-i}] E[ε_{t_1+h-j}] = 0, again by the independence property.

Thus,

E[((1/N(h)) Σ_{t, t+h∈S_N} ε_{t-i} ε_{t+h-j})²] = (1/N(h)²) Σ_{t, t+h∈S_N} E[ε²_{t-i}] E[ε²_{t+h-j}].

Since i ≠ j - h, we obtain

E[((1/N(h)) Σ_{t, t+h∈S_N} ε_{t-i} ε_{t+h-j})²] = σ⁴ / N(h).

The right-hand side of the above equality converges to 0 as N tends to infinity, so that (10) follows from Chebyshev's inequality. Let us denote, for K ∈ ℕ,

Z_{N,K} = Σ_{i∈S[-K,K]} Σ_{j∈S[-K,K]} α_i β_j (1/N(h)) Σ_{t, t+h∈S_N} ε_{t-i} ε_{t+h-j},

Z_N = Σ_{i∈ℤ^d} Σ_{j∈ℤ^d} α_i β_j (1/N(h)) Σ_{t, t+h∈S_N} ε_{t-i} ε_{t+h-j}.

From (10), we deduce that

Z_{N,K} →_P Z_K = σ² Σ_{i, i+h ∈ S[-K,K]} α_i β_{i+h},   as N → ∞.

Moreover, Z_K converges to γ_{xy}(h) as K tends to infinity. Following Brockwell and Davis (1987, Proposition 6.3.9), it then suffices to show that

lim_{K→∞} lim sup_{N→∞} E|Z_N - Z_{N,K}| = 0.

But this follows immediately from the absolute summability of (α_j) and (β_j).

Remark 3.1. Tjøstheim (1983) has shown the consistency of γ̂ under martingale-difference assumptions for linear random fields having unilateral expansions like (4). It could be shown that Proposition 3.1 also holds under martingale-difference assumptions as in Tjøstheim (1983), using the same proof with only slight modifications. Choi (2000) proved consistency for various estimators of the autocovariance function of a spatial linear random field. This result is a special case of Proposition 3.1 when X_t = Y_t.

We consider again the two linear random fields (X_t)_{t∈ℤ^d} and (Y_t)_{t∈ℤ^d} and denote by γ_x(h) and γ_y(h) the autocovariance functions of X_t and Y_t, respectively.

Proposition 3.2. Let (X_t)_{t∈ℤ^d} and (Y_t)_{t∈ℤ^d} be two linear random fields as in (9). If Assumptions A3 and A4 hold, then for p ∈ S(0, ∞), q ∈ S[0, ∞) and u, v ∈ S(0, p],

lim_{N→∞} Cov( N^{-d/2} Σ_{t∈S_N} Y_t X_{t-q-u}, N^{-d/2} Σ_{t∈S_N} Y_t X_{t-q-v} ) = V(u, v)

with

V(u, v) = (η - 3) γ_{xy}(q+u) γ_{xy}(q+v) + Σ_{h∈ℤ^d} [ γ_y(h) γ_x(h + u - v) + γ_{xy}(q+u+h) γ_{xy}(q+v-h) ].

Proof. Observe first that, for i, j, k, l ∈ ℤ^d,

E[ε_i ε_j ε_k ε_l] = η σ⁴ if i = j = k = l;  σ⁴ if i = j ≠ k = l;  σ⁴ if i = k ≠ j = l;  σ⁴ if i = l ≠ j = k;  0 otherwise.

Now, for h ∈ ℤ^d,

E[Y_t X_{t-q-u} Y_{t+h} X_{t+h-q-v}] = Σ_{i∈ℤ^d} Σ_{j∈ℤ^d} Σ_{k∈ℤ^d} Σ_{l∈ℤ^d} β_i α_j β_k α_l E[ε_{t-i} ε_{t-q-u-j} ε_{t+h-k} ε_{t+h-q-v-l}].

From the above,

E[Y_t X_{t-q-u} Y_{t+h} X_{t+h-q-v}] = (η - 3) σ⁴ Σ_{i∈ℤ^d} β_i α_{i-q-u} β_{i+h} α_{i+h-q-v} + γ_{xy}(q+u) γ_{xy}(q+v) + γ_y(h) γ_x(h + u - v) + γ_{xy}(q+u+h) γ_{xy}(q+v-h).

It follows that

Cov( N^{-d/2} Σ_{t∈S_N} Y_t X_{t-q-u}, N^{-d/2} Σ_{s∈S_N} Y_s X_{s-q-v} )
= (1/N^d) Σ_{t∈S_N} Σ_{s∈S_N} { E[Y_t X_{t-q-u} Y_s X_{s-q-v}] - γ_{xy}(q+u) γ_{xy}(q+v) }
= (1/N^d) Σ_{t∈S_N} Σ_{s∈S_N} [ (η - 3) σ⁴ Σ_{i∈ℤ^d} β_i α_{i-q-u} β_{i+s-t} α_{i+s-t-q-v} + γ_y(s-t) γ_x(s-t+u-v) + γ_{xy}(q+u+s-t) γ_{xy}(q+v-s+t) ].

Setting h = s - t and permuting the two summations yields

Cov( N^{-d/2} Σ_{t∈S_N} Y_t X_{t-q-u}, N^{-d/2} Σ_{t∈S_N} Y_t X_{t-q-v} )
= Σ_{h∈S(-N,N)} ( Π_{i=1}^{d} (N - |h_i|) / N^d ) [ (η - 3) σ⁴ Σ_{i∈ℤ^d} β_i α_{i-q-u} β_{i+h} α_{i+h-q-v} + γ_y(h) γ_x(h + u - v) + γ_{xy}(q+u+h) γ_{xy}(q+v-h) ].

As a consequence,

lim_{N→∞} Cov( N^{-d/2} Σ_{t∈S_N} Y_t X_{t-q-u}, N^{-d/2} Σ_{t∈S_N} Y_t X_{t-q-v} )
= (η - 3) γ_{xy}(q+u) γ_{xy}(q+v) + Σ_{h∈ℤ^d} [ γ_y(h) γ_x(h + u - v) + γ_{xy}(q+u+h) γ_{xy}(q+v-h) ],

which is exactly V(u, v).

Remark 3.2. Proposition 3.2 extends to random fields well-known results for two time series. More precisely, for the special case d = 1 and X_t = Y_t, Proposition 3.2 corresponds to Proposition 7.3.1 in Brockwell and Davis (1987). Furthermore, for random fields, Lemma 8 in Choi (2000) is a particular case of Proposition 3.2 when Y_t = ε_t.

4. Estimation and Identification for a Spatial ARMA

We are now interested in the asymptotic behavior of estimators of the autoregressive parameters of a spatial ARMA(p, q) random field (X_t)_{t∈ℤ^d} with p ≠ 0. First, the consistency of the autoregressive coefficient estimators is stated in the following proposition.

Proposition 4.1. If Assumptions A1, A2, A3, and A4 hold, then for all i, j ∈ S(0,λ],

Γ̂_{λμ}(j, i) →_P Γ_{λμ}(j, i) = γ(μ + j - i)   and   γ̂_{λμ}(j) →_P γ_{λμ}(j) = γ(μ + j),   as N → ∞.

Proof. For i ∈ S[0,λ] and j ∈ S(0,λ], we have

(1/N^d) Σ_{t∈S_N} X_{t-i} X_{t-μ-j} = (1/N^d) Σ_{t-i∈S_N, t-μ-j∈S_N} X_{t-i} X_{t-μ-j} - R_N^{(1)}

with R_N^{(1)} = (1/N^d) Σ_{t∈I_N^{(1)}} (±) X_{t-i} X_{t-μ-j}, where I_N^{(1)} denotes the symmetric difference of the two summation sets {t ∈ S_N} and {t : t - i ∈ S_N, t - μ - j ∈ S_N}, each part being taken with the appropriate sign. But I_N^{(1)} is contained in a union of boundary slabs of S_N whose widths depend only on i, j, and μ, so that #I_N^{(1)} = O(N^{d-1}), where #E denotes the cardinality of a finite subset E of ℤ^d. Therefore, by the Cauchy-Schwarz inequality,

E|R_N^{(1)}| ≤ ( #I_N^{(1)} / N^d ) γ_x(0) → 0,   as N → ∞,

which implies that R_N^{(1)} = o_P(1). Furthermore,

(1/N^d) Σ_{t-i∈S_N, t-μ-j∈S_N} X_{t-i} X_{t-μ-j} = (1/N^d) Σ_{t, t+μ+j-i∈S_N} X_t X_{t+μ+j-i} - R_N^{(2)}

with R_N^{(2)} = 0 if i = 0 and, if i ≠ 0,

R_N^{(2)} = (1/N^d) Σ_{t∈I_N^{(2)}} X_t X_{t+μ+j-i},

where I_N^{(2)} is again a set of boundary indices, with #I_N^{(2)} = O(N^{d-1}), arising from the change of variable t → t - μ - j. As for R_N^{(1)}, we show that R_N^{(2)} = o_P(1). Consequently, we have

(1/N^d) Σ_{t∈S_N} X_{t-i} X_{t-μ-j} = ( N(μ+j-i) / N^d ) (1/N(μ+j-i)) Σ_{t, t+μ+j-i∈S_N} X_t X_{t+μ+j-i} + o_P(1),

which converges in probability to γ(μ + j - i) by Proposition 3.1. Taking i = 0 gives the result for γ̂_{λμ}(j).

From Eqs. (6) and (8), the following theorem is established.

Theorem 4.1. If Assumptions A1, A2, A3, and A4 hold and if, for some λ ∈ S(0,∞) and μ ∈ S[0,∞), Γ_{λμ} is invertible, then

φ̂^{(λμ)} →_P φ^{(λμ)},   as N → ∞.

Remark 4.1. For causal AR(p) models, Tjøstheim (1983) proved the strong consistency of the LS estimator of the vector of autoregressive coefficients under the assumption that the ε_t are strong martingale differences and strictly stationary. Since, for an AR(p) model, φ̂^{(λμ)} coincides with the LS estimator when μ = 0 and λ = p, Theorem 4.1 in fact extends Tjøstheim's result to ARMA(p, q) random fields in the case of i.i.d. innovations.

In order to study the asymptotic normality of the estimator φ̂^{(λμ)} of the vector of coefficients φ^{(λμ)}, we consider the S(0,λ]-indexed vector whose jth component equals

Σ_{t∈S_N} Y_t X_{t-μ-j}.

First, we consider truncated random fields and sums over S[λ+μ+1, N], where 1 denotes the element of ℤ^d whose components all equal 1. More precisely, for some K ∈ ℕ, we study the S(0,λ]-indexed vector with components

Σ_{t∈S[λ+μ+1, N]} Y_t^K X_{t-μ-j}^K   (12)

where

X_t^K = Σ_{j∈S[0,K]} ψ_j ε_{t-j},   Y_t^K = X_t^K - Σ_{j∈S(0,λ]} φ_j^{(λμ)} X_{t-j}^K,

which, using the expansion of X_t^K, can be rewritten as

Y_t^K = Σ_{j∈S[0,K+λ]} ψ_j^K ε_{t-j}.

According to the Cramér-Wold device, the asymptotic normality of the vector whose jth component is defined in (12) amounts to that of the term

W_{N,K} = Σ_{t∈S[λ+μ+1, N]} Y_t^K Σ_{j∈S(0,λ]} r_j X_{t-μ-j}^K,

where (r_j)_{j∈S(0,λ]} is an arbitrary vector of reals. As in Choi (2000), the tool we use to establish the asymptotic normality of the quantities W_{N,K} is based on m-dependent random fields, whose definition is recalled below.

Definition 4.1. For m ∈ S[0,∞), a random field (Z_t)_{t∈ℤ^d} is m-dependent if it is stationary and if, for any s, t ∈ ℤ^d, Z_s and Z_t are independent whenever |t_i - s_i| > m_i for at least one i = 1, ..., d.
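Definition 4.1 and the central limit theorem for m-dependent fields quoted next (Theorem 4.2) are easy to illustrate numerically. The sketch below builds a toy 1-dependent quadrantal MA field of our own choosing (not the article's example) and compares the empirical variance of N^{d/2} times the sample mean with v_m = Σ_{h∈S[-m,m]} γ(h); all numerical values are illustrative.

```python
import numpy as np

# Toy 1-dependent field: X_t = eps_t + 0.5 eps_{t-(1,0)} + 0.5 eps_{t-(0,1)}.
rng = np.random.default_rng(1)
theta = 0.5
N, reps = 40, 2000
stats = []
for _ in range(reps):
    e = rng.standard_normal((N + 1, N + 1))
    x = e[1:, 1:] + theta * e[:-1, 1:] + theta * e[1:, :-1]
    stats.append(x.mean() * N)          # N^{d/2} * sample mean, with d = 2
stats = np.array(stats)
# v_m over S[-1, 1]: gamma(0) = 1 + 2*theta^2, gamma(+-(1,0)) = gamma(+-(0,1)) = theta,
# gamma(+-(1,-1)) = theta^2, and the remaining lags in S[-1, 1] vanish.
v_m = (1 + 2 * theta**2) + 4 * theta + 2 * theta**2
print(stats.var(), "vs predicted v_m =", v_m)   # close for large N
```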

For such random fields, we quote from Choi (2000) the following central limit theorem.

Theorem 4.2. Let (Z_t)_{t∈ℤ^d} be a stationary m-dependent random field with mean zero and autocovariance function γ_z. If v_m = Σ_{h∈S[-m,m]} γ_z(h) < ∞, then

lim_{N→∞} Var( N^{-d/2} Σ_{t∈S_N} Z_t ) = v_m   and   N^{-d/2} Σ_{t∈S_N} Z_t →_L N(0, v_m),

where →_L means convergence in distribution as N tends to infinity.

The following lemma shows that W_{N,K} is a sum of values of a (K + λ + μ)-dependent random field.

Lemma 4.1. If (ε_t)_{t∈ℤ^d} satisfies Assumption A3, then ( Y_t^K Σ_{j∈S(0,λ]} r_j X_{t-μ-j}^K )_{t∈ℤ^d} is a (K + λ + μ)-dependent random field.

Proof. Let Z_t = ( Y_t^K, (X_{t-μ-j}^K)_{j∈S(0,λ]} ). Then ( Y_t^K Σ_{j∈S(0,λ]} r_j X_{t-μ-j}^K )_{t∈ℤ^d} is (K + λ + μ)-dependent as soon as (Z_t)_{t∈ℤ^d} is. Observe that (Z_t) is strictly stationary because the ε_t are i.i.d. For h ∈ ℤ^d such that |h_l| > K + λ_l + μ_l for at least one l, say h_l > K + λ_l + μ_l, we show that Z_t and Z_{t+h} are independent. Indeed, Z_t is a function of the ε_{t-i} with 0 ≤ i ≤ K1 + λ + μ, so that the lth coordinates of the innovation indices involved range over [t_l - K - λ_l - μ_l, t_l]. In the same way, Z_{t+h} is a function of innovations whose lth coordinates range over [t_l + h_l - K - λ_l - μ_l, t_l + h_l], an interval disjoint from the previous one since h_l > K + λ_l + μ_l. The independence of Z_t and Z_{t+h} then follows from that of the ε_t's. Hence (Z_t) is (K + λ + μ)-dependent.

Let us denote by γ^K_{xy}(h) the crosscovariance function of (X_t^K, Y_t^K) and by γ^K_x(h), γ^K_y(h) the autocovariance functions of X_t^K and Y_t^K, respectively. The following lemma gives the asymptotic normality of the truncated term W_{N,K} as N tends to infinity.

Lemma 4.2. Assume that (ε_t)_{t∈ℤ^d} satisfies Assumptions A3 and A4. Then

N^{-d/2} ( W_{N,K} - E[W_{N,K}] ) →_L N(0, v_K),   as N → ∞,

where v_K = r′ V^K r, with A′ denoting the transpose of a matrix A and, for u, v ∈ S(0,λ],

V^K(u, v) = (η - 3) γ^K_{xy}(μ+u) γ^K_{xy}(μ+v) + Σ_{h∈ℤ^d} [ γ^K_y(h) γ^K_x(h + u - v) + γ^K_{xy}(μ+u+h) γ^K_{xy}(μ+v-h) ].

Proof. From Theorem 4.2 and Lemma 4.1, it follows that

N^{-d/2} ( W_{N,K} - E[W_{N,K}] ) →_L N(0, v_K),   as N → ∞,

where

v_K = lim_{N→∞} Var( N^{-d/2} W_{N,K} ) = lim_{N→∞} Var( N^{-d/2} Σ_{t∈S[λ+μ+1, N]} Y_t^K Σ_{j∈S(0,λ]} r_j X_{t-μ-j}^K ) = r′ V^K r.

From Proposition 3.2,

V^K(u, v) = (η - 3) γ^K_{xy}(μ+u) γ^K_{xy}(μ+v) + Σ_{h∈ℤ^d} [ γ^K_y(h) γ^K_x(h + u - v) + γ^K_{xy}(μ+u+h) γ^K_{xy}(μ+v-h) ]

for u, v ∈ S(0,λ].

The following lemma establishes the asymptotic normality of the quantities

W_N = Σ_{t∈S[λ+μ+1, N]} Y_t Σ_{j∈S(0,λ]} r_j X_{t-μ-j},

which correspond to the original, untruncated terms.

Lemma 4.3. If Assumptions A1, A2, A3, and A4 hold, then

N^{-d/2} W_N →_L N(0, r′ V r),   as N → ∞,

where, for u, v ∈ S(0,λ],

V(u, v) = Σ_{h∈ℤ^d} [ γ_y(h) γ_x(h + u - v) + γ_{xy}(μ+u+h) γ_{xy}(μ+v-h) ].

Proof. From Lemma 4.2,

N^{-d/2} ( W_{N,K} - E[W_{N,K}] ) →_L W^K ~ N(0, v_K),   as N → ∞,

with v_K = r′ V^K r. But, for u, v ∈ S(0,λ],

lim_{K→∞} V^K(u, v) = (η - 3) γ_{xy}(μ+u) γ_{xy}(μ+v) + Σ_{h∈ℤ^d} [ γ_y(h) γ_x(h + u - v) + γ_{xy}(μ+u+h) γ_{xy}(μ+v-h) ].

Moreover, from Eqs. (5), the first term in this asymptotic variance equals 0. As a consequence, W^K →_L N(0, r′ V r) as K → ∞. Now, to prove the lemma, we use again Proposition 6.3.9 in Brockwell and Davis (1987) and show that, for all δ > 0,

lim_{K→∞} lim sup_{N→∞} P( N^{-d/2} | W_N - W_{N,K} + E[W_{N,K}] | > δ ) = 0.

By Chebyshev's inequality, it amounts to proving that, for every j ∈ S(0,λ],

lim_{K→∞} lim sup_{N→∞} Var( N^{-d/2} Σ_{t∈S[λ+μ+1, N]} ( Y_t X_{t-μ-j} - Y_t^K X_{t-μ-j}^K ) ) = 0.

For this, observe that

Var( N^{-d/2} Σ_t ( Y_t X_{t-μ-j} - Y_t^K X_{t-μ-j}^K ) )
= Var( N^{-d/2} Σ_t Y_t X_{t-μ-j} ) + Var( N^{-d/2} Σ_t Y_t^K X_{t-μ-j}^K ) - 2 Cov( N^{-d/2} Σ_t Y_t X_{t-μ-j}, N^{-d/2} Σ_t Y_t^K X_{t-μ-j}^K ),

where all sums are over t ∈ S[λ+μ+1, N]. By Proposition 3.2,

lim_{N→∞} Var( N^{-d/2} Σ_t Y_t X_{t-μ-j} ) = (η - 3) γ_{xy}(μ+j)² + Σ_{h∈ℤ^d} [ γ_y(h) γ_x(h) + γ_{xy}(μ+j+h) γ_{xy}(μ+j-h) ].

Since γ_{xy}(μ+j) = 0, this limit equals V(j, j). Again by Proposition 3.2,

lim_{N→∞} Var( N^{-d/2} Σ_t Y_t^K X_{t-μ-j}^K ) = (η - 3) γ^K_{xy}(μ+j)² + Σ_{h∈ℤ^d} [ γ^K_y(h) γ^K_x(h) + γ^K_{xy}(μ+j+h) γ^K_{xy}(μ+j-h) ].

But

γ^K_x(h) = σ² Σ_{j, j+h ∈ S[0,K]} ψ_j ψ_{j+h},

which converges to γ_x(h) when K tends to infinity. Furthermore,

γ^K_y(h) = Σ_{j∈S[0,λ]} Σ_{k∈S[0,λ]} φ_j^{(λμ)} φ_k^{(λμ)} γ^K_x(h - k + j),   with the convention φ_0^{(λμ)} = -1,

so that γ^K_y(h) converges to γ_y(h) when K tends to infinity. In the same way, it can be shown that γ^K_{xy}(h) converges to γ_{xy}(h), so that finally

lim_{K→∞} lim_{N→∞} Var( N^{-d/2} Σ_{t∈S[λ+μ+1, N]} Y_t^K X_{t-μ-j}^K ) = V(j, j).

Using the same arguments as in Proposition 3.2,

lim_{N→∞} Cov( N^{-d/2} Σ_{t∈S[λ+μ+1, N]} Y_t X_{t-μ-j}, N^{-d/2} Σ_{t∈S[λ+μ+1, N]} Y_t^K X_{t-μ-j}^K )
= (η - 3) γ_{xy}(μ+j) γ_{x y^K}(μ+j) + Σ_{h∈ℤ^d} [ γ_{y y^K}(h) γ_{x x^K}(h) + γ_{x y^K}(μ+j+h) γ_{x^K y}(μ+j-h) ],

where γ_{y y^K}, γ_{x x^K}, γ_{x y^K}, and γ_{x^K y} denote the crosscovariance functions between the corresponding truncated and untruncated fields. Observing that γ_{xy}(μ+j) = 0 and taking the limit when K tends to infinity yields

lim_{K→∞} lim_{N→∞} Cov( N^{-d/2} Σ_{t∈S[λ+μ+1, N]} Y_t X_{t-μ-j}, N^{-d/2} Σ_{t∈S[λ+μ+1, N]} Y_t^K X_{t-μ-j}^K ) = V(j, j).

Now, from Lemma 4.3,

N^{-d/2} Σ_{j∈S(0,λ]} r_j Σ_{t∈S[λ+μ+1, N]} Y_t X_{t-μ-j} →_L N(0, r′ V r)

for every vector of reals r = (r_j)_{j∈S(0,λ]}. Hence, from the Cramér-Wold device, Proposition 4.2 follows.

Proposition 4.2. If Assumptions A1, A2, A3, and A4 hold, then

( N^{-d/2} Σ_{t∈S[λ+μ+1, N]} Y_t X_{t-μ-j} )_{j∈S(0,λ]} →_L N(0, V),   as N → ∞.

The next proposition states the asymptotic normality of the statistics Σ_{t∈S_N} Y_t X_{t-μ-j} that enter the estimating equations (7)-(8).

Proposition 4.3. Under Assumptions A1, A2, A3, and A4,

N^{d/2} ( γ̂_{λμ} - Γ̂_{λμ} φ^{(λμ)} ) →_L N(0, V),   as N → ∞,

where, for u, v ∈ S(0,λ],

V(u, v) = Σ_{h∈ℤ^d} [ γ_y(h) γ_x(h + u - v) + γ_{xy}(μ+u+h) γ_{xy}(μ+v-h) ].

Proof. The jth component of N^{d/2}(γ̂_{λμ} - Γ̂_{λμ} φ^{(λμ)}) equals N^{-d/2} Σ_{t∈S_N} Y_t X_{t-μ-j}. Let us therefore prove that, for all j ∈ S(0,λ],

N^{-d/2} ( Σ_{t∈S[λ+μ+1, N]} Y_t X_{t-μ-j} - Σ_{t∈S_N} Y_t X_{t-μ-j} ) = o_P(1).

By the same calculations as in Proposition 3.2,

E[ ( N^{-d/2} Σ_{t∈S_N\S[λ+μ+1, N]} Y_t X_{t-μ-j} )² ]
= (1/N^d) Σ_{t∈S_N\S[λ+μ+1, N]} Σ_{s∈S_N\S[λ+μ+1, N]} [ (η - 3) σ⁴ Σ_{i∈ℤ^d} β_i α_{i-μ-j} β_{i+s-t} α_{i+s-t-μ-j} + γ_{xy}(μ+j+s-t) γ_{xy}(μ+j-s+t) + γ_y(s-t) γ_x(s-t) ],

where (α_i) and (β_i) now denote the coefficients of the linear expansions (9) of X_t and Y_t. Setting h = s - t, put

T(h) = | (η - 3) σ⁴ Σ_{i∈ℤ^d} β_i α_{i-μ-j} β_{i+h} α_{i+h-μ-j} + γ_{xy}(μ+j+h) γ_{xy}(μ+j-h) + γ_y(h) γ_x(h) |.

Since Σ_i |α_i| < ∞ and Σ_i |β_i| < ∞, it follows that Σ_{h∈ℤ^d} T(h) < ∞. Hence,

E[ ( N^{-d/2} Σ_{t∈S_N\S[λ+μ+1, N]} Y_t X_{t-μ-j} )² ] ≤ ( #( S_N \ S[λ+μ+1, N] ) / N^d ) Σ_{h∈ℤ^d} T(h) = ( ( N^d - Π_{i=1}^{d} (N - λ_i - μ_i) ) / N^d ) Σ_{h∈ℤ^d} T(h).

Since the last term converges to 0 when N tends to infinity, the claimed bound holds and the proposition follows from Proposition 4.2.

The following theorem is immediately deduced from Proposition 4.3.

Theorem 4.3. If Assumptions A1, A2, A3, and A4 hold and if, for some λ ∈ S(0,∞) and μ ∈ S[0,∞), Γ_{λμ} is invertible, then

N^{d/2} ( φ̂^{(λμ)} - φ^{(λμ)} ) →_L N(0, Σ_{λμ}),   with Σ_{λμ} = Γ_{λμ}^{-1} V Γ_{λμ}^{-1}

and V as defined in Proposition 4.3.

Remark 4.2. The asymptotic normality of the LS estimator of the vector of autoregressive coefficients in the case of a spatial causal AR(p) model was obtained by Tjøstheim (1983). If the innovations are i.i.d., this result is a special case of Theorem 4.3 for μ = 0 and λ = p.

5. Application and Simulation Results

We now illustrate an application of some of the asymptotic results obtained above by identifying the following two-dimensional lattice spatial ARMA:

X_{t1,t2} - 0.8 X_{t1-1,t2} - ⋯ X_{t1,t2-1} - 0.7 X_{t1-1,t2-1} = ε_{t1,t2} + 0.5 ε_{t1-1,t2} + ⋯ ε_{t1,t2-1} + ⋯ ε_{t1-1,t2-1},

where (ε_{t1,t2}) is a family of i.i.d. normal random variables with mean 0 and variance 1. 500 replications of the above ARMA model over the rectangle S_500 are simulated, and the sample mean and the sample standard deviation of the φ̂^{(λμ)} are computed for several values of (λ, μ). These results are summarized in Table 1 below, where the values in brackets are the standard deviations and values greater than 10 are flagged by a special symbol.

The criterion we choose here for identifying the spatial ARMA model is based on the estimators φ̂^{(λμ)}, which correspond to a spatial version of the sample GPAC. Indeed, as in the one-dimensional lattice case (see Choi, 1991, 1992; Woodward and Gray, 1981), we have the following properties, which can be deduced from Remark 2.2:

φ^{(p,μ)}_p = φ_p   for μ ≥ q,
φ^{(λ,μ)}_λ = 0   for λ > p and μ ≥ q.

Table 1. Sample mean and sample standard deviation of φ̂^{(λμ)} over 500 replications.
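For readers who want to reproduce an experiment of this kind, the sketch below simulates a causal quadrantal ARMA((1,1),(1,1)) field by forward recursion and applies the extended_yw_estimate sketch given after eq. (8); the AR and MA coefficient values here are illustrative choices of our own (picked so that the sum of the |φ_j| stays below 1, which keeps the AR polynomial nonzero on the closed unit polydisc), and only the general model structure follows the example above.

```python
import numpy as np

# Illustrative quadrantal ARMA((1,1),(1,1)) field on Z^2; coefficients are ours.
phi = {(1, 0): 0.8, (0, 1): 0.1, (1, 1): 0.05}    # sum of |phi_j| < 1 (causal)
theta = {(1, 0): 0.5, (0, 1): 0.2, (1, 1): 0.1}

def simulate(N, burn=50, seed=0):
    rng = np.random.default_rng(seed)
    M = N + burn
    e = rng.standard_normal((M, M))
    x = np.zeros((M, M))
    for t1 in range(M):
        for t2 in range(M):
            ar = sum(c * x[t1 - a, t2 - b]
                     for (a, b), c in phi.items() if t1 - a >= 0 and t2 - b >= 0)
            ma = sum(c * e[t1 - a, t2 - b]
                     for (a, b), c in theta.items() if t1 - a >= 0 and t2 - b >= 0)
            x[t1, t2] = ar + e[t1, t2] + ma
    return x[burn:, burn:]        # drop a burn-in strip near the origin

x = simulate(100)
# lambda = p = (1,1) and mu = (1,1) >= q: by the GPAC property, the estimates
# should approach the AR coefficients; extended_yw_estimate is the sketch above.
print(extended_yw_estimate(x, lam=(1, 1), mu=(1, 1)))
```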

Furthermore, from Theorems 4.1 and 4.3, we have

φ̂^{(λμ)} →_P φ^{(λμ)}   and   N^{d/2} ( φ̂^{(λμ)} - φ^{(λμ)} ) →_L N(0, Γ_{λμ}^{-1} V Γ_{λμ}^{-1}).

Applying the properties of the φ^{(λμ)} to the ARMA model considered here, we have:

(i) φ̂^{(p,μ)} ≈ φ^{(p,μ)} = (φ_j)_{j∈S(0,p]} for μ ≥ q = (1,1);
(ii) φ̂^{(λ,μ)}_λ ≈ 0 for λ > p = (1,1) and μ ≥ q.

Table 1, in conjunction with these two properties, clearly allows identifying the ARMA model. Indeed, when inspecting the values of the sample standard deviations of the estimators in Table 1, property (i) is first revealed at the intersection of the 3rd row with the 4th column (bold characters), and property (ii) is then detected at the intersections of the 4th column with the 6th, 7th, and 8th rows (bold characters). For one replication (X_t)_{t∈S_500}, the corresponding estimates of the AR coefficients were also computed.

References

Brockwell, P. J., Davis, R. A. (1987). Time Series: Theory and Methods. 2nd ed. New York: Springer-Verlag.

Choi, B. (1991). On the asymptotic distribution of the generalized partial autocorrelation function in autoregressive moving-average processes. J. Time Ser. Anal. 12.

Choi, B. (1992). ARMA Model Identification. New York: Springer-Verlag.

Choi, B. (2000). On the asymptotic distributions of mean, autocovariance, autocorrelation, crosscovariance and impulse response estimators of a stationary multidimensional random field. Commun. Statist. Theor. Meth. 29(8).

Etchison, T., Pantula, S. G., Brownie, C. (1994). Partial autocorrelation function for spatial processes. Statist. Probab. Lett. 21:9-19.

Guyon, X. (1995). Random Fields on a Network. New York: Springer-Verlag.

Ha, E., Newton, H. J. (1993). The bias of the estimators of causal spatial autoregressive processes. Biometrika 80.

Huang, D. W. (1992). Central limit theorem for two-parameter martingale differences with application to stationary random fields. Sci. China Ser. A 35.

Huang, D. W., Anh, V. V. (1992). Estimation of spatial ARMA models. Austral. J. Statist. 34(3).

Shitan, M., Brockwell, P. J. (1995). An asymptotic test for separability of a spatial autoregressive model. Commun. Statist. Theor. Meth. 24(8).

Tjøstheim, D. (1978). Statistical spatial series modelling. Adv. Appl. Prob. 10.

Tjøstheim, D. (1983). Statistical spatial series modelling II: some further results on unilateral lattice processes. Adv. Appl. Prob. 15.

Woodward, W. A., Gray, H. L. (1981). On the relationship between the S array and the Box-Jenkins method of ARMA model identification. J. Amer. Statist. Assoc. 76.
