Identification, estimation and applications of a bivariate long-range dependent time series model with general phase


Identification, estimation and applications of a bivariate long-range dependent time series model with general phase Stefanos Kechagias, SAS Institute; Vladas Pipiras, University of North Carolina. March 27, 2018. Abstract A new bivariate, two-sided, fractionally integrated time series model that allows for general phase is proposed. In particular, the focus is on a suitable parametrization under which the model is identifiable. A simulation study to test the performances of a conditional maximum likelihood estimation method and of a forecasting approach is carried out under the proposed model. Finally, an application is presented to the U.S. inflation rates in goods and services, where models not allowing for general phase suffer from misspecification. 1 Introduction In this work, we are interested in modeling bivariate ($\mathbb{R}^2$-valued) time series exhibiting long-range dependence (LRD, in short). In the univariate case, long-range dependent (LRD) time series models are stationary with the autocovariance function decaying slowly like a power-law function at large lags, or the spectral density diverging like a power-law function at the zero frequency. Univariate LRD is well understood in theory and widely used in applications. See, for example, Park and Willinger (2000), Robinson (2003), Doukhan et al. (2003), Palma (2007), Giraitis et al. (2012), Beran et al. (2013), Pipiras and Taqqu (2017). Bivariate and, more generally, multivariate (vector-valued) LRD time series models have also been considered by a number of researchers, but theoretical foundations for a general class of such models were laid only recently in Kechagias and Pipiras (2015). In particular, Kechagias and Pipiras (2015) stressed the importance of the so-called phase parameters. Turning to the bivariate case which is the focus of this work, the phase $\phi$ appears in the cross-spectrum of a bivariate LRD series around the zero frequency and controls the (a)symmetry of the series at large time lags.
There are currently no parametric models of bivariate LRD with general phase that can be used in estimation and applications. The goal of this work is to introduce such a class of models, and to examine it through a simulation study and an application to real data. In the rest of this section, we describe our contributions in greater detail. A common parametric VARFIMA(0, D, 0) (Vector Autoregressive Fractionally Integrated Moving Average) model for a bivariate LRD time series $\{X_n\}_{n\in\mathbb{Z}} = \{(X_{1,n}, X_{2,n})'\}_{n\in\mathbb{Z}}$ is obtained as a

AMS subject classification. Primary: 62M10, 62M15. Secondary: 60G22, 42A16. Keywords and phrases: long-range dependence, bivariate time series, phase parameter, estimation, VARFIMA. The second author was supported in part by NSA grant H and NSF grant DMS. The authors would also like to thank Richard Davis (Columbia University) for his comments on an earlier version of this paper.

natural extension of the univariate ARFIMA(0, d, 0) model by fractionally integrating the component series of a bivariate white noise series, namely,

$$X_n = \begin{pmatrix} X_{1,n} \\ X_{2,n} \end{pmatrix} = \begin{pmatrix} (I-B)^{-d_1} & 0 \\ 0 & (I-B)^{-d_2} \end{pmatrix}\begin{pmatrix} \eta_{1,n} \\ \eta_{2,n} \end{pmatrix} = (I-B)^{-D}\eta_n, \qquad (1.1)$$

where $B$ is the backshift operator, $I = B^0$ is the identity operator, $d_1, d_2 \in (0, 1/2)$ are the LRD parameters of the component series $\{X_{1,n}\}_{n\in\mathbb{Z}}$ and $\{X_{2,n}\}_{n\in\mathbb{Z}}$, respectively, $D = \mathrm{diag}(d_1, d_2)$ and $\{\eta_n\}_{n\in\mathbb{Z}} = \{(\eta_{1,n}, \eta_{2,n})'\}_{n\in\mathbb{Z}}$ is a bivariate white noise series with zero mean $E\eta_n = 0$ and covariance $E\eta_n\eta_n' = \Sigma$. If $\Sigma = QQ'$, note that the model (1.1) can also be written as

$$X_n = (I-B)^{-D}Q\epsilon_n \quad\text{or}\quad (I-B)^{D}X_n = \eta_n = Q\epsilon_n, \qquad (1.2)$$

where $\{\epsilon_n\}_{n\in\mathbb{Z}}$ is a bivariate white noise with the identity covariance matrix $E\epsilon_n\epsilon_n' = I_2$. Throughout the paper, the prime indicates the transpose. The model (1.1) admits a one-sided linear representation of the form

$$X_n = \sum_{k\in I}\Psi_k\,\eta_{n-k} = \sum_{k\in I}\widetilde\Psi_k\,\epsilon_{n-k}, \qquad (1.3)$$

where $I = \{k\in\mathbb{Z}: k \ge 0\}$ and $\Psi_k, \widetilde\Psi_k$ are real-valued $2\times 2$ matrices whose entries decay as a power law as $k \to \infty$. In the frequency domain, the matrix-valued spectral density function$^1$ $f(\lambda)$ of the series $X_n$ defined in (1.1) satisfies

$$f(\lambda) \sim \begin{pmatrix} \omega_{11}|\lambda|^{-2d_1} & \omega_{12}|\lambda|^{-(d_1+d_2)}e^{-i\,\mathrm{sign}(\lambda)\phi} \\ \omega_{21}|\lambda|^{-(d_1+d_2)}e^{i\,\mathrm{sign}(\lambda)\phi} & \omega_{22}|\lambda|^{-2d_2} \end{pmatrix}, \quad\text{as } \lambda \to 0, \qquad (1.4)$$

where $\sim$ indicates the asymptotic equivalence, $\omega_{11}, \omega_{12}, \omega_{21}, \omega_{22} \in \mathbb{R}$ and

$$\phi = (d_1 - d_2)\pi/2. \qquad (1.5)$$

The asymptotic behavior (1.4) of the spectral density $f$ with general $\phi \in (-\pi/2, \pi/2)$ is taken for the definition of bivariate LRD in Kechagias and Pipiras (2015). Note that $\phi \in (-\pi/2, \pi/2)$ is taken and the following polar coordinate representation

$$z = \frac{z_1}{\cos(\phi)}e^{i\phi} \quad\text{with}\quad \phi = \arctan\Big(\frac{z_2}{z_1}\Big) \qquad (1.6)$$

of $z = z_1 + iz_2 \in \mathbb{C}$ is used throughout. The special form of the phase parameter $\phi$ in (1.5) limits the type of bivariate LRD behavior that can be captured by the model (1.1). For example, in the case of time-reversible models satisfying $\gamma(n) = \gamma(-n)$, $n \in \mathbb{Z}$, the spectral density matrix $f$ has real-valued entries and hence $\phi = 0$.
Under the model (1.1) and (1.4), however, $\phi = 0$ holds only when $d_1 = d_2$. These observations naturally raise the following question: Can one define a bivariate parametric LRD model with general phase?

$^1$ The following convention is used here. The autocovariance function $\gamma$ is defined as $\gamma(n) = EX_nX_0'$ and the spectral density $f(\lambda)$ satisfies $\gamma(n) = \int_{-\pi}^{\pi}e^{in\lambda}f(\lambda)\,d\lambda$. The convention is different from Kechagias and Pipiras (2015), where $EX_0X_n'$ is used as the autocovariance function, but is the same as in Brockwell and Davis (2009), Pipiras and Taqqu (2017). See also Remark 2.2 below.

One solution to the question above is to consider two-sided linear representations with power-law decaying coefficients, that is, representations of the form (1.3) with the index set $I$ now being the set of all integers $\mathbb{Z}$. Specifically, Kechagias and Pipiras (2015) constructed a two-sided VARFIMA(0, D, 0) model with general phase by taking

$$X_n = (I-B)^{-D}Q_+\epsilon_n + (I-B^{-1})^{-D}Q_-\epsilon_n, \qquad (1.7)$$

where $Q_+, Q_-$ are two real-valued $2\times 2$ matrices. The reason we refer to (1.7) as two-sided is the presence of $B^{-1}$ in the second term of the right-hand side of (1.7), which translates into having the leads of the innovation process $\epsilon_n$. Also, the positive and negative powers of the backshift operator motivate our notation for the subscripts of the matrices $Q_+$ and $Q_-$. We shall use (1.7) in developing our parametric bivariate LRD model with general phase. A first issue that needs to be addressed is finding a suitable parametrization under which this model is identifiable, while still yielding a general phase parameter. We show in Section 2 that this two-fold goal can be achieved by taking $Q_-$ as

$$Q_- = \begin{pmatrix} c & 0 \\ 0 & -c \end{pmatrix}Q_+ =: CQ_+, \qquad (1.8)$$

for some real constant $c$. Under the relation (1.8) and letting $\{Z_n\}_{n\in\mathbb{Z}}$ be a zero mean bivariate white noise series with covariance matrix $EZ_nZ_n' = Q_+Q_+' =: \Sigma$, we can rewrite (1.7) in the more succinct form

$$X_n = \Delta_c(B)^{-1}Z_n, \qquad (1.9)$$

where the operator $\Delta_c(B)^{-1}$ is defined as

$$\Delta_c(B)^{-1} = (I-B)^{-D} + (I-B^{-1})^{-D}C. \qquad (1.10)$$

Note that when $c = 0$ (and $C = 0$), the filter $\Delta_c(B)^{-1}$ becomes the one-sided fractional integration filter $\Delta(B)^{-1} = (I-B)^{-D}$. The focus of Section 3 will be on extensions of the model (1.9)-(1.10) involving autoregressive and moving average parts, namely, a general phase VARFIMA(p, D, q) model

$$\Phi(B)X_n = \Delta_c(B)^{-1}\Theta(B)Z_n, \qquad (1.11)$$

where $\Phi(B), \Theta(B)$ are matrix polynomials of finite orders $p$ and $q$ satisfying the usual stationarity and invertibility conditions.
In fact, for identifiability and estimation purposes, we shall work with diagonal AR filters $\Phi(B)$, in which case the general phase VARFIMA(p, D, q) model (1.11) can also be expressed as

$$\Phi(B)\Delta_c(B)X_n = \Theta(B)Z_n, \qquad (1.12)$$

since the diagonal filters $\Phi(B), \Delta_c(B)$ commute. (In fact, the model (1.12) is usually referred to as FIVARMA; see Section 3.2 below.) The advantage of the model (1.11), as we show, is that the autocovariance function of the right-hand side of (1.11) can be computed explicitly. In estimation, we can then employ a conditional likelihood approach where the Gaussian likelihood is written for $\Phi(B)X_n$ (though the maximum is still sought over all unknown parameters). We should also emphasize that the analysis of this work is limited to the second-order properties of time series (that is, autocovariance, cross-correlation, cross-spectrum, and so on). Thus, although the models (1.7) and (1.11)-(1.12) are expressed through two-sided and hence non-causal linear representations, their noncausal nature is irrelevant to the extent that these models are used only

as suitable parametrizations of bivariate long-range dependent models allowing for general phase through their second-order properties. Our estimation procedure follows the approach of Tsay (2010), who considered one-sided models (1.12) with $c = 0$. Still in the case $c = 0$, Sowell (1986) calculated (numerically) the autocovariance function of the model (1.11) and performed exact likelihood estimation. For other approaches (all in the case $c = 0$), see also Ravishanker and Ray (1997), who considered the Bayesian analysis, and Pai and Ravishanker (2009a, 2009b), who employed the EM and PCG algorithms, as well as Dueker and Startz (1998), Martin and Wilkins (1999), Sela and Hurvich (2009) and Diongue (2010). The rest of the paper is structured as follows. General phase VARFIMA(0, D, 0) and VARFIMA(p, D, q) series are presented in Sections 2 and 3. Estimation and other tasks are considered in Section 4. Section 5 contains a simulation study, and Section 6 contains an application to the U.S. inflation rates. 2 General phase VARFIMA(0, D, 0) series In this section, we consider the two-sided bivariate VARFIMA(0, D, 0) model (1.9)-(1.10). Kechagias and Pipiras (2015) showed that any phase parameter $\phi$ in (1.4) can be obtained with the model (1.7) for an appropriate choice of $Q_+$ and $Q_-$. However, letting the entries of these matrices take any real value causes identifiability issues around the zero frequency, as the same phase parameter can be obtained by more than one choice of $Q_+$ and $Q_-$. Indeed, from the following simple counting perspective, note that the specification (1.4) has 6 parameters ($d_1, d_2, \omega_{11}, \omega_{12}, \omega_{22}$ and $\phi$) whereas the model (1.7) has 10 ($d_1, d_2$ and the entries of $Q_+$ and $Q_-$). One might naturally expect identifiability up to $Q_+Q_+'$ and $Q_-Q_-'$, but this still leaves the number of parameters at 8 ($d_1, d_2$ and the 6 different entries of $Q_+Q_+'$ and $Q_-Q_-'$).
In Proposition 2.1 and Corollary 2.1 below (see also the discussion following the latter), we show that the parametrization (1.8) addresses the identifiability and general phase issues. For one, note that the model (1.9)-(1.10) has the required number of 6 parameters ($d_1, d_2, c$ and the three different entries of $\Sigma = Q_+Q_+'$).

Proposition 2.1 Let $d_1, d_2 \in (0, 1/2)$ and $Q_+$ be a $2\times 2$ matrix with real-valued entries. Let also $\{X_n^{(c)}\}_{n\in\mathbb{Z}}$ be a time series defined by (1.9)-(1.10), where $D = \mathrm{diag}(d_1, d_2)$. For any $\phi_c \in (-\pi/2, \pi/2)$, there exists a unique constant $c \in (-1, 1)$ such that the series $\{X_n^{(c)}\}_{n\in\mathbb{Z}}$ has the phase parameter $\phi = \phi_c$ in (1.4). Moreover, the constant $c$ has a closed form given by

$$c = c(\phi_c) = \begin{cases} \dfrac{a_1-a_2}{a_1+a_2}, & \text{if } \phi_c = -\arctan\Big(\dfrac{a_1-a_2}{1+a_1a_2}\Big), \\[2mm] \dfrac{2(a_1+a_2)-\sqrt{\Delta}}{2\big(a_1-a_2+\tan(\phi_c)(1+a_1a_2)\big)}, & \text{otherwise}, \end{cases} \qquad (2.1)$$

where

$$a_1 = \tan\Big(\frac{\pi d_1}{2}\Big), \quad a_2 = \tan\Big(\frac{\pi d_2}{2}\Big) \quad\text{and}\quad \Delta = 16a_1a_2 + 4(1+a_1a_2)^2\tan^2(\phi_c). \qquad (2.2)$$

The function $c = c(\phi_c)$ in (2.1) is continuous at $\phi_c = -\arctan((a_1-a_2)/(1+a_1a_2))$.

Proof: By using Theorem 11.8.3 in Brockwell and Davis (2009), the VARFIMA(0, D, 0) series in (1.9)-(1.10) has a spectral density matrix

$$f(\lambda) = \frac{1}{2\pi}\,\Delta_c(e^{-i\lambda})^{-1}\,\Sigma\,\big(\Delta_c(e^{-i\lambda})^{-1}\big)^*, \qquad (2.3)$$

where the superscript $*$ denotes the complex conjugate operation. From (1.8) and by using the fact that $1 - e^{\mp i\lambda} \sim \pm i\lambda$, as $\lambda \to 0+$, we have

$$f(\lambda) \sim \frac{1}{2\pi}\big((i\lambda)^{-D} + (-i\lambda)^{-D}C\big)\,\Sigma\,\big((-i\lambda)^{-D} + C(i\lambda)^{-D}\big), \quad\text{as } \lambda \to 0+. \qquad (2.4)$$

Next, by denoting $\Sigma = (\sigma_{jk})_{j,k=1,2}$ and using the relation $\pm i = e^{\pm i\pi/2}$, we get that the $(j,k)$ element of the spectral density $f(\lambda)$ satisfies

$$f_{jk}(\lambda) \sim g_{jk}\,\lambda^{-(d_j+d_k)}, \quad\text{as } \lambda \to 0+, \qquad (2.5)$$

where the complex constant $g_{jk}$ is given by

$$g_{jk} = \frac{\sigma_{jk}}{2\pi}\big(e^{-i\pi d_j/2} + (-1)^{j+1}c\,e^{i\pi d_j/2}\big)\big(e^{i\pi d_k/2} + (-1)^{k+1}c\,e^{-i\pi d_k/2}\big) \qquad (2.6)$$

and $(-1)^{j+1}, (-1)^{k+1}$ in (2.6) account for the different signs next to the $c$'s in the diagonal matrix $C$ in (1.8). Focusing on the $(1,2)$ element, and by applying the polar-coordinate representation $z = \frac{z_1}{\cos(\phi)}e^{i\phi}$ of $z = z_1 + iz_2 \in \mathbb{C}$ with $\phi = \arctan(z_2/z_1)$ (see (1.6) above) to the two multiplication terms below separately, we have

$$g_{12} = \frac{\sigma_{12}}{2\pi}\Big(\cos\big(\tfrac{\pi d_1}{2}\big)(1+c) + i\sin\big(\tfrac{\pi d_1}{2}\big)(c-1)\Big)\Big(\cos\big(\tfrac{\pi d_2}{2}\big)(1-c) + i\sin\big(\tfrac{\pi d_2}{2}\big)(1+c)\Big) = \frac{\sigma_{12}}{2\pi}\,\frac{\cos(\tfrac{\pi d_1}{2})\cos(\tfrac{\pi d_2}{2})}{\cos(\phi_{c,1})\cos(\phi_{c,2})}\,(1-c^2)\,e^{-i\phi_c}, \qquad (2.7)$$

where $\phi_c = -(\phi_{c,1} + \phi_{c,2})$,

$$\phi_{c,1} = \arctan\Big(a_1\frac{c-1}{1+c}\Big) \quad\text{and}\quad \phi_{c,2} = \arctan\Big(a_2\frac{1+c}{1-c}\Big) \qquad (2.8)$$

with $a_1$ and $a_2$ given in (2.2). By using the arctangent addition formula $\arctan(u) + \arctan(v) = \arctan\big(\frac{u+v}{1-uv}\big)$ for $uv < 1$ (in our case $uv = -a_1a_2 < 0$), we can rewrite $\phi_c$ as

$$\phi_c = -\arctan\Bigg(\frac{a_1\frac{c-1}{1+c} + a_2\frac{1+c}{1-c}}{1+a_1a_2}\Bigg) =: h(c). \qquad (2.9)$$

For all $d_1, d_2 \in (0, 1/2)$, the function $h: (-1,1) \to (-\pi/2, \pi/2)$ is strictly decreasing (and therefore 1-1) and also satisfies

$$\lim_{c\to -1} h(c) = \frac{\pi}{2}, \qquad \lim_{c\to 1} h(c) = -\frac{\pi}{2}.$$

Since $h$ is continuous, it is also onto its range, which completes the existence and uniqueness part of the proof. To obtain the formula (2.1), we invert the relation (2.9) to get the quadratic equation

$$\big(a_1-a_2+\tan(\phi_c)(1+a_1a_2)\big)c^2 - 2(a_1+a_2)c + a_1-a_2-\tan(\phi_c)(1+a_1a_2) = 0, \qquad (2.10)$$

whose discriminant is given by

$$\Delta = 16a_1a_2 + 4(1+a_1a_2)^2\tan^2(\phi_c)$$

and is always positive. The solutions of (2.10) are then given by

$$c_1 = \frac{2(a_1+a_2)+\sqrt{\Delta}}{2\big(a_1-a_2+\tan(\phi_c)(1+a_1a_2)\big)}, \qquad c_2 = \frac{2(a_1+a_2)-\sqrt{\Delta}}{2\big(a_1-a_2+\tan(\phi_c)(1+a_1a_2)\big)}.$$

It can be checked that $c_1 \notin (-1,1)$ and $c_2 \in (-1,1)$. Note that, when $a_1-a_2+\tan(\phi_c)(1+a_1a_2) = 0$ or $\phi_c = -\arctan((a_1-a_2)/(1+a_1a_2))$, the quadratic equation (2.10) becomes a linear equation with the solution

$$c = \frac{a_1-a_2-\tan(\phi_c)(1+a_1a_2)}{2(a_1+a_2)} = \frac{a_1-a_2}{a_1+a_2},$$

which always satisfies $c \in (-1,1)$. Finally, the fact that the function $c = c(\phi_c)$ in (2.1) is continuous at $\phi_c = -\arctan((a_1-a_2)/(1+a_1a_2))$ can be checked easily.

The following result is a direct consequence of the proof of Proposition 2.1.

Corollary 2.1 The spectral density of the time series $\{X_n^{(c)}\}_{n\in\mathbb{Z}}$ in Proposition 2.1 satisfies the asymptotic relation (1.4) with $\phi = \phi_c$ and

$$\omega_{jj} = \frac{\sigma_{jj}}{2\pi}\big(1 + c^2 + (-1)^{j+1}2c\cos(\pi d_j)\big), \quad j = 1,2, \qquad (2.11)$$

$$\omega_{12} = \frac{\sigma_{12}}{2\pi}\,\frac{\cos(\tfrac{\pi d_1}{2})\cos(\tfrac{\pi d_2}{2})}{\cos(\phi_{c,1})\cos(\phi_{c,2})}\,(1-c^2), \qquad (2.12)$$

where $\Sigma = Q_+Q_+' = (\sigma_{jk})_{j,k=1,2}$ and $\phi_{c,1}, \phi_{c,2}$ are given in (2.8).

Proof: The relations (2.11)-(2.12) follow from (2.5)-(2.6) and (2.7)-(2.8).

Corollary 2.1 shows that the bivariate LRD model (1.9)-(1.10) is identifiable around the zero frequency when parametrized by $d_1, d_2$, $\Sigma = Q_+Q_+'$ and $c$. It will be referred to as the general phase VARFIMA(0, D, 0) series (two-sided VARFIMA(0, D, 0) series).

Remark 2.1 Proposition 2.1 relates the phase $\phi$ at the zero frequency and the constant $c$ which appears in the full model (1.9)-(1.10). For this model, however, the phase function $\phi(\lambda)$ of the full cross spectral density $f_{12}(\lambda) = g_{12}(\lambda)e^{-i\phi(\lambda)}$ or $f_{21}(\lambda) = g_{12}(\lambda)e^{i\phi(\lambda)}$, $\lambda \in (0, \pi)$, is not a constant function of the frequency $\lambda$.
Instead, by using the identity $1 - e^{-i\lambda} = 2\sin(\tfrac{\lambda}{2})e^{-i(\lambda-\pi)/2}$ and arguing as for (2.7)-(2.9) above, it can be shown that

$$\phi(\lambda) = -\arctan\Bigg(\frac{x_1(\lambda)\frac{c-1}{1+c} + x_2(\lambda)\frac{1+c}{1-c}}{1+x_1(\lambda)x_2(\lambda)}\Bigg), \qquad (2.13)$$

where

$$x_1(\lambda) = \tan\Big(\frac{d_1(\pi-\lambda)}{2}\Big), \qquad x_2(\lambda) = \tan\Big(\frac{d_2(\pi-\lambda)}{2}\Big). \qquad (2.14)$$

Several plots of the phase function (2.13) are given in Figure 1. We also note that Sela (2010) considers LRD models with phase functions $\phi(\lambda)$ following special power laws, but we will not expand in this direction.
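The closed form (2.1)-(2.2) and the phase function (2.13)-(2.14) can be checked against each other numerically: $c(\phi_c)$ should invert $h$ in (2.9), and $\phi(\lambda)$ should tend to $\phi_c$ as $\lambda \to 0$. A minimal sketch, with illustrative function names:

```python
import math

def h(c, d1, d2):
    """Phase at the zero frequency as a function of c, eq. (2.9)."""
    a1, a2 = math.tan(math.pi * d1 / 2), math.tan(math.pi * d2 / 2)
    return -math.atan((a1 * (c - 1) / (1 + c) + a2 * (1 + c) / (1 - c)) / (1 + a1 * a2))

def c_of_phi(phi, d1, d2):
    """The unique c in (-1, 1) with h(c) = phi, eq. (2.1)-(2.2)."""
    a1, a2 = math.tan(math.pi * d1 / 2), math.tan(math.pi * d2 / 2)
    t = math.tan(phi) * (1 + a1 * a2)
    denom = a1 - a2 + t
    if abs(denom) < 1e-12:                      # linear (singular) case in (2.1)
        return (a1 - a2) / (a1 + a2)
    disc = 16 * a1 * a2 + 4 * (1 + a1 * a2) ** 2 * math.tan(phi) ** 2
    return (2 * (a1 + a2) - math.sqrt(disc)) / (2 * denom)

def phase_function(lam, c, d1, d2):
    """Full phase function phi(lambda) of eqs. (2.13)-(2.14)."""
    x1 = math.tan(d1 * (math.pi - lam) / 2)
    x2 = math.tan(d2 * (math.pi - lam) / 2)
    return -math.atan((x1 * (c - 1) / (1 + c) + x2 * (1 + c) / (1 - c)) / (1 + x1 * x2))
```

In particular, `h(0, d1, d2)` returns $(d_1-d_2)\pi/2$, in line with (1.5), and `phase_function(lam, c, d1, d2)` approaches `h(c, d1, d2)` as `lam` shrinks to zero.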

Figure 1: Phase functions $\phi(\lambda)$ for the model (1.9)-(1.10) for different parameter values. Left: $c = 0, .5, -.8, .4$ and $d_1 = .2$, $d_2 = .4$. Right: $c = 0, -.7, .7, .3$ and $d_1 = d_2 = .3$.

Figure 2: The function $b_d(c)$, $c \in (-1, 1)$, in (2.15). Left: $d > 0$. Right: $d < 0$.

Remark 2.2 The autocovariance function of the model (1.7) has an explicit form given in Proposition 5.1 of Kechagias and Pipiras (2015). The same form can obviously be used for the model (1.9)-(1.10). We also note here that, strictly speaking, Proposition 5.1 is incorrect as stated in the context of Kechagias and Pipiras (2015). As indicated in a footnote in Section 1 above, Kechagias and Pipiras (2015) use the convention $EX_0X_n'$ for autocovariances. But the proof of Proposition 5.1 is based on Brockwell and Davis (2009), who use the different convention $EX_nX_0'$, which is also adopted here. The result of Proposition 5.1 is thus correct but using the convention of this paper, and correct only up to matrix transposition in the context of Kechagias and Pipiras (2015).

Remark 2.3 Technical issues of Proposition 2.1 aside, there is a simple way to see why the proposed model will yield a general phase. Note that a generic term $e^{i\pi d/2} + c\,e^{-i\pi d/2}$ entering (2.6) can be expressed in polar coordinates as

$$e^{i\pi d/2} + c\,e^{-i\pi d/2} = a_d(c)\,e^{ib_d(c)}. \qquad (2.15)$$

The generic shape of the function $b_d(c)$, $c \in (-1,1)$, is given in Figure 2, left plot, for $d \in (0, 1/2)$, and right plot, for $d \in (-1/2, 0)$. When $d \in (0, 1/2)$, the range of $b_d(c)$, $c \in (-1,1)$, is $(0, \pi/2)$ and when $d \in (-1/2, 0)$, it is $(-\pi/2, 0)$. When combined into the phase $\phi_c$ of (2.7), this obviously leads to the phase $\phi_c$ that covers the whole range $(-\pi/2, \pi/2)$. This discussion also shows that, for example, the choice $Q_- = cQ_+$, $c \in (-1,1)$, would not lead to a general phase parameter for the

resulting bivariate LRD models. A related note is that we presently do not have an explicit form for the inverse filter $\Delta_c(B)$. But this observation also suggests that one could possibly work with the filter

$$\widetilde\Delta_c(B)^{-1} = \big((I-B)^{D} + C(I-B^{-1})^{D}\big)^{-1}, \qquad (2.16)$$

if the goal is to have an explicit form of the filter $\widetilde\Delta_c(B)$ applied to the series $\{X_n\}_{n\in\mathbb{Z}}$. Other filters than (1.10) and (2.16) with interesting properties might also exist and could also be considered. In using (1.10), we aimed to have a general phase and an explicit form of the autocovariance function.

Remark 2.4 We have assumed in Proposition 2.1 that both component series are LRD, that is, $d_1, d_2 \in (0, 1/2)$. In fact, the proposed model is not fully suited to accommodate the case when the $d$'s are allowed to belong to the so-called principal range $d \in (-1/2, 1/2)$, including the case $d = 0$ associated with short-range dependence (SRD, for short). For example, if $d_2 = 0$, the discussion in Remark 2.3 shows that the phase $\phi_c$ covers the range $(0, \pi/2)$ only, and thus excludes $\phi_c \in (-\pi/2, 0)$. Similarly, when $d_2 < 0$, only part of the range $(-\pi/2, \pi/2)$ is covered. When $d_1, d_2 \in (-1/2, 1/2)$, a general phase could in fact be obtained by one of the following models (the introduced model given first), with $c \in (-1,1)$: in the case where $d_1, d_2$ have the same sign, take $Q_+ = Q$ and $Q_- = \begin{pmatrix} c & 0 \\ 0 & -c \end{pmatrix}Q =: CQ$; in the case of opposite signs, take $Q_+ = C_+Q$ and $Q_- = C_-Q$ for suitable diagonal matrices $C_+, C_-$ with entries $1$ and $c$. Indeed, this could be seen by using the expression (2.6) (modified accordingly for the model with $Q_+ = C_+Q$ and $Q_- = C_-Q$) and the discussion found in Remark 2.3. When one of the $d$'s is zero, say $d_2 = 0$, the model with $Q_+ = Q$, $Q_- = CQ$ gives a positive phase $\phi_c$ only, as discussed in Remark 2.3, but the model with $Q_+ = C_+Q$, $Q_- = C_-Q$ gives a negative phase $\phi_c$.
In practice, the two models could be fitted for the range $d_1, d_2 \in (-1/2, 1/2)$ and the model with the larger likelihood could be selected (a BIC or another model selection criterion could also be used if the numbers of parameters in larger models, as those considered below, are different). Whether the two models could be combined into a single model remains an open question.

Remark 2.5 We stress again that the case $c = 0$ corresponds to the phase $\phi = (d_1-d_2)\pi/2$ (and in particular not necessarily $\phi = 0$). Note also that if the two component series are interchanged (so that $d_1$ and $d_2$ are interchanged, and $\phi$ becomes $-\phi$), then the constant $c$ in (2.1) changes to $-c$. Finally, we note that the relation (2.1) does not involve the covariance matrix $\Sigma$ of the innovation terms, but that (2.11) and (2.12) obviously do.

Remark 2.6 Finally, we also note the following important point regarding the boundaries $c = \pm 1$ of the range $c \in (-1,1)$. As $c \to \pm 1$, the phase parameter $\phi_c \to \mp\pi/2$. In the specification $\omega_{12}e^{i\phi_c}$ of the cross-spectrum constant, note that the cases $\phi_c = \pi/2$ and $\phi_c = -\pi/2$ are equivalent by changing the sign of $\omega_{12}$, since $\omega_{12}e^{i\pi/2} = (-\omega_{12})e^{i(-\pi/2)}$. In the model (1.9)-(1.10), the sign of $\omega_{12}$ is the same as the sign of $\sigma_{12}$. From a practical perspective, this observation means that for the model (1.9)-(1.10) with $c$ close to $1$ ($-1$, resp.), it would be common to estimate $c$ close to $-1$ ($1$, resp.) and $\sigma_{12}$ with the opposite sign, since the respective models are not that different. This is also certainly what we observed in our simulations.
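As a basic sanity check on the explicit autocovariances mentioned in Remark 2.2, note that for $c = 0$ and diagonal $\Sigma$ each component of (1.9)-(1.10) is a univariate ARFIMA(0, d, 0), whose autocovariance is classical. The sketch below compares the classical Gamma-ratio form with the equivalent sine form obtained from the reflection formula $\Gamma(d)\Gamma(1-d) = \pi/\sin(\pi d)$ (the same sine form reappears in Proposition 3.1 below); the function names are illustrative:

```python
from math import exp, lgamma, pi, sin

def arfima_acf_classical(d, sigma2, nmax):
    """gamma(n) = sigma2 * Gamma(1-2d) Gamma(n+d) / (Gamma(d) Gamma(1-d) Gamma(n+1-d)),
    computed through lgamma for numerical stability."""
    return [sigma2 * exp(lgamma(1 - 2 * d) + lgamma(n + d)
                         - lgamma(d) - lgamma(1 - d) - lgamma(n + 1 - d))
            for n in range(nmax + 1)]

def arfima_acf_sine_form(d, sigma2, nmax):
    """Same quantity rewritten with Gamma(d)Gamma(1-d) = pi/sin(pi d):
    gamma(n) = (sigma2/(2 pi)) * 2 Gamma(1-2d) sin(pi d) Gamma(n+d)/Gamma(n+1-d)."""
    return [(sigma2 / pi) * sin(pi * d)
            * exp(lgamma(1 - 2 * d) + lgamma(n + d) - lgamma(n + 1 - d))
            for n in range(nmax + 1)]
```

Both forms agree term by term; for instance, the lag-one ratio $\gamma(1)/\gamma(0) = d/(1-d)$ for any $d \in (0, 1/2)$.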

3 General phase VARFIMA(p, D, q) series

In this section, we generalize the model (1.9)-(1.10) by introducing autoregressive (AR, for short) and moving average (MA, for short) components to capture potential short-range dependence effects. For the one-sided model (1.1), this extension has been achieved in a number of ways. Naturally, we focus on extensions that preserve the general phase and identifiability properties. We also consider the problem of computing (theoretically or numerically) the autocovariance functions of the introduced models, since these functions are used in estimation (see Section 4 below).

3.1 VARFIMA(0, D, q) series

We begin with the case $p = 0$ (where there is no AR part). Define the general phase VARFIMA(0, D, q) series (two-sided VARFIMA(0, D, q) series) as

$$Y_n = \Delta_c(B)^{-1}\Theta(B)Z_n, \qquad (3.1)$$

where $\Delta_c(B)^{-1}$ is the operator given by (1.10) and

$$\Theta(B) = I_2 + \Theta_1 B + \ldots + \Theta_q B^q \qquad (3.2)$$

is a matrix polynomial with $2\times 2$ real-valued matrices $\Theta_s = (\theta_{jk,s})_{j,k=1,2}$, $s = 1,\ldots,q$. As throughout this paper, $\{Z_n\}_{n\in\mathbb{Z}}$ is a white noise series with $EZ_nZ_n' = \Sigma = (\sigma_{jk})_{j,k=1,2}$. In the special case where $\Theta(B)$ is diagonal or when $d_1 = d_2$, the model (3.1) is equivalent to

$$Y_n = \Theta(B)\Delta_c(B)^{-1}Z_n. \qquad (3.3)$$

The two operators $\Delta_c(B)^{-1}$ and $\Theta(B)$, however, do not commute in general. In fact, the two models in (3.1) and (3.3) are quite different. More specifically, if $\Theta(B)$ has at least one nonzero element on the off diagonal and if $d_1 \neq d_2$, the series $\{Y_n\}_{n\in\mathbb{Z}}$ in (3.3) can be thought to exhibit a form of fractional cointegration, by writing $\Theta(B)^{-1}Y_n = \Delta_c(B)^{-1}Z_n$, where the reduction of memory in one of the component series of $\{Y_n\}_{n\in\mathbb{Z}}$ could occur from a linear combination of present and past variables of the two component series. On the other hand, fractional cointegration cannot occur under the model (3.1). In the rest of this work, we will restrict our attention to this simpler case, leaving the investigation of fractional cointegration for future work.
In the next proposition, we compute the autocovariance function of the series in (3.1). Tsay (2010) calculated the autocovariance function of the one-sided analogue of (3.1) using the properties of the hypergeometric function. Our approach, which we find less cumbersome for the multivariate case, is similar to the one used for the two-sided VARFIMA(0, D, 0) series in Proposition 5.1 of Kechagias and Pipiras (2015) (see also Remark 2.2 above).

Proposition 3.1 The $(j,k)$ component $\gamma_{jk}(n)$ of the autocovariance matrix function $\gamma(n)$ of the bivariate two-sided VARFIMA(0, D, q) series in (3.1) is given by

$$\gamma_{jk}(n) = \frac{1}{2\pi}\sum_{u,v=1}^{2}\sum_{s,t=0}^{q}\theta_{ju,s}\theta_{kv,t}\sigma_{uv}\Big(a_{1,jk}\gamma^{(1)}_{st,jk}(n) + a_{2,j}\gamma^{(2)}_{st,jk}(n) + \gamma^{(3)}_{st,jk}(n) + a_{4,k}\gamma^{(4)}_{st,jk}(n)\Big), \qquad (3.4)$$

where $\Theta_0 := I_2$, $\Theta_s = (\theta_{jk,s})_{j,k=1,2}$, $s = 1,\ldots,q$, $\Sigma = (\sigma_{jk})_{j,k=1,2}$,

$$a_{1,jk} = c^2(-1)^{j+k}, \qquad a_{2,j} = c(-1)^{j+1}, \qquad a_{4,k} = c(-1)^{k+1}, \qquad (3.5)$$

and

$$\gamma^{(1)}_{st,jk}(n) = \gamma^{(3)}_{ts,jk}(-n) = \gamma^{(3)}_{st,kj}(n) = 2\Gamma(1-d_j-d_k)\sin(\pi d_k)\,\frac{\Gamma(n+t-s+d_k)}{\Gamma(n+t-s+1-d_j)},$$

$$\gamma^{(4)}_{st,jk}(n) = \gamma^{(2)}_{ts,jk}(-n) = \begin{cases} \dfrac{2\pi}{\Gamma(d_j+d_k)}\,\dfrac{\Gamma(d_j+d_k+n+t-s)}{\Gamma(1+n+t-s)}, & n \ge s-t, \\[2mm] 0, & n < s-t. \end{cases} \qquad (3.6)$$

Proof: By using Theorem 11.8.3 in Brockwell and Davis (2009), the VARFIMA(0, D, q) series in (3.1) has a spectral density matrix

$$f(\lambda) = \frac{1}{2\pi}G(\lambda)\,\Sigma\,G(\lambda)^*, \qquad (3.7)$$

where $G(\lambda) = \Delta_c(e^{-i\lambda})^{-1}\Theta(e^{-i\lambda})$. The $(j,k)$ component of the spectral density is given by

$$f_{jk}(\lambda) = \frac{1}{2\pi}\sum_{u,v=1}^{2}\sum_{s,t=0}^{q}\theta_{ju,s}\theta_{kv,t}\sigma_{uv}\,e^{-i(s-t)\lambda}\big(f_{1,jk}(\lambda) + f_{2,jk}(\lambda) + f_{3,jk}(\lambda) + f_{4,jk}(\lambda)\big), \qquad (3.8)$$

where

$$f_{1,jk}(\lambda) = a_{1,jk}(1-e^{i\lambda})^{-d_j}(1-e^{-i\lambda})^{-d_k}, \qquad f_{2,jk}(\lambda) = a_{2,j}(1-e^{i\lambda})^{-(d_j+d_k)},$$
$$f_{3,jk}(\lambda) = (1-e^{-i\lambda})^{-d_j}(1-e^{i\lambda})^{-d_k}, \qquad f_{4,jk}(\lambda) = a_{4,k}(1-e^{-i\lambda})^{-(d_j+d_k)}. \qquad (3.9)$$

Consequently, the $(j,k)$ component of the autocovariance matrix satisfies $\gamma_{jk}(n) = \int_0^{2\pi}e^{in\lambda}f_{jk}(\lambda)\,d\lambda$, which in view of the relations (3.8)-(3.9) implies (3.4)-(3.5) with

$$\gamma^{(1)}_{st,jk}(n) = \gamma^{(3)}_{st,kj}(n) = \int_0^{2\pi}e^{i(n-s+t)\lambda}(1-e^{i\lambda})^{-d_j}(1-e^{-i\lambda})^{-d_k}\,d\lambda,$$
$$\gamma^{(2)}_{st,jk}(n) = \int_0^{2\pi}e^{i(n-s+t)\lambda}(1-e^{i\lambda})^{-x_{jk}}\,d\lambda, \qquad \gamma^{(4)}_{st,jk}(n) = \int_0^{2\pi}e^{i(n-s+t)\lambda}(1-e^{-i\lambda})^{-x_{jk}}\,d\lambda,$$

where $x_{jk} = d_j + d_k$. The relations (3.6) follow from the evaluation of the integrals above as in the proof of Proposition 5.1 of Kechagias and Pipiras (2015).

Remark 3.1 Since $\Theta(e^{-i\lambda}) \to I_2 + \Theta_1 + \ldots + \Theta_q$ as $\lambda \to 0$, and since the relation (2.1) in Proposition 2.1 does not involve $\Sigma$, the two-sided VARFIMA(0, D, q) model has a general phase at the zero frequency (with the same relation (2.1) between the phase $\phi_c$ and the parameter $c$). The parameters of $\Theta_s$ are identifiable if and only if they are identifiable for the same VARMA(0, q) model.

3.2 VARFIMA(p, D, q) series

We extend here the model (3.1) to a general phase fractionally integrated model containing both autoregressive and moving average components. As for the one-sided model (1.1), two possibilities can be considered for this extension. Let $\Phi(B) = I_2 - \Phi_1 B - \ldots -$
$\Phi_p B^p$ be the AR polynomial, where $\Phi_r = (\phi_{jk,r})_{j,k=1,2}$, $r = 1,\ldots,p$, are $2\times 2$ real-valued matrices. Following the terminology of Sela and Hurvich (2009), define the general phase VARFIMA(p, D, q) series (two-sided VARFIMA(p, D, q) series) $\{X_n\}_{n\in\mathbb{Z}}$ as

$$\Phi(B)X_n = \Delta_c(B)^{-1}\Theta(B)Z_n, \qquad (3.10)$$

and the general phase FIVARMA(p, D, q) series$^2$ (two-sided FIVARMA(p, D, q) series) as

$$\Phi(B)\Delta_c(B)X_n = \Theta(B)Z_n. \qquad (3.11)$$

$^2$ The names VARFIMA and FIVARMA refer to the facts that the fractional integration (FI) operator $\Delta_c(B)^{-1}$ is applied to the MA part in (3.10), and that, after writing $X_n = \Delta_c(B)^{-1}\Phi(B)^{-1}\Theta(B)Z_n$, it is applied to the VARMA series in (3.11).
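The diagonalization of the AR part discussed above (and used in Remark 3.2 below) rests on the adjugate identity $\mathrm{adj}(\Phi(z))\Phi(z) = |\Phi(z)|\,I_2$, which holds for every $z$ and turns any matrix AR operator into a scalar, hence diagonal, one of order at most $2p$. A minimal numerical sketch, with illustrative coefficient values not taken from the text:

```python
import numpy as np

# AR(1) polynomial Phi(B) = I2 - Phi1 * B with a nondiagonal coefficient
# (illustrative values, not from the text).
phi1 = np.array([[0.5, 0.2],
                 [-0.1, 0.3]])

def Phi(z):
    """Matrix AR polynomial evaluated at the scalar z."""
    return np.eye(2) - phi1 * z

def adj2(m):
    """Adjugate of a 2x2 matrix, so that adj2(m) @ m = det(m) * I2."""
    return np.array([[m[1, 1], -m[0, 1]],
                     [-m[1, 0], m[0, 0]]])

# Premultiplying Phi(B) X_n = ... by adj(Phi(B)) leaves the scalar operator
# det(Phi(B)) I2 on the AR side, of order at most 2p.
z = 0.7
assert np.allclose(adj2(Phi(z)) @ Phi(z), np.linalg.det(Phi(z)) * np.eye(2))
```

Since $|\Phi(z)|$ is a polynomial of degree at most $2p$, the rewritten AR part has order at most $2p$, while the adjugate multiplies into the MA side.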

(A priori, it is not clear whether (3.10) and (3.11) have a general phase, but we use the terminology in line with Sections 2 and 3.1.) The one-sided FIVARMA(p, D, q) series (with $c = 0$ in (3.11)) have been more popular in the literature, with Lobato (1997), Sela and Hurvich (2009) and Tsay (2010) being notable exceptions. In particular, Sela and Hurvich (2009) investigated thoroughly the differences between the one-sided analogues of the models (3.10) and (3.11), focusing on models with no MA part. As expected, the two-sided VARFIMA and FIVARMA series differ if $\Phi(B)$ is nondiagonal and if $d_1 \neq d_2$. Similarly to the discussion around the models (3.1) and (3.3), the VARFIMA model with nondiagonal $\Phi(B)$ allows for fractional cointegration in the sense discussed following the relation (3.3), which however cannot be produced by the FIVARMA model (3.11) (see Sela and Hurvich (2009) for more details in the one-sided case). As indicated earlier, the case of fractional cointegration will be pursued elsewhere (though we shall also briefly mention some numerical results in Section 5). We will focus on the VARFIMA(p, D, q) series (3.10) with a diagonal AR part, in which case the two models (3.10) and (3.11) are equivalent. Besides the obvious computational and simplification advantages of this assumption, our consideration is also justified by similar assumptions recently used in Dufour and Pelletier (2014) for the construction of identifiable multivariate short-range dependent time series models. More specifically, Dufour and Pelletier (2014) show that any VARMA(p, q) series can be transformed to have a diagonal AR (or MA) part at the cost of increasing the order of the MA (or AR) component. As a consequence, they construct identifiable representations of VARMA(p, q) series where either the AR or the MA part is diagonal.
As in Remark 3.1, such a two-sided VARFIMA(p, D, q) model has a general phase at the zero frequency, and its parameters are identifiable if and only if they are identifiable for the same VARMA(p, q) model. The presence of the AR filter on the left-hand side of (3.10) makes it difficult to compute the autocovariance function of the series explicitly. Closed form formulas for the autocovariance function of the one-sided model (3.11) with $c = 0$ were provided by Sowell (1986), albeit his implementation is computationally inefficient as it requires multiple expensive evaluations of hypergeometric functions. The slow performance of Sowell's approach was also noted by Sela (2010), who proposed fast approximate algorithms for calculating the autocovariance functions of the one-sided models (3.10) and (3.11) with $c = 0$ when $p = 1$ and $q = 0$. Although not exact, Sela's algorithms are fast with negligible approximation errors. In fact, it is straightforward to extend these algorithms to calculate the autocovariance function of a two-sided VARFIMA(1, D, q) series. For models with AR components of higher orders, however, this extension seems to require restrictive assumptions on the AR coefficients and therefore we do not pursue this approach.

Remark 3.2 There is yet another reason for making our assumption of a diagonal AR part $\Phi(B)$. By using the reparametrizations of Dufour and Pelletier (2014), the FIVARMA model (3.11) can take the form (3.10) with diagonal $\Phi(B)$. Indeed, write first the model (3.11) as $\Delta_c(B)X_n = \Phi(B)^{-1}\Theta(B)Z_n$. Next, by using the relation $\Phi(B)^{-1} = |\Phi(B)|^{-1}\mathrm{adj}(\Phi(B))$, where $|\cdot|$ and $\mathrm{adj}(\cdot)$ denote the determinant and adjoint of a matrix, respectively, we can write

$$\Delta_c(B)\,|\Phi(B)|\,X_n = \mathrm{adj}(\Phi(B))\Theta(B)Z_n, \qquad (3.12)$$

where the commutation of $\Delta_c(B)$ and $|\Phi(B)|$ is possible since $|\Phi(B)|$ is scalar-valued. Letting $\widetilde\Phi(B) = \mathrm{diag}(|\Phi(B)|)$ and $\widetilde\Theta(B) = \mathrm{adj}(\Phi(B))\Theta(B)$, the relation (3.12) yields

$$\Delta_c(B)\widetilde\Phi(B)X_n = \widetilde\Theta(B)Z_n. \qquad (3.13)$$

Thus, a FIVARMA model with an AR component of order $p$ can indeed be written as a VARFIMA model with a diagonal AR part whose order will not exceed $2p$ (the maximum possible order of $|\Phi(B)|$).

4 Estimation and other tasks

In this section, we discuss estimation of the general phase VARFIMA(p, D, q) model (3.10) introduced in Section 3.2. Estimation of the parameters of this model can be carried out by adapting the CLDL (Conditional Likelihood Durbin-Levinson) estimation of Tsay (2010). Tsay's method is appealing in our case for a number of reasons. First, as discussed in Section 4.1 below, the method requires only the knowledge of the autocovariance function of the general phase VARFIMA(0, D, q) series (3.1), for which we have an explicit form. Second, Tsay's algorithm can be modified easily to yield multiple-steps-ahead (finite sample) forecasts of the series. Finally, Tsay's method has a mild computational cost, compared to most alternative estimation methods.

4.1 Estimation

The basic idea of Tsay's CLDL algorithm is to transform a VARFIMA(p, D, q) series to a VARFIMA(0, D, q) series whose autocovariance function has a closed form. Then, a straightforward implementation of the well-known Durbin-Levinson (DL, for short) algorithm allows one to replace the computationally expensive likelihood calculations of the determinant and the quadratic part with less time consuming operations. We give next a brief description of the algorithm, starting with some notation. Let $\{Y_n\}_{n=1,\ldots,N}$ be the two-sided VARFIMA(0, D, q) series (3.1) and let $\Gamma(k) = EY_kY_0'$ denote its autocovariance function. Let also $\Theta = (\mathrm{vec}(\Theta_1)',\ldots,\mathrm{vec}(\Theta_q)')'$ be the vector containing the entries of the coefficient matrices of the MA polynomial $\Theta(B)$. Assuming that the bivariate white noise series $\{Z_n\}$ is Gaussian, we can express the likelihood function of $\{Y_n\}_{n=1,\ldots,N}$ with the aid of the multivariate DL algorithm (see Brockwell and Davis (2009), p. 422).
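For testing an implementation, the Gaussian likelihood delivered by the DL recursion can be compared against a brute-force evaluation that builds the block-Toeplitz covariance of the stacked vector $(Y_1',\ldots,Y_N')'$ explicitly; the DL algorithm computes the same quantity without ever forming this matrix. A minimal log-likelihood sketch, with an illustrative helper name and $\Gamma(k) = EY_kY_0'$ as above:

```python
import numpy as np

def gaussian_loglik(y, acvf):
    """Exact Gaussian log-likelihood of a bivariate sample y (N x 2) with
    autocovariances acvf[k] = Gamma(k) = E Y_{n+k} Y_n', k = 0, ..., N-1.
    Brute force: build the 2N x 2N block-Toeplitz covariance of the stacked
    vector (Y_1', ..., Y_N')' and use its Cholesky factor."""
    n = y.shape[0]
    sigma = np.empty((2 * n, 2 * n))
    for i in range(n):
        for j in range(n):
            block = acvf[i - j] if i >= j else acvf[j - i].T
            sigma[2 * i:2 * i + 2, 2 * j:2 * j + 2] = block
    v = y.reshape(-1)
    chol = np.linalg.cholesky(sigma)      # requires a valid (positive definite) acvf
    z = np.linalg.solve(chol, v)          # whitened residuals
    logdet = 2.0 * np.sum(np.log(np.diag(chol)))
    return -0.5 * (2 * n * np.log(2 * np.pi) + logdet + z @ z)
```

With `acvf` computed from Proposition 3.1 for $Y_n = \Phi(B)X_n$, this evaluates the conditional likelihood directly, at $O(N^3)$ cost versus $O(N^2)$ for the DL recursion.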
More specifically, letting η = (d_1, d_2, c, σ_11, σ_12, σ_22, Θ')' be the (6 + 4q)-dimensional vector containing all the parameters of the model (3.10), we can write the likelihood function as

L(η; Y) = (2π)^{-N} ( ∏_{j=0}^{N-1} |V_j| )^{-1/2} exp{ -(1/2) Σ_{j=0}^{N-1} (Y_{j+1} - Ŷ_{j+1})' V_j^{-1} (Y_{j+1} - Ŷ_{j+1}) },   (4.1)

where Ŷ_{j+1} = E(Y_{j+1} | Y_1,..., Y_j) and V_j, j = 0,..., N-1, are the one-step-ahead finite-sample predictors and their corresponding error covariance matrices obtained by the multivariate DL algorithm. Using the fact that the series {Y_n}_{n=1,...,N} satisfies the relation

Φ(B) X_n = Y_n,   (4.2)

where {X_n}_{n=1,...,N} is the two-sided VARFIMA(p, D, q) series (3.10), we can view {Φ(B) X_n}_{n=p+1,...,N} as a two-sided VARFIMA(0, D, q) series, whose likelihood function conditional on X_1,..., X_p and Φ = (vec(Φ_1)',..., vec(Φ_p)')' is given by

L(Φ, η; X_n | X_1,..., X_p) = L(η; Φ(B) X_n), n = p + 1,..., N.   (4.3)

The reason we do not absorb Φ in η is to emphasize the different roles that these two parameters have in calculating the likelihood function in (4.3). More specifically, Φ is used to transform

the available data {X_n}_{n=1,...,N} to a two-sided VARFIMA(0, D, q) series {Y_n}_{n=1,...,N}, while η is necessary to apply the DL algorithm. The conditional likelihood estimators of Φ and η are then given by

(Φ̂, η̂) = argmax_{Φ, η ∈ S} L(Φ, η; X_n | X_1,..., X_p),   (4.4)

where S = {η ∈ R^{6+4q} : 0 < d_1, d_2 < 0.5, -1 < c < 1, |Σ| = σ_11 σ_22 - σ_12^2 > 0, σ_jj > 0, j = 1, 2} denotes the parameter space for η. Although there is no closed-form solution for the estimates Φ̂ and η̂, they can be computed numerically using the quasi-Newton algorithm of Broyden, Fletcher, Goldfarb, and Shanno (BFGS).

4.2 Forecasting

The multivariate DL algorithm used in the estimation above yields the coefficient matrices Φ_{n,1},..., Φ_{n,n} in the 1-step-ahead forecast (predictor)

Ŷ_{n+1} := Ŷ_{n+1|n} := E(Y_{n+1} | Y_1,..., Y_n) = Φ_{n,1} Y_n + ... + Φ_{n,n} Y_1,   (4.5)

as well as the associated forecast error matrix V_n = E(Y_{n+1} - Ŷ_{n+1})(Y_{n+1} - Ŷ_{n+1})'. The h-step-ahead forecasts, h ≥ 1, on the other hand, are given by

Ŷ_{n+h|n} := E(Y_{n+h} | Y_1,..., Y_n) = F^h_{n,1} Y_n + ... + F^h_{n,n} Y_1,   (4.6)

where F^h_{n,k}, k = 1,..., n, are 2×2 real-valued coefficient matrices, with the corresponding forecast error matrix

W_{n+h-1|n} = E(Y_{n+h} - Ŷ_{n+h|n})(Y_{n+h} - Ŷ_{n+h|n})'.   (4.7)

The 1-step-ahead forecasts can be used recursively, by repeated conditioning, to obtain recursive expressions for the coefficient matrices F^h_{n,k} and an expression for the error matrix W_{n+h-1|n}, as stated in the next result. The standard proof is omitted for shortness sake.

Proposition 4.1 Let h ≥ 1 and Φ_{n,k}, n ≥ 1, k = 1,..., n, be as above. Then, the matrices F^h_{n,k} in (4.6) satisfy the recursive relation

F^h_{n,k} = Φ_{n+h-1,h+k-1} + Σ_{j=1}^{h-1} Φ_{n+h-1,j} F^{h-j}_{n,k},   (4.8)

with F^1_{n,k} := Φ_{n,k}, n ≥ 1, k = 1,..., n. Moreover, the corresponding error matrices W_{n+h-1|n} in relation (4.7) are given by

W_{n+h-1|n} = Γ(0) - Σ_{j=1}^{n} F^h_{n,j} Γ(h + j - 1)',   (4.9)

where Γ(n) = E(Y_n Y_0') is the autocovariance matrix function of {Y_n}_{n∈Z}.

Remark 4.1 In the time series literature (e.g.
Brockwell and Davis (2009)), it is more succinct and common to express the h-step-ahead forecasts by using the coefficient matrices appearing in the multivariate Innovations (IN, for short) algorithm. We use the DL algorithm in both estimation and forecasting since it is faster than the IN algorithm: the coefficient matrices Φ_{n,k} in (4.5) are computed in O(n^2) steps, whereas the computational complexity for the analogous coefficients in the IN algorithm is O(n^3).
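To make the DL-based likelihood computation concrete, here is a minimal univariate sketch of the same idea: the Durbin-Levinson recursions produce the one-step predictors and error variances, which give the Gaussian likelihood in the innovations form of (4.1). This is an illustration only; the paper's setting is bivariate, and the function name and interface below are hypothetical.

```python
import numpy as np

def dl_gaussian_loglik(x, gamma):
    """Gaussian log-likelihood of x via the univariate Durbin-Levinson
    recursions (the scalar analogue of the multivariate DL step behind (4.1)).
    gamma[k] is the autocovariance at lag k, k = 0, ..., len(x) - 1."""
    n = len(x)
    phi = np.zeros(0)               # predictor coefficients phi_{j,1..j}
    v = gamma[0]                    # one-step prediction error variance v_0
    loglik = -0.5 * (np.log(2 * np.pi * v) + x[0] ** 2 / v)
    for j in range(1, n):
        # phi_{j,j} = (gamma(j) - sum_{k<j} phi_{j-1,k} gamma(j-k)) / v_{j-1}
        phi_jj = (gamma[j] - phi @ gamma[j - 1:0:-1]) / v
        phi = np.concatenate([phi - phi_jj * phi[::-1], [phi_jj]])
        v *= 1 - phi_jj ** 2        # v_j = v_{j-1} (1 - phi_{j,j}^2)
        xhat = phi @ x[j - 1::-1]   # one-step predictor of x[j]
        loglik += -0.5 * (np.log(2 * np.pi * v) + (x[j] - xhat) ** 2 / v)
    return loglik

# White noise sanity check: all predictors are zero and every v_j = gamma(0).
x = np.array([0.3, -1.0, 0.5])
assert np.isclose(dl_gaussian_loglik(x, np.array([1.0, 0.0, 0.0])),
                  -0.5 * np.sum(np.log(2 * np.pi) + x ** 2))
```

Because the innovations decomposition is exact, the result agrees with the direct multivariate-normal log-likelihood based on the full Toeplitz covariance matrix, but it is obtained in O(n^2) operations rather than O(n^3), which is the point of Remark 4.1.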

The DL algorithm used in the forecasting procedure and formulae above is based on the assumption that the autocovariance function of the time series {Y_n}_{n∈Z} can readily be computed, as, for example, for the two-sided VARFIMA(0, D, q) series. We now turn our attention to the two-sided VARFIMA(p, D, q) series {X_n}_{n∈Z} defined through {Y_n}_{n∈Z} in (4.2). As we do not have an explicit form of the autocovariance function of {X_n}_{n∈Z}, it is not immediately clear how to calculate the h-step-ahead forecasts X̂_{n+h|n} = E(X_{n+h} | X_1,..., X_n) and the corresponding error matrices W_{n+h-1|n} = E(X_{n+h} - X̂_{n+h|n})(X_{n+h} - X̂_{n+h|n})', n ≥ 1, h ≥ 1. In Proposition 4.2 below, we show that X̂_{n+h|n} and W_{n+h-1|n} can be calculated approximately and recursively from Ŷ_{n+h|n} and W_{n+h-1|n}. For simplicity, and since this order will be used in the simulations and the application below, we focus on the case p = 1. However, the proposition can be extended to larger values of p.

Proposition 4.2 Let F^h_{n,k}, n ≥ 1, k = 1,..., n, be as in (4.6). Then, the h-step-ahead forecasts X̂_{n+h|n} = E(X_{n+h} | X_1,..., X_n) satisfy

X̂_{n+h|n} = X̂^{(a)}_{n+h|n} + R_{n+h|n},   (4.10)

where

X̂^{(a)}_{n+h|n} = Φ_1^h X_n + Σ_{s=0}^{h-1} Φ_1^s Ŷ_{n+h-s|n},   (4.11)

R_{n+h|n} = Σ_{s=0}^{h-1} Φ_1^s ( E(Y_{n+h-s} | X_1,..., X_n) - E(Y_{n+h-s} | Y_1,..., Y_n) ).   (4.12)

Moreover, the error matrices W^{(a)}_{n+h-1|n} = E(X_{n+h} - X̂^{(a)}_{n+h|n})(X_{n+h} - X̂^{(a)}_{n+h|n})' can be computed by

W^{(a)}_{n+h-1|n} = Σ_{s=0}^{h-1} Φ_1^s W_{n+h-s-1|n} (Φ_1^s)' + Σ_{s,t=0, s≠t}^{h-1} Φ_1^s A_{s,t}(n+h) (Φ_1^t)',   (4.13)

where

A_{s,t}(n+h) = Γ(t-s) - Σ_{k=1}^{n} Γ(h - s + k - 1)(F^{h-t}_{n,k})'   (4.14)

and Γ(n) = E(Y_n Y_0') is the autocovariance matrix function of {Y_n}_{n∈Z}.

Proof: By using the relation (4.2) recursively, we can write

X_{n+h} = Φ_1^h X_n + Σ_{s=0}^{h-1} Φ_1^s Y_{n+h-s}, h = 1, 2,...,   (4.15)

which implies that

X̂_{n+h|n} = Φ_1^h X_n + Σ_{s=0}^{h-1} Φ_1^s E(Y_{n+h-s} | X_1,..., X_n).   (4.16)

Since E(Y_{n+h-s} | Y_1,..., Y_n) = Ŷ_{n+h-s|n}, the relation (4.16) yields (4.11). Next, we subtract (4.11) from (4.15) to get

X_{n+h} - X̂^{(a)}_{n+h|n} = Σ_{s=0}^{h-1} Φ_1^s (Y_{n+h-s} - Ŷ_{n+h-s|n}).

The h-step-ahead error matrix W^{(a)}_{n+h-1|n} is then given by

W^{(a)}_{n+h-1|n} = E ( Σ_{s=0}^{h-1} Φ_1^s (Y_{n+h-s} - Ŷ_{n+h-s|n}) ) ( Σ_{t=0}^{h-1} Φ_1^t (Y_{n+h-t} - Ŷ_{n+h-t|n}) )' = Σ_{s=0}^{h-1} Φ_1^s W_{n+h-s-1|n} (Φ_1^s)' + Σ_{s,t=0, s≠t}^{h-1} Φ_1^s A_{s,t}(n+h) (Φ_1^t)',   (4.17)

where A_{s,t}(u) = E(Y_{u-s} - Ŷ_{u-s|n})(Y_{u-t} - Ŷ_{u-t|n})'. To show that A_{s,t}(u) satisfies (4.14), note that, for s, t = 0,..., u-n-1, s ≠ t, we have

E(Ŷ_{u-s|n} Y_{u-t}') = E(E(Ŷ_{u-s|n} Y_{u-t}' | Y_1,..., Y_n)) = E(Ŷ_{u-s|n} E(Y_{u-t}' | Y_1,..., Y_n)) = E(Ŷ_{u-s|n} Ŷ_{u-t|n}').

Hence,

A_{s,t}(u) = E(Y_{u-s} Y_{u-t}') - E(Y_{u-s} Ŷ_{u-t|n}') - E(Ŷ_{u-s|n} Y_{u-t}') + E(Ŷ_{u-s|n} Ŷ_{u-t|n}') = Γ(t-s) - E(Y_{u-s} Ŷ_{u-t|n}') = Γ(t-s) - Σ_{k=1}^{n} Γ(u - s - n + k - 1)(F^{u-t-n}_{n,k})',   (4.18)

yielding the relations (4.13)-(4.14).

Since X_n - Φ_1 X_{n-1} = Y_n for the VARFIMA(1, D, q) series {X_n} and the VARFIMA(0, D, q) series {Y_n}, the approximation error R_{n+h|n} in (4.12) becomes negligible for large n. For this reason, in the simulations and the application below, we shall use the approximate forecasts X̂^{(a)}_{n+h|n} in (4.10) and their forecast error matrices W^{(a)}_{n+h-1|n} given by (4.13).

5 Simulation study

In this section, we perform a Monte Carlo simulation study to assess the performance of the CLDL algorithm of Section 4.1 for the VARFIMA(p, D, q) model (3.10). We examine four different models, with AR and MA components of orders p, q = 0, 1. When either the MA or the AR part is present, we consider a non-diagonal coefficient matrix. This is somewhat in contrast to what was stated in Section 3.2 for the AR part, but we sought to see what happens when the AR part is non-diagonal as well, and the results for diagonal AR parts (not reported here) were qualitatively similar or better. For each model, we consider three sample sizes N = 200, 400, 1000.
The Gaussian time series data are generated using the fast and exact synthesis algorithm of Helgason et al. (2011), and the number of replications is 1000.
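By way of illustration only — this is a univariate sketch of the circulant-embedding idea behind exact Gaussian synthesis, not the multivariate implementation of Helgason et al. (2011) — a FARIMA(0, d, 0) path can be generated as follows (all names below are hypothetical):

```python
import numpy as np
from math import gamma as G

def farima_acf(d, n):
    """Autocovariances gamma(0), ..., gamma(n-1) of a FARIMA(0, d, 0) series with
    unit innovation variance, via gamma(k) = gamma(k-1) (k - 1 + d) / (k - d)."""
    g = np.empty(n)
    g[0] = G(1 - 2 * d) / G(1 - d) ** 2
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 + d) / (k - d)
    return g

def circulant_embedding_sample(acf, rng):
    """One exact Gaussian sample path with the given autocovariance sequence."""
    n = len(acf)
    c = np.concatenate([acf, acf[-2:0:-1]])   # circulant base row (length 2n - 2)
    lam = np.fft.fft(c).real                  # eigenvalues of the circulant
    assert lam.min() > -1e-8                  # embedding must be (numerically) psd
    m = len(c)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    # The real part of FFT(sqrt(lam / m) z) has covariance c, hence acf on 0..n-1.
    return np.fft.fft(np.sqrt(np.maximum(lam, 0.0) / m) * z).real[:n]

d = 0.3
g = farima_acf(d, 64)
assert np.isclose(g[1] / g[0], d / (1 - d))   # known lag-1 autocorrelation
x = circulant_embedding_sample(g, np.random.default_rng(0))
assert x.shape == (64,)
```

The method is exact whenever the circulant eigenvalues are nonnegative, which is the case in practice for FARIMA autocovariances; the cost is dominated by two FFTs.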

To solve the maximization problem (4.4), we use the SAS/IML nlpqn function, which implements the BFGS quasi-Newton method, a popular iterative optimization algorithm. For our optimization scheme, we follow the approach found in Tsay (2010). A first step is to eliminate the nonlinear inequality constraint |Σ| > 0 in the parameter space S (defined in Section 4.1) by letting Σ = U'U, where U = (U_{jk})_{j,k=1,2} is an upper triangular matrix (Σ is nonnegative definite, and such a factorization always exists). Then, the parameter vector θ can be written as θ = (d_1, d_2, c, U_11, U_12, U_22, Θ')', while the parameter space becomes S = {θ ∈ R^{6+4q} : 0 < d_1, d_2 < 0.5, -1 < c < 1}.

Next, we describe our strategy for selecting initial parameter values (Φ^I, θ^I) for the BFGS method. Let

Φ^0_1 = (φ^0_{jk,1})_{j,k=1,2}, θ^0 = (d^0_1, d^0_2, c^0, U^0_11, U^0_12, U^0_22, (Θ^0_1)')',   (5.1)

where Θ^0_1 = (θ^0_{jk,1})_{j,k=1,2}, be the true parameter values. We consider the initial values

d^I_k = 2d^0_k / (1 + 2d^0_k), c^I = 2c^0 / (1 + c^0), U^I_{jk} = 1, θ^I_{jk,1} = (e^{θ^0_{jk,1}} - 1) / (e^{θ^0_{jk,1}} + 1), φ^I_{jk,1} = (e^{φ^0_{jk,1}} - 1) / (e^{φ^0_{jk,1}} + 1),   (5.2)

where j, k = 1, 2. Note that the transformations (5.2) are essentially perturbations of the true parameter values that also retain the range of the parameter space S. For example, the value of d^I_k will be zero (or 1/2) when d^0_k is also zero (or 1/2). Moreover, even though the parameter space S does not include identifiability (including stability) constraints for the elements of the AR and MA polynomials, as discussed in Section 3.2, we did not encounter any cases where the optimization algorithm considered such values.

Table 1 and Figure 3 present estimation results for the four models considered. For all simulations, we take (dropping the superscript 0 for simplicity) d_1 = 0.2, d_2 = 0.4, c = 0.6, Σ_11 = 3, Σ_12 = 0.5, Σ_22 = 3, and, wherever present, Φ_11 = φ_11,1 = 0.5, Φ_12 = φ_12,1 = 0.2, Φ_21 = φ_21,1 = 0.4, Φ_22 = φ_22,1 = 0.8, Θ_11 = θ_11,1 = 0.1, Θ_12 = θ_12,1 = 0.6, Θ_21 = θ_21,1 = 0.2, Θ_22 = θ_22,1 = 0.8.
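The Σ = U'U device above can be sketched in a few lines (illustrative names; the actual implementation is in SAS/IML): the three free entries of an upper-triangular U are unconstrained reals, yet the implied Σ is automatically symmetric and nonnegative definite, so the optimizer never has to enforce |Σ| > 0 explicitly.

```python
import numpy as np

def sigma_from_u(u11, u12, u22):
    """Map unconstrained entries of an upper-triangular U to Sigma = U'U."""
    U = np.array([[u11, u12],
                  [0.0, u22]])
    return U.T @ U

# Any real (u11, u12, u22) yields a valid covariance matrix ...
for u in [(1.0, -0.3, 2.0), (-0.5, 4.0, 0.7)]:
    S = sigma_from_u(*u)
    assert np.allclose(S, S.T)                      # symmetric
    assert np.all(np.linalg.eigvalsh(S) >= -1e-12)  # nonnegative definite

# ... so the search can run over R^3 for (U11, U12, U22) instead of imposing
# the nonlinear constraint s11 * s22 - s12^2 > 0 on Sigma directly.
```

The same trick works in any dimension, with the strictly positive-definite case recovered by keeping the diagonal of U away from zero.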
We also performed simulations for several other values of these parameters and obtained similar results, which we omit in favor of space economy. Table 1 lists the median differences between the estimates and the corresponding true values, and the respective median absolute deviations. Figure 3, on the other hand, includes the boxplots of the estimates of the various parameters. While the table concerns only the results for the sample sizes N = 200 and 400, the figure also includes the case N = 1000.

The results in Table 1 and Figure 3 indicate a satisfactory performance of the CLDL algorithm for most cases considered: the median differences are small overall and tend to decrease with increasing sample size, and this decrease is also evident for the median deviations; moreover, many median deviations and box sizes are relatively small as well. We also note that, for some cases and smaller sample sizes, we could see bimodality in the histograms of the corresponding estimates (not included here for shortness sake). For example, this was the case for c with (p, q) = (1, 1) when N = 200, as suggested by the larger median difference in Table 1. But the bimodality either diminished or completely disappeared as the sample size increased.

Finally, we also comment on the model selection task concerning the one- and two-sided models when using BIC and AIC. The left plot of Figure 4 presents the proportion of times that these information criteria select the one-sided VARFIMA(0, D, 0) model over the two-sided VARFIMA(0, D, 0), when in fact the latter model is true, for the same parameter values as in Table 1 in the case p = q = 0. The right plot of the figure presents the analogous plot for a different set of values of the parameters d_1, d_2 and c. The performance of the model selection criteria is satisfactory overall.

3 For this choice of d_1, d_2 and c, the phase parameter is equal to φ = …. Taking c = 0.1985 with the same d's would yield zero phase.
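The model-selection comparison in Figure 4 uses the standard penalized criteria; in this setting the one-sided model simply has one fewer free parameter than the two-sided one (c is fixed at 0). A minimal sketch, with made-up log-likelihood values for illustration:

```python
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion; smaller is better."""
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion; smaller is better."""
    return -2.0 * loglik + n_params * np.log(n_obs)

# Hypothetical fitted log-likelihoods for a sample of size N = 400:
# the two-sided VARFIMA(0, D, 0) has 6 parameters (d1, d2, c, Sigma),
# the one-sided one has 5 (c fixed at 0).
ll_two, ll_one, N = -510.0, -515.0, 400

# The two-sided model is selected when its criterion value is smaller.
assert aic(ll_two, 6) < aic(ll_one, 5)
assert bic(ll_two, 6, N) < bic(ll_one, 5, N)
```

When the true model is two-sided, a correct selection corresponds to the assertions above holding; the proportions in Figure 4 are the frequencies of this event across replications.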

Figure 3: The red solid lines are the medians and the blue dashed lines are the true parameter values (except for U_11, U_22, which are centered at 0). Top to bottom: (p, q) = (0, 0), (0, 1), (1, 0) and (1, 1). Left to right: N = 200, 400, 1000.

Table 1: Median differences between the estimates and the corresponding true values (top value in each cell) and median absolute deviations (bottom value in each cell) for the estimated parameters of VARFIMA series with (p, q) = (0, 0), (0, 1), (1, 0), (1, 1).

6 Application

In this section, we apply the CLDL algorithm to analyze inflation rates in the U.S. under the two-sided VARFIMA(p, D, q) model discussed in Section 3.2. Evidence of long-range dependent behavior in inflation rates has been found in a number of works (see, for example, Baillie et al. (1996), Doornik and Ooms (2004), Hurvich and Sela (2009), Baillie and Moreno (2012), and references therein). More specifically, Hurvich and Sela (2009) tested the fit of several long- and short-range dependent models on the annualized monthly inflation rates for goods and services in the U.S. during the period February 1956 - January 2008 (N = 624 months) and selected a one-sided VARFIMA model as the best choice. Besides their long memory features, however, time series of inflation rates often exhibit asymmetric behavior, and therefore call for multivariate LRD models that allow for general phase.

Following the notation of Sela (2010),4 we denote the Consumer Price Index series for commodities by {CPI^c_n}_{n=0,...,N} and the corresponding series for services by {CPI^s_n}_{n=0,...,N}. Then, we define the annualized monthly inflation rates for goods and services as

g_n = 12 (CPI^c_n - CPI^c_{n-1}) / CPI^c_{n-1} and s_n = 12 (CPI^s_n - CPI^s_{n-1}) / CPI^s_{n-1},

respectively. The two series {g_n}_{n=1,...,N} and {s_n}_{n=1,...,N} are depicted in Figure 5.5

4 See also the accompanying R code.
5 The consumer price index (raw) data are available online from the Bureau of Labor Statistics.
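The rate definitions above amount to one line of array arithmetic. The sketch below uses a synthetic CPI vector (the actual BLS series is not reproduced here):

```python
import numpy as np

def annualized_monthly_rate(cpi):
    """g_n = 12 * (CPI_n - CPI_{n-1}) / CPI_{n-1}, n = 1, ..., N."""
    cpi = np.asarray(cpi, dtype=float)
    return 12.0 * (cpi[1:] - cpi[:-1]) / cpi[:-1]

# Hypothetical CPI levels: a 1% monthly increase gives a 12% annualized rate.
cpi = 100.0 * 1.01 ** np.arange(13)
g = annualized_monthly_rate(cpi)
assert np.allclose(g, 0.12)   # 12 * 0.01
```

Note that the returned series is one observation shorter than the CPI levels, which is why the rates are indexed n = 1, ..., N while the levels are indexed n = 0, ..., N.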

Figure 4: The proportion of times that the considered information criteria select the one-sided VARFIMA(0, D, 0) model over the two-sided one, when the latter is true.

Figure 5: Annualized monthly inflation rates for goods (left) and services (right) from February 1956 to January 2008.

The two plots in Figure 6 provide some motivation for why a general phase model is needed for this dataset. More specifically, the left plot in Figure 6 depicts the sample cross-correlation function ρ̂_12(h) of the two series for all lags h with |h| < 25. Observe that for negative lags the sample cross-correlation function decays faster than for positive lags, suggesting time-non-reversibility of the series and hence non-zero phase. Further evidence for general phase can be obtained from the local Whittle estimation of Robinson (2008), which can be used to estimate the phase and the LRD parameters directly from the data. The estimation is semiparametric in the sense that it only requires specification of the spectral density at low frequencies. The right plot in Figure 6 depicts two local Whittle estimates of the phase parameter φ as functions of m, a tuning parameter representing the number of lower frequencies used in the estimation. The dashed line corresponds to the special phase estimate φ̂ = (d̂_1 - d̂_2)π/2 of the one-sided VARFIMA model, based on the local Whittle estimates of the two d's. On the other hand, the solid line shows the phase parameter estimated directly from the data. The two lines being visibly different suggests that the special phase parameter and the associated VARFIMA model are not appropriate. A more detailed local Whittle analysis of the dataset can be found in Baek et al. (2018).
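A cross-correlation asymmetry of the kind seen in the left plot of Figure 6 is easy to reproduce on synthetic data. In the sketch below (illustrative throughout, not the paper's dataset), y is a noisy copy of x lagged by one step, so the sample cross-correlation between x_{n+h} and y_n spikes at h = -1 and is near zero at h = +1; which lag sign carries the spike depends on which series leads.

```python
import numpy as np

def sample_ccf(x, y, h):
    """Sample cross-correlation rho_12(h) between x_{n+h} and y_n
    (biased version, normalized by the two sample standard deviations)."""
    x = x - x.mean()
    y = y - y.mean()
    n = len(x)
    if h >= 0:
        c = (x[h:] * y[:n - h]).sum() / n
    else:
        c = (x[:n + h] * y[-h:]).sum() / n
    return c / np.sqrt((x * x).sum() / n * (y * y).sum() / n)

rng = np.random.default_rng(42)
x = rng.standard_normal(2000)
y = np.roll(x, 1) + 0.1 * rng.standard_normal(2000)  # y_n ~ x_{n-1}: x leads y

# x_{n-1} matches y_n -> large correlation at h = -1, near zero at h = +1.
assert sample_ccf(x, y, -1) > 0.8
assert abs(sample_ccf(x, y, 1)) < 0.2
```

For a time-reversible bivariate series, ρ_12(h) and ρ_12(-h) would decay symmetrically; the marked asymmetry in the data is what points to a non-zero phase.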

Figure 6: Left plot: sample cross-correlation ρ̂_12(h) of the series {g_n}_{n=1,...,N} and {s_n}_{n=1,...,N} depicted in Figure 5, for |h| ≤ 25. Right plot: local Whittle phase estimates, one corresponding to the one-sided VARFIMA model (dashed line) and one estimated directly from the data (solid line). Both estimates are plotted as functions of a tuning parameter m = N^k, k = 0.1,..., 0.51, where N = 624 is the sample size of the series.

In the analysis of Sela and Hurvich (2009), the one-sided VARFIMA(1, D, 0) model was selected as the best choice (based on AIC) amongst vector autoregressive models of both low and high orders, and also amongst one-sided VARFIMA(p, D, 0) and FIVARMA(p, D, 0) models with p ≥ 1. The estimated VARFIMA(1, D, 0) model, in particular, was

g_n = 0.327 g_{n-1} + … s_{n-1} + ε_{1,n}, s_n = 0.237 g_{n-1} + 0.385 s_{n-1} + ε_{2,n},   (6.1)

where

(ε_{1,n}, (I - B)^{0.4835} ε_{2,n})' ~ N(0, Σ̂).   (6.2)

We should note here that the SAS optimization algorithm we used produced estimates similar to those of Sela's algorithm (implemented in R) for all parameters except d_1, which Sela estimates to be zero, while for this model we estimated it to be 0.191. More specifically, the estimated one-sided VARFIMA(1, D, 0) model is

g_n = 0.132 (0.64) g_{n-1} + … (0.8) s_{n-1} + ε_{1,n}, s_n = 0.56 (0.23) g_{n-1} + 0.38 (0.44) s_{n-1} + ε_{2,n},   (6.3)

where

((I - B)^{0.191 (0.52)} ε_{1,n}, (I - B)^{0.475 (0.62)} ε_{2,n})' ~ N(0, Σ̂),   (6.4)

and the underlying U in Σ = U'U has U_11 = 4.65 (0.131), U_12 = 0.164 (0.19) and U_22 = … (0.75), with standard errors of the estimates added in the parentheses throughout.

The parameter estimates in (6.1)-(6.2) reveal an interesting feature, noted by Sela (2010). In particular, while the lagged services inflation has a significant influence on goods inflation, the lagged goods inflation seems to have a small effect on services inflation. This behavior is potentially related to the so-called gap between the prices in services and the prices in goods, which was studied by Peach et al. (2004).
More specifically, the term gap refers to the tendency of prices in services


More information

Introduction to Time Series Analysis. Lecture 7.

Introduction to Time Series Analysis. Lecture 7. Last lecture: Introduction to Time Series Analysis. Lecture 7. Peter Bartlett 1. ARMA(p,q) models: stationarity, causality, invertibility 2. The linear process representation of ARMA processes: ψ. 3. Autocovariance

More information

9. Multivariate Linear Time Series (II). MA6622, Ernesto Mordecki, CityU, HK, 2006.

9. Multivariate Linear Time Series (II). MA6622, Ernesto Mordecki, CityU, HK, 2006. 9. Multivariate Linear Time Series (II). MA6622, Ernesto Mordecki, CityU, HK, 2006. References for this Lecture: Introduction to Time Series and Forecasting. P.J. Brockwell and R. A. Davis, Springer Texts

More information

Chapter 3: Regression Methods for Trends

Chapter 3: Regression Methods for Trends Chapter 3: Regression Methods for Trends Time series exhibiting trends over time have a mean function that is some simple function (not necessarily constant) of time. The example random walk graph from

More information

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay Midterm

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay Midterm Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay Midterm Chicago Booth Honor Code: I pledge my honor that I have not violated the Honor Code during

More information

Problem Set 2 Solution Sketches Time Series Analysis Spring 2010

Problem Set 2 Solution Sketches Time Series Analysis Spring 2010 Problem Set 2 Solution Sketches Time Series Analysis Spring 2010 Forecasting 1. Let X and Y be two random variables such that E(X 2 ) < and E(Y 2 )

More information

1 Teaching notes on structural VARs.

1 Teaching notes on structural VARs. Bent E. Sørensen February 22, 2007 1 Teaching notes on structural VARs. 1.1 Vector MA models: 1.1.1 Probability theory The simplest (to analyze, estimation is a different matter) time series models are

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

REJOINDER Bootstrap prediction intervals for linear, nonlinear and nonparametric autoregressions

REJOINDER Bootstrap prediction intervals for linear, nonlinear and nonparametric autoregressions arxiv: arxiv:0000.0000 REJOINDER Bootstrap prediction intervals for linear, nonlinear and nonparametric autoregressions Li Pan and Dimitris N. Politis Li Pan Department of Mathematics University of California

More information

Cointegrated VARIMA models: specification and. simulation

Cointegrated VARIMA models: specification and. simulation Cointegrated VARIMA models: specification and simulation José L. Gallego and Carlos Díaz Universidad de Cantabria. Abstract In this note we show how specify cointegrated vector autoregressive-moving average

More information

A nonparametric test for seasonal unit roots

A nonparametric test for seasonal unit roots Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna To be presented in Innsbruck November 7, 2007 Abstract We consider a nonparametric test for the

More information

Mixed frequency models with MA components

Mixed frequency models with MA components Mixed frequency models with MA components Claudia Foroni a Massimiliano Marcellino b Dalibor Stevanović c a Deutsche Bundesbank b Bocconi University, IGIER and CEPR c Université du Québec à Montréal September

More information

Adjusted Empirical Likelihood for Long-memory Time Series Models

Adjusted Empirical Likelihood for Long-memory Time Series Models Adjusted Empirical Likelihood for Long-memory Time Series Models arxiv:1604.06170v1 [stat.me] 21 Apr 2016 Ramadha D. Piyadi Gamage, Wei Ning and Arjun K. Gupta Department of Mathematics and Statistics

More information

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2013, Mr. Ruey S. Tsay. Midterm

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2013, Mr. Ruey S. Tsay. Midterm Booth School of Business, University of Chicago Business 41914, Spring Quarter 2013, Mr. Ruey S. Tsay Midterm Chicago Booth Honor Code: I pledge my honor that I have not violated the Honor Code during

More information

A SARIMAX coupled modelling applied to individual load curves intraday forecasting

A SARIMAX coupled modelling applied to individual load curves intraday forecasting A SARIMAX coupled modelling applied to individual load curves intraday forecasting Frédéric Proïa Workshop EDF Institut Henri Poincaré - Paris 05 avril 2012 INRIA Bordeaux Sud-Ouest Institut de Mathématiques

More information

Multivariate Time Series

Multivariate Time Series Multivariate Time Series Notation: I do not use boldface (or anything else) to distinguish vectors from scalars. Tsay (and many other writers) do. I denote a multivariate stochastic process in the form

More information

THE K-FACTOR GARMA PROCESS WITH INFINITE VARIANCE INNOVATIONS. 1 Introduction. Mor Ndongo 1 & Abdou Kâ Diongue 2

THE K-FACTOR GARMA PROCESS WITH INFINITE VARIANCE INNOVATIONS. 1 Introduction. Mor Ndongo 1 & Abdou Kâ Diongue 2 THE K-FACTOR GARMA PROCESS WITH INFINITE VARIANCE INNOVATIONS Mor Ndongo & Abdou Kâ Diongue UFR SAT, Universit Gaston Berger, BP 34 Saint-Louis, Sénégal (morndongo000@yahoo.fr) UFR SAT, Universit Gaston

More information

Exercises - Time series analysis

Exercises - Time series analysis Descriptive analysis of a time series (1) Estimate the trend of the series of gasoline consumption in Spain using a straight line in the period from 1945 to 1995 and generate forecasts for 24 months. Compare

More information

AR, MA and ARMA models

AR, MA and ARMA models AR, MA and AR by Hedibert Lopes P Based on Tsay s Analysis of Financial Time Series (3rd edition) P 1 Stationarity 2 3 4 5 6 7 P 8 9 10 11 Outline P Linear Time Series Analysis and Its Applications For

More information

A Robust Approach to Estimating Production Functions: Replication of the ACF procedure

A Robust Approach to Estimating Production Functions: Replication of the ACF procedure A Robust Approach to Estimating Production Functions: Replication of the ACF procedure Kyoo il Kim Michigan State University Yao Luo University of Toronto Yingjun Su IESR, Jinan University August 2018

More information

STOR 356: Summary Course Notes

STOR 356: Summary Course Notes STOR 356: Summary Course Notes Richard L. Smith Department of Statistics and Operations Research University of North Carolina Chapel Hill, NC 7599-360 rls@email.unc.edu February 19, 008 Course text: Introduction

More information

CHAPTER 8 MODEL DIAGNOSTICS. 8.1 Residual Analysis

CHAPTER 8 MODEL DIAGNOSTICS. 8.1 Residual Analysis CHAPTER 8 MODEL DIAGNOSTICS We have now discussed methods for specifying models and for efficiently estimating the parameters in those models. Model diagnostics, or model criticism, is concerned with testing

More information

ARMA models with time-varying coefficients. Periodic case.

ARMA models with time-varying coefficients. Periodic case. ARMA models with time-varying coefficients. Periodic case. Agnieszka Wy lomańska Hugo Steinhaus Center Wroc law University of Technology ARMA models with time-varying coefficients. Periodic case. 1 Some

More information

Lecture 1: Fundamental concepts in Time Series Analysis (part 2)

Lecture 1: Fundamental concepts in Time Series Analysis (part 2) Lecture 1: Fundamental concepts in Time Series Analysis (part 2) Florian Pelgrin University of Lausanne, École des HEC Department of mathematics (IMEA-Nice) Sept. 2011 - Jan. 2012 Florian Pelgrin (HEC)

More information

Elements of Multivariate Time Series Analysis

Elements of Multivariate Time Series Analysis Gregory C. Reinsel Elements of Multivariate Time Series Analysis Second Edition With 14 Figures Springer Contents Preface to the Second Edition Preface to the First Edition vii ix 1. Vector Time Series

More information

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8]

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] 1 Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] Insights: Price movements in one market can spread easily and instantly to another market [economic globalization and internet

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED

A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED by W. Robert Reed Department of Economics and Finance University of Canterbury, New Zealand Email: bob.reed@canterbury.ac.nz

More information

2. Multivariate ARMA

2. Multivariate ARMA 2. Multivariate ARMA JEM 140: Quantitative Multivariate Finance IES, Charles University, Prague Summer 2018 JEM 140 () 2. Multivariate ARMA Summer 2018 1 / 19 Multivariate AR I Let r t = (r 1t,..., r kt

More information

Spectral Analysis for Intrinsic Time Processes

Spectral Analysis for Intrinsic Time Processes Spectral Analysis for Intrinsic Time Processes TAKAHIDE ISHIOKA, SHUNSUKE KAWAMURA, TOMOYUKI AMANO AND MASANOBU TANIGUCHI Department of Pure and Applied Mathematics, Graduate School of Fundamental Science

More information

Stochastic volatility models: tails and memory

Stochastic volatility models: tails and memory : tails and memory Rafa l Kulik and Philippe Soulier Conference in honour of Prof. Murad Taqqu 19 April 2012 Rafa l Kulik and Philippe Soulier Plan Model assumptions; Limit theorems for partial sums and

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Switching Regime Estimation

Switching Regime Estimation Switching Regime Estimation Series de Tiempo BIrkbeck March 2013 Martin Sola (FE) Markov Switching models 01/13 1 / 52 The economy (the time series) often behaves very different in periods such as booms

More information

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Midterm

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Midterm Booth School of Business, University of Chicago Business 41914, Spring Quarter 017, Mr Ruey S Tsay Solutions to Midterm Problem A: (51 points; 3 points per question) Answer briefly the following questions

More information

Detection of structural breaks in multivariate time series

Detection of structural breaks in multivariate time series Detection of structural breaks in multivariate time series Holger Dette, Ruhr-Universität Bochum Philip Preuß, Ruhr-Universität Bochum Ruprecht Puchstein, Ruhr-Universität Bochum January 14, 2014 Outline

More information

Time Series Analysis. Correlated Errors in the Parameters Estimation of the ARFIMA Model: A Simulated Study

Time Series Analysis. Correlated Errors in the Parameters Estimation of the ARFIMA Model: A Simulated Study Communications in Statistics Simulation and Computation, 35: 789 802, 2006 Copyright Taylor & Francis Group, LLC ISSN: 0361-0918 print/1532-4141 online DOI: 10.1080/03610910600716928 Time Series Analysis

More information

Akaike criterion: Kullback-Leibler discrepancy

Akaike criterion: Kullback-Leibler discrepancy Model choice. Akaike s criterion Akaike criterion: Kullback-Leibler discrepancy Given a family of probability densities {f ( ; ψ), ψ Ψ}, Kullback-Leibler s index of f ( ; ψ) relative to f ( ; θ) is (ψ

More information

Ch. 14 Stationary ARMA Process

Ch. 14 Stationary ARMA Process Ch. 14 Stationary ARMA Process A general linear stochastic model is described that suppose a time series to be generated by a linear aggregation of random shock. For practical representation it is desirable

More information

Vector Auto-Regressive Models

Vector Auto-Regressive Models Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

3 Theory of stationary random processes

3 Theory of stationary random processes 3 Theory of stationary random processes 3.1 Linear filters and the General linear process A filter is a transformation of one random sequence {U t } into another, {Y t }. A linear filter is a transformation

More information

Ch 6. Model Specification. Time Series Analysis

Ch 6. Model Specification. Time Series Analysis We start to build ARIMA(p,d,q) models. The subjects include: 1 how to determine p, d, q for a given series (Chapter 6); 2 how to estimate the parameters (φ s and θ s) of a specific ARIMA(p,d,q) model (Chapter

More information

A Gaussian state-space model for wind fields in the North-East Atlantic

A Gaussian state-space model for wind fields in the North-East Atlantic A Gaussian state-space model for wind fields in the North-East Atlantic Julie BESSAC - Université de Rennes 1 with Pierre AILLIOT and Valï 1 rie MONBET 2 Juillet 2013 Plan Motivations 1 Motivations 2 Context

More information

VAR Models and Applications

VAR Models and Applications VAR Models and Applications Laurent Ferrara 1 1 University of Paris West M2 EIPMC Oct. 2016 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

Robustness of Principal Components

Robustness of Principal Components PCA for Clustering An objective of principal components analysis is to identify linear combinations of the original variables that are useful in accounting for the variation in those original variables.

More information

On Moving Average Parameter Estimation

On Moving Average Parameter Estimation On Moving Average Parameter Estimation Niclas Sandgren and Petre Stoica Contact information: niclas.sandgren@it.uu.se, tel: +46 8 473392 Abstract Estimation of the autoregressive moving average (ARMA)

More information

Time Series Analysis -- An Introduction -- AMS 586

Time Series Analysis -- An Introduction -- AMS 586 Time Series Analysis -- An Introduction -- AMS 586 1 Objectives of time series analysis Data description Data interpretation Modeling Control Prediction & Forecasting 2 Time-Series Data Numerical data

More information

MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems

MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems Principles of Statistical Inference Recap of statistical models Statistical inference (frequentist) Parametric vs. semiparametric

More information

An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic

An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic Chapter 6 ESTIMATION OF THE LONG-RUN COVARIANCE MATRIX An estimate of the long-run covariance matrix, Ω, is necessary to calculate asymptotic standard errors for the OLS and linear IV estimators presented

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY PREFACE xiii 1 Difference Equations 1.1. First-Order Difference Equations 1 1.2. pth-order Difference Equations 7

More information

Tune-Up Lecture Notes Linear Algebra I

Tune-Up Lecture Notes Linear Algebra I Tune-Up Lecture Notes Linear Algebra I One usually first encounters a vector depicted as a directed line segment in Euclidean space, or what amounts to the same thing, as an ordered n-tuple of numbers

More information

5: MULTIVARATE STATIONARY PROCESSES

5: MULTIVARATE STATIONARY PROCESSES 5: MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalarvalued random variables on the same probability

More information

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu

More information

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley Time Series Models and Inference James L. Powell Department of Economics University of California, Berkeley Overview In contrast to the classical linear regression model, in which the components of the

More information