Vector Autoregressive Moving Average Identification for Macroeconomic Modeling: Algorithms and Theory


Department of Econometrics and Business Statistics
Working Paper 12/2009, November 2009

D. S. Poskitt
Department of Econometrics and Business Statistics, Monash University, VIC 3800, Australia
November 2009
JEL classification: C32, C52, C63, C87

Abstract: This paper develops a new methodology for identifying the structure of VARMA time series models. The analysis proceeds by examining the echelon canonical form and presents a fully automatic, data-driven approach to model specification using a new technique to determine the Kronecker invariants. A novel feature of the inferential procedures developed here is that they work in terms of a canonical scalar ARMAX representation in which the exogenous regressors are given by predetermined contemporaneous and lagged values of other variables in the VARMA system. This feature facilitates the construction of algorithms which, from the perspective of macroeconomic modeling, are efficacious in that they do not use AR approximations at any stage. Algorithms that are applicable to both asymptotically stationary and unit-root, partially nonstationary (cointegrated) time series models are presented. A sequence of lemmas and theorems shows that the algorithms are based on calculations that yield strongly consistent estimates.

Keywords: Algorithms, asymptotically stationary and cointegrated time series, echelon canonical form, Kronecker invariants, VARMA models.

1 Introduction

Since the appearance of the seminal work of Sims (1980) on the relationship between abstract macroeconomic variables and stylized facts as represented by statistical time series models, vector autoregressive (VAR) models of the form
$$A(B)y_t = u_t, \quad t = 1, \ldots, T, \qquad (1.1)$$
have become the cornerstone of much macroeconomic modeling. In equation (1.1) the vector $y_t = (y_{1t}, \ldots, y_{vt})'$ denotes a $v$-component observable process. The $v \times v$ matrix operator $A(z) = A_0 + A_1 z + \cdots + A_p z^p$ in the backward shift or lag operator $B$, viz. $By_t = y_{t-1}$, determines the basic evolutionary properties of the observed process $y_t$, and the stochastic disturbance $u_t = (u_{1t}, \ldots, u_{vt})'$, which is unobserved, determines how chance or random influences enter the system. Apart from their use as the main tool in numerous multivariate macroeconomic forecasting applications (as in Doan, Litterman and Sims, 1984), VARs have found broad application as the foundation of much dynamic macroeconomic modeling. They are used to study long-run equilibrium behaviour, with researchers investigating vector error correction models constructed from VARs fitted to macroeconomic time series (following Engle and Granger, 1987). In structural VAR (SVAR) models, VARs coupled with restrictions derived from economic theory are used to examine the effects of structural shocks on key macroeconomic variables (for a recent contribution see Christiano, Eichenbaum and Vigfusson, 2006). In dynamic stochastic general equilibrium (DSGE) models, VARs are used as auxiliary models for indirect estimation of the DSGE model parameters (Smith, 1993), and to provide approximations to the solutions of DSGE models that have been expanded around their steady state (Del Negro and Schorfheide, 2004). This ubiquitous use of VARs has occurred despite their limitations being well known.
First, VAR specifications form an unattractive class of models for modeling macroeconomic variables since they are not closed under aggregation, marginalization or the presence of measurement error; see Fry and Pagan (2005) and Lütkepohl (2005). Secondly, economic models often imply that the observed processes have a vector autoregressive moving average (VARMA) representation with a non-trivial moving average component, as in Cooley and Dwyer (1998) and, more recently, Fernández-Villaverde, Rubio-Ramírez and Sargent (2005), who have shown that linearized versions of DSGE models generally imply a finite order VARMA structure. In order to expand the representation in (1.1) into the more general VARMA class, let us assume that $u_t$ is a full rank, zero mean, $p$-dependent stationary process with covariance $E[u_t u_{t+\tau}'] = \Gamma_\xi(\tau) = \Gamma_\xi(-\tau)'$, $\tau = 0, \pm 1, \pm 2, \ldots, \pm p$. This implies the existence of a sequence of zero mean, uncorrelated random variables $\varepsilon_t$, defined on the same probability space as $u_t$, such that $u_t = M(B)\varepsilon_t$, $t = 1, \ldots, T$, where $E[\varepsilon_t \varepsilon_t'] = \Sigma > 0$ and, without loss of generality, the $v \times v$ matrix operator $M(z) = M_0 + M_1 z + \cdots + M_p z^p$ satisfies $\det M(z) \neq 0$, $|z| < 1$ (see Hannan, 1971, Theorem 10 and the associated discussion). Substituting $u_t = M(B)\varepsilon_t$

into equation (1.1) gives us the VARMA form
$$A(B)y_t = M(B)\varepsilon_t. \qquad (1.2)$$
The process $y_t$ is assumed to evolve over the time period $t = 1, \ldots, T$, according to the specification given in (1.2), starting from initial values $y_t = \varepsilon_t = 0$, $t \leq 0$. The stochastic behaviour of $y_t$ is now clearly dependent on the operator pair $[A(z) : M(z)]$, with random variation induced by the random disturbances, or shocks, $\varepsilon_t$. More formally, it will be assumed that the disturbances, or innovations, possess the following probability structure:

Assumption 1 The process $\varepsilon_t = (\varepsilon_{1t}, \ldots, \varepsilon_{vt})'$ is a stationary, ergodic, martingale difference sequence. Thus, if $\mathcal{F}_t$ denotes the $\sigma$-algebra generated by $\varepsilon_s$, $s \leq t$, then $E[\varepsilon_t \mid \mathcal{F}_{t-1}] = 0$. Furthermore, $E[\varepsilon_t \varepsilon_t' \mid \mathcal{F}_{t-1}] = \Sigma > 0$ and $E[\varepsilon_{jt}^k] < \infty$, $j = 1, \ldots, v$, $k \geq 2$.

In situations where the theoretical background gives rise to a VARMA model it might be expected that a VAR of high order could be used to approximate the true VARMA structure. Results in the recent literature suggest, however, that such an approach could be fraught with difficulties. Conditions under which VARs can be trusted are examined in Fernández-Villaverde et al. (2005) and Canova (2006), and Chari, Kehoe and McGrattan (2007) state that the currently available data is prohibitive, leading to VARs that have too short a lag length and that provide poor approximations to real business cycles. For a simulated model that has both DSGE elements and data dynamics, Kapetanios, Pagan and Scott (2007) suggest that a sample of observations with a VAR of order 50 is required to adequately capture the effect of some of the shocks. Ravenna (2007) also points out that using a VAR to capture the dynamics of a DSGE model that has in truth a VARMA representation can be misleading, and warns researchers to be cautious when relying on evidence from VARs to build DSGE models.
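As a concrete illustration of the data-generating process in (1.2), the following is a minimal simulation sketch; the bivariate VARMA(1,1) coefficient matrices are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_varma(A, M, T, rng):
    """Simulate A(B) y_t = M(B) eps_t with A_0 = M_0 = I and the
    convention y_t = eps_t = 0 for t <= 0, as in (1.2)."""
    v = A[0].shape[0]
    p = len(A) - 1
    eps = rng.standard_normal((T, v))
    y = np.zeros((T, v))
    for t in range(T):
        acc = eps[t].copy()              # M_0 eps_t with M_0 = I
        for s in range(1, p + 1):
            if t - s >= 0:
                acc += M[s] @ eps[t - s]  # moving-average terms
                acc -= A[s] @ y[t - s]    # autoregressive terms
        y[t] = acc                        # A_0 = I
    return y

# Illustrative bivariate VARMA(1,1): stable AR part, invertible MA part.
A = [np.eye(2), np.array([[-0.5, 0.1], [0.0, -0.3]])]
M = [np.eye(2), np.array([[0.4, 0.0], [0.2, 0.4]])]
y = simulate_varma(A, M, T=200, rng=np.random.default_rng(0))
print(y.shape)  # (200, 2)
```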
Given that the limitations and pitfalls of VARs for macroeconomic analysis have been well documented, one might imagine that applied macroeconomic researchers would have been compelled to consider implementing VARMA models instead. Practitioners appear to have been reluctant to embrace VARMA models, however. The reason for this reluctance is, perhaps, that the complexities associated with the identification and estimation of VARMA models stand in sharp contrast to the ease and accessibility of VARs. Multivariate time series models have, of course, been given considerable attention in the past, and accounts of many of the methods and techniques available are given in Hannan and Deistler (1988) and Lütkepohl (2005), for example. Nevertheless, the question of how best to determine the internal structure of a VARMA model in a direct and straightforward manner has not been completely resolved. Two techniques of identification predominate:

1. The scalar-component methodology pioneered by Tiao and Tsay (1989), and further developed in Athanasopoulos and Vahid (2008). This method uses an adaptation of the canonical correlation analysis introduced in Akaike (1974b) to detect various linear dependencies implied by different structures. It relies on the solution of different eigenvalue problems and solves the underlying multiple decision problem via a sequence of hypothesis tests;

2. The echelon form methodology developed in Hannan and Kavalieris (1984) and Poskitt (1992). In this approach the coefficients of a VARMA model expressed in echelon canonical form are estimated and the associated Kronecker indices determined using regression techniques and model selection criteria, à la AIC (Akaike, 1974a) or BIC (Schwarz, 1978).

An illuminating exposition of the similarities and differences between scalar-component models and echelon forms is given in Tsay (1991), and Athanasopoulos, Poskitt and Vahid (2007) present a detailed analysis and comparison of these two techniques, highlighting the relative merits and advantages of each method (c.f. Nsiri and Roy, 1992). The lack of a single well-defined multivariate parallel to the classical Box-Jenkins ARMA methodology for univariate time series has, no doubt, discouraged researchers from employing VARMAs in practice, despite the fact that "While VARMA models involve additional estimation and identification issues, these complications do not justify systematically ignoring these moving average components." (Cooley and Dwyer, 1998). The broad aim of this paper is to fill this gap and operationalize the use of VARMA models to the point where they can be routinely employed as part of the basic toolkit of the applied macroeconomist. The paper develops a coherent methodology for identifying and estimating VARMA models that can be fully automated. The approach adopted is to construct a modification of the echelon form methodology using a new technique to determine the Kronecker invariants. The scalar-components method is not considered here, firstly, because it is not amenable to automation in a manner similar to that used for VARs, as is the echelon form methodology.
Secondly, given the significance of cointegration in the practical analysis of economic and financial time series, we wish to investigate unit-root nonstationary cointegrated systems and examine the consequences of applying our methods to identify cointegrated VARMA structures. Extensions of the echelon form methodology to cointegrated VARMA models have been analysed in Lütkepohl and Claessen (1997), Bartel and Lütkepohl (1998) and Poskitt (2003, 2006), but to our knowledge similar extensions of the scalar-components methodology to cointegrated processes are not currently available. In both the scalar-components and the echelon form methodologies the initial step is to fit a high-order VAR; the associated residuals are then used as plug-in estimates for the unknown innovations in subsequent stages of the analysis. The applied macroeconomic literature referred to earlier questions the practical efficacy of using long VAR approximations, however, and intimates that the quality of the VAR innovation estimates is likely to be poor. Moreover, Poskitt (2005) presents theoretical arguments showing why the use of the first stage VAR residuals can lead to serious overestimation of the VARMA orders. A novel feature of the inferential procedure developed here is that it does not require the use of autoregressive approximations, thereby circumventing any problems that might be inherent in using a VAR in a macroeconomic modeling context. The paper is organised as follows. The following section defines the inverse echelon form and Kronecker invariants. Section 3 analyzes a single equation canonical representation that forms the basis of the identification of the Kronecker invariants. An algorithm for the

identification of the Kronecker invariants of a stationary ARMA process is then presented in Section 4. Section 5 gives theoretical results stating conditions under which almost sure convergence of the estimated values to the true Kronecker invariants can be achieved. Section 6 shows how the canonical representation considered in Section 3 can be adapted to allow for cointegrated processes, and Section 7 then presents a modification of the identification procedure that gives rise to a strongly consistent model selection process that is applicable to cointegrated processes. The eighth section of the paper presents the theoretical results underpinning the technique outlined in Section 7. Section 9 presents a brief conclusion.

2 The Inverse Echelon Form and Kronecker Invariants

Before continuing let us establish some additional notational conventions and assumptions. The order of $[A(z) : M(z)]$ is defined as $p = \max_{1 \leq i \leq v} n_i$ where, for $i = 1, \ldots, v$, $n_i = \delta_i[A(z) : M(z)]$ denotes the polynomial degree of the $i$th row of $[A(z) : M(z)]$. The integers $n_r$, $r = 1, \ldots, v$, are called the Kronecker indices. The Kronecker indices determine the lag structure of the, so called, inverse echelon form, which is characterized by an operator pair $[A(z) : M(z)]$ with polynomial elements that satisfy, for all $r, c = 1, \ldots, v$,

(i) $a_{rc,0} = m_{rc,0}$,
(ii) $m_{rr}(z) = 1 + m_{rr,1} z + \cdots + m_{rr,n_r} z^{n_r}$, $\quad m_{rc}(z) = m_{rc,n_r - n_{rc} + 1} z^{n_r - n_{rc} + 1} + \cdots + m_{rc,n_r} z^{n_r}$, and
(iii) $a_{rc}(z) = a_{rc,0} + a_{rc,1} z + \cdots + a_{rc,n_r} z^{n_r}$,    (2.1)

where
$$n_{rc} = \begin{cases} \min(n_r + 1, n_c) & r \geq c, \\ \min(n_r, n_c) & r < c. \end{cases}$$
The restrictions implicit in (2.1) differ from those commonly found in the literature on echelon forms; see Hannan and Deistler (1988, §2.5) for a detailed discussion of the conventional case.
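As a small numerical illustration of (2.1), the following sketch tabulates the index matrix $n_{rc}$ and the implied number of free moving-average coefficients per row for a given multi-index; the example Kronecker indices are an arbitrary choice for illustration.

```python
import numpy as np

def echelon_indices(n):
    """Index matrix n_rc of (2.1): n_rc = min(n_r + 1, n_c) for r >= c,
    and min(n_r, n_c) for r < c (rows and columns are 0-based here)."""
    v = len(n)
    nrc = np.empty((v, v), dtype=int)
    for r in range(v):
        for c in range(v):
            nrc[r, c] = min(n[r] + 1, n[c]) if r >= c else min(n[r], n[c])
    return nrc

# Illustrative Kronecker indices for a trivariate system.
n = [2, 1, 1]
nrc = echelon_indices(n)
# Entry m_rc(z) has n_rc free coefficients (lags n_r - n_rc + 1, ..., n_r),
# so the free MA coefficient count in row r is the row sum of n_rc.
print(nrc)
print(nrc.sum(axis=1))
```

Note that the diagonal satisfies $n_{rr} = \min(n_r + 1, n_r) = n_r$, consistent with $m_{rr}(z)$ carrying all lags $1, \ldots, n_r$.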
Conditions (2.1)(i) and (ii) imply that the standard normalization $A_0 = M_0$, with unit leading diagonal, is imposed, but (2.1)(ii) implies that additional exclusion constraints are placed upon the lower order coefficients of $M(z)$, rather than $A(z)$, according to the relative lag lengths. The latter feature arises because the inverse echelon form is constructed from the mapping $[A(z) : M(z)] \mapsto \Psi(z)$ defined by $M(z)\Psi(z) = A(z)$, wherein the coefficients $\Psi_0, \Psi_1, \Psi_2, \ldots$ are derived from the recursive relationships
$$\sum_{j=0}^{i} M_j \Psi_{i-j} = A_i, \quad i = 0, \ldots, p, \quad \text{and} \quad \sum_{j=0}^{p} M_j \Psi_{i-j} = 0, \quad i = p + 1, \ldots. \qquad (2.2)$$
Note that $\Psi_0 = I$ and $\|\Psi_i\| < \infty$, $i = 0, 1, \ldots$, where $\|\Psi_j\|^2 = \mathrm{tr}\,\Psi_j'\Psi_j$, the Euclidean norm. If $\det M(z) \neq 0$, $|z| \leq 1$, then $\Psi_i \to 0$ at an exponential rate as $i \to \infty$ and the power series $\Psi(z) = \lim_{N \to \infty} \sum_{i=0}^{N} \Psi_i z^i$ will be convergent for $|z| \leq 1$. The nomenclature is based

on the fact that it is the mapping obtained via (2.2) which allows us to invert the VARMA representation and express the innovation process in terms of the model parameters and the observables, namely $\varepsilon_t = \sum_{j=0}^{t-1} \Psi_j y_{t-j}$. If we let $ARMA_E(\nu)$ denote the class of all VARMA models in inverse echelon form with multi-index $\nu = \{n_1, \ldots, n_v\}$, then $ARMA_E(\nu)$ defines a canonical structure for the set of VARMA models with McMillan degree $m = \sum_{i=1}^{v} n_i$.

Assumption 2 The pair $[A(z) : M(z)]$ are (left) coprime and $[A(z) : M(z)] \in ARMA_E(\nu)$. It will be supposed that neither $\det A(z)$ nor $\det M(z)$ is identically zero and that the determinants of the polynomial matrices $A(z)$ and $M(z)$ satisfy $\det A(z) \neq 0$, $|z| < 1$, and $\det M(z) \neq 0$, $|z| \leq 1$.

Note that Assumption 2 allows for the possibility that $A(z)$ has zeroes on the unit circle. Let us assume that $\det A(z)$ has $\zeta \leq v$ roots equal to unity, that all other zeroes lie outside the unit circle, and that the individual series $y_{it}$, $i = 1, \ldots, v$, are asymptotically stationary after first differencing, i.e., $\Delta y_t = (1 - B)y_t = y_t - y_{t-1}$, $t = 1, \ldots, T$, is asymptotically stationary. Then the process $y_t$ is non-stationary and cointegrated. We will deal with cointegrated processes in detail below, having first examined the asymptotically stationary case. For the stationary case the condition on the zeroes of $A(z)$ in Assumption 2 is strengthened to $\det A(z) \neq 0$, $|z| \leq 1$. We will refer to the strengthened version of Assumption 2 as Assumption 2'.

The Kronecker indices are not invariant with respect to an arbitrary reordering of the elements of $y_t$ and to this extent the inverse echelon canonical form is only unique modulo such rotations. The variables in $y_t = (y_{1t}, \ldots, y_{vt})'$ can be permuted, however, such that the Kronecker indices of $(y_{r(1)t}, \ldots, y_{r(v)t})'$ are arranged in descending order, $n_{r(1)} \geq n_{r(2)} \geq \cdots \geq n_{r(v)}$, where $r(j)$, $j = 1, \ldots, v$, denotes a permutation of $1, \ldots, v$ that induces the ordering.
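A minimal numerical sketch of the recursion (2.2), under the normalization $A_0 = M_0 = I$; the coefficient matrices are illustrative choices, not taken from the paper.

```python
import numpy as np

def psi_coefficients(A, M, N):
    """Compute Psi_0, ..., Psi_N from M(z) Psi(z) = A(z), cf. (2.2),
    assuming A_0 = M_0 = I. A and M are lists [A_0, ..., A_p]."""
    p = len(A) - 1
    Psi = [A[0].copy()]                       # Psi_0 = I
    for i in range(1, N + 1):
        acc = A[i].copy() if i <= p else np.zeros_like(A[0])
        for j in range(1, min(i, p) + 1):
            acc -= M[j] @ Psi[i - j]          # move known terms across
        Psi.append(acc)                        # M_0 = I, so no solve needed
    return Psi

# Illustrative bivariate VARMA(1,1) operators with det M(z) != 0, |z| <= 1.
A = [np.eye(2), np.array([[-0.5, 0.1], [0.0, -0.3]])]
M = [np.eye(2), np.array([[0.4, 0.0], [0.2, 0.4]])]
Psi = psi_coefficients(A, M, N=30)
# Exponential decay of ||Psi_i||, as noted below (2.2):
norms = [np.linalg.norm(P) for P in Psi]
print(norms[5] > norms[15] > norms[30])  # True
```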
The $r(j)$, $j = 1, \ldots, v$, are unique modulo rotations that leave the ordering $n_{r(1)} \geq \cdots \geq n_{r(v)}$ unchanged, and $(r(j), n_{r(j)})$, $j = 1, \ldots, v$, are referred to as the Kronecker invariants. When expressed in terms of the Kronecker invariants, not only is the representation of the system in inverse echelon form canonical, but the coefficient matrix $A_0 = M_0$ is lower triangular and the individual variables $y_{r(j)t}$, $j = 1, \ldots, v$, are uniquely characterized. In practice, of course, the Kronecker invariants will not be known and we wish to consider identifying them in the sense of estimating or determining them from the data. Moreover, given that the numbering of the variables in $y_t$ assigned by the practitioner is arbitrary, identification of the Kronecker invariants involves the determination of not only the values of $n_{r(1)} \geq n_{r(2)} \geq \cdots \geq n_{r(v)}$, but also the permutation $(r(1), \ldots, r(v))$ of the labels $(1, \ldots, v)$ attached to the variables. At the risk of getting ahead of ourselves, suppose that we know that $\max\{n_1, \ldots, n_v\} \leq h$. We might contemplate examining all ARMA structures in the set $\{ARMA_E(\nu) : \nu \in \{(n_1, \ldots, n_v) : 0 \leq n_r \leq h,\ r = 1, \ldots, v\}\}$. If a full search over all such structures were to be conducted then a total of $(h + 1)^v$ specifications would have to be examined; if $v = 5$ and $h = 12$, say, this means estimating $13^5 = 371{,}293$ different $ARMA_E(\nu)$ models. This brings us face to face with the curse of dimensionality. Considerable savings can be made, however, by noting that $n_{r(j)}$ specifies the degree of the lag operators in the representation of $y_{r(j)t}$, and the pair $(r(j), n_{r(j)})$, $j = 1, \ldots, v$, can therefore be identified on

a variable by variable, or equivalently, equation by equation, basis. Determination of the Kronecker invariants variable by variable involves examining $v(h + 1)$ different specifications at most; if $v = 5$ and $h = 12$ this gives an upper bound of 65, rather than the previous total of $371{,}293$. To determine the Kronecker invariants equation by equation, however, we require a univariate specification for each variable, derived from the overall system representation, which allows the Kronecker invariant pairs $(r(j), n_{r(j)})$, $j = 1, \ldots, v$, to be isolated. We derive such a specification in the next section.

3 A Single Equation Canonical Structure

Various aspects of the relationship between VARMA models and the structure of the individual univariate series have been discussed in the literature, but none consider specifications that are suitable for our current purposes, since they all convolve the individual operators in such a way as to disguise their underlying polynomial degrees. The final form (Wallis, 1977), for example, is obtained by pre-multiplying (1.2) by the adjoint of $A(z)$, denoted $\mathrm{adj}A(z)$, to give
$$\det A(B)\, y_t = \mathrm{adj}A(B) M(B) \varepsilon_t. \qquad (3.1)$$
In general, the operators on the left and right hand sides of this expression all have degree equal to $m$. Consequently, although (3.1) can be used to determine the McMillan degree of the overall system (see Remark 3 below), it does not yield univariate specifications suitable for the identification of the Kronecker invariants. In order to identify $n_{r(1)} \geq n_{r(2)} \geq \cdots \geq n_{r(v)}$, we now introduce a single equation canonical structure, derived from (1.2), that does not obscure the Kronecker invariants. The single equation form depends upon the following lemma.

Lemma 3.1 Suppose that $y_t$ is an ARMA process as in (1.2) satisfying Assumptions 1 and 2. Then for each choice of the process $\nu_t = u_{jt}$, $j = 1, \ldots, v$, where $u_t = (u_{1t}, \ldots, u_{vt})' = A(B)y_t$, there exists a zero mean, scalar white noise process $\eta_t$, with variance $\sigma^2_\eta$, defined on the same probability space as $y_t$, such that
$$\nu_t = \eta_t + \mu_1 \eta_{t-1} + \cdots + \mu_n \eta_{t-n},$$
where $n = n_j$ and the coefficients $\mu_1, \ldots, \mu_n$ of $\mu(z) = 1 + \sum_{s=1}^{n} \mu_s z^s$ are such that the autocovariance generating function of $\nu_t$ equals $\sigma^2_\eta \mu(z)\mu(z^{-1})$ and $\mu(z) \neq 0$, $|z| \leq 1$.

Proof: That $u_t$ is a moving-average process of order $p$ is obvious from expression (1.2). Now let $e_j = (0, \ldots, 0, 1, 0, \ldots, 0)'$ denote the $j$th $v$-element Euclidean basis vector. From

standard results on linear filtering we know that the spectral density of $\nu_t = e_j' u_t$ is
$$S_\nu(\omega) = \frac{1}{2\pi} e_j' M(\omega) \Sigma M(\omega)^* e_j,$$
with corresponding autocovariance generating function
$$\rho(z) = \sum_{s=-n}^{n} \rho_s z^s = e_j' M(z) \Sigma M(z^{-1})' e_j,$$
where $n = n_j$. The remainder of the proof is standard. The polynomial $z^n \rho(z)$ has $2n$ roots and, since the coefficients $\rho_s = \rho_{-s}$, $s = 1, \ldots, n$, of $\rho(z)$ are real, the roots are real or occur in complex conjugate pairs. Now, $\rho(\omega) = 2\pi S_\nu(\omega) > 0$, $-\pi < \omega \leq \pi$, so $\rho(z)$ has no zeroes on the unit circle and we may number and group the roots into two sets $\{\zeta_1, \ldots, \zeta_n\}$ and $\{\zeta_1^{-1}, \ldots, \zeta_n^{-1}\}$ such that $|\zeta_s| < 1$, $s = 1, \ldots, n$. Hence we can construct the operator $m(z) = m_0 \prod_{s=1}^{n} (1 - \zeta_s z)$, where $m_0 = \{\prod_{s=1}^{n}(-\zeta_s^{-1})\, \rho_n\}^{1/2}$, such that
$$\rho(z) = \rho_n z^{-n} \prod_{s=1}^{n} (1 - \zeta_s z)(1 - \zeta_s^{-1} z) = m(z)\, m(z^{-1}).$$
Thus we can select $n$ roots of $\rho(z)$ to construct $\mu(z) = 1 + \mu_1 z + \cdots + \mu_n z^n$ such that $\rho(z) = \sigma^2_\eta \mu(z)\mu(z^{-1})$, where $\mu(z) = m(z)/m_0$ and $\sigma^2_\eta = m_0^2$, and the roots are chosen in such a way that $\mu(z) \neq 0$, $|z| \leq 1$. The existence of the white noise process $\eta_t$ providing a moving-average representation of $\nu_t$ now follows from the spectral factorization theorem, Rozanov (1967, Theorem 9.1, p. 41), c.f. Lütkepohl (2005, Proposition 6.1).

A consequence of Lemma 3.1 is that each variable in $y_t$ admits a scalar ARMAX representation in which the exogenous variables are chosen from contemporaneous and lagged values of other variables in the VARMA system.

Proposition 3.1 Let $y_t$ be an ARMA process satisfying Assumptions 1 and 2, and suppose that the variables have been ordered (renumbered) according to the Kronecker invariants, so that, with a slight abuse of notation, $y_t = (y_{1t}, \ldots, y_{vt})' = (y_{r(1)t}, \ldots, y_{r(v)t})'$ and $n_j = n_{r(j)}$, $j = 1, \ldots, v$. Then for each $j = 1, \ldots, v$ the $j$th equation in (1.2) is equivalent to a scalar ARMAX specification for $z_t = y_{jt}$ of the form
$$z_t + \sum_{s=1}^{n} \alpha_s z_{t-s} + \sum_{i=1}^{j-1} \beta_i y_{it} + \sum_{\substack{i=1 \\ i \neq j}}^{v} \sum_{s=1}^{n} \beta_{i,s} y_{it-s} = \eta_t + \sum_{s=1}^{n} \mu_s \eta_{t-s}, \qquad (3.2)$$
where the order $n = n_j$.
Moreover, $\alpha(z) = 1 + \sum_{s=1}^{n} \alpha_s z^s$, $\beta(z) = (\beta_1 + \sum_{s=1}^{n} \beta_{1,s} z^s, \ldots, \beta_{j-1} + \sum_{s=1}^{n} \beta_{j-1,s} z^s, 0, \sum_{s=1}^{n} \beta_{j+1,s} z^s, \ldots, \sum_{s=1}^{n} \beta_{v,s} z^s)$ and $\mu(z) = 1 + \sum_{s=1}^{n} \mu_s z^s$ are coprime, and the regressors $y_{it}$, $i = 1, \ldots, j-1$, and $y_{it-s}$, $i = 1, \ldots, v$, $i \neq j$, $s = 1, \ldots, n$, are predetermined relative to $z_t$ in the representation (3.2).

Proof: To begin, recall that for $y_t$ ordered according to the Kronecker invariants the echelon form in (2.1) is such that $A_0 = M_0$ is lower triangular and the system representation is (contemporaneously) recursive. Let $a(z) = e_j' A(z)$ denote the nonzero coefficients in the $j$th row of $A(z)$. Then, for the $j$th equation of (1.2) we have
$$a(B) y_t = y_{jt} + \sum_{i=1}^{j-1} a_{ji,0}\, y_{it} + \sum_{i=1}^{v} \sum_{s=1}^{n_j} a_{ji,s}\, y_{it-s} = u_{jt}, \qquad (3.3)$$
where, by Lemma 3.1,
$$u_{jt} = \nu_t = \eta_t + \mu_1 \eta_{t-1} + \cdots + \mu_n \eta_{t-n}. \qquad (3.4)$$
Setting $z_t = y_{jt}$ and reorganizing (3.3), adopting the notational conventions $\alpha_s = a_{jj,s}$, $s = 1, \ldots, n = n_j$, $\beta_i = a_{ji,0}$, $i = 1, \ldots, j-1$, and $\beta_{i,s} = a_{ji,s}$, $i = 1, \ldots, v$, $i \neq j$, $s = 1, \ldots, n$, now gives us the scalar ARMAX specification in (3.2). To show that $a(z)$ and $\mu(z)$, and hence the polynomials $\alpha(z)$, $\beta(z)$ and $\mu(z)$, are coprime, assume otherwise. Then we could cancel the common factors in the representation $a(B)y_t = \mu(B)\eta_t$, obtained by combining (3.3) and (3.4), to give $\bar{a}(B)y_t = \bar{\mu}(B)\eta_t$, where $\delta[\bar{a}, \bar{\mu}] < n_j$. Implementing the same technique as employed in Poskitt (2005), we could now use rows $1, \ldots, j-1$ and $j+1, \ldots, v$ of (1.2), together with $\bar{a}(B)y_t = \bar{\mu}(B)\eta_t$, to construct a unimodular matrix $\bar{U}(z) \neq I$ and an operator pair $[\bar{A}(z) : \bar{M}(z)] \in ARMA_E(\bar{\nu})$, with multi-index $\bar{\nu} = \{\bar{n}_1, \ldots, \bar{n}_v\}$, $\bar{n}_j < n_j$, such that $[A(z) : M(z)] = \bar{U}(z)[\bar{A}(z) : \bar{M}(z)]$, contradicting Assumption 2. Finally, it is obvious that the lagged values $y_{it-s}$, $i = 1, \ldots, v$, $i \neq j$, $s = 1, \ldots, n$, are predetermined. That the same is true of the contemporaneous regressors $y_{it}$, $i = 1, \ldots, j-1$, follows from the fact that the structure is recursive and we can orthogonalize the innovation process whilst maintaining both the recursive structure and the moving average row degrees.
To verify this, let $\Sigma = CDC'$ denote the Choleski decomposition of $\Sigma$, where $C$ is lower triangular with unit leading diagonal elements and $D = \mathrm{diag}(d_1^2, \ldots, d_v^2)$. Then $w_t = C^{-1}\varepsilon_t$ is a martingale difference innovation sequence with covariance matrix $D$ and we can rewrite the ARMA system in (1.2) as
$$A_0 y_t + \sum_{s=1}^{p} A_s y_{t-s} = L_0 w_t + \sum_{s=1}^{p} L_s w_{t-s}, \qquad (3.5)$$
where $L_0 = M_0 C$ is again lower triangular with unit leading diagonal elements and, because post-multiplication of $M_s$ by $C$ reproduces its zero-row structure, each $L_s = M_s C$, $s = 1, \ldots, n$, has the same null rows as the corresponding $M_s$. From the $j$th equation of (3.5) we now have
$$u_{jt} = w_{jt} + \sum_{i=1}^{j-1} l_{ji,0}\, w_{it} + \sum_{i=1}^{v} \sum_{s=1}^{n_j} l_{ji,s}\, w_{it-s}. \qquad (3.6)$$

Thus, from (3.3) and (3.6) we see that, in addition to $w_{it}$, the variable $y_{it}$ depends at most on $w_{1t}, \ldots, w_{i-1,t}$. Since by construction the elements of $w_t$ are mutually uncorrelated, it follows that for $i = 1, \ldots, j-1$ we have $E[y_{it} w_{jt} \mid \mathcal{F}_{t-1}] = 0$, as required.

Two aspects of Proposition 3.1 that are of particular interest here are: (i) that the degree of the lag operators in (3.2) depends only on the value of the Kronecker invariant associated with the variable at hand, and (ii) that the contemporaneous component depends only on those variables associated with a smaller Kronecker invariant. This means that knowledge of the Kronecker index associated with $y_{r(j)t}$ tells us the lag length of all the variables appearing in the ARMAX realization of $y_{r(j)t}$, and knowing the ranking of the Kronecker index relative to the other indices, i.e. knowledge that $n_{r(j)} \geq n_{r(i)}$, $i = j+1, \ldots, v$, tells us about the recursive structure. Note that explicit knowledge of the values of the other Kronecker invariants is not required, the ARMAX specification of the $r(j)$th equation being a known function of $d_j(n) = (j - 1) + (v + 1)n$ parameters. These features suggest that a sensible approach to the identification of the Kronecker invariants is to search through a collection of ARMAX models for each variable, supposing that the fitted order coincides with the smallest unknown Kronecker invariant. The details of such a procedure are presented in the following section.

4 Identification Algorithm: Stationary Case

Let $y_t$, $t = 1, \ldots, T$, denote a realisation of $T$ observations where $y_t$ is an ARMA process as in (1.2) satisfying Assumptions 1 and 2. We have already observed that identification of the Kronecker invariants involves the determination of both the values of $n_{r(1)} \geq \cdots \geq n_{r(v)}$ and the permutation of the variables in $y_t$, $P(1, \ldots, v) = (r(1), \ldots, r(v))$ say, that results in an $ARMA_E$ form with multi-index $(n_{r(1)}, \ldots, n_{r(v)})$ for $Py_t = (y_{r(1)t}, \ldots, y_{r(v)t})'$.
The following algorithm identifies the Kronecker invariants equation by equation whilst constructing $P$ via a sequence of elementary row operations. The algorithm exploits the implications of Proposition 3.1 and represents an adaptation to VARMA models of an approach to the identification of the order of scalar processes first outlined in Poskitt and Chung (1996).

ALGORITHM $ARMA_E(\nu)$.

Initialization: Set $n = 1$, $j = v$, $P = I$ and $N = \{1, \ldots, v\}$. Compute the mean corrected values $\bar{y}_t = y_t - \bar{y}$, $t = 1, \ldots, T$, where $\bar{y} = T^{-1} \sum_{t=1}^{T} y_t$. For each $i \in N$, set $\sigma^2_{\eta,i}(0)$ equal to the residual mean square from the regression of $\bar{y}_{it}$ on $\bar{y}_{kt}$, $k = 1, \ldots, v$, $k \neq i$.

while $j \geq 1$: for $i(k) \in N$, $k = 1, \ldots, v$,

1. Set $\tilde{y}_t = E_{i(k),j}[P y_t]$, where $E_{r_1,r_2}$ denotes the $v \times v$ elementary matrix that induces an interchange of rows $r_1$ and $r_2$ in $H$ when postmultiplied by $H$, and evaluate initial estimates of the $j$th scalar ARMAX form for $z_t = \bar{y}_{i(k)t}$:

(a) Construct estimates of the nonzero coefficients in $\tilde{a}(z) = e_j' \tilde{A}(z)$, the $j$th row of $\tilde{A}(z) = E_{i(k),j} P A(z) P' E_{i(k),j}$, by solving the equations
$$\sum_{s=0}^{n} \tilde{a}_s \tilde{C}_y(r + s) = 0, \quad r = n + 1, \ldots, 2n, \qquad (4.1)$$
for $\tilde{a}_0 = (\tilde{a}_{1,0}, \ldots, \tilde{a}_{(j-1),0}, 1, 0, \ldots, 0)$ and $\tilde{a}_s = (\tilde{a}_{1,s}, \ldots, \tilde{a}_{v,s})$, $s = 1, \ldots, n$, where
$$\tilde{C}_y(r) = \tilde{C}_y(-r)' = T^{-1} \sum_{t=r+1}^{T} \tilde{y}_t \tilde{y}_{t-r}' \quad \text{for } r = 1, \ldots, T - 1.$$
Now set $\tilde{\alpha}_s = \tilde{a}_{j,s}$, $s = 1, \ldots, n$, $\tilde{\beta}_i = \tilde{a}_{i,0}$, $i = 1, \ldots, j-1$, and $\tilde{\beta}_{i,s} = \tilde{a}_{i,s}$, $i = 1, \ldots, v$, $i \neq j$, $s = 1, \ldots, n$.

(b) For $r = 1, \ldots, n$ form
$$\tilde{C}_v(r) = \tilde{C}_v(-r) = \sum_{s=0}^{n} \sum_{u=0}^{n} \tilde{a}_s \tilde{C}_y(r + s - u) \tilde{a}_u' = \int_{-\pi}^{\pi} \tilde{a}(\omega) \tilde{I}_y(\omega) \tilde{a}(\omega)^* \exp(-\imath\omega r)\, d\omega, \qquad (4.2)$$
where $\tilde{a}(\omega) = \tilde{a}_0 + \tilde{a}_1 \exp(\imath\omega) + \cdots + \tilde{a}_n \exp(\imath\omega n) = \tilde{a}(z)|_{z = e^{\imath\omega}}$ and
$$\tilde{I}_y(\omega) = \frac{1}{2\pi} \sum_{s=-T+1}^{T-1} \tilde{C}_y(s) \exp(\imath\omega s).$$
Set
$$\tilde{S}_v(\omega) = \frac{1}{2\pi} \sum_{s=-n}^{n} \tilde{C}_v(s) \exp(\imath\omega s). \qquad (4.3)$$
Compute estimates $\tilde{\mu}_s$, $s = 1, \ldots, n$, of the coefficients in the scalar moving average representation of $\tilde{v}_t = \tilde{u}_{jt}$, where $\tilde{u}_{jt} = e_j' \tilde{M}(B)\varepsilon_t$ and $\tilde{M}(z) = E_{i(k),j} P M(z)$, by solving the equation system
$$\sum_{s=0}^{n} \tilde{\mu}_s \widetilde{Ci}_v(l + s) = 0, \quad l = 1, \ldots, n,$$
with $\tilde{\mu}_0 = 1$, where
$$\widetilde{Ci}_v(r) = \int_{-\pi}^{\pi} \frac{\tilde{a}(\omega) \tilde{I}_y(\omega) \tilde{a}(\omega)^*}{|\tilde{S}_v(\omega)|^2} \exp(-\imath\omega r)\, d\omega. \qquad (4.4)$$

2. Compute a pseudo maximum likelihood estimate (pseudo MLE) of the innovation variance $\sigma^2_\eta$.

(a) Form the $(v + 1)$-vector sequence $(\xi_t', \phi_t)'$ by solving
$$\begin{bmatrix} \xi_t \\ \phi_t \end{bmatrix} + \sum_{s=1}^{n} \tilde{\mu}_s \begin{bmatrix} \xi_{t-s} \\ \phi_{t-s} \end{bmatrix} = \begin{bmatrix} \tilde{y}_t \\ \tilde{\eta}_t \end{bmatrix}$$

for $t = 1, \ldots, T$, where
$$\tilde{\eta}_t = \sum_{s=0}^{n} \tilde{a}_s \tilde{y}_{t-s} - \sum_{s=1}^{n} \tilde{\mu}_s \tilde{\eta}_{t-s} = z_t + \sum_{s=1}^{n} \tilde{\alpha}_s z_{t-s} + \sum_{i=1}^{j-1} \tilde{\beta}_i \tilde{y}_{it} + \sum_{\substack{i=1 \\ i \neq j}}^{v} \sum_{s=1}^{n} \tilde{\beta}_{i,s} \tilde{y}_{it-s} - \sum_{s=1}^{n} \tilde{\mu}_s \tilde{\eta}_{t-s}$$
and the recursions are initiated at $\tilde{\eta}_t = 0$ and $(\xi_t', \phi_t)' = 0$, $t \leq 0$.

(b) Set $\tilde{c}_\eta(0) = T^{-1} \sum_{t=1}^{T} \tilde{\eta}_t^2$ and calculate
$$\tilde{c}_{\eta\xi}(s) = T^{-1} \sum_{t=s+1}^{T} \tilde{\eta}_t \xi_{t-s} \quad \text{and} \quad \tilde{c}_{\eta\phi}(s) = T^{-1} \sum_{t=s+1}^{T} \tilde{\eta}_t \phi_{t-s}.$$
Now compute the mean squared error
$$\tilde{\sigma}^2_\eta(n) = \tilde{c}_\eta(0) + \sum_{s=0}^{n} \hat{a}_s \tilde{c}_{\eta\xi}(s) + \sum_{s=1}^{n} \hat{\mu}_s \tilde{c}_{\eta\phi}(s),$$
where $\hat{a}_s$, $s = 0, \ldots, n$, and $\hat{\mu}_s$, $s = 1, \ldots, n$, denote the coefficient values obtained from the Toeplitz regression of $\tilde{\eta}_t$ on $\xi_{it}$, $i = 1, \ldots, j-1$, $\xi_{t-s}$, $s = 1, \ldots, n$, and $\phi_{t-s}$, $s = 1, \ldots, n$.

3. Apply the model selection rule:

(a) Evaluate the criterion function
$$IC_T(n) = T \log \tilde{\sigma}^2_\eta(n) + p_T(d_j(n)),$$
where the penalty term $p_T(d_j(n)) > 0$ is a real valued function, monotonically increasing in $d_j(n)$ and non-decreasing in $T$.

(b) if $IC_T(n) > IC_T(n - 1)$: set $r(j) = i(k)$; $n_{r(j)} = n - 1$; and update $P = E_{i(k),j} P$; $N = N \setminus i(k)$; $j = j - 1$. end

end for $i(k) \in N$.

if $n < h_T$: increment $n = n + 1$. else: for each $i(k) \in N$, $k = 1, \ldots, v$, set $n_{r(j)} = n$; $r(j) = i(k)$; and update $P = E_{i(k),j} P$; $N = N \setminus i(k)$; $j = j - 1$. end for $i(k) \in N$. end

end when $j = 0$.
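To fix ideas, a scalar analogue of step 1(a) can be sketched as follows. For a univariate ARMA(1,1) the extended Yule-Walker equation determines the autoregressive coefficient from autocovariances at lags beyond the moving-average order; the closed-form ARMA(1,1) autocovariance formulas used to generate the inputs are standard textbook results and not part of the algorithm itself.

```python
# Scalar analogue of step 1(a): for an ARMA(1,1) process
#   y_t = phi * y_{t-1} + eta_t + theta * eta_{t-1},
# i.e. a1 = -phi in the 1 + a1*z convention of the paper, the
# extended Yule-Walker equation gamma(2) + a1 * gamma(1) = 0
# identifies a1 from autocovariances beyond the MA order.

def arma11_autocovariances(phi, theta, sigma2, max_lag):
    """Theoretical autocovariances of an ARMA(1,1) process."""
    g0 = sigma2 * (1 + 2 * phi * theta + theta ** 2) / (1 - phi ** 2)
    g1 = sigma2 * (1 + phi * theta) * (phi + theta) / (1 - phi ** 2)
    gamma = [g0, g1]
    for _ in range(2, max_lag + 1):
        gamma.append(phi * gamma[-1])   # gamma(r) = phi * gamma(r-1), r >= 2
    return gamma

phi, theta = 0.6, 0.4                   # illustrative AR and MA parameters
gamma = arma11_autocovariances(phi, theta, 1.0, 3)
a1_hat = -gamma[2] / gamma[1]           # solve gamma(2) + a1*gamma(1) = 0
print(a1_hat)                           # approximately -0.6, i.e. a1 = -phi
```

In the algorithm proper the theoretical autocovariances are replaced by the sample quantities $\tilde{C}_y(r)$, which is what makes the resulting estimates strongly consistent rather than exact.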

Some remarks on the algorithm's rationale and numerical implementation are in order.

REMARK 1. The first step of the algorithm is designed to provide first stage consistent estimates of the parameters in the scalar ARMAX representation in (3.2). Step 1(a) is based on the fact that, from Proposition 3.1, the lagged values $\tilde{y}_{t-n-s}$, $s = 1, \ldots, n$, form a natural set of instruments to use to estimate the autoregressive and predetermined regressor coefficients of the $r(j)$th equation. Thus, post-multiplying by $\tilde{y}_{t-n-s}'$, taking expectations, and writing $\tilde{\Gamma}_y(r) = E[\tilde{y}_t \tilde{y}_{t-r}']$ for the theoretical autocovariance function of $\tilde{y}_t$, remembering that in Proposition 3.1 the variables are assumed to be ordered according to the Kronecker invariants, we find that
$$\sum_{s=0}^{n} \tilde{a}_s \tilde{\Gamma}_y(r + s) = 0, \quad r = n + 1, \ldots, 2n. \qquad (4.5)$$
We can see that expression (4.1) forms an empirical counterpart to (4.5) in which the variables have been appropriately permuted. Given that $\tilde{C}_y(r)$ is a strongly consistent estimator of $\tilde{\Gamma}_y(r)$, it follows immediately that if the Kronecker invariant pairs $(r(i), n_{r(i)})$, $i = j, \ldots, v$, are correctly specified, then $\tilde{\alpha}(z)$ and $\tilde{\beta}(z)$ will yield strongly consistent estimates of $\alpha(z)$ and $\beta(z)$ respectively. A corollary of the consistency of $\tilde{\alpha}(z)$ and $\tilde{\beta}(z)$ is that
$$\int_{-\pi}^{\pi} \tilde{a}(\omega) \tilde{I}_y(\omega) \tilde{a}(\omega)^* \exp(-\imath\omega r)\, d\omega = \int_{-\pi}^{\pi} \tilde{a}(\omega) \tilde{S}_y(\omega) \tilde{a}(\omega)^* \exp(-\imath\omega r)\, d\omega + o(1)$$
almost surely (a.s.), wherein $\tilde{S}_y(\omega)$ denotes the spectral density of $\tilde{y}_t$. Given that $\tilde{S}_y(\omega) = (2\pi)^{-1} K(\omega) \Sigma K(\omega)^*$, where $K(\omega) = \tilde{A}(\omega)^{-1} \tilde{M}(\omega)$, it follows that the quadratic form
$$\tilde{a}(\omega) \tilde{S}_y(\omega) \tilde{a}(\omega)^* = (2\pi)^{-1} e_j' \tilde{M}(\omega) \Sigma \tilde{M}(\omega)^* e_j = (2\pi)^{-1} \sigma^2_\eta |\mu(\omega)|^2,$$
and hence that the autocovariance estimate computed in Step 1(b) at (4.2) is consistent for the autocovariance of $\nu_t = \tilde{u}_{jt}$. The spectral estimator $\tilde{S}_v(\omega)$ computed at (4.3) is therefore consistent for $S_v(\omega)$, the spectrum of $\nu_t$. Similarly,
$$\int_{-\pi}^{\pi} \frac{\tilde{a}(\omega) \tilde{I}_y(\omega) \tilde{a}(\omega)^*}{|\tilde{S}_v(\omega)|^2} \exp(-\imath\omega r)\, d\omega = \int_{-\pi}^{\pi} \frac{1}{S_v(\omega)} \exp(-\imath\omega r)\, d\omega + o(1) \quad \text{a.s.,}$$
so the $\tilde{Ci}_v(r)$ in (4.4) yield consistent estimates of the corresponding inverse autocovariances, the coefficients in the Fourier expansion of $S_v(\omega)^{-1} = 2\pi/\sigma^2_\eta|\mu(\omega)|^2$. From the latter it follows that $\tilde\mu_s$, $s = 1,\dots,n$, provide consistent estimates of the coefficients in $\mu(z)$. Detailed particulars of the arguments underlying this heuristic rationale are presented below.

REMARK 2. Under Gaussian assumptions,

$$-\frac{T}{2}\log\Big(\frac{1}{T}\sum_t \tilde\eta_t^2\Big),$$

where $\mu(B)\tilde\eta_t = \alpha(B)z_t + \beta(B)'\tilde y_t = \tilde a(B)'\tilde y_t$, forms an approximation to the kernel of the marginal log-likelihood of the scalar ARMAX specification, concentrated with respect to $\sigma^2_\eta$. Recalling the convention about values before $t = 1$, we find that

$$\frac{\partial \tilde\eta_t}{\partial \alpha_s} = -\tilde\xi_{j(t-s)}, \qquad \frac{\partial \tilde\eta_t}{\partial \beta_{is}} = -\tilde\xi_{i(t-s)} \qquad\text{and}\qquad \frac{\partial \tilde\eta_t}{\partial \mu_s} = -\tilde\varphi_{t-s},$$

and hence that

$$\frac{\partial \sum_t \tilde\eta_t^2}{\partial \alpha_j} = -2\sum_t \tilde\xi_{t-j}\tilde\eta_t \qquad\text{and}\qquad \frac{\partial \sum_t \tilde\eta_t^2}{\partial \mu_j} = -2\sum_t \tilde\varphi_{t-j}\tilde\eta_t.$$

Thus Step 2 may be viewed as a Gauss-Newton iteration designed to minimize $T^{-1}\sum_t \tilde\eta_t^2$, in line with the Hannan and Rissanen (1982) procedure, only now the calculations are initiated with the consistent parameter estimates provided by $\tilde a(z)$ and $\tilde\mu(z)$. Revised parameter estimates are constructed as $\tilde\alpha_s + \check\alpha_s$, $\tilde\beta_{i,s} + \check\beta_{i,s}$ and $\tilde\mu_s + \check\mu_s$, $s = 1,\dots,n$, where the parameter adjustments are given by the regression coefficients in the regression of $\tilde\eta_t$ on $\tilde\xi_{it}$, $i = 1,\dots,j-1$, and $\tilde\xi_{t-s}$ and $\tilde\varphi_{t-s}$, $s = 1,\dots,n$. The iteration can then be repeated until convergence occurs, if so desired. Because $T^{-1}\sum_t \tilde\eta_t^2$ is quasi-quadratic, it seems likely that for large values of $T$ no more than two or three iterations will be required. For theoretical purposes we will therefore assume that the minimum residual mean square has been achieved, although in order to maintain a closed-form expression for the algorithm we have chosen to express the pseudo-MLE via a single iteration.

REMARK 3. The decision rule embodied in Step 3 leads to the identification of the Kronecker invariants as the first local minimum of $IC_T(n)$ in the interval $0 \le n \le h_T$, so that $IC_T(n) \ge IC_T(n_{r(j)})$ for $n < n_{r(j)}$, and either $IC_T(n_{r(j)}+1) > IC_T(n_{r(j)})$ or $n_{r(j)} = h_T$, $j = 1,\dots,v$. It follows that once the practitioner has designated a specification for the criterion function $IC_T(n)$, and prescribed the upper bound $h_T$, the identification of the Kronecker invariants is a fully automatic procedure.
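The Gauss-Newton step of Remark 2 can be illustrated on the simplest case, a scalar MA(1), where the residual and its derivative are generated by the same kind of recursive filtering as $(\tilde\xi_t, \tilde\varphi_t)$. This is a hypothetical sketch under those simplifications, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
T, mu_true = 50000, 0.5
e = rng.standard_normal(T + 1)
y = e[1:] + mu_true * e[:-1]             # MA(1): y_t = e_t + mu*e_{t-1}

def gauss_newton_step(mu, y):
    """One Gauss-Newton update for min_mu sum_t eta_t(mu)^2, where the
    residual and its derivative obey eta_t = y_t - mu*eta_{t-1} and
    phi_t = d(eta_t)/d(mu) = -eta_{t-1} - mu*phi_{t-1}, zero initial values."""
    eta_prev = phi_prev = 0.0
    s_pe = s_pp = 0.0
    for yt in y:
        eta = yt - mu * eta_prev
        phi = -eta_prev - mu * phi_prev
        s_pe += phi * eta
        s_pp += phi * phi
        eta_prev, phi_prev = eta, phi
    return mu - s_pe / s_pp              # adjustment = regression of eta on -phi

mu = 0.4                                 # stands in for the first-stage estimate
for _ in range(3):                       # two or three iterations usually suffice
    mu = gauss_newton_step(mu, y)
```

Starting from a consistent first-stage value, a couple of such steps already place the estimate in a neighbourhood of the truth, which is the sense in which the single-iteration closed form stands in for the fully iterated pseudo-MLE.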
Many well-known information-theoretic criteria, such as AIC and BIC, are encompassed by $IC_T(n)$, and in light of the extensive literature on such criteria we can anticipate that if the penalty term $p_T(d_j(n))$ is assigned appropriately then asymptotically the criterion function $IC_T(n)$ will possess a global minimum when $n$ equals the true Kronecker index, $n_{r(j)0}$. That this is indeed the case is verified below, where it is shown that the Kronecker invariants identified by the algorithm will be strongly consistent if $p_T(d_j(n))/T \to 0$ and $\log\log T/p_T(d_j(n)) \to 0$ as $T \to \infty$. These requirements allow for a wide range of possibilities beyond the conventional AIC- or BIC-type penalties, suggesting that the most appropriate choice of $IC_T(n)$ may be an empirical issue. Obviously the design parameter $h_T$ must be assigned such that $h_T \ge \max\{n_1,\dots,n_v\}$. This can be done by noting from the final form in (3.1) that each variable in $y_t$ has a scalar ARMA($q$, $q$) representation with $q \le m$, and $q = m$ for at least one $y_{jt}$, $j = 1,\dots,v$.

By applying the univariate ARMA algorithm of Poskitt and Chung (1996) to each $y_{jt}$, $j = 1,\dots,v$, we can generate $v$ estimates, $\tilde q_1, \dots, \tilde q_v$ say. Suppose that this is done using AIC($q$) for the criterion function, for $q$ over the range $0 \le q \le (\log T)^a$, $a > 1$. Setting $h_T = \max\{\tilde q_1, \dots, \tilde q_v\}$ yields an estimate of the McMillan degree, and by Theorem 4.1 of Poskitt and Chung (1996), $\lim_{T\to\infty} h_T \ge m$ with probability one. Thus $h_T$ provides a value for the upper bound that will exceed the largest Kronecker invariant almost surely.

REMARK 4. Thus far we have expressed the computational steps of the algorithm in terms of the required statistical calculations, but we have not commented on numerical implementation. Various measures can be taken to optimize the efficiency of the computations. For example, advantage can be taken of the fast Fourier transform (FFT) when evaluating the covariances and convolutions required to implement the algorithm. Thus the covariances $C_y(r)$, $r = 0, \pm 1, \dots, \pm(T-1)$, of the raw data $y_t$ can be calculated once and for all from

$$C_y(r) = \frac{2\pi}{N} \sum_{s=1}^{N} I_y(\omega_s) \exp(-\imath\omega_s r),$$

where $\omega_s = 2\pi s/N$, $s = 1,\dots,N$, $N \ge 2T$, and $I_y(\omega) = (2\pi T)^{-1} Z_y(\omega) Z_y(\omega)^*$ with $Z_y(\omega) = \sum_{t=1}^{T} y_t \exp(\imath\omega t)$. It is well known (Bingham, 1974) that this method takes of the order of $T\log T$ operations rather than the order $T^2$ operations used with standard methods. The autocovariances and periodogram ordinates of $\tilde y_t$ can then be determined using elementary row and column transformations, as in $\tilde C_y(r) = E_{i(k),j} P C_y(r) P' E_{i(k),j}'$ and $\tilde I_y(\omega) = E_{i(k),j} P I_y(\omega) P' E_{i(k),j}'$. Similarly, the frequency-domain expression for $\tilde C_v(r)$ in (4.2) is not suitable for computation, but the integral may be replaced by an appropriate Riemann sum and evaluated via the FFT using

$$\tilde C_v(r) = \sum_{s=0}^{n}\sum_{u=0}^{n} \tilde a_s' \tilde C_y(r+s-u) \tilde a_u = \frac{2\pi}{N} \sum_{s=1}^{N} \tilde a(\omega_s)' \tilde I_y(\omega_s) \tilde a(\omega_s) \exp(-\imath\omega_s r). \tag{4.6}$$

Since both $\tilde I_y(\omega)$ and $\tilde a(\omega)$ are polynomial (time-limited), the use of (4.6) does not induce aliasing, whereas replacing the integral in (4.4) by

$$\frac{2\pi}{N}\sum_{s=1}^{N} \frac{\tilde a(\omega_s)' \tilde I_y(\omega_s) \tilde a(\omega_s)}{|\tilde S_v(\omega_s)|^2} \exp(-\imath\omega_s r) = \sum_{j=-\infty}^{\infty} \tilde{Ci}_v(r+jN)$$

clearly results in some aliasing relative to the basic definition of $\tilde{Ci}_v(r)$. However, $\tilde S_v(\omega)$ corresponds to the power spectrum of an invertible moving average, implying that for $T$ sufficiently large $|\tilde{Ci}_v(r)| < \kappa\lambda^{|r|}$ with probability one, where $0 < \lambda < 1$ and $\kappa$ denotes a fixed constant. Thus

$$\Big|\tilde{Ci}_v(r) - \sum_{j=-\infty}^{\infty} \tilde{Ci}_v(r+jN)\Big| < 2\kappa \exp(N\log\lambda)/(1-\lambda^N)$$

and the effects of aliasing will disappear asymptotically.
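The FFT-based covariance calculation of Remark 4 can be sketched in the univariate case; zero-padding to $N \ge 2T$ plays the role of the condition on $N$ above, preventing the circular convolution implicit in the DFT from wrapping around:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 512
y = rng.standard_normal(T)
y0 = y - y.mean()

# direct O(T^2) computation of C_y(r) = T^{-1} sum_{t>r} y0_t y0_{t-r}
direct = np.array([(y0[r:] * y0[:T - r]).sum() / T for r in range(T)])

# FFT route, O(T log T): the inverse DFT of the zero-padded squared modulus
# of the DFT returns the same autocovariances, because padding to N >= 2T
# makes the circular convolution coincide with the linear one
N = 2 * T
Z = np.fft.fft(y0, n=N)
fft_cov = np.fft.ifft(np.abs(Z) ** 2).real[:T] / T
```

The two routes agree to machine precision, with the second requiring only two length-$N$ transforms.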

REMARK 5. The calculations of $\tilde\alpha(z)$, $\tilde\beta(z)$ and $\tilde\mu(z)$ are Toeplitz in nature, meaning that the matrices in the linear equations being solved have constant elements down any diagonal. This feature is particularly important in the context of $\tilde\mu(z)$ because at Step 2 it is necessary for $\tilde\mu(z)$ to be invertible, for otherwise the recursions forming $(\tilde\xi_t, \tilde\varphi_t)$ will explode. The requirement that $\tilde\mu(z) \neq 0$, $|z| \le 1$, is met since solving (4.4) for $\tilde\mu(z)$ is equivalent to solving Yule-Walker equations in the inverse autocovariances. When computing $\tilde\mu(z)$, advantage can therefore be taken of the Levinson-Durbin recursions. Indeed, we can also embed the evaluation of $\tilde\alpha(z)$ and $\tilde\beta(z)$, as well as the calculations of Step 2, into appropriate multivariate Levinson-Durbin (Whittle) recursions. Details of the latter, which follow the development in Hannan and Deistler (1988), are omitted. It is well known that the use of Toeplitz calculations can have undesirable end-effects. These effects can be ameliorated by the use of Burg-type procedures (Paulsen and Tjøstheim, 1985; Tjøstheim and Paulsen, 1983), but the use of a data taper in conjunction with Whittle-type estimators, such as those implicitly being employed here, can be equally beneficial (Dahlhaus, 1988). Moreover, the benefits obtained via a data taper can be achieved without incurring the additional computational burden entailed in using Burg-type procedures. Given that we envisage conducting the computations using the FFT, the employment of data tapering seems natural.

5 Some Theoretical Properties

In this section of the paper we will first state our main theorem and then present a set of lemmas that form the basis of its proof. Our main result presents conditions on the penalty term $p_T(d_j(n))$ assigned to the criterion function $IC_T(n)$ that will ensure that the indices obtained by implementing the above algorithm will yield consistent estimates of the Kronecker invariants.
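The Levinson-Durbin recursion invoked in Remark 5 solves a Toeplitz (Yule-Walker) system of order $n$ in $O(n^2)$ operations; in the algorithm it would be applied to the inverse autocovariances $\tilde{Ci}_v(r)$, but the recursion itself is generic. A minimal univariate sketch, checked here against ordinary autocovariances of a known AR(1):

```python
import numpy as np

def levinson_durbin(c, n):
    """Solve the Yule-Walker equations of order n for a(z) = 1 + a_1 z + ... + a_n z^n
    given autocovariances c[0..n], via the Levinson-Durbin recursion.
    Returns the coefficient vector and the prediction error variance."""
    a = np.zeros(n + 1)
    a[0] = 1.0
    sigma2 = c[0]
    for k in range(1, n + 1):
        kappa = -np.dot(a[:k], c[k:0:-1]) / sigma2     # reflection coefficient
        a[1:k + 1] = a[1:k + 1] + kappa * a[k - 1::-1]  # order-update of a(z)
        sigma2 *= (1.0 - kappa ** 2)                    # error variance update
    return a, sigma2

# check against the AR(1) process y_t = 0.5 y_{t-1} + e_t, var(e) = 1,
# whose autocovariances are c(r) = 0.5**r / (1 - 0.25)
c = np.array([0.5 ** r for r in range(3)]) / 0.75
a, s2 = levinson_durbin(c, 1)
```

The reflection coefficients $|\kappa_k| < 1$ generated along the way guarantee that the fitted operator has no zeros inside the unit circle, which is exactly the invertibility property required of $\tilde\mu(z)$ at Step 2.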
Recall that identification of the Kronecker invariants also involves the determination of the permutation $(r(1),\dots,r(v))$ of the original labels $(1,\dots,v)$ attached to the variables. In what follows we will let $r(q)_T$, $q = 1,\dots,v$, denote the reordering of $r = 1,\dots,v$ induced by $n_{r(j)T}$, $j = 1,\dots,v$, and we will employ the labels $r(1)^0,\dots,r(v)^0$ for the reordering associated with the true Kronecker invariants $n^0_{r(j)}$, $j = 1,\dots,v$.

Theorem 5.1 Suppose that $y_t$ is an ARMA process satisfying Assumptions 1 and 2, and let $\{r(j)_T, n_{r(j)T}\}$, $j = 1,\dots,v$, denote the Kronecker invariant pairs obtained when employing the above algorithm with $p_T(d_j(n))$ a possibly stochastic function of $n$ and $T$. Then:

(i) If $(r(i)_T, n_{r(i)T}) = (r(i)^0, n^0_{r(i)})$, $i = q+1,\dots,v$, and $p_T(d_q(n))/T \to 0$ almost surely as $T \to \infty$, then $n_{r(q)T} \ge n^0_{r(q)}$ with arbitrarily large probability as $T \to \infty$.

(ii) If $(r(i)_T, n_{r(i)T}) = (r(i)^0, n^0_{r(i)})$, $i = q+1,\dots,v$, and $\liminf_{T\to\infty} p_T(d_q(n))/L(T) > 0$ almost surely, where $L(T)$ is a real-valued, increasing function of $T$ such that $\log\log T/L(T) \to 0$, then $\Pr(\lim_{T\to\infty} n_{r(q)T} \le n^0_{r(q)}) = 1$.

From Theorem 5.1 it is clear that if $n_{r(j)T} = n_{r(j)0}$ and $r(j)_T = r(j)^0$ for $j = q+1,\dots,v$, and provided that $p_T(d_q(n))/T \to 0$ and $\log\log T/p_T(d_q(n)) \to 0$ as $T \to \infty$, then for $T$ sufficiently large we will have $n_{r(q)T} = n_{r(q)0}$ with probability one. Hence, bar invariant rotations, $r(q)_T$ must coincide with $r(q)^0$ almost surely if $p_T(d_q(n))$ satisfies the requirements of parts (i) and (ii) of Theorem 5.1. Induction on $n_{r(q)T}$ and $r(q)_T$ for $q = 1,\dots,v$ now yields the following corollary.

Corollary 5.1 Suppose that $y_t$ is an ARMA process satisfying Assumptions 1 and 2, and let $\{r(j)_T, n_{r(j)T}\}$, $j = 1,\dots,v$, denote the Kronecker invariant pairs obtained by implementing the above algorithm. If $p_T(d_q(n))/T \to 0$ and $\log\log T/p_T(d_q(n)) \to 0$ as $T \to \infty$ then, modulo invariant rotations, $r(j)_T = r(j)^0$ a.s. for $T$ sufficiently large, and $\Pr(\lim_{T\to\infty} n_{r(j)T} = n_{r(j)0}) = 1$, $j = 1,\dots,v$.

In what follows we will append a zero superscript to quantities of interest to indicate those values corresponding to the actual data-generating mechanism giving rise to the observations, as we have already done for the Kronecker invariants. Thus $\Sigma^0$ will denote the true system innovation variance-covariance matrix, and $\tilde K^0(\omega) = \tilde A^0(\omega)^{-1}\tilde M^0(\omega)$ will represent the true transfer function of $\tilde y_t$. Similarly, $\alpha^0(z)$, $\beta^0(z)$ and $\mu^0(z)$ will denote the true autoregressive, exogenous and moving-average operators associated with the scalar ARMAX representation outlined in Proposition 3.2.

Lemma 5.1 Suppose that $y_t$ is an ARMA process satisfying Assumptions 1 and 2, and assume that $(r(i), n_{r(i)}) = (r(i)^0, n_{r(i)0})$ for $i = j+1,\dots,v$. Let $\tilde a^*(z) = \sum_{s=0}^{n}\tilde a^*_s z^s$ where the coefficients $\tilde a^*_s$, $s = 0,\dots,n$, belong to the solution set of the equation system

$$\sum_{s=0}^{n} \tilde a^{*\prime}_s \tilde\Gamma_y(r+s) = 0, \qquad r = n+1,\dots,2n. \tag{5.1}$$

Set $\alpha^*(z) = \tilde a^*(z)'e_j$ and $\beta^*(z) = \tilde a^*(z)'(I - e_j e_j')$.
Given $\alpha^*(z)$ and $\beta^*(z)$, let $\mu^*(z)$ be formed from

$$\sum_{s=0}^{n} \mu^*_s \int \frac{\tilde a^*(\omega)' \tilde S_y(\omega) \tilde a^*(\omega)}{|S^*_v(\omega)|^2} \exp(\imath\omega(s-r))\, d\omega = 0, \qquad r = 1,\dots,n, \tag{5.2}$$

where

$$S^*_v(\omega) = \frac{1}{2\pi}\sum_{r=-n}^{n} \int \tilde a^*(\theta)' \tilde S_y(\theta) \tilde a^*(\theta) \exp(\imath(\omega-\theta)r)\, d\theta. \tag{5.3}$$

Then, provided that for $i = j+1,\dots,v$ the Kronecker invariant pairs satisfy $\{r(i)_T, n_{r(i)T}\} = \{r(i)^0, n^0_{r(i)}\}$, we have:

(i) If $n < n_{r(j)0}$, then $\tilde\alpha(z) = \alpha^*(z) + O(Q_T)$, $\tilde\beta(z) = \beta^*(z) + O(Q_T)$ and $\tilde\mu(z) = \mu^*(z) + O(Q_T)$, uniformly in $|z| \le 1$, where $Q_T = (\log\log T/T)^{1/2}$;

(ii) If $n = n_{r(j)0}$, then $\tilde\alpha(z) = \alpha^0(z) + O(Q_T)$, $\tilde\beta(z) = \beta^0(z) + O(Q_T)$ and $\tilde\mu(z) = \mu^0(z) + O(Q_T)$, uniformly in $|z| \le 1$;

(iii) If $n > n_{r(j)0}$, then $\tilde\alpha(z) = \phi(z)\alpha^0(z) + O(Q_T)$, $\tilde\beta(z) = \phi(z)\beta^0(z) + O(Q_T)$ and $\tilde\mu(z) = \phi(z)\mu^0(z) + O(Q_T)$, uniformly in $|z| \le 1$, where $\phi(z) = 1 + \phi_1 z + \dots + \phi_r z^r$, $r = n - n_{r(j)0}$, and $\phi(z) \neq 0$, $|z| \le 1$.

Proof: From Hannan and Deistler (1988) we know that $\tilde C_y(r) - \tilde\Gamma_y(r) = O(Q_T)$, $r = 0,\dots,H_T$, $H_T \le (\log T)^a$, $a < \infty$. From (4.1) and (5.1) we recognize that $\tilde a(z)$ and $\tilde a^*(z)$ correspond to the solutions of systems of linear equations in which the coefficient matrices of the two systems differ by terms of order $O(Q_T)$. Moreover, the coefficient matrix of the system of equations is nonsingular, again by a direct application of results in Hannan and Deistler (1988). We can therefore conclude that $\tilde a_s - \tilde a^*_s = O(Q_T)$, $s = 0,\dots,n$. Treating the polynomial operators as elements of the Hardy space $H_2$, the space of functions analytic for $|z| < 1$ and square-integrable on $|z| = 1$, now yields the result that

$$\|\tilde\alpha(z) - \alpha^*(z)\|^2 = \|(\tilde a(z) - \tilde a^*(z))'e_j\|^2 = \sum_{s=0}^{n}(\tilde\alpha_s - \alpha^*_s)^2 = O(Q_T^2)$$

and

$$\|\tilde\beta(z) - \beta^*(z)\|^2 = \|(\tilde a(z) - \tilde a^*(z))'(I - e_j e_j')\|^2 = \sum_{i=1}^{j-1}(\tilde\beta_i - \beta^*_i)^2 + \sum_{\substack{i=1\\ i\neq j}}^{v}\sum_{s=1}^{n}(\tilde\beta_{i,s} - \beta^*_{i,s})^2 = O(Q_T^2).$$

A parallel argument to that just employed will show that $\tilde\mu(z) = \mu^*(z) + O(Q_T)$, $|z| \le 1$, if it can be verified that the difference in the coefficient matrices of the two equation systems that define the operators $\tilde\mu(z)$ and $\mu^*(z)$ is, likewise, $O(Q_T)$. To establish the latter, observe that

$$\tilde C_v(r) = \sum_{s=0}^{n}\sum_{u=0}^{n}\tilde a_s' \tilde C_y(r+s-u)\tilde a_u = \sum_{s=0}^{n}\sum_{u=0}^{n}\big(\tilde a^*_s + O(Q_T)\big)'\big(\tilde\Gamma_y(r+s-u) + O(Q_T)\big)\big(\tilde a^*_u + O(Q_T)\big) = \sum_{s=0}^{n}\sum_{u=0}^{n}\tilde a^{*\prime}_s \tilde\Gamma_y(r+s-u)\tilde a^*_u + O(Q_T) = \gamma^*_v(r) + O(Q_T), \text{ say,} \tag{5.4}$$

where $\gamma^*_v(r) = \int \tilde a^*(\omega)' \tilde S_y(\omega) \tilde a^*(\omega)\exp(\imath\omega r)\, d\omega$. From (5.4) it follows directly that

$$\tilde S_v(\omega) = \frac{1}{2\pi}\sum_{s=-n}^{n}\tilde C_v(s)\exp(\imath\omega s) = \frac{1}{2\pi}\sum_{s=-n}^{n}\big(\gamma^*_v(s) + O(Q_T)\big)\exp(\imath\omega s) = S^*_v(\omega) + O(Q_T),$$

uniformly in $\omega \in [-\pi,\pi]$. Let $\tilde H(\omega) = \tilde a(\omega)/\tilde S_v(\omega)$ and set $H^*(\omega) = \tilde a^*(\omega)/S^*_v(\omega)$. It is established below that $\tilde H(\omega)$ and $H^*(\omega)$ belong to $L_2$, with Fourier coefficients that decline at a geometric rate, and

that $\int |\tilde H(\omega) - H^*(\omega)|^2\, d\omega = O(Q_T^2)$. Now set $\gamma i^*_v(r) = \int H^*(\omega)' \tilde S_y(\omega) H^*(\omega)\exp(-\imath\omega r)\, d\omega$. Then, by definition,

$$\tilde{Ci}_v(r) - \gamma i^*_v(r) = \int\Big(\tilde H(\omega)'\tilde I_y(\omega)\tilde H(\omega) - H^*(\omega)'\tilde S_y(\omega)H^*(\omega)\Big)\exp(-\imath\omega r)\, d\omega, \tag{5.5}$$

and, suppressing the argument $\omega$ for convenience, we have

$$\tilde H'\tilde I_y\tilde H - H^{*\prime}\tilde S_y H^* = (\tilde H - H^*)'\tilde I_y(\tilde H - H^*) + (\tilde H - H^*)'\tilde I_y H^* + H^{*\prime}\tilde I_y(\tilde H - H^*) + H^{*\prime}(\tilde I_y - \tilde S_y)H^*. \tag{5.6}$$

Substituting (5.6) into (5.5) we can now bound $|\tilde{Ci}_v(r) - \gamma i^*_v(r)|$ by the sum of four terms. The first term is

$$\Big|\int(\tilde H - H^*)'\tilde I_y(\tilde H - H^*)\exp(-\imath\omega r)\, d\omega\Big| \le \int\big|(\tilde H - H^*)'\tilde I_y(\tilde H - H^*)\big|\, d\omega. \tag{5.7}$$

Applying the Cauchy-Schwarz inequality to the right-hand-side integrand in (5.7), recognizing that $\tilde I_y$ is Hermitian positive semi-definite, we find that

$$\int\big|(\tilde H - H^*)'\tilde I_y(\tilde H - H^*)\big|\, d\omega = \frac{1}{2\pi T}\int\big|(\tilde H - H^*)'Z_y(\omega)\big|^2 d\omega \le \sup_{\omega\in[-\pi,\pi]}\mathrm{tr}\{\tilde I_y\}\int|\tilde H - H^*|^2 d\omega = O(\log T)\,O(Q_T^2),$$

since $\limsup_{T\to\infty}\big[\sup_{\omega\in[-\pi,\pi]}\mathrm{tr}\{\tilde I_y(\omega)\}/\log T\big] \le \sup_{\omega\in[-\pi,\pi]} 2\,\mathrm{tr}\{\tilde S_y(\omega)\}$ (Brillinger, 1975). Similarly, the second term, $|\int(\tilde H - H^*)'\tilde I_y H^*\exp(-\imath\omega r)\, d\omega|$, and the third term, $|\int H^{*\prime}\tilde I_y(\tilde H - H^*)\exp(-\imath\omega r)\, d\omega|$, are both bounded by

$$\int\big|H^{*\prime}\tilde I_y(\tilde H - H^*)\big|\, d\omega \le \int\big(H^{*\prime}\tilde I_y H^*\big)^{1/2}\big((\tilde H - H^*)'\tilde I_y(\tilde H - H^*)\big)^{1/2} d\omega \le \Big(\int H^{*\prime}\tilde I_y H^*\, d\omega\Big)^{1/2}\Big(\int(\tilde H - H^*)'\tilde I_y(\tilde H - H^*)\, d\omega\Big)^{1/2},$$

the square of which does not exceed

$$\Big(\sup_{\omega\in[-\pi,\pi]}\mathrm{tr}\{\tilde I_y\}\Big)^2\int|H^*|^2 d\omega\int|\tilde H - H^*|^2 d\omega = (O(\log T))^2\, O(Q_T^2).$$

The fourth term is $|\int H^{*\prime}(\tilde I_y - \tilde S_y)H^*\exp(-\imath\omega r)\, d\omega|$, which equals

$$\Big|\sum_{u=-\infty}^{\infty}\sum_{s=-\infty}^{\infty} h^{*\prime}_u\big[\tilde C_y(r+u-s) - \tilde\Gamma_y(r+u-s)\big]h^*_s\Big| \le \sum_{u=-\infty}^{\infty}\sum_{s=-\infty}^{\infty}|h^*_u|\,\big\|\tilde C_y(r+u-s) - \tilde\Gamma_y(r+u-s)\big\|\,|h^*_s|, \tag{5.8}$$

wherein we set $\tilde C_y(\tau) = 0$, $|\tau| \ge T$, and $h^*_u = \int H^*(\omega)\exp(-\imath\omega u)\, d\omega$, $u = 0, \pm 1, \dots$. Since there exist a constant $\kappa > 0$ and a parameter $\lambda$, $0 < \lambda < 1$, such that $|h^*_u| < \kappa\lambda^{|u|}$, the right-hand side of (5.8) is less than or equal to

$$\kappa^2\sum_{|u|<c\log T}\sum_{|s|<c\log T}\big\|\tilde C_y(r+u-s) - \tilde\Gamma_y(r+u-s)\big\|\lambda^{|u|+|s|} + \kappa^2\big(\|\tilde C_y(0)\| + \|\tilde\Gamma_y(0)\|\big)\sum_{|u|\ge c\log T}\sum_{|s|\ge c\log T}\lambda^{|u|+|s|}. \tag{5.9}$$

By Hannan and Deistler (1988), the order of magnitude of the first term in this expression is $O(Q_T)\,4\kappa^2(1-\lambda^{c\log T})^2/(1-\lambda)^2 = O(Q_T)$. The second term is bounded by a constant times $4\kappa^2\lambda^{2c\log T}/(1-\lambda)^2$, which is of order $O(T^{-1})$ for any $c \ge -1/(2\log\lambda)$. Hence we can conclude that the coefficient matrices of the two equation systems that define the operators $\tilde\mu(z)$ and $\mu^*(z)$ differ by terms that are of order $O(Q_T)$ or smaller, as was required to be shown.

(i) The first statement of the lemma now follows without further ado.

(ii) When $n = n^0_{r(j)}$ it is readily verified that $\tilde a^*(z) = \tilde a^0(z)$ provides the solution to (5.1). By definition $\tilde a^0(z)' = e_j'\tilde A^0(z)$, from which it follows that $\tilde a^0(\omega)'\tilde K^0(\omega) = e_j'\tilde M^0(\omega)$, where $\tilde a^0(\omega) = \tilde a^0(z)|_{z=e^{-\imath\omega}}$, and, given that $\tilde S_y(\omega) = (2\pi)^{-1}\tilde K^0(\omega)\Sigma^0\tilde K^0(\omega)^*$,

$$\tilde a^0(\omega)'\tilde S_y(\omega) = (2\pi)^{-1} e_j'\tilde M^0(\omega)\Sigma^0\tilde K^0(\omega)^*. \tag{5.10}$$

From (5.10) we can conclude that $\int \tilde a^0(\omega)'\tilde S_y(\omega)\exp(\imath\omega u)\, d\omega = 0$ for $u > n$ and therefore

$$\int \tilde a^0(\omega)'\tilde S_y(\omega)\exp(\imath\omega r)\, d\omega = \sum_{s=0}^{n}\tilde a^{0\prime}_s\tilde\Gamma_y(r+s) = 0, \qquad r = n+1,\dots,2n.$$

Similarly, the quadratic form

$$\tilde a^0(\omega)'\tilde S_y(\omega)\tilde a^0(\omega) = (2\pi)^{-1}e_j'\tilde M^0(\omega)\Sigma^0\tilde M^0(\omega)^* e_j = (2\pi)^{-1}(\sigma^0_\eta)^2|\mu^0(\omega)|^2.$$

Substituting $\tilde a^*(\omega) = \tilde a^0(\omega)$ into (5.3) we find that the equation system (5.2) corresponds to the Yule-Walker equations constructed from the inverse power spectrum $2\pi/(\sigma^0_\eta)^2|\mu^0(\omega)|^2$, and hence that $\mu^*(z) = \mu^0(z)$.

We are therefore led to the conclusion that $\tilde\alpha(z) = \alpha^0(z) + O(Q_T)$, $\tilde\beta(z) = \beta^0(z) + O(Q_T)$ and $\tilde\mu(z) = \mu^0(z) + O(Q_T)$, verifying the strong consistency claimed in Remark 1.

(iii) Now consider the case $n > n^0_{r(j)}$. Using the relationship in (5.10) it is straightforward to show that the solutions to (5.1) are characterized by operators of the form $\tilde a^*(z) = \phi(z)\tilde a^0(z)$ where $\phi(z) = 1 + \phi_1 z + \dots + \phi_r z^r$, $r = n - n^0_{r(j)}$. Moreover, since $\sum_{j=0}^{n}\tilde a^{*\prime}_j\tilde\Gamma_y(s-j) = 0$, $s > n$,

$$\Big\|\sum_{j=0}^{n}\tilde a^{*\prime}_j\tilde C_y(s-j)\Big\| \le \sum_{j=0}^{n}\|\tilde a^*_j\|\,\big\|\tilde C_y(s-j) - \tilde\Gamma_y(s-j)\big\| = O(Q_T)\sum_{j=0}^{n}\|\tilde a^*_j\|.$$

Thus for $T$ sufficiently large we can determine a measurable solution $\tilde a(z)$ to (4.1) such that, with arbitrarily large probability, $\tilde a(z) = \phi(z)\tilde a^0(z) + O(Q_T)$ where $\phi(z) \neq 0$, $|z| \le 1$. Substituting $\tilde a^*(\omega) = \phi(\omega)\tilde a^0(\omega)$ into (5.3) we find that

$$S^*_v(\omega) = \frac{1}{2\pi}\sum_{r=-n}^{n}\int|\phi(\theta)|^2\,\tilde a^0(\theta)'\tilde S_y(\theta)\tilde a^0(\theta)\exp(\imath(\omega-\theta)r)\, d\theta = \frac{(\sigma^0_\eta)^2}{2\pi}|\phi(\omega)|^2|\mu^0(\omega)|^2$$

and hence, via (5.2), that $\tilde\mu(z) = \mu^*(z) + O(Q_T)$, where the coefficients of $\mu^*(z)$ are derived from the Yule-Walker equations

$$\sum_{j=0}^{n}\mu^*_j\,\gamma i^*_v(r+j) = 0, \qquad r = 1,\dots,n, \qquad\text{with}\qquad \gamma i^*_v(s) = \int\frac{2\pi\exp(-\imath\omega s)}{(\sigma^0_\eta)^2|\phi(\omega)|^2|\mu^0(\omega)|^2}\, d\omega.$$

This gives the desired conclusion.

To complete the proof it remains to be shown that $\tilde H, H^* \in L_2$, with Fourier coefficients that decline at a geometric rate, and that $\int|\tilde H - H^*|^2 = O(Q_T^2)$. Consider $H^*$. Given that $\tilde a^*(z)$ is polynomial, it is sufficient to show that $|S^*_v(\omega)|^{-2}$ is absolutely integrable to verify that $H^* \in L_2$. By Weierstrass's approximation theorem, for any $\epsilon > 0$, no matter how small, there exists a polynomial $p(z) = \sum_{j\ge 0}p_j z^j$, with $p(z) \neq 0$, $|z| \le 1$, such that $\big||S^*_v(\omega)|^2 - |p(\omega)|^2\big| \le \epsilon$ uniformly in $\omega$. Clearly $p^{-1} \in L_2$, and since $|p(\omega)|^2 \le |S^*_v(\omega)|^2 + \epsilon$,

$$\int_{-\pi}^{\pi}\frac{d\omega}{|S^*_v(\omega)|^2 + \epsilon} = \int_{-\pi}^{\pi}\frac{1}{|p(\omega)|^2}\cdot\frac{|p(\omega)|^2}{|S^*_v(\omega)|^2 + \epsilon}\, d\omega \le 3\int_{-\pi}^{\pi}\frac{d\omega}{|p(\omega)|^2} < \infty.$$

Letting $\epsilon \to 0$ we can therefore conclude from Lebesgue's monotone convergence theorem that $|S^*_v(\omega)|^{-2}$ is absolutely integrable and hence that $H^* \in L_2$. Moreover, if $H^*_p(\omega) = \tilde a^*(\omega)/p(\omega)$, then $H^*_p \in L_2$ and $H^*_p$ can be expanded in a mean-square convergent Fourier series.


5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing. 5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint

More information

Structural VARs II. February 17, 2016

Structural VARs II. February 17, 2016 Structural VARs II February 17, 216 Structural VARs Today: Long-run restrictions Two critiques of SVARs Blanchard and Quah (1989), Rudebusch (1998), Gali (1999) and Chari, Kehoe McGrattan (28). Recap:

More information

Appendix A: The time series behavior of employment growth

Appendix A: The time series behavior of employment growth Unpublished appendices from The Relationship between Firm Size and Firm Growth in the U.S. Manufacturing Sector Bronwyn H. Hall Journal of Industrial Economics 35 (June 987): 583-606. Appendix A: The time

More information

Gaussian processes. Basic Properties VAG002-

Gaussian processes. Basic Properties VAG002- Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity

More information

Econ 623 Econometrics II Topic 2: Stationary Time Series

Econ 623 Econometrics II Topic 2: Stationary Time Series 1 Introduction Econ 623 Econometrics II Topic 2: Stationary Time Series In the regression model we can model the error term as an autoregression AR(1) process. That is, we can use the past value of the

More information

7. Forecasting with ARIMA models

7. Forecasting with ARIMA models 7. Forecasting with ARIMA models 309 Outline: Introduction The prediction equation of an ARIMA model Interpreting the predictions Variance of the predictions Forecast updating Measuring predictability

More information

VARMA versus VAR for Macroeconomic Forecasting

VARMA versus VAR for Macroeconomic Forecasting VARMA versus VAR for Macroeconomic Forecasting 1 VARMA versus VAR for Macroeconomic Forecasting George Athanasopoulos Department of Econometrics and Business Statistics Monash University Farshid Vahid

More information

The Identification of ARIMA Models

The Identification of ARIMA Models APPENDIX 4 The Identification of ARIMA Models As we have established in a previous lecture, there is a one-to-one correspondence between the parameters of an ARMA(p, q) model, including the variance of

More information

1 Teaching notes on structural VARs.

1 Teaching notes on structural VARs. Bent E. Sørensen November 8, 2016 1 Teaching notes on structural VARs. 1.1 Vector MA models: 1.1.1 Probability theory The simplest to analyze, estimation is a different matter time series models are the

More information

Assessing Structural VAR s

Assessing Structural VAR s ... Assessing Structural VAR s by Lawrence J. Christiano, Martin Eichenbaum and Robert Vigfusson Yale, October 2005 1 Background Structural Vector Autoregressions Can be Used to Address the Following Type

More information

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49 State-space Model Eduardo Rossi University of Pavia November 2013 Rossi State-space Model Financial Econometrics - 2013 1 / 49 Outline 1 Introduction 2 The Kalman filter 3 Forecast errors 4 State smoothing

More information

Vector autoregressive Moving Average Process. Presented by Muhammad Iqbal, Amjad Naveed and Muhammad Nadeem

Vector autoregressive Moving Average Process. Presented by Muhammad Iqbal, Amjad Naveed and Muhammad Nadeem Vector autoregressive Moving Average Process Presented by Muhammad Iqbal, Amjad Naveed and Muhammad Nadeem Road Map 1. Introduction 2. Properties of MA Finite Process 3. Stationarity of MA Process 4. VARMA

More information

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models Journal of Finance and Investment Analysis, vol.1, no.1, 2012, 55-67 ISSN: 2241-0988 (print version), 2241-0996 (online) International Scientific Press, 2012 A Non-Parametric Approach of Heteroskedasticity

More information

Structural VAR Models and Applications

Structural VAR Models and Applications Structural VAR Models and Applications Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 SVAR: Objectives Whereas the VAR model is able to capture efficiently the interactions between the different

More information

Unit Roots in White Noise?!

Unit Roots in White Noise?! Unit Roots in White Noise?! A.Onatski and H. Uhlig September 26, 2008 Abstract We show that the empirical distribution of the roots of the vector auto-regression of order n fitted to T observations of

More information

Multivariate Time Series: VAR(p) Processes and Models

Multivariate Time Series: VAR(p) Processes and Models Multivariate Time Series: VAR(p) Processes and Models A VAR(p) model, for p > 0 is X t = φ 0 + Φ 1 X t 1 + + Φ p X t p + A t, where X t, φ 0, and X t i are k-vectors, Φ 1,..., Φ p are k k matrices, with

More information

1 Linear Difference Equations

1 Linear Difference Equations ARMA Handout Jialin Yu 1 Linear Difference Equations First order systems Let {ε t } t=1 denote an input sequence and {y t} t=1 sequence generated by denote an output y t = φy t 1 + ε t t = 1, 2,... with

More information

Bonn Summer School Advances in Empirical Macroeconomics

Bonn Summer School Advances in Empirical Macroeconomics Bonn Summer School Advances in Empirical Macroeconomics Karel Mertens Cornell, NBER, CEPR Bonn, June 2015 In God we trust, all others bring data. William E. Deming (1900-1993) Angrist and Pischke are Mad

More information

Darmstadt Discussion Papers in Economics

Darmstadt Discussion Papers in Economics Darmstadt Discussion Papers in Economics The Effect of Linear Time Trends on Cointegration Testing in Single Equations Uwe Hassler Nr. 111 Arbeitspapiere des Instituts für Volkswirtschaftslehre Technische

More information

A test for improved forecasting performance at higher lead times

A test for improved forecasting performance at higher lead times A test for improved forecasting performance at higher lead times John Haywood and Granville Tunnicliffe Wilson September 3 Abstract Tiao and Xu (1993) proposed a test of whether a time series model, estimated

More information

Switching Regime Estimation

Switching Regime Estimation Switching Regime Estimation Series de Tiempo BIrkbeck March 2013 Martin Sola (FE) Markov Switching models 01/13 1 / 52 The economy (the time series) often behaves very different in periods such as booms

More information

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of Index* The Statistical Analysis of Time Series by T. W. Anderson Copyright 1971 John Wiley & Sons, Inc. Aliasing, 387-388 Autoregressive {continued) Amplitude, 4, 94 case of first-order, 174 Associated

More information

Tests of the Co-integration Rank in VAR Models in the Presence of a Possible Break in Trend at an Unknown Point

Tests of the Co-integration Rank in VAR Models in the Presence of a Possible Break in Trend at an Unknown Point Tests of the Co-integration Rank in VAR Models in the Presence of a Possible Break in Trend at an Unknown Point David Harris, Steve Leybourne, Robert Taylor Monash U., U. of Nottingam, U. of Essex Economics

More information

3 Theory of stationary random processes

3 Theory of stationary random processes 3 Theory of stationary random processes 3.1 Linear filters and the General linear process A filter is a transformation of one random sequence {U t } into another, {Y t }. A linear filter is a transformation

More information

Testing Restrictions and Comparing Models

Testing Restrictions and Comparing Models Econ. 513, Time Series Econometrics Fall 00 Chris Sims Testing Restrictions and Comparing Models 1. THE PROBLEM We consider here the problem of comparing two parametric models for the data X, defined by

More information

Autoregressive Moving Average (ARMA) Models and their Practical Applications

Autoregressive Moving Average (ARMA) Models and their Practical Applications Autoregressive Moving Average (ARMA) Models and their Practical Applications Massimo Guidolin February 2018 1 Essential Concepts in Time Series Analysis 1.1 Time Series and Their Properties Time series:

More information

10. Time series regression and forecasting

10. Time series regression and forecasting 10. Time series regression and forecasting Key feature of this section: Analysis of data on a single entity observed at multiple points in time (time series data) Typical research questions: What is the

More information

Model selection using penalty function criteria

Model selection using penalty function criteria Model selection using penalty function criteria Laimonis Kavalieris University of Otago Dunedin, New Zealand Econometrics, Time Series Analysis, and Systems Theory Wien, June 18 20 Outline Classes of models.

More information

Dynamic Factor Models and Factor Augmented Vector Autoregressions. Lawrence J. Christiano

Dynamic Factor Models and Factor Augmented Vector Autoregressions. Lawrence J. Christiano Dynamic Factor Models and Factor Augmented Vector Autoregressions Lawrence J Christiano Dynamic Factor Models and Factor Augmented Vector Autoregressions Problem: the time series dimension of data is relatively

More information

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Midterm

Booth School of Business, University of Chicago Business 41914, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Midterm Booth School of Business, University of Chicago Business 41914, Spring Quarter 017, Mr Ruey S Tsay Solutions to Midterm Problem A: (51 points; 3 points per question) Answer briefly the following questions

More information

Empirical Market Microstructure Analysis (EMMA)

Empirical Market Microstructure Analysis (EMMA) Empirical Market Microstructure Analysis (EMMA) Lecture 3: Statistical Building Blocks and Econometric Basics Prof. Dr. Michael Stein michael.stein@vwl.uni-freiburg.de Albert-Ludwigs-University of Freiburg

More information

Advanced Digital Signal Processing -Introduction

Advanced Digital Signal Processing -Introduction Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary

More information

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis Introduction to Time Series Analysis 1 Contents: I. Basics of Time Series Analysis... 4 I.1 Stationarity... 5 I.2 Autocorrelation Function... 9 I.3 Partial Autocorrelation Function (PACF)... 14 I.4 Transformation

More information

Some Time-Series Models

Some Time-Series Models Some Time-Series Models Outline 1. Stochastic processes and their properties 2. Stationary processes 3. Some properties of the autocorrelation function 4. Some useful models Purely random processes, random

More information

On Moving Average Parameter Estimation

On Moving Average Parameter Estimation On Moving Average Parameter Estimation Niclas Sandgren and Petre Stoica Contact information: niclas.sandgren@it.uu.se, tel: +46 8 473392 Abstract Estimation of the autoregressive moving average (ARMA)

More information

DSGE Methods. Estimation of DSGE models: GMM and Indirect Inference. Willi Mutschler, M.Sc.

DSGE Methods. Estimation of DSGE models: GMM and Indirect Inference. Willi Mutschler, M.Sc. DSGE Methods Estimation of DSGE models: GMM and Indirect Inference Willi Mutschler, M.Sc. Institute of Econometrics and Economic Statistics University of Münster willi.mutschler@wiwi.uni-muenster.de Summer

More information

Estimating Moving Average Processes with an improved version of Durbin s Method

Estimating Moving Average Processes with an improved version of Durbin s Method Estimating Moving Average Processes with an improved version of Durbin s Method Maximilian Ludwig this version: June 7, 4, initial version: April, 3 arxiv:347956v [statme] 6 Jun 4 Abstract This paper provides

More information

Identifying the Monetary Policy Shock Christiano et al. (1999)

Identifying the Monetary Policy Shock Christiano et al. (1999) Identifying the Monetary Policy Shock Christiano et al. (1999) The question we are asking is: What are the consequences of a monetary policy shock a shock which is purely related to monetary conditions

More information

LINEAR STOCHASTIC MODELS

LINEAR STOCHASTIC MODELS LINEAR STOCHASTIC MODELS Let {x τ+1,x τ+2,...,x τ+n } denote n consecutive elements from a stochastic process. If their joint distribution does not depend on τ, regardless of the size of n, then the process

More information

GARCH Models. Eduardo Rossi University of Pavia. December Rossi GARCH Financial Econometrics / 50

GARCH Models. Eduardo Rossi University of Pavia. December Rossi GARCH Financial Econometrics / 50 GARCH Models Eduardo Rossi University of Pavia December 013 Rossi GARCH Financial Econometrics - 013 1 / 50 Outline 1 Stylized Facts ARCH model: definition 3 GARCH model 4 EGARCH 5 Asymmetric Models 6

More information

Chapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models

Chapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models Chapter 6. Maximum Likelihood Analysis of Dynamic Stochastic General Equilibrium (DSGE) Models Fall 22 Contents Introduction 2. An illustrative example........................... 2.2 Discussion...................................

More information

News Shocks, Information Flows and SVARs

News Shocks, Information Flows and SVARs 12-286 Research Group: Macroeconomics March, 2012 News Shocks, Information Flows and SVARs PATRICK FEVE AND AHMAT JIDOUD News Shocks, Information Flows and SVARs Patrick Feve Toulouse School of Economics

More information

Assessing Structural VAR s

Assessing Structural VAR s ... Assessing Structural VAR s by Lawrence J. Christiano, Martin Eichenbaum and Robert Vigfusson Minneapolis, August 2005 1 Background In Principle, Impulse Response Functions from SVARs are useful as

More information

Estimating and Identifying Vector Autoregressions Under Diagonality and Block Exogeneity Restrictions

Estimating and Identifying Vector Autoregressions Under Diagonality and Block Exogeneity Restrictions Estimating and Identifying Vector Autoregressions Under Diagonality and Block Exogeneity Restrictions William D. Lastrapes Department of Economics Terry College of Business University of Georgia Athens,

More information

Box-Jenkins ARIMA Advanced Time Series

Box-Jenkins ARIMA Advanced Time Series Box-Jenkins ARIMA Advanced Time Series www.realoptionsvaluation.com ROV Technical Papers Series: Volume 25 Theory In This Issue 1. Learn about Risk Simulator s ARIMA and Auto ARIMA modules. 2. Find out

More information

Time Series Examples Sheet

Time Series Examples Sheet Lent Term 2001 Richard Weber Time Series Examples Sheet This is the examples sheet for the M. Phil. course in Time Series. A copy can be found at: http://www.statslab.cam.ac.uk/~rrw1/timeseries/ Throughout,

More information

A SARIMAX coupled modelling applied to individual load curves intraday forecasting

A SARIMAX coupled modelling applied to individual load curves intraday forecasting A SARIMAX coupled modelling applied to individual load curves intraday forecasting Frédéric Proïa Workshop EDF Institut Henri Poincaré - Paris 05 avril 2012 INRIA Bordeaux Sud-Ouest Institut de Mathématiques

More information

Ch. 15 Forecasting. 1.1 Forecasts Based on Conditional Expectations

Ch. 15 Forecasting. 1.1 Forecasts Based on Conditional Expectations Ch 15 Forecasting Having considered in Chapter 14 some of the properties of ARMA models, we now show how they may be used to forecast future values of an observed time series For the present we proceed

More information

Forecasting with VARMA Models

Forecasting with VARMA Models EUROPEAN UNIVERSITY INSTITUTE DEPARTMENT OF ECONOMICS EUI Working Paper ECO No. 2004 /25 Forecasting with VARMA Models HELMUT LÜTKEPOHL BADIA FIESOLANA, SAN DOMENICO (FI) All rights reserved. No part of

More information

6.3 Forecasting ARMA processes

6.3 Forecasting ARMA processes 6.3. FORECASTING ARMA PROCESSES 123 6.3 Forecasting ARMA processes The purpose of forecasting is to predict future values of a TS based on the data collected to the present. In this section we will discuss

More information

Forecasting with ARMA Models

Forecasting with ARMA Models LECTURE 4 Forecasting with ARMA Models Minumum Mean-Square Error Prediction Imagine that y(t) is a stationary stochastic process with E{y(t)} = 0. We may be interested in predicting values of this process

More information

Outline. Overview of Issues. Spatial Regression. Luc Anselin

Outline. Overview of Issues. Spatial Regression. Luc Anselin Spatial Regression Luc Anselin University of Illinois, Urbana-Champaign http://www.spacestat.com Outline Overview of Issues Spatial Regression Specifications Space-Time Models Spatial Latent Variable Models

More information

Missing dependent variables in panel data models

Missing dependent variables in panel data models Missing dependent variables in panel data models Jason Abrevaya Abstract This paper considers estimation of a fixed-effects model in which the dependent variable may be missing. For cross-sectional units

More information

Technical appendices: Business cycle accounting for the Japanese economy using the parameterized expectations algorithm

Technical appendices: Business cycle accounting for the Japanese economy using the parameterized expectations algorithm Technical appendices: Business cycle accounting for the Japanese economy using the parameterized expectations algorithm Masaru Inaba November 26, 2007 Introduction. Inaba (2007a) apply the parameterized

More information

Autoregressive Approximation in Nonstandard Situations: Empirical Evidence

Autoregressive Approximation in Nonstandard Situations: Empirical Evidence Autoregressive Approximation in Nonstandard Situations: Empirical Evidence S. D. Grose and D. S. Poskitt Department of Econometrics and Business Statistics, Monash University Abstract This paper investigates

More information