Gibbs Samplers for VARMA and Its Extensions


Joshua C.C. Chan, Australian National University
Eric Eisenstat, University of Bucharest

ANU Working Papers in Economics and Econometrics #604
February 2013

JEL: C11, C32, C53

Gibbs Samplers for VARMA and Its Extensions

Joshua C.C. Chan, Research School of Economics, Australian National University
Eric Eisenstat, Faculty of Business Administration, University of Bucharest

January 2013

Abstract

Empirical work in macroeconometrics has mostly been restricted to VARs, even though there are strong theoretical reasons to consider general VARMAs. This is perhaps because estimation of VARMAs is perceived to be challenging. In this article, we develop a Gibbs sampler for the basic VARMA and demonstrate how it can be extended to models with stochastic volatility and time-varying parameters. We illustrate the methodology through a macroeconomic forecasting exercise. We show that VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.

Keywords: state space, stochastic volatility, factor model, macroeconomic forecasting, density forecast

JEL codes: C11, C32, C53

1 Introduction

Vector autoregressive moving average (VARMA) models have long been considered an appropriate framework for modeling covariance stationary time series. As is well known, by the Wold decomposition theorem, any covariance stationary time series has an infinite moving average representation, which can then be approximated by a suitable finite-order VARMA model. However, in most empirical work, only purely autoregressive models are considered. In fact, since the seminal work of Sims (1980), VARs have become the most prominent approach in empirical macroeconometrics. A wide range of specifications extending the basic VAR have been suggested, including time-varying parameter VARs (Canova, 1993), Markov switching or regime switching VARs (Paap and van Dijk, 2003; Koop and Potter, 2007), VARs with stochastic volatility (Cogley and Sargent, 2005; Primiceri, 2005), and factor augmented VARs (Bernanke, Boivin, and Eliasz, 2005; Korobilis, 2012), among many others (see, e.g., Koop and Korobilis, 2010), whereas VARMAs remain underdeveloped. This unbalanced development is perhaps due to the fact that, in contrast to VARs, identification and estimation of VARMAs are more difficult. In particular, for identification a system of nonlinear invertibility conditions has to be imposed on the VMA coefficients, which makes estimation using maximum likelihood or Bayesian methods difficult. On top of that, even simple VMAs are typically high-dimensional. To obtain the maximum likelihood estimates, iterative gradient-based or direct-search optimization routines are required; for Bayesian estimation, special Metropolis-Hastings algorithms are needed, as the conditional distribution of the VMA coefficients is not of standard form. This article tackles this issue by developing an efficient Gibbs sampler to estimate the VARMA model.
The significance of our contribution lies in the realization that once the basic VARMA can be efficiently estimated via the Gibbs sampler, a wide variety of generalizations, analogous to those mentioned earlier extending the basic VAR, can also be fitted easily using the machinery of Markov chain Monte Carlo (MCMC) techniques. In fact, we will discuss two generalizations of VARMA: models allowing for time-varying parameters and stochastic volatility. Our approach draws on a few recent developments in the state space literature. First, we make use of a convenient state space representation of VARMA introduced in Metaxoglou and Smith (2007). Specifically, by using the fact that a VMA plus white noise remains a VMA (Peiris, 1988), the authors write a VARMA as a latent factor model, with the unusual feature that lagged factors also enter the current measurement equation. This model is then estimated using the EM algorithm and the Kalman filter. Our point of departure is to develop an efficient Gibbs sampler for this state space representation of VARMA. As discussed, due to the modular nature of MCMC algorithms, further generalizations can be easily fitted once we have an efficient Gibbs sampler for the basic model. Second, instead of using the Kalman filter, which involves redefining the states into a higher-dimensional vector that in turn slows down estimation, we make use of the precision-based sampler to simulate the latent factors without the usual filtering and smoothing iterations (Chan and Jeliazkov, 2009; McCausland, Miller, and Pelletier, 2011).

Of course, in principle any invertible VMA can be approximated well by a sufficiently high-order VAR. In practice, however, a high-order VAR has too many parameters to estimate relative to the number of observations typically available for macroeconomic data. This can lead to imprecise estimation of impulse responses and poor forecast performance. In fact, some recent papers have shown that parsimonious ARMAs in both univariate and multivariate settings often forecast better than purely AR models (e.g., Athanasopoulos and Vahid, 2008; Chan, 2012). In our empirical application, we find that VARMAs provide better density forecasts compared to VARs, particularly for short forecast horizons. In addition, even when a VAR is a suitable model for a particular set of variables overall, a VARMA may be more appropriate for a small subset of these variables; using a VAR in this setting might lead to a loss of forecast accuracy. This perhaps explains why large and medium VARs, with the addition of strong and sensible priors, tend to forecast better than small VARs (e.g., Banbura, Giannone, and Reichlin, 2010; Koop, 2011). Since these small VARs are typically used in macroeconomic applications, an easy and quick way to estimate VARMAs would be a valuable addition to the toolkit for modeling and forecasting time series.

The rest of this article is organized as follows. Section 2 first introduces a state space representation of VARMA that facilitates efficient estimation, followed by a detailed discussion of two alternative identification strategies. We develop a Gibbs sampler for the basic VARMA in Section 3, which is extended to models with time-varying parameters and stochastic volatility in Section 4. In Section 5, the methodology is illustrated using a recursive forecasting exercise involving three macroeconomic variables. Lastly, we discuss some future research directions in Section 6.
2 A Convenient Representation of VARMA

Following Metaxoglou and Smith (2007), we consider a convenient state space representation of VARMA that allows for efficient estimation as follows. Recall that a (p, q)-th order VARMA, or VARMA(p, q), is given by

y_t = μ + Π_1 y_{t-1} + ... + Π_p y_{t-p} + f_t + Ψ_1 f_{t-1} + ... + Ψ_q f_{t-q},   (1)

where y_t is an n × 1 vector of observations, Π_1, ..., Π_p are n × n matrices of VAR coefficients, Ψ_1, ..., Ψ_q are n × n matrices of VMA coefficients, and the innovations f_t are distributed iid N(0, Ω). We make use of the result in Peiris (1988), who shows that white noise added to a VMA(q) remains a VMA(q). Hence, the following is another representation of a VARMA(p, q):

y_t = μ + Π_1 y_{t-1} + ... + Π_p y_{t-p} + f_t + Ψ_1 f_{t-1} + ... + Ψ_q f_{t-q} + ε_t,

where ε_t are iid N(0, Σ) distributed and Σ is a diagonal matrix. Now, we reparameterize and obtain the following representation:

y_t = μ + Π_1 y_{t-1} + ... + Π_p y_{t-p} + Ψ_0 f_t + Ψ_1 f_{t-1} + ... + Ψ_q f_{t-q} + ε_t,   (2)

where Ψ_0 is a lower triangular matrix with ones on the diagonal, ε_t ~ N(0, Σ) and f_t ~ N(0, Ω) are independent of each other at all leads and lags, and Σ, Ω are diagonal matrices. We further assume that f_0 = f_{-1} = ... = f_{-q+1} = 0. Of course, one can treat these initial latent factors as parameters if desired, and the estimation procedures discussed in subsequent sections can be modified easily to allow for this possibility. For typical situations where T ≫ q, whether these errors are modeled explicitly or not makes little difference in practice.

By fixing the covariance matrix Σ, Metaxoglou and Smith (2007) discuss a one-to-one mapping between the two parameterizations in (1) and (2). With this mapping, one can then recover the estimates of the original parameters once the estimates for (2) are obtained. However, since the parameters in the VARMA(p, q) are typically not of primary interest (they are used to produce forecasts or compute impulse responses), we take a different approach and work directly with the parameterization in (2).

Now, the representation in (2) may be viewed as a factor-augmented VAR (FAVAR) with the innovations f_t taking the role of unobserved factors. However, there are two unusual features that are worth commenting on. First, the number of factors is equal to the number of variables, and as such, either additional time series (that can be used to estimate the latent factors) or further identifying restrictions are required to identify the system in (2). Second, the lagged latent factors f_{t-1}, ..., f_{t-q} enter the equation for the current observations y_t, which induces serial correlation in y_t.
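To make the representation in (2) concrete, the following sketch simulates data from it for given parameter values. All parameter values below are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

def simulate_varma_rep2(T, mu, Pis, Psis, Omega, Sigma, rng):
    """Simulate y_1, ..., y_T from representation (2):
    y_t = mu + sum_j Pi_j y_{t-j} + sum_r Psi_r f_{t-r} + eps_t,
    with f_t ~ N(0, Omega), eps_t ~ N(0, Sigma), and f_t = 0 for t <= 0."""
    n, p, q = len(mu), len(Pis), len(Psis) - 1
    y = np.zeros((T + p, n))            # first p rows hold initial observations (zeros)
    f = np.zeros((T + q, n))            # first q rows hold the initial factors f_0 = ... = 0
    for t in range(T):
        f[q + t] = rng.multivariate_normal(np.zeros(n), Omega)
        eps = rng.multivariate_normal(np.zeros(n), Sigma)
        yt = mu + eps
        for j, Pi in enumerate(Pis, start=1):
            yt = yt + Pi @ y[p + t - j]
        for r, Psi in enumerate(Psis):  # Psis = [Psi_0, Psi_1, ..., Psi_q]
            yt = yt + Psi @ f[q + t - r]
        y[p + t] = yt
    return y[p:], f[q:]

rng = np.random.default_rng(0)
n, T = 3, 500
mu = np.zeros(n)
Pis = [0.4 * np.eye(n)]                                # one stationary VAR lag
Psi0 = np.tril(0.3 * np.ones((n, n)), -1) + np.eye(n)  # unit lower triangular
Psi1 = np.tril(0.2 * np.ones((n, n)))                  # lower triangular, as in Section 2.1
Omega, Sigma = np.diag([1.0, 0.8, 0.6]), 0.1 * np.eye(n)
y, f = simulate_varma_rep2(T, mu, Pis, [Psi0, Psi1], Omega, Sigma, rng)
```

Note that both Ω and Σ are diagonal here, matching the assumptions stated after (2).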
The key challenge in making this specification operational is to develop an identification strategy such that the system in (2) is identifiable to the extent that an efficient sampling algorithm may be readily constructed, while still allowing for rich dynamics. Essentially, with the reparameterization in (2), the identification issue that remains to be resolved is related to the fact that the system contains n more parameters than can be identified by the available moments. However, arbitrarily imposing restrictions on Ψ_0, ..., Ψ_q, Ω, Σ is hardly innocuous. For example, consider the variance and autocovariance matrices generated by (2) for p = 0:

Var(y_t) = Σ_{r=0}^{q} Ψ_r Ω Ψ_r′ + Σ,   (3)

Cov(y_t, y_{t-s}) = Σ_{r=0}^{q-s} Ψ_{r+s} Ω Ψ_r′,   s = 1, ..., q,   (4)

and Cov(y_t, y_{t-s}) = 0 for s > q. It is evident from this that the strategy of Metaxoglou and Smith (2007) of fixing Σ to be a constant matrix, say, C = (c_ij), immediately implies that Var(y_it) ≥ c_ii, where y_it is the i-th element of y_t. Clearly, imposing arbitrary lower bounds on the process variance runs the risk of detrimentally distorting the estimation of free parameters and subsequent forecasts; yet there is rarely convincing a priori guidance for bounding variances this way.

In what follows, we consider two general approaches to identifying (2). The first approach is based on imposing restrictions on the elements of Ψ_1, ..., Ψ_q when exactly n series are related to the n latent factors, as presented thus far. Alternatively, it is possible to identify the system without imposing further restrictions on the VMA coefficient matrices, as long as additional data are available that can be reasonably related to the n underlying factors. The remainder of this section describes both approaches in more detail.

2.1 Identification without Additional Time Series

As previously discussed, (2) can be hypothetically identified by appropriately fixing an additional n parameters in the matrices Ψ_1, ..., Ψ_q. However, applying exclusion restrictions in this case is not trivial from at least two perspectives. First, arbitrarily imposing exclusion restrictions does not necessarily guarantee identification; this is clear from the simple fact that if we restrict the n^2 elements of Ψ_q to zero, the system remains unidentified with respect to n free parameters. Indeed, only certain combinations of exclusion restrictions in Ψ_1, ..., Ψ_q lead to identifiability. For example, consider a VMA(1) with n = 4 and the following combinations of restrictions on Ψ_1:

[four 4 × 4 patterns of restrictions on Ψ_1, labeled (A) through (D), in which each * denotes an unrestricted parameter and each 0 an exclusion restriction; in scheme (D), Ψ_1 is lower triangular]

It may be tempting to conclude that schemes A to C lead to just-identified systems since they result in exactly 26 model parameters that are matched to 26 moments. Similar reasoning may likewise lead to the conclusion that scheme D results in over-identification. Neither of the two foregoing conclusions is correct: only scheme C yields a just-identified system, while under schemes A, B, and D, a subset of the parameters is unidentified.
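The moment formulas (3) and (4) are easy to check numerically against sample moments from a simulated VMA(1); the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 4, 400_000
Psi0 = np.tril(0.3 * np.ones((n, n)), -1) + np.eye(n)   # unit lower triangular
Psi1 = rng.uniform(-0.4, 0.4, (n, n))                   # unrestricted VMA(1) coefficients
Omega = np.diag([1.0, 0.8, 0.6, 0.9])
Sigma = np.diag([0.2, 0.3, 0.25, 0.15])

# Population moments implied by (3) and (4) with p = 0, q = 1
var_y = Psi0 @ Omega @ Psi0.T + Psi1 @ Omega @ Psi1.T + Sigma
cov_y1 = Psi1 @ Omega @ Psi0.T          # Cov(y_t, y_{t-1})

# Simulate y_t = Psi0 f_t + Psi1 f_{t-1} + eps_t and compare
f = rng.standard_normal((T + 1, n)) * np.sqrt(np.diag(Omega))
eps = rng.standard_normal((T, n)) * np.sqrt(np.diag(Sigma))
y = f[1:] @ Psi0.T + f[:-1] @ Psi1.T + eps

var_hat = np.cov(y.T, bias=True)
cov1_hat = (y[1:] - y.mean(0)).T @ (y[:-1] - y.mean(0)) / (T - 1)
```

With T large, var_hat and cov1_hat agree with var_y and cov_y1 up to Monte Carlo error.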
Indeed, for scheme D, where the number of parameters is actually less than the number of available moments, Ψ_{1,44}, ω_4^2, and σ_4^2 (the (4,4)-th elements of Ψ_1, Ω, and Σ, respectively) are not separately identifiable (i.e., only the product ω_4^2 Ψ_{1,44} and the sum ω_4^2 + σ_4^2 are identified), while the remaining parameters are over-identified. In general, any scheme involving exclusion restrictions in this context that resembles a block lower triangular Ψ_1 will be characterized by under-identification of the parameters in the lower right quadrant (along with the corresponding elements in Ω and Σ) and over-identification for the remaining parameters. For the lower triangular case represented in scheme D, the key problem is that while 26 moments are available to identify the parameters, the VMA autocovariance structure is such that Ψ_{1,44}, ω_4^2, and σ_4^2 are related to only two of these moments, namely, Var(y_4t) and Cov(y_4t, y_{4,t-1}), where y_4t is the 4-th element of y_t. In this light, removing the exclusion restriction on Ψ_{1,44} serves to redistribute the three unidentified parameters across a wider range of moments and thereby provides full identification of the entire system. Alternatively, one could achieve identification with any of the schemes A, B, and D by adding an extra lag (such as to yield a VMA(2)) with an unrestricted Ψ_2. However, either of these solutions relies critically on certain sample moments not being close to 0. For example, if the data support Ψ_{1,44} ≈ 0, then activating Ψ_{1,44} for identification purposes is futile: identification in the system is weak, and conventional problems related to computation prevail. The same is true when augmenting the VMA specification with additional lags and relying on elements of Ψ_2 to identify Ψ_{1,44}, ω_4^2, and σ_4^2.

Moreover, care must be taken to ensure that reasonable system dynamics remain intact. Continuing with the VMA(1) example, recall the autocovariances given by (4). Let Ψ_{j,kl} denote the (k,l)-th element of Ψ_j and let y_it represent the i-th element of y_t. It is clear that if, say, Ψ_{1,11} = 1 is imposed, then the lag-1 autocovariance of y_1t is Cov(y_1t, y_{1,t-1}) = ω_1^2 > 0. In addition, restrictions of the type Ψ_{1,21} = Ψ_{1,22} = 0 lead to Cov(y_2t, y_{2,t-1}) = 0, and so on. This suggests that, in general, one should avoid restricting the diagonal elements, as these tend to translate into peculiar restrictions on the process autocovariances, while exclusion restrictions on the off-diagonal elements of the VMA matrices are far less imposing. Nevertheless, as previously discussed, only certain combinations of restrictions involving off-diagonal elements achieve identification.
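The moment-counting argument for the n = 4 VMA(1) can be made concrete with a small bookkeeping helper. The counting below assumes the structure discussed above (a unit lower triangular Ψ_0, lower triangular Ψ_1 as in scheme D, diagonal Ω and Σ) and is purely illustrative:

```python
def count_parameters(n, q):
    """Free parameters with Psi_0 unit lower triangular, Psi_1 lower triangular,
    Psi_2..Psi_q unrestricted, plus diagonal Omega and Sigma (VAR part excluded)."""
    psi0 = n * (n - 1) // 2              # strictly lower triangular, unit diagonal
    psi1 = n * (n + 1) // 2              # lower triangular including the diagonal
    rest = (q - 1) * n * n if q > 1 else 0
    return psi0 + psi1 + rest + 2 * n    # + diag(Omega) + diag(Sigma)

def count_moments(n, q):
    """Moments available for p = 0: Var(y_t) is symmetric, n(n+1)/2 entries;
    Cov(y_t, y_{t-s}) for s = 1..q are full n x n matrices, per (3)-(4)."""
    return n * (n + 1) // 2 + q * n * n

print(count_parameters(4, 1), count_moments(4, 1))   # 24 vs 26: fewer parameters
```

This reproduces the count behind scheme D: 24 parameters against 26 moments, yet three of the parameters are still not separately identified.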
Clearly, in a general case with an arbitrarily large n, finding the right combination of exclusion restrictions that yields a just-identified system while retaining reasonable system dynamics is a daunting task. On the other hand, in applications where the MA coefficients themselves are not of primary interest (such as when the end goal is forecasting), exact identification of model parameters is not of central concern; to the extent that one succeeds in constructing an efficient sampling algorithm, the fact that the data are not informative regarding certain (nuisance) parameters is irrelevant. This view has been advocated, for example, by Stock and Watson (2002). Consequently, one may wish to consider a practical alternative strategy based on partial identification. To be precise, for any VARMA(p, q) we restrict Ψ_1 to be lower triangular, while allowing Ψ_2, ..., Ψ_q to be full matrices. Then, the system is (at worst) partially identified in the sense that Ψ_{1,nn}, ω_n^2, and σ_n^2 might not be separately identifiable, but the remaining parameters are over-identified. The reasoning follows as an extension of the discussion of the n = 4 case above. Intuitively, for q = 1 each additional data series y_it allows for the identification (and over-identification) of the parameters in the existing measurement equations (i.e., those related to y_1t, ..., y_{i-1,t}), but adds a new set of parameters Ψ_{1,ii}, ω_i^2, and σ_i^2 that cannot be individually identified, apart from the quantities ω_i^2 Ψ_{1,ii} and ω_i^2 + σ_i^2. Therefore, regardless of the system size, only three parameters provide cause for possible concern in terms of sampling efficiency, and this can be dealt with by specifying a reasonably informative prior for Ψ_{1,nn}. For q > 1, the identification/computation problem is potentially resolved through the effect of Ψ_2, ..., Ψ_q on the overall covariance structure, provided that their sample moments are not close to zero.

Aside from the practical simplicity of the partial identification strategy, another advantage of specifying a lower triangular Ψ_1 may be perceived in terms of parsimony. This may be of particular value in the context of forecasting. A potential drawback of this scheme is that it restricts the space of autocovariances that can be represented, and it accentuates the importance of the variable ordering. A consequence of the latter, for example, is that forecasting y_1t with a VMA(1) and Ψ_1 lower triangular is equivalent to forecasting it with a univariate MA. Regarding the implications for the process autocovariances, this scheme yields only mild restrictions. The simplest example of these (albeit not the only one) is

Cov(y_1t, y_it)^2 ≥ 4 Cov(y_1t, y_{i,t-1}) Cov(y_it, y_{1,t-1}),   for i > 1.

However, in many contexts such restrictions will not detract significantly from the flexibility of the model, and (2) may still be regarded as a close approximation to the Wold decomposition of the multivariate series. In our empirical application, the merit of this (partial) identification scheme can be assessed by the forecast performance of the VARMAs compared to standard VARs.

2.2 Identification with Additional Time Series

Alternatively, suppose we obtain an additional m time series that can be related to y_t and f_t. Denoting these additional variables as z_t, we can specify an extended version of (2) as

( y_t )   ( μ   )   ( Π_1        0        ) ( y_{t-1} )         ( Π_p        0        ) ( y_{t-p} )
( z_t ) = ( μ_z ) + ( Π_1^{zy}   Π_1^{zz} ) ( z_{t-1} ) + ... + ( Π_p^{zy}   Π_p^{zz} ) ( z_{t-p} )

            ( Ψ_0   )         ( Ψ_1   )               ( Ψ_q   )           ( ε_t   )
          + ( Ψ_0^z ) f_t  +  ( Ψ_1^z ) f_{t-1} + ... + ( Ψ_q^z ) f_{t-q} + ( ε_t^z ),   (5)

where ε_t^z ~ N(0, Σ_z) is independent of ε_t. Recall that Σ and Ω are diagonal, and Ψ_0 is lower triangular with ones on the diagonal.
Since we have m + n variables but only n factors, it is clear that for any n, p, and q the system in (5) is identified for a sufficiently large m, without the need for further restrictions on the remaining parameters. Nevertheless, introducing an additional m time series in this manner leads to m(mp + n(p + q + 1) + 2) additional parameters to estimate, which raises concerns of parameter proliferation and over-fitting. In developing this identification strategy, therefore, it may be of interest to design a more parsimonious way in which the additional time series z_t are related to the variables of interest y_t and the factors f_t. This essentially involves setting some of the newly introduced parameters to zero. To that end, one reasonable approach may be to specify

z_t = μ_z + Π_1^{zy} y_{t-1} + Ψ_1^z f_{t-1} + ε_t^z,   ε_t^z ~ N(0, Σ_z),   (6)

which leads to the FAVAR specification of Bernanke et al. (2005).¹ Consequently, as long as z_t represents relevant informational time series, such a specification is straightforward to implement. The implication of this is that the FAVAR model embeds an unrestricted VARMA(p, q) process for y_t and a simple static factor process for z_t (i.e., following (4), the covariances satisfy Cov(z_t, z_{t-s}) = 0 for all s ≥ 1), and therefore the informational time series are only being used to estimate the factors in the simplest possible way. Of course, the challenge is to find a sufficiently large set of additional data z_t that fits the above specification.

Further, we note that the above formulation may be adopted in developing a methodology for estimating medium/large VARMAs. In particular, we may consider starting with (2) for a fairly large n, but instead of adding informational time series to achieve proper identification, we reduce the number of factors to r < n (as proposed in Eisenstat, 2011). Following the same line of reasoning, restricting the upper r × r block of Ψ_0 to be lower triangular (with ones on the diagonal) leads to (over-)identification without the need for additional parameter restrictions. Moreover, because the covariances given in (4) continue to hold for r < n, in general, allowing the remaining parameters to be estimated freely implies a VARMA(p, q) process for the entire set of time series y_t. The intuition behind this approach is that the reduction in the number of factors simply leads to certain interrelationships between the VMA coefficients in the underlying VARMA process. One may regard it, therefore, as a parsimonious specification of a large VARMA model in which reasonable a priori restrictions are imposed on the VMA coefficients, in the same spirit that a typical factor model interrelates the elements of the contemporaneous covariances for a set of related time series.
Consequently, to estimate such a specification in practice, it only remains to impose sufficient a priori information on the autoregressive parameters by way of strong, sensible priors. To this end, a number of methods, such as those explored in Litterman (1986), Banbura et al. (2010), and Koop (2011), among others, have been demonstrated to perform well in large VAR settings and can be readily adopted here as well. However, we leave the parsimonious formulation of medium/large VARMA models for future research.

¹ To be precise, Bernanke et al. (2005, equation 2.2) relate z_t to y_t and f_t contemporaneously, while (6) matches z_t to the previous period's outcomes y_{t-1} and f_{t-1}. The two specifications are nevertheless equivalent since, in practice, the data in z_t can be shifted one period forward to achieve the appropriate temporal matching.

3 Posterior Analysis

In what follows, we assume that we do not have additional informational time series and pursue the identification strategy discussed in Section 2.1. Specifically, we assume that Ψ_1 is lower triangular whereas Ψ_2, ..., Ψ_q are unrestricted. Estimation with additional informational time series as in (5) can proceed similarly with only minor modifications. First, we introduce the conjugate priors, followed by a discussion of an efficient Gibbs

sampler for VARMA(p, q) in (2). We note that throughout, the analysis is performed conditional on the initial observations y_0, ..., y_{1-p}. Again, one can extend the posterior sampler to the case where the initial observations are modeled explicitly.

To facilitate estimation, we first rewrite (2) as

y_t = X_t β + Ψ_0 f_t + Ψ_1 f_{t-1} + ... + Ψ_q f_{t-q} + ε_t,

where X_t = I_n ⊗ (1, y_{t-1}′, ..., y_{t-p}′), β = vec((μ, Π_1, ..., Π_p)′), ε_t ~ N(0, Σ), and f_t ~ N(0, Ω). Now, stacking the observations over t, we have

y = Xβ + Ψf + ε,   (7)

where

X = (X_1′, ..., X_T′)′,   f = (f_1′, ..., f_T′)′,   ε = (ε_1′, ..., ε_T′)′,

and Ψ is a Tn × Tn lower triangular matrix with Ψ_0 on the main diagonal blocks, Ψ_1 on the first lower diagonal blocks, Ψ_2 on the second lower diagonal blocks, and so forth. For example, for q = 2, we have

Ψ = [ Ψ_0  0    0    0    ...  0
      Ψ_1  Ψ_0  0    0    ...  0
      Ψ_2  Ψ_1  Ψ_0  0    ...  0
      0    Ψ_2  Ψ_1  Ψ_0  ...  0
      ...                 ...
      0    ...  0    Ψ_2  Ψ_1  Ψ_0 ].

It is important to note that in general Ψ is a sparse Tn × Tn matrix that contains at most

n^2 ((q + 1)T − q(q + 1)/2) < n^2 (q + 1)T

non-zero elements, which grows linearly in T and is substantially less than the total (Tn)^2 elements for typical applications where T ≫ q. This special structure can be exploited to speed up computation, e.g., by using block-banded or sparse matrix algorithms (see, e.g., Kroese, Taimre, and Botev, 2011, p. 220).

To complete the model specification, we assume independent priors for β, Ψ_0, Ψ_1, ..., Ψ_q, Σ, and Ω as follows. For β, we consider the multivariate normal prior N(β_0, V_β). Let ψ_i denote the free parameters in the i-th column of (Ψ_0, Ψ_1, ..., Ψ_q). For example, if we impose the condition that Ψ_1 is lower triangular, we have

ψ_i = (Ψ_{0,i1}, ..., Ψ_{0,i,i-1}, Ψ_{1,i1}, ..., Ψ_{1,ii}, Ψ_{2,i1}, ..., Ψ_{2,in}, ..., Ψ_{q,i1}, ..., Ψ_{q,in})′,

where Ψ_{j,kl} is the (k,l)-th element of Ψ_j (details related to a more general set of restrictions are covered in the Appendix). We assume the normal prior N(ψ_{0i}, V_{ψi}) for ψ_i. For later reference, we further let ψ = (ψ_1′, ..., ψ_n′)′. Finally, for the diagonal elements
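The block-banded Ψ in (7) can be assembled with standard sparse matrix tools. A minimal sketch with illustrative parameter values (not the paper's code):

```python
import numpy as np
import scipy.sparse as sp

def build_Psi(Psis, T):
    """Assemble the Tn x Tn block-banded matrix of (7) from [Psi_0, ..., Psi_q]:
    Psi_r sits on the r-th lower block diagonal."""
    blocks = [[None] * T for _ in range(T)]
    for t in range(T):
        for r, Psi_r in enumerate(Psis):
            if t - r >= 0:
                blocks[t][t - r] = Psi_r
    return sp.bmat(blocks, format="csr")

n, q, T = 3, 2, 50
rng = np.random.default_rng(2)
Psis = [np.eye(n) + np.tril(rng.standard_normal((n, n)), -1)] \
     + [rng.standard_normal((n, n)) for _ in range(q)]
Psi = build_Psi(Psis, T)
# Storage grows linearly in T: at most n^2((q+1)T - q(q+1)/2) non-zero entries
print(Psi.shape, Psi.nnz)
```

The number of stored entries is bounded by the count given in the text, which is why a sparse solver handles these systems in time linear in T.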

of Σ = diag(σ_1^2, ..., σ_n^2) and Ω = diag(ω_1^2, ..., ω_n^2), we assume the following independent inverse-gamma priors: σ_i^2 ~ IG(ν_{σi}, S_{σi}) and ω_i^2 ~ IG(ν_{ωi}, S_{ωi}).

The Gibbs sampler proceeds by sequentially drawing from

1. p(β, f | y, ψ, Σ, Ω);
2. p(ψ | y, β, f, Σ, Ω);
3. p(Σ, Ω | y, β, f, ψ) = p(Σ | y, β, f, ψ) p(Ω | y, β, f, ψ).

Steps 2 and 3 are standard, and we leave the details to the Appendix; here we focus on Step 1. Although it might be straightforward to sample β and f separately by simulating (β | y, f, ψ, Σ, Ω) followed by (f | y, β, ψ, Σ, Ω), such an approach would potentially induce high autocorrelation and slow mixing in the constructed Markov chain, as β and f enter (7) additively. Instead, we aim to sample β and f jointly by first drawing (β | y, ψ, Σ, Ω) marginally of f, followed by (f | y, β, ψ, Σ, Ω). To implement the first step, we note that since f ~ N(0, I_T ⊗ Ω), the joint density of y marginal of f is

(y | β, ψ, Σ, Ω) ~ N(Xβ, S_y),   where S_y = I_T ⊗ Σ + Ψ(I_T ⊗ Ω)Ψ′.

Since both Σ and Ω are diagonal matrices, and Ψ is a lower triangular sparse matrix, the covariance matrix S_y is also sparse. Using standard results from linear regression (see, e.g., Koop, 2003), we have

(β | y, ψ, Σ, Ω) ~ N(β̂, D_β),

where

D_β = (V_β^{-1} + X′S_y^{-1}X)^{-1},   β̂ = D_β (V_β^{-1}β_0 + X′S_y^{-1}y).

It is important to realize that in order to compute X′S_y^{-1}X or X′S_y^{-1}y, one need not obtain the inverse of the Tn × Tn matrix S_y; otherwise, it would involve O(T^3) operations. Instead, we exploit the fact that the covariance matrix S_y is sparse. For instance, to obtain X′S_y^{-1}y, we can first solve the system S_y z = y for z, which can be done in O(T) operations. The solution is z = S_y^{-1}y, and we return X′z = X′S_y^{-1}y, which is the desired quantity. Similarly, X′S_y^{-1}X can be computed quickly without inverting any big matrices.

Next, we sample all the latent factors f jointly. Note that even though a priori the latent factors are independent, they are no longer independent given y.
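The solve-instead-of-invert trick can be illustrated at toy scale. Everything below (the stand-in S_y, the identity prior precision, β_0 = 0) is an invented example, not the paper's data:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(3)
Tn, k = 60, 4                          # stacked dimension Tn, dim(beta)
X = rng.standard_normal((Tn, k))
y = rng.standard_normal(Tn)

# A sparse, banded, positive definite stand-in for S_y
L = sp.eye(Tn, format="csc") + sp.diags(0.5 * np.ones(Tn - 1), -1)
Sy = (L @ sp.diags(rng.uniform(0.5, 1.5, Tn)) @ L.T).tocsc()

# Compute X'S_y^{-1}y and X'S_y^{-1}X via sparse solves, never forming S_y^{-1}
lu = splu(Sy)
XtSinv_y = X.T @ lu.solve(y)
XtSinv_X = X.T @ lu.solve(X)

# Posterior moments of beta, assuming beta_0 = 0 and V_beta = I for illustration
D_beta = np.linalg.inv(np.eye(k) + XtSinv_X)
beta_hat = D_beta @ XtSinv_y
```

The sparse LU factorization of the banded S_y is the expensive step, and it scales with the bandwidth rather than with (Tn)^2.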
As such, sampling each f_t sequentially would potentially induce high autocorrelation and slow mixing in the Markov chain. One could sample f using Kalman filter-based algorithms, but they would involve redefining the states so that only the state at time t enters the measurement equation at time t. Consequently, each (new) state vector would be of much higher dimension,

which in turn results in slower algorithms. Instead, we avoid the Kalman filter and implement the precision-based sampler developed in Chan and Jeliazkov (2009) to sample the latent factors jointly. To that end, note that a priori f ~ N(0, I_T ⊗ Ω). Using (7) and standard linear regression results again, we have

(f | y, β, ψ, Σ, Ω) ~ N(f̂, K_f^{-1}),

where

K_f = I_T ⊗ Ω^{-1} + Ψ′(I_T ⊗ Σ^{-1})Ψ,   f̂ = K_f^{-1} Ψ′(I_T ⊗ Σ^{-1})(y − Xβ).

The challenge, of course, is that the covariance matrix K_f^{-1} is a Tn × Tn full matrix, and sampling (f | y, β, ψ, Σ, Ω) by brute force is infeasible. However, the precision matrix K_f is sparse (recall that both Σ^{-1} and Ω^{-1} are diagonal, whereas Ψ is also sparse), which can be exploited to speed up computation. As before, we first obtain f̂ by solving K_f z = Ψ′(I_T ⊗ Σ^{-1})(y − Xβ) for z. Next, we obtain the Cholesky decomposition of K_f, i.e., C_f such that C_f C_f′ = K_f. We then solve C_f′ z = u for z, where u ~ N(0, I_{Tn}), and return f = f̂ + z, which follows the desired distribution. We refer the readers to Chan and Jeliazkov (2009) for details.

The Gibbs sampler developed above can be extended in a straightforward manner to estimate various generalizations of the basic VARMA(p, q). This is what we turn to in the next section.

4 Extensions

In this section we discuss two extensions of the basic VARMA(p, q) relevant to empirical macroeconomic applications. First, we relax the assumption of constant variance and incorporate stochastic volatility into the model. Specifically, consider again the VARMA(p, q) specified in (7). Now, we allow the latent factors to have time-varying volatilities: f_t ~ N(0, Ω_t), where Ω_t = diag(e^{h_1t}, ..., e^{h_nt}), and each of the log-volatilities follows an independent random walk:

h_it = h_{i,t-1} + ζ_it,   (8)

where ζ_it ~ N(0, σ_{h_i}^2) for i = 1, ..., n and t = 2, ..., T. The log-volatilities are initialized with h_i1 ~ N(h_i0, V_{h_i0}), where h_i0 and V_{h_i0} are known constants. For notational convenience, let h_t = (h_1t, ..., h_nt)′, h = (h_1′, ..., h_T′)′, and σ_h^2 = (σ_{h_1}^2, ..., σ_{h_n}^2)′.
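The precision-based draw described above is a few lines of linear algebra. The sketch below uses a dense Cholesky factorization on a toy precision matrix (in practice one would use sparse or banded Cholesky routines); all inputs are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve_triangular

def precision_sampler_draw(Kf, b, rng):
    """Draw f ~ N(f_hat, Kf^{-1}) with f_hat = Kf^{-1} b, without inverting Kf:
    1. solve Kf f_hat = b;
    2. factor Kf = C C' (C lower triangular);
    3. solve C' z = u with u ~ N(0, I), so that Cov(z) = Kf^{-1};
    4. return f_hat + z."""
    C = np.linalg.cholesky(Kf)                 # Kf = C C'
    f_hat = np.linalg.solve(Kf, b)
    u = rng.standard_normal(len(b))
    z = solve_triangular(C.T, u, lower=False)  # C' z = u
    return f_hat + z

rng = np.random.default_rng(4)
m = 6
A = rng.standard_normal((m, m))
Kf = A @ A.T + m * np.eye(m)                   # stand-in positive definite precision
b = rng.standard_normal(m)
draws = np.array([precision_sampler_draw(Kf, b, rng) for _ in range(50_000)])
```

Since z = (C′)^{-1}u has covariance (C C′)^{-1} = K_f^{-1}, the empirical mean and covariance of the draws converge to f̂ and K_f^{-1}, which is easy to verify at this toy scale.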
Estimation of this VARMA(p, q) with stochastic volatility requires only minor modifications of the basic Gibbs sampler developed above. More specifically, posterior draws can be obtained by sequentially sampling from

1. p(β, f | y, ψ, h, σ_h^2, Σ);

2. p(ψ | y, β, f, h, σ_h^2, Σ);
3. p(h | y, β, f, ψ, σ_h^2, Σ);
4. p(Σ, σ_h^2 | y, β, f, h, ψ) = p(Σ | y, β, f, h, ψ) p(σ_h^2 | y, β, f, h, ψ).

Steps 1 and 2 proceed as before with minor modifications; e.g., the prior covariance matrix for f is now diag(e^{h_11}, ..., e^{h_n1}, ..., e^{h_1T}, ..., e^{h_nT}) instead of I_T ⊗ Ω. But since the covariance matrix remains diagonal, all the sparse matrix routines are as fast as before. Step 3 can be carried out, for example, using the auxiliary mixture sampling approach of Kim, Shephard, and Chib (1998). Lastly, Step 4 is also standard if we assume independent inverse-gamma priors for σ_{h_1}^2, ..., σ_{h_n}^2.

The second extension that can be easily estimated is a time-varying parameter variant. For example, suppose we allow the VMA coefficient matrices to be time-varying:

y_t = X_t β + Ψ_{0t} f_t + Ψ_{1t} f_{t-1} + ... + Ψ_{qt} f_{t-q} + ε_t,   (9)

where X_t and β are defined as before, ε_t ~ N(0, Σ), and Σ is a diagonal matrix. Let ψ_it denote the vector of free parameters in the i-th column of (Ψ_{0t}, Ψ_{1t}, ..., Ψ_{qt}). Then, consider the transition equation

ψ_it = ψ_{i,t-1} + η_it,

where η_it ~ N(0, Q_i) for i = 1, ..., n and t = 2, ..., T. The initial conditions are specified as ψ_i1 ~ N(ψ_{i0}, Q_{i0}), where ψ_{i0} and Q_{i0} are known constants. Since Σ is a diagonal matrix and each (ψ_i1′, ..., ψ_iT′)′ is defined by a linear Gaussian state space model, sampling from the relevant conditional distribution can be done quickly by the precision-based sampler. In addition, sampling all the other parameters can be carried out as in the basic sampler with minor modifications. For example, to sample (β, f) jointly, note that we can stack (9) over t and write y = Xβ + Ψf + ε, where

X = (X_1′, ..., X_T′)′,   f = (f_1′, ..., f_T′)′,   ε = (ε_1′, ..., ε_T′)′,

and Ψ is a Tn × Tn lower triangular matrix with Ψ_{01}, ..., Ψ_{0T} on the main diagonal blocks, Ψ_{12}, ..., Ψ_{1T} on the first lower diagonal blocks, Ψ_{23}, ..., Ψ_{2T} on the second lower diagonal blocks, and so forth. The algorithms discussed before can then be applied directly.
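The stochastic volatility extension in (8) is easy to prototype: random-walk log-volatilities are simulated and then used to scale the factor innovations. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, T = 3, 300
sigma_h = np.array([0.1, 0.2, 0.15])   # random-walk innovation std devs (assumed)
h0 = np.zeros(n)                       # initial log-volatilities (assumed)

# h_it = h_{i,t-1} + zeta_it, zeta_it ~ N(0, sigma_h_i^2)  -- equation (8)
h = h0 + np.cumsum(rng.standard_normal((T, n)) * sigma_h, axis=0)

# Time-varying factor covariance Omega_t = diag(exp(h_1t), ..., exp(h_nt)),
# so the factor innovations are scaled by exp(h_t / 2)
f = rng.standard_normal((T, n)) * np.exp(h / 2)
```

Because Ω_t stays diagonal for every t, the stacked prior covariance of f remains diagonal, which is exactly why the sparse matrix routines of Section 3 carry over unchanged.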

5 Empirical Application

In this section we illustrate the proposed approach and estimation methods with a recursive forecasting exercise that involves U.S. GDP growth, inflation, and the interest rate. These variables are commonly used in forecasting (e.g., Banbura et al., 2010; Koop, 2011) and in small DSGE models (e.g., An and Schorfheide, 2007). We first outline the set of competing models in Section 5.1, followed by a brief description of the data and the priors in Section 5.2. The results of the density forecasting exercise are reported in Section 5.3.

5.1 Competing Models

The main goal of this forecasting exercise is to illustrate the methodology and investigate how VARMAs and their variants with stochastic volatility compare with standard VARs. To that end, we consider three sets of models: VAR(p), VARMA(p,1), and VARMA(p,1) with stochastic volatility. We investigate three cases where p = 2, 3, 4. First, consider the following standard VAR(p):

y_t = X_t β + ξ_t,

where X_t = I_n ⊗ (1, y_{t-1}′, ..., y_{t-p}′), β = vec((μ, Π_1, ..., Π_p)′), ξ_t ~ N(0, Ξ), and Ξ is a full n × n covariance matrix. The VARMA(p,1) is the same as given in (2), and we impose the identification restrictions discussed in Section 2.1. For the purpose of this exercise, we restrict Ψ_1 to be lower triangular. Finally, for the VARMA variant with stochastic volatility, we assume f_t ~ N(0, Ω_t), where Ω_t = diag(ω_1^2, e^{h_2t}, ω_3^2). The log-volatility follows a random walk:

h_2t = h_{2,t-1} + ζ_2t,

where ζ_2t ~ N(0, σ_{h_2}^2) for t = 2, ..., T, and the transition equation is initialized with h_21 ~ N(h_20, V_{h_20}). In other words, we allow the latent factor corresponding to the inflation equation to have stochastic volatility, but not the other factors. This way we can incorporate time-varying volatility, which is found to be empirically important for forecasting inflation, while avoiding the parameter proliferation that might result in overfitting and poor forecast performance.

5.2 Data and Priors

The data consist of U.S.
quarterly GDP growth, CPI inflation, and the effective Fed funds rate from 1959:Q1 to 2011:Q4. More specifically, given the quarterly real GDP series w 1t, we transform it via y 1t = 400log(w 1t /w 1,t 1 ) to obtain the growth rate. We perform a similar transformation to the CPI index to get the inflation rate, whereas the effective Fed funds rate series is kept unchanged. For easy comparison, we choose broadly similar priors across models. For instance, the priors for the VAR coefficients in VARMA specifications are exactly the same as those of the corresponding VAR. 13
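The growth-rate transformation just described is a one-liner; the sketch below is our own illustrative code applying it to a generic quarterly level series.

```python
import numpy as np

def annualized_growth(level):
    """y_t = 400 * log(w_t / w_{t-1}): quarterly log-difference,
    annualized and expressed in percent, as used for GDP and CPI."""
    w = np.asarray(level, dtype=float)
    return 400.0 * np.log(w[1:] / w[:-1])
```

A 1% quarterly rise in the level maps to roughly 4% annualized growth: annualized_growth([100, 101]) gives about 3.98.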

As discussed in Section 3, we assume the following independent priors: β ∼ N(β_0, V_β), ψ_i ∼ N(ψ_{0i}, V_{ψi}), σ²_i ∼ IG(ν_{σi}, S_{σi}), and ω²_i ∼ IG(ν_{ωi}, S_{ωi}), i = 1,...,n. We set β_0 = 0 and take the prior covariance V_β to be diagonal, where the variances associated with the intercepts are 100 and those corresponding to the VAR coefficients are 1. For ψ_i, we set ψ_{0i} = 0, and set V_{ψi} such that the variances associated with the elements in Ψ_0 are 1 and those associated with the elements in Ψ_1,...,Ψ_q are all set to a common value. We choose relatively small values for the degrees of freedom parameters, which imply relatively non-informative priors: ν_{σi} = ν_{ωi} = 5, i = 1,...,n. For the scale parameters, we set S_{σi} = S_{ωi} = 4, i = 1,...,n. These values imply E(σ²_i) = E(ω²_i) = 1, i = 1,...,n. For VARMAs with stochastic volatility, we also need to specify a prior for σ²_{h_2}. We assume σ²_{h_2} ∼ IG(ν_h, S_h), where ν_h = 10 and S_h = 0.9, which implies E(σ²_{h_2}) = 0.1.

5.3 Forecasting Results

To compare the performance of the competing models in producing density forecasts, we consider a recursive out-of-sample forecasting exercise at various horizons, as follows. At the t-th iteration, for each of the models we use data up to time t, denoted y_{1:t}, to estimate the joint predictive density p(y_{t+k} | y_{1:t}) under the model as the k-step-ahead density forecast for y_{t+k}. We then expand the sample using data up to time t+1, and repeat the whole exercise. We continue this procedure until time T − k. At the end of the iterations, we obtain density forecasts under the competing models for t = t_0,...,T − k, where we set t_0 to be 1970:Q1. We note that the joint predictive density p(y_{t+k} | y_{1:t}) is not available analytically, but it can be estimated using MCMC methods. For VARs and VARMAs, the conditional density of y_{t+k} given the data and the model parameters, denoted p(y_{t+k} | y_{1:t}, θ), is Gaussian with known mean vector and covariance matrix.
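Given posterior draws of θ, the predictive density is approximated by the average of these Gaussian conditional densities, and the predictive likelihood evaluates that average at the realized outcome. Below is a minimal numerical sketch of that evaluation; the inputs and names are ours (hypothetical), and the average is computed in logs via log-sum-exp for numerical stability.

```python
import numpy as np

def log_pred_likelihood(y_obs, pred_means, pred_covs):
    """Estimate log p(y_{t+k} = y_obs | y_{1:t}) by averaging the Gaussian
    densities p(y_{t+k} | y_{1:t}, theta_j) over MCMC draws j.
    pred_means[j], pred_covs[j]: predictive mean/cov implied by draw j."""
    y_obs = np.asarray(y_obs, dtype=float)
    n = y_obs.size
    logdens = np.empty(len(pred_means))
    for j, (mu, S) in enumerate(zip(pred_means, pred_covs)):
        r = y_obs - mu
        _, logdet = np.linalg.slogdet(S)
        logdens[j] = -0.5 * (n * np.log(2 * np.pi) + logdet
                             + r @ np.linalg.solve(S, r))
    M = logdens.max()                     # log-sum-exp trick
    return M + np.log(np.exp(logdens - M).mean())
```

Summing these values over the evaluation sample t = t_0, ..., T − k gives the metric reported in the tables below.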
Hence, the predictive density can be estimated by averaging p(y_{t+k} | y_{1:t}, θ) over the MCMC draws of θ. For VARMAs with stochastic volatility, at every MCMC iteration, given the model parameters and all the states up to time t, we simulate future log-volatilities from time t+1 to t+k using the transition equation. Given these draws, y_{t+k} has a Gaussian density. Finally, these Gaussian densities are averaged over the MCMC iterations to obtain the joint predictive density p(y_{t+k} | y_{1:t}).

To evaluate the quality of the joint density forecast, consider the predictive likelihood p(y_{t+k} = y^o_{t+k} | y_{1:t}), i.e., the joint predictive density of y_{t+k} evaluated at the observed value y^o_{t+k}. Intuitively, if the actual outcome y^o_{t+k} is unlikely under the density forecast, the value of the predictive likelihood will be small, and vice versa. We then evaluate the joint density forecasts using the sum of log predictive likelihoods as a metric:

∑_{t=t_0}^{T−k} log p(y_{t+k} = y^o_{t+k} | y_{1:t}).

This measure can also be viewed as an approximation of the log marginal likelihood; see, e.g., Geweke and Amisano (2011) for a more detailed discussion. In addition to assessing the joint density forecasts, we also evaluate the performance of the models in terms of the forecasts of each individual series. For example, to evaluate the performance for forecasting the i-th component of y_{t+k}, we simply replace the joint density p(y_{t+k} = y^o_{t+k} | y_{1:t}) by the marginal density p(y_{i,t+k} = y^o_{i,t+k} | y_{1:t}), and report the corresponding sum.

Table 1 reports the performance of the nine competing models for producing 1-quarter-ahead joint density forecasts, as well as the marginal density forecasts for the three components. The figures are presented relative to those of the VAR(2) benchmark; hence, positive values indicate better forecast performance than the benchmark and vice versa. A few broad conclusions can be drawn from the results. Overall, adding a moving average component improves the forecast performance of standard VARs for all lag lengths considered. For example, the difference between the log predictive likelihoods of VAR(3) and VARMA(3,1) is 15.5, which may be interpreted as a Bayes factor of about exp(15.5) ≈ 5.4 million in favor of the VARMA(3,1) model. In addition, adding stochastic volatility to the models further improves their forecast performance substantially. This is in line with the large literature on inflation forecasting that shows the considerable benefits of allowing for stochastic volatility for both point and density forecasts (see, e.g., Stock and Watson, 2007; Clark and Doh, 2011; Chan, Koop, Leon-Gonzalez, and Strachan, 2012).

Table 1: Relative log predictive likelihoods for 1-quarter-ahead density forecasts (columns: GDP, Inflation, Interest, Joint; rows: VAR(2), VARMA(2,1), VARMA(2,1)-SV, VAR(3), VARMA(3,1), VARMA(3,1)-SV, VAR(4), VARMA(4,1), VARMA(4,1)-SV).

Next, to investigate the source of differences in forecast performance, we look at the log predictive likelihoods for each series.
The results suggest that the gain from adding the moving average and stochastic volatility components comes mainly from better forecasts of the two nominal variables, inflation and the interest rate. In fact, all nine models perform very similarly in forecasting GDP growth; only models with p = 3 lags do a little better than the rest.

Tables 2 and 3 present the results for 2- and 3-quarter-ahead density forecasts, respectively. Overall, VARMAs with stochastic volatility perform best, whereas VARMAs with constant variance do not perform as well as VARs. For example, for the 2-quarter-ahead forecasts, VARMAs are better at forecasting inflation compared to VARs, but they do worse at forecasting GDP growth and the interest rate, and overall as well. These results are perhaps not surprising, as the moving average components in VARMAs are of first order, and they are not designed to produce long-horizon forecasts. Despite this, VARMAs with stochastic volatility perform better than VARs in all cases, highlighting the importance of allowing for time-varying volatility.

Table 2: Relative log predictive likelihoods for 2-quarter-ahead density forecasts (same rows and columns as Table 1).

Table 3: Relative log predictive likelihoods for 3-quarter-ahead density forecasts (same rows and columns as Table 1).

Figures 1-3 allow us to investigate the forecast performance of the competing models in more detail by plotting the cumulative sums of log predictive likelihoods over the whole evaluation period. For 1-quarter-ahead forecasts, it is clear that overall VARMAs, with or without stochastic volatility, consistently perform better than VARs. It is also clear that allowing for stochastic volatility is important, especially after the early 1990s.

Figure 1: Cumulative sums of log predictive likelihoods for jointly forecasting GDP growth, inflation, and interest rate relative to VAR(2); one-quarter-ahead forecasts. (Panels compare VAR(3), VARMA(3,1), and VARMA(3,1)-SV, and VAR(4), VARMA(4,1), and VARMA(4,1)-SV.)

Figures 2 and 3 also reveal some interesting patterns over time. For instance, VARMAs perform better than VARs from the beginning of the evaluation period up to around the mid-1980s, the start of the Great Moderation. After that their performance gradually deteriorates relative to VARs, with the constant-volatility variants having a steeper decline, until about the financial crisis in 2008, when VARMAs evidently compare more favorably against VARs. This observation suggests that VARMAs would be a valuable addition for producing combination density forecasts, especially when the portfolio of forecasts consists of only VARs.

Figure 2: Cumulative sums of log predictive likelihoods for jointly forecasting GDP growth, inflation, and interest rate relative to VAR(2); two-quarter-ahead forecasts. (Same model comparisons as Figure 1.)

Figure 3: Cumulative sums of log predictive likelihoods for jointly forecasting GDP growth, inflation, and interest rate relative to VAR(2); three-quarter-ahead forecasts. (Same model comparisons as Figure 1.)

6 Concluding Remarks and Future Research

We have introduced a latent factor representation of the VARMA model, as well as two alternative approaches to identification in this setting. We have developed a Gibbs sampler for the model and discussed how this algorithm can be extended to models with time-varying parameters and stochastic volatility. The proposed methodology was demonstrated in a density forecasting exercise, in which we also showed that VARMAs with stochastic volatility forecast better than standard VARs.

The methodology developed in this article suggests a few lines of future inquiry. First, it would be of interest to develop informative priors for large VARMAs analogous to those proposed in the large VAR literature, and to investigate whether further forecast improvement can be achieved by introducing the moving average components. In addition, empirical investigation of time-varying parameter VARMAs and regime-switching VARMAs also seems beneficial. Moreover, in the discussion of identification with additional time series, we have recovered the popular FAVAR as a VARMA with a particular set of identification restrictions. It would be interesting to further investigate alternative identification restrictions, and how they compare to the standard ones.

Appendix

In this appendix we discuss the details of Steps 2 and 3 in the basic Gibbs sampler for the VARMA(p,q) model in (2). To sample from p(ψ | y, β, f, Σ, Ω), note that the innovation ε_t has a diagonal covariance matrix Σ. Hence, we can estimate the MA coefficients in (2) equation by equation. To that end, define y*_t = y_t − μ − Π_1 y_{t−1} − ⋯ − Π_p y_{t−p}, and let y*_it denote the i-th element of y*_t. Let φ_{j,i} denote the i-th column of Ψ_j, and accordingly, let ψ_{j,i} be the free elements in φ_{j,i}, such that

φ_{0,i} = R_{0,i} ψ_{0,i} + ι_i,    φ_{j,i} = R_{j,i} ψ_{j,i} for j ≥ 1,

where ι_i is an n × 1 vector with the i-th element equal to 1 and all others set to zero, and R_{j,i} is a pre-determined selection matrix of appropriate dimensions. Our assumptions regarding Ψ_0 correspond to R_{0,i} = (I_{i−1}, 0)′ for i > 1 (for i = 1, it is clear that φ_{0,1} = ι_1 and there are no free elements). In addition, imposing a lower triangular Ψ_1 (i.e., the identification strategy applied in the text) is equivalent to setting R_{1,i} = (I_i, 0)′ for i ≥ 1. Using this formulation, define R_i = diag(R_{0,i}, R_{1,i}, ..., R_{q,i}) and ψ_i = (ψ′_{0,i}, ψ′_{1,i}, ..., ψ′_{q,i})′, such that φ_i = R_i ψ_i + (ι′_i, 0)′ forms the i-th column of (Ψ′_0, Ψ′_1, ..., Ψ′_q)′. Then, the i-th equation in (2) can be written as

y*_it = f_it + w′_t R_i ψ_i + ε_it,

where w_t = (f′_t, f′_{t−1}, ..., f′_{t−q})′, f_it is the i-th element of f_t, and ε_it is the i-th element of ε_t. Denoting further w′_it = w′_t R_i, standard results dictate

(ψ_i | y, β, f, Σ, Ω) ∼ N(ψ̂_i, D_ψi),

where

D_ψi = (V^{−1}_{ψ_i} + (1/σ²_i) ∑_{t=1}^T w_it w′_it)^{−1},    ψ̂_i = D_ψi (V^{−1}_{ψ_i} ψ_{0i} + (1/σ²_i) ∑_{t=1}^T w_it (y*_it − f_it)),

and σ²_i is the i-th diagonal element of Σ.

For Step 2, note that both p(Σ | y, β, f, ψ) and p(Ω | y, β, f, ψ) are products of inverse-gamma densities, and can therefore be sampled using standard methods. In fact, we have

(σ²_i | y, β, f, ψ) ∼ IG(ν_{σi} + T/2, S_{σi} + ∑_{t=1}^T (y*_it − f_it − w′_it ψ_i)²/2),
(ω²_i | y, β, f, ψ) ∼ IG(ν_{ωi} + T/2, S_{ωi} + ∑_{t=1}^T f²_it/2).
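The two conditional draws derived above are standard; below is a compact illustrative sketch in our own notation (all names are ours), where Wt stacks the row vectors w′_it into a T × dim(ψ_i) matrix.

```python
import numpy as np

def draw_psi_i(ystar_i, f_i, Wt, sig2_i, psi0_i, Vpsi_i, rng):
    """psi_i | y, beta, f, Sigma, Omega ~ N(psi_hat, D), with
    D^{-1} = Vpsi^{-1} + sig^{-2} * sum_t w_it w_it' and
    psi_hat = D (Vpsi^{-1} psi0 + sig^{-2} * sum_t w_it (y*_it - f_it))."""
    Vinv = np.linalg.inv(Vpsi_i)
    K = Vinv + Wt.T @ Wt / sig2_i                  # posterior precision
    b = Vinv @ psi0_i + Wt.T @ (ystar_i - f_i) / sig2_i
    psi_hat = np.linalg.solve(K, b)
    L = np.linalg.cholesky(K)
    return psi_hat + np.linalg.solve(L.T, rng.standard_normal(b.size))

def draw_sig2_i(ystar_i, f_i, Wt, psi_i, nu_s, S_s, rng):
    """sigma_i^2 | ... ~ IG(nu + T/2, S + SSR/2); an inverse-gamma draw is
    the posterior scale divided by a Gamma(shape, 1) draw."""
    resid = ystar_i - f_i - Wt @ psi_i
    shape = nu_s + len(ystar_i) / 2
    scale = S_s + resid @ resid / 2
    return scale / rng.gamma(shape)
```

The draw for ω²_i is identical in form, with the squared factors f²_it replacing the squared residuals.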

References

S. An and F. Schorfheide. Bayesian analysis of DSGE models. Econometric Reviews, 26(2-4), 2007.
G. Athanasopoulos and F. Vahid. VARMA versus VAR for macroeconomic forecasting. Journal of Business and Economic Statistics, 26(2), 2008.
M. Banbura, D. Giannone, and L. Reichlin. Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1):71-92, 2010.
B. Bernanke, J. Boivin, and P. S. Eliasz. Measuring the effects of monetary policy: A factor-augmented vector autoregressive (FAVAR) approach. The Quarterly Journal of Economics, 120(1):387-422, 2005.
F. Canova. Modelling and forecasting exchange rates with a Bayesian time-varying coefficient model. Journal of Economic Dynamics and Control, 17, 1993.
J. C. C. Chan. Moving average stochastic volatility models with application to inflation forecast. Working paper, Research School of Economics, Australian National University.
J. C. C. Chan and I. Jeliazkov. Efficient simulation and integrated likelihood estimation in state space models. International Journal of Mathematical Modelling and Numerical Optimisation, 1:101-120, 2009.
J. C. C. Chan, G. Koop, R. Leon-Gonzalez, and R. Strachan. Time varying dimension models. Journal of Business and Economic Statistics, 30, 2012.
T. E. Clark and T. Doh. A Bayesian evaluation of alternative models of trend inflation. Working paper, Federal Reserve Bank of Cleveland, 2011.
T. Cogley and T. J. Sargent. Drifts and volatilities: monetary policies and outcomes in the post WWII US. Review of Economic Dynamics, 8(2):262-302, 2005.
E. Eisenstat. A short-term demand forecasting simulation model for information-driven resource planning. In Proceedings of the 17th International Conference on the Knowledge-Based Organization 3: Applied Technical Sciences and Advanced Military Technologies, Sibiu, Romania, November.
J. Geweke and G. Amisano. Hierarchical Markov normal mixture models with applications to financial asset returns. Journal of Applied Econometrics, 26:1-29, 2011.
S. Kim, N. Shephard, and S. Chib. Stochastic volatility: Likelihood inference and comparison with ARCH models. Review of Economic Studies, 65(3):361-393, 1998.
G. Koop. Bayesian Econometrics. Wiley & Sons, New York, 2003.
G. Koop. Forecasting with medium and large Bayesian VARs. Journal of Applied Econometrics, forthcoming.
G. Koop and D. Korobilis. Bayesian multivariate time series methods for empirical macroeconomics. Foundations and Trends in Econometrics, 3(4), 2010.
G. Koop and S. M. Potter. Estimation and forecasting in models with multiple breaks. Review of Economic Studies, 74, 2007.
D. Korobilis. Assessing the transmission of monetary policy shocks using time-varying parameter dynamic factor models. Oxford Bulletin of Economics and Statistics, forthcoming.
D. P. Kroese, T. Taimre, and Z. I. Botev. Handbook of Monte Carlo Methods. John Wiley & Sons, New York, 2011.
R. Litterman. Forecasting with Bayesian vector autoregressions: five years of experience. Journal of Business and Economic Statistics, 4:25-38, 1986.
W. J. McCausland, S. Miller, and D. Pelletier. Simulation smoothing for state-space models: A computational efficiency analysis. Computational Statistics and Data Analysis, 55, 2011.
K. Metaxoglou and A. Smith. Maximum likelihood estimation of VARMA models using a state-space EM algorithm. Journal of Time Series Analysis, 28(5), 2007.
R. Paap and H. van Dijk. Bayes estimates of Markov trends in possibly cointegrated series: An application to US consumption and income. Journal of Business and Economic Statistics, 21, 2003.
M. S. Peiris. On the study of some functions of multivariate ARMA processes. Journal of Time Series Analysis, 25(1).
G. E. Primiceri. Time varying structural vector autoregressions and monetary policy. Review of Economic Studies, 72(3):821-852, 2005.
C. A. Sims. Macroeconomics and reality. Econometrica, 48:1-48, 1980.
J. H. Stock and M. W. Watson. Macroeconomic forecasting using diffusion indexes. Journal of Business and Economic Statistics, 20, 2002.
J. H. Stock and M. W. Watson. Why has U.S. inflation become harder to forecast? Journal of Money, Credit and Banking, 39:3-33, 2007.


More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY PREFACE xiii 1 Difference Equations 1.1. First-Order Difference Equations 1 1.2. pth-order Difference Equations 7

More information

Computational Macroeconomics. Prof. Dr. Maik Wolters Friedrich Schiller University Jena

Computational Macroeconomics. Prof. Dr. Maik Wolters Friedrich Schiller University Jena Computational Macroeconomics Prof. Dr. Maik Wolters Friedrich Schiller University Jena Overview Objective: Learn doing empirical and applied theoretical work in monetary macroeconomics Implementing macroeconomic

More information

Estimating Macroeconomic Models: A Likelihood Approach

Estimating Macroeconomic Models: A Likelihood Approach Estimating Macroeconomic Models: A Likelihood Approach Jesús Fernández-Villaverde University of Pennsylvania, NBER, and CEPR Juan Rubio-Ramírez Federal Reserve Bank of Atlanta Estimating Dynamic Macroeconomic

More information

Trend-Cycle Decompositions

Trend-Cycle Decompositions Trend-Cycle Decompositions Eric Zivot April 22, 2005 1 Introduction A convenient way of representing an economic time series y t is through the so-called trend-cycle decomposition y t = TD t + Z t (1)

More information

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49 State-space Model Eduardo Rossi University of Pavia November 2013 Rossi State-space Model Financial Econometrics - 2013 1 / 49 Outline 1 Introduction 2 The Kalman filter 3 Forecast errors 4 State smoothing

More information

Non-Markovian Regime Switching with Endogenous States and Time-Varying State Strengths

Non-Markovian Regime Switching with Endogenous States and Time-Varying State Strengths Non-Markovian Regime Switching with Endogenous States and Time-Varying State Strengths January 2004 Siddhartha Chib Olin School of Business Washington University chib@olin.wustl.edu Michael Dueker Federal

More information

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M.

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M. TIME SERIES ANALYSIS Forecasting and Control Fifth Edition GEORGE E. P. BOX GWILYM M. JENKINS GREGORY C. REINSEL GRETA M. LJUNG Wiley CONTENTS PREFACE TO THE FIFTH EDITION PREFACE TO THE FOURTH EDITION

More information

1 Teaching notes on structural VARs.

1 Teaching notes on structural VARs. Bent E. Sørensen November 8, 2016 1 Teaching notes on structural VARs. 1.1 Vector MA models: 1.1.1 Probability theory The simplest to analyze, estimation is a different matter time series models are the

More information

Stochastic Volatility Models with ARMA Innovations: An Application to G7 Inflation Forecasts

Stochastic Volatility Models with ARMA Innovations: An Application to G7 Inflation Forecasts Stochastic Volatility Models with ARMA Innovations: An Application to G7 Inflation Forecasts Bo Zhang Research School of Economics, Australian National University Joshua C. C. Chan Economics Discipline

More information

Estimating and Identifying Vector Autoregressions Under Diagonality and Block Exogeneity Restrictions

Estimating and Identifying Vector Autoregressions Under Diagonality and Block Exogeneity Restrictions Estimating and Identifying Vector Autoregressions Under Diagonality and Block Exogeneity Restrictions William D. Lastrapes Department of Economics Terry College of Business University of Georgia Athens,

More information

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, )

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, ) Econometrica Supplementary Material SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, 653 710) BY SANGHAMITRA DAS, MARK ROBERTS, AND

More information

Title. Description. var intro Introduction to vector autoregressive models

Title. Description. var intro Introduction to vector autoregressive models Title var intro Introduction to vector autoregressive models Description Stata has a suite of commands for fitting, forecasting, interpreting, and performing inference on vector autoregressive (VAR) models

More information

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods

Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Do Markov-Switching Models Capture Nonlinearities in the Data? Tests using Nonparametric Methods Robert V. Breunig Centre for Economic Policy Research, Research School of Social Sciences and School of

More information

Point, Interval, and Density Forecast Evaluation of Linear versus Nonlinear DSGE Models

Point, Interval, and Density Forecast Evaluation of Linear versus Nonlinear DSGE Models Point, Interval, and Density Forecast Evaluation of Linear versus Nonlinear DSGE Models Francis X. Diebold Frank Schorfheide Minchul Shin University of Pennsylvania May 4, 2014 1 / 33 Motivation The use

More information

Econ 423 Lecture Notes: Additional Topics in Time Series 1

Econ 423 Lecture Notes: Additional Topics in Time Series 1 Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes

More information

Chapter 1. Introduction. 1.1 Background

Chapter 1. Introduction. 1.1 Background Chapter 1 Introduction Science is facts; just as houses are made of stones, so is science made of facts; but a pile of stones is not a house and a collection of facts is not necessarily science. Henri

More information

Estimation in Non-Linear Non-Gaussian State Space Models with Precision-Based Methods

Estimation in Non-Linear Non-Gaussian State Space Models with Precision-Based Methods MPRA Munich Personal RePEc Archive Estimation in Non-Linear Non-Gaussian State Space Models with Precision-Based Methods Joshua Chan and Rodney Strachan Australian National University 22 Online at http://mpra.ub.uni-muenchen.de/3936/

More information

Bayesian Inference for DSGE Models. Lawrence J. Christiano

Bayesian Inference for DSGE Models. Lawrence J. Christiano Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.

More information

Regime-Switching Cointegration

Regime-Switching Cointegration Regime-Switching Cointegration Markus Jochmann Newcastle University Rimini Centre for Economic Analysis Gary Koop University of Strathclyde Rimini Centre for Economic Analysis July 2013 Abstract We develop

More information

Assessing Structural Convergence between Romanian Economy and Euro Area: A Bayesian Approach

Assessing Structural Convergence between Romanian Economy and Euro Area: A Bayesian Approach Vol. 3, No.3, July 2013, pp. 372 383 ISSN: 2225-8329 2013 HRMARS www.hrmars.com Assessing Structural Convergence between Romanian Economy and Euro Area: A Bayesian Approach Alexie ALUPOAIEI 1 Ana-Maria

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 7 Approximate

More information

BEAR 4.2. Introducing Stochastic Volatility, Time Varying Parameters and Time Varying Trends. A. Dieppe R. Legrand B. van Roye ECB.

BEAR 4.2. Introducing Stochastic Volatility, Time Varying Parameters and Time Varying Trends. A. Dieppe R. Legrand B. van Roye ECB. BEAR 4.2 Introducing Stochastic Volatility, Time Varying Parameters and Time Varying Trends A. Dieppe R. Legrand B. van Roye ECB 25 June 2018 The views expressed in this presentation are the authors and

More information

Factor models. March 13, 2017

Factor models. March 13, 2017 Factor models March 13, 2017 Factor Models Macro economists have a peculiar data situation: Many data series, but usually short samples How can we utilize all this information without running into degrees

More information

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models Journal of Finance and Investment Analysis, vol.1, no.1, 2012, 55-67 ISSN: 2241-0988 (print version), 2241-0996 (online) International Scientific Press, 2012 A Non-Parametric Approach of Heteroskedasticity

More information

Working Paper Series

Working Paper Series Discussion of Cogley and Sargent s Drifts and Volatilities: Monetary Policy and Outcomes in the Post WWII U.S. Marco Del Negro Working Paper 23-26 October 23 Working Paper Series Federal Reserve Bank of

More information

31 (3) ISSN

31 (3) ISSN Chan, Joshua C. C. and Koop, Gary and Potter, Simon M. (2016) A bounded model of time variation in trend inflation, NAIRU and the Phillips curve. Journal of Applied Econometrics, 31 (3). pp. 551-565. ISSN

More information

Marginal Specifications and a Gaussian Copula Estimation

Marginal Specifications and a Gaussian Copula Estimation Marginal Specifications and a Gaussian Copula Estimation Kazim Azam Abstract Multivariate analysis involving random variables of different type like count, continuous or mixture of both is frequently required

More information

Vector autoregressions, VAR

Vector autoregressions, VAR 1 / 45 Vector autoregressions, VAR Chapter 2 Financial Econometrics Michael Hauser WS17/18 2 / 45 Content Cross-correlations VAR model in standard/reduced form Properties of VAR(1), VAR(p) Structural VAR,

More information

Essex Finance Centre Working Paper Series

Essex Finance Centre Working Paper Series Essex Finance Centre Working Paper Series Working Paper No14: 12-216 Adaptive Minnesota Prior for High-Dimensional Vector Autoregressions Dimitris Korobilis and Davide Pettenuzzo Essex Business School,

More information

Generalized Impulse Response Analysis: General or Extreme?

Generalized Impulse Response Analysis: General or Extreme? Auburn University Department of Economics Working Paper Series Generalized Impulse Response Analysis: General or Extreme? Hyeongwoo Kim Auburn University AUWP 2012-04 This paper can be downloaded without

More information

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Sílvia Gonçalves and Benoit Perron Département de sciences économiques,

More information

Bayesian Compressed Vector Autoregressions

Bayesian Compressed Vector Autoregressions Bayesian Compressed Vector Autoregressions Gary Koop University of Strathclyde Dimitris Korobilis University of Glasgow December 1, 15 Davide Pettenuzzo Brandeis University Abstract Macroeconomists are

More information

Oil price and macroeconomy in Russia. Abstract

Oil price and macroeconomy in Russia. Abstract Oil price and macroeconomy in Russia Katsuya Ito Fukuoka University Abstract In this note, using the VEC model we attempt to empirically investigate the effects of oil price and monetary shocks on the

More information

Working Paper Series. A note on implementing the Durbin and Koopman simulation smoother. No 1867 / November Marek Jarocinski

Working Paper Series. A note on implementing the Durbin and Koopman simulation smoother. No 1867 / November Marek Jarocinski Working Paper Series Marek Jarocinski A note on implementing the Durbin and Koopman simulation smoother No 1867 / November 2015 Note: This Working Paper should not be reported as representing the views

More information

Identification of Economic Shocks by Inequality Constraints in Bayesian Structural Vector Autoregression

Identification of Economic Shocks by Inequality Constraints in Bayesian Structural Vector Autoregression Identification of Economic Shocks by Inequality Constraints in Bayesian Structural Vector Autoregression Markku Lanne Department of Political and Economic Studies, University of Helsinki Jani Luoto Department

More information

Factor models. May 11, 2012

Factor models. May 11, 2012 Factor models May 11, 2012 Factor Models Macro economists have a peculiar data situation: Many data series, but usually short samples How can we utilize all this information without running into degrees

More information

DISCUSSION PAPERS IN ECONOMICS

DISCUSSION PAPERS IN ECONOMICS STRATHCLYDE DISCUSSION PAPERS IN ECONOMICS LARGE BAYESIAN VARMAS BY JOSHUA C C CHAN, ERIC EISENSTAT AND GARY KOOP NO 4-9 DEPARTMENT OF ECONOMICS UNIVERSITY OF STRATHCLYDE GLASGOW Large Bayesian VARMAs

More information

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8]

Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] 1 Multivariate Time Series Analysis and Its Applications [Tsay (2005), chapter 8] Insights: Price movements in one market can spread easily and instantly to another market [economic globalization and internet

More information

Generalized Autoregressive Score Models

Generalized Autoregressive Score Models Generalized Autoregressive Score Models by: Drew Creal, Siem Jan Koopman, André Lucas To capture the dynamic behavior of univariate and multivariate time series processes, we can allow parameters to be

More information

Large Bayesian VARMAs

Large Bayesian VARMAs Large Bayesian VARMAs Joshua C.C. Chan Australian National University Gary Koop University of Strathclyde May 25 Eric Eisenstat The University of Queensland Abstract: Vector Autoregressive Moving Average

More information

Forecasting with Bayesian Vector Autoregressions

Forecasting with Bayesian Vector Autoregressions WORKING PAPER 12/2012 Forecasting with Bayesian Vector Autoregressions Sune Karlsson Statistics ISSN 1403-0586 http://www.oru.se/institutioner/handelshogskolan-vid-orebro-universitet/forskning/publikationer/working-papers/

More information

Gaussian Mixture Approximations of Impulse Responses and the Non-Linear Effects of Monetary Shocks

Gaussian Mixture Approximations of Impulse Responses and the Non-Linear Effects of Monetary Shocks Gaussian Mixture Approximations of Impulse Responses and the Non-Linear Effects of Monetary Shocks Regis Barnichon (CREI, Universitat Pompeu Fabra) Christian Matthes (Richmond Fed) Effects of monetary

More information

Part I State space models

Part I State space models Part I State space models 1 Introduction to state space time series analysis James Durbin Department of Statistics, London School of Economics and Political Science Abstract The paper presents a broad

More information

Lecture 2: Univariate Time Series

Lecture 2: Univariate Time Series Lecture 2: Univariate Time Series Analysis: Conditional and Unconditional Densities, Stationarity, ARMA Processes Prof. Massimo Guidolin 20192 Financial Econometrics Spring/Winter 2017 Overview Motivation:

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

New Introduction to Multiple Time Series Analysis

New Introduction to Multiple Time Series Analysis Helmut Lütkepohl New Introduction to Multiple Time Series Analysis With 49 Figures and 36 Tables Springer Contents 1 Introduction 1 1.1 Objectives of Analyzing Multiple Time Series 1 1.2 Some Basics 2

More information

Estimating Unobservable Inflation Expectations in the New Keynesian Phillips Curve

Estimating Unobservable Inflation Expectations in the New Keynesian Phillips Curve econometrics Article Estimating Unobservable Inflation Expectations in the New Keynesian Phillips Curve Francesca Rondina ID Department of Economics, University of Ottawa, 120 University Private, Ottawa,

More information

Gibbs Sampling in Linear Models #2

Gibbs Sampling in Linear Models #2 Gibbs Sampling in Linear Models #2 Econ 690 Purdue University Outline 1 Linear Regression Model with a Changepoint Example with Temperature Data 2 The Seemingly Unrelated Regressions Model 3 Gibbs sampling

More information

Working Paper Series. This paper can be downloaded without charge from:

Working Paper Series. This paper can be downloaded without charge from: Working Paper Series This paper can be downloaded without charge from: http://www.richmondfed.org/publications/ Choosing Prior Hyperparameters Pooyan Amir-Ahmadi Christian Matthes Mu-Chun Wang Working

More information