Block Bootstrap Prediction Intervals for Vector Autoregression

Department of Economics Working Paper

Block Bootstrap Prediction Intervals for Vector Autoregression

Jing Li
Miami University
2013

Working Paper #

Block Bootstrap Prediction Intervals for Vector Autoregression

Jing Li
Miami University

Abstract

This paper attempts to answer the question of whether the principle of parsimony can apply to interval forecasting for a multivariate series. Toward that end, this paper proposes block bootstrap prediction intervals based on the parsimonious first-order vector autoregression. The new intervals generalize the standard bootstrap prediction intervals by allowing for serially correlated prediction errors. The unexplained serial correlation is accounted for by the block bootstrap, which resamples two-dimensional arrays of residuals. A Monte Carlo experiment shows that the new intervals outperform the standard bootstrap intervals in most cases.

Keywords: Forecast; Vector Autoregression; Block Bootstrap; Prediction Intervals; Principle of Parsimony

Jing Li, Department of Economics, Miami University, Oxford, OH 45056, USA. lij14@miamioh.edu.

1. Introduction

Chatfield (1993) emphasizes that interval forecasts convey the uncertainty associated with a prediction, and so are more informative than point forecasts. This paper is concerned with constructing individual prediction intervals for each component of a multivariate time series, based on the vector autoregression (VAR) developed by Sims (1980).¹ We focus on bootstrap prediction intervals since they automatically account for the sampling variability of coefficient estimators and for non-normal prediction errors.

For forecasting purposes the VAR has pros and cons. The VAR fully utilizes the across-variable correlation, which is ignored by univariate models. However, the number of unknown coefficients in a dynamically adequate VAR can be large, making the model not parsimonious. This is at odds with the principle of parsimony: it is well known that a parsimonious univariate model may produce superior out-of-sample point forecasts; see Enders (2009) for example. Recently Li (2013) shows that a parsimonious univariate model may yield superior interval forecasts as well.

The main objective of this paper is to investigate whether the principle of parsimony applies to interval forecasting for the components of a multivariate series. Toward that end, this paper proposes the block bootstrap intervals (BBI) based on the first-order VAR, the most parsimonious VAR. Because the error terms are likely to be serially correlated, we need to implement the block bootstrap of Künsch (1989) instead of the standard bootstrap of Efron (1979). More explicitly, the BBI is characterized by resampling two-dimensional arrays of residuals. The space dimension is intended to preserve the correlation between variables, while the time dimension captures the serial correlation.
The proposed BBI adds to the literature by generalizing Thombs and Schucany (1990) in two directions: from univariate to multivariate, and from the assumption of independent

¹ There are other types of multivariate forecasts, such as the prediction ellipsoid studied in Kim (1999). This paper concentrates on individual prediction intervals in light of their popularity among practitioners.

errors to relaxing that assumption. As a comparison, this paper also considers applying the standard bootstrap to a dynamically complete p-th order VAR, where p is sufficiently large that the error becomes serially uncorrelated. This is necessary because the standard bootstrap assumes independent errors. The resulting VAR with white noise errors can be complicated. Our conjecture is that the BBI will perform better since it is based on the parsimonious model. Later we conduct a Monte Carlo experiment to check this conjecture.

In addition, this paper considers improving the BBI by correcting the bias in the coefficients estimated by ordinary least squares (OLS). Early works such as Shaman and Stine (1988) and Kilian (1998) illustrate the importance of correcting the autoregressive bias. This paper extends those studies from the perspective of multivariate interval forecasting.

The literature on bootstrap prediction intervals is growing. An incomplete list includes Thombs and Schucany (1990), Grigoletto (1998), Kim (1999), Kim (2001), Clements and Taylor (2001), Kim (2002), Kim (2004) and Li (2011). This paper is distinguished from the existing literature by applying the block bootstrap to VAR forecasting.

The remainder of the paper is organized as follows. Section 2 specifies the BBI. Section 3 conducts the Monte Carlo experiment. Section 4 discusses a possible extension of this study and concludes.

2. Bootstrap Prediction Intervals for VAR

Let y_t denote an m × 1 vector: y_t = (y_{1,t}, ..., y_{m,t})′, where y_{i,t}, (i = 1, ..., m), (t = 1, ...) is a univariate series. The goal is to find prediction intervals for each component of the future vectors (y_{n+1}, ..., y_{n+h}) based on the observed values (y_1, ..., y_n). To focus on the main issue we assume y_t has zero mean and is weakly stationary:

E y_t = 0,  E(y_{i,t} y_{j,t-k}) = σ_{ijk},  (i, j = 1, ..., m), (k = 0, 1, ...).
In practice y_t may represent a differenced, detrended or demeaned series, depending on the order of integration and the deterministic terms.²

² The literature on forecasting non-stationary series is vast; see Clements and Hendry (2001) for example.
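As a concrete illustration of this notation, the cross-covariances σ_{ijk} can be estimated by sample moments. The sketch below (Python/NumPy, with simulated placeholder data; nothing here comes from the paper's own code) estimates σ_{ij0} for a bivariate series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a zero-mean bivariate series (illustrative data only).
n, m = 500, 2
y = rng.standard_normal((n, m))

def sigma_hat(y, i, j, k):
    """Sample estimate of sigma_ijk = E(y_{i,t} y_{j,t-k})."""
    yi, yj = y[k:, i], y[: len(y) - k, j]
    return float(np.mean(yi * yj))

# sigma_ij0 is the contemporaneous covariance; it is symmetric in i, j.
s010 = sigma_hat(y, 0, 1, 0)
s100 = sigma_hat(y, 1, 0, 0)
```

At k = 0 the estimator is symmetric in i and j, matching the contemporaneous-covariance interpretation of σ_{ij0}.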

The VAR is a multivariate model that takes into account the across-variable correlation σ_{ijk}, which is ignored by univariate models. There are at least two ways to build bootstrap prediction intervals using the VAR; their difference lies in which bootstrap method is used. That in turn boils down to how the serial correlation is modeled: by a dynamically adequate model with white noise errors, or by a parsimonious model with serially correlated errors.

Standard Bootstrap Intervals

We start with the standard bootstrap intervals (BI for short), which are the multivariate generalization of the intervals proposed by Thombs and Schucany (1990). The BI is based on a VAR(p) model

y_t = α_1 y_{t-1} + α_2 y_{t-2} + ... + α_p y_{t-p} + e_t,  (1)

where α_i, (i = 1, ..., p) is an m × m coefficient matrix, and e_t is an m × 1 vector of error terms.³ Unlike the classical Box-Jenkins type prediction intervals, the BI does not assume that e_t follows a normal distribution. However, because the standard bootstrap of Efron (1979) can only be applied in the independent setting, the BI needs the assumption that e_t in (1) is independent of e_{t-j}, (j ≥ 1). The independence assumption is restrictive in the sense that model (1) must be dynamically adequate, or the lag order p should be sufficiently large (so that no serial correlation is left in e_t). The detailed algorithm for constructing the BI is omitted here; it can be found in Thombs and Schucany (1990). For our purpose, it suffices to stress that the algorithm involves resampling with replacement the individual m × 1 residual vectors ê_t after (1) is fitted by OLS.

³ The intercept term is dropped in (1) because we assume y_t has zero mean. Our simulation indicates that there is no qualitative change if an intercept term is added.
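The BI's key ingredient, fitting the VAR by OLS and resampling individual residual vectors with replacement, can be sketched as follows (a minimal Python/NumPy illustration; the VAR(1) coefficients and data are hypothetical, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate VAR(1) (illustrative coefficients, chosen stationary).
n, m = 300, 2
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])
y = np.zeros((n, m))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.standard_normal(m)

# OLS fit of y_t = alpha_1 y_{t-1} + e_t (zero-mean series, no intercept).
X, Y = y[:-1], y[1:]
coef = np.linalg.lstsq(X, Y, rcond=None)[0]      # m x m, so Y ~ X @ coef
resid = Y - X @ coef                             # (n-1) x m residual vectors

# Standard (Efron) bootstrap: resample individual residual vectors with
# replacement -- valid only if e_t is serially independent.
idx = rng.integers(0, len(resid), size=len(resid))
resid_star = resid[idx]
```

The resampled rows `resid_star` keep the contemporaneous correlation within each vector but destroy any correlation across time, which is exactly why the BI requires independent errors.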

Block Bootstrap Intervals

Basically the BI utilizes the serial correlation through (possibly many) lagged values of y. Notice that there are m²p autoregressive coefficients in (1) to estimate. If p is large, the resulting VAR can be very complicated. In fact, this must be the case when the series follows a vector moving average process, for which any chosen VAR(p) is a finite-order approximation to the true VAR(∞). The principle of parsimony is forgone if that happens.

With an eye to the parsimony principle, this paper proposes the block bootstrap intervals (BBI for short) based on the simplest VAR(1) model

y_t = β_1 y_{t-1} + v_t.  (2)

It is obvious that model (2) has no more coefficients to estimate than (1). Moreover, the chance of multicollinearity is minimized by (2). The BBI is motivated by the intuition that the simplicity of model (2) may lead to superior out-of-sample forecasts. In particular, the improvement in forecasting accuracy is expected to be substantial when the sample size n is small and when the serial correlation is strong.

One issue with model (2) is that the error term v_t is in general serially correlated. To see this, suppose the true data generating process is (1) with independent errors e_t. Then by construction v_t = α_2 y_{t-2} + ... + α_p y_{t-p} + e_t, so it is correlated with v_{t-1} and so on. The serial correlation in the new error term is not surprising: the VAR(1) model is somewhat semiparametric, and just one lagged value in general cannot capture all of the dynamic structure.

Now we hope to take advantage of the simple model (2), and meanwhile use the serial correlation in the error term to refine the forecast. The bootstrap method can be helpful in this context.⁴ Nevertheless, Efron's standard bootstrap cannot be applied to the dependent

⁴ The serial correlation in the error term may suggest methods such as generalized least squares (GLS). We do not consider GLS since that method is intended to model the correlation structure in the error term.
In other words, GLS is essentially based on a model as complicated as (1).
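That the error of an underspecified first-order model inherits serial correlation is easy to verify numerically. The sketch below uses an illustrative univariate DGP (not the paper's design) driven purely by its second lag, fits a first-order model by OLS, and measures the lag-2 correlation left in the residuals:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative DGP with dynamics beyond one lag: y_t = 0.9*y_{t-2} + e_t.
n = 2000
y = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    y[t] = 0.9 * y[t - 2] + e[t]

# Fit the parsimonious first-order model y_t = b*y_{t-1} + v_t by OLS.
x, z = y[:-1], y[1:]
b = float(np.dot(x, z) / np.dot(x, x))
v = z - b * x

# The residual v_t inherits the unmodeled lag-2 dependence.
rho2 = float(np.corrcoef(v[2:], v[:-2])[0, 1])
```

Here the fitted lag-1 coefficient is close to zero while the residual lag-2 correlation is large, which is precisely the kind of dependence the block bootstrap is meant to preserve.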

series v_t. Instead we need to rely upon the multivariate version of the block bootstrap proposed by Künsch (1989). Let β̂_1 denote the coefficient matrix estimated by OLS, and v̂_t ≡ y_t - β̂_1 y_{t-1} the residual vector.⁵ The key idea of the block bootstrap is resampling with replacement random blocks of adjacent residual vectors. A typical block B(v̂)_j from the residuals v̂_t looks like

B(v̂)_j = (v̂_j, v̂_{j+1}, ..., v̂_{j+b-1}) =
[ v̂_{1,j}  ...  v̂_{1,j+b-1} ]
[   ...            ...       ]
[ v̂_{m,j}  ...  v̂_{m,j+b-1} ]   (m × b),   (3)

where b is the width (size) of the block, and the index number j is a random draw from the discrete uniform distribution between 1 and n - b + 1. Notice that B(v̂)_j is a two-dimensional array. The vertical dimension, of size m, preserves the across-variable contemporaneous correlation σ_{ij0}; the horizontal dimension, of size b, captures the temporal serial correlation σ_{iik}. When b = 1 the block bootstrap reduces to the standard bootstrap. The block bootstrap resamples blocks of residuals (vectors in this case), rather than individual residuals, so that the serial correlation in v_t can be accounted for. In the literature the block bootstrap has mostly been used for purposes other than forecasting; see Politis (2003). One exception is Li (2013), which applies the block bootstrap to univariate interval forecasting by letting m = 1 in (3). This study generalizes that one by allowing m > 1.

In principle, the block size b should rise as the sample size rises. For a given sample size, there is a tradeoff when choosing the optimal block size: raising b captures more serial correlation, but it also increases the overlap between blocks, which produces less variability. More discussion of this tradeoff is given in Li (2013). We let b = 4 throughout this paper, as our preliminary simulation indicates that the results may not be

⁵ One may re-center and re-scale the residuals using the factor of Stine (1987).
We do not use the rescaling factor since it is unclear how to modify the original one for the VAR.
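A single block draw in the sense of (3) can be sketched as follows (the residuals are hypothetical placeholders; rows index time here, so the slice is the transpose of the m × b array in (3), with identical content):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical residual vectors v_hat: n rows (time) by m columns (variables).
n, m, b = 100, 2, 4
v_hat = rng.standard_normal((n, m))

# Draw j uniformly from {0, ..., n - b} (the 0-based analogue of 1, ..., n-b+1)
# and take b adjacent residual vectors.  Keeping whole rows preserves the
# contemporaneous across-variable correlation; keeping b adjacent rows
# preserves the serial correlation within the block.
j = int(rng.integers(0, n - b + 1))
block = v_hat[j:j + b]            # b x m two-dimensional array
```

Setting b = 1 in this sketch collapses the draw to a single residual vector, i.e. the standard bootstrap.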

sensitive to the block size.⁶ Constructing the BBI takes five steps.

In step one, model (2) is fitted by OLS, and the residuals v̂_t are saved.

In step two, the backward representation of the VAR(1) model is fitted by OLS:

y_t = φ_1 y_{t+1} + u_t,  (4)

where the regressor is the first lead, not the first lag. Denote the estimated coefficient and residuals by φ̂_1 and û_t. This backward regression is intended to ensure the conditionality of the bootstrap replicates on the last observed value y_n. See Figure 1 of Thombs and Schucany (1990) for an illustration of the conditionality.

In step three, use the backward residuals û_t to obtain the first random block B(û)_{j1} = (û_{j1}, û_{j1+1}, ..., û_{j1+b-1}), the second block B(û)_{j2} = (û_{j2}, û_{j2+1}, ..., û_{j2+b-1}), and so on, by drawing the index numbers j1, j2, ... with replacement from the discrete uniform distribution between 1 and n - b + 1. Then we stack up these blocks until the length of the stacked series becomes n, the size of the observed sample. Let û*_t denote the t-th observation of the stacked series. Next, one bootstrap replicate series (y*_1, ..., y*_n) is generated in a backward fashion as

y*_n = y_n,  y*_t = φ̂_1 y*_{t+1} + û*_t,  (t = n - 1, ..., 1).  (5)

It is instructive to emphasize that we generate the last observation first, then move backward one period at a time. By doing this all the bootstrap replicate series have the

⁶ Alternatively, the stationary bootstrap of Politis and Romano (1994) can be used, which assumes b follows a geometric distribution. The question still exists, since it becomes how to select the parameter of the geometric distribution.

same last observation, which is y_n. That is what conditionality on the last observed value means. Now we use the bootstrap replicate (y*_1, ..., y*_n) to refit (2), and obtain the so-called bootstrap coefficient β̂*_1. Later this step will be repeated many times; the variability of β̂*_1 can then be used to mimic the inherent variability of estimating β̂_1. This is another advantage of the bootstrap intervals over the Box-Jenkins intervals, which fail to account for the sampling variability of the estimated coefficients.

In step four, in a similar fashion, obtain the random blocks B(v̂)_{j1} = (v̂_{j1}, ..., v̂_{j1+b-1}), B(v̂)_{j2} = (v̂_{j2}, ..., v̂_{j2+b-1}), ... using v̂_t, the residuals of the forward VAR(1). But this time we stack up the blocks until the length of the stacked series is h, the maximum forecast horizon. Let v̂*_l denote the l-th observation of the stacked series. Then we compute recursively the block bootstrap l-step forecast ŷ*_{n+l} as

ŷ*_n = y_n,  ŷ*_{n+l} = β̂*_1 ŷ*_{n+l-1} + v̂*_l,  (l = 1, ..., h).  (6)

The above equation shows that the randomness of the forecast comes from two sources: the randomness due to estimation is captured by β̂*_1, while the randomness of the future shock is represented by v̂*_l.

In step five, we repeat steps three and four D times. In the end, there is a series of block bootstrap l-step forecasts for y_{i,t}, (i = 1, ..., m):

{ŷ*_{i,n+l}(s)}, s = 1, ..., D,  (l = 1, ..., h),  (7)

where s is the index for bootstrapping. The l-step BBI at the γ nominal level for

the i-th component y_{i,t} is given by

l-step BBI = [ ŷ*_{i,n+l}((1 - γ)/2), ŷ*_{i,n+l}((1 + γ)/2) ],  (8)

where ŷ*_{i,n+l}((1 - γ)/2) and ŷ*_{i,n+l}((1 + γ)/2) are the 100(1 - γ)/2-th and 100(1 + γ)/2-th percentiles of the empirical distribution of {ŷ*_{i,n+l}(s)}, s = 1, ..., D. Throughout this paper we let γ = 0. We use D = 999 to avoid the discreteness problem raised by Booth and Hall (1994). Here the percentile method of Efron and Tibshirani (1993) is applied to construct the BBI; Hall (1988) discusses other percentile methods. De Gooijer and Kumar (1992) emphasize that the percentile method performs well when the conditional distribution of the predicted values is unimodal. Our preliminary simulation conducts the dip test of Hartigan and Hartigan (1985) and finds that the distribution is indeed unimodal.

Bias-Corrected Block Bootstrap Intervals

It is well known that autoregressive coefficients estimated by OLS can be biased; see Shaman and Stine (1988) for example. This suggests that the proposed BBI may be improved by using the method of Efron and Tibshirani (1993) or Kilian (1998). In particular, equation (6) indicates that the residuals of the forward VAR(1), v̂_t, have a direct effect on the forecast. Therefore correcting the bias in β̂_1 may be more important than correcting the backward coefficient φ̂_1. This is confirmed by Li (2013) in the setting of univariate autoregressive forecasting.

It is straightforward to correct the bias in β̂_1. We need to nest a new bootstrap within the original one. Suppose there are C series of bootstrap replicates, which can be used to refit the forward regression (2). After obtaining a series of forward bootstrap coefficients {β̂*_1(s)}, s = 1, ..., C, the bias-corrected forward coefficient and the bias-corrected forward residuals are

computed as

β̂^c_1 = 2β̂_1 - (1/C) Σ_{s=1}^{C} β̂*_1(s),  (9)

v̂^c_t = y_t - β̂^c_1 y_{t-1}.  (10)

Throughout this paper we let C = 100. Next, the bias-corrected block bootstrap l-step forecast ŷ^c_{n+l} is generated as

ŷ^c_n = y_n,  ŷ^c_{n+l} = β̂^c_1 ŷ^c_{n+l-1} + v̂^{c*}_l,  (l = 1, ..., h),  (11)

where v̂^{c*}_l is obtained by block bootstrapping v̂^c_t. The l-step bias-corrected block bootstrap intervals (BCBBI) at the γ nominal level for the i-th component y_{i,t} are given by

l-step BCBBI = [ ŷ^c_{i,n+l}((1 - γ)/2), ŷ^c_{i,n+l}((1 + γ)/2) ],  (12)

where ŷ^c_{i,n+l}((1 - γ)/2) and ŷ^c_{i,n+l}((1 + γ)/2) are the 100(1 - γ)/2-th and 100(1 + γ)/2-th percentiles of the empirical distribution of {ŷ^c_{i,n+l}(s)}, s = 1, ..., D. The new intervals can be improved further, for instance by correcting the bias in φ̂_1. However, Li (2013) shows that the benefit of bias-correcting φ̂_1 may be marginal.

3. Monte Carlo Experiment

This section compares the performance of the BI and the BBI in a Monte Carlo experiment. The criterion of comparison is the average coverage rate (ACR) of the prediction intervals for each component of the multivariate series:

ACR = (1/k) Σ 1(y_{i,n+l} ∈ PI),  (i = 1, ..., m; l = 1, ..., h),  (13)

where k is the number of Monte Carlo iterations, 1(·) is the indicator function that equals one when the event in parentheses is true, and PI denotes the prediction interval. Here y_{i,n+l} is the i-th component of y_{n+l} = (y_{1,n+l}, ..., y_{m,n+l})′. The maximum forecast horizon h is 5; no qualitative change was found for larger h in preliminary simulations.

The true data generating process (DGP) is a second-order VAR for a bivariate series y_t = (y_{1,t}, y_{2,t})′:

y_{1,t} = a_1 y_{1,t-1} + a_2 y_{2,t-1} + a_3 y_{1,t-2} + e_{1,t},  (14)
y_{2,t} = b_1 y_{2,t-1} + b_2 y_{2,t-2} + e_{2,t},  (15)

where the coefficients a_i and b_i are all scalars. The error vector e_t = (e_{1,t}, e_{2,t})′ is an independent series generated as

e_t = A u_t,  u_t ~ i.i.d.(0, I),  A = [ sqrt(1 - ρ^2)  ρ ; 0  1 ].  (16)

It is easy to show that the variance-covariance matrix of e_t is

Ω ≡ E(e_t e_t′) = AA′ = [ 1  ρ ; ρ  1 ].  (17)

Notice that y_{2,t-1} appears on the right-hand side of (14), but y_{1,t-1} is absent from (15). That means y_{2,t} first-order Granger causes y_{1,t}, but not vice versa. Using this DGP of triangular form is without loss of generality, because we can always transform a non-triangular form into a triangular form by multiplying by an invertible matrix. One advantage of the triangular form is the clear interpretation of the parameters: b_1 and b_2 control the stationarity of y_{2,t}; a_1 and a_3 control the stationarity of y_{1,t}; the across-variable correlation is measured by a_2

and ρ. For instance, y_{2,t} is stationary when b_1 = 1.4, b_2 = -0.48, since the corresponding characteristic roots are λ_1 = 0.6 and λ_2 = 0.8, both less than one in absolute value. The positive definiteness of Ω requires that |ρ| < 1.

Three bootstrap prediction intervals are considered. The proposed block bootstrap intervals (BBI) in (8) are obtained by applying the block bootstrap to the residuals of the parsimonious VAR(1) model. One set of standard bootstrap intervals (denoted AR1BI) is obtained by applying the standard bootstrap to the residuals of the VAR(1);⁷ the other (denoted AR2BI) is obtained by applying the standard bootstrap to the residuals of the VAR(2). The AR2BI is theoretically correct, but not parsimonious. The AR1BI is parsimonious, but fails to account for the serial correlation in the error term. The BBI is parsimonious and fully utilizes the serial correlation.

In total we generate n + h observations⁸ of y_t. The first n observations are used for in-sample fitting and constructing the prediction intervals. Then the average coverage rate (13) is computed for the last h pseudo out-of-sample observations. That is, we evaluate whether the last h observations are inside the prediction intervals. The best method is the one that yields prediction intervals with the average coverage rate closest to the nominal level γ = 0.

Error Distributions

In theory the bootstrap intervals should be robust to non-normal distributions of the prediction errors. Following Thombs and Schucany (1990) we examine three distributions for u_t: the bivariate standard normal distribution; the bivariate exponential distribution with mean of two, which is skewed; and the bivariate mixed normal distribution 0.9N(-1, 1) + 0.1N(9, 1),

⁷ In practice one may check the adequacy of the model (no serial correlation in the error term) by applying a Breusch-Godfrey type test to the residuals.
For simplicity we skip those tests in this section and proceed as if the VAR(1) or VAR(2) is adequate.

⁸ The initial value y_1 is set to zero, the unconditional mean. The pseudo-random number generator of Matlab R2011a is used.

which is bimodal and skewed. All distributions are standardized to have zero mean and unit variance. The prediction error e_t is then generated as in (16), where A is the Cholesky decomposition of the variance-covariance matrix Ω. The parameters are set as b_1 = 1.4, b_2 = -0.48, a_1 = 1.2, a_2 = 0.6, a_3 = -0.35, ρ = 0.4, and the sample size is n = 50. By construction both y_{1,t} and y_{2,t} are stationary.

Figure 1 plots the ACR against the forecast horizon as the error distribution varies. The three panels in the first row show the ACR for the first component y_{1,n+l}; the second row shows the ACR for the second component y_{2,n+l}. There are several findings from Figure 1. First of all, the BBI (denoted by circles) in most cases has the best performance, with the ACR closest to the nominal level. The superiority of the BBI becomes more evident as the forecast horizon rises. In particular, the BBI is shown to outperform the AR2BI (denoted by squares). This fact may serve as evidence supporting the principle of parsimony.

The performance of the AR1BI (denoted by diamonds) is interesting. It has the worst performance (lowest coverage rate) for long-run forecasts, while it has the seemingly best performance (highest coverage rate) for the first-step forecast. This finding can be explained by two facts. First, the AR1BI uses the standard bootstrap, which fails to account for the serial correlation in the error term but can add more variability than the block bootstrap. Second, the serial correlation in the prediction error does not matter as much in the short run as in the long run. The performance of the AR1BI also highlights the tradeoff between preserving serial correlation and adding variability. For the first-step forecast, adding variability outweighs keeping serial correlation, so the AR1BI has a first-step coverage rate greater than the BBI's. We find that the coverage rates of all three intervals remain largely unchanged as the error distribution varies.
This verifies the robustness of the bootstrap intervals to non-normality. Finally, the ACRs of all three intervals decrease as the horizon rises, a finding consistent with Thombs and Schucany (1990).
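The evaluation machinery behind these figures, a percentile interval as in (8) scored by the indicator average in (13), can be sketched as follows (Python/NumPy; the forecast draws, the true future values, and the nominal level γ = 0.95 are illustrative assumptions, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical bootstrap forecast draws for one component at one horizon.
D = 999
draws = rng.standard_normal(D)

# Percentile interval as in (8); gamma = 0.95 is an assumed nominal level.
gamma = 0.95
lo = np.percentile(draws, 100 * (1 - gamma) / 2)
hi = np.percentile(draws, 100 * (1 + gamma) / 2)

# Coverage indicator as in (13), averaged over k toy "iterations" whose
# true values are drawn from the same distribution as the forecasts.
k = 1000
truth = rng.standard_normal(k)
acr = float(np.mean((truth >= lo) & (truth <= hi)))
```

Because the toy true values share the forecast distribution, the resulting coverage sits near the nominal level; in the paper's experiment the gap between the ACR and the nominal level is exactly what distinguishes the methods.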

Across Variable Correlation: ρ and a_2

Next we investigate the effect of varying the across-variable correlation, which is determined by ρ and a_2. The same values of b_1, b_2, a_1, a_3 and n as in Figure 1 are used, and the error follows the bivariate normal distribution. But now we let ρ = -0.4, 0.4, ..., and a_2 = -0.4, 0.4, .... The results are shown in Figure 2 (where ρ varies and a_2 = 0.4) and Figure 3 (where a_2 varies and ρ = 0.4), respectively. We find that varying ρ and a_2 has minimal effect on the coverage rate for y_{2,n+l}. This is expected because of the triangular form of the DGP, or the fact that y_{1,t} does not Granger cause y_{2,t}. Varying ρ and a_2 does affect the coverage rate for y_{1,n+l}, but the BBI still dominates in most cases.

Persistence: b_1, b_2, a_1, a_3

The persistence of the series (or the speed at which the autocorrelation decays) is determined by the parameters b_1, b_2, a_1 and a_3. Their effect on the ACR is illustrated by Figures 4, 5 and 6. We let a_2 = 0.6, ρ = 0.4, n = 50, and the error follows the bivariate normal distribution. Table 1 summarizes the values of b_1, b_2, a_1, a_3 and the corresponding characteristic roots.

Figure 4 is concerned with real characteristic roots. For example, when b_1 = 1.2, b_2 = -0.35, the characteristic roots for y_{2,t} are 0.7 and 0.5; when a_1 = 0.9, a_3 = -0.2, the characteristic roots for y_{1,t} are 0.5 and 0.4. So in this case (DGP1) both series are stationary. y_{2,t} becomes more persistent (and its autocorrelation decays more slowly) when DGP1 changes to DGP2; y_{1,t} becomes more persistent when DGP2 changes to DGP3. Figure 4 shows that the superiority of the BBI is insensitive to the change in persistence. The BBI maintains its superiority in Figure 5, where some or all characteristic roots are complex conjugates. In DGP4-6 the series are still stationary because the moduli of the characteristic roots are less than one, but now the autocorrelation function has a sinusoidal pattern.
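The characteristic-root calculations in this subsection can be checked numerically: for y_{2,t} the roots solve λ^2 - b_1 λ - b_2 = 0. The sketch below reproduces the stationary pair 0.6 and 0.8, which under this sign convention corresponds to b_1 = 1.4, b_2 = -0.48:

```python
import numpy as np

# Characteristic roots of y_t = b1*y_{t-1} + b2*y_{t-2}:
# solutions of lambda^2 - b1*lambda - b2 = 0.
b1, b2 = 1.4, -0.48
roots = np.roots([1.0, -b1, -b2])

# Stationarity requires all roots inside the unit circle.
stationary = bool(np.all(np.abs(roots) < 1))
```

The same call with complex-conjugate roots (as in DGP4-6) or a root on the unit circle (as in DGP7-9) reproduces the stationary, sinusoidal, and unit-root cases, respectively.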

Figure 6 relaxes the assumption of stationarity. In DGP7 and DGP8, y_{2,t} has one unit root and so becomes nonstationary. Moreover, y_{1,t} and y_{2,t} are cointegrated, in the terminology of Engle and Granger (1987). In DGP9, y_{1,t} is nonstationary but y_{2,t} is stationary. As shown by Figure 6, despite the nonstationarity and cointegration, the BBI still delivers the best performance in most cases.

Sample Size: n

Figure 7 demonstrates the effect of the sample size n on the coverage rate. The error follows the bivariate normal distribution, and the parameter values are the same as in Figure 1. As the sample size rises, we find that the coverage rates of all three intervals improve, moving upward toward the nominal level. Furthermore, the improvement of the BBI seems to be the most evident. When n = 150, the ACR of the BBI becomes almost flat.

Bias Correction

Figure 7 implies that the small sample size is one of the reasons why the prediction intervals tend to undercover the true future values. Another reason is the bias in the autoregressive coefficients estimated by OLS, as shown by Figure 8. In that figure the coverage rate of the BBI is compared to that of the bias-corrected block bootstrap intervals (BCBBI, denoted by squares), using the same DGP as Figure 1. It is clear that correcting the autoregressive bias leads to an increase in the coverage rate.

4. Conclusion

This paper proposes block bootstrap prediction intervals (BBI) for each component of a multivariate time series based on the parsimonious first-order vector autoregression. One characteristic of the BBI is the resampling of big blocks, or two-dimensional arrays, of residuals.

Those big blocks aim to preserve both the across-variable correlation and the serial correlation. By contrast, the standard bootstrap intervals require independent prediction errors, which amounts to a possibly complicated model, and involve redrawing small blocks, or one-dimensional arrays, of residuals. The Monte Carlo experiment indicates that the principle of parsimony can be extended to multivariate interval forecasting. The BBI is shown to outperform the standard bootstrap intervals in most cases. Remarkably, the BBI always dominates for the long-run forecast. The performance of the BBI can be enhanced by bigger sample sizes and by correcting the bias of the estimated autoregressive coefficients.

There are some noteworthy future studies. One possibility is following Kim (1999) and developing a block bootstrap prediction region (ellipsoid) based on the parsimonious VAR. The prediction region can provide a joint forecast for a multivariate series, but may incur higher computational cost. Another is considering an improved BBI for a cointegrated system that explicitly factors in the non-stationarity and the error-correcting mechanism.

References

Booth, J. G. and Hall, P. (1994). Monte Carlo approximation and the iterated bootstrap. Biometrika, 81.
Chatfield, C. (1993). Calculating interval forecasts. Journal of Business & Economic Statistics, 11.
Clements, M. P. and Hendry, D. F. (2001). Forecasting Non-Stationary Economic Time Series. The MIT Press.
Clements, M. P. and Taylor, N. (2001). Bootstrapping prediction intervals for autoregressive models. International Journal of Forecasting, 17.
De Gooijer, J. G. and Kumar, K. (1992). Some recent developments in non-linear time series modeling, testing, and forecasting. International Journal of Forecasting.
Efron, B. (1979). Bootstrap methods: another look at the jackknife. Annals of Statistics, 7.
Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap. London: Chapman and Hall.
Enders, W. (2009). Applied Econometric Time Series, 3rd edition. Wiley.
Engle, R. F. and Granger, C. W. J. (1987). Cointegration and error correction: representation, estimation and testing. Econometrica, 55.
Grigoletto, M. (1998). Bootstrap prediction intervals for autoregressions: some alternatives. International Journal of Forecasting, 14.
Hall, P. (1988). Theoretical comparison of bootstrap confidence intervals. Annals of Statistics, 16.
Hartigan, J. A. and Hartigan, P. M. (1985). The dip test of unimodality. Annals of Statistics, 13.
Kilian, L. (1998). Small-sample confidence intervals for impulse response functions. The Review of Economics and Statistics, 80.
Kim, J. (1999). Asymptotic and bootstrap prediction regions for vector autoregression. International Journal of Forecasting, 15.
Kim, J. (2001). Bootstrap-after-bootstrap prediction intervals for autoregressive models. Journal of Business & Economic Statistics, 19.
Kim, J. (2002). Bootstrap prediction intervals for autoregressive models of unknown or infinite lag order. Journal of Forecasting, 21.
Kim, J. (2004). Bias-corrected bootstrap prediction regions for vector autoregression. Journal of Forecasting, 23.
Künsch, H. R. (1989). The jackknife and the bootstrap for general stationary observations. Annals of Statistics, 17.
Li, J. (2011). Bootstrap prediction intervals for SETAR models. International Journal of Forecasting, 27.
Li, J. (2013). Block bootstrap prediction intervals for autoregression. Working Paper.
Politis, D. N. (2003). The impact of bootstrap methods on time series analysis. Statistical Science, 18.
Politis, D. N. and Romano, J. P. (1994). The stationary bootstrap. Journal of the American Statistical Association, 89.
Shaman, P. and Stine, R. A. (1988). The bias of autoregressive coefficient estimators. Journal of the American Statistical Association, 83.
Sims, C. (1980). Macroeconomics and reality. Econometrica, 48.
Stine, R. A. (1987). Estimating properties of autoregressive forecasts. Journal of the American Statistical Association, 82.
Thombs, L. A. and Schucany, W. R. (1990). Bootstrap prediction intervals for autoregression. Journal of the American Statistical Association, 85.

Table 1: Parameter Values and Characteristic Roots in Figures 4, 5, 6

(Columns: b_1, b_2, a_1, a_3 for each DGP, with the characteristic roots λ_1, λ_2 of y_{2,t} and of y_{1,t}; rows DGP1 through DGP9. The data generating process is (14) and (15) with a_2 = 0.6, ρ = 0.4, n = 50, u_t ~ i.i.d. N(0, 1).)

Figure 1: Error Distributions. Panels (top row: y_{1,n+l}; bottom row: y_{2,n+l}) show the ACR under the Normal, Exponential, and Mixed Normal distributions; lines show BBI, AR2BI, AR1BI. Note: the data generating process is (14) and (15) with b_1 = 1.4, b_2 = -0.48, a_1 = 1.2, a_2 = 0.6, a_3 = -0.35, ρ = 0.4, n = 50.

Figure 2: Across Variable Correlation: ρ. Panels show the ACR for each component as ρ varies; lines show BBI, AR2BI, AR1BI. Note: the data generating process is (14) and (15) with b_1 = 1.4, b_2 = -0.48, a_1 = 1.2, a_2 = 0.6, a_3 = -0.35, n = 50, and u_t ~ i.i.d. N(0, 1).

Figure 3: Across Variable Correlation: a_2. Panels show the ACR for each component as a_2 varies; lines show BBI, AR2BI, AR1BI. Note: the data generating process is (14) and (15) with b_1 = 1.4, b_2 = -0.48, a_1 = 1.2, a_3 = -0.35, ρ = 0.4, n = 50, and u_t ~ i.i.d. N(0, 1).

Figure 4: Serial Correlation: Real Characteristic Roots
[Panels compare BBI, AR2BI, and AR1BI for DGP1, DGP2, and DGP3; the plotted values were lost in extraction.]
Note: the data generating processes are given in Table 1.

Figure 5: Serial Correlation: Complex Characteristic Roots
[Panels compare BBI, AR2BI, and AR1BI for DGP4, DGP5, and DGP6; the plotted values were lost in extraction.]
Note: the data generating processes are given in Table 1.

Figure 6: Serial Correlation: Nonstationarity and Cointegration
[Panels compare BBI, AR2BI, and AR1BI for DGP7, DGP8, and DGP9; the plotted values were lost in extraction.]
Note: the data generating processes are given in Table 1.

Figure 7: Sample Size
[Panels compare BBI, AR2BI, and AR1BI for n = 50, n = 100, and n = 200; the plotted values were lost in extraction.]
Note: the data generating process is (14) and (15) with b1 = 1.4, b2 = 0.48, a1 = 1.2, a2 = 0.6, a3 = 0.35, ρ = 0.4, and u_t ~ i.i.d. N(0, 1).

Figure 8: Bias Correction
[Panels compare BBI and BCBBI under normal, exponential, and mixed normal error distributions; the plotted values were lost in extraction.]
Note: the data generating process is (14) and (15) with b1 = 1.4, b2 = 0.48, a1 = 1.2, a2 = 0.6, a3 = 0.35, ρ = 0.4, n = [value lost].


More information

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s Massachusetts Institute of Technology Department of Economics Time Series 14.384 Guido Kuersteiner Lecture 6: Additional Results for VAR s 6.1. Confidence Intervals for Impulse Response Functions There

More information

1. Introduction. Hang Qian 1 Iowa State University

1. Introduction. Hang Qian 1 Iowa State University Users Guide to the VARDAS Package Hang Qian 1 Iowa State University 1. Introduction The Vector Autoregression (VAR) model is widely used in macroeconomics. However, macroeconomic data are not always observed

More information

Economics 536 Lecture 7. Introduction to Specification Testing in Dynamic Econometric Models

Economics 536 Lecture 7. Introduction to Specification Testing in Dynamic Econometric Models University of Illinois Fall 2016 Department of Economics Roger Koenker Economics 536 Lecture 7 Introduction to Specification Testing in Dynamic Econometric Models In this lecture I want to briefly describe

More information

Modified Variance Ratio Test for Autocorrelation in the Presence of Heteroskedasticity

Modified Variance Ratio Test for Autocorrelation in the Presence of Heteroskedasticity The Lahore Journal of Economics 23 : 1 (Summer 2018): pp. 1 19 Modified Variance Ratio Test for Autocorrelation in the Presence of Heteroskedasticity Sohail Chand * and Nuzhat Aftab ** Abstract Given that

More information

Nonstationary Time Series:

Nonstationary Time Series: Nonstationary Time Series: Unit Roots Egon Zakrajšek Division of Monetary Affairs Federal Reserve Board Summer School in Financial Mathematics Faculty of Mathematics & Physics University of Ljubljana September

More information

Lecture 6a: Unit Root and ARIMA Models

Lecture 6a: Unit Root and ARIMA Models Lecture 6a: Unit Root and ARIMA Models 1 2 Big Picture A time series is non-stationary if it contains a unit root unit root nonstationary The reverse is not true. For example, y t = cos(t) + u t has no

More information

LESLIE GODFREY LIST OF PUBLICATIONS

LESLIE GODFREY LIST OF PUBLICATIONS LESLIE GODFREY LIST OF PUBLICATIONS This list is in two parts. First, there is a set of selected publications for the period 1971-1996. Second, there are details of more recent outputs. SELECTED PUBLICATIONS,

More information