Bootstrap prediction intervals for factor models


Bootstrap prediction intervals for factor models. Sílvia Gonçalves, Benoit Perron, Antoine Djogbenou. Série Scientifique / Scientific Series.

Montréal, Avril/April. © Sílvia Gonçalves, Benoit Perron, Antoine Djogbenou. All rights reserved. Short sections may be quoted without explicit permission, if full credit, including notice, is given to the source.

CIRANO is a private non-profit organization incorporated under the Quebec Companies Act. Its infrastructure and research activities are funded through fees paid by member organizations, an infrastructure grant from the ministère de l'Économie, de l'Innovation et des Exportations, and grants and research mandates obtained by its research teams.

CIRANO partners / Les partenaires du CIRANO

Corporate partners: Autorité des marchés financiers, Banque de développement du Canada, Banque du Canada, Banque Laurentienne du Canada, Banque Nationale du Canada, Bell Canada, BMO Groupe financier, Caisse de dépôt et placement du Québec, Fédération des caisses Desjardins du Québec, Financière Sun Life Québec, Gaz Métro, Hydro-Québec, Industrie Canada, Intact, Investissements PSP, Ministère de l'Économie, de l'Innovation et des Exportations, Ministère des Finances du Québec, Power Corporation du Canada, Rio Tinto, Ville de Montréal.

University partners: École Polytechnique de Montréal, École de technologie supérieure (ÉTS), HEC Montréal, Institut national de la recherche scientifique (INRS), McGill University, Université Concordia, Université de Montréal, Université de Sherbrooke, Université du Québec, Université du Québec à Montréal, Université Laval.

CIRANO collaborates with many university research centres and chairs; the list is available on its website. The Scientific Series papers aim to make results of research carried out at CIRANO accessible, in order to encourage discussion and comment; they are written in the style of scientific publications. The observations and viewpoints expressed are the sole responsibility of the authors; they do not necessarily represent positions of CIRANO or its partners.

ISSN (online version)

Bootstrap prediction intervals for factor models*

Sílvia Gonçalves, Benoit Perron, Antoine Djogbenou

Résumé/Abstract

We propose bootstrap prediction intervals for an observation h periods into the future and its conditional mean. We assume that these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat these factors as latent, our forecasts depend both on estimated factors and estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under Gaussianity of the innovations. The bootstrap allows us to relax this assumption and to construct valid prediction intervals under more general conditions. Moreover, even under Gaussianity, the bootstrap leads to more accurate intervals in cases where the cross-sectional dimension is relatively small, as it reduces the bias of the OLS estimator, as shown in a recent paper by Gonçalves and Perron (2014).

Mots clés/Keywords: factor model, bootstrap, forecast, conditional mean

* We are grateful for comments from Lutz Kilian and seminar participants at the University of California, San Diego, the University of Southern California, Queen's, Pompeu Fabra, the Toulouse School of Economics, Duke, Western, Sherbrooke, and Michigan, as well as from participants at the (EC)^2 meeting, the 2013 North American Winter Meeting of the Econometric Society, the 2013 meeting of the Canadian Economics Association, the 2013 Joint Statistical Meetings, the conference "Bootstrap Methods for Time Series" in Copenhagen, the MAESG conference in Atlanta, and the Canadian Econometric Study Group. We also thank Tatevik Sekhposyan for providing the data for the empirical application. Gonçalves acknowledges financial support from the NSERC and MITACS, whereas Perron acknowledges financial support from the SSHRC and MITACS.

Département de sciences économiques, CIREQ and CIRANO, Université de Montréal (all three authors).

1 Introduction

Forecasting using factor-augmented regression models has become increasingly popular since the seminal paper of Stock and Watson (2002). The main idea underlying the so-called diffusion index forecasts is that when forecasting a given variable of interest, a large number of predictors can be summarized by a small number of indexes when the data follow an approximate factor model. The indexes are the latent factors driving the panel factor model and can be estimated by principal components. Point forecasts can be obtained by running a standard OLS regression augmented with the estimated factors.

In this paper, we consider the construction of prediction intervals in factor-augmented regression models using the bootstrap. In particular, our main contribution is to show the consistency of bootstrap intervals for a future target variable and its conditional mean. Our results allow for the construction of bootstrap prediction intervals without assuming Gaussianity and with better finite-sample properties than those based on asymptotic theory.

To be more specific, suppose that y_{t+h} denotes the variable to be forecast, where h is the forecast horizon, and let X_t be an N-dimensional vector of candidate predictors. We assume that y_{t+h} follows a factor-augmented regression model,

    y_{t+h} = α′F_t + β′W_t + ε_{t+h},   t = 1, ..., T − h,

where W_t is a vector of observed regressors, including for instance lags of y_t, which jointly with F_t help forecast y_{t+h}. The r-dimensional vector F_t describes the common latent factors in the panel factor model,

    X_{it} = λ_i′F_t + e_{it},   i = 1, ..., N,   t = 1, ..., T,

where the r × 1 vector λ_i contains the factor loadings and e_{it} is an idiosyncratic error term. The goal is to forecast y_{T+h} or its conditional mean y_{T+h|T} = α′F_T + β′W_T using {y_t, X_t, W_t : t = 1, ..., T}, the available data at time T.

Since the factors are not observed, the diffusion index forecast approach typically involves a two-step procedure: in the first step we estimate F_t by principal components, yielding F̃_t, and in the second step we regress y_{t+h} on W_t and F̃_t to obtain the regression coefficients. The point forecast is then constructed as ŷ_{T+h|T} = α̂′F̃_T + β̂′W_T. Because we treat the factors as latent, point forecasts depend both on estimated factors and estimated regression coefficients. These two sources of parameter uncertainty must be accounted for when constructing prediction intervals and confidence intervals, as shown by Bai and Ng (2006). Under regularity conditions, Bai and Ng (2006) derived the asymptotic distribution of the regression estimates and the corresponding forecast errors and proposed the construction of asymptotic intervals.

Our motivation for using the bootstrap as an alternative method of inference is twofold. First, the finite sample properties of the asymptotic approach of Bai and Ng (2006) can be poor, especially if N is not sufficiently large relative to T. This was recently shown by Gonçalves and Perron (2014) in the context of confidence intervals for the regression coefficients, and as we will show below, the same is true in the context of prediction intervals. In particular, estimation of the factors leads to an asymptotic bias term in the OLS estimator if √T/N → c and c ≠ 0. Gonçalves and Perron (2014) proposed a bootstrap method that removes this bias and

outperforms the asymptotic approach of Bai and Ng (2006). Second, the bootstrap allows for the construction of prediction intervals for y_{T+h} that are consistent under more general assumptions than the asymptotic approach of Bai and Ng (2006). In particular, the bootstrap does not require the Gaussianity assumption on the regression errors that justifies the asymptotic prediction intervals of Bai and Ng (2006). As our simulations show, prediction intervals based on the Gaussianity assumption perform poorly when the regression error is asymmetrically distributed, whereas the bootstrap prediction intervals do not suffer significant size distortions.

We apply our procedure to forecasting inflation changes using quarterly observations on the US GDP deflator. The resulting bootstrap intervals differ in interesting ways from the asymptotic ones in specific periods. In particular, the 95% equal-tailed percentile-t bootstrap intervals are shifted downwards and lie entirely below zero following the financial crisis and during the last quarter of the sample. These periods were marked by a significant concern of deflation. Our intervals are more consistent with such concerns than the asymptotic ones, which include some probability of increasing inflation.

The remainder of the paper is organized as follows. Section 2 introduces our forecasting model and considers asymptotic prediction intervals. Section 3 describes two bootstrap prediction algorithms. Section 4 presents a set of high level assumptions on the bootstrap idiosyncratic errors under which the bootstrap distribution of the estimated factors at a given time period is consistent for the distribution of the sample estimated factors. These results, together with the results of Gonçalves and Perron (2014) and Djogbenou, Gonçalves, and Perron regarding inference on the coefficients, are used in Section 5 to show the asymptotic validity of wild bootstrap prediction intervals. Section 6 presents our simulation experiments, while Section 7 presents an empirical illustration of our methods. Finally, Section 8 concludes. Mathematical proofs appear in the Appendix.

2 Prediction intervals based on asymptotic theory

This section introduces our assumptions and reviews the asymptotic theory-based prediction intervals proposed by Bai and Ng (2006).

2.1 Assumptions

Let z_t = (F_t′, W_t′)′, where z_t is p × 1, with p = r + q. Following Bai and Ng (2006), we make the following assumptions.

Assumption 1
(a) E‖F_t‖^4 ≤ M and T^{-1} Σ_{t=1}^{T} F_t F_t′ →_P Σ_F > 0, where Σ_F is a non-random r × r matrix.
(b) The loadings λ_i are either deterministic such that ‖λ_i‖ ≤ M, or stochastic such that E‖λ_i‖^4 ≤ M. In either case, Λ′Λ/N →_P Σ_Λ > 0, where Σ_Λ is a non-random matrix.
(c) The eigenvalues of the r × r matrix (Σ_Λ Σ_F) are distinct.

Assumption 2
(a) E(e_{it}) = 0, E|e_{it}|^8 ≤ M.
(b) E(e_{it} e_{js}) = σ_{ij,ts}, with |σ_{ij,ts}| ≤ σ̄_{ij} for all (t, s) and |σ_{ij,ts}| ≤ τ_{ts} for all (i, j). Furthermore, Σ_{s=1}^{T} τ_{ts} ≤ M for each t, and (NT)^{-1} Σ_{t,s,i,j} |σ_{ij,ts}| ≤ M.
(c) For every (t, s), E|N^{-1/2} Σ_{i=1}^{N} (e_{it} e_{is} − E(e_{it} e_{is}))|^4 ≤ M.
(d) T^{-2} N^{-1} Σ_{t,s,l,u} Σ_{i,j} |Cov(e_{it} e_{is}, e_{jl} e_{ju})| ≤ M < ∞.

(e) For each t, N^{-1/2} Σ_{i=1}^{N} λ_i e_{it} →_d N(0, Γ_t), where Γ_t ≡ lim_{N→∞} Var(N^{-1/2} Σ_{i=1}^{N} λ_i e_{it}) > 0.

Assumption 3 The variables {λ_i}, {F_t} and {e_{it}} are three mutually independent groups. Dependence within each group is allowed.

Assumption 4
(a) E(ε_{t+h}) = 0 and E|ε_{t+h}|^4 < M.
(b) E(ε_{t+h} | y_t, z_t, y_{t−1}, z_{t−1}, ...) = 0 for any h > 0, and (z_t, ε_t) are independent of the idiosyncratic errors e_{is} for all (i, s, t).
(c) E‖z_t‖^4 ≤ M and T^{-1} Σ_{t=1}^{T} z_t z_t′ →_P Σ_{zz} > 0.
(d) As T → ∞, T^{-1/2} Σ_{t=1}^{T−h} z_t ε_{t+h} →_d N(0, Ω), where Ω ≡ lim_{T→∞} Var(T^{-1/2} Σ_{t=1}^{T−h} z_t ε_{t+h}) > 0 and E‖T^{-1/2} Σ_{t=1}^{T−h} z_t ε_{t+h}‖^2 < M.

Assumptions 1 and 2 are standard in the approximate factor literature, allowing in particular for weak cross-sectional and serial dependence in e_{it} of unknown form. Assumption 3 assumes independence among the factors, the factor loadings and the idiosyncratic error terms. We could allow for weak dependence among these three groups of variables at the cost of introducing restrictions on this dependence. Assumption 4 imposes moment conditions on {ε_{t+h}}, on {z_t} and on the score vector {z_t ε_{t+h}}. Part (c) requires {z_t z_t′} to satisfy a law of large numbers. Part (d) requires the score to satisfy a central limit theorem, where Ω denotes the limiting variance of the scaled average of the scores. We generalize the form of the covariance matrix assumed in Bai and Ng (2006) to allow for serial correlation, as this will generally be the case when the forecast horizon h is greater than 1.
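Before turning to the interval constructions, the principal-components estimator that underlies the diffusion index approach can be sketched in a few lines; the single-factor simulated design below is purely illustrative and not the paper's simulation setup.

```python
import numpy as np

def estimate_factors(X, r):
    """Principal-components estimation of an approximate factor model.

    X is a T x N panel. Returns F_tilde (T x r), equal to sqrt(T) times the
    eigenvectors of XX'/(TN) for the r largest eigenvalues (so that
    F_tilde' F_tilde / T = I_r), and the loadings Lambda_tilde = X'F_tilde/T.
    """
    T, N = X.shape
    eigval, eigvec = np.linalg.eigh(X @ X.T / (N * T))
    order = np.argsort(eigval)[::-1][:r]        # largest eigenvalues first
    F_tilde = np.sqrt(T) * eigvec[:, order]
    Lambda_tilde = X.T @ F_tilde / T
    return F_tilde, Lambda_tilde

# Illustrative data: one factor, moderate idiosyncratic noise
rng = np.random.default_rng(0)
T, N, r = 200, 100, 1
F = rng.standard_normal((T, r))
Lam = rng.standard_normal((N, r))
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))

F_tilde, Lambda_tilde = estimate_factors(X, r)
print(np.allclose(F_tilde.T @ F_tilde / T, np.eye(r)))  # True
```

Because factors are identified only up to rotation, F_tilde estimates a rotation HF_t of the true factor; with one factor this is just a sign and scale, so the estimate is highly correlated (in absolute value) with F.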

2.2 Normal-theory intervals

As described in Section 1, the diffusion index forecasts are based on a two-step estimation procedure. The first step consists of extracting the common factors F_t from the N-dimensional panel X_t. In particular, given X, we estimate F and Λ with the method of principal components. F is estimated with the T × r matrix F̃ = (F̃_1, ..., F̃_T)′ composed of √T times the eigenvectors corresponding to the r largest eigenvalues of XX′/(TN), arranged in decreasing order, where the normalization F̃′F̃/T = I_r is used. The matrix containing the estimated loadings is then Λ̃ = (λ̃_1, ..., λ̃_N)′ = X′F̃(F̃′F̃)^{-1} = X′F̃/T.

In the second step, we run an OLS regression of y_{t+h} on ẑ_t = (F̃_t′, W_t′)′, i.e. we compute

    δ̂ = (α̂′, β̂′)′ = (Σ_{t=1}^{T−h} ẑ_t ẑ_t′)^{-1} Σ_{t=1}^{T−h} ẑ_t y_{t+h},    (3)

where δ̂ is p × 1 with p = r + q. Suppose the object of interest is y_{T+h|T} = α′F_T + β′W_T, the conditional mean of y_{T+h} = α′F_T + β′W_T + ε_{T+h} at time T. The point forecast is ŷ_{T+h|T} = α̂′F̃_T + β̂′W_T and the forecast error is given by

    ŷ_{T+h|T} − y_{T+h|T} = ẑ_T′(δ̂ − δ) + α′H^{-1}(F̃_T − HF_T),    (4)

where δ ≡ (α′H^{-1}, β′)′ is the probability limit of δ̂. The matrix H is defined as

    H = Ṽ^{-1} (F̃′F/T)(Λ′Λ/N),    (5)

where Ṽ is the r × r diagonal matrix containing on the main diagonal the r largest eigenvalues of XX′/(NT), in decreasing order (cf. Bai, 2003). It arises because factor models are only identified up to rotation, implying that the principal component estimator F̃_t converges to HF_t, and the OLS estimator α̂ converges to (H^{-1})′α. It must be noted that forecasts do not depend on this rotation, since the product α̂′F̃_T is uniquely identified.

The above decomposition shows that the asymptotic distribution of the forecast error depends on two sources of uncertainty: the first is the usual parameter estimation uncertainty associated with estimation of α and β, and the second is the factors estimation uncertainty. Under Assumptions 1-4, and assuming that √T/N → 0 and √N/T → 0 as N, T → ∞, Bai and Ng (2006) show that the studentized forecast error satisfies

    (ŷ_{T+h|T} − y_{T+h|T}) / √(B̂_T) →_d N(0, 1),    (6)

where B̂_T is a consistent estimator of the asymptotic variance of ŷ_{T+h|T} given by

    B̂_T = Var(ŷ_{T+h|T}) = (1/T) ẑ_T′ Σ̂_δ ẑ_T + (1/N) α̂′ Σ̂_F̃ α̂.    (7)

Here, Σ̂_δ consistently estimates Σ_δ = lim Var(√T(δ̂ − δ)) and Σ̂_F̃ consistently estimates Σ_F̃ = lim Var(√N(F̃_T − HF_T)). In particular, under Assumptions 1-4,

    Σ̂_δ = (T^{-1} Σ_{t=1}^{T−h} ẑ_t ẑ_t′)^{-1} Ω̂ (T^{-1} Σ_{t=1}^{T−h} ẑ_t ẑ_t′)^{-1},    (8)

where Ω̂ is a heteroskedasticity and autocorrelation consistent (HAC) estimator of Ω = lim_{T→∞} Var(T^{-1/2} Σ_{t=1}^{T−h} z_t ε_{t+h}), and

    Σ̂_F̃ = Ṽ^{-1} Γ̃_T Ṽ^{-1},    (9)

where Γ̃_T is an estimator of Γ_T = lim_{N→∞} Var(N^{-1/2} Σ_{i=1}^{N} λ_i e_{iT}), which depends on the cross-sectional dependence and heterogeneity properties of e_{iT}. Bai and Ng (2006) provide three different estimators of Γ_T. Section 5 below considers one such estimator.

The central limit theorem result in (6) justifies the construction of an asymptotic 100(1 − α)% level confidence interval for y_{T+h|T} given by

    (ŷ_{T+h|T} − z_{α/2} √(B̂_T), ŷ_{T+h|T} + z_{α/2} √(B̂_T)),    (10)

where z_{α/2} is the 1 − α/2 quantile of a standard normal distribution. When the object of interest is a prediction interval for y_{T+h}, Bai and Ng (2006) propose

    (ŷ_{T+h|T} − z_{α/2} √(Ĉ_T), ŷ_{T+h|T} + z_{α/2} √(Ĉ_T)),    (11)

where Ĉ_T = B̂_T + σ̂²_ε, with B̂_T as above and σ̂²_ε = (T − h)^{-1} Σ_{t=1}^{T−h} ε̂²_{t+h}. The validity of (11) depends on the additional assumption that ε_t is i.i.d. N(0, σ²_ε).

An important condition that justifies (10) and (11) is that √T/N → 0. This condition ensures that the term reflecting the parameter estimation uncertainty in the forecast error decomposition (4), √T(δ̂ − δ), is asymptotically normal with a mean of zero and a variance-covariance matrix that does not depend on the factors estimation uncertainty. As was recently shown by Gonçalves and Perron (2014), when √T/N → c ≠ 0,

    √T(δ̂ − δ) →_d N(−c Δ_δ, Σ_δ),

where Δ_δ is a bias term that reflects the contribution of the factors estimation error to the asymptotic distribution of the regression estimates δ̂. In this case, the two terms in (4) will depend on the factors estimation uncertainty, and a natural question is whether this will have an effect on the prediction intervals (10) and (11) derived by Bai and Ng (2006) under the assumption that c = 0. As we argue next, these intervals remain valid even when c ≠ 0. The main reason is that when √T/N → c ≠ 0, the ratio T/N → ∞, which implies that the parameter estimation uncertainty associated with δ̂ is dominated asymptotically by the uncertainty from having to estimate F. More formally, when √T/N → c ≠ 0, N/T → 0 and the convergence rate of ŷ_{T+h|T} is √N, implying that

    √N(ŷ_{T+h|T} − y_{T+h|T}) = √(N/T) √T(δ̂ − δ)′ẑ_T + α′H^{-1} √N(F̃_T − HF_T) = α′H^{-1} √N(F̃_T − HF_T) + o_P(1).

Thus, the forecast error √N(ŷ_{T+h|T} − y_{T+h|T}) is asymptotically N(0, α′H^{-1} Σ_F̃ (H^{-1})′α). Since N B̂_T = (N/T) ẑ_T′ Σ̂_δ ẑ_T + α̂′ Σ̂_F̃ α̂ = α′H^{-1} Σ_F̃ (H^{-1})′α + o_P(1), the studentized forecast error given in (6) is still N(0, 1), as N, T → ∞. For the studentized forecast error associated with forecasting y_{T+h}, the variance of the forecast error is asymptotically (as N, T → ∞) dominated by the variance of the error term, σ²_ε, implying that neither the parameter estimation uncertainty nor the factors estimation uncertainty contribute to the asymptotic variance.

3 Description of bootstrap intervals

Following Gonçalves and Perron (2014), we consider the following bootstrap data-generating process:

    X*_t = Λ̃ F̃_t + e*_t,    (12)
    y*_{t+h} = α̂′F̃_t + β̂′W_t + ε*_{t+h},    (13)

where e*_t = (e*_{1t}, ..., e*_{Nt})′ denotes a bootstrap sample from {ẽ_t = X_t − Λ̃F̃_t} and {ε*_{t+h}} is a resampled version of {ε̂_{t+h} = y_{t+h} − α̂′F̃_t − β̂′W_t}.

Our goal in this section is to describe two general bootstrap algorithms that can be used to compute intervals for y_{T+h|T} and y_{T+h} for any choice of {e*_t} and {ε*_{t+h}}. The specific method of generating {e*_t} and {ε*_{t+h}} will depend on the assumptions we make on {e_{it}} and {ε_{t+h}}, respectively. In Section 5 we describe several methods. For example, we rely on the wild bootstrap to generate both {e*_t} and {ε*_{t+1}} when constructing confidence intervals for y_{T+1|T}. The wild bootstrap is justified in this setting since we assume away cross-sectional dependence in e_{it} and we assume that ε_{t+1} is a m.d.s. when h = 1. For one-step-ahead prediction intervals we strengthen the m.d.s. assumption to an i.i.d. assumption on ε_{t+1}, and therefore we generate ε*_{t+1} using the i.i.d. bootstrap. For multi-step prediction intervals, we generate ε*_{t+h} with either the block wild bootstrap or the dependent wild bootstrap of Djogbenou et al. to account for possible serial correlation.
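A minimal sketch of this bootstrap data-generating process, using wild-bootstrap (Rademacher) draws for both the idiosyncratic and the regression errors. The tiny simulated sample, and the use of the true factors and loadings in place of principal-component estimates, are illustrative shortcuts on our part.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, r, h = 120, 60, 1, 1

# Illustrative sample (not the paper's design)
F = rng.standard_normal((T, r))
Lam = rng.standard_normal((N, r))
X = F @ Lam.T + rng.standard_normal((T, N))
W = np.ones((T, 1))                                  # intercept only
y = F[:, 0] + rng.standard_normal(T)

# For brevity, stand in the true factors/loadings for the PC estimates
F_tilde, Lambda_tilde = F, Lam
Z_hat = np.hstack([F_tilde[: T - h], W[: T - h]])    # z_hat_t = (F_t', W_t')'
y_lead = y[h:]                                       # y_{t+h}, t = 1, ..., T-h
delta_hat = np.linalg.lstsq(Z_hat, y_lead, rcond=None)[0]

# Residuals from the two equations of the DGP
e_tilde = X - F_tilde @ Lambda_tilde.T               # idiosyncratic residuals
eps_hat = y_lead - Z_hat @ delta_hat                 # regression residuals

# Wild bootstrap: e*_it = e_tilde_it * eta_it, eps*_{t+h} = eps_hat * v
eta = rng.choice([-1.0, 1.0], size=(T, N))           # i.i.d.(0,1) draws
v = rng.choice([-1.0, 1.0], size=T - h)
X_star = F_tilde @ Lambda_tilde.T + e_tilde * eta
y_star = Z_hat @ delta_hat + eps_hat * v
```

With Rademacher draws, the bootstrap errors keep the magnitude of each residual and randomize only its sign, which preserves (conditional) heteroskedasticity across i and t.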

We estimate the factors by the method of principal components using the bootstrap panel data set {X*_t : t = 1, ..., T}. We let F̃* = (F̃*_1, ..., F̃*_T)′ denote the T × r matrix of bootstrap estimated factors, which equals √T times the eigenvectors of X*X*′/(TN) corresponding to its r largest eigenvalues. The N × r matrix of estimated bootstrap loadings is given by Λ̃* = (λ̃*_1, ..., λ̃*_N)′ = X*′F̃*/T. We then run a regression of y*_{t+h} on F̃*_t and W_t using observations t = 1, ..., T − h. We let δ̂* denote the corresponding OLS estimator,

    δ̂* = (Σ_{t=1}^{T−h} ẑ*_t ẑ*_t′)^{-1} Σ_{t=1}^{T−h} ẑ*_t y*_{t+h},    (14)

where ẑ*_t = (F̃*_t′, W_t′)′. The steps for obtaining a bootstrap confidence interval for y_{T+h|T} are as follows.

Algorithm 1 (Bootstrap confidence interval for y_{T+h|T})

1. For t = 1, ..., T, generate

    X*_t = Λ̃ F̃_t + e*_t,

where {e*_{it}} is a resampled version of {ẽ_{it} = X_{it} − λ̃_i′F̃_t}.

2. Estimate the bootstrap factors {F̃*_t : t = 1, ..., T} using X*.

3. For t = 1, ..., T − h, generate

    y*_{t+h} = α̂′F̃_t + β̂′W_t + ε*_{t+h},

where the error term ε*_{t+h} is a resampled version of ε̂_{t+h}.

4. Regress y*_{t+h} generated in step 3 on the bootstrap estimated factors F̃*_t obtained in step 2 and on the fixed regressors W_t, and obtain the OLS estimator δ̂* = (α̂*′, β̂*′)′.

5. Obtain the bootstrap forecast ŷ*_{T+h|T} = α̂*′F̃*_T + β̂*′W_T = δ̂*′ẑ*_T and the bootstrap variance

    B̂*_T = (1/T) ẑ*_T′ Σ̂*_δ ẑ*_T + (1/N) α̂*′ Σ̂*_F̃ α̂*,

where the choice of Σ̂*_δ and Σ̂*_F̃ depends on the properties of ε*_{t+h} and e*_{it}.

6. Let y*_{T+h|T} = α̂′F̃_T + β̂′W_T and compute bootstrap prediction errors:
(a) For equal-tailed percentile-t bootstrap intervals, compute studentized bootstrap prediction errors as

    s*_{T+h|T} = (ŷ*_{T+h|T} − y*_{T+h|T}) / √(B̂*_T).

(b) For symmetric percentile-t bootstrap intervals, compute |s*_{T+h|T}|.

7. Repeat this process B times, resulting in statistics {s*_{T+h|T,1}, ..., s*_{T+h|T,B}} and {|s*_{T+h|T,1}|, ..., |s*_{T+h|T,B}|}.

8. Compute the corresponding empirical quantiles:
(a) For equal-tailed percentile-t bootstrap intervals, q*_α is the empirical α quantile of {s*_{T+h|T,1}, ..., s*_{T+h|T,B}}.
(b) For symmetric percentile-t bootstrap intervals, q*_{1−α,|s|} is the empirical 1 − α quantile of {|s*_{T+h|T,1}|, ..., |s*_{T+h|T,B}|}.
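Given the B studentized bootstrap draws, the quantile steps above and the resulting equal-tailed and symmetric intervals reduce to a few lines; the draws below are placeholders rather than output of the algorithm.

```python
import numpy as np

def percentile_t_intervals(y_hat, B_hat, s_star, alpha=0.05):
    """Equal-tailed and symmetric percentile-t intervals from bootstrap draws.

    y_hat  : point forecast
    B_hat  : variance estimate for y_hat (the analogue of B_T)
    s_star : array of B studentized bootstrap prediction errors
    """
    se = np.sqrt(B_hat)
    # Equal-tailed: lower endpoint uses the upper quantile, and vice versa
    q_lo, q_hi = np.quantile(s_star, [alpha / 2, 1 - alpha / 2])
    equal_tailed = (y_hat - q_hi * se, y_hat - q_lo * se)
    # Symmetric: one quantile of the absolute studentized draws
    q_sym = np.quantile(np.abs(s_star), 1 - alpha)
    symmetric = (y_hat - q_sym * se, y_hat + q_sym * se)
    return equal_tailed, symmetric

rng = np.random.default_rng(2)
s_star = rng.standard_normal(9999)          # placeholder bootstrap draws
(lo_eq, hi_eq), (lo_sy, hi_sy) = percentile_t_intervals(2.0, 0.25, s_star)
```

With symmetric placeholder draws both intervals are close to the normal-theory interval; in practice the draws come from step 7 and can be asymmetric, which is exactly when the equal-tailed interval departs from the asymptotic one.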

A 100(1 − α)% equal-tailed percentile-t bootstrap interval for y_{T+h|T} is given by

    EQ_α(y_{T+h|T}) = (ŷ_{T+h|T} − q*_{1−α/2} √(B̂_T), ŷ_{T+h|T} − q*_{α/2} √(B̂_T)),    (15)

whereas a 100(1 − α)% symmetric percentile-t bootstrap interval for y_{T+h|T} is given by

    SY_α(y_{T+h|T}) = (ŷ_{T+h|T} − q*_{1−α,|s|} √(B̂_T), ŷ_{T+h|T} + q*_{1−α,|s|} √(B̂_T)).    (16)

When prediction intervals for a new observation y_{T+h} are the object of interest, the algorithm reads as follows.

Algorithm 2 (Bootstrap prediction interval for y_{T+h})

1. Identical to Algorithm 1.

2. Identical to Algorithm 1.

3. Generate {y*_{1+h}, ..., y*_T, y*_{T+1}, ..., y*_{T+h}} using

    y*_{t+h} = α̂′F̃_t + β̂′W_t + ε*_{t+h},

where {ε*_{1+h}, ..., ε*_T, ε*_{T+1}, ..., ε*_{T+h}} is a bootstrap sample obtained from {ε̂_{1+h}, ..., ε̂_T}.

4. Not making use of the stretch {y*_{T+1}, ..., y*_{T+h}}, compute δ̂* as in Algorithm 1.

5. Obtain the bootstrap point forecast ŷ*_{T+h|T} as in Algorithm 1, but compute its variance as

    Ĉ*_T = B̂*_T + σ̂*²_ε,

where σ̂*²_ε is a consistent estimator of σ*²_ε = Var*(ε*_{T+h}) and B̂*_T is as in Algorithm 1.

6. Let y*_{T+h} = α̂′F̃_T + β̂′W_T + ε*_{T+h} and compute bootstrap prediction errors:
(a) For equal-tailed percentile-t bootstrap intervals, compute studentized bootstrap prediction errors as

    s*_{T+h} = (ŷ*_{T+h|T} − y*_{T+h}) / √(Ĉ*_T).

(b) For symmetric percentile-t bootstrap intervals, compute |s*_{T+h}|.

7. Identical to Algorithm 1.

8. Identical to Algorithm 1.

A 100(1 − α)% equal-tailed percentile-t bootstrap interval for y_{T+h} is given by

    EQ_α(y_{T+h}) = (ŷ_{T+h|T} − q*_{1−α/2} √(Ĉ_T), ŷ_{T+h|T} − q*_{α/2} √(Ĉ_T)),    (17)

whereas a 100(1 − α)% symmetric percentile-t bootstrap interval for y_{T+h} is given by

    SY_α(y_{T+h}) = (ŷ_{T+h|T} − q*_{1−α,|s|} √(Ĉ_T), ŷ_{T+h|T} + q*_{1−α,|s|} √(Ĉ_T)).    (18)

The main difference between the two algorithms is that in step 3 of Algorithm 2 we generate observations for y*_{t+h} for t = 1, ..., T instead of stopping at t = T − h. This allows us to obtain a bootstrap observation for y*_{T+h}, the bootstrap analogue of y_{T+h}, which we use in constructing the studentized statistic s*_{T+h} in step 6 of Algorithm 2. The point forecast is identical to Algorithm 1 and relies only on observations for t = 1, ..., T − h, but the bootstrap variance Ĉ*_T contains an extra term σ̂*²_ε that reflects the uncertainty associated with the error of the new observation, ε*_{T+h}.

Note that Algorithm 2 generates bootstrap point forecasts ŷ*_{T+h|T} and bootstrap future observations y*_{T+h} that are conditional on W_T. This is important because the point forecast ŷ_{T+h|T} depends on W_T. When W_t contains lagged dependent variables (e.g. W_t = y_t and h = 1), steps 5 and 6 of Algorithm 2 set W*_T = y_T when computing ŷ*_{T+1|T} and y*_{T+1}. This is effectively equivalent to setting y*_T = y_T for the purposes of computing these quantities. However, step 3 of Algorithm 2 generates observations on {y*_{t+1} : t = 1, ..., T} that do not necessarily satisfy the requirement that y*_T = y_T. As recently discussed by Pan and Politis, we can account for parameter estimation uncertainty in predictions generated by autoregressive models by relying on a forward bootstrap method that contains two steps: one step generates the bootstrap data by relying on the forward representation of the model; this step accounts for parameter estimation uncertainty even if y*_T ≠ y_T. In a second step, we evaluate the bootstrap prediction and future observation conditional on the last values of the observed variable. Our Algorithm 2 can be viewed as a version of the forward bootstrap method of Pan and Politis when some of the regressors are latent factors that need to be estimated.

4 Bootstrap distribution of estimated factors

The asymptotic validity of the bootstrap intervals for y_{T+h|T} and y_{T+h} described in the previous section depends on the ability of the bootstrap to capture two sources of estimation error: the parameter estimation error and the factors estimation error. In particular, the bootstrap

estimation error for the conditional mean is given by

    ŷ*_{T+h|T} − y*_{T+h|T} = ẑ*_T′(δ̂* − δ*) + α̂′(H*)^{-1}(F̃*_T − H*F̃_T),

where δ* ≡ Φ*δ̂ and Φ* = diag((H*^{-1})′, I_q). Here, H* is the bootstrap analogue of the rotation matrix H defined in (5), i.e.

    H* = Ṽ*^{-1} (F̃*′F̃/T)(Λ̃′Λ̃/N),

where Ṽ* is the r × r diagonal matrix containing on the main diagonal the r largest eigenvalues of X*X*′/(NT), in decreasing order. Note that contrary to H, which depends on unknown population parameters, H* is fully observed. Using the results in Bai and Ng (2013), H* converges asymptotically to a diagonal matrix with +1 or −1 on the main diagonal; see Gonçalves and Perron (2014) for more details. Adding and subtracting appropriately, we can write

    ŷ*_{T+h|T} − y*_{T+h|T} = ẑ_T′(δ̂* − Φ*δ̂) + α̂′((H*)^{-1}F̃*_T − F̃_T) + o_{P*}(1).    (19)

As usual in the bootstrap literature, we use P* to denote the bootstrap probability measure, conditional on a given sample; E* and Var* denote the corresponding bootstrap expected value and variance operators. For any bootstrap statistic T*_{NT}, we write T*_{NT} = o_{P*}(1), in probability, or T*_{NT} →_{P*} 0, in probability, when for any δ > 0, P*(|T*_{NT}| > δ) = o_P(1). We write T*_{NT} = O_{P*}(1), in probability, when for all δ > 0 there exists M_δ < ∞ such that lim_{N,T→∞} P[P*(|T*_{NT}| > M_δ) > δ] = 0. Finally, we write T*_{NT} →_{d*} D, in probability, if, conditional on a sample with probability that converges to one, T*_{NT} weakly converges to the distribution D under P*, i.e. E*(f(T*_{NT})) → E(f(D)) for all bounded and uniformly continuous functions f. See Chang and Park (2003) for similar notation and for several useful bootstrap asymptotic properties.

The stochastic expansion (19) shows that the bootstrap estimation error captures the two forms of estimation uncertainty in (4) provided: (i) the bootstrap distribution of √T(δ̂* − Φ*δ̂) is a consistent estimator of the distribution of √T(δ̂ − δ), and (ii) the bootstrap distribution of √N((H*)^{-1}F̃*_T − F̃_T) is a consistent estimator of the distribution of √N(F̃_T − HF_T). Gonçalves and Perron (2014) discussed conditions for the consistency of the bootstrap distribution of √T(δ̂ − δ). Here we propose a set of conditions that justifies using the bootstrap to consistently estimate the distribution of the estimated factors √N(F̃_t − HF_t) at each point t.

Condition A.

A.1. For each t, Σ_{s=1}^{T} |γ*_{st}| = O_P(1), where γ*_{st} = E*(N^{-1} Σ_{i=1}^{N} e*_{it} e*_{is}).

A.2. For each t, T^{-1} Σ_{s=1}^{T} E*|N^{-1/2} Σ_{i=1}^{N} (e*_{it} e*_{is} − E*(e*_{it} e*_{is}))|^2 = O_P(1).

A.3. For each t, E*‖(TN)^{-1/2} Σ_{s=1}^{T} Σ_{i=1}^{N} F̃_s (e*_{it} e*_{is} − E*(e*_{it} e*_{is}))‖^2 = O_P(1).

A.4. E*‖(TN)^{-1/2} Σ_{t=1}^{T} Σ_{i=1}^{N} F̃_t λ̃_i′ e*_{it}‖^2 = O_P(1).

A.5. T^{-1} Σ_{t=1}^{T} E*‖N^{-1/2} Σ_{i=1}^{N} λ̃_i e*_{it}‖^2 = O_P(1).

A.6. For each t, (Γ*_t)^{-1/2} N^{-1/2} Σ_{i=1}^{N} λ̃_i e*_{it} →_{d*} N(0, I_r), in probability, where Γ*_t ≡ Var*(N^{-1/2} Σ_{i=1}^{N} λ̃_i e*_{it}) is uniformly positive definite.

Condition A is the bootstrap analogue of Bai's (2003) assumptions used to derive the limiting distribution of √N(F̃_t − HF_t). Gonçalves and Perron (2014) also relied on similar high-level

assumptions to study the bootstrap distribution of √T(δ̂* − δ*). In particular, Conditions A.4 and A.5 correspond to their Conditions B*(c) and B*(d), respectively. Since our goal here is to characterize the limiting distribution of the bootstrap estimated factors at each point t, we need to complement some of their other conditions by requiring boundedness in probability of some bootstrap moments at each point in time t, in addition to boundedness in probability of the time average of these bootstrap moments; e.g. Conditions A.1 and A.2 expand Conditions A*(b) and A*(c) in Gonçalves and Perron (2014) in this manner. We also require that a central limit theorem apply to the scaled cross-sectional average of λ̃_i e*_{it} at each time t (Condition A.6). This high-level condition ensures asymptotic normality for the bootstrap estimated factors. It was not required by Gonçalves and Perron (2014), because their goal was only to consistently estimate the distribution of the regression estimates, not of the estimated factors.

Theorem 4.1 Suppose Assumptions 1 and 2 hold. Under Condition A, as N, T → ∞ such that √N/T^{3/4} → 0, we have that for each t,

    √N(F̃*_t − H*F̃_t) = H* Ṽ^{-1} N^{-1/2} Σ_{i=1}^{N} λ̃_i e*_{it} + o_{P*}(1), in probability,

which implies that

    (Π*_t)^{-1/2} √N((H*)^{-1}F̃*_t − F̃_t) →_{d*} N(0, I_r), in probability,

where Π*_t = Ṽ^{-1} Γ*_t Ṽ^{-1}.

Theorem 1(i) of Bai (2003) shows that, under regularity conditions weaker than Assumptions 1 and 2 and provided √N/T → 0, √N(F̃_t − HF_t) →_d N(0, Π_t), where Π_t = V^{-1} Q Γ_t Q′ V^{-1},

Q ≡ plim (F̃′F/T). Theorem 4.1 is its bootstrap analogue. A stronger rate condition (√N/T^{3/4} → 0 instead of √N/T → 0) is used to show that the remainder terms in the stochastic expansion of √N(F̃*_t − H*F̃_t) are asymptotically negligible. This rate condition is a function of the number of finite moments for F_s we assume. In particular, if we replace Assumption 1(a) with E‖F_t‖^q ≤ M for all t, then the required rate restriction is √N/T^{1−1/q} → 0. See Remarks 2 and 3 below.

To prove the consistency of Π*_t for Π_t, we impose the following additional condition.

Condition B. For each t, plim Γ*_t = Q Γ_t Q′.

Condition B requires that Γ*_t, the bootstrap variance of the scaled cross-sectional average of the scores λ̃_i e*_{it}, be consistent for Q Γ_t Q′. This in turn requires that we resample ẽ_{it} in a way that preserves the cross-sectional dependence and heterogeneity properties of e_{it}.

Corollary 4.1 Under Assumptions 1 and 2 and Conditions A and B, we have that for each t, as N, T → ∞ such that √N/T^{3/4} → 0,

    √N((H*)^{-1}F̃*_t − F̃_t) →_{d*} N(0, Π_t), in probability,

where Π_t = V^{-1} Q Γ_t Q′ V^{-1} is the asymptotic covariance matrix of √N(F̃_t − HF_t).

Corollary 4.1 justifies using the bootstrap to construct confidence intervals for the rotated factors HF_t provided Conditions A and B hold. These are high-level conditions that can be checked for any particular bootstrap scheme used to generate e*_{it}. We verify them for a wild bootstrap in Section 5 when proving the consistency of bootstrap confidence intervals for the conditional mean. The fact that factors and factor loadings are not separately identified implies the need to rotate the bootstrap estimated factors in order to consistently estimate the distribution of the sample factor estimates, i.e. we use √N((H*)^{-1}F̃*_t − F̃_t) to approximate the distribution

of √N(F̃_t − HF_t). A similar rotation was discussed in Gonçalves and Perron (2014) in the context of bootstrapping the regression coefficients δ̂.

5 Validity of bootstrap intervals

5.1 Confidence intervals for y_{T+1|T}

We begin by considering intervals for next period's conditional mean. For this purpose, we use a two-step wild bootstrap scheme, as in Gonçalves and Perron (2014). Specifically, we rely on Algorithm 1 and we let

    ε*_{t+1} = ε̂_{t+1} v_{t+1},   t = 1, ..., T − 1,   with v_{t+1} ~ i.i.d. (0, 1),    (20)

and

    e*_{it} = ẽ_{it} η_{it},   t = 1, ..., T,   i = 1, ..., N,    (21)

where η_{it} is i.i.d. (0, 1) across (i, t), independently of v_{t+1}. To prove the asymptotic validity of this method, we strengthen Assumptions 1-4 as follows.

Assumption 5. The loadings λ_i are either deterministic such that ‖λ_i‖ ≤ M < ∞, or stochastic such that E‖λ_i‖^4 ≤ M < ∞ for all i; E‖F_t‖^8 ≤ M < ∞; E|e_{it}|^8 ≤ M < ∞ for all (i, t); and for some q > 2, E|ε_{t+1}|^{2q} ≤ M < ∞ for all t.

Assumption 6. E(e_{it} e_{js}) = 0 if i ≠ j.

With h = 1, our Assumption 4(b) on ε_{t+h} becomes a martingale difference sequence assumption, and the wild bootstrap in (20) is natural. This assumption rules out serial correlation in

ε_{t+1} but allows for conditional heteroskedasticity. Below, we consider the case where h > 1. Assumption 6 assumes the absence of cross-sectional correlation in the idiosyncratic errors and motivates the use of the wild bootstrap in (21).

As the results in the previous sections show, prediction intervals for y_{T+h|T} or y_{T+h} are a function of the factors estimation uncertainty even when this source of uncertainty is asymptotically negligible for the estimation of the distribution of the regression coefficients (i.e. even when √T/N → c = 0). Since factors estimation uncertainty depends on the cross-sectional correlation of the idiosyncratic errors e_{it} via Γ_T = lim Var(N^{-1/2} Σ_{i=1}^{N} λ_i e_{iT}), bootstrap prediction intervals need to mimic this form of correlation to be asymptotically valid. Contrary to the pure time series context, a natural ordering does not exist in the cross-sectional dimension, which implies that proposing a nonparametric bootstrap method (e.g. a block bootstrap) that replicates the cross-sectional dependence is challenging if a parametric model is not assumed. Therefore, we follow Gonçalves and Perron (2014) and use a wild bootstrap to generate e*_{it} under Assumption 6.

The bootstrap percentile-t method, as described in Algorithm 1 and equations (15) and (16), requires the choice of two variances, B̂_T and its bootstrap analogue B̂*_T. To compute B̂_T we use (7), where Σ̂_δ is given in (8). Σ̂_F̃ is given in (9), where Γ̃_T = N^{-1} Σ_{i=1}^{N} λ̃_i λ̃_i′ ẽ²_{iT} is estimator (5a) in Bai and Ng (2006); it is a consistent estimator of a rotated version of Γ_T = lim Var(N^{-1/2} Σ_{i=1}^{N} λ_i e_{iT}) under Assumption 6. We compute B̂*_T as in step 5 of Algorithm 1, relying on the heteroskedasticity-robust bootstrap analogues of Σ̂_δ and Σ̂_F̃.

Theorem 5.1 Suppose Assumptions 1-6 hold and we use Algorithm 1 with ε*_{t+1} = ε̂_{t+1} v_{t+1}

and e*_{it} = ẽ_{it} η*_{it}, where v*_{t+1} is i.i.d.(0,1) for all t = 1, ..., T−1 and η*_{it} is i.i.d.(0,1) for all i = 1, ..., N and t = 1, ..., T, and v*_{t+1} and η*_{it} are mutually independent. Moreover, assume that E*|η*_{it}|^4 ≤ C for all (i, t) and E*|v*_{t+1}|^4 ≤ C for all t. If √T/N → c, where 0 ≤ c < ∞, and √N/T^{7/8} → 0, then, conditional on {y_t, X_t, W_t : t = 1, ..., T},

(ŷ*_{T+1} − ŷ_{T+1}) / √B̂* →d* N(0, 1), in probability.

Remark 1. The rate restriction √N/T^{7/8} → 0 is slightly stronger than the rate used by Bai (2003) (cf. √N/T → 0). It is weaker than the restriction √N/T^{3/4} → 0 used in Theorem 4.1 and Corollary 4.1 because we have strengthened the number of factor moments that exist (compare Assumption 5 with Assumption 1(a)). See Remark 3 in the Appendix.

Remark 2. Since (ŷ_{T+1} − y_{T+1}) / √B̂ →d N(0, 1), as shown by Bai and Ng (2006), Theorem 5.1 implies that bootstrap confidence intervals for y_{T+1} obtained with Algorithm 1 have the correct coverage probability asymptotically.

5.2 Prediction intervals for y_{T+1}

In this section we provide a theoretical justification for bootstrap prediction intervals for y_{T+1} as described in Algorithm 2. In particular, our goal is to prove that a bootstrap prediction interval contains the future observation y_{T+1} with unconditional probability that converges to the nominal level as N, T → ∞. We add the following assumption.

Assumption 7. ε_{t+1} is i.i.d.(0, σ²_ε) with a continuous distribution function F_ε(x) = P(ε_{t+1} ≤ x).
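Before turning to the discussion of Assumption 7, the two-step wild bootstrap scheme of Theorem 5.1 can be sketched in code. This is a minimal illustration, not the paper's implementation; the function name and the choice of standard normal external draws (one admissible i.i.d.(0,1) distribution) are ours.

```python
import numpy as np

def wild_bootstrap_draws(eps_hat, e_tilde, rng):
    """One two-step wild bootstrap draw:
    eps*_{t+1} = eps_hat_{t+1} * v*_{t+1},  v* i.i.d.(0,1),
    e*_{it}    = e_tilde_{it} * eta*_{it},  eta* i.i.d.(0,1) across (i, t),
    with v* and eta* mutually independent (both standard normal here)."""
    T = eps_hat.shape[0]                       # regression residuals
    v = rng.standard_normal(T)                 # external draws for eps*
    eta = rng.standard_normal(e_tilde.shape)   # external draws for the panel
    eps_star = eps_hat * v                     # preserves conditional heteroskedasticity
    e_star = e_tilde * eta                     # imposes cross-sectional independence
    return eps_star, e_star
```

Multiplying residuals by independent external draws preserves each residual's scale while breaking any cross-sectional correlation, which is exactly what Assumption 6 permits.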

Assumption 7 strengthens the m.d.s. assumption by requiring the regression errors to be i.i.d. However, and contrary to Bai and Ng (2006), F_ε does not need to be Gaussian. The continuity assumption on F_ε is used below to prove that the Kolmogorov distance between the bootstrap distribution of the studentized forecast error and the distribution of its sample analogue converges in probability to zero.

Let the studentized forecast error be defined as

s_{T+1} = (y_{T+1} − ŷ_{T+1}) / √(B̂ + σ̂²_ε),

where σ̂²_ε is a consistent estimate of σ²_ε = Var(ε_{T+1}) and

B̂ = V̂ar(ŷ_{T+1}) = T^{−1} ẑ_T' Σ̂_δ ẑ_T + N^{−1} α̂' Σ̂_F α̂.

Given Assumption 7, we can use

σ̂²_ε = (T−1)^{−1} Σ_{t=1}^{T−1} ε̂²_{t+1}  and  Σ̂_δ = σ̂²_ε (T^{−1} Σ_{t=1}^{T} ẑ_t ẑ_t')^{−1}.

Our goal is to show that the bootstrap can be used to estimate consistently F_{T,s}(x) = P(s_{T+1} ≤ x), the distribution function of s_{T+1}. Note that we can write

y_{T+1} − ŷ_{T+1} = ε_{T+1} + O_P(δ_{NT}^{−1}),

given that the conditional mean is estimated with an error of order O_P(δ_{NT}^{−1}), where δ_{NT} = min(√N, √T) (this follows under the assumptions of Theorem 5.1). Since σ̂²_ε →P σ²_ε and B̂ = O_P(δ_{NT}^{−2}) = o_P(1), it follows that

s_{T+1} = ε_{T+1}/σ_ε + o_P(1).
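The studentization above can be illustrated with a short sketch. The function and argument names are ours; the only substance is the formula B̂ = ẑ'Σ̂_δ ẑ / T + α̂'Σ̂_F α̂ / N and the scaling by √(B̂ + σ̂²_ε).

```python
import numpy as np

def studentized_forecast_error(y_next, y_hat, z_hat, Sigma_delta,
                               alpha_hat, Sigma_F, sigma2_eps, T, N):
    """Studentized forecast error s_{T+1} = (y_{T+1} - yhat_{T+1}) / sqrt(B + s2),
    where B = z'Sigma_delta z / T + alpha'Sigma_F alpha / N estimates
    the variance of the estimated conditional mean."""
    B_hat = (z_hat @ Sigma_delta @ z_hat) / T \
          + (alpha_hat @ Sigma_F @ alpha_hat) / N
    s = (y_next - y_hat) / np.sqrt(B_hat + sigma2_eps)
    return s, B_hat
```

As T and N grow, B_hat vanishes and s approaches ε_{T+1}/σ_ε, which is the heuristic behind the expansion above; keeping B_hat in the denominator matters in finite samples.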

Thus, as N, T → ∞, s_{T+1} converges in distribution to the random variable ε_{T+1}/σ_ε, i.e.,

F_{T,s}(x) ≡ P(s_{T+1} ≤ x) → P(ε_{T+1}/σ_ε ≤ x) = F_ε(x σ_ε) ≡ F_s(x), for all x ∈ R.

If we assume that ε_{t+1} is i.i.d. N(0, σ²_ε), as in Bai and Ng (2006), then F_ε(x σ_ε) = Φ(x σ_ε / σ_ε) = Φ(x), implying that F_{T,s}(x) → Φ(x), i.e., s_{T+1} →d N(0, 1). Nevertheless, this is not generally true unless we make the Gaussianity assumption.

We note that although asymptotically the variance of the prediction error y_{T+1} − ŷ_{T+1} does not depend on parameter nor factors estimation uncertainty (it is dominated by σ²_ε for large N and T), we still suggest using Ĉ = B̂ + σ̂²_ε to studentize y_{T+1} − ŷ_{T+1}, since σ̂²_ε alone will underestimate the true forecast variance for finite N and T. Politis (2013) and Pan and Politis discuss notions of asymptotic validity that require taking into account the estimation of the conditional mean. More specifically, in addition to requiring that the interval contains the true observation with the desired nominal coverage probability asymptotically, they require the bootstrap to capture parameter estimation uncertainty. To the extent that their definitions can be extended to the case of generated regressors, we expect our bootstrap intervals to satisfy these stricter notions of validity.

Next we show that the bootstrap yields a consistent estimate of the distribution of s_{T+1} without assuming that ε_{t+1} is Gaussian. Our proposal is based on a two-step residual-based bootstrap scheme, as described in Algorithm 2 and the equations above, where in step 3 we generate {ε*_2, ..., ε*_T, ε*_{T+1}} as a random sample obtained from the centered residuals {ε̂_{t+1} − ε̄ : t = 1, ..., T−1}, with ε̄ the average of the residuals. Resampling in an i.i.d. fashion is justified under Assumption 7. We recenter the residuals because ε̄ is not necessarily zero unless W_t contains a constant regressor.
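The i.i.d. resampling step just described can be sketched as follows; the interface is ours, not the paper's.

```python
import numpy as np

def iid_resample_centered(eps_hat, size, rng):
    """Draw eps*_1, ..., eps*_size i.i.d. from the centered residuals
    {eps_hat_t - mean(eps_hat)}, as in step 3 of the residual-based
    bootstrap: recentering is needed because the mean residual is not
    exactly zero unless the regression includes a constant."""
    centered = eps_hat - eps_hat.mean()
    idx = rng.integers(0, len(centered), size=size)  # sample with replacement
    return centered[idx]
```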

Nevertheless, since ε̄ = o_P(1), resampling the uncentered residuals is also asymptotically valid in our context. We compute B̂* and σ̂*²_ε using the bootstrap analogues of Σ̂_δ and σ̂²_ε introduced above. Note that σ̂*²_ε is a consistent estimator of σ²_ε and B̂* = o_{P*}(1), in probability. As above, we can write

y*_{T+1} − ŷ*_{T+1} = ε*_{T+1} + O_{P*}(δ_{NT}^{−1}), in probability,

which in turn implies

s*_{T+1} ≡ (y*_{T+1} − ŷ*_{T+1}) / √(B̂* + σ̂*²_ε) = ε*_{T+1}/σ_ε + o_{P*}(1).

Thus, F*_{T,s}(x) = P*(s*_{T+1} ≤ x), the bootstrap distribution of s*_{T+1} conditional on the sample, is asymptotically the same as the bootstrap distribution of ε*_{T+1}/σ_ε. Let F*_ε denote the bootstrap distribution function of ε*_t. It is clear from the stochastic expansions above that the crucial step is to show that ε*_{T+1} converges weakly in probability to ε_{T+1}, i.e., d(F*_ε, F_ε) →P 0 for any metric d that metrizes weak convergence. In the following we use the Mallows metric, defined as

d₂(F_X, F_Y) = inf (E|X − Y|²)^{1/2},

where the infimum is over all joint distributions for the random variables X and Y having marginal distributions F_X and F_Y, respectively.

Lemma 5.1. Under Assumptions 1-7, and as N, T → ∞ such that √T/N → c, 0 ≤ c < ∞, d₂(F*_ε, F_ε) →P 0.

Corollary 5.1. Under the same assumptions as Theorem 5.1, strengthened by Assumption 7,

we have that

sup_{x∈R} |F*_{T,s}(x) − F_{T,s}(x)| → 0, in probability.

Corollary 5.1 implies the asymptotic validity of the bootstrap prediction intervals defined above, where asymptotic validity means that the interval contains y_{T+1} with unconditional probability converging to the nominal level asymptotically. Specifically, we can show that P(y_{T+1} ∈ EQ_{1−α}) → 1 − α and P(y_{T+1} ∈ SY_{1−α}) → 1 − α as N, T → ∞. See, e.g., Beran (1987) and Wolf and Wunderli (2015). For instance,

P(y_{T+1} ∈ EQ_{1−α}) = P(s_{T+1} ≤ q*_{1−α/2}) − P(s_{T+1} ≤ q*_{α/2}) = P(F*_{T,s}(s_{T+1}) ≤ 1 − α/2) − P(F*_{T,s}(s_{T+1}) ≤ α/2).

Given Corollary 5.1, we have that F*_{T,s}(s_{T+1}) = F_{T,s}(s_{T+1}) + o_P(1), and we can show that F_{T,s}(s_{T+1}) →d U[0, 1]. Indeed, for any x,

P(F_{T,s}(s_{T+1}) ≤ x) = P(s_{T+1} ≤ F_{T,s}^{−1}(x)) = F_{T,s}(F_{T,s}^{−1}(x)) → x.

A stronger result than that implied by Corollary 5.1 would be to prove that P(y_{T+1} ∈ EQ_{1−α} | z_T) → 1 − α, where z_T = (F_T', W_T')'. Nevertheless, to claim asymptotic validity of the bootstrap prediction intervals conditional on the regressors would require stronger assumptions, namely the assumption that ε_{T+1} is independent of z_T. Such a strong exogeneity assumption is unlikely to be satisfied in economics.
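The equal-tailed percentile-t construction behind EQ_{1−α} can be sketched as follows: invert the α/2 and 1−α/2 quantiles of the bootstrap studentized forecast errors around the point forecast. Names and interface are ours.

```python
import numpy as np

def equal_tailed_interval(y_hat, se, s_star, alpha=0.05):
    """Equal-tailed percentile-t prediction interval. `s_star` holds the
    bootstrap studentized forecast errors (y*_{T+1} - yhat*_{T+1}) / se*;
    `se` is the sample-based scale sqrt(B_hat + sigma2_hat)."""
    q_lo = np.quantile(s_star, alpha / 2)
    q_hi = np.quantile(s_star, 1 - alpha / 2)
    # interval contains y_{T+1} with probability -> 1 - alpha
    return y_hat + q_lo * se, y_hat + q_hi * se
```

Because the two quantiles are taken separately, the interval is allowed to be asymmetric around the point forecast, which is how it adapts to skewed or shifted error distributions.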

5.3 Multi-horizon forecasting, h > 1

Finally, we consider the case where the forecasting horizon, h, is larger than 1. The main complication in this case is the fact that the regression errors ε_{t+h} in the factor-augmented regression will generally be serially correlated up to order h − 1. This serial correlation affects the distribution of δ̂ − δ since the form of Ω is different in this case, as it includes autocovariances of the score process.

We modify our two algorithms above by drawing ε*_{t+h} using the block wild bootstrap (BWB) algorithm proposed in Djogbenou et al. The idea is to separate the sample residuals ε̂_{t+h} into non-overlapping blocks of b consecutive observations. For simplicity, we assume that the number of such blocks is an integer. Then, we generate our bootstrap errors by multiplying each residual within a block by the same draw of an external variable, i.e., ε*_t = ε̂_t η*_j for all t in block j, with η*_j i.i.d.(0, 1). The fact that each residual within a block is multiplied by the same external draw preserves the time series dependence within blocks. We let b = h because we use the fact that ε_{t+h} is MA(h − 1) under our assumptions. For h = 1, this algorithm is the same as the wild bootstrap. Djogbenou et al. show that this algorithm allows for valid bootstrap inference in a regression model with estimated factors under general mixing conditions on the error term. The moving average structure obtained in a forecasting context (assuming correct specification) obviously satisfies these mixing conditions, and this ensures that the block wild bootstrap algorithm replicates the distribution of δ̂ − δ after rotating the estimated parameter in the bootstrap world. Thus, the result of Theorem 5.1 holds

in this more general context since h > 1 does not affect factor estimation. For the forecast of the new observation, y_{T+h}, the crucial condition for asymptotic validity of the bootstrap prediction intervals is to capture the marginal distribution of ε_{T+h}. This means that the i.i.d. bootstrap can still be used in step 3 of Algorithm 2 to generate ε*_{T+h} despite the serial correlation in ε_{t+h}. Alternatively, we can also amend the block wild bootstrap by generating ε*_{t+h} as above for t = 1, ..., T − h and generating ε*_{T+h} as a draw from the empirical distribution function of the residuals ε̂_{t+h}, t = 1, ..., T − h. We will compare these two approaches in the simulation experiment below.

6 Simulations

In this section, we report results from a simulation experiment to analyze the properties of the normal asymptotic intervals as well as their bootstrap counterparts analyzed above. The data-generating process is similar to the one used in Gonçalves and Perron. We consider the single-factor model:

y_{t+h} = 0.5 F_t + ε_{t+h},

where F_t is an autoregressive process:

F_t = ρ F_{t−1} + u_t,

with u_t drawn from a normal distribution, independently over time, with fixed variance. We use the backward representation of this autoregressive process to make sure that all sample paths have the same value of F_T. We will consider two forecasting horizons, h = 1 and h = 4.
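The block wild bootstrap of Section 5.3, which the experiment below uses for h = 4, can be sketched as follows; the function name is ours, and standard normal external draws are one admissible choice.

```python
import numpy as np

def block_wild_bootstrap(eps_hat, b, rng):
    """Block wild bootstrap: split the residuals into non-overlapping
    blocks of b consecutive observations and multiply every residual
    within a block by the same external N(0,1) draw, preserving serial
    dependence up to lag b-1. Assumes len(eps_hat) is a multiple of b."""
    n = len(eps_hat)
    assert n % b == 0, "number of blocks must be an integer"
    eta = rng.standard_normal(n // b)   # one external draw per block
    scale = np.repeat(eta, b)           # same draw within each block
    return eps_hat * scale
```

With b = 1 the scheme collapses to the ordinary wild bootstrap, which is why the two methods coincide at horizon h = 1.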

The regression error ε_{t+h} will be homoskedastic, with expectation 0 and variance 1, and will have a moving average structure to accommodate multi-horizon forecasting:

ε_{t+h} = Σ_{j=0}^{h−1} θ^j v_{t+h−j},

for a fixed coefficient θ ∈ (0, 1). To analyze the effects of deviations from normality, we report results for two distributions for v_t:

Normal: v_t ~ N(0, (Σ_{j=0}^{h−1} θ^{2j})^{−1});
Mixture: v_t ~ (Σ_{j=0}^{h−1} θ^{2j})^{−1/2} [p N(0, 1) + (1 − p) N(9, 1)],

where p is distributed as Bernoulli(0.9). The particular mixture distribution we are using is similar to the one proposed by Pascual, Romo and Ruiz. Most of the data is drawn from a N(0, 1), but about 10% will come from a second normal with a much larger mean of 9. The scaling term in parentheses ensures that the variance of ε_{t+h} is 1 regardless of h. We have also considered other distributions, such as the uniform, exponential, and χ², but do not report these results for brevity.

The matrix of panel variables is generated as:

X_{it} = λ_i F_t + e_{it},

where λ_i is drawn from a U[0, 1] distribution, independently across i, and e_{it} is heteroskedastic but independent over i and t. The variance of e_{it} is drawn from U[0.5, 1.5] for each i.

We consider asymptotic and bootstrap confidence intervals at a nominal level of 95%. Asymptotic inference is conducted by using a HAC estimator (quadratic spectral kernel with bandwidth set to h) to account for possible serial correlation. We use Algorithms 1 and 2 described above to generate the bootstrap data, with B = 999 bootstrap replications. The idiosyncratic errors are always drawn using the wild bootstrap. In step 3, three bootstrap schemes are analyzed to draw ε*_t: the first one draws the residuals with replacement in an i.i.d. fashion, the second one uses the wild bootstrap, while the last one redraws the residuals using the block wild bootstrap with a block size equal to h. The first two methods are only valid when h = 1, while the last one is valid for both values of h. In all applications of the wild bootstrap and block wild bootstrap, the external variable has a standard normal distribution. With the wild bootstrap, we use the heteroskedasticity-robust variance estimator, while we use the HAC one with block size equal to h for the block wild bootstrap.

We consider two types of bootstrap intervals: symmetric percentile-t and equal-tailed percentile-t. We report experiments based on 5,000 replications, with three values for T and four values for N. We report results graphically for the conditional mean and for the new observation y_{T+h}. We report the frequency of times the 95% confidence interval is to the left or right of the true parameter. Each figure has three rows corresponding to the three values of T, with N on the horizontal axis, and in the last column we show the average length of the corresponding confidence intervals relative to the length of the "ideal" confidence intervals obtained with the 2.5% and 97.5% quantiles from the empirical distribution simulated for each (N, T) as endpoints. To keep the figures readable, we report results for two bootstrap methods in each figure. For the conditional mean, we report results for the wild

bootstrap and block wild bootstrap, with differences thus only coming from the block size, since the two methods are the same for a block size equal to 1. For the observation, we report results using the i.i.d. and block wild bootstrap, since we require an i.i.d. assumption for the construction of intervals for this quantity.

It turns out that the distribution of ε_{T+h} noticeably affects the results for y_{T+h} only. As a consequence, we only report results with Gaussian ε_{t+h} for the conditional mean. On the other hand, the results for y_{T+h} are dominated by the behavior of ε_{t+h}. Thus, the contribution of the conditional mean and the contribution of ε_{T+h} to the forecasts of y_{T+h} are clearly separated.

6.1 Forecasting horizon h = 1

We start by presenting results when we are interested in making a prediction for next period's value. For this horizon, because ε_{t+1} does not have serial correlation, the wild bootstrap and block wild bootstrap methods are identical, with reported differences due to simulation error.

Conditional mean. The results for the conditional mean are presented in Figure 1. Asymptotic theory (blue line) shows large distortions that decrease with an increasing N. For example, for T = N = 50, the 95% confidence interval does not include the true mean in far more than the nominal 5% of the replications; this number is reduced to about 7% for the larger sample sizes considered. Moreover, we see that most of these instances are in one direction: the confidence interval lies to the left of the true value. This can be explained by a bias in the estimation of the parameter δ, as documented by Gonçalves and Perron, due to the estimation uncertainty in the factors. This bias is negative, thus shifting the distribution of the conditional mean to the left, leading to more rejections on the left side and fewer on the right side than predicted

by the asymptotic normal distribution.

Figure 1. Probability of interval to the left or right of y_{T+1}. [Three rows, one per value of T; columns: probability to the left, probability to the right, length of intervals; methods: asymptotic, SYM-WB, ET-WB, SYM-BWB, ET-BWB.]

Note: The figures in the first two columns report the fraction of confidence intervals that lie to the left or to the right of the conditional mean for each method as a function of the cross-sectional dimension N. Each row corresponds to a different time series dimension T. The last column reports the length of the confidence intervals relative to the length of the "ideal" intervals obtained as the 2.5% and 97.5% quantiles of the empirical distribution.

The presence of bias is reflected in the bootstrap distribution of ŷ*_{T+1}, which is also shifted to the left. This is illustrated by a large difference between the bootstrap symmetric and equal-tailed intervals. The symmetric intervals reproduce the pattern of more coverage to the left than to the right, while equal-tailed intervals distribute coverage more or less equally in both tails. In both cases, the total rejection rates are closer to their nominal level than with asymptotic theory; for example, for T = N = 50, the wild bootstrap excludes the true value far less often than the asymptotic intervals, for both the symmetric and the equal-tailed versions.

This phenomenon is also reflected in the length of intervals. The asymptotic intervals are shortest and least accurate. The equal-tailed intervals are typically slightly shorter than the corresponding symmetric intervals.

Forecast of y_{T+1}. We next consider the prediction of y_{T+1} in Figures 2 and 3. As mentioned before, given our parameter configuration, the uncertainty is dominated by the underlying error term ε_{T+1} and not estimation uncertainty. Asymptotic intervals rely on the normality assumption; this provides a motivation for the bootstrap, and the effect of non-normality is highlighted in our figures. Figure 2 shows that under normality, inference for y_{T+1} is quite accurate for all methods, and it is essentially unaffected by the values of N and T, as predicted, since it is dominated by the behavior of ε_{t+h}. All methods perform similarly, though we see that the asymptotic intervals, which make the correct Gaussianity assumption, are shorter than those based on the bootstrap. The i.i.d. bootstrap also produces slightly narrower intervals than the block wild bootstrap.

Figure 2. Probability of interval to the left and right of y_{T+1}, ε is normal. [Three rows, one per value of T; columns: probability to the left, probability to the right, length of intervals; methods: asymptotic, SYM-iid, ET-iid, SYM-BWB, ET-BWB.]

Note: The figures in the first two columns report the fraction of confidence intervals that lie to the

left or to the right of the observation for each method as a function of the cross-sectional dimension N. Each row corresponds to a different time series dimension T. The last column reports the length of the confidence intervals relative to the length of the "ideal" intervals obtained as the 2.5% and 97.5% quantiles of the empirical distribution.

Figure 3 provides the same information when the errors are drawn from a mixture of normals. We see problems with asymptotic theory, and these come almost exclusively in the form of a confidence interval to the left of the true value. This is due to the fact that we have falsely imposed that errors are Gaussian, whereas the true distribution is bimodal. On the other hand, the bootstrap corrects these difficulties. The symmetric intervals do so by reducing coverage errors on the left side and having almost none to the right. The equal-tailed intervals distribute coverage more evenly by reducing undercoverage on the right side and pretty much eliminating the overcoverage on the left side. Because they allow for asymmetry, the equal-tailed intervals are shorter than the symmetric ones. Similarly, the i.i.d. bootstrap, which makes the correct assumption that ε_{T+1} is i.i.d., produces slightly more accurate coverage and shorter intervals.

Figure 3. Probability of interval to the left or right of y_{T+1}, ε is mixture. [Three rows, one per value of T; columns: probability to the left, probability to the right, length of intervals; methods: asymptotic, SYM-iid, ET-iid, SYM-BWB, ET-BWB.]

Note: see Figure 2.

6.2 Multi-horizon forecasting

In Figures 4-6, we report the same results as before but for h = 4 instead of h = 1. Because the error term is now a moving average of order 3, the wild bootstrap and block wild bootstrap (a block size equal to 4 is used) are no longer identical. Figure 4 reports the results for the conditional mean. The main difference with Figure 1 is that there is a gap between the accuracy of the intervals based on the wild bootstrap and on the block wild bootstrap. As before, the equal-tailed intervals provide more accurate coverage and smaller length because they capture the bias in the distribution.

Figure 4. Probability of interval to the left or right of y_{T+h}. [Three rows, one per value of T; columns: probability to the left, probability to the right, length of intervals; methods: asymptotic, SYM-WB, ET-WB, SYM-BWB, ET-BWB.]

Note: see Figure 1.

While there is a difference in coverage between the wild bootstrap and the block wild bootstrap, it is not very large. This feature can be explained by the fact that factors are estimated. The forecast error variance has two parts, one due to the estimation of the parameters and one due to the estimation of the factors. Serial correlation only affects the first term in that expression, and thus its effect is dampened by the presence of the second term, which is usually not present in a typical forecasting context where predictors are observed.

Figures 5 and 6 give the results for the new observation, y_{T+h}. Overall, we see that serial correlation does not seem to affect inference on y_{T+h} much. There are some effects for the smallest T, but this seems related to difficulties in estimating the distribution of ε_{T+h} under serial correlation. Otherwise, the figures and conclusions are similar to those in Figures 2 and 3, with the exception of the fact that the block wild bootstrap leads to much wider intervals than the i.i.d. bootstrap, with some improvement in coverage for the smallest T.

Figure 5. Probability of interval to the left and right of y_{T+h}, ε is normal. [Three rows, one per value of T; columns: probability to the left, probability to the right, length of intervals; methods: asymptotic, SYM-iid, ET-iid, SYM-BWB, ET-BWB.]

Note: see Figure 2.

Figure 6. Probability of interval to the left or right of y_{T+h}, ε is mixture. [Same layout as Figure 5.]

Note: see Figure 2.

7 Empirical illustration

In this section, we use the dataset of Stock and Watson (2003) and Rossi and Sekhposyan, updated to the first quarter of 2014, to illustrate the properties of asymptotic and bootstrap intervals. We consider forecast intervals for changes in the inflation rate, measured by the quarterly growth rate of the GDP deflator (PGDP) at an annual rate:

π_t = ln(PGDP_t / PGDP_{t−1}) − ln(PGDP_{t−1} / PGDP_{t−2}),

expressed at an annual rate. The panel consists of series on asset prices, measures of economic activity, wages and prices, and money used to construct forecasts; see Rossi and Sekhposyan for details. The inflation rate is not included in the data used for extracting the factors. In order to have a balanced panel, our sample covers the period 1973Q1-2014Q1.

We construct forecasts from the factor-augmented autoregressive model:

π̂_{t+h} = β̂ + Σ_{j=1}^{p} φ̂_j π_{t−j+1} + Σ_{j=1}^{r} α̂_j F̂_{j,t}.

We compute forecast intervals for the last 50 observations in the sample. This means that the forecasts are made each period from the third quarter of 2001 until the end of 2013. We use a rolling window of observations to estimate factors and parameters, as in Rossi and Sekhposyan. We also follow Rossi and Sekhposyan and first choose the AR order p for each time period using BIC and then augment with the estimated factors. (We thank Tatevik Sekhposyan for providing us with the data.) In each period,

we select the number of factors such that the factors explain a given minimum fraction of the total variance of the panel after centering and rescaling. Three factors are selected by this approach in most of the 50 periods, with a different number selected in the remaining ones. The factor-based forecasts reduce the root mean squared error of the forecasts by about 3% relative to autoregressive forecasts.

In Figure 7, we report prediction intervals for the factor-augmented forecasts. The dashed red lines represent the bounds of the pointwise 95% prediction interval based on the asymptotic theory of Bai and Ng (2006) for each date. This interval is symmetric around the point forecast by construction since it is based on the normal distribution. We also report bootstrap intervals based on the block wild bootstrap (BWB) for ε_{t+h}, with block size equal to the bandwidth selected by the Andrews (1991) rule, and the wild bootstrap for e_t. Other methods for drawing ε*_{t+h} lead to very similar intervals, and we do not report them to ease exposition (they are available from the authors upon request). The reported intervals were constructed as equal-tailed percentile-t intervals and are based on B = 9999 bootstrap replications.

Figure 7. Prediction interval for π_{T+1}. [Factor-augmented forecast intervals: asymptotic (A) and block wild bootstrap (BWB).]

Note: The dashed red lines represent the bounds of the pointwise 95% prediction interval based on the asymptotic theory of Bai and Ng (2006) for each date. The solid blue lines are the bounds of the 95% equal-tailed percentile-t bootstrap intervals based on the block wild bootstrap (BWB), with block size equal to the bandwidth selected by the Andrews (1991) rule, based on B = 9999 bootstrap replications.

While both sets of intervals in Figure 7 are similar, there are noticeable differences that can be attributed either to bias in the estimation of the parameters or to non-normality in the distribution of the error term. Rossi and Sekhposyan find fairly strong evidence of non-normality of the forecast errors for this series, and this is likely an important source of the differences between the asymptotic and equal-tailed intervals.

The behavior of the bootstrap intervals during specific periods is quite interesting. For example, early in the sample, the bootstrap intervals are consistently shifted down relative to the asymptotic intervals. Figure 8 highlights two periods where the bootstrap interval lies completely below zero. The left panel presents the same intervals as Figure 7 around the fourth quarter of 2008. We see that the bootstrap intervals are shifted down for most of the reported period, and the upper limit of the bootstrap interval drops just below zero in the fourth quarter of 2008. On the other hand, the asymptotic interval contains zero, with a clearly positive upper limit. This means that policy makers concerned about a sudden reduction in inflation following the collapse of Lehman Brothers would have underestimated the probability of a reduction in inflation had they based their decision on the asymptotic intervals.

Figure 8. Prediction intervals around 2008 and 2010. [Left panel: 2008Q1-2009Q4; right panel: 2010Q1-2011Q4.]

Note: see Figure 7.

Similarly, the right panel of Figure 8 focuses on the intervals around the fourth quarter of 2010. As in 2008, the bootstrap interval for the fourth quarter of 2010 is shifted down and includes only negative values, whereas the corresponding asymptotic interval includes positive inflation changes. At that time, many central banks were concerned about deflation risk, and relying on asymptotic intervals would have given them the impression that large reductions in inflation were much less likely than suggested by the bootstrap interval (the realized change in inflation was indeed negative).

8 Conclusion

In this paper, we have proposed the bootstrap to construct valid prediction intervals for models involving estimated factors. We considered two objects of interest: the conditional mean y_{T+h} and the realization y_{T+h}. The bootstrap improves considerably on asymptotic theory for the conditional mean when the factors are relevant because of the bias in the estimation of the

regression coefficients. However, our simulation results suggest that allowing for serial correlation, as is relevant when the forecasting horizon is greater than 1, is not very important in practice. For the observation, the bootstrap allows the construction of valid intervals without having to make strong distributional assumptions such as normality, as was done in previous work by Bai and Ng (2006).

One key assumption that we had to make to establish our results is that the idiosyncratic errors in the factor model are cross-sectionally independent. This is certainly restrictive, but it allows for the use of the wild bootstrap on the idiosyncratic errors. Non-parametric bootstrapping under more general conditions remains a challenge. The results in this paper could be used to prove the validity of a scheme in that context by showing that Conditions A and B are satisfied.

A Appendix

The proof of Theorem 4.1 requires the following auxiliary result, which is the bootstrap analogue of Lemma A.1 of Bai (2003). It is based on the following identity, which holds for each t:

F̃*_t − H̃* F̃_t = Ṽ*^{−1} ( T^{−1} Σ_{s=1}^T F̃*_s γ*_st + T^{−1} Σ_{s=1}^T F̃*_s ζ*_st + T^{−1} Σ_{s=1}^T F̃*_s η*_st + T^{−1} Σ_{s=1}^T F̃*_s ξ*_st ) ≡ Ṽ*^{−1} (A*_1t + A*_2t + A*_3t + A*_4t),

where

γ*_st = E*( N^{−1} Σ_{i=1}^N e*_is e*_it ),  ζ*_st = N^{−1} Σ_{i=1}^N [ e*_is e*_it − E*(e*_is e*_it) ],

η*_st = F̃_s' ( N^{−1} Σ_{i=1}^N λ̃_i e*_it ) = F̃_s' Λ̃' e*_t / N,  and  ξ*_st = F̃_t' ( N^{−1} Σ_{i=1}^N λ̃_i e*_is ) = η*_ts.

Lemma A.1. Assume Assumptions 1 and 2 hold. Under Condition A, we have that, for each t, in probability, as N, T → ∞:
(a) T^{−1} Σ_s F̃*_s γ*_st = O_{P*}(δ_{NT}^{−1} T^{−1/2}) + O_{P*}(T^{−3/4});
(b) T^{−1} Σ_s F̃*_s ζ*_st = O_{P*}(δ_{NT}^{−1} N^{−1/2});
(c) T^{−1} Σ_s F̃*_s η*_st = O_{P*}(N^{−1/2});
(d) T^{−1} Σ_s F̃*_s ξ*_st = O_{P*}(δ_{NT}^{−1} N^{−1/2}).

Remark 3. The term O_{P*}(T^{−3/4}) that appears in (a) is of a larger order of magnitude than the corresponding term in Bai (2003, Lemma A.1(i)). The reason why we obtain this larger term is that we rely on Bonferroni's inequality and Chebyshev's inequality

to bound max_s ‖F_s‖ = O_P(T^{1/4}) using the fourth-order moment assumption on F_s (cf. Assumption 1(a)). In general, if E‖F_s‖^q ≤ M for all s, then max_s ‖F_s‖ = O_P(T^{1/q}) and we will obtain a term of order O_P(T^{1/q}/T).

Proof of Lemma A.1. The proof follows closely that of Lemma A.1 of Bai (2003). The only exception is (a), where an additional O_{P*}(T^{−3/4}) term appears. In particular, we write

T^{−1} Σ_s F̃*_s γ*_st = T^{−1} Σ_s (F̃*_s − H̃* F̃_s) γ*_st + H̃* T^{−1} Σ_s F̃_s γ*_st ≡ a_t + b_t.

We use Cauchy-Schwarz and Condition A.1 to bound a_t as follows:

‖a_t‖ ≤ ( T^{−1} Σ_s ‖F̃*_s − H̃* F̃_s‖² )^{1/2} ( T^{−1} Σ_s γ*²_st )^{1/2} = O_{P*}(δ_{NT}^{−1}) O_{P*}(T^{−1/2}),

where T^{−1} Σ_s ‖F̃*_s − H̃* F̃_s‖² = O_{P*}(δ_{NT}^{−2}) by Lemma 3.1 of Gonçalves and Perron (note that this lemma only requires Conditions A*(b), A*(c), and B*(d), which correspond to our Conditions A.2, A.3, and A.5). For b_t, we have that, ignoring H̃*, which is O_{P*}(1),

b_t = T^{−1} Σ_s F̃_s γ*_st = T^{−1} Σ_s (F̃_s − H F_s) γ*_st + H' T^{−1} Σ_s F_s γ*_st ≡ b_{1t} + b_{2t},

where b_{1t} = O_P(δ_{NT}^{−1} T^{−1/2}), using the fact that T^{−1} Σ_s ‖F̃_s − H F_s‖² = O_P(δ_{NT}^{−2}) under Assumptions 1 and 2, and the fact that T^{−1} Σ_s γ*²_st = O_P(T^{−1}) for each t by Condition

A.1. For b_{2t}, note that, ignoring H = O_P(1),

‖b_{2t}‖ ≤ max_s ‖F_s‖ · T^{−1} Σ_s |γ*_st| = O_P(T^{1/4}) O_P(T^{−1}) = O_P(T^{−3/4}),

where we have used the fact that E‖F_s‖⁴ ≤ M for all s to bound max_s ‖F_s‖. Indeed, by Bonferroni's inequality and Chebyshev's inequality, we have that

P( T^{−1/4} max_s ‖F_s‖ > M ) ≤ Σ_{s=1}^T P( ‖F_s‖ > T^{1/4} M ) ≤ Σ_{s=1}^T E‖F_s‖⁴ / (T M⁴),

which can be made arbitrarily small for M sufficiently large. For (b), we follow exactly the proof of Bai (2003) and use Condition A.2 to bound T^{−1} Σ_s ζ*²_st = O_{P*}(N^{−1}) for each t; similarly, we use Condition A.3 to bound T^{−1/2} Σ_s F̃_s ζ*_st for each t. For (c), we bound T^{−1} Σ_s F̃*_s η*_st = (T^{−1} Σ_s F̃*_s F̃_s')( N^{−1} Σ_i λ̃_i e*_it ) = O_{P*}(N^{−1/2}) by using Condition A.4. This same condition is used to bound T^{−1} Σ_s η*²_st = O_{P*}(N^{−1}) for each t. Finally, for part (d), we use Condition A.4 to bound T^{−1} Σ_s F̃_s ξ*_st = O_{P*}(N^{−1/2}) for each t, and we use Condition A.5 to bound T^{−1} Σ_s ξ*²_st = O_{P*}(N^{−1}) for each t.

Proof of Theorem 4.1. By Lemma A.1, it follows that the third term in F̃*_t − H̃* F̃_t is the dominant one (it is O_{P*}(N^{−1/2})); the first term is O_{P*}(δ_{NT}^{−1} T^{−1/2}) + O_{P*}(T^{−3/4}) = o_{P*}(N^{−1/2}) if √N/T^{3/4} → 0, whereas the second and the fourth terms are O_{P*}(δ_{NT}^{−1} N^{−1/2}) = o_{P*}(N^{−1/2}), as

N, T → ∞. Thus, we have that

√N (F̃*_t − H̃* F̃_t) = Ṽ*^{−1} ( T^{−1} Σ_s F̃*_s F̃_s' ) ( N^{−1/2} Σ_{i=1}^N λ̃_i e*_it ) + o_{P*}(1) = H̃* Ṽ*^{−1} Γ̃*_t^{1/2} [ Γ̃*_t^{−1/2} N^{−1/2} Σ_{i=1}^N λ̃_i e*_it ] + o_{P*}(1),

where the term in brackets →d* N(0, I_r) by Condition A.6, given the definition of H̃* (which involves Λ̃'Λ̃/N) and the fact that Ṽ* is bounded in probability. Since det Γ̃*_t > ε > 0 for all N, T and some ε, Γ̃*_t^{−1} exists and we can define Γ̃*_t^{−1/2} = (Γ̃*_t^{1/2})^{−1}, where Γ̃*_t^{1/2} Γ̃*_t^{1/2'} = Γ̃*_t. Let Π̃*_t^{1/2} = Ṽ*^{−1} Γ̃*_t^{1/2} and note that Π̃*_t^{1/2} is such that Π̃*_t^{1/2} Π̃*_t^{1/2'} = Ṽ*^{−1} Γ̃*_t Ṽ*^{−1} = Π̃*_t. The result follows by multiplying by Π̃*_t^{−1/2} and using Condition A.6.

Proof of Corollary 4.1. Condition B and the fact that Ṽ* →P V under our assumptions imply that Π̃*_t →P Π_t ≡ V^{−1} Q Γ_t Q' V^{−1}. This suffices to show the result.

Proof of Theorem 5.1. Using the decomposition above and the fact that ẑ*_T = Φ* ẑ_T + ((F̃*_T − H̃* F̃_T)', 0')', where Φ* = diag(H̃*, I_q), it follows that

ŷ*_{T+1} − ŷ_{T+1} = ẑ_T' Φ*' (δ̂* − Φ*^{−1'} δ̂) + α̂' H̃*^{−1} (F̃*_T − H̃* F̃_T) + r*_T,

where the remainder is r*_T = (F̃*_T − H̃* F̃_T)' (α̂* − H̃*^{−1'} α̂) = O_{P*}(δ_{NT}^{−2}). First, we argue that

(ŷ*_{T+1} − ŷ_{T+1}) / √B* →d* N(0, 1),

where B* is the asymptotic variance of ŷ*_{T+1} − ŷ_{T+1}, i.e.,

B* = avar*(ŷ*_{T+1} − ŷ_{T+1}) = T^{−1} ẑ_T' Σ*_δ ẑ_T + N^{−1} α̂' Π̃*_T α̂.

To show this, we follow the arguments of Bai and Ng (2006, proof of their Theorem 3) and show that:

(1) Z*_1 ≡ √T Φ*' (δ̂* − Φ*^{−1'} δ̂) →d* N(−c Δ_δ, Σ_δ);
(2) Z*_2 ≡ √N (F̃*_T − H̃* F̃_T) →d* N(0, Π_T);
(3) Z*_1 and Z*_2 are asymptotically independent, conditional on the original sample.

Condition (1) follows from Gonçalves and Perron under Assumptions 1-6; (2) follows from Corollary 4.1 provided √N/T^{7/8} → 0 and Conditions A and B hold for the wild bootstrap, which we verify next; (3) holds because we generate e*_t independently of ε*_{t+1}.

Proof of Condition A for the wild bootstrap. We verify A.1 for t = T. We have that

T^{−1} Σ_s γ*²_sT = T^{−1} ( N^{−1} Σ_{i=1}^N ẽ²_iT )².

Thus, it suffices to show that N^{−1} Σ_{i=1}^N ẽ²_iT = O_P(1). This follows by using the decomposition

ẽ_it = e_it − λ'_i H^{−1} (F̃_t − H F_t) − (λ̃_i − H^{−1'} λ_i)' F̃_t,

which implies that

N^{−1} Σ_i ẽ²_iT ≤ 3 N^{−1} Σ_i e²_iT + 3 ‖H^{−1}‖² ‖F̃_T − H F_T‖² N^{−1} Σ_i ‖λ_i‖² + 3 ‖F̃_T‖² N^{−1} Σ_i ‖λ̃_i − H^{−1'} λ_i‖².

The first term is O_P(1) given that E(e²_iT) ≤ M; the second term is o_P(1) since E‖λ_i‖² ≤ M and ‖F̃_T − H F_T‖² = o_P(1); and the third term is o_P(1) given Lemma C.1(ii) of Gonçalves and Perron and the fact that ‖F̃_T‖ = O_P(1).

Next, we verify A.2. For t = T, following the proof in Gonçalves and Perron (condition A*(c)), we have that

T^{−1} Σ_s E* ( N^{−1/2} Σ_i [ e*_is e*_iT − E*(e*_is e*_iT) ] )² = T^{−1} Σ_s N^{−1} Σ_i ẽ²_is ẽ²_iT Var*(η*_is η*_iT) ≤ C ( N^{−1} Σ_i ẽ⁴_iT )^{1/2} ( T^{−1} Σ_s N^{−1} Σ_i ẽ⁴_is )^{1/2} = O_P(1),

where the first factor can be bounded by an argument similar to that used above to bound N^{−1} Σ_i ẽ²_iT, and the second factor can be bounded by Lemma C.1(iii) of Gonçalves and Perron. A.3 follows by an argument similar to that used by Gonçalves and Perron

to verify Condition B*(b). In particular,

  E* ‖ (1/√T) Σ_{s=1}^T F̃_s (1/√N) Σ_{i=1}^N ( e*_{is} e*_{iT} − E*(e*_{is} e*_{iT}) ) ‖²
    = (1/T) Σ_{s=1}^T ‖F̃_s‖² (1/N) Σ_{i=1}^N ẽ²_{is} ẽ²_{iT} Var(η_{is} η_{iT})
    ≤ Δ_η [ (1/N) Σ_{i=1}^N ẽ⁴_{iT} ]^{1/2} [ (1/N) Σ_{i=1}^N ( (1/T) Σ_{s=1}^T ‖F̃_s‖² ẽ²_{is} )² ]^{1/2} = O_P(1),

under our assumptions. Conditions A.4 and A.5 correspond to Gonçalves and Perron's (2014) Conditions B*(c) and B*(d), respectively. Finally, we prove Condition A.6 for t = T. Using the fact that e*_{iT} = ẽ_{iT} η_{iT}, where η_{iT} is i.i.d. across i, note that

  Γ*_T ≡ Var*( (1/√N) Σ_{i=1}^N λ̃_i e*_{iT} ) = (1/N) Σ_{i=1}^N λ̃_i λ̃_i′ ẽ²_{iT} →P Q Γ_T Q′,

by Bai (2003), where Γ_T ≡ lim_{N→∞} Var( (1/√N) Σ_{i=1}^N λ_i e_{iT} ) > 0 by assumption. Thus, Γ*_T is uniformly positive definite. We now need to verify that

  (1/√N) Σ_{i=1}^N l′ Γ*_T^{−1/2} λ̃_i e*_{iT} = Σ_{i=1}^N [ N^{−1/2} l′ Γ*_T^{−1/2} λ̃_i ẽ_{iT} η_{iT} ] ≡ Σ_{i=1}^N ω_i →d* N(0, 1), in probability,

for any l such that l′l = 1. Since {ω_i} is a heterogeneous array of independent random variables (η_{iT} being i.i.d. across i), we apply a CLT for heterogeneous independent arrays.
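The objects in this verification are directly computable. As a concrete illustration (not the paper's code: the simulated data, variable names, and the choice of Rademacher weights for η are ours), the following sketch forms Γ*_T = (1/N) Σ_i ẽ²_{iT} λ̃_i λ̃_i′, a symmetric inverse square root Γ*_T^{−1/2} via the spectral decomposition, and one wild bootstrap draw of the standardized score, whose conditional variance is I_r by construction:

```python
import numpy as np

def gamma_star(lam_tilde, e_tilde_T):
    """Gamma*_T = (1/N) sum_i e_tilde_iT^2 * lambda_i lambda_i'."""
    N = lam_tilde.shape[0]
    return (lam_tilde * (e_tilde_T ** 2)[:, None]).T @ lam_tilde / N

def inv_sqrt(G):
    """Symmetric inverse square root of a positive definite matrix,
    computed from its spectral decomposition G = V diag(w) V'."""
    w, V = np.linalg.eigh(G)
    return V @ np.diag(w ** -0.5) @ V.T

rng = np.random.default_rng(0)
N, r = 400, 2
lam_tilde = rng.standard_normal((N, r))   # stand-in for estimated loadings
e_tilde_T = rng.standard_normal(N)        # stand-in for period-T residuals

G = gamma_star(lam_tilde, e_tilde_T)
G_inv_half = inv_sqrt(G)

# one wild bootstrap draw: e*_iT = e_tilde_iT * eta_iT with eta i.i.d. (0, 1);
# Rademacher weights are one valid choice with all moments bounded
eta = rng.choice([-1.0, 1.0], size=N)
e_star = e_tilde_T * eta

# standardized score Gamma*^{-1/2} N^{-1/2} sum_i lambda_i e*_iT
score = G_inv_half @ (lam_tilde * e_star[:, None]).sum(axis=0) / np.sqrt(N)

print(np.round(G_inv_half @ G @ G_inv_half, 6))  # ≈ identity matrix I_r
```

Across bootstrap replications the score has conditional mean zero and variance exactly I_r, which is the standardization exploited in the CLT argument above.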

Note that E*(ω_i) = 0 and

  Var*( Σ_{i=1}^N ω_i ) = l′ Γ*_T^{−1/2} [ (1/N) Σ_{i=1}^N λ̃_i λ̃_i′ ẽ²_{iT} ] Γ*_T^{−1/2} l = l′ l = 1.

Thus, it suffices to verify Lyapunov's condition, i.e. for some r > 2,

  Σ_{i=1}^N E* |ω_i|^r →P 0.

We have that

  Σ_{i=1}^N E* |ω_i|^r ≤ N^{−r/2} ‖l‖^r ‖Γ*_T^{−1/2}‖^r Σ_{i=1}^N ‖λ̃_i ẽ_{iT}‖^r E|η_{iT}|^r
    ≤ C M N^{1−r/2} (1/N) Σ_{i=1}^N ‖λ̃_i ẽ_{iT}‖^r = O_P(N^{1−r/2}) = o_P(1),

where ‖Γ*_T^{−1/2}‖^r ≤ C and E|η_{iT}|^r ≤ M < ∞.

Proof of Condition B for the wild bootstrap. Γ*_T = (1/N) Σ_{i=1}^N λ̃_i λ̃_i′ ẽ²_{iT} →P Q Γ_T Q′, by Bai (2003).

The result for the studentized statistic, where we replace B* with an estimate B̂*, then follows by showing that ẑ_T′ Σ̂*_δ ẑ_T − ẑ_T′ Σ_δ ẑ_T →P* 0 and α̂*′ Σ̂*_F̃ α̂* − α̂′ Σ_F̃ α̂ →P* 0, in probability. This can be shown using the arguments in Bai and Ng (2006, Theorem 3) and Bai (2003).

Proof of Lemma 5.1. Recall that F_ε(x) = P(ε_t ≤ x) and define the following empirical distribution functions:

  F_{T,ε̂−ε̄}(x) = (1/(T−1)) Σ_{t=1}^{T−1} 1{ ε̂_{t+1} − ε̄ ≤ x }  and  F_{T,ε}(x) = (1/(T−1)) Σ_{t=1}^{T−1} 1{ ε_{t+1} ≤ x },

where ε̄ = (1/(T−1)) Σ_{t=1}^{T−1} ε̂_{t+1}. Note that F*_ε(x) ≡ P*(ε*_{t+1} ≤ x) = F_{T,ε̂−ε̄}(x). It follows that

  d₂(F_{T,ε̂−ε̄}, F_ε) ≤ d₂(F_{T,ε̂−ε̄}, F_{T,ε}) + d₂(F_{T,ε}, F_ε),

where d₂(F_{T,ε}, F_ε) = o(1) a.s. by Lemma 8.4 of Bickel and Freedman (1981). Thus, it suffices to show that d₂(F_{T,ε̂−ε̄}, F_{T,ε}) = o_P(1). Let I be distributed uniformly on {1, …, T−1} and define X = ε̂_{I+1} − ε̄ and Y = ε_{I+1}. We have that

  d₂²(F_{T,ε̂−ε̄}, F_{T,ε}) ≤ E*(X − Y)² = E_I(ε̂_{I+1} − ε̄ − ε_{I+1})²
    = (1/(T−1)) Σ_{t=1}^{T−1} (ε̂_{t+1} − ε_{t+1})² − 2 ε̄ (1/(T−1)) Σ_{t=1}^{T−1} (ε̂_{t+1} − ε_{t+1}) + ε̄²
    ≡ A₁ + A₂ + A₃.

We can write

  ε̂_{t+1} − ε_{t+1} = −(F̃_t − H F_t)′ α̂ − (Φ δ̂ − δ)′ z_t,

where Φ = diag(H, I_q). This implies that

  A₁ ≤ 2 ‖α̂‖² (1/(T−1)) Σ_{t=1}^{T−1} ‖F̃_t − H F_t‖² + 2 ‖Φ δ̂ − δ‖² (1/(T−1)) Σ_{t=1}^{T−1} ‖z_t‖² = O_P(δ_{NT}⁻²) + O_P(T⁻¹) = o_P(1).

Similarly,

  ε̄ = (1/(T−1)) Σ_{t=1}^{T−1} ε̂_{t+1} = (1/(T−1)) Σ_{t=1}^{T−1} (ε̂_{t+1} − ε_{t+1}) + (1/(T−1)) Σ_{t=1}^{T−1} ε_{t+1} = O_P(δ_{NT}⁻¹) + o_P(1),

where the first term is bounded by an argument similar to that used to bound A₁, via the Cauchy–Schwarz inequality. This implies that A₂ and A₃ are also o_P(1).
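The Mallows distance d₂ controlled in the lemma above is concrete: for two samples of equal length with equal point masses, the optimal coupling between their empirical distributions pairs order statistics, so d₂ is the root mean squared difference of the sorted samples. A small sketch (simulated residuals; names hypothetical):

```python
import numpy as np

def mallows_d2(x, y):
    """Mallows (Wasserstein-2) distance between the empirical distributions
    of two equal-length samples: the optimal coupling pairs order statistics,
    so d2 is the root mean squared gap between the sorted samples."""
    x, y = np.asarray(x), np.asarray(y)
    assert x.shape == y.shape
    return np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))

rng = np.random.default_rng(1)
eps = rng.standard_normal(500)                    # stand-in for the true errors
eps_hat = eps + 0.05 * rng.standard_normal(500)   # stand-in for the residuals
eps_hat_c = eps_hat - eps_hat.mean()              # recentered, as in the lemma

print(mallows_d2(eps, eps))        # 0.0 for identical samples
print(mallows_d2(eps_hat_c, eps))  # small when residuals are close to the errors
```

For a pure location shift, d₂(X, X + c) = |c|, which is why the centering term ε̄ has to be shown to vanish in the proof above.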

Proof of Corollary 5.1. Lemma 5.1 implies that P*(s*_{T+1} ≤ x) → F_ε(x σ_ε), in probability. Since P(s_{T+1} ≤ x) → F_ε(x σ_ε) and F_ε is everywhere continuous under Assumption 7, Polya's theorem implies the result.

References

[1] Bai, J., 2003. Inferential Theory for Factor Models of Large Dimensions, Econometrica, 71, 135-171.

[2] Bai, J. and S. Ng, 2006. Confidence Intervals for Diffusion Index Forecasts and Inference with Factor-Augmented Regressions, Econometrica, 74, 1133-1150.

[3] Bai, J. and S. Ng, 2013. Principal Components Estimation and Identification of Static Factors, Journal of Econometrics, 176, 18-29.

[4] Beran, R., 1987. Prepivoting to Reduce Level Error of Confidence Sets, Biometrika, 74, 457-468.

[5] Bickel, P. and D. Freedman, 1981. Some Asymptotic Theory for the Bootstrap, Annals of Statistics, 9, 1196-1217.

[6] Chang, Y. and J. Park, 2003. A Sieve Bootstrap for the Test of a Unit Root, Journal of Time Series Analysis, 24, 379-400.

[7] Djogbenou, A., S. Gonçalves, and B. Perron. Bootstrap Inference for Regressions with Estimated Factors and Serial Correlation, Journal of Time Series Analysis, forthcoming.

[8] Gonçalves, S. and B. Perron, 2014. Bootstrapping Factor-Augmented Regressions, Journal of Econometrics, 182, 156-173.

[9] Pan, L. and D. Politis. Bootstrap Prediction Intervals for Linear, Nonlinear and Nonparametric Autoregressions, Journal of Statistical Planning and Inference, forthcoming.

[10] Pascual, L., J. Romo, and E. Ruiz, 2001. Effects of Parameter Estimation on Prediction Densities: A Bootstrap Approach, International Journal of Forecasting, 17, 83-103.

[11] Politis, D., 2013. Model-free Model-fitting and Predictive Distributions (with discussion and rejoinder), Test, 22, 183-221.

[12] Stock, J. H. and M. W. Watson, 2002. Forecasting Using Principal Components from a Large Number of Predictors, Journal of the American Statistical Association, 97, 1167-1179.

[13] Wolf, M. and D. Wunderli, 2015. Bootstrap Joint Prediction Regions, Journal of Time Series Analysis, 36, 352-376.


More information

Efficient estimation using the Characteristic Function

Efficient estimation using the Characteristic Function 013s- Efficient estimation using the Characteristic Function Marine Carrasco, Rachidi Kotchoni Série Scientifique Scientific Series Montréal Juillet 013 013 Marine Carrasco, Rachidi Kotchoni. ous droits

More information

Statistics 910, #5 1. Regression Methods

Statistics 910, #5 1. Regression Methods Statistics 910, #5 1 Overview Regression Methods 1. Idea: effects of dependence 2. Examples of estimation (in R) 3. Review of regression 4. Comparisons and relative efficiencies Idea Decomposition Well-known

More information

Estimation of the Conditional Variance in Paired Experiments

Estimation of the Conditional Variance in Paired Experiments Estimation of the Conditional Variance in Paired Experiments Alberto Abadie & Guido W. Imbens Harvard University and BER June 008 Abstract In paired randomized experiments units are grouped in pairs, often

More information

Testing an Autoregressive Structure in Binary Time Series Models

Testing an Autoregressive Structure in Binary Time Series Models ömmföäflsäafaäsflassflassflas ffffffffffffffffffffffffffffffffffff Discussion Papers Testing an Autoregressive Structure in Binary Time Series Models Henri Nyberg University of Helsinki and HECER Discussion

More information

What s New in Econometrics. Lecture 13

What s New in Econometrics. Lecture 13 What s New in Econometrics Lecture 13 Weak Instruments and Many Instruments Guido Imbens NBER Summer Institute, 2007 Outline 1. Introduction 2. Motivation 3. Weak Instruments 4. Many Weak) Instruments

More information

Forecasting 1 to h steps ahead using partial least squares

Forecasting 1 to h steps ahead using partial least squares Forecasting 1 to h steps ahead using partial least squares Philip Hans Franses Econometric Institute, Erasmus University Rotterdam November 10, 2006 Econometric Institute Report 2006-47 I thank Dick van

More information

10. Time series regression and forecasting

10. Time series regression and forecasting 10. Time series regression and forecasting Key feature of this section: Analysis of data on a single entity observed at multiple points in time (time series data) Typical research questions: What is the

More information

ECON 4160, Autumn term Lecture 1

ECON 4160, Autumn term Lecture 1 ECON 4160, Autumn term 2017. Lecture 1 a) Maximum Likelihood based inference. b) The bivariate normal model Ragnar Nymoen University of Oslo 24 August 2017 1 / 54 Principles of inference I Ordinary least

More information

Using all observations when forecasting under structural breaks

Using all observations when forecasting under structural breaks Using all observations when forecasting under structural breaks Stanislav Anatolyev New Economic School Victor Kitov Moscow State University December 2007 Abstract We extend the idea of the trade-off window

More information

TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST

TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST Econometrics Working Paper EWP0402 ISSN 1485-6441 Department of Economics TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST Lauren Bin Dong & David E. A. Giles Department

More information

Single Equation Linear GMM with Serially Correlated Moment Conditions

Single Equation Linear GMM with Serially Correlated Moment Conditions Single Equation Linear GMM with Serially Correlated Moment Conditions Eric Zivot November 2, 2011 Univariate Time Series Let {y t } be an ergodic-stationary time series with E[y t ]=μ and var(y t )

More information

GMM-based inference in the AR(1) panel data model for parameter values where local identi cation fails

GMM-based inference in the AR(1) panel data model for parameter values where local identi cation fails GMM-based inference in the AR() panel data model for parameter values where local identi cation fails Edith Madsen entre for Applied Microeconometrics (AM) Department of Economics, University of openhagen,

More information

Diagnostics for the Bootstrap and Fast Double Bootstrap

Diagnostics for the Bootstrap and Fast Double Bootstrap Diagnostics for the Bootstrap and Fast Double Bootstrap Department of Economics and CIREQ McGill University Montréal, Québec, Canada H3A 2T7 by Russell Davidson russell.davidson@mcgill.ca AMSE-GREQAM Centre

More information

Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and non-standard asymptotics

Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and non-standard asymptotics 2005s-02 Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and non-standard asymptotics Jean-Marie Dufour Série Scientifique Scientific Series Montréal Août 2004

More information

A Course in Applied Econometrics Lecture 7: Cluster Sampling. Jeff Wooldridge IRP Lectures, UW Madison, August 2008

A Course in Applied Econometrics Lecture 7: Cluster Sampling. Jeff Wooldridge IRP Lectures, UW Madison, August 2008 A Course in Applied Econometrics Lecture 7: Cluster Sampling Jeff Wooldridge IRP Lectures, UW Madison, August 2008 1. The Linear Model with Cluster Effects 2. Estimation with a Small Number of roups and

More information

Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences

Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences Piotr Kokoszka 1, Gilles Teyssière 2, and Aonan Zhang 3 1 Mathematics and Statistics, Utah State University, 3900 Old Main

More information

Bootstrap Methods in Econometrics

Bootstrap Methods in Econometrics Bootstrap Methods in Econometrics Department of Economics McGill University Montreal, Quebec, Canada H3A 2T7 by Russell Davidson email: russell.davidson@mcgill.ca and James G. MacKinnon Department of Economics

More information

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models

A Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models Journal of Finance and Investment Analysis, vol.1, no.1, 2012, 55-67 ISSN: 2241-0988 (print version), 2241-0996 (online) International Scientific Press, 2012 A Non-Parametric Approach of Heteroskedasticity

More information

New Developments in Econometrics Lecture 16: Quantile Estimation

New Developments in Econometrics Lecture 16: Quantile Estimation New Developments in Econometrics Lecture 16: Quantile Estimation Jeff Wooldridge Cemmap Lectures, UCL, June 2009 1. Review of Means, Medians, and Quantiles 2. Some Useful Asymptotic Results 3. Quantile

More information