A New Asymptotic Theory for Heteroskedasticity-Autocorrelation Robust Tests

Nicholas M. Kiefer and Timothy J. Vogelsang
Departments of Economics and Statistical Science, Cornell University
April 2002, revised August 7, 2003

Abstract

A new first order asymptotic theory for heteroskedasticity-autocorrelation (HAC) robust tests based on nonparametric covariance matrix estimators is developed. The bandwidth of the covariance matrix estimator is modeled as a fixed proportion of the sample size. This leads to a distribution theory for HAC robust tests that explicitly captures the choice of bandwidth and kernel. This contrasts with the traditional asymptotics (where the bandwidth increases more slowly than the sample size), under which the asymptotic distributions of HAC robust tests do not depend on the bandwidth or kernel. Finite sample simulations show that the new approach is more accurate than the traditional asymptotics. The impact of bandwidth and kernel choice on the size and power of t-tests is analyzed. Smaller bandwidths lead to tests with higher power but greater size distortions, whereas larger bandwidths lead to tests with lower power but smaller size distortions. Size distortions across bandwidths increase as the serial correlation in the data becomes stronger. Overall, the results clearly indicate that for bandwidth and kernel choice there is a trade-off between size distortions and power. Finite sample performance using the new asymptotics is comparable to the bootstrap, which suggests that the asymptotic theory in this paper could be useful in understanding the theoretical properties of the bootstrap when applied to HAC robust tests.

Keywords: Covariance matrix estimator, inference, autocorrelation, truncation lag, prewhitening, generalized method of moments, functional central limit theorem, bootstrap.

Helpful comments provided by Cliff Hurvich, Andy Levin, Jeff Simonoff and seminar participants at NYU (Statistics), U. Texas Austin, the NBER/NSF Time Series Conference and the 2003 Winter Meetings of the Econometric Society are gratefully acknowledged. We thank Joel Horowitz and three referees for thoughtful comments that helped us improve the paper. We gratefully acknowledge financial support from the National Science Foundation through grant SES-0095211. We thank the Center for Analytic Economics at Cornell University. Corresponding Author: Tim Vogelsang, Department of Economics, Uris Hall, Cornell University, Ithaca, NY 14853-7601, Phone: 607-255-5108, Fax: 607-255-2818, email: tjv2@cornell.edu

1 Introduction

We provide a new and improved approach to the asymptotics of hypothesis testing in time series models with arbitrary, i.e. unspecified, serial correlation and heteroskedasticity. Our results are general enough to apply to stationary generalized method of moments (GMM) models. Heteroskedasticity and autocorrelation consistent (HAC) estimation and testing in these models involves calculating an estimate of the spectral density at zero frequency of the estimating equations or moment conditions defining the estimator. We focus on the class of nonparametric spectral density estimators,[1] which were originally proposed and analyzed in the time series statistics literature; see Priestley (1981) for the standard textbook treatment. Important contributions to the development of these estimators for covariance matrix estimation in econometrics include White (1984), Newey and West (1987), Gallant (1987), Gallant and White (1988), Andrews (1991), Andrews and Monahan (1992), Hansen (1992), Robinson (1998) and de Jong and Davidson (2000). We stress at the outset that we are not proposing new estimators or test statistics; rather, we focus on improving the asymptotic distribution theory for existing techniques. Our results, however, do provide some guidance on the choice of HAC estimator.

[1] An alternative to the nonparametric approach has been advocated by den Haan and Levin (1997, 1998). Following Berk (1974) and others in the time series statistics literature, they propose estimating the zero frequency spectral density parametrically using vector autoregression (VAR) models. They show that this parametric approach can achieve essentially the same generality as the nonparametric approach if the VAR lag length increases with the sample size at a suitable rate.

Conventional asymptotic theory for HAC estimators is well established and has proved useful in providing practical formulas for estimating asymptotic variances. The ingenious trick is the assumption that the variance estimator depends on a fraction of sample autocovariances, with the number of sample autocovariances going to infinity but the fraction going to zero as the sample size grows. Under this condition it has been shown that well-known HAC estimators of the asymptotic variance are consistent. Then the asymptotic distribution of estimated coefficients can essentially be derived assuming the variance is known. That is, the sampling bias and variance of the variance estimator do not appear in the first order asymptotic distribution theory of test statistics regarding parameters of interest. While this is an extremely productive simplifying assumption that leads to standard asymptotic distribution theory for tests, the accuracy of the resulting asymptotic theory is often less than satisfactory. In particular, there is a tendency for HAC robust tests to over-reject (sometimes substantially) under the null hypothesis in finite samples; see Andrews (1991), Andrews and Monahan (1992), and the July 1996 special issue of the Journal of Business and Economic Statistics for evidence.

There are two main sources of finite sample distortions. The first source is inaccuracy of the central limit theorem approximation to the sampling distribution of parameters of interest. This becomes a serious problem for data that have strong or persistent serial correlation. The second

source is the bias and sampling variability of the HAC estimate of the asymptotic variance. This second source is the focus of this paper. Simply appealing to a consistency result for the asymptotic variance estimator, as is done under the standard approach, does not capture these important small sample properties.

The assumption that the fraction of the sample autocovariances used in calculating the asymptotic variance goes to zero as the sample size goes to infinity is a clever technical assumption that substantially simplifies asymptotic calculations. However, in practice there is a given sample size and some fraction of sample autocovariances is used to estimate the asymptotic variance. Even if a practitioner chooses the fraction based on a rule such that the fraction goes to zero as the sample size grows, it does not change the fact that a positive fraction is being used for a particular data set. The implications of this simple observation have been eloquently summarized by Neave (1970, p. 70) in the context of spectral density estimation:

When proving results on the asymptotic behavior of estimates of the spectrum of a stationary time series, it is invariably assumed that as the sample size T tends to infinity, so does the truncation point M, but at a slower rate, so that M/T tends to zero. This is a convenient assumption mathematically in that, in particular, it ensures consistency of the estimates, but it is unrealistic when such results are used as approximations to the finite case where the value of M/T cannot be zero.

Based on this observation, Neave (1970) derived an asymptotic approximation for the sampling variance of spectral density estimates under the assumption that M/T is a constant and showed that his approximation was more accurate than the standard approximation.

In this paper, we effectively generalize the approach of Neave (1970) to zero frequency nonparametric spectral density estimators (HAC estimators). We derive the entire asymptotic distribution (rather than just the variance) of these estimators under the assumption that $M = bT$, where $b \in (0,1]$ is a constant.[2] We label asymptotics obtained under this nesting of the bandwidth fixed-b asymptotics. In contrast, under the standard asymptotics b goes to zero as T increases; therefore, we refer to the standard asymptotics as small-b asymptotics. We show that under fixed-b asymptotics, the HAC robust variance estimators converge to a limiting random matrix that is proportional to the unknown asymptotic variance and has a limiting distribution that depends on the kernel and b. Under the fixed-b asymptotics, HAC robust test statistics computed in the usual way are shown to have limiting distributions that are pivotal but depend on the kernel and b. This contrasts with small-b asymptotics, where the effects of the kernel and bandwidth are not captured.

[2] The approach we adopt here is a natural generalization of ideas explored by Kiefer, Vogelsang and Bunzel (2000), Bunzel, Kiefer and Vogelsang (2001), Kiefer and Vogelsang (2002a), Kiefer and Vogelsang (2002b) and Vogelsang (2000). These papers analyze HAC robust tests for the case of b = 1.

While the assumption that the proportion of sample autocovariances remains fixed as the sample size grows is a better reflection of practice, that alone does not justify the new asymptotic theory. In fact, our asymptotic theory leads to two important innovations for HAC robust testing.

First, because the fixed-b asymptotics explicitly captures the choice of bandwidth and kernel, a more accurate first order asymptotic approximation is obtained for HAC robust tests. Finite sample simulations reported here and in the working paper, Kiefer and Vogelsang (2002c), show that in many situations the fixed-b asymptotics provides a more accurate approximation than the standard small-b asymptotics. There is also theoretical evidence by Jansson (2002) showing that fixed-b asymptotics is more accurate than small-b asymptotics in Gaussian location models in the special case of the Bartlett kernel with b = 1. Jansson (2002) proves that fixed-b asymptotics delivers an error in rejection probability that is $O(T^{-1})$. This contrasts with small-b asymptotics, where the error in rejection probability is no smaller than $O(T^{-1/2})$ (see Velasco and Robinson (2001)).

Second, fixed-b asymptotic theory permits a local asymptotic power analysis for HAC robust tests that depends on the kernel and bandwidth. We can theoretically analyze how the choices of kernel and bandwidth affect the power of HAC robust tests. Such an analysis is not possible under the standard small-b asymptotics because local asymptotic power does not depend on the choice of kernel or bandwidth. Because of this fact, the existing HAC robust testing literature has focused instead on minimizing the asymptotic truncated MSE of the asymptotic variance estimators when choosing the kernel and bandwidth. For the analysis of HAC robust tests, this is not a completely satisfying situation, as noted by Andrews (1991, p. 828).[3]

An obvious alternative to asymptotic approximations is the bootstrap. Recently it has been shown by Hall and Horowitz (1996), Götze and Künsch (1996), Inoue and Shintani (2001) and others that higher order refinements to the small-b asymptotics are feasible when bootstrapping the distribution of HAC robust tests using blocking. Using finite sample simulations, we compare fixed-b asymptotics with the block bootstrap. Our results suggest some interesting properties of the bootstrap and indicate that fixed-b asymptotics may be a useful analytical tool for understanding variation in bootstrap performance across bandwidths. One result is that the bootstrap without blocking performs almost as well as the fixed-b asymptotics even when the data are dependent (in contrast, the small-b asymptotics performs relatively poorly). When blocking is used, the bootstrap can perform slightly better or slightly worse than fixed-b asymptotics depending on the choice of block length. It may be the case that the block bootstrap delivers an asymptotic refinement over the fixed-b first-order asymptotics if the block length is chosen in a suitable way. A higher order fixed-b asymptotic theory needs to be developed before such a result can be obtained.

Because we are approaching the distribution theory of HAC robust testing from a perspective that differs from the conventional approach, we think it is important to stress here that an

[3] Additional discussion of this point is given by Cushing and McGarvey (1999, p. 80).

important purpose of asymptotic theory is to provide approximations to sampling distributions. While sampling distributions can be obtained exactly under precise distributional assumptions by a change of variables, this approach often requires difficult if not impossible calculations that may be required on a case by case basis. A unifying approach, giving results applicable in a wide variety of settings, must involve approximations. The usual approach is to consider asymptotic approximations. Whether these are useful in any particular setting is a matter of how well the approximation mimics the exact sampling distribution. Although exact comparisons can be made in simple cases, this approximation is typically most efficiently assessed by finite sample Monte Carlo experiments, and that is our approach here. This point is emphasized by Barndorff-Nielsen and Cox (1989, p. ix):

The approximate arguments are developed by supposing that some defining quantity, often a sample size but more generally an amount of information, becomes large: it must be stressed that this is a technical device for generating approximations whose adequacy always needs assessing, rather than a physical limiting notion.

The remainder of the paper is organized as follows. Section 2 lays out the GMM framework and reviews standard results. Section 3 reports some small sample simulation results that illustrate the inaccuracies that can occur when using the small-b asymptotic approximation. Section 4 introduces the new fixed-b asymptotic theory. Section 5 analyzes the performance of the new asymptotic theory in terms of size distortions; the impact of the choice of bandwidth and kernel is analyzed and comparisons are made with the traditional small-b asymptotics and the block bootstrap. Section 6 discusses how the kernel and bandwidth affect local asymptotic power. Section 7 gives concluding comments. Proofs and some formulas are provided in two appendices.

The following notation is used throughout the paper. The symbol $\Rightarrow$ denotes weak convergence, $B_j(r)$ denotes a $j$-vector of standard Brownian motions (Wiener processes) defined on $r \in [0,1]$, $\widetilde{B}_j(r) = B_j(r) - rB_j(1)$ denotes a $j$-vector of standard Brownian bridges, and $[rT]$ denotes the integer part of $rT$ for $r \in [0,1]$.

2 Inference in GMM Models: The Standard Approach

We present our results in the GMM framework, noting that this covers estimating equation methods (Heyde, 1997). Since the influential work of Hansen (1982), GMM has been widely used in virtually every field of economics. Heteroskedasticity or autocorrelation of unknown form is often an important specification issue, especially in macroeconomic and financial applications. Typically the form of the correlation structure is not of direct interest (if it is, it should be modeled directly). What is desired is an inference procedure that is robust to the form of the heteroskedasticity and serial correlation. HAC covariance matrix estimators were developed for exactly this setting.

Consider the $p \times 1$ vector of parameters $\theta \in \Theta \subseteq \mathbb{R}^p$. Let $\theta_0$ denote the true value of $\theta$, and assume $\theta_0$ is an interior point of $\Theta$. Let $v_t$ denote a vector of observed data and assume that $q$ moment conditions hold that can be written as
\[ E[f(v_t, \theta_0)] = 0, \quad t = 1, 2, \dots, T, \tag{1} \]
where $f(\cdot)$ is a $q \times 1$ vector of functions with $q \ge p$ and $\operatorname{rank}(E[\partial f/\partial\theta']) = p$. The expectation is taken over the endogenous variables in $v_t$, and may be conditional on exogenous elements of $v_t$. There is no need in what follows to make this conditioning explicit in the notation. Define $g_t(\theta) = T^{-1}\sum_{j=1}^{t} f(v_j, \theta)$, so that $g_T(\theta) = T^{-1}\sum_{t=1}^{T} f(v_t, \theta)$ is the sample analog of (1). The GMM estimator is defined as
\[ \hat\theta_T = \arg\min_{\theta \in \Theta}\ g_T(\theta)' W_T g_T(\theta), \tag{2} \]
where $W_T$ is a $q \times q$ positive definite weighting matrix. Alternatively, $\hat\theta_T$ can also be defined as an estimating equations estimator, the solution to the $p$ first order conditions associated with (2):
\[ G_T(\hat\theta_T)' W_T g_T(\hat\theta_T) = 0, \tag{3} \]
where $G_t(\theta) = T^{-1}\sum_{j=1}^{t} \partial f(v_j, \theta)/\partial\theta'$. Of course, when the model is exactly identified and $q = p$, an exact solution to $g_T(\hat\theta_T) = 0$ is attainable and the weighting matrix $W_T$ is irrelevant. Application of the mean value theorem implies that
\[ g_t(\hat\theta_T) = g_t(\theta_0) + G_t(\hat\theta_T, \theta_0, \lambda_T)(\hat\theta_T - \theta_0), \tag{4} \]
where $G_t(\hat\theta_T, \theta_0, \lambda_T)$ denotes the $q \times p$ matrix whose $i$th row is the corresponding row of $G_t(\theta_T^{(i)})$, with $\theta_T^{(i)} = \lambda_{i,T}\theta_0 + (1-\lambda_{i,T})\hat\theta_T$ for some $0 \le \lambda_{i,T} \le 1$, and $\lambda_T$ is the $q \times 1$ vector with $i$th element $\lambda_{i,T}$.

In order to focus on the new asymptotic theory for tests, we avoid listing primitive assumptions and make rather high-level assumptions on the GMM estimator $\hat\theta_T$. Lists of sufficient conditions for these to hold can be found in Hansen (1982) and Newey and McFadden (1994). Our assumptions are:

Assumption 1 $\operatorname{plim} \hat\theta_T = \theta_0$.

Assumption 2 $T^{-1/2}\sum_{t=1}^{[rT]} f(v_t, \theta_0) = T^{1/2} g_{[rT]}(\theta_0) \Rightarrow \Lambda B_q(r)$, where $\Lambda\Lambda' = \Omega = \sum_{j=-\infty}^{\infty}\Gamma_j$ and $\Gamma_j = E[f(v_t, \theta_0) f(v_{t-j}, \theta_0)']$.

Assumption 3 $\operatorname{plim} G_{[rT]}(\hat\theta_T) = rG_0$ and $\operatorname{plim} G_{[rT]}(\hat\theta_T, \theta_0, \lambda_T) = rG_0$ uniformly in $r \in [0,1]$, where $G_0 = E[\partial f(v_t, \theta_0)/\partial\theta']$.

Assumption 4 $W_T$ is positive semi-definite and $\operatorname{plim} W_T = W$, where $W$ is a matrix of constants.

These assumptions hold in wide generality for the models seen in economics, and with the exception of Assumption 2 they are fairly standard. Assumption 2 requires that a functional central limit theorem hold for $T^{1/2} g_{[rT]}(\theta_0)$. This is stronger than the central limit theorem for $T^{1/2} g_T(\theta_0)$ that is required for asymptotic normality of $\hat\theta_T$. However, consistent estimation of the asymptotic variance of $\hat\theta_T$ requires an estimate of $\Omega$. Conditions for consistent estimation of $\Omega$ are typically stronger than Assumption 2 and often imply Assumption 2. For example, Andrews (1991) requires that $f(v_t, \theta_0)$ is a mean zero, fourth order stationary process that is $\alpha$-mixing. Phillips and Durlauf (1986) show that Assumption 2 holds under the weaker assumption that $f(v_t, \theta_0)$ is a mean zero, $2+\delta$ order stationary process (for some $\delta > 0$) that is $\alpha$-mixing. Thus our assumptions are slightly weaker than those usually given for asymptotic testing in HAC-estimated GMM models.

Under our assumptions $\hat\theta_T$ is asymptotically normally distributed, as recorded in the following lemma, which is proved in the appendix.

Lemma 1 Under Assumptions 1-4, as $T \to \infty$,
\[ T^{1/2}(\hat\theta_T - \theta_0) \Rightarrow (G_0' W G_0)^{-1} \Lambda^* B_p(1) \sim N(0, V), \]
where $\Lambda^*\Lambda^{*\prime} = G_0' W \Lambda\Lambda' W G_0$ and $V = (G_0' W G_0)^{-1} \Lambda^*\Lambda^{*\prime} (G_0' W G_0)^{-1}$.

Under the standard approach, a consistent estimator of $V$ is required for inference. Let $\hat\Omega$ denote an estimator of $\Omega$. Then $V$ can be estimated by
\[ \hat V = [G_T(\hat\theta_T)' W_T G_T(\hat\theta_T)]^{-1} G_T(\hat\theta_T)' W_T \hat\Omega W_T G_T(\hat\theta_T) [G_T(\hat\theta_T)' W_T G_T(\hat\theta_T)]^{-1}. \tag{5} \]
The HAC literature builds on the spectral density estimation literature to suggest feasible estimators of $\Omega$ and to find conditions under which such estimators are consistent. The widely used class of nonparametric estimators of $\Omega$ takes the form
\[ \hat\Omega = \sum_{j=-(T-1)}^{T-1} k(j/M)\hat\Gamma_j \tag{6} \]
with
\[ \hat\Gamma_j = T^{-1}\sum_{t=j+1}^{T} f(v_t, \hat\theta_T) f(v_{t-j}, \hat\theta_T)' \ \text{for } j \ge 0, \qquad \hat\Gamma_j = \hat\Gamma_{-j}' \ \text{for } j < 0,
\]

where $k(x)$ is a kernel function $k: \mathbb{R} \to \mathbb{R}$ satisfying $k(x) = k(-x)$, $k(0) = 1$, $|k(x)| \le 1$, $k(x)$ continuous at $x = 0$, and $\int k^2(x)\,dx < \infty$. Often $k(x) = 0$ for $|x| > 1$, so that $M$ trims the sample autocovariances and acts as a truncation lag. Some popular kernel functions do not truncate, and $M$ is often called a bandwidth parameter in those cases. For kernels that truncate, the cutoff at $x = 1$ is arbitrary and is essentially a normalization. For kernels that do not truncate, a normalization must be made, since the weights generated by kernel $k(x)$ with bandwidth $M$ are the same as those generated by kernel $k(ax)$ with bandwidth $aM$. Therefore, there is an interaction between bandwidth and kernel choice. We focus on kernels that yield a positive definite $\hat\Omega$ for the obvious practical reasons, although many of our theoretical results hold without this restriction.

Standard asymptotic analysis proceeds under the assumption that $M \to \infty$ and $M/T \to 0$ as $T \to \infty$, in which case $\hat\Omega$ is a consistent estimator. Thus $b \to 0$ as $T \to \infty$, and hence the label small-b asymptotics. Because in practical settings $b$ is strictly positive, the assumption that $b$ shrinks to zero has little to do with econometric practice; rather, it is an ingenious technical assumption allowing an estimable asymptotic approximation to the asymptotic distribution of $\hat\theta_T$ to be calculated. The difficulty in practice is that any choice of $M$ for a given sample size $T$ can be made consistent with the above rate requirement. Although the rate requirement can be refined if one is interested in minimizing the MSE of $\hat\Omega$ (e.g. $M$ must increase at rate $T^{1/3}$ for the Bartlett kernel), these refinements do not deliver specific choices for $M$. This fact has long been recognized in the spectral density and HAC literatures, and data dependent methods for choosing $M$ have been proposed; see Andrews (1991) and Newey and West (1994). These papers suggest choosing $M$ to minimize the truncated MSE of $\hat\Omega$. However, because the MSE of $\hat\Omega$ depends on the serial correlation structure of $f(v_t, \theta_0)$, the practitioner must estimate that serial correlation structure either nonparametrically or with an approximate parametric model. While data dependent methods are a significant improvement over the basic case for empirical implementation, the practitioner is still faced with either a choice of approximating parametric model or the choice of bandwidth in a preliminary nonparametric estimation problem. Even these bandwidth rules do not yield unique bandwidth choices in practice because, for example, if a bandwidth $M_A$ satisfies the optimality criterion of Andrews (1991), then the bandwidth $M_A + d$, where $d$ is a finite constant, is also optimal. See den Haan and Levin (1997) for details and additional practical challenges.

As the previous discussion makes clear, the standard approach addresses the choice of bandwidth by determining the rate at which $M$ must grow to deliver a consistent variance estimator so that the usual standard normal approximation can be justified. In contrast, for a given data set and sample size, we take the choice of $M$ as given and provide an asymptotic theory that reflects, to some extent, how that choice of $M$ affects the sampling distribution of the HAC robust test.
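To make the estimator in (6) concrete, the sketch below computes $\hat\Omega$ for a univariate series of estimated moment conditions using the Bartlett kernel. This is our own illustrative code, not from the paper; the names bartlett and hac_variance are ours, and the scalar case is shown for simplicity (the matrix case replaces products with outer products).

```python
import numpy as np

def bartlett(x):
    # Bartlett kernel: k(x) = 1 - |x| for |x| <= 1, zero otherwise
    return max(0.0, 1.0 - abs(x))

def hac_variance(f, M, kernel=bartlett):
    """Nonparametric HAC estimate of Omega, eq. (6), for a univariate
    series f of estimated moment conditions f(v_t, theta_hat)."""
    f = np.asarray(f, dtype=float)
    T = len(f)
    omega = np.dot(f, f) / T                  # Gamma_0
    for j in range(1, T):
        w = kernel(j / M)                     # weight k(j/M)
        if w == 0.0:
            continue                          # truncated lags contribute nothing
        gamma_j = np.dot(f[j:], f[:-j]) / T   # Gamma_j; Gamma_{-j} = Gamma_j here
        omega += 2.0 * w * gamma_j
    return omega
```

In the location model of the next section, for example, f would be the demeaned series $y_t - \bar y$, and the fixed-b ratio is simply $b = M/T$.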

3 Motivation: Finite Sample Performance of Standard Asymptotics

While the poor finite sample performance of HAC robust tests in the presence of strong serial correlation is well documented, it will be useful for later comparisons to illustrate some of these problems briefly in a simple environment. Consider the most basic univariate time series model with ARMA(1,1) errors,
\[ y_t = \mu + u_t, \qquad u_t = \rho u_{t-1} + \varepsilon_t + \varphi\varepsilon_{t-1}, \qquad \varepsilon_t \sim \text{i.i.d. } N(0,1), \qquad u_0 = \varepsilon_0 = 0. \tag{7} \]
Define the HAC robust t-statistic for $\mu$ as
\[ t = \frac{\sqrt{T}(\hat\mu - \mu)}{\sqrt{\hat\Omega}}, \]
where $\hat\mu = \bar y = T^{-1}\sum_{t=1}^{T} y_t$ and $\hat\Omega$ is computed using $f(y_t, \hat\mu) = y_t - \hat\mu$. We set $\mu = 0$, generated data according to (7) for a wide variety of parameter values, and computed empirical rejection probabilities of the t-statistic for testing the null hypothesis that $\mu \le 0$ against the alternative that $\mu > 0$. We report results for the sample size $T = 50$; 5,000 replications were used in all cases. To illustrate how well the standard normal approximation works as the bandwidth varies in this finite sample, we computed rejection probabilities for the t-statistic implemented using $M = 1, 2, 3, \dots, 49, 50$. We set the asymptotic significance level to 0.05 and used the usual standard normal critical value of 1.96 for all values of $M$. To conserve on space, we report results for six error configurations: i.i.d. errors ($\rho = \varphi = 0$), AR(1) errors with $\rho = 0.7, 0.9$, MA(1) errors with $\varphi = -0.5, 0.5$, and ARMA(1,1) errors with $\rho = 0.8$, $\varphi = -0.4$. We also give results where $\hat\Omega$ is implemented with AR(1) prewhitening. We report results for the popular Bartlett and quadratic spectral (QS) kernels; results for other kernels are similar.
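The following sketch shows how the empirical rejection probabilities for one error configuration can be computed. It is our own minimal version of the design just described (reusing the hac_variance helper above), not the authors' original code, and the parameter defaults are ours.

```python
def rejection_rate(T=50, M=10, rho=0.7, phi=0.0, reps=5000, crit=1.96, seed=0):
    """Empirical rejection probability of the HAC robust t-test for mu
    in model (7), judged against the standard normal critical value."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        eps = rng.standard_normal(T + 1)
        eps[0] = 0.0                              # epsilon_0 = 0 as in (7)
        u = np.zeros(T + 1)                       # u_0 = 0 as in (7)
        for t in range(1, T + 1):
            u[t] = rho * u[t - 1] + eps[t] + phi * eps[t - 1]
        y = u[1:]                                 # mu = 0 under the null
        f = y - y.mean()
        t_stat = np.sqrt(T) * y.mean() / np.sqrt(hac_variance(f, M))
        rejections += t_stat > crit
    return rejections / reps
```

Sweeping M from 1 to 50 with calls like rejection_rate(M=m, rho=0.7) traces out the kind of rejection probability curves plotted in Figures 1 and 2.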

The results are depicted in Figures 1 and 2. In each figure, the line labeled N(0,1) plots rejection probabilities when the critical value 1.96 is used and there is no prewhitening; in the case of prewhitening, the label is N(0,1), PW. The figures also depict rejection probabilities using the fixed-b asymptotic critical values, but for now we focus only on the results using the standard small-b asymptotics.

Consider first the case of i.i.d. errors. When $M$ is small, rejection probabilities are only slightly above 0.05, as one might expect for i.i.d. data. However, as $M$ increases, rejection probabilities steadily rise. When $M = T$, rejection probabilities are nearly 0.2 for the Bartlett kernel and exceed 0.25 for the QS kernel. Similar, although different, patterns occur for AR(1) errors. When $M$ is small, there are nontrivial over-rejection problems. When prewhitening is not used, the rejections fall as $M$ increases but then rise again as $M$ increases further. When prewhitening is used, rejection probabilities essentially increase as $M$ increases. Prewhitening reduces the extent of over-rejection but does not remove it. The patterns for MA(1) errors are similar to the i.i.d. case, and the patterns for ARMA(1,1) errors are a combination of the AR(1) and MA(1) cases.

These plots clearly illustrate a fundamental finite sample property of HAC robust tests: the choice of $M$ in a given sample matters and can greatly affect the extent of size distortion, depending on the serial correlation structure of the data. And, when $M$ is not very small, the choice of kernel also matters for the over-rejection problem. Given that the sampling distribution of the test depends on the bandwidth and kernel, it would seem to be a minimal requirement that the asymptotic approximation reflect this dependence. Whereas the standard small-b asymptotics is too crude for this purpose, our results in the next section show that the fixed-b asymptotics naturally captures the dependence on the bandwidth and kernel, and does so in a simple and elegant manner.[4]

Before moving on, it is useful to discuss in some detail the reason that rejection probabilities have the nonmonotonic pattern for AR(1) errors. When $M$ is small, $\hat\Omega$ is biased but has relatively small variance. Because the bias is downward, that leads to over-rejection. As $M$ initially increases, bias falls and variance rises. The bias effect is more important for over-rejection (see Simonoff (1993)), and the extent of over-rejection decreases. According to the conventional, but wrong, wisdom, the story would be that as $M$ increases further, bias continues to fall but variance increases so much that over-rejections become worse.[5] This is not what is happening. In fact, as $M$ increases further, bias eventually starts to increase and variance begins to fall. It is the increase in bias that leads to the large over-rejections when $M$ is large. The reason that bias increases and variance shrinks as $M$ gets large is easy to explain. When $M$ is large, high weights are placed on high order sample autocovariances. In the extreme case where full weight is placed on all of the sample autocovariances, it is well known that $\hat\Omega$ is identically equal to zero, and this occurs because $\hat\Omega$ is computed using residuals that have a sample average of zero. Obviously, $\hat\Omega = 0$ is an estimator with large bias and zero variance. Thus, as $M$ increases, $\hat\Omega$ is pushed closer to the full weight case.

[4] An alternative approach to the fixed-b asymptotics is to consider higher order asymptotic approximations in the small-b framework. For example, one could take the Edgeworth expansions from the recent work by Velasco and Robinson (2001) and obtain a second order asymptotic approximation for HAC robust tests that depends on the bandwidth and kernel. While this is a potentially fruitful approach, it is complicated by the need to estimate the second derivative of the spectral density. This estimate would introduce additional sampling variability that could offset gains in accuracy from using a higher order approximation. In contrast, the fixed-b asymptotic framework requires no additional estimates.

[5] This common misperception is an unfortunate result of folklore in the econometrics literature which states that as M increases, bias in $\hat\Omega$ decreases but variance increases. This folklore is quite misleading, although its source is easy to pinpoint. A careful reader of Priestley (1981) will repeatedly see the statement that as M increases, bias decreases but variance increases. This statement is completely correct if, as in Priestley (1981), one is discussing the properties of spectral density estimators at non-zero frequencies or, in the case of known mean zero data, at the zero frequency. However, it does not apply to zero frequency estimators computed using demeaned data, as is the case in GMM models. This fact is well known and has been nicely illustrated by Figure 2 in Ng and Perron (1996), where plots of the exact bias and variance of $\hat\Omega$ are given for AR(1) processes.

4 A New Asymptotic Theory

4.1 Distribution of $\hat\Omega$ in the Fixed-b Asymptotic Framework

We now develop a distribution theory for $\hat\Omega$ in the fixed-b asymptotic framework. We proceed under the asymptotic nesting that $M = bT$, where $b \in (0,1]$ is fixed. The limiting distribution of $\hat\Omega$ in the fixed-b asymptotic framework can be written in terms of $Q_i(b)$, an $i \times i$ random matrix that takes one of three forms depending on the second derivative of the kernel. The following definition gives the forms of $Q_i(b)$.

Definition 1 Let the $i \times i$ random matrix $Q_i(b)$ be defined as follows.

Case (i): if $k(x)$ is twice continuously differentiable everywhere,
\[ Q_i(b) = -\int_0^1\!\!\int_0^1 \frac{1}{b^2}\, k''\!\left(\frac{r-s}{b}\right) \widetilde B_i(r)\,\widetilde B_i(s)'\,dr\,ds. \]

Case (ii): if $k(x)$ is continuous, $k(x) = 0$ for $|x| \ge 1$, and $k(x)$ is twice continuously differentiable everywhere except at $|x| = 1$,
\[ Q_i(b) = -\iint_{|r-s|<b} \frac{1}{b^2}\, k''\!\left(\frac{r-s}{b}\right) \widetilde B_i(r)\,\widetilde B_i(s)'\,dr\,ds + \frac{k'_-(1)}{b}\int_0^{1-b}\!\left(\widetilde B_i(r+b)\,\widetilde B_i(r)' + \widetilde B_i(r)\,\widetilde B_i(r+b)'\right)dr, \]
where $k'_-(1) = \lim_{h\to0}\,[(k(1) - k(1-h))/h]$, i.e. $k'_-(1)$ is the derivative of $k(x)$ from the left at $x = 1$.

Case (iii): if $k(x)$ is the Bartlett kernel (see the formula appendix),
\[ Q_i(b) = \frac{2}{b}\int_0^1 \widetilde B_i(r)\,\widetilde B_i(r)'\,dr - \frac{1}{b}\int_0^{1-b}\!\left(\widetilde B_i(r+b)\,\widetilde B_i(r)' + \widetilde B_i(r)\,\widetilde B_i(r+b)'\right)dr. \]

The moments of $Q_i(1)$ have been derived by Phillips, Sun and Jin (2003) for the case of positive definite kernels. Hashimzade, Kiefer and Vogelsang (2003) have generalized those results to $Q_i(b)$ while also relaxing the positive definiteness requirement. The moments are
\[ E(Q_i(b)) = I_i \iint \left(1 - k\!\left(\frac{r-s}{b}\right)\right) dr\,ds, \qquad \operatorname{var}(\operatorname{vec}(Q_i(b))) = \nu(b)\left(I_{i^2} + \kappa_{ii}\right), \]
\[ \nu(b) = \iint k\!\left(\frac{r-s}{b}\right)^2 dr\,ds \;-\; 2\iiint k\!\left(\frac{r-q}{b}\right) k\!\left(\frac{s-q}{b}\right) dr\,ds\,dq \;+\; \left(\iint k\!\left(\frac{r-s}{b}\right) dr\,ds\right)^2, \]
where $\kappa_{ii}$ is the standard commutation matrix[6] and the range of all integrals is 0 to 1.

Using these moment results, Hashimzade et al. (2003) prove that $\lim_{b\to0} E(Q_i(b)) = I_i$ and $\lim_{b\to0} \operatorname{var}(\operatorname{vec}(Q_i(b))) = 0$. As an illustrative example, consider the Bartlett kernel, where
\[ E(Q_i(b)) = I_i\left(1 - b + \tfrac{1}{3}b^2\right), \qquad \nu(b) = \tfrac{2}{3}b - \tfrac{7}{6}b^2 + \tfrac{7}{15}b^3 + \tfrac{1}{9}b^4 \quad \text{for } b \le \tfrac12. \]

We first consider the asymptotic distribution of $\hat\Omega$ for the case of exactly identified models.

Theorem 1 (Exactly Identified Models) Suppose that $q = p$. Let $M = bT$, where $b \in (0,1]$ is fixed, and let $Q_p(b)$ be given by Definition 1 for $i = p$. Then, under Assumptions 1-4, as $T \to \infty$, $\hat\Omega \Rightarrow \Lambda Q_p(b)\Lambda'$.

Several useful observations can be made regarding this theorem. Under fixed-b asymptotics, $\hat\Omega$ converges to a matrix of random variables (rather than constants) that is proportional to $\Omega$ through $\Lambda$ and $\Lambda'$. This contrasts with the small-b asymptotic approximation, where $\hat\Omega$ is approximated by the constant $\Omega$. As $b \to 0$, it follows from the limiting moments of $Q_p(b)$ that $\operatorname{plim}_{b\to0} \Lambda Q_p(b)\Lambda' = \Omega$; thus the fixed-b asymptotics coincides with the standard small-b asymptotics as $b$ goes to zero. The advantage of the fixed-b asymptotic result is that the limit of $\hat\Omega$ depends on the kernel through $k''(x)$ and $k'_-(1)$ and on the bandwidth through $b$, but is otherwise free of nuisance parameters. Therefore, it is possible to obtain a first order asymptotic distribution theory that explicitly captures the choice of kernel and bandwidth. Under fixed-b asymptotics, any choice of bandwidth leads to asymptotically pivotal tests of hypotheses regarding $\theta_0$ when using $\hat\Omega$ to construct standard errors (details are given below). Note that Theorem 1 generalizes results obtained by Kiefer and Vogelsang (2002a) and Kiefer and Vogelsang (2002b), where the focus was $b = 1$.

When $q > p$ and the model is overidentified, the limiting expressions for $\hat\Omega$ are more complicated and asymptotic proportionality to $\Omega$ no longer holds. This was established for the special case of $b = 1$ by Vogelsang (2000). This does not mean, however, that valid testing is impossible when using $\hat\Omega$ in overidentified models, because the required asymptotic proportionality does hold for $G_T(\hat\theta_T)' W_T \hat\Omega W_T G_T(\hat\theta_T)$, the middle term in $\hat V$. The following theorem provides the relevant result.

Theorem 2 (Overidentified Models) Suppose that $q > p$. Let $M = bT$, where $b \in (0,1]$ is fixed, and let $Q_p(b)$ be given by Definition 1 for $i = p$. Define $\Lambda^* = G_0' W \Lambda$. Then, under Assumptions 1-4, as $T \to \infty$,
\[ G_T(\hat\theta_T)' W_T \hat\Omega W_T G_T(\hat\theta_T) \Rightarrow \Lambda^* Q_p(b)\Lambda^{*\prime}. \]

[6] The standard notation for the commutation matrix is usually $K_{ii}$ (see Magnus and Neudecker (1999, p. 46)). We use alternative notation because $K_{ij}$ is used for a different matrix in our proofs.

This theorem shows that $G_T(\hat\theta_T)' W_T \hat\Omega W_T G_T(\hat\theta_T)$ is asymptotically proportional to $\Lambda^*\Lambda^{*\prime}$ and otherwise depends only on the random matrix $Q_p(b)$. It directly follows that $\hat V$ is asymptotically proportional to $V$, and asymptotically pivotal tests can be obtained.

4.2 Inference

We now examine the limiting null distributions of tests regarding $\theta_0$ under fixed-b asymptotics. Consider the hypotheses
\[ H_0: r(\theta_0) = 0, \qquad H_1: r(\theta_0) \ne 0, \]
where $r(\theta)$ is an $m \times 1$ vector ($m \le p$) of continuously differentiable functions with first derivative matrix $R(\theta) = \partial r(\theta)/\partial\theta'$. Applying the delta method to Lemma 1, we obtain
\[ T^{1/2} r(\hat\theta_T) \Rightarrow R(\theta_0) V^{1/2} B_p(1) \sim N(0, V_R), \tag{8} \]
where $V_R = R(\theta_0) V R(\theta_0)'$. Using (8), one can construct the standard HAC robust Wald test of the null hypothesis, or a t-test in the case of $m = 1$. To remain consistent with earlier work, we consider the F-test version of the Wald statistic, defined as
\[ F = T\, r(\hat\theta_T)' \left( R(\hat\theta_T)\,\hat V\, R(\hat\theta_T)' \right)^{-1} r(\hat\theta_T)/m. \]
When $m = 1$ the usual t-statistic can be computed as
\[ t = \frac{T^{1/2} r(\hat\theta_T)}{\sqrt{R(\hat\theta_T)\,\hat V\, R(\hat\theta_T)'}}. \]
Often the significance of individual coefficients is of interest, which leads to t-statistics of the form
\[ t = \frac{\hat\theta_i}{se(\hat\theta_i)}, \]
where $se(\hat\theta_i) = \sqrt{T^{-1}\hat V_{ii}}$ and $\hat V_{ii}$ is the $i$th diagonal element of the $\hat V$ matrix. To avoid any confusion, please note that these statistics are computed in exactly the same way as under the standard approach; only the asymptotic approximation to the sampling distribution is different.

Note that some kernels, including the Tukey-Hanning, allow negative variance estimates. In this case some convention must be adopted in calculating the denominator of the test statistics.

Equally arbitrary conventions include reflection of negative values through the origin or setting negatives to a small positive value. Although our results apply to kernels that are not positive definite, we see no merit in using a kernel allowing negative estimated variances absent a compelling argument in a specific case. Nevertheless, we have experimented with the Tukey-Hanning and trapezoid kernels, and results not reported here do not support their consideration over a kernel guaranteeing positive variance estimates.

The following theorem provides the asymptotic null distributions of F and t.

Theorem 3 Let $b \in (0,1]$ be a constant and suppose $M = bT$. Let $Q_m(b)$ be given by Definition 1 for $i = m$. Then, under Assumptions 1-4 and $H_0$, as $T \to \infty$,
\[ F \Rightarrow B_m(1)' Q_m(b)^{-1} B_m(1)/m, \qquad t \Rightarrow \frac{B_1(1)}{\sqrt{Q_1(b)}}. \]

Theorem 3 shows that under fixed-b asymptotics, asymptotically pivotal tests are obtained and the asymptotic distributions reflect the choices of kernel and bandwidth. This contrasts with asymptotic results under the standard approach, where F would have a limiting $\chi^2_m/m$ distribution and t a limiting $N(0,1)$ distribution regardless of the choice of $M$ and $k(x)$. It is natural to expect that the fixed-b asymptotics provides a more accurate approximation in finite samples than the traditional asymptotics. As shown by the corollary to Theorem 2, as $b \to 0$, $\operatorname{plim} Q_m(b) = I_m$ and the fixed-b asymptotics reduces to the standard small-b asymptotics. Therefore, if a traditional bandwidth rule is used in conjunction with the fixed-b asymptotics, in large samples the two asymptotic theories will coincide. However, because the value of $b$ is strictly greater than zero in practice, it is natural to expect the fixed-b asymptotics to deliver a more accurate approximation. The simulation results reported in Section 5 and in the working paper, Kiefer and Vogelsang (2002c), indicate that this is true in some simple Gaussian models. A theoretical comparison of the accuracy of the fixed-b asymptotics with the small-b asymptotics is not currently available because existing methods for higher order asymptotic expansions, such as Edgeworth expansions, do not directly apply to the fixed-b asymptotic nesting, given the nonstandard nature of the distribution theory. Obtaining such theoretical results appears very difficult, although it may be possible to obtain bounds on the rates at which the error in rejection probability shrinks using the lines of argument in Jansson (2002). Jansson (2002) showed that for a Gaussian location model the error in rejection probability of F is $O(T^{-1})$ when $M = T$ ($b = 1$) and the Bartlett kernel is used. While it remains an open question whether this result holds more generally, our simulations are consistent with this possibility. Obviously, this is an area of current research.
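Since the limits in Theorem 3 are functionals of Brownian motion, their quantiles can be simulated directly. The sketch below is our own code (with defaults deliberately coarser than the simulation settings used in the paper, so it runs in reasonable time): it approximates $B_1$ by normalized partial sums of i.i.d. normals, so that the location-model t-statistic with $M = bT$ converges to $B_1(1)/\sqrt{Q_1(b)}$, and its simulated quantiles can serve as fixed-b critical values for the Bartlett kernel.

```python
def fixed_b_critical_value(b, level=0.95, T=500, reps=10000, seed=1):
    """Simulate the fixed-b null distribution of t in Theorem 3 (Bartlett
    kernel) and return its `level` quantile. With i.i.d. N(0,1) data the
    location-model t-stat with M = bT converges to B_1(1)/sqrt(Q_1(b))."""
    rng = np.random.default_rng(seed)
    M = max(b * T, 1.0)
    stats = np.empty(reps)
    for r in range(reps):
        y = rng.standard_normal(T)
        f = y - y.mean()
        stats[r] = np.sqrt(T) * y.mean() / np.sqrt(hac_variance(f, M))
    return np.quantile(stats, level)
```

For example, fixed_b_critical_value(1.0, 0.975) gives (up to simulation error) the 97.5% fixed-b critical value for the Bartlett kernel with M = T, which is far above the standard normal value of 1.96.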

4.3 Asymptotic Critical Values

The limiting distributions given by Theorem 3 are nonstandard. Analytical forms of the densities are not available, with the exception of t for the case of the Bartlett kernel with b = 1 (see Abadir and Paruolo, 2002, and Kiefer and Vogelsang, 2002b). However, because the limiting distributions are simple functions of standard Brownian motions, critical values are easily obtained using simulations. In the working paper we provide critical values for the t statistic for a selection of popular kernels (see the formula appendix for the kernel formulas). Additional critical values for the F test will be made available in a follow-up paper. To make the use of the fixed-b asymptotics easy for practitioners, we provide critical value functions for the t statistic using the cubic equation
\[ cv(b) = a_0 + a_1 b + a_2 b^2 + a_3 b^3. \]
For a selection of well known kernels, we computed the $cv(b)$ function for the percentage points 90%, 95%, 97.5% and 99%. Critical values for the left tail follow by symmetry around zero. The $a_i$ coefficients are given in Table 1. They were obtained as follows. For each kernel and the grid $b = 0.02, 0.04, \dots, 0.98, 1.0$, critical values were calculated via simulation methods using 50,000 replications. Normalized partial sums of 1,000 i.i.d. N(0,1) random deviates were used to approximate the standard Brownian motions in the respective distributions given by Theorem 3. For each percentage point, the simulated critical values were used to fit the $cv(b)$ function by OLS, with the intercept constrained to yield the standard normal critical value so that $cv(0)$ coincides with the standard asymptotics. Table 1 also reports the $R^2$ from the regressions; the fits are excellent in all cases ($R^2$ ranging from 0.9825 to 0.9996).
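A sketch of this fitting step is given below. It is our own code, reusing the fixed_b_critical_value helper from the previous section; the grid and the constrained-intercept OLS mirror the description above, with z denoting the standard normal critical value for the chosen percentage point. (With the paper's finer simulation settings this loop is slow; the code is shown for clarity, not speed.)

```python
def fit_cv_function(level=0.95, z=1.645, grid=np.arange(0.02, 1.01, 0.02)):
    """Fit cv(b) = a0 + a1*b + a2*b^2 + a3*b^3 by OLS to simulated fixed-b
    critical values, constraining a0 = z so cv(0) matches the standard
    normal critical value."""
    cvs = np.array([fixed_b_critical_value(b, level) for b in grid])
    X = np.column_stack([grid, grid**2, grid**3])  # no intercept: a0 fixed at z
    coefs, *_ = np.linalg.lstsq(X, cvs - z, rcond=None)
    return np.concatenate([[z], coefs])            # (a0, a1, a2, a3)
```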

5 Choice of Kernel and Bandwidth: Performance

In this section we analyze the effect of kernel and bandwidth choice on the performance of HAC robust tests. We focus on the accuracy of the asymptotic approximation under the null and on local asymptotic power. As far as we know, our analysis is the first to explore theoretically the effects of kernel and bandwidth choice on the power of HAC robust tests.

5.1 Accuracy of the Asymptotic Approximation under the Null and Comparison with the Bootstrap

The way to evaluate the accuracy of an asymptotic approximation to a null distribution, or indeed any approximation, is to compare the approximate distribution to the exact distribution. Sometimes this can be done analytically; more commonly the comparison can be made by simulation. We argued above that our approximation to the distribution of HAC robust tests was likely to be better than the usual approximation, since ours takes into account the randomness in the estimated variance. However, as noted, that argument is unconvincing in the absence of evidence on the approximation's performance. We provide results for two popular positive definite kernels: Bartlett and QS. Results for the Parzen, Bohman and Daniell kernels are similar and are not reported here. The working paper, Kiefer and Vogelsang (2002c), contains additional finite sample simulation results that are similar to what is reported here. The simulations were based on the simple location model (7) with the same design as described previously.

Figures 1 and 2 provide plots of empirical rejection probabilities using the fixed-b asymptotics. Compared to the standard small-b asymptotics, the results are striking. In nearly all cases, the tendency to over-reject is substantially reduced. The exceptions are for small values of M, where the fixed-b asymptotics shows only small improvements; but this is to be expected given that the two asymptotic theories coincide as b goes to 0. In the case of MA errors, fixed-b asymptotics gives tests with rejection probabilities close to 0.05. In the case of AR errors, over-rejections decrease as M increases. This pattern is consistent with Hashimzade and Vogelsang (2003), who found in finite sample simulations that the fixed-b asymptotic approximation for $\hat\Omega$ improves as M increases. Intuitively, the fixed-b asymptotics is capturing the downward bias in $\hat\Omega$ that can be substantial when M is large. As M increases, the tendency to over-reject falls. The QS kernel delivers tests with smaller size distortions than the Bartlett kernel, especially when M is large. The results in Figures 1 and 2 strongly suggest that the use of fixed-b asymptotic critical values greatly improves the performance of HAC robust t-tests. The improvement occurs because fixed-b asymptotics captures much of the bias and sampling variability induced by the kernel and bandwidth.

In the case of i.i.d. errors, the fixed-b asymptotics is remarkably accurate. As serial correlation in the errors becomes stronger, the tendency to over-reject appears. The reason is that the functional central limit theorem approximation begins to erode. Therefore, there may be potential for the bootstrap to provide further refinements in accuracy. The fixed-b asymptotics suggests that the bootstrap could work because the HAC robust tests are asymptotically pivotal for any kernel and bandwidth. Ideally, one would like to establish theoretically whether or not the bootstrap provides an asymptotic refinement. Such a theoretical exercise is well beyond the scope of this paper because higher order expansions for fixed-b asymptotics have not been developed. Obtaining such expansions is an important topic that deserves attention.

To show the potential of the bootstrap for HAC robust tests, we expanded our simulation study to include the block bootstrap, implemented following Götze and Künsch (1996) as follows. The original data $y_1, y_2, \dots, y_T$ are divided into $T - l + 1$ overlapping blocks of length $l$. For simplicity, we report results for $l = 1, 4, 10$, all of which factor into $T = 50$.

Then $T/l$ blocks are drawn randomly with replacement to form a bootstrap sample labeled $y_1^*, y_2^*, \dots, y_T^*$. The bootstrap t statistic is computed in two ways. The first statistic, labelled the naive bootstrap test, is defined as
\[ t^*_{Naive} = \frac{\sqrt{T}(\hat\mu^* - \bar y)}{\sqrt{\hat\Omega^*}}, \]
where $\hat\mu^* = \bar y^* = T^{-1}\sum_{t=1}^{T} y_t^*$ and $\hat\Omega^*$ is computed using formula (6) with $f(y_t^*, \hat\mu^*) = \hat u_t^* = y_t^* - \hat\mu^*$. The second statistic, labelled the Götze-Künsch (GK) bootstrap test, is defined as
\[ t^*_{GK} = \frac{\sqrt{T}(\hat\mu^* - \tilde y)}{\sqrt{\tilde\Omega^*}}, \]
where
\[ \tilde y = (T - l + 1)^{-1}\sum_{j=0}^{T-l} l^{-1}\sum_{i=1}^{l} y_{j+i}, \qquad \tilde\Omega^* = T^{-1}\sum_{j=1}^{T/l}\left(\sum_{i=1}^{l}\hat u^*_{l(j-1)+i}\right)^2. \]
The centering and standardization of the GK test are required for the bootstrap to attain second order accuracy under small-b asymptotics.

Empirical rejection probabilities using the bootstrap critical values are reported in Figures 1, 2, 3 and 4. The label NBoot refers to the naive bootstrap and the label GKBoot refers to the GK bootstrap. In Figures 1 and 2 the block length is 4; in Figures 3 and 4, results are reported for the naive bootstrap for block lengths 1, 4 and 10. There are some very compelling patterns in the figures. In Figures 1 and 2 it is obvious that the GK bootstrap provides a more accurate approximation than the standard normal approximation, although for AR(1) errors the prewhitened standard normal approximation outperforms the GK bootstrap. On the other hand, the GK bootstrap is inferior to the fixed-b asymptotic approximation except when b is very close to zero (M = 1, 2), in which case differences are small. These simulation results suggest that while the GK bootstrap provides the expected asymptotic refinement over the standard normal approximation, it does not approach the accuracy of the fixed-b asymptotics when M is not very small. This is not that surprising given that the theoretical results for the GK bootstrap were obtained under the small-b asymptotic nesting. It is well known that the GK bootstrap does not attain second order correctness when the Bartlett kernel is used; yet the simulations suggest that the GK bootstrap appears to provide refinements when the Bartlett kernel is used. Contrary to theoretical results by Davison and Hall (1993) in the bootstrap literature, the naive bootstrap performs quite well and yields rejection probabilities that are very similar to the fixed-b asymptotics. It appears that the naive bootstrap may provide a higher order refinement over the fixed-b asymptotics, although in the case of AR(1) errors, prewhitening with fixed-b asymptotics is more accurate than the naive bootstrap.
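As a concrete illustration, the sketch below implements the naive bootstrap version of the test. This is our own code, reusing the hac_variance helper from Section 2; it assumes l divides T so that T/l whole blocks reproduce the sample length.

```python
def naive_bootstrap_t(y, M, l=4, boot_reps=999, seed=2):
    """Naive overlapping-block bootstrap distribution of the HAC robust
    t-statistic: draw T/l length-l blocks with replacement from the
    T - l + 1 overlapping blocks, recentering at the sample mean."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    T = len(y)
    y_bar = y.mean()
    blocks = np.array([y[j:j + l] for j in range(T - l + 1)])
    t_boot = np.empty(boot_reps)
    for r in range(boot_reps):
        idx = rng.integers(0, T - l + 1, size=T // l)
        y_star = blocks[idx].ravel()          # bootstrap sample y*_1, ..., y*_T
        f_star = y_star - y_star.mean()
        t_boot[r] = (np.sqrt(len(y_star)) * (y_star.mean() - y_bar)
                     / np.sqrt(hac_variance(f_star, M)))
    return t_boot  # compare the sample t-statistic with quantiles of t_boot
```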

Regardless of the data generating process or kernel, it is striking how closely the naive bootstrap follows the fixed-b asymptotics while the GK bootstrap closely follows the standard small-b asymptotics. Clearly, there are some very interesting properties of the bootstrap that warrant additional research. It appears that fixed-b asymptotics could play an important role in understanding the strong performance of the naive bootstrap in spite of the negative theoretical results obtained in the literature. Our conjecture is that while the GK bootstrap nails the second order term of the small-b Edgeworth expansion, the Edgeworth expansion itself does not provide a very accurate approximation when the bandwidth is not small. On the other hand, we conjecture that the naive bootstrap may capture a second order term in the fixed-b asymptotics. We hope these interesting puzzles will stimulate new theoretical work in the bootstrap literature.

Finally, we report some results in Figures 3 and 4 illustrating the sensitivity of the naive bootstrap to the choice of block length. For AR(1) processes, a block length of 10 does fairly well, almost as well as the prewhitened test with fixed-b critical values. In contrast, for MA(1) errors, a large block length can cause the bootstrap to perform more poorly than the fixed-b asymptotics.

6 Local Asymptotic Power

Whereas the existing HAC literature has almost exclusively focused on the MSE of the HAC estimator to guide the choice of kernel and bandwidth, a more relevant metric is the power of the resulting tests. In this section we compare the power of HAC robust t-tests using a local asymptotic power analysis within the fixed-b asymptotic framework. Our analysis permits comparison of power across bandwidths and across kernels. Such a comparison is not possible using the traditional first order small-b asymptotics because local asymptotic power is the same for all bandwidths and kernels. For clarity, we restrict attention to linear regression models. Given the results in Theorem 1, the derivations in this section are very simple extensions of results given by Kiefer and Vogelsang (2002a); therefore, details are kept to a minimum.

Consider the regression model
\[ y_t = x_t'\theta_0 + u_t \tag{9} \]
with $\theta_0$ and $x_t$ $p \times 1$ vectors. In terms of the general model we have $f(v_t, \theta_0) = x_t(y_t - x_t'\theta_0)$. Without loss of generality, we focus on $\theta_{i0}$, one element of $\theta_0$, and consider the null and alternative hypotheses
\[ H_0: \theta_{i0} = 0, \qquad H_1: \theta_{i0} = cT^{-1/2}, \]
where $c > 0$ is a constant. If the regression model satisfies Assumptions 1 through 4, then we can use the results of Theorem 1 and results from Kiefer and Vogelsang (2002a) to easily establish that under the local alternative $H_1$, as $T \to \infty$,
\[ t \Rightarrow \frac{\delta + B_1(1)}{\sqrt{Q_1(b)}}, \tag{10} \]
where $\delta = c/\sqrt{V_{ii}}$, $V_{ii}$ is the $i$th diagonal element of $V$, and $Q_1(b)$ is given by Definition 1 for $i = 1$. Asymptotic power curves can be computed for given bandwidths and kernels by simulating the asymptotic distribution of $t$ based on (10) for a range of values of $\delta$ and computing rejection probabilities with respect to the relevant null critical value.
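The sketch below (our own code, reusing the helpers introduced earlier) illustrates such a power computation for the Bartlett kernel: adding $\delta$ to the simulated numerator reproduces the limit in (10), and the rejection rate against the simulated 5% fixed-b critical value approximates one point on a local asymptotic power curve.

```python
def local_power(delta, b, T=500, reps=10000, seed=3):
    """Local asymptotic power from (10) for the Bartlett kernel: simulate
    (delta + B_1(1))/sqrt(Q_1(b)) and reject when it exceeds the 5%
    one-sided fixed-b critical value for the same b."""
    crit = fixed_b_critical_value(b, level=0.95)
    rng = np.random.default_rng(seed)
    M = max(b * T, 1.0)
    rejections = 0
    for _ in range(reps):
        y = rng.standard_normal(T)
        f = y - y.mean()
        t_stat = (delta + np.sqrt(T) * y.mean()) / np.sqrt(hac_variance(f, M))
        rejections += t_stat > crit
    return rejections / reps
```

Evaluating local_power over a grid of delta values for several b traces out power curves of the kind reported below.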

Using the same simulation methods as for the asymptotic critical values, local asymptotic power was computed for $\delta = 0, 0.2, 0.4, \dots, 4.8, 5.0$ using 5% asymptotic null critical values. The power results are reported in two ways: Figures 5-12 plot power across kernels for a given value of b, and Figures 13-17 plot power across values of b for a given kernel.

Figures 5-12 show that for small bandwidths, power is essentially the same across kernels. As b increases, it becomes clear that the Bartlett kernel has the highest power while the QS and Daniell kernels have the lowest power. If power is the criterion used to choose a test, then the Bartlett kernel is the best choice within this set of five kernels. If we compare the Bartlett and QS kernels, we see that the power ranking of these kernels is the reverse of their ranking based on accuracy of the asymptotic approximation under the null. Figures 13-17 show how the choice of bandwidth affects power. Regardless of the kernel, power is highest for small bandwidths and lowest for large bandwidths: power is decreasing in b. These figures also show that the power of the Bartlett kernel is least sensitive to b, whereas the power of the QS and Daniell kernels is the most sensitive to b. Again, power rankings across values of b are the opposite of rankings based on accuracy of the asymptotic approximation under the null.

7 Conclusions

We have provided a new approach to the asymptotic theory of HAC robust testing. We consider tests based on the popular kernel based nonparametric estimates of the standard errors. We are not proposing new tests; rather, we propose a new asymptotic theory for these well known tests. Our results are general enough to apply to stationary models estimated by GMM. In our approach, b, the ratio of bandwidth to sample size, is held constant when deriving the asymptotic behavior of the relevant covariance matrix estimator (i.e. the zero frequency spectral density estimator). Thus we label our asymptotic framework fixed-b asymptotics. In standard asymptotics, b is sent to zero, and the standard approach can be viewed as a small-b asymptotic framework. Fixed-b asymptotics improves upon two well known problems with the standard approach. First, as has been well documented in the literature, the standard asymptotic approximation of the sampling behavior of tests is often poor. Second, the kernel and bandwidth choice do not appear in the approximate distribution, leaving the standard