Comment on HAC Corrections for Strongly Autocorrelated Time Series by Ulrich K. Müller


Yixiao Sun
Department of Economics, UC San Diego

May 2, 2014

1 On the Nearly-Optimal Test

Müller applies the theory of optimal statistical testing to heteroskedasticity and autocorrelation robust (HAR) inference in the presence of strong autocorrelation. As a starting point, Müller ingeniously uses Le Cam's idea on the limits of experiments and converts a complicated finite sample testing problem into an asymptotically equivalent and simpler testing problem. The main barrier to optimal testing is that both the null hypothesis and the alternative hypothesis are composite, even after the asymptotic reduction based on Le Cam's idea, so the Neyman-Pearson lemma does not directly apply. To reduce the dimension of the alternative hypothesis space, it is standard practice to employ a weighting function and take a weighted average of the probability distributions under the alternative hypothesis; see, for example, Cox and Hinkley (1974). The weighting function should reflect the user's beliefs about the likelihood of different parameter values and the associated cost of false acceptance under the alternative. Selecting the weighting function is as difficult as selecting one point out of many possible parameter values: a test that is designed to be optimal against a point alternative may not be optimal for other alternatives. The near-optimality of Müller's test should be interpreted with this point in mind.

There are a number of ways to reduce the dimension of the null hypothesis space, including invariance arguments and conditioning on sufficient statistics; see, for example, Cox and Hinkley (1974, Ch. 5). In fact, Müller uses scale invariance to remove one nuisance parameter. However, as in many other contexts, here the null cannot be reduced to a point by the standard arguments. Müller follows Wald, Lehmann, Stein, and other pioneers in statistical testing: he constructs the so-called least favorable distribution over the null parameter space and uses it to average the probability distributions under the null. This effectively reduces the composite null to a simple one. However, the least favorable distribution has to be found numerically, which can be a formidable task; this is perhaps one of the reasons that the theory of optimal testing has not been widely used in statistics and econometrics. A contribution of Müller's paper is to find an approximate least favorable distribution and construct a test that is nearly optimal.

The reason to employ the least favorable distribution is that we want to control the level of the test at each point in the parameter space under the null. While we are content with the average power under the alternative, we are not satisfied with controlling only the average level under the null. In fact, the requirement on size control is even stronger: the null rejection probability has to be controlled uniformly over the parameter space under the null. There is some contradiction here, which arises from the classical dogma that puts size control before power maximization. The requirement that the null rejection probability be controlled for each possible parameter value under the null, no matter how unlikely a given value is a priori, is overly conservative, and a test designed under this principle can suffer from a severe power loss. In fact, in the simulation study, Müller's test often has lower (size-adjusted) power than some commonly used tests. I am sympathetic to the argument that the power loss is the cost one has to pay to achieve size accuracy, as size adjustment is not empirically feasible. However, one can always design a test with accurate size but no power. Ultimately, there is a trade-off between size control and power improvement, and using the least favorable distribution does not necessarily strike the optimal trade-off. As a compromise, one may want to control the average level/size of the test over a few empirically relevant regions in the parameter space.

There is some convincing simulation evidence that Müller's test is much more accurate than existing tests for some data generating processes (DGPs). These are the DGPs for which the finite sample testing problem can be approximated very well by the asymptotically equivalent testing problem. However, there is not much research on the quality of this approximation. If the approximation error is large, Müller's test, which is nearly optimal for the asymptotically equivalent problem, may not be optimal for the original problem. For example, when the error process in the location model is an AR(1) plus noise, Müller's simulation results show that his test can still over-reject considerably. As another example, if the error process follows the AR(2) model $u_t = 1.9 u_{t-1} - 0.95 u_{t-2} + e_t$, where $e_t \sim$ iid $N(0, 1)$, then Müller's test (and many other tests) suffers from under-rejection. In this example, the modulus of the larger characteristic root is about 0.97, which is close to one. However, the spectral density does not resemble that of an AR(1) process: it does not have a peak at the origin. Instead, there is a peak near the origin. As a result, the quality of the AR(1) approximation is low. If the periodograms used in the variance estimator include the peak, then the variance estimator will be biased upward, leading to a smaller test statistic and under-rejection.
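To see both claims concretely, here is a minimal sketch (my own illustration in Python, not code from Müller's paper or from this comment) that computes the characteristic root moduli and the spectral density of this AR(2) error process:

```python
import numpy as np

rho1, rho2 = 1.9, -0.95

# Characteristic roots solve z^2 - rho1*z - rho2 = 0.
roots = np.roots([1.0, -rho1, -rho2])
print("root moduli:", np.abs(roots))       # both approximately 0.975

# Spectral density of the AR(2) with unit innovation variance:
# f(w) = (2*pi)^{-1} |1 - rho1*e^{-iw} - rho2*e^{-2iw}|^{-2}.
w = np.linspace(0.0, np.pi, 1000)
ar_poly = 1.0 - rho1 * np.exp(-1j * w) - rho2 * np.exp(-2j * w)
f = 1.0 / (2.0 * np.pi * np.abs(ar_poly) ** 2)
print("peak frequency:", w[np.argmax(f)])  # about 0.22, near but not at 0
print("f(peak)/f(0):", f.max() / f[0])     # the origin is not the peak
```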

2 Near-Unity Fixed-Smoothing Asymptotic Approximation

An attractive feature of Müller's test is that the scenarios under which it is optimal or nearly optimal are given explicitly. However, practitioners may find it unattractive because of the computational cost, the unfamiliar form of the test statistic, and questions about its applicability to models beyond the simple Gaussian location model. An alternative approach to dealing with strong autocorrelation is to derive a new approximation to the conventional test statistic, one that captures the strong autocorrelation. This has recently been developed in Sun (2014b); to provide the context for further discussion, I give a brief summary here.

Consider a $p$-dimensional time series $y_t$ of the form
$$y_t = \mu + e_t, \quad t = 1, 2, \ldots, T, \qquad (1)$$
where $y_t = (y_{1t}, \ldots, y_{pt})'$, $\mu = (\mu_1, \ldots, \mu_p)'$, and $e_t = (e_{1t}, \ldots, e_{pt})'$ is a zero-mean process. We are interested in testing the null $H_0 : \mu = \mu_0$ against the alternative $H_1 : \mu \neq \mu_0$. The OLS estimator of $\mu$ is the sample average of $\{y_t\}$, i.e., $\hat{\mu} = \bar{y} := T^{-1} \sum_{t=1}^{T} y_t$. The $F$-test version of the Wald statistic based on the OLS estimator is
$$F = T (\hat{\mu} - \mu_0)' \hat{\Omega}^{-1} (\hat{\mu} - \mu_0) / p,$$
where $\hat{\Omega}$ is an estimator of the asymptotic variance of $\sqrt{T}(\hat{\mu} - \mu_0)$. When $p = 1$, we can construct the $t$-statistic $t = \sqrt{T}(\hat{\mu} - \mu_0) / \hat{\Omega}^{1/2}$.

A very general class of variance estimators is the class of quadratic variance estimators, which take the form
$$\hat{\Omega} = \frac{1}{T} \sum_{t=1}^{T} \sum_{s=1}^{T} Q_h\!\left(\frac{t}{T}, \frac{s}{T}\right) \hat{e}_t \hat{e}_s', \qquad (2)$$
where $\hat{e}_t = e_t - \bar{e}$ with $\bar{e} = T^{-1} \sum_{t=1}^{T} e_t$, and $Q_h(r, s)$ is a weighting function that depends on the smoothing parameter $h$. When $Q_h(r, s) = k((r - s)/b)$ for some kernel function $k(\cdot)$ and smoothing parameter $b$, $\hat{\Omega}$ is the commonly used kernel variance estimator. When $Q_h(r, s) = K^{-1} \sum_{j=1}^{K} \phi_j(r) \phi_j(s)$ for basis functions $\phi_j(r)$ on $L^2[0, 1]$ satisfying $\int_0^1 \phi_j(r)\,dr = 0$ and smoothing parameter $K$, we obtain the so-called series variance estimator. This estimator has a long history. It can be regarded as a multiple-window estimator with window functions $\phi_j(t/T)$; see Thomson (1982). It also belongs to the class of filter-bank estimators, with $\hat{\Omega}$ a simple average of the individual filter-bank estimators; for more discussion along these lines, see Chapter 5 of Stoica and Moses (2005). Recently there has been renewed interest in this type of variance estimator; see Phillips (2005), Sun (2006, 2011, 2013), and Müller (2007).
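To make the series estimator concrete, here is a minimal Python sketch (my own illustration; the function name series_lrv and the example are mine, not code from the paper). It computes $\hat{\Omega} = K^{-1} \sum_{j=1}^{K} \hat{\Lambda}_j \hat{\Lambda}_j'$ with $\hat{\Lambda}_j = T^{-1/2} \sum_{t=1}^{T} \phi_j(t/T) \hat{e}_t$, which is algebraically identical to the quadratic form (2) with $Q_h(r, s) = K^{-1} \sum_{j=1}^{K} \phi_j(r) \phi_j(s)$; the Fourier basis used here is the one used in the simulations of Section 3:

```python
import numpy as np

def series_lrv(e, K):
    """Series long-run variance estimator with the Fourier basis.

    e : residual array of shape (T,) or (T, p); K : even number of bases.
    """
    e = e.reshape(len(e), -1)                    # ensure shape (T, p)
    T = e.shape[0]
    ehat = e - e.mean(axis=0)                    # hat{e}_t = e_t - bar{e}
    x = np.arange(1, T + 1) / T
    Lam = []
    for j in range(1, K // 2 + 1):
        for phi in (np.sqrt(2) * np.cos(2 * np.pi * j * x),
                    np.sqrt(2) * np.sin(2 * np.pi * j * x)):
            Lam.append(phi @ ehat / np.sqrt(T))  # Lambda_j, shape (p,)
    Lam = np.array(Lam)                          # shape (K, p)
    return Lam.T @ Lam / K                       # K^{-1} sum_j Lam_j Lam_j'

# Example: t-statistic for H0: mu = 0 in a univariate location model.
rng = np.random.default_rng(0)
y = rng.standard_normal(200)
Omega = series_lrv(y, K=12)
tstat = np.sqrt(len(y)) * y.mean() / np.sqrt(Omega[0, 0])
```

Comparing |tstat| with a fixed-smoothing critical value, rather than a normal one, yields the fixed-smoothing tests discussed next.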

Define
$$Q_{T,h}^{*}(r, s) = Q_h(r, s) - \frac{1}{T} \sum_{\tau_1=1}^{T} Q_h\!\left(\frac{\tau_1}{T}, s\right) - \frac{1}{T} \sum_{\tau_2=1}^{T} Q_h\!\left(r, \frac{\tau_2}{T}\right) + \frac{1}{T^2} \sum_{\tau_1=1}^{T} \sum_{\tau_2=1}^{T} Q_h\!\left(\frac{\tau_1}{T}, \frac{\tau_2}{T}\right);$$
then
$$\hat{\Omega} = \frac{1}{T} \sum_{t=1}^{T} \sum_{s=1}^{T} Q_{T,h}^{*}\!\left(\frac{t}{T}, \frac{s}{T}\right) e_t e_s'. \qquad (3)$$
Under the null hypothesis, $\sqrt{T}(\hat{\mu} - \mu_0) = T^{-1/2} \sum_{t=1}^{T} e_t$, so the Wald statistic is then equal to
$$F = \left(\frac{1}{\sqrt{T}} \sum_{t=1}^{T} e_t\right)' \left[\frac{1}{T} \sum_{t=1}^{T} \sum_{s=1}^{T} Q_{T,h}^{*}\!\left(\frac{t}{T}, \frac{s}{T}\right) e_t e_s'\right]^{-1} \left(\frac{1}{\sqrt{T}} \sum_{t=1}^{T} e_t\right) \Big/ p.$$
Similarly, the $t$-statistic becomes
$$t = \frac{T^{-1/2} \sum_{t=1}^{T} e_t}{\left[T^{-1} \sum_{t=1}^{T} \sum_{s=1}^{T} Q_{T,h}^{*}(t/T, s/T)\, e_t e_s\right]^{1/2}}.$$

The question is how to approximate the sampling distributions of $F$ and $t$. If $\{e_t\}$ is stationary and $T^{-1/2} \sum_{t=1}^{[Tr]} e_t$ converges weakly to a Brownian motion process, then under some conditions on $Q_h$ it can be shown that, for fixed $h$,
$$F \to_d F_\infty(h) := W_p(1)' \left[\int_0^1\!\!\int_0^1 Q_h^{*}(r, s)\, dW_p(r)\, dW_p(s)'\right]^{-1} W_p(1) \Big/ p, \qquad (4)$$
$$t \to_d t_\infty(h) := \frac{W_p(1)}{\left[\int_0^1\!\!\int_0^1 Q_h^{*}(r, s)\, dW_p(r)\, dW_p(s)\right]^{1/2}}, \qquad (5)$$
where $W_p(r)$ is a $p \times 1$ vector of standard Wiener processes and
$$Q_h^{*}(r, s) = Q_h(r, s) - \int_0^1 Q_h(\tau_1, s)\, d\tau_1 - \int_0^1 Q_h(r, \tau_2)\, d\tau_2 + \int_0^1\!\!\int_0^1 Q_h(\tau_1, \tau_2)\, d\tau_1\, d\tau_2.$$
For easy reference, I refer to the above approximations as the stationary fixed-smoothing asymptotic approximations. They are more accurate than the chi-square approximation or the normal approximation. As Müller's paper points out, however, they are still not good enough when $e_t$ is highly autocorrelated.

To model the high autocorrelation, we assume that $e_t$ follows an AR(1) process of the form $e_t = \rho_m e_{t-1} + u_t$, where $e_0 = O_p(1)$ and $\rho_m = 1 - c_m/T$ for some sequence $\{c_m\}$. This is similar in spirit to Müller's paper and many other papers in the literature; see, for example, Phillips, Magdalinos and Giraitis (2010). Under the assumption that $e_{[Tr]}/\sqrt{T} \Rightarrow \Lambda J_{c_m}(r)$ for some matrix $\Lambda$, where $J_{c_m}(r)$ is the Ornstein-Uhlenbeck process defined by $dJ_{c_m}(r) = -c_m J_{c_m}(r)\,dr + dW_p(r)$ with $J_{c_m}(0) = 0$, we can obtain the following near-unity fixed-smoothing asymptotics when $c_m$ and $h$ are held fixed:
$$F \to_d F_\infty(c_m, h) := \left(\int_0^1 J_{c_m}(r)\,dr\right)' \left[\int_0^1\!\!\int_0^1 Q_h^{*}(r, s)\, J_{c_m}(r) J_{c_m}(s)'\,dr\,ds\right]^{-1} \left(\int_0^1 J_{c_m}(r)\,dr\right) \Big/ p.$$
If we further assume that $Q_h^{*}(r, s)$ is positive definite, then for fixed $c_m$ and $h$,
$$t \to_d t_\infty(c_m, h) := \frac{\int_0^1 J_{c_m}(r)\,dr}{\left[\int_0^1\!\!\int_0^1 Q_h^{*}(r, s)\, J_{c_m}(r) J_{c_m}(s)\,dr\,ds\right]^{1/2}}.$$

If we let $c_m \to \infty$, the near-unity fixed-smoothing asymptotic distributions $F_\infty(c_m, h)$ and $t_\infty(c_m, h)$ approach the stationary fixed-smoothing asymptotic distributions given in (4) and (5). On the other hand, if we let $c_m \to 0$, then $F_\infty(c_m, h)$ and $t_\infty(c_m, h)$ approach the unit-root fixed-smoothing asymptotic distributions, which are defined as $F_\infty(c_m, h)$ and $t_\infty(c_m, h)$ but with $J_{c_m}(r)$ replaced by $W_p(r)$. Depending on the value of $c_m$, the limiting distributions $F_\infty(c_m, h)$ and $t_\infty(c_m, h)$ thus provide a smooth transition from the usual stationary fixed-smoothing asymptotics to the unit-root fixed-smoothing asymptotics.
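The critical values of $F_\infty(c_m, h)$ and $t_\infty(c_m, h)$ have no closed form, but they are straightforward to simulate. The sketch below (my own discretization, not code from Sun, 2014b) approximates the Ornstein-Uhlenbeck limit by an AR(1) with coefficient $1 - c_m/n$ on a grid of $n$ points and applies the same series construction as above, for the case $p = 1$:

```python
import numpy as np

def near_unity_t_cv(c_m, K, alpha=0.05, n=1000, reps=5000, seed=0):
    """Simulated two-sided critical value CV(c_m, alpha) for p = 1."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1) / n
    Phi = np.vstack([np.sqrt(2) * f(2 * np.pi * j * x)  # (K, n) basis matrix
                     for j in range(1, K // 2 + 1)
                     for f in (np.cos, np.sin)])
    rho = 1.0 - c_m / n                    # discretized OU autoregression
    tstats = np.empty(reps)
    for r in range(reps):
        u = rng.standard_normal(n)
        e = np.empty(n)
        e[0] = u[0]                        # J_{c_m}(0) = 0
        for t in range(1, n):
            e[t] = rho * e[t - 1] + u[t]
        ehat = e - e.mean()
        Lam = Phi @ ehat / np.sqrt(n)      # series projections
        Omega = (Lam ** 2).mean()          # series variance estimator
        tstats[r] = np.sqrt(n) * e.mean() / np.sqrt(Omega)
    return np.quantile(np.abs(tstats), 1.0 - alpha)
```

Taking $c_m$ large reproduces (approximately) the stationary fixed-smoothing critical value, while $c_m$ near zero gives the unit-root one, mirroring the smooth transition just described.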

In my view, the chi-square/normal approximation, the stationary fixed-smoothing approximation, and the near-unity fixed-smoothing approximation are just different approximations to the same test statistic constructed from the same variance estimator. It is a little misleading to talk about consistent and inconsistent variance estimators: the variance estimator is the same, but we embed it in different asymptotic paths. When the fixed-smoothing asymptotics is used, we do not necessarily require that the smoothing parameter $h$ be held fixed in finite samples. In fact, in empirical applications, the sample size is usually given beforehand, and the smoothing parameter $h$ has to be determined using a priori information and/or information obtained from the data. Very often, the selected smoothing parameter $h$ is larger for a larger sample size but remains small relative to the sample size, so empirical practice appears to be more compatible with the conventional increasing-smoothing asymptotics. The beauty of the fixed-smoothing asymptotics is that the fixed-smoothing critical values are still correct under the increasing-smoothing asymptotics. In fact, in a sequence of papers (e.g., Sun, 2014a), I have shown that the fixed-smoothing critical values are second-order correct under the increasing-smoothing asymptotics, whereas the increasing-smoothing critical values are not even first-order correct under the fixed-smoothing asymptotics. Given this, the fixed-smoothing approximation can be regarded as more robust than the increasing-smoothing approximation.

The same comment applies to the local-to-unity parameter $c_m$. When we use the near-unity fixed-smoothing approximation, we do not have to literally fix $c_m$ at a given value in finite samples. Whether we hold $c_m$ fixed or let it increase with the sample size can be viewed as different asymptotic specifications for approximating the same finite sample distribution. In practice, we can estimate $c_m$ even though no consistent estimator is available. For a stationary AR(1) process with a fixed autoregressive coefficient, the estimator $\hat{c}_m$ derived from the OLS estimator $\hat{\rho}_m$ necessarily converges to infinity in probability, and the critical values from the near-unity fixed-smoothing asymptotic distribution are then close to those from the stationary fixed-smoothing asymptotic distribution. So the near-unity fixed-smoothing approximation remains asymptotically valid, and for this reason it is a more robust approximation. Compared to the chi-square or normal approximation, the near-unity fixed-smoothing approximation achieves double robustness: it is asymptotically valid regardless of the limiting behaviors of $c_m$ and $h$.

3 Some Simulation Evidence

To implement the near-unity fixed-smoothing approximation, we need to pin down the value of $c_m$, which cannot be consistently estimated. However, a nontrivial and informative confidence interval (CI) can still be constructed. I propose to construct a CI for $c_m$ and use the maximum of the critical values, each of which corresponds to one value of $c_m$ in the CI. An argument based on the Bonferroni bound can be used to determine the confidence level of the CI and the significance level of the critical values. More specifically, for tests with nominal level $\alpha$, we could employ the critical value defined by
$$CV_{\alpha_2} = \sup_{c_m \in CI_{\alpha_1}} CV(c_m, \alpha_2),$$
where $\alpha = \alpha_1 + \alpha_2$, $CI_{\alpha_1}$ is a lower confidence interval for $c_m$ with nominal coverage probability $1 - \alpha_1$, and $CV(c_m, \alpha_2)$ is the $(1 - \alpha_2) \times 100\%$ quantile of the distribution $F_\infty(c_m, h)$ or $t_\infty(c_m, h)$. This idea of choosing critical values in the presence of unidentified nuisance parameters has been used in various settings in statistics and econometrics; see, for example, McCloskey (2012) and the references therein.

One drawback of the approach based on the Bonferroni correction is that the critical value is often too large, so the resulting test often under-rejects. There are sophisticated ways to improve on the Bonferroni method.
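A minimal sketch of this Bonferroni-type construction, reusing the near_unity_t_cv simulator from the previous sketch (a real implementation would obtain the lower confidence limit ci_lower for $c_m$ by, e.g., the method of Andrews, 1993, which is not coded here):

```python
import numpy as np

def bonferroni_cv(ci_lower, K, alpha2=0.05, c_max=200.0, n_grid=15):
    """Maximum of CV(c_m, alpha2) over the lower CI [ci_lower, infinity).

    The grid is truncated at c_max, beyond which CV barely changes; since
    CV(c_m, alpha2) is decreasing in c_m, the maximum is in fact attained
    at ci_lower itself, and scanning a grid merely avoids relying on that
    monotonicity.
    """
    grid = np.linspace(ci_lower, c_max, n_grid)
    return max(near_unity_t_cv(c, K, alpha=alpha2) for c in grid)
```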

As a convenient empirical strategy, here I employ $CV = \sup_{c_m \in CI_{90\%}} CV(c_m, 95\%)$ for nominal 5% tests. I construct the CI for $c_m$ using the method of Andrews (1993); other methods, such as Stock (1991) and Hansen (1999), can also be used, and see Mikusheva (2014) and Phillips (2014) for recent contributions on this matter. Since $CV(c_m, 95\%)$ is decreasing in $c_m$, we only need to find the lower limit of $CI_{90\%}$ in order to compute $CV$, so the computational cost is very low.

I consider a univariate Gaussian location model with AR(2) errors $e_t = \rho_1 e_{t-1} + \rho_2 e_{t-2} + u_t$, where $u_t \sim$ iid $N(0, 1)$. The sample size is $T = 200$. The initial value of the error process is set to be standard normal: I generate a time series of length 400 and drop the first 200 observations to minimize the initialization effect, which is similar to generating a time series with 200 observations but with the initial value drawn from the stationary distribution. I consider Müller's test, the KVB test, and the test based on the series variance estimator with basis functions $\phi_{2j-1}(x) = \sqrt{2}\cos(2\pi j x)$ and $\phi_{2j}(x) = \sqrt{2}\sin(2\pi j x)$, $j = 1, \ldots, K/2$. The values $K = 12, 24, 48$ correspond to the values $q = 12, 24, 48$ in Müller's paper. The number of simulation replications is 2,000.

Table 1 reports the null rejection probabilities of the various two-sided 5% tests. It is clear that the tests based on the near-unity fixed-smoothing approximation are in general more accurate than those based on the usual stationary fixed-smoothing approximation; this is especially true when the process is highly autocorrelated. In terms of size accuracy, Müller's test is slightly better than the near-unity fixed-smoothing tests, although the size accuracy of the latter is actually quite satisfactory. As I mentioned before, the AR(2) process with $(\rho_1, \rho_2) = (1.9, -0.95)$ poses a challenge to all the tests considered.

Figure 1 plots the size-adjusted power against the noncentrality parameter $\delta^2$ in the presence of AR(1) errors. The figure is representative of other configurations. It is clear from the figure that the slightly better size control of Müller's tests is achieved at the cost of some power loss.

Table 1: Empirical null rejection probabilities of nominal 5% tests with T = 200 under AR(2) errors

                 Stationary Fixed-Smoothing      Near-Unity Fixed-Smoothing     Nearly-Optimal Tests
(ρ1, ρ2)         K12    K24    K48    KVB        K12    K24    K48    KVB       S12    S24    S48
(0, 0)           .048   .047   .048   .047       .045   .042   .040   .042      .045   .047   .047
(0.7, 0)         .058   .083   .148   .058       .040   .035   .032   .040      .046   .047   .047
(0.9, 0)         .133   .248   .393   .084       .036   .033   .033   .038      .048   .047   .050
(0.95, 0)        .258   .412   .553   .125       .039   .037   .036   .039      .048   .049   .050
(0.99, 0)        .613   .738   .816   .333       .079   .079   .079   .070      .045   .044   .045
(1.9, -0.95)     .005   .002   .005   .005       .000   .000   .000   .000      .025   .000   .000
(0.8, 0.1)       .146   .272   .417   .089       .050   .050   .052   .045      .049   .048   .052
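For completeness, here is a sketch of the Monte Carlo design of this section (my own code, reusing the series_lrv helper from Section 2; the argument crit stands for whichever critical value, stationary, near-unity, or Bonferroni-type, is being evaluated):

```python
import numpy as np

def rejection_rate(rho1, rho2, K, crit, T=200, burn=200, reps=2000, seed=0):
    """Null rejection frequency of the series-based t-test of mu = 0."""
    rng = np.random.default_rng(seed)
    reject = 0
    for _ in range(reps):
        u = rng.standard_normal(T + burn)
        e = np.zeros(T + burn)
        e[0] = rng.standard_normal()        # standard normal initial value
        e[1] = rho1 * e[0] + u[1]
        for t in range(2, T + burn):        # AR(2) error recursion
            e[t] = rho1 * e[t - 1] + rho2 * e[t - 2] + u[t]
        y = e[burn:]                        # drop burn-in; mu = 0 under H0
        Omega = series_lrv(y, K)
        tstat = np.sqrt(T) * y.mean() / np.sqrt(Omega[0, 0])
        reject += abs(tstat) > crit
    return reject / reps
```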

4 Conclusion

Müller's paper makes an important contribution to the literature on HAR inference. It has the potential to develop into a standard of practice for HAR inference when the process is strongly autocorrelated. The paper inspires us to think more about optimality issues in hypothesis testing. Unfortunately, uniformly optimal tests do not exist except in some special cases, and this opens the door to a wide range of competing test procedures. In this discussion, I have outlined an alternative test, which is based on the standard test statistic but employs a new asymptotic approximation. The alternative test has satisfactory size, although its size is not as accurate as that of Müller's test; on the other hand, Müller's test is often less powerful. The trade-off between size accuracy and power improvement is unavoidable. A prewhitened testing procedure with good size properties may also be crafted.

HAR testing is fundamentally a nonparametric problem, and a good test requires some prior knowledge about the data generating process. In the present setting, the prior knowledge should include the range of the largest AR root and the neighborhood around the origin in which the spectral density remains more or less flat. Equipped with this knowledge, a practitioner can select a testing procedure to minimize his or her loss function.

References

[1] Andrews, D.W.K. (1993): "Exactly Median-Unbiased Estimation of First Order Autoregressive/Unit Root Models." Econometrica, 61(1), 139-165.

[2] Hansen, B.E. (1999): "The Grid Bootstrap and the Autoregressive Model." Review of Economics and Statistics, 81(4), 594-607.

[3] Cox, D.R. and D.V. Hinkley (1974): Theoretical Statistics. Chapman and Hall: New York.

[4] Elliott, G., U.K. Müller and M.W. Watson (2012): "Nearly Optimal Tests when a Nuisance Parameter is Present Under the Null Hypothesis." Working paper, Department of Economics, UC San Diego and Princeton University.

[5] Müller, U.K. (2007): "A Theory of Robust Long-Run Variance Estimation." Journal of Econometrics, 141, 1331-1352.

[6] McCloskey, A. (2012): "Bonferroni-Based Size-Correction for Nonstandard Testing Problems." Working paper, Department of Economics, Brown University.

[7] Mikusheva, A. (2014): "Second Order Expansion of the t-statistic in AR(1) Models." Econometric Theory, forthcoming.

[8] Phillips, P.C.B. (2005): "HAC Estimation by Automated Regression." Econometric Theory, 21, 116-142.

[9] Phillips, P.C.B. (2014): "On Confidence Intervals for Autoregressive Roots and Predictive Regression." Econometrica, forthcoming.

[10] Phillips, P.C.B., T. Magdalinos and L. Giraitis (2010): "Smoothing Local-to-moderate Unit Root Theory." Journal of Econometrics, 158, 274-279.

[11] Stoica, P. and R. Moses (2005): Spectral Analysis of Signals. Pearson Prentice Hall.

[12] Stock, J. (1991): "Confidence Intervals for the Largest Autoregressive Root in US Macroeconomic Time Series." Journal of Monetary Economics, 28, 435-459.

[13] Sun, Y. (2006): "Best Quadratic Unbiased Estimators of Integrated Variance in the Presence of Market Microstructure Noise." Working paper, Department of Economics, UC San Diego.

[14] Sun, Y. (2011): "Autocorrelation Robust Trend Inference with Series Variance Estimator and Testing-Optimal Smoothing Parameter." Journal of Econometrics, 164, 345-366.

[15] Sun, Y. (2013): "A Heteroskedasticity and Autocorrelation Robust F Test Using Orthonormal Series Variance Estimator." Econometrics Journal, 16, 1-26.

[16] Sun, Y. (2014a): "Let's Fix It: Fixed-b Asymptotics versus Small-b Asymptotics in Heteroskedasticity and Autocorrelation Robust Inference." Journal of Econometrics, 178(3), 659-677.

[17] Sun, Y. (2014b): "Fixed-smoothing Asymptotics and Asymptotic F and t Tests in the Presence of Strong Autocorrelation." Working paper, Department of Economics, UC San Diego.

[18] Thomson, D.J. (1982): "Spectrum Estimation and Harmonic Analysis." Proceedings of the IEEE, 70, 1055-1096.

[Figure 1 shows four panels of size-adjusted power curves plotted against the noncentrality parameter $\delta^2$, one panel each for $(\rho_1, \rho_2) = (0, 0)$, $(0.7, 0)$, $(0.9, 0)$, and $(0.95, 0)$.]

Figure 1: Size-adjusted power of the different 5% tests with sample size T = 200 (K16, K12, K24, K48, and KVB are the near-unity fixed-smoothing tests, while S12, S24, and S48 are Müller's tests).