Section 6: Heteroskedasticity and Serial Correlation


From the SelectedWorks of Econ 240B Section, February 2007. Section 6: Heteroskedasticity and Serial Correlation. Jeffrey Greenbaum, University of California, Berkeley. Available at: https://works.bepress.com/econ_240b_econometrics/14/

Section 6: Heteroskedasticity and Serial Correlation
Jeffrey Greenbaum
February 23, 2007

Contents
1 Section Preamble 2
2 Weighted Least Squares 3
3 Feasible WLS 3
  3.1 Multiplicative Heteroskedasticity Models 3
  3.2 Testing for Heteroskedasticity 4
  3.3 Feasible Estimator 6
  3.4 Exercises 6
    3.4.1 2002 Exam, 1B 6
    3.4.2 2004 Exam, 1D 7
    3.4.3 Grouped-Data Regression Model 7
    3.4.4 Multiplicative Model 8
4 Eicker-White Standard Errors 9
5 Structural Approach to Serial Correlation 9
  5.1 First-Order Serial Correlation 10
  5.2 Testing for Serial Correlation 11
  5.3 Feasible GLS 13
  5.4 Exercises 14
    5.4.1 2002 Exam, Question 1C 14
    5.4.2 2003 Exam, Question 1B 14
    5.4.3 2004 Exam, Question 1B 15
6 Nonstructural Approach to Serial Correlation 16

1 Section Preamble

This week we continue with the relaxed scalar variance-covariance assumption and GLS estimation. Specifically, we consider two more cases in which we can possibly construct a Feasible GLS estimator that has the same asymptotic properties as the GLS estimator. We will first analyze the case in which the variance-covariance matrix exhibits pure heteroskedasticity, and then serial correlation.

Recall that when we relax the scalar variance-covariance assumption, β̂_GLS is the most efficient of the linear unbiased estimators. However, β̂_GLS requires that we know the values of Σ, which is not realistic because we do not observe ε. Nevertheless, if we can consistently estimate Σ, then we can construct a feasible estimator, β̂_FGLS, which has the same asymptotic properties as β̂_GLS. Finding a consistent estimator for Σ, however, is often not possible: generally, as the sample size increases, the number of elements of Σ to estimate increases by a greater amount. Moreover, we generally assume that each element effectively comes from a distinct distribution unless we impose structure on how Σ is formed. Imposing the correct structure can enable us to consistently estimate Σ.

The spirit of the solutions to heteroskedasticity and serial correlation is similar. As previously suggested, one approach is to assume a functional form for the structure of Σ, estimate this structure, and use the estimate to construct β̂_FGLS. If the correct structure is chosen, then this estimator has the same asymptotic properties as β̂_GLS, and so asymptotically β̂_FGLS is consistent and BLUE. Before proceeding with feasible estimation, we should conduct a hypothesis test in which the null hypothesis is homoskedasticity or the absence of serial correlation, as appropriate to the case. If we believe that Ω = I, then we can simply use OLS, which has desirable finite-sample properties.

Nevertheless, hypothesis testing may spuriously lead to detecting heteroskedasticity or serial correlation. Moreover, we may either assume the wrong structure for Σ, or more simply have no intuition about what its structure might be. Lastly, point estimates from [F]GLS do not lend themselves to the desirable ceteris paribus interpretation. An alternative approach is to use OLS, which remains unbiased and consistent, and to correct the standard errors so that they are consistently estimated. Although OLS is no longer BLUE if Var(y|X) = Σ, this method is more frequently used because of these concerns about imposing a structure on Σ. Despite the loss of efficiency and some caution about the finite-sample properties of the corrected standard errors, if hypothesis testing produces highly statistically significant point estimates, then we can likely trust inferences from OLS. Moreover, OLS point estimates are appealing for policy applications because they lend themselves to a ceteris paribus interpretation.

Although the OLS and [F]GLS estimators are both unbiased, at least asymptotically, their point estimates inevitably differ unless Var(y|X) = σ²I. Nevertheless, such differences are not a concern unless the difference is economically significant, such as a difference in sign, while inference on each is highly statistically significant. In that case, another classical assumption is likely to be faulty, such as the linear expectations assumption, as we will begin to discuss next week.

2 Weighted Least Squares

GLS estimation with pure heteroskedasticity is known as weighted least squares. We first consider the case in which all of the elements along the diagonal of Σ, or equivalently Ω, are known. Then, we consider the more realistic scenario in which we can construct a feasible estimator by estimating the heteroskedasticity given an assumed functional form.

In the case of pure heteroskedasticity, the generalized classical regression assumption Var(y|X) = Σ reduces to Var(y|X) = Diag[σ²_i]. That is, pure heteroskedasticity means that the diagonal elements of Σ are heteroskedastic but the off-diagonal elements exhibit no serial correlation. Using the framework established for GLS, β̂_WLS is the ordinary least squares estimator applied to the linear model multiplied through by Σ^(−1/2). Although the interpretation of WLS coefficients is not as desirable as that of OLS coefficients, β̂_WLS is BLUE. Moreover, if we assume that the errors are independent and normally distributed, then finite-sample inference is most desirable with β̂_WLS.

Let w_i = 1/σ²_i. Because Σ is diagonal, Σ^(−1/2) = Diag[w_i^(1/2)]. As a result,

β̂_WLS = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y = (X′Diag[w_i]X)⁻¹X′Diag[w_i]y = (Σ_{i=1}^n w_i x_i x_i′)⁻¹ Σ_{i=1}^n w_i x_i y_i

Accordingly, this estimator is known as weighted least squares because it is equivalently derived by minimizing the weighted sum of squared residuals, in which each squared residual e²_i is multiplied by w_i. This interpretation arises because we transform the linear model by Σ^(−1/2). As with all GLS estimation, this transformation is equivalent to finding the estimator that minimizes (y − Xβ)′Σ⁻¹(y − Xβ). By construction, Σ⁻¹ = Diag[w_i] for GLS with pure heteroskedasticity, which, when rewritten in summation notation, yields the weighted least squares interpretation.

3 Feasible WLS

In practice, Σ contains unknown parameters, aside from an arbitrary constant of proportionality.
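As a numerical check on the WLS algebra above (this sketch is not part of the original handout; the data and variances are made up), the following shows that OLS on the model premultiplied by Σ^(−1/2) reproduces the weighted-sum formula. A single regressor without an intercept keeps every matrix scalar:

```python
import math

# Made-up data: one regressor, no intercept; sigma2[i] = Var(eps_i) assumed known.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 7.0, 8.0]
sigma2 = [1.0, 1.0, 4.0, 4.0]
w = [1.0 / s for s in sigma2]          # w_i = 1 / sigma_i^2

# Weighted-sum form: beta_WLS = (sum w_i x_i x_i')^{-1} sum w_i x_i y_i (scalar here).
beta_wls = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)) / \
           sum(wi * xi * xi for wi, xi in zip(w, x))

# Equivalent derivation: OLS on the transformed model with
# x*_i = sqrt(w_i) x_i and y*_i = sqrt(w_i) y_i.
xs = [math.sqrt(wi) * xi for wi, xi in zip(w, x)]
ys = [math.sqrt(wi) * yi for wi, yi in zip(w, y)]
beta_transformed = sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

assert abs(beta_wls - beta_transformed) < 1e-12
```

The two computations agree to machine precision, which is the summation-notation equivalence derived above.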
We can thus construct a feasible version of β̂_WLS by estimating a model for Var(y_i) = σ²_i and substituting Σ̂ in place of Σ. This feasible estimator has the same asymptotic properties as the WLS estimator if Σ̂ →p Σ.

3.1 Multiplicative Heteroskedasticity Models

In lecture, Professor Powell presented the multiplicative heteroskedasticity model because of its wide use in Feasible WLS. It is the linear model y_i = x_i′β + u_i, with error terms of the form:

u_i = c_i ε_i, where ε_i is i.i.d. with E(ε_i) = 0 and Var(ε_i) = σ².

Furthermore, we assume that c²_i depends on observables through an underlying linear index:

c²_i = h(z_i′θ)

where the variables z_i are observable functions of the regressors x_i, excluding the constant term, and θ is a vector of coefficients to be estimated. We will elaborate on the estimation procedure for θ shortly when discussing the feasible WLS procedure. Moreover, h(·), which must always be positive so that Var(y_i|x_i) > 0 ∀i, is normalized so that h(0) = 1 and h′(0) ≠ 0. As an example of this kind of heteroskedasticity, we would expect to observe greater variation in the profits of large firms than in those of small firms, even after accounting for firm size.

Combining these two assumptions about the structure of the variance yields:

Var(u_i) = Var(c_i ε_i) = c²_i Var(ε_i) = h(z_i′θ)σ²

The errors are thus homoskedastic if Var(u_i) is constant ∀i, or equivalently, if h(z_i′θ) is constant ∀i. By our normalization, we know that h(z_i′θ) is constant ∀i if z_i′θ = 0, because h(0) = 1. If θ = 0 then we are guaranteed that z_i′θ = 0; on the other hand, it is not sensible to expect z_i = 0. Therefore, if θ = 0 then Var(u_i) = 1·σ² = σ², and u_i is homoskedastic.

3.2 Testing for Heteroskedasticity

Accordingly, a test for heteroskedasticity reduces to testing the null hypothesis H₀: θ = 0 against the alternative H₁: θ ≠ 0. We now derive a linear regression that lends itself to this hypothesis test. Note that this test presumes that we have correctly assumed the functional form of h(·). Under the null hypothesis,

Var(ε_i) = E(ε²_i) − [E(ε_i)]² = E(ε²_i) − 0² = E(ε²_i); since Var(ε_i) = σ², E(ε²_i) = σ²

Var(u_i) = E(u²_i) − [E(u_i)]² = E(u²_i) − [E(c_i ε_i)]² = E(u²_i) − c²_i[E(ε_i)]² = E(u²_i) − c²_i·0² = E(u²_i)

Var(u_i) = h(z_i′θ)σ² = σ², so E(u²_i) = σ² = E(ε²_i), i.e., E(u²_i) = σ² = h(z_i′θ)σ²

A first-order Taylor series approximation of h(z_i′θ) about θ = 0 is h(z_i′θ) = h(0) + h′(0)z_i′θ + R(z_i′θ). We assume that as z_i′θ → 0, R(z_i′θ) → 0 at a rate that is at least quadratic. This assumption can be quite problematic depending on the functional form of h(·), although it is effectively a footnote to the many other assumptions we are making. Nevertheless, we proceed with this assumption so that we can write h(z_i′θ) = h(0) + h′(0)z_i′θ = 1 + h′(0)z_i′θ, even though as an exact equality this is mathematically false:

E(u²_i) = σ²h(z_i′θ) = σ²(1 + h′(0)z_i′θ) = σ² + σ²h′(0)z_i′θ

Let δ = σ²h′(0)θ. Moreover, if we assume an error term r_i such that E(r_i|z_i) = 0 and Var(r_i|z_i) = τ, then this model satisfies the classical linear assumptions. Therefore, we can test the regression:

u²_i = σ² + z_i′δ + r_i

Since θ = 0 ⟺ δ = 0 (recall h′(0) ≠ 0), we actually test the null hypothesis H₀: δ = 0. Moreover, because E(ε²_i) = E(u²_i), we could substitute ε²_i in place of u²_i. However, we cannot immediately proceed because we do not observe u_i. Accordingly, the prevailing baseline test comes from Breusch and Pagan (1979). The justification for the test is beyond the scope of the course. Nevertheless, Professor Powell expects that you know the steps in Breusch-Pagan and that you could apply them to data. Here is a summary of the tests Professor Powell presented. You are responsible for these tests insofar as Professor Powell presented them:

Table 1: Summary of Tests for Heteroskedasticity

Name            | Expression                 | Distribution           | Comment
Breusch-Pagan   | T = NR²                    | χ²_p                   | p = dim(z_i)
F               | F = (N−K)R² / [(1−R²)p]    | F(p, N−K)              | F = T/p
Studentized LM  | LM = RSS/τ̂                 | χ²_p                   | if ε_i Gaussian, τ = 2σ⁴
Goldfeld-Quandt | s²₁/s²₂                    | F([N/2]−k, N−[N/2]−k)  | Gaussian ε_i, one-sided

Here is the procedure for the Breusch and Pagan (1979) test of the null hypothesis:

1. Use û²_i = (y_i − x_i′β̂_OLS)² as a proxy for u²_i, because the squared residuals are observable and are consistent estimators of the true squared errors. Note that the notation û_i makes more sense than ε̂_i because û_i is technically our residual.

2. Regress the û²_i on 1 and z_i and obtain the usual constant-adjusted R² = Σ_{i=1}^n(ŷ_i − ȳ)² / Σ_{i=1}^n(y_i − ȳ)².

3. Under the null, the statistic

T = NR² →d χ²_p

where p = dim(δ) = dim(z_i). Reject H₀ if T exceeds the upper critical value of a chi-squared variable with p degrees of freedom.

3.3 Feasible Estimator

If the null hypothesis of homoskedasticity is rejected, then a correction for heteroskedasticity is needed. Using Feasible WLS, it is necessary to estimate Σ̂, that is, Diag[E(ε²_i)]. Since E(ε²_i) = σ²h(z_i′θ), we must estimate θ and σ²:

1. Use e²_i = (y_i − x_i′β̂_OLS)² as a proxy for ε²_i, because the squared residuals are consistent estimators of the true squared errors. Estimate θ and σ² using least squares.

2. Replace y_i with y*_i = y_i/h(z_i′θ̂)^(1/2) and x_i with x*_i = x_i/h(z_i′θ̂)^(1/2). Do least squares using y*_i and x*_i. In doing so, Σ̂ = Diag[h(z_i′θ̂)].

If the variance structure is correctly specified, then β̂_FWLS is asymptotically BLUE, and thus has the same asymptotic variance as β̂_GLS. This requires the functional form of the heteroskedasticity to be correctly specified.

3.4 Exercises

The first two exercises are questions from previous exams. Feasible WLS, specifically Breusch-Pagan, has only appeared in True/False questions in the last 5 years, although part of a 2003 long question required listing the steps for constructing an appropriate Feasible WLS estimator. The third exercise demonstrates a very appropriate application of WLS, and the fourth provides some practice with multiplicative models.

3.4.1 2002 Exam, 1B

Note that a version of this question also appeared in the 2005 Exam as question 1B.

Question: True/false/explain. To test for heteroskedastic errors in a linear model, it is useful to regress functions of the absolute values of least-squares residuals (e.g., the squared residuals) on functions of the regressors. The R-squared from this second-stage regression will be (approximately) distributed as a chi-squared random variable under the null hypothesis of no heteroskedasticity, with degrees of freedom equal to the number of non-constant functions of the regressors in the second stage.

Answer: False. The statement would be correct if the words R-squared were replaced by the words sample size times R-squared, since, under the null of homoskedasticity, R² tends to zero

in probability, but NR², the studentized Breusch-Pagan test statistic, has a limiting χ²_r distribution under H₀, where r is the number of non-constant regressors in the second-stage regression.

3.4.2 2004 Exam, 1D

Question: True/false/explain. In a linear model with an intercept and two nonrandom, nonconstant regressors, and with sample size N = 200, it is suspected that a random coefficients model applies, i.e., that the intercept term and two slope coefficients are jointly random across individuals, independent of the regressors. If the squared values of the LS residuals from this model are themselves fit to a quadratic function of the regressors, and if the R² from this second-step regression equals 0.06, the null hypothesis of no heteroskedasticity should be rejected at an approximate 5-percent level.

Answer: True. The Breusch-Pagan test statistic for the null of homoskedasticity is NR² = 200 × 0.06 = 12 for these data; the second step regresses the squared LS residuals on a constant term and five explanatory variables for the random coefficients alternative, specifically x₁, x₂, x₁², x₂², and x₁x₂, where x₁ and x₂ are the non-constant regressors in the original LS regression. Since the upper 5-percent critical value for a χ² random variable with 5 degrees of freedom is 11.07, the null of homoskedasticity should be rejected.

3.4.3 Grouped-Data Regression Model

Question: True/false/explain. Suppose we are interested in estimating a linear model, y_ij = x_ij′β + ε_ij, that satisfies the classical linear assumptions, including a scalar variance-covariance matrix. However, we only have access to data that are aggregated as averages for each j. Moreover, we know the number of observations in the original model for each j. The WLS estimator that weights each group j by the square root of its number of observations is BLUE.

Answer: True. Suppose E(ε_ij) = 0 and Var(ε_ij) = σ². Given our data limitation, we analyze the model ȳ_j = x̄_j′β + ε̄_j. Let m_j be the number of observations in the original model for each unit j. Then ε̄_j = m_j⁻¹ Σ_{i=1}^{m_j} ε_ij. We can show that this transformed model satisfies the Gauss-Markov assumptions:

E(m_j^(1/2) ε̄_j) = m_j^(1/2) E(ε̄_j) = m_j^(1/2) m_j⁻¹ Σ_{i=1}^{m_j} E(ε_ij) = m_j^(−1/2)(m_j · 0) = 0

Var(m_j^(1/2) ε̄_j) = m_j Var(ε̄_j) = m_j Var(m_j⁻¹ Σ_{i=1}^{m_j} ε_ij) = m_j m_j⁻² Σ_{i=1}^{m_j} σ² = m_j⁻¹(m_j σ²) = σ²

As a result, this weighting causes β̂_WLS to be BLUE. Note that this model is applicable for any possible aggregator j. For example, j can index firms, US states, or countries in a cross-country study. However, if the original linear model is not homoskedastic, then we would proceed with Eicker-White standard errors.

3.4.4 Multiplicative Model

Question: Suppose that the sample has size N = 100, and the random variables y_i are independent with E(y_i) = βx_i and V(y_i) = σ²x_i².

1) Is this a multiplicative model? Yes. The model is y_i = βx_i + ε_i. Let u_i ~ iid(0, σ²). Then Var(y_i) = Var(ε_i) = σ²x_i² implies that ε_i = x_i u_i, because Var(x_i u_i) = σ²x_i².

2) How could you test for heteroskedasticity in this model? Here z_i = x_i², and we use OLS to estimate:

e²_i = σ² + δx_i² + r_i, with fitted values ê²_i = σ̂² + δ̂x_i²

and the auxiliary R² = Σ(ŷ_i − ȳ)²/Σ(y_i − ȳ)². We reject H₀: δ = 0 if:

100R² > q_{χ²₁, 0.95}

where q_{χ²₁, 0.95} is the 95th percentile of the χ²₁ distribution.

3) Construct a GLS estimator of β.

β̂_FWLS = (X′Σ̂⁻¹X)⁻¹X′Σ̂⁻¹y

where Σ̂ = Diag[δ̂x_i²] and δ̂ = (Σ_{i=1}^n x_i⁴)⁻¹ Σ_{i=1}^n x_i² e²_i by least squares estimation.
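The steps of this exercise can be sketched numerically (this code is not part of the original handout; the data are made up and N = 5 rather than 100). One detail worth seeing in code: with Σ̂ = Diag[δ̂x_i²], the scalar δ̂ cancels out of β̂_FWLS, which collapses to the mean of y_i/x_i:

```python
# Sketch of exercise 3.4.4 with made-up data (assumed values, not from the handout).
n = 5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.5, 3.6, 5.8]

# OLS of y on x (no intercept): beta_hat = sum(x*y) / sum(x^2).
beta_ols = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
e2 = [(yi - beta_ols * xi) ** 2 for xi, yi in zip(x, y)]   # squared residuals

# Breusch-Pagan step: regress e_i^2 on a constant and z_i = x_i^2,
# then compare n * R^2 with the chi-squared(1) 95th percentile, 3.84.
z = [xi * xi for xi in x]
zbar, ebar = sum(z) / n, sum(e2) / n
szz = sum((zi - zbar) ** 2 for zi in z)
sze = sum((zi - zbar) * (ei - ebar) for zi, ei in zip(z, e2))
see = sum((ei - ebar) ** 2 for ei in e2)
r2 = (sze ** 2) / (szz * see)          # R^2 of a simple regression
bp_stat = n * r2

# Feasible WLS with delta_hat = (sum x^4)^{-1} sum x^2 e^2, Sigma_hat = Diag[delta_hat x_i^2].
delta_hat = sum(zi * ei for zi, ei in zip(z, e2)) / sum(zi * zi for zi in z)
num = sum(xi * yi / (delta_hat * xi * xi) for xi, yi in zip(x, y))
den = sum(xi * xi / (delta_hat * xi * xi) for xi in x)
beta_fwls = num / den

# The scalar delta_hat cancels: beta_fwls equals the mean of y_i / x_i.
assert abs(beta_fwls - sum(yi / xi for xi, yi in zip(x, y)) / n) < 1e-12
```

The cancellation reflects the fact that GLS is invariant to the constant of proportionality in Σ, as noted at the start of Section 3.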

4 Eicker-White Standard Errors

Alternatively, we can use OLS and correct the standard errors. The benefit of this approach is that it does not require any structure on the nature of the heteroskedasticity. In addition, an assumed structure of the heteroskedasticity may not be correctly specified, and a diagnostic test may falsely reject the hypothesis that the errors are homoskedastic. An incorrectly specified structure would cause β̂_FGLS to be neither asymptotically BLUE nor to have a consistent variance-covariance estimator. Further, the interpretation of OLS estimates is desirable for policy because of its ceteris paribus nature.

Recall that the problem with β̂_OLS when Var(y|X) = Σ = σ²Ω is that although the estimator is unbiased and consistent, it is not the most efficient. Moreover, its standard errors cannot be consistently estimated because of the difficulty of consistently estimating Σ. Specifically, the variance-covariance matrix of β̂_OLS is

Var(β̂_OLS|X) = (X′X)⁻¹X′ΣX(X′X)⁻¹

Nevertheless, White (1980) draws upon the work of Eicker (1967) to show that it is possible to consistently estimate plim σ²(X′ΩX/n). With pure heteroskedasticity, Σ must be a diagonal matrix. Accordingly, White shows that a consistent variance-covariance estimator draws upon the ordinary least squares residuals:

V̂ar(β̂_OLS|X) = (X′X)⁻¹X′Diag[(y_i − x_i′β̂_OLS)²]X(X′X)⁻¹

That is, White proves that Σ̂ = Diag[(y_i − x_i′β̂_OLS)²], a diagonal matrix of the squared OLS residuals, is not a consistent estimator of Σ, but that X′Diag[(y_i − x_i′β̂_OLS)²]X/n is a consistent estimator of plim X′ΣX/n. This estimator is known as the heteroskedasticity-consistent covariance matrix estimator, and it is often referred to by combinations of the authors' names. Note that Professor Powell does not prove this result because it is beyond the scope of the course. However, you should understand its purpose, be aware of its advantages and disadvantages, and know how to implement it.
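Implementing the sandwich formula is short; here is a minimal sketch (not from the handout; made-up data, a single regressor without an intercept so every matrix is scalar), compared against the conventional OLS variance that wrongly assumes homoskedasticity:

```python
# Made-up data; one regressor without intercept keeps every matrix scalar.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.3, 2.8, 4.5, 4.9]

sxx = sum(a * a for a in x)                       # X'X
beta = sum(a * b for a, b in zip(x, y)) / sxx     # OLS estimate
e2 = [(yi - beta * xi) ** 2 for xi, yi in zip(x, y)]

# Eicker-White (HC0): (X'X)^{-1} X' Diag[e_i^2] X (X'X)^{-1}
meat = sum(ei * xi * xi for xi, ei in zip(x, e2))
var_hc0 = meat / (sxx * sxx)

# Conventional OLS variance s^2 (X'X)^{-1}, valid only under homoskedasticity.
n, k = len(x), 1
s2 = sum(e2) / (n - k)
var_ols = s2 / sxx

assert var_hc0 > 0.0 and var_ols > 0.0
```

The two variance estimates generally differ; the finite-sample degrees-of-freedom adjustments mentioned next rescale `var_hc0` by factors such as n/(n − k).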
In finite samples, several adjustments based on degrees of freedom have been proposed to help make small-sample inference more accurate. Relative to an asymptotically correct β̂_FGLS, hypothesis testing based on the corrected standard errors is likely overstated. If OLS yields highly statistically significant results, however, then we can likely trust inferences based on OLS. If OLS yields results that are economically different from [F]GLS, the problem likely lies with another assumption.

5 Structural Approach to Serial Correlation

Serial correlation means that in the linear model y_t = x_t′β + ε_t, the variance matrix of the errors,

Ω = E(εε′|X)

has non-zero elements off the diagonal. Equivalently, E(ε_iε_j|X) ≠ 0 for some i ≠ j. We consider time series data because there we can write a plausible functional form for the relationship between the errors. We usually assume the error terms are weakly stationary, whereby Var(y_t) = σ²_y ∀t, thus returning to homoskedasticity.

As with pure heteroskedasticity, we consider how to construct consistent standard errors if we hypothesize that there is serial correlation. We can test for serial correlation and construct a feasible GLS estimator based on the estimation associated with an assumed structural form of the serial correlation. Alternatively, the Newey-West estimator uses OLS and corrects its standard errors so that they are consistent. The former is known as the structural approach because of the assumed structure of the serial correlation, whereas the latter is known as the nonstructural approach because of its applicability to any possible structure of the serial correlation, which need not be known.

5.1 First-Order Serial Correlation

Consider the linear model:

y_t = x_t′β + ε_t, t = 1, …, T

where Cov(ε_t, ε_s) ≠ 0. Specifically, we consider errors that follow a weakly stationary AR(1) process:

ε_t = ρε_{t−1} + u_t

where the u_t are i.i.d. with E(u_t) = 0 and Var(u_t) = σ², and the u_t are uncorrelated with x_t. This last assumption eliminates the possibility of having a lagged y among the regressors.

By stationarity, the variance of ε_t is the same ∀t:

Var(ε_t) = Var(ρε_{t−1} + u_t) = ρ²Var(ε_{t−1}) + Var(u_t) + 2ρCov(ε_{t−1}, u_t) = ρ²Var(ε_t) + σ² + 0

so Var(ε_t)(1 − ρ²) = σ², and Var(ε_t) = σ²/(1 − ρ²).

Also, note that by recursion we can express ε_t as

ε_t = ρε_{t−1} + u_t = ρ(ρε_{t−2} + u_{t−1}) + u_t = ρ²ε_{t−2} + ρu_{t−1} + u_t = … = ρ^s ε_{t−s} + Σ_{i=0}^{s−1} ρ^i u_{t−i}

Using this result, we can more easily compute the off-diagonal covariances of the variance-covariance matrix:

Cov(ε_t, ε_{t−s}) = Cov(ρ^s ε_{t−s} + Σ_{i=0}^{s−1} ρ^i u_{t−i}, ε_{t−s}) = ρ^s Var(ε_{t−s}) + 0 = ρ^s σ²/(1 − ρ²)

Using these results,

Var(ε) = σ²Ω = σ²/(1 − ρ²) ·
[ 1        ρ        ρ²       …  ρ^{T−1} ]
[ ρ        1        ρ        …  ρ^{T−2} ]
[ ρ²       ρ        1        …  ρ^{T−3} ]
[ …        …        …        …  …       ]
[ ρ^{T−1}  ρ^{T−2}  ρ^{T−3}  …  1       ]

We can use the Cholesky decomposition to derive β̂_GLS. Computing Ω⁻¹ and using the Cholesky decomposition Ω⁻¹ = H′H yields

H =
[ √(1−ρ²)  0   0  …  0   0 ]
[ −ρ       1   0  …  0   0 ]
[ 0       −ρ   1  …  0   0 ]
[ …        …   …  …  …   … ]
[ 0        0   0  …  −ρ  1 ]

The transformed model thus uses y* = Hy and x* = Hx, which written out is:

y*₁ = √(1−ρ²) y₁,  x*₁ = √(1−ρ²) x₁
y*_t = y_t − ρy_{t−1},  x*_t = x_t − ρx_{t−1}  for t = 2, …, T

Accordingly, except for the first observation, this transformation is known as generalized differencing.

5.2 Testing for Serial Correlation

If ρ ≠ 0 in the AR(1) model, then there is serial correlation; if the null hypothesis H₀: ρ = 0 is not rejected, the model reduces to the classical linear regression model. We assume that ε₀ equals zero so the sums start at t = 1. This assumption is not necessary, but it simplifies some of the calculations. Recall from the time series exercise done in section that an ordinary least squares estimate of ρ is:

ρ̃ = Σ_{t=1}^T ε_t ε_{t−1} / Σ_{t=1}^T ε²_{t−1}

This estimator can be rewritten to compute its limiting distribution:

√T(ρ̃ − ρ) = (T^{−1/2} Σ_{t=1}^T ε_{t−1}u_t) / (T⁻¹ Σ_{t=1}^T ε²_{t−1})

Recall the limiting distributions for the numerator and denominator:

T^{−1/2} Σ_{t=1}^T ε_{t−1}u_t →d N(0, σ⁴/(1 − ρ²))

T⁻¹ Σ_{t=1}^T ε²_{t−1} →p σ²/(1 − ρ²)

Thus by Slutsky's Theorem:

√T(ρ̃ − ρ) →d N(0, [σ⁴/(1 − ρ²)] / [σ²/(1 − ρ²)]²) = N(0, 1 − ρ²)

The problem with this estimator, however, is that we do not observe ε_t, so we cannot calculate ρ̃. However, we can express the least squares residual e_t as:

e_t = ε_t + x_t′(β − β̂)

Because β̂ depends on T, we can write e_t as e_{t,T}, where e_{t,T} →p ε_t as T → ∞. As a result, we can use probability theorems to show that

Σ_{t=1}^T e_t e_{t−1} / Σ_{t=1}^T e²_{t−1} − Σ_{t=1}^T ε_t ε_{t−1} / Σ_{t=1}^T ε²_{t−1} →p 0 as T → ∞

Accordingly, an asymptotically equivalent estimator based on the least squares residuals is:

ρ̂ = Σ_{t=1}^T e_t e_{t−1} / Σ_{t=1}^T e²_{t−1}, with √T(ρ̂ − ρ) →d N(0, 1 − ρ²)

Under the null hypothesis,

√T ρ̂ →d N(0, 1)

Thus, this test implies rejecting the null hypothesis if √T ρ̂ exceeds the upper-α critical value z(α) of a standard normal distribution.
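The estimator and test statistic above are a few lines of code; this sketch (not from the handout) uses a made-up residual series constructed so that e_t = 0.8 e_{t−1} exactly, with the ε₀ = 0 convention, so ρ̂ recovers 0.8:

```python
import math

# Made-up residual series with e_t = 0.8 * e_{t-1} exactly; convention e_0 = 0.
e = [1.0, 0.8, 0.64, 0.512]
T = len(e)
e_lag = [0.0] + e[:-1]                 # e_0 = 0, then e_1, ..., e_{T-1}

# rho_hat = sum_t e_t e_{t-1} / sum_t e_{t-1}^2
rho_hat = sum(a * b for a, b in zip(e, e_lag)) / sum(a * a for a in e_lag)
stat = math.sqrt(T) * rho_hat          # compare with z(0.05) = 1.645, one-sided

assert abs(rho_hat - 0.8) < 1e-12
assert stat < 1.645                    # T = 4 is far too small to reject here
```

Even with ρ̂ = 0.8, the statistic √T ρ̂ = 1.6 falls short of the 5-percent critical value because T = 4; the test has power only as T grows.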

Table 2: Summary of Tests for Serial Correlation

Name            | Expression                                      | Distribution under the null       | Comment
Breusch-Godfrey | T = NR²                                         | χ²_p                              | higher-order serial corr. and lagged dep. var.
Usual test      | √T ρ̂                                            | N(0, 1)                           | also chi-square: T ρ̂²
Durbin-Watson   | DW = Σ_{t=2}^T(ê_t − ê_{t−1})² / Σ_{t=1}^T ê²_t | exact critical values depend on X | normal approximation: DW ≈ 2(1 − ρ̂)
Durbin's h      | h = ρ̂ √(T / (1 − T[SE(β̂₁)]²))                   | N(0, 1)                           | lagged dep. variable; requires T[SE(β̂₁)]² < 1

Other tests exist, and they have specific characteristics that you should study in Professor Powell's notes. Table 2 summarizes the tests presented, ranked in decreasing order of generality. For instance, Breusch-Godfrey is general in the sense that we can test serial correlation of order p, and the test can be used with lagged dependent variables. The usual test and Durbin-Watson allow us to test first-order serial correlation, but recall that Durbin-Watson has an inconclusive region. The usual test statistic is straightforward, and it can also be used against a two-sided alternative hypothesis, whereas DW has exact critical values that depend on X. Durbin's h is useful for testing in the presence of a lagged dependent variable: with lagged dependent variables, ρ̂ has a distribution that is more tightly concentrated around zero than a standard normal, thus making it more difficult to reject the null.

5.3 Feasible GLS

After determining that there is indeed serial correlation, we can construct a feasible GLS estimator. Professor Powell presented 5 methods of constructing such an estimator that you should know insofar as they were discussed in lecture:

i) Prais-Winsten
ii) Cochrane-Orcutt
iii) Durbin's method
iv) Hildreth-Lu
v) MLE
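Of the methods listed, Cochrane-Orcutt is the simplest to sketch. The following illustration is not from the handout (made-up data, a single regressor without an intercept, one iteration): estimate β by OLS, estimate ρ from the residuals, apply generalized differencing dropping the first observation, and re-estimate β; Prais-Winsten would instead keep the first observation, scaled by √(1 − ρ̂²):

```python
# Made-up time series; one regressor, no intercept, one Cochrane-Orcutt pass.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.4, 2.1, 3.5, 3.9, 5.6, 6.0]

# Step 1: OLS of y on x and its residuals.
beta0 = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
e = [yi - beta0 * xi for xi, yi in zip(x, y)]

# Step 2: estimate rho by regressing e_t on e_{t-1}.
num = sum(e[t] * e[t - 1] for t in range(1, len(e)))
den = sum(e[t - 1] ** 2 for t in range(1, len(e)))
rho_hat = num / den

# Step 3: generalized differencing, dropping t = 1 (Cochrane-Orcutt).
xs = [x[t] - rho_hat * x[t - 1] for t in range(1, len(x))]
ys = [y[t] - rho_hat * y[t - 1] for t in range(1, len(y))]

# Step 4: OLS on the transformed data; iterate steps 2-4 to convergence if desired.
beta_co = sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

assert -1.0 < rho_hat < 1.0
```

With a correctly specified AR(1) structure, iterating these steps yields estimates of β and ρ with the asymptotic properties described in the next paragraph.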

Professor Powell also briefly discussed how to generalize the FGLS construction to the case of AR(p) serially correlated errors. As with heteroskedasticity, if the form of serial correlation is correctly specified, then these approaches give us estimators of β and ρ with the same asymptotic properties as β̂_GLS.

5.4 Exercises

As with heteroskedasticity, serial correlation has appeared regularly on exams. However, it has only appeared in the True and False section.

5.4.1 2002 Exam, Question 1C

Note that a nearly identical question appeared in the 2005 Exam.

Question: In the regression model with first-order serially correlated errors and fixed (nonrandom) regressors,

$$E(y_t) = x_t'\beta, \qquad Var(y_t) = \frac{\sigma^2}{1-\rho^2}, \qquad Cov(y_t, y_{t-1}) = \frac{\rho\sigma^2}{1-\rho^2}$$

So if the sample correlation of the dependent variable y_t with its lagged value y_{t−1} exceeds 1.96/√T in magnitude, we should reject the null hypothesis of no serial correlation, and should either estimate β and its asymptotic covariance matrix by FGLS (or some other efficient method) or replace the usual estimator of the LS covariance matrix by the Newey-West estimator (or some variant of it).

Answer: False. The statement would be correct if the phrase "...sample correlation of the dependent variable y_t with its lagged value y_{t−1}..." were replaced with "...sample correlation of the least squares residual e_t = y_t − x_t'β̂_LS with its lagged value e_{t−1}...". While the population autocovariance of y_t is the same as that of the errors ε_t = y_t − x_t'β because the regressors are assumed nonrandom, the sample autocovariance of y_t will involve both the sample autocovariance of the residuals e_t and the sample autocovariance of the fitted values ŷ_t = x_t'β̂_LS, which will generally be nonzero, depending upon the particular values of the regressors.
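A small simulation can illustrate the point of this answer: with a deterministic trending regressor and i.i.d. errors, y_t shows strong first-order sample autocorrelation inherited from its fitted values, while the residuals do not. The setup below is hypothetical and the names are my own, not from the notes.

```python
import numpy as np

# Fixed (nonrandom) trending regressor, i.i.d. errors: no serial correlation
# in eps_t, but the fitted values x_t' beta_hat are strongly autocorrelated.
rng = np.random.default_rng(2)
T = 400
trend = np.arange(T) / T
X = np.column_stack([np.ones(T), trend])
y = X @ np.array([1.0, 5.0]) + rng.standard_normal(T)

def autocorr1(z):
    """First-order sample autocorrelation of a series."""
    zc = z - z.mean()
    return np.sum(zc[1:] * zc[:-1]) / np.sum(zc ** 2)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

r_y = autocorr1(y)      # large: driven by the trend in the fitted values
r_e = autocorr1(resid)  # near zero: the errors are i.i.d.
```

So a large sample autocorrelation of y_t by itself does not signal serially correlated errors; the residual-based statistic is the one with the N(0, 1) null distribution derived earlier.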
5.4.2 2003 Exam, Question 1B

Question: In the linear model y_t = x_t'β + ε_t, if the conditional covariances of the error terms ε_t have the mixed heteroskedastic/autocorrelated form

$$Cov(\varepsilon_t, \varepsilon_s \mid X) = \rho^{|t-s|} \sqrt{x_t'\theta} \sqrt{x_s'\theta}$$

(where it is assumed x_t'θ > 0 with probability one), the parameters of the covariance matrix can be estimated in a multi-step procedure, first regressing least-squares residuals e_t = y_t − x_t'β̂_LS on their lagged values e_{t−1} to estimate ρ, then regressing the squared generalized differenced residuals û_t² (where û_t = e_t − ρ̂e_{t−1}) on x_t to estimate the θ coefficients.

Answer: False. Assuming x_t is stationary and E[ε_t|X] = 0, the probability limit of the LS regression of e_t on e_{t−1} will be

$$\rho^* = \frac{Cov(\varepsilon_t, \varepsilon_{t-1})}{Var(\varepsilon_{t-1})} = \frac{E[Cov(\varepsilon_t, \varepsilon_{t-1} \mid X)] + Cov[E(\varepsilon_t \mid X), E(\varepsilon_{t-1} \mid X)]}{E[Var(\varepsilon_{t-1} \mid X)] + Var[E(\varepsilon_{t-1} \mid X)]} = \frac{E[Cov(\varepsilon_t, \varepsilon_{t-1} \mid X)]}{E[Var(\varepsilon_{t-1} \mid X)]} = \frac{E\left[\rho \sqrt{x_t'\theta} \sqrt{x_{t-1}'\theta}\right]}{E[x_{t-1}'\theta]} \neq \rho \text{ in general.}$$

Note that the second equality uses the conditional variance identity (see Casella and Berger, p. 167). The remaining substitutions use stationarity and the expression given in the question for the conditional covariance of the errors.

To make this statement correct, we must reverse the order of the autocorrelation and heteroskedasticity corrections. First, since

$$Cov(\varepsilon_t, \varepsilon_t \mid X) = \rho^{|t-t|} \sqrt{x_t'\theta} \sqrt{x_t'\theta} = x_t'\theta,$$

we could regress ε_t² on x_t to estimate θ or, since ε_t is unobserved, regress e_t² on x_t (à la Breusch-Pagan). Given θ̂, we can reweight the residuals to form û_t = e_t/√(x_t'θ̂). Since Cov(u_t, u_{t−1}|X) = ρ, a least squares regression of û_t on û_{t−1} will consistently estimate ρ (as long as the least squares residuals e_t are consistent for the true errors ε_t).

5.4.3 2004 Exam, Question 1B

Question: In the linear model with a lagged dependent variable, y_t = x_t'β + γy_{t−1} + ε_t, suppose the error terms have first-order serial correlation, i.e., ε_t = ρε_{t−1} + u_t, where u_t is an i.i.d. sequence with zero mean and variance σ², and is independent of x_s for all t and s. For this model, the classical LS estimators will be inconsistent for β and γ, but Aitken's GLS estimator (for a known Ω matrix) will consistently estimate these parameters.

Answer: True. While the classical LS estimators of β and γ are indeed inconsistent because of the covariance between y_{t−1} and ε_t, the GLS estimator, with the correct value of ρ, will be consistent. Apart from the first observation (which would not make a difference in large samples), the GLS estimator is LS applied to the generalized differenced regression:

$$y_t^* = y_t - \rho y_{t-1} = (x_t - \rho x_{t-1})'\beta + \gamma(y_{t-1} - \rho y_{t-2}) + (\varepsilon_t - \rho\varepsilon_{t-1}) = x_t^{*\prime}\beta + \gamma y_{t-1}^* + u_t$$

But because u_t = ε_t − ρε_{t−1} is i.i.d., it will be independent of x_t* and y*_{t−1} = y_{t−1} − ρy_{t−2}, so E[u_t | x_t*, y*_{t−1}] = 0, as needed for consistency. So the problem with feasible GLS with lagged dependent variables is not the consistency of the estimators of β and γ given a consistent estimator of ρ, but rather the difficulty of obtaining a consistent estimator of ρ, since the usual least squares residuals involve inconsistent estimators of the regression coefficients.

6 Nonstructural Approach to Serial Correlation

A handful of estimators have been proposed in the style of Eicker-White to account for serial correlation. That is, we can use β̂_OLS = (X'X)⁻¹X'y and adjust the standard errors to obtain a consistent estimator of the asymptotic covariance matrix that accounts for possible serial correlation. Such methods do not require the structure of the serial correlation to be known, and they have advantages and disadvantages similar to those of Eicker-White. Recall that β̂_OLS is inefficient if there is serial correlation, but still consistent and approximately normally distributed, with

$$\sqrt{T}(\hat{\beta}_{LS} - \beta) \overset{d}{\to} N(0, D^{-1}VD^{-1})$$

where D = plim (1/T)X'X, V = plim (1/T)X'ΣX, and Σ = E[εε'|X]. Since we have a consistent estimator of D, namely D̂ = X'X/T, we just need a consistent estimator of V. One popular nonparametric choice is the Newey-West estimator, which is consistent:

$$\hat{V} = \hat{\Gamma}_0 + \sum_{j=1}^{M} \left(1 - \frac{j}{M}\right)(\hat{\Gamma}_j + \hat{\Gamma}_j')$$

where

$$\hat{\Gamma}_j = \frac{1}{T} \sum_{t=j+1}^{T} \hat{e}_t \hat{e}_{t-j} x_t x_{t-j}'$$

and M is the bandwidth parameter. This parameter is important because we weight down autocovariances near this threshold, which ensures that V̂ is a p.s.d. matrix. Some technical requirements are that M = M(T) grows with T and M/T^{1/3} → 0 as T → ∞. The proof of consistency for Newey-West is beyond the scope of the course; you should be familiar with its existence, purpose, and, vaguely, its construction.
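The Newey-West calculation can be sketched in a few lines. This is my own illustrative sketch, not code from the notes: it uses the Bartlett weights (1 − j/M) in the formula above (so the j = M term gets zero weight), and the function name and simulated data are hypothetical.

```python
import numpy as np

def newey_west_vcov(X, e, M):
    """Sketch of the Newey-West estimate of Var(beta_hat_OLS).

    Computes D^{-1} V_hat D^{-1} / T, where D = X'X/T and
    V_hat = Gamma_0 + sum_{j=1}^{M} (1 - j/M)(Gamma_j + Gamma_j'),
    with Gamma_j = (1/T) sum_{t=j+1}^{T} e_t e_{t-j} x_t x_{t-j}'.
    """
    T = X.shape[0]
    Xe = X * e[:, None]              # row t is e_t * x_t'
    V = (Xe.T @ Xe) / T              # Gamma_0
    for j in range(1, M + 1):
        Gj = (Xe[j:].T @ Xe[:-j]) / T
        V += (1.0 - j / M) * (Gj + Gj.T)
    D_inv = np.linalg.inv(X.T @ X / T)
    return D_inv @ V @ D_inv / T

# Usage: standard errors robust to heteroskedasticity and serial correlation
rng = np.random.default_rng(4)
T = 500
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
w = rng.standard_normal(T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.5 * eps[t - 1] + w[t]   # AR(1) errors
y = X @ np.array([1.0, 2.0]) + eps
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta_hat
vcov = newey_west_vcov(X, e, M=8)
se = np.sqrt(np.diag(vcov))
```

The down-weighting toward zero at j = M is what keeps V̂ positive semi-definite, so the diagonal of the resulting covariance matrix is nonnegative and the standard errors are well defined.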