Handout 11: Measurement Error


In which you learn to recognise the consequences for OLS estimation when some of the variables you use are not measured as accurately as you might expect. A (potential) solution to the problem is also discussed: instrumental variable estimation.

Often a data set will contain imperfect measures of the variables we would ideally like.

Aggregate data (GDP, consumption, investment): only best guesses of their theoretical counterparts, and frequently revised by government statisticians (so earlier estimates must have been subject to error).

Survey data (income, health, age): individuals often lie, forget or round to the nearest large number (£102 a week or £100?).

Proxy data (ability, intelligence, permanent income): difficult to agree on a definition, let alone measure.

Measurement Error in the Dependent Variable

True:    y = b0 + b1X + u        (1)
Observe: ỹ = y + e               (2)

i.e. the dependent variable is measured with error e, where e is a random residual term just like u, so E(e) = 0.

Substituting (2) into (1):

ỹ - e = b0 + b1X + u
ỹ = b0 + b1X + u + e
ỹ = b0 + b1X + v,   where v = u + e        (3)

It is fine to estimate (3) by OLS, since E(u) = E(e) = 0 and Cov(X,u) = Cov(X,e) = 0 (there is nothing to suggest the X variable is correlated with measurement error in the dependent variable). So OLS estimates are unbiased in this case, but the standard errors are larger than they would be in the absence of measurement error:

True:     Var(β̂1) = σu² / (N Var(X))             (A)
Estimate: Var(β̃1) = (σu² + σe²) / (N Var(X))     (B)

since Var(v) = Var(u+e) = Var(u) + Var(e) + 2Cov(e,u), and assuming the things that cause measurement error in y are unrelated to the residual u, Cov(e,u) = 0. (B) shows that the residual variance in the presence of measurement error in the dependent variable now also contains an additional contribution from the error in the y variable, σe².
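Both results can be seen in a controlled setting with a minimal simulation sketch (the variable names below are hypothetical, not part of any data set used in this handout). The coefficients in the two regressions should be virtually identical, but the second should have a larger Root MSE and larger standard errors:

. clear
. set seed 12345
. set obs 10000
. generate x = rnormal(0, 10)     /* true regressor */
. generate u = rnormal(0, 5)      /* structural residual u */
. generate y = 25 + 0.6*x + u     /* true dependent variable */
. generate e = rnormal(0, 5)      /* measurement error in y */
. generate y_obs = y + e          /* observed dependent variable */
. reg y x                         /* benchmark: slope near 0.6 */
. reg y_obs x                     /* same slope in expectation, larger s.e. */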


Measurement Error in an Explanatory Variable

True:    y = b0 + b1X + u        (1)
Observe: X̃ = X + w               (2)

i.e. the right-hand-side variable is measured with error w.

Substituting (2) into (1):

y = b0 + b1(X̃ - w) + u
y = b0 + b1X̃ - b1w + u
y = b0 + b1X̃ + v,   where now v = u - b1w        (3)

(so the residual term again consists of two components). Does this matter? In the model without measurement error we know that

b̂1 = Cov(X,y)/Var(X) = Cov(X, b0 + b1X + u)/Var(X) = b1 + Cov(X,u)/Var(X)        (4)

and by assumption Cov(X,u) = 0, so that E(b̂1) = b1. But in the presence of measurement error we estimate (3), and so

Cov(X̃,v) = Cov(X + w, -b1w + u)

Expanding terms (using (2) and (3)):

Cov(X̃,v) = Cov(X,u) + Cov(X,-b1w) + Cov(w,u) + Cov(w,-b1w)

Since u and w are independent errors (caused by different factors), there is no reason to expect them to be correlated with each other or with the value of X (any error in X should not depend on the level of X), so Cov(w,u) = Cov(X,u) = Cov(X,-b1w) = 0. This leaves

Cov(X̃,v) = Cov(w,-b1w) = -b1Cov(w,w) = -b1Var(w) ≠ 0

In other words, there is now a correlation between the measured X variable and the error term in (3). So if we estimate (3) by OLS, this violates the main assumption needed to get an unbiased estimate: we need the covariance between the explanatory variable and the residual to be zero (see earlier notes). This is not the case when an explanatory variable is measured with error. (4) becomes


b̂1 = Cov(X̃,y)/Var(X̃) = Cov(X̃, b0 + b1X̃ + v)/Var(X̃) = b1 + Cov(X̃,v)/Var(X̃) = b1 - b1Var(w)/Var(X̃)        (5)

so E(b̂1) ≠ b1, and OLS gives biased estimates in the presence of measurement error in the explanatory variable.

Not only that: using (5), one can show that the OLS estimates are always biased toward zero (attenuation bias). When Cov(X,w) = 0 we have Var(X̃) = Var(X) + Var(w), so (5) scales the true slope by the factor Var(X)/(Var(X) + Var(w)), which lies between 0 and 1. Hence

if b1 > 0 then b̂1(OLS) < b1
if b1 < 0 then b̂1(OLS) > b1

i.e. closer to zero in both cases (which means it is harder to reject any test that the coefficient is zero).

Example of the Consequences of Measurement Error in Data

This example uses artificially generated measurement error to illustrate the basic issues.

. list   /* list observations in data set */

            y_   y_observ   x_   x_observ
  1.   75.4666    67.6011   80         60
  2.   74.9801    75.4438  100         80
  3.  102.8242   109.6956  120        100
  4.  125.7651   129.4159  140        120
  5.  106.5035   104.2388  160        140
  6.  131.4318   125.8319  180        200
  7.  149.3693   153.9926  200        220
  8.  143.8628   152.9208  220        240
  9.  177.5218   176.3344  240        260
 10.  182.2748   174.5252  260        280

The data set measerr.dta gives the (unobserved) true values of y and x and their observed counterparts (the first 5 observations on x underestimate the true value by 20 and the last 5 overestimate it by 20).

First look at the regression of true y on true x:

. reg y_ x_

      Source |       SS       df       MS              Number of obs =      10
-------------+------------------------------           F(  1,     8) =  105.60
       Model |  11879.9941     1  11879.9941           Prob > F      =  0.0000
    Residual |  900.003252     8  112.500407           R-squared     =  0.9296
-------------+------------------------------           Adj R-squared =  0.9208
       Total |  12779.9974     9  1419.99971           Root MSE      =  10.607

          y_ |      Coef.   Std. Err.       t    P>|t|     [95% Conf. Interval]
-------------+-----------------------------------------------------------------
          x_ |   .5999999   .0583875   10.276   0.000      .465358    .7346417
       _cons |   25.00002   10.47727    2.386   0.044     .8394026    49.16064
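Before turning to the error-ridden regressions below, the attenuation factor can be seen almost exactly on a large simulated sample, where sampling noise is negligible. This is a sketch with hypothetical names: Var(X) = 100 and Var(w) = 25, so (5) predicts a slope of roughly 0.6 × 100/125 = 0.48:

. clear
. set seed 54321
. set obs 10000
. generate x = rnormal(0, 10)     /* true regressor, Var(X) = 100 */
. generate u = rnormal(0, 5)
. generate y = 25 + 0.6*x + u
. generate w = rnormal(0, 5)      /* classical measurement error, Var(w) = 25 */
. generate x_obs = x + w
. reg y x                         /* slope close to 0.60 */
. reg y x_obs                     /* slope close to 0.48: attenuated toward zero */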


Now look at the consequence of measurement error in the dependent variable:

. reg y_obs x_

      Source |       SS       df       MS              Number of obs =      10
-------------+------------------------------           F(  1,     8) =   77.65
       Model |  11880.0048     1  11880.0048           Prob > F      =  0.0000
    Residual |  1223.99853     8  152.999817           R-squared     =  0.9066
-------------+------------------------------           Adj R-squared =  0.8949
       Total |  13104.0033     9  1456.00037           Root MSE      =  12.369

    y_observ |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+-----------------------------------------------------------------
          x_ |   .6000001   .0680908    8.812   0.000     .4429824    .7570178
       _cons |   24.99999   12.21846    2.046   0.075    -3.175826    53.17581

Consequence: the coefficients are virtually identical (unbiased), but the standard errors are larger, hence the t values are smaller and the confidence intervals wider.

Measurement error in the explanatory variable:

. reg y_ x_obs

      Source |       SS       df       MS              Number of obs =      10
-------------+------------------------------           F(  1,     8) =   83.15
       Model |  11658.3625     1  11658.3625           Prob > F      =  0.0000
    Residual |  1121.63494     8  140.204367           R-squared     =  0.9122
-------------+------------------------------           Adj R-squared =  0.9013
       Total |  12779.9974     9  1419.99971           Root MSE      =  11.841

          y_ |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+-----------------------------------------------------------------
    x_observ |   .4522529   .0495956     9.12   0.000     .3378852    .5666206
       _cons |   50.11701   9.225319     5.43   0.001     28.84338    71.39063

Consequence: both coefficients are biased. The slope coefficient is biased toward zero (0.45 compared with 0.60, i.e. the effect is underestimated by about 25%), and the intercept is biased upward (compare 50.1 with 25.0). The problem is that the right-hand-side variable and the residual are correlated (Cov(X̃,v) ≠ 0). In such situations the X variable is said to be endogenous.
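As a check on the slope of 0.4523, the sample moments of the listed data can be plugged into the covariance formula. (In this artificial data set the error w is not independent of X: it is -20 for the five smallest values and +20 for the five largest, so a Cov(X,w) term has to be kept rather than using the classical factor above. The arithmetic below is mine, using sums of squared deviations from the listed data, which leave the ratio unaffected: Σ(X-X̄)² = 33000, Σw² = 4000, Σ(X-X̄)w = 10000.)

b̂1 ≈ b1 [Σ(X-X̄)² + Σ(X-X̄)w] / [Σ(X-X̄)² + Σw² + 2Σ(X-X̄)w]
   = 0.6 × (33000 + 10000) / (33000 + 4000 + 2×10000)
   = 25800 / 57000 ≈ 0.453

which is close to the estimated 0.4523; the small remaining gap comes from the residual u.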


Solution? Get better data. If that is not possible, do something to get round the problem: replace the variable causing the correlation with the residual with one that is not so correlated, but that at the same time is still related to the original variable. Any variable that has these 2 properties is called an Instrumental Variable.

More formally, an instrument Z for the problem variable X satisfies

1) Cov(X,Z) ≠ 0   correlated with the problem variable
2) Cov(Z,u) = 0   but uncorrelated with the residual (so it does not suffer from measurement error and is also not correlated with any unobservable factors influencing the dependent variable)

Instrumental variable (IV) estimation proceeds as follows. Given a model

y = b0 + b1X + u        (1)

take the covariance of both sides with the instrument Z:

Cov(Z,y) = Cov(Z,b0) + b1Cov(Z,X) + Cov(Z,u) = 0 + b1Cov(Z,X) + 0

(using the rule that the covariance with a constant is zero, together with the assumptions above). So

b1IV = Cov(Z,y)/Cov(Z,X)   (compare with b1OLS = Cov(X,y)/Var(X))

The IV estimator is consistent, i.e. unbiased in large samples (this can be proved using steps similar to those above), which makes it a useful estimation technique to employ. However, it can be shown (in the 2-variable case) that

Var(β̂1,IV) = s² / (N Var(X) r²XZ)

where r²XZ is the square of the correlation coefficient between the endogenous variable and the instrument (compared with OLS: Var(β̂1,OLS) = s² / (N Var(X))).


So IV estimation is less precise (less efficient) than OLS estimation when the explanatory variables are exogenous, but the greater the correlation between X and Z, the smaller is Var(β̂1,IV), i.e. IV estimates are generally more precise when the correlation between instrument and endogenous variable is large. (However, if X and Z were perfectly correlated then Z would also be correlated with u, and the problem would not be solved.)

Even though IV estimation is unbiased, this property is asymptotic, i.e. it only holds when the sample size is very large. Since we can always write the IV estimator as

b1IV = Cov(Z,y)/Cov(Z,X) = b1 + Cov(Z,u)/Cov(Z,X)

(just substitute in for y from (1)), in small samples poor correlation between X and Z can lead to small-sample bias.

So: always check the extent of the correlation between X and Z before any IV estimation.

In large samples you can have as many instruments as you like, though finding good ones is a different matter. In large samples more is better; in small samples a minimum number of instruments is better (bias in small samples increases with the number of instruments).

Where to find good instruments? This is difficult, and the appropriate instrument will vary depending on the issue under study. In the case of measurement error, one could use the rank of X as an instrument, i.e. order the variable X by size and use the position in the ordering rather than the actual value (though this assumes that the measurement error is not so large as to affect the ordering of the X variable).

. egen rankx=rank(x_obs)   /* stata command to create the ranking of x_observ */
. list x_obs rankx

      x_observ   rankx
  1.        60       1
  2.        80       2
  3.       100       3
  4.       120       4
  5.       140       5
  6.       200       6
  7.       220       7
  8.       240       8
  9.       260       9
 10.       280      10

Ranks run from the smallest observed x to the largest. Now do instrumental variable estimates using rankx as the instrument for x_obs.
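The transcription does not include the output of this final step, but the command (in the older Stata syntax used in the next example) would be:

. ivreg y_ (x_obs = rankx)

Since the measurement error here does not change the ordering of x, the rank instrument satisfies both conditions above, and the slope estimate should move back toward the true value of 0.6. Equivalently, the IV slope can be computed directly from the covariance formula b1IV = Cov(Z,y)/Cov(Z,X); a sketch (after correlate, r(cov_12) holds the sample covariance of the two listed variables):

. quietly correlate rankx y_, covariance
. scalar cov_zy = r(cov_12)
. quietly correlate rankx x_obs, covariance
. scalar cov_zx = r(cov_12)
. display "IV slope = " cov_zy/cov_zx    /* Cov(Z,y)/Cov(Z,X) */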


Example: The data set ivdat.dta contains information on the number of GCSE passes of a sample of 16 year olds and the total income of the household in which they live. Income tends to be measured with error: individuals tend to mis-report incomes, particularly third-party incomes and non-labour income. The following regression may therefore be subject to measurement error in one of the right-hand-side variables (the gender dummy variable is less subject to error).

. reg nqfede inc1 female

      Source |       SS       df       MS              Number of obs =     252
-------------+------------------------------           F(  2,   249) =   14.55
       Model |  274.029395     2  137.014698           Prob > F      =  0.0000
    Residual |   2344.9706   249  9.41755263           R-squared     =  0.1046
-------------+------------------------------           Adj R-squared =  0.0974
       Total |     2619.00   251  10.4342629           Root MSE      =  3.0688

      nqfede |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+-----------------------------------------------------------------
        inc1 |   .0396859   .0087786     4.52   0.000      .022396    .0569758
      female |   1.172351    .387686     3.02   0.003     .4087896    1.935913
       _cons |   4.929297   .4028493    12.24   0.000      4.13587    5.722723

Suppose income is instrumented using the rank of household income in the sample:

. ivreg nqfede (inc1=ranki) female

Instrumental variables (2SLS) regression

      Source |       SS       df       MS              Number of obs =     252
-------------+------------------------------           F(  2,   249) =   13.09
       Model |  270.466601     2    135.2333           Prob > F      =  0.0000
    Residual |   2348.5334   249  9.43186104           R-squared     =  0.1033
-------------+------------------------------           Adj R-squared =  0.0961
       Total |        2619   251  10.4342629           Root MSE      =  3.0711

      nqfede |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+-----------------------------------------------------------------
        inc1 |   .0450854   .0107683     4.19   0.000     .0238768     .066294
      female |   1.176652   .3880121     3.03   0.003     .4124481    1.940856
       _cons |   4.753386   .4513194    10.53   0.000     3.864496    5.642277

Instrumented:  inc1
Instruments:   female ranki

The estimated coefficient on income is further away from zero (confirming the presence of attenuation bias in the OLS estimates). Note that the standard error on income in the instrumented regression is larger than the standard error in the OLS regression.
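Two practical notes on this example (the commands below are modern additions, not part of the original handout): the correlation between instrument and endogenous variable can be checked directly, as recommended above, and from Stata 10 onwards the ivreg command has been replaced by ivregress:

. correlate inc1 ranki                           /* check instrument relevance */
. ivregress 2sls nqfede female (inc1 = ranki)    /* modern equivalent of ivreg */
. estat firststage                               /* first-stage F statistic    */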


Testing for Endogeneity

It is good practice to compare OLS and IV estimates. If the estimates are very different, this may be a sign that things are amiss. Using the idea that IV estimation will always be (asymptotically) unbiased, whereas OLS will only be unbiased if Cov(X,u) = 0, we can do the following.

Wu-Hausman Test for Endogeneity

1. Given

   y = b0 + b1X + u        (A)

   regress X on the instrument(s) Z:

   X = d0 + d1Z + v        (B)

   and save the residuals v̂.

2. Include this residual as an extra term in the original model, i.e. estimate

   y = b0 + b1X + b2v̂ + e

   and test whether b2 = 0 (using a t test).

3. If b2 = 0, conclude there is no correlation between X and u. If b2 ≠ 0, conclude there is correlation between X and u. (Intuitively: since Z is assumed uncorrelated with u, i.e. the Z variables are exogenous, the only way X could be correlated with u is through v in (B), since u = b2v + e.)

Example: To test endogeneity of the household income variable, first regress the suspect variable on the instrument and any exogenous variables in the original regression:

. reg inc1 ranki female

      Source |       SS       df       MS              Number of obs =     252
-------------+------------------------------           F(  2,   249) =  247.94
       Model |  81379.4112     2  40689.7056           Prob > F      =  0.0000
    Residual |   40863.626   249  164.110948           R-squared     =  0.6657
-------------+------------------------------           Adj R-squared =  0.6630
       Total |  122243.037   251  487.024053           Root MSE      =  12.811

        inc1 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+-----------------------------------------------------------------
       ranki |   .2470712   .0110979    22.26   0.000     .2252136    .2689289
      female |   .2342779   1.618777     0.14   0.885    -2.953962    3.422518
       _cons |   .7722511   1.855748     0.42   0.678    -2.882712    4.427214

1. Save the residuals:


. predict uhat, resid

2. Include the residuals as an additional regressor in the original equation:

. reg nqfede inc1 female uhat

      Source |       SS       df       MS              Number of obs =     252
-------------+------------------------------           F(  3,   248) =    9.94
       Model |  281.121189     3  93.7070629           Prob > F      =  0.0000
    Residual |  2337.87881   248  9.42693069           R-squared     =  0.1073
-------------+------------------------------           Adj R-squared =  0.0965
       Total |     2619.00   251  10.4342629           Root MSE      =  3.0703

      nqfede |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+-----------------------------------------------------------------
        inc1 |   .0450854   .0107655     4.19   0.000     .0238819    .0662888
      female |   1.176652   .3879107     3.03   0.003     .4126329    1.940672
        uhat |  -.0161473   .0186169    -0.87   0.387    -.0528147    .0205201
       _cons |   4.753386   .4512015    10.53   0.000     3.864711    5.642062

The added residual is not statistically significantly different from zero, so we conclude that there is no endogeneity bias in the OLS estimates, and hence no need to instrument.

Note you can also get this result by typing the following command after the ivreg command:

. ivendog

Tests of endogeneity of: inc1
H0: Regressor is exogenous
  Wu-Hausman F test:              0.75229   F(1,248)    P-value = 0.38659
  Durbin-Wu-Hausman chi-sq test:  0.76211   Chi-sq(1)   P-value = 0.38267

The first test is simply the square of the t value on uhat in the last regression (since t² = F).

N.B. This test is only as good as the instruments used, and it is only valid asymptotically. This may be a problem in small samples, so you should generally use the test only with sample sizes well above 100.
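A final note: ivendog comes from the user-written ivreg2/ivendog add-on. In current Stata the same Durbin and Wu-Hausman statistics are available after ivregress through a postestimation command (a modern equivalent, not part of the original handout):

. ivregress 2sls nqfede female (inc1 = ranki)
. estat endogenous          /* reports Durbin chi2 and Wu-Hausman F */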