Lecture 3: Multiple Regression. Prof. Sharyn O'Halloran. Sustainable Development U9611, Econometrics II. Spring 2005.


Outline
- Basics of multiple regression: dummy variables, interactive terms, curvilinear models
- Review strategies for data analysis:
  - Demonstrate the importance of inspecting, checking, and verifying your data before accepting the results of your analysis
  - Suggest that regression analysis can be misleading without probing the data, which could reveal relationships that a casual analysis would overlook
- Examples of data exploration

Multiple Regression
Example data:
   Y    X1    X2      X3
  34    15   -37   3.331
  24    18    59   1.111
Linear regression models (Sect. 9.2.1):
1. Model with 2 X's: µ(Y|X1,X2) = β0 + β1X1 + β2X2
2. Ex: Y = first-year GPA, X1 = Math SAT, X2 = Verbal SAT
3. Ex: Y = log(tree volume), X1 = log(height), X2 = log(diameter)

Important notes about interpretation of β's
Geometrically, β0 + β1X1 + β2X2 describes a plane: for a fixed value of X1, the mean of Y changes by β2 for each one-unit increase in X2.
If Y is expressed in logs, then Y changes by approximately 100·β2 percent for each one-unit increase in X2, and so on.
The meaning of a coefficient depends on which explanatory variables are included! β1 in µ(Y|X1) = β0 + β1X1 is not the same as β1 in µ(Y|X1,X2) = β0 + β1X1 + β2X2.
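The last point can be checked by simulation. The sketch below (hypothetical data; Python with numpy as a stand-in for the Stata used in this lecture) assumes a true model Y = 1 + 2·X1 + 3·X2 with X1 and X2 correlated, and fits Y on X1 alone and then on X1 and X2 together:

```python
import numpy as np

# Hypothetical data where X1 and X2 are correlated and both affect Y
rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)          # correlated with X1
y = 1 + 2 * x1 + 3 * x2 + rng.normal(size=n)

def ols(y, *cols):
    # least-squares coefficients with an intercept
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols(y, x1)       # mu(Y|X1): omits X2
b_long = ols(y, x1, x2)    # mu(Y|X1,X2)
# b_long[1] is near 2, but b_short[1] is near 2 + 3*0.5 = 3.5:
# same symbol beta_1, different meaning once X2 is dropped
```

The short-regression slope absorbs X2's effect through the X1-X2 correlation, which is exactly why β1 means something different in the two models.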

Specially constructed explanatory variables
- Polynomial terms, e.g. X², for curvature (see Display 9.6)
- Indicator variables to model effects of categorical variables:
  - One indicator variable (X = 0, 1) to distinguish 2 groups; Ex: X = 1 for females, 0 for males
  - (K-1) indicator variables to distinguish K groups; Ex:
    X2 = 1 if fertilizer B was used, 0 if A or C was used
    X3 = 1 if fertilizer C was used, 0 if A or B was used
- Product terms for interaction:
  µ(Y|X1,X2) = β0 + β1X1 + β2X2 + β3(X1X2)
  µ(Y|X1,X2=7) = (β0 + 7β2) + (β1 + 7β3)X1
  µ(Y|X1,X2=-9) = (β0 - 9β2) + (β1 - 9β3)X1
  The effect of X1 on Y depends on the level of X2.
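As a sketch of the (K-1)-indicator idea, the hypothetical Python/numpy example below codes three fertilizer groups A, B, C with two dummies (A as the reference category) and shows that the fitted coefficients are exactly the reference-group sample mean and the differences from it:

```python
import numpy as np

# Hypothetical fertilizer trial: groups A (reference), B, C
rng = np.random.default_rng(4)
groups = np.repeat(["A", "B", "C"], 50)
true_mean = {"A": 10.0, "B": 12.0, "C": 9.0}
y = np.array([true_mean[g] for g in groups]) + rng.normal(0, 1, groups.size)

# K - 1 = 2 indicator variables; group A is absorbed into the intercept
x_b = (groups == "B").astype(float)
x_c = (groups == "C").astype(float)
X = np.column_stack([np.ones(groups.size), x_b, x_c])
b = np.linalg.lstsq(X, y, rcond=None)[0]
# b[0] = sample mean of A; b[1], b[2] = B's and C's differences from A
```

This is why only K-1 indicators are needed: the Kth group's mean is already carried by the intercept.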

Sex discrimination?
Observation: disparity in salaries between males and females.
Theory: salary is related to years of experience.
Hypothesis: if there is no discrimination, gender should not matter.
Null hypothesis H0: β2 = 0.
Causal diagram: Years of Experience → Salary (β1, expected +); Gender → Salary (β2, in question).

Hypothetical sex discrimination example
Data: Yi = salary for teacher i; X1i = their years of experience; X2i = 1 for male teachers, 0 for female teachers.

  i      Y     X1   Gender   X2
  1   23000     4   male      1
  2   39000    30   female    0
  3   29000    17   female    0
  4   25000     7   male      1

Gender is a categorical factor; X2 is the corresponding indicator variable.

Model with Categorical Variables
Parallel lines model: µ(Y|X1,X2) = β0 + β1X1 + β2X2
For all females: µ(Y|X1,X2=0) = β0 + β1X1
For all males: µ(Y|X1,X2=1) = β0 + β1X1 + β2
Slopes: β1 for both groups. Intercepts: males β0 + β2, females β0.
For the subpopulation of teachers at any particular level of experience, the mean salary for males is β2 more than that for females.

Model with Interactions
µ(Y|X1,X2) = β0 + β1X1 + β2X2 + β3(X1X2)
For all females: µ(Y|X1,X2=0) = β0 + β1X1
For all males: µ(Y|X1,X2=1) = (β0 + β2) + (β1 + β3)X1
Intercepts: males β0 + β2, females β0. Slopes: males β1 + β3, females β1.
The mean salary for inexperienced males (X1=0) is β2 (dollars) more than the mean salary for inexperienced females. The rate of increase in salary with increasing experience is β3 (dollars per year) more for males than for females.
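A quick simulated check of the interaction model (hypothetical numbers; Python/numpy rather than Stata): generate salaries with different intercepts and slopes by gender, fit µ(Y|X1,X2) = β0 + β1X1 + β2X2 + β3(X1X2), and confirm that β2 recovers the intercept gap and β3 the slope gap.

```python
import numpy as np

# Hypothetical salary data: different intercepts AND slopes by gender
rng = np.random.default_rng(5)
n = 4000
exper = rng.uniform(0, 30, n)                  # X1: years of experience
male = rng.integers(0, 2, n).astype(float)     # X2: 1 = male, 0 = female
salary = (20000 + 500 * exper                  # female line
          + 2000 * male + 200 * exper * male   # male intercept/slope gaps
          + rng.normal(0, 1000, n))            # noise

# mu(Y|X1,X2) = b0 + b1*X1 + b2*X2 + b3*(X1*X2)
X = np.column_stack([np.ones(n), exper, male, exper * male])
b0, b1, b2, b3 = np.linalg.lstsq(X, salary, rcond=None)[0]
# b2 recovers the intercept gap (about 2000); b3 the slope gap (about 200)
```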

Model with curvilinear effects
Modelling curvature with parallel quadratic curves:
µ(Y|X1,X2) = β0 + β1X1 + β2X2 + β3X1²
Applied to the salary example:
µ(salary|exper, gender) = β0 + β1 exper + β2 Gender + β3 exper²

Notes about indicator variables
A t-test for H0: β2 = 0 in the regression of Y on a single indicator variable IB, µ(Y|IB) = β0 + β2IB, is the two-sample (difference of means) t-test.
Regression when all explanatory variables are categorical is analysis of variance. Regression with categorical variables and one numerical X is often called analysis of covariance. These terms are used more in the medical sciences than in social science. We'll just use the term regression analysis for all these variations.
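The equivalence between the indicator-variable t-test and the two-sample t-test can be verified numerically. A minimal sketch in Python/numpy (hypothetical data; the pooled-variance form of the two-sample test):

```python
import numpy as np

# Two hypothetical groups; I_B = 1 for group B, 0 for group A
rng = np.random.default_rng(2)
y_a = rng.normal(10, 2, 30)
y_b = rng.normal(12, 2, 40)
y = np.concatenate([y_a, y_b])
ib = np.concatenate([np.zeros(30), np.ones(40)])

# t-statistic for the indicator's coefficient in mu(Y|I_B) = b0 + b2*I_B
X = np.column_stack([np.ones_like(ib), ib])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)
cov = s2 * np.linalg.inv(X.T @ X)
t_ols = beta[1] / np.sqrt(cov[1, 1])

# Pooled-variance two-sample t-statistic
na, nb = len(y_a), len(y_b)
sp2 = ((na - 1) * y_a.var(ddof=1) + (nb - 1) * y_b.var(ddof=1)) / (na + nb - 2)
t_two_sample = (y_b.mean() - y_a.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
# the two statistics are algebraically identical
```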

Causation and Correlation
Causal conclusions can be made from randomized experiments, but not from observational studies. One way around this problem is to start with a model of your phenomenon and then test the implications of the model. These observations can disprove the model's hypotheses, but they cannot prove the hypotheses correct; they merely fail to reject the null.

Models and Tests
A model is an underlying theory about how the world works: assumptions, key players, strategic interactions, outcome set. Models can be qualitative, quantitative, formal, experimental, etc., but everyone uses models of some sort in their research.
Derive hypotheses. E.g., as per capita GDP increases, countries become more democratic.
Test hypotheses:
- Collect data on the outcome and key explanatory variables
- Identify the appropriate functional form
- Apply the appropriate estimation procedures
- Interpret the results

The traditional scientific approach
A virtuous cycle of theory informing data analysis, which in turn informs theory building:
Theory → Operational Hypothesis → Observation / Measurement → Statistical Test → Empirical Findings → back to Theory

Example of a scientific approach: female education reduces childbearing
Theory: women with higher education should have fewer children than those with less education.
Model: CBi = b0 + b1*educi + residi
Measurement questions: Using Ghana data? Women 15-49? Married or all women? How to measure education?
Inference questions: Is b1 significant? Positive or negative? Magnitude?

Strategies and Graphical Tools
1. Define the question of interest: (a) specify the theory; (b) state the hypothesis to be tested.
2. Review the study design: assumptions, logic, data availability; correct errors.
3. Explore the data: use graphical tools; consider transformations; fit a tentative model; check outliers.
4. Formulate an inferential model, derived from theory; state hypotheses in terms of model parameters.
5. Check the model: (a) model fit; (b) examine residuals (check for nonconstant variance; assess outliers); (c) see if terms can be eliminated. If the model is not OK, return to step 3.
6. Interpret results using the appropriate tools: confidence intervals, tests, prediction intervals.
7. Present results: tables, graphs, text.

Data Exploration
Graphical tools for exploration and communication:
- Matrix of scatterplots (9.5.1)
- Coded scatterplot (9.5.2): different plotting codes for different categories
- Jittered scatterplot (9.5.3)
- Point identification
Consider transformations. Fit a tentative model, e.g., linear, quadratic, interaction terms, etc. Check outliers.

Scatter plots
Scatter plot matrices provide a compact display of the relationships between a number of variable pairs.
[Figure: scatterplot matrix of the brain weight data before log transformation]

Scatter plots
Scatter plot matrices can also indicate outliers.
[Figure: scatterplot matrix of the brain weight data before log transformation; note the outliers in these relationships]

Scatterplot matrix for brain weight data after log transformation

Notice: the outliers are now gone!

Coded Scatter Plots
Coded scatter plots are obtained by using different plotting codes for different categories. In this example, the variable time has two possible values (1, 2), which are coded in the scatterplot using different symbols.

Jittering
Provides a clearer view of overlapping points.
[Figures: un-jittered vs. jittered scatterplot]
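Jittering is just adding a small amount of random noise to each point before plotting, so that overlapping points separate without materially changing their values. A sketch in Python/numpy (the ±0.15 jitter width is an arbitrary choice for illustration):

```python
import numpy as np

# Integer-coded data with heavy overplotting
rng = np.random.default_rng(3)
x = rng.integers(1, 6, size=200).astype(float)

# Jitter: add small uniform noise purely for display purposes
jitter_width = 0.15
x_jittered = x + rng.uniform(-jitter_width, jitter_width, size=x.size)
# each displayed point stays within jitter_width of its true value
```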

Point Identification
How to label points in STATA.

Transformations
This variable is clearly skewed. How should we correct it?

Transformations
Stata's ladder command runs a normality test for each transformation on the ladder of powers. Select the transformation with the lowest chi² statistic (the test checks each transformed distribution for normality).

. ladder enroll

Transformation        formula           chi2(2)    P(chi2)
------------------------------------------------------------------
cubic                 enroll^3             .        0.000
square                enroll^2             .        0.000
raw                   enroll               .        0.000
square-root           sqrt(enroll)       20.56      0.000
log                   log(enroll)         0.71      0.701
reciprocal root       1/sqrt(enroll)     23.33      0.000
reciprocal            1/enroll           73.47      0.000
reciprocal square     1/(enroll^2)         .        0.000
reciprocal cubic      1/(enroll^3)         .        0.000

Here the log transformation is closest to normal (chi² = 0.71, p = 0.701).

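Outside Stata, the same ladder-of-powers scan can be sketched with scipy's D'Agostino-Pearson normality test (the same skewness/kurtosis-based family of tests that ladder relies on). The data below are simulated lognormal "enrollments", so the log transformation should come out closest to normal:

```python
import numpy as np
from scipy import stats

# Simulated skewed "enrollment" counts (lognormal), standing in for enroll
rng = np.random.default_rng(1)
enroll = rng.lognormal(mean=5.0, sigma=0.8, size=400)

# Ladder of powers: test each transformation for normality
# (D'Agostino-Pearson K^2; lower statistic = closer to normal)
ladder = {
    "cubic": enroll**3,
    "square": enroll**2,
    "raw": enroll,
    "square-root": np.sqrt(enroll),
    "log": np.log(enroll),
    "reciprocal root": 1.0 / np.sqrt(enroll),
    "reciprocal": 1.0 / enroll,
}
k2 = {name: stats.normaltest(x).statistic for name, x in ladder.items()}
best = min(k2, key=k2.get)  # transformation closest to normal
```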

Transformations
A graphical view of the different transformations, using gladder.

Transformations
And yet another view, using qladder, which gives a quantile-normal plot of each transformation.

Fit a Tentative Model
This models GDP and democracy, using only a linear term:
Log GDP = β0 + β1 Polxnew

scatter lgdp polxnew if year==2000 & ~always10 || line plinear polxnew, sort legend(off) yti("Log GDP")

Fit a Tentative Model
The residuals from this regression are clearly U-shaped.
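The U-shape diagnostic is easy to reproduce: fit a straight line to data whose true relation is quadratic, and the residuals end up strongly correlated with the square of the regressor. A sketch with hypothetical data (Python/numpy):

```python
import numpy as np

# Data whose true relation is quadratic, fit with a straight line only
rng = np.random.default_rng(8)
x = rng.uniform(-10, 10, 300)
y = 7 + 0.05 * x**2 + rng.normal(0, 0.3, 300)

X = np.column_stack([np.ones_like(x), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b
# U-shaped residuals: strongly correlated with x^2
r = np.corrcoef(resid, x**2)[0, 1]
```

This is exactly the pattern that motivates adding a quadratic term on the next slide.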

Fit a Tentative Model
This models GDP and democracy, using a quadratic term as well:
Log GDP = β0 + β1 Polxnew + β2 Polxnew²

scatter lgdp polxnew if year==2000 & ~always10 || line predy polxnew, sort legend(off) yti("Log GDP")

Fit a Tentative Model
Now the residuals look normally distributed.

Check for Outliers
This models GDP and democracy, using a quadratic term. Note the potential outliers.

scatter lgdp polxnew if year==2000 & ~always10 || line predy polxnew, sort legend(off) yti("Log GDP")

Check for Outliers
Identify the outliers: Malawi and Iran.

scatter lgdp polxnew if year==2000 & ~always10 & (sftgcode=="mal" | sftgcode=="irn"), mlab(sftgcode) mcolor(red) || scatter lgdp polxnew if year==2000 & ~always10 & (sftgcode!="mal" & sftgcode!="irn") || line predy polxnew, sort legend(off) yti("Log GDP")

Check for Outliers

. reg lgdp polxnew polx2 if year==2000 & ~always10

      Source |       SS       df       MS              Number of obs =      97
-------------+------------------------------           F(  2,    94) =   34.84
       Model |  36.8897269     2  18.4448635           Prob > F      =  0.0000
    Residual |  49.7683329    94   .52945035           R-squared     =  0.4257
-------------+------------------------------           Adj R-squared =  0.4135
       Total |  86.6580598    96  .902688123           Root MSE      =  .72763

------------------------------------------------------------------------------
        lgdp |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     polxnew |  -.0138071   .0173811    -0.79   0.429    -.0483177    .0207035
       polx2 |    .022208   .0032487     6.84   0.000     .0157575    .0286584
       _cons |   7.191465   .1353228    53.14   0.000     6.922778    7.460152
------------------------------------------------------------------------------

Try the analysis without the outliers; the results are essentially the same:

. reg lgdp polxnew polx2 if year==2000 & ~always10 & (sftgcode!="mal" & sftgcode!="irn")

      Source |       SS       df       MS              Number of obs =      95
-------------+------------------------------           F(  2,    92) =   42.67
       Model |  40.9677226     2  20.4838613           Prob > F      =  0.0000
    Residual |   44.164877    92  .480053011           R-squared     =  0.4812
-------------+------------------------------           Adj R-squared =  0.4699
       Total |  85.1325996    94  .905665953           Root MSE      =  .69286

------------------------------------------------------------------------------
        lgdp |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     polxnew |  -.0209735   .0166859    -1.26   0.212    -.0541131    .0121661
       polx2 |   .0244657   .0031649     7.73   0.000       .01818    .0307514
       _cons |   7.082237   .1328515    53.31   0.000     6.818383    7.346092
------------------------------------------------------------------------------

So leave the outliers in the model; see Display 3.6 for other strategies.

EXAMPLE: Rainfall and Corn Yield (Exercise 9.15, page 261)
Dependent variable (Y): yield
Explanatory variables (Xs): rainfall, year
Steps:
1. Linear regression (scatterplot with linear regression line)
2. Quadratic model (scatterplot with quadratic regression curve)
3. Conditional scatterplots for yield vs. rainfall (selecting different years)
4. Regression model with quadratic functions and interaction terms

Model of Rainfall and Corn Yield
Let's say that we collected data on corn yields from various farms. Varying amounts of rainfall could affect yield, but this relation may change over time. The causal model would then look like this:
Rain → Yield (expected +), with Year moderating the effect of rain.

Scatterplot
Yield = β0 + β1 Rainfall
Initial scatterplot of yield vs. rainfall, and residual plot from the simple linear regression fit:

reg yield rainfall
graph twoway lfit yield rainfall || scatter yield rainfall, msymbol(d) mcolor(cranberry) ytitle("Corn Yield") xtitle("Rainfall") title("Scatterplot of Corn Yield vs Rainfall")
rvfplot, yline(0) xtitle("Fitted: Rainfall")

[Figures: scatterplot of corn yield vs. rainfall with fitted line; residuals vs. fitted values]

Quadratic fit: better represents the yield trend
Yield = β0 + β1 Rainfall + β2 Rainfall²

gen rainfall2=rainfall^2
reg yield rainfall rainfall2
graph twoway qfit yield rainfall || scatter yield rainfall, msymbol(d) mcolor(cranberry) ytitle("Corn Yield") xtitle("Rainfall") title("Quadratic regression curve")
rvfplot, yline(0) xtitle("Fitted: Rainfall+(Rainfall^2)")

[Figures: quadratic regression curve over the scatterplot; residuals vs. fitted values]
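The gain from the quadratic term can be sketched the same way in Python (simulated rainfall/yield data shaped like the corn example; all numbers hypothetical): compare the residual sums of squares of the linear and quadratic fits.

```python
import numpy as np

# Hypothetical concave rainfall-yield data: rises, then levels off
rng = np.random.default_rng(6)
rain = rng.uniform(6, 16, 200)
corn = 10 + 4 * rain - 0.17 * rain**2 + rng.normal(0, 1.5, 200)

def sse(X, y):
    # residual sum of squares from a least-squares fit
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b
    return r @ r

ones = np.ones_like(rain)
sse_linear = sse(np.column_stack([ones, rain]), corn)
sse_quadratic = sse(np.column_stack([ones, rain, rain**2]), corn)
# the quadratic term absorbs a large share of the leftover variation
```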

Quadratic fit: residual plot vs. time
Since the data were collected over time, we should check for a time trend and serial correlation by plotting residuals vs. time.
Yield = β0 + β1 Rainfall + β2 Rainfall²
1. Run the regression
2. Predict the residuals
3. Graph a scatterplot of residuals vs. time

Graph: Scatterplot of residuals vs. year
Yield = β0 + β1 Rainfall + β2 Rainfall²

[Figure: residuals from the quadratic model plotted against year, 1890-1930, with fitted line]

There does appear to be a trend. There is no obvious serial correlation (more in Ch. 15). Note: Year is not an explanatory variable in this regression model.

Adding a time trend
Yield = β0 + β1 Rainfall + β2 Rainfall² + β3 Year
Include Year in the regression model.

[Figures: residuals vs. fitted values (Rainfall + Rainfall² + Year); residual-versus-predictor plot of residuals vs. year]

Partly because of the outliers and partly because we suspect that the effect of rain might be changing over 1890 to 1928 (because of improvements in agricultural techniques, including irrigation), it seems appropriate to further investigate the interactive effect of year and rainfall on yield.

Conditional scatter plots
Note: The conditional scatterplots show the effect of rainfall on yield to be smaller in later time periods.

Conditional scatter plots

[Figure: four panels of yield vs. rainfall with fitted lines, for 1890-1898, 1899-1908, 1909-1917, and 1918-1927]

Fitted Model
Final regression model with quadratic functions and interaction terms:
Yield = β0 + β1 Rainfall + β2 Rainfall² + β3 Year + β4 (Rainfall×Year)

Quadratic regression lines for 1890, 1910 & 1927
Yield = β0 + β1 Rainfall + β2 Rainfall² + β3 Year + β4 (Rainfall×Year)
1. Run the regression.
2. Use the regression estimates, substituting the corresponding year into the model, to generate three new variables: the predicted yields for year = 1890, 1910, 1927. For example:
Pred1890 = β0 + β1 Rainfall + β2 Rainfall² + β3·1890 + β4 (Rainfall·1890)
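The prediction step can be sketched generically (Python/numpy, simulated data in the spirit of the corn example; all coefficients are hypothetical): fit the interaction model, then evaluate the fitted rainfall curve at fixed years, confirming that a negative Rainfall×Year interaction flattens the curve in later years.

```python
import numpy as np

# Simulated data: the rainfall effect weakens over time via a negative
# Rainfall*Year interaction (hypothetical coefficients throughout)
rng = np.random.default_rng(7)
n = 500
year = rng.integers(1890, 1928, n).astype(float)
rain = rng.uniform(6, 16, n)
y = (-200 + 25 * rain - 0.2 * rain**2
     + 0.1 * year - 0.01 * rain * year + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), rain, rain**2, year, rain * year])
b = np.linalg.lstsq(X, y, rcond=None)[0]

def pred(rain_val, yr):
    # plug a fixed year into the fitted model, as with Pred1890 on the slide
    return (b[0] + b[1] * rain_val + b[2] * rain_val**2
            + b[3] * yr + b[4] * rain_val * yr)

# effect of one extra inch of rain (at 8 inches), early vs. late period
gain_1890 = pred(9, 1890) - pred(8, 1890)
gain_1927 = pred(9, 1927) - pred(8, 1927)
```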

The predicted yield values generated for years 1890, 1910, and 1927.

Yearly corn yield vs. rainfall between 1890 and 1927, with quadratic regression lines for years 1890, 1910, and 1927.

Summary of Findings
As evident in the scatterplot above, the mean yearly yield of corn in six Midwestern states from 1890 to 1927 increased with increasing rainfall up to a certain optimum rainfall, and then leveled off or decreased with rain in excess of that amount (the p-value from a t-test for the quadratic effect of rainfall on mean corn yield is 0.014). There is strong evidence, however, that the effect of rainfall changed over this period of observation (the p-value from a t-test for the interactive effect of year and rainfall is 0.002). Representative quadratic fits to the regression of corn yield on rainfall are shown in the plot for 1890, 1910, and 1927. It is apparent that less rainfall was needed to produce the same mean yield as time progressed.

Example: Causes of Student Academic Performance
We randomly sample 400 elementary schools from the California Department of Education's API 2000 dataset. The data contain a measure of school academic performance as well as other attributes of the elementary schools, such as class size, enrollment, and poverty. See Handout.