3. Multiple Regression Analysis

The general linear regression with k explanatory variables is just an extension of the simple regression:

(1)   $y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik} + u_i$.

Because

(2)   $\dfrac{\partial y_i}{\partial x_{ij}} = \beta_j, \quad j = 1, \ldots, k$,

the coefficient $\beta_j$ is the marginal effect of variable $x_j$: the amount by which $y$ is expected to change when $x_j$ changes by one unit and the other variables are kept constant (ceteris paribus). Multiple regression opens up several additional options to enrich the analysis and make modeling more realistic compared to simple regression.

Example 3.1: Consider the hourly wage example. Enhance the model as

(3)   $\log(w) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3$,

where $w$ = average hourly earnings, $x_1$ = years of education (educ), $x_2$ = years of labor market experience (exper), and $x_3$ = years with the current employer (tenure).

Dependent Variable: LOG(WAGE)
Method: Least Squares
Date: 08/21/12   Time: 09:16
Sample: 1 526
Included observations: 526

Variable    Coefficient   Std. Error   t-Statistic   Prob.
C            0.284360     0.104190      2.729230     0.0066
EDUC         0.092029     0.007330     12.55525      0.0000
EXPER        0.004121     0.001723      2.391437     0.0171
TENURE       0.022067     0.003094      7.133070     0.0000

R-squared            0.316013   Mean dependent var     1.623268
Adjusted R-squared   0.312082   S.D. dependent var     0.531538
S.E. of regression   0.440862   Akaike info criterion  1.207406
Sum squared resid    101.4556   Schwarz criterion      1.239842
Log likelihood      -313.5478   Hannan-Quinn criter.   1.220106
F-statistic          80.39092   Durbin-Watson stat     1.768805
Prob(F-statistic)    0.000000

For example, the coefficient 0.092 means that, holding exper and tenure fixed, another year of education is predicted to increase the wage by approximately 9.2%. Staying another year at the same firm (educ fixed, exper and tenure each increase by one) is expected to result in a wage increase of approximately 0.4% + 2.2% = 2.6%.
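
The estimates above can be reproduced with standard OLS software. Below is a minimal Python sketch using statsmodels; the data frame `df` (with columns wage, educ, exper and tenure, e.g. Wooldridge's wage1 data) is an assumption and is not constructed here.

```python
# Illustrative sketch only: reproduces the regression in Example 3.1, assuming
# a pandas DataFrame `df` with columns wage, educ, exper, tenure is available.
import numpy as np
import statsmodels.formula.api as smf

def fit_wage_model(df):
    """OLS fit of log(wage) on educ, exper and tenure, cf. equation (3)."""
    res = smf.ols("np.log(wage) ~ educ + exper + tenure", data=df).fit()
    print(res.summary())   # coefficient table comparable to the output above
    return res
```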

Example 3.2: Consider the consumption function $C = f(Y)$, where $Y$ is income. Suppose the assumption is that as income grows the marginal propensity to consume decreases. In simple regression we could try to fit a level-log model or a log-log model. One possibility also could be

$\beta_1 = \beta_{1l} + \beta_{1q} Y$,

where according to our hypothesis $\beta_{1q} < 0$. Thus the consumption function becomes

$C = \beta_0 + (\beta_{1l} + \beta_{1q} Y)Y + u = \beta_0 + \beta_{1l} Y + \beta_{1q} Y^2 + u$.

This is a multiple regression model with $x_1 = Y$ and $x_2 = Y^2$. This simple example demonstrates that we can meaningfully enrich simple regression analysis (even though we have essentially only two variables, $C$ and $Y$) and at the same time get a meaningful interpretation of the above polynomial model. The response of $C$ to a one-unit change in $Y$ is now

$\dfrac{\partial C}{\partial Y} = \beta_{1l} + 2\beta_{1q} Y$.
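
As an illustration, the quadratic specification and its implied marginal propensity to consume can be checked numerically. The following Python sketch uses simulated data and purely illustrative coefficient values.

```python
# Purely illustrative: simulate a quadratic consumption function and recover
# the coefficients by OLS; the marginal propensity to consume is b1l + 2*b1q*Y.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.uniform(10, 100, size=200)                       # simulated income
C = 5 + 0.8 * Y - 0.002 * Y**2 + rng.normal(0, 2, 200)   # assumed true coefficients

X = np.column_stack([np.ones_like(Y), Y, Y**2])          # regressors: 1, Y, Y^2
b0, b1l, b1q = np.linalg.lstsq(X, C, rcond=None)[0]      # OLS estimates

mpc_at_50 = b1l + 2 * b1q * 50                           # dC/dY evaluated at Y = 50
print(b0, b1l, b1q, mpc_at_50)
```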

Estimation

In order to estimate the model we replace classical assumption 3 by:

3. None of the independent variables is constant, and no observation vector of any independent variable can be written as a linear combination of the observation vectors of the other independent variables.

The estimation method is again OLS, which produces estimates $\hat\beta_0, \hat\beta_1, \ldots, \hat\beta_k$ by minimizing

(4)   $\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_{i1} - \cdots - \beta_k x_{ik})^2$

with respect to the parameters. Again, the first-order conditions set the $(k+1)$ partial derivatives equal to zero. The solution is straightforward, although the explicit forms of the estimators become complicated.

Matrix form

Matrix algebra simplifies the notation of multiple regression considerably. Denote the observation vector on $y$ as

(5)   $y = (y_1, \ldots, y_n)'$,

where the prime denotes transposition. In the same manner, denote the data matrix on the x-variables, augmented with ones in the first column, as the $n \times (k+1)$ matrix

(6)   $X = \begin{pmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1k} \\ 1 & x_{21} & x_{22} & \cdots & x_{2k} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{n1} & x_{n2} & \cdots & x_{nk} \end{pmatrix}$,

where $k < n$ (the number of observations, $n$, is larger than the number of x-variables, $k$).

Then we can present the whole set of regression equations for the sample,

(7)   $\begin{aligned} y_1 &= \beta_0 + \beta_1 x_{11} + \cdots + \beta_k x_{1k} + u_1 \\ y_2 &= \beta_0 + \beta_1 x_{21} + \cdots + \beta_k x_{2k} + u_2 \\ &\;\;\vdots \\ y_n &= \beta_0 + \beta_1 x_{n1} + \cdots + \beta_k x_{nk} + u_n, \end{aligned}$

in matrix form as

(8)   $\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1k} \\ 1 & x_{21} & x_{22} & \cdots & x_{2k} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{n1} & x_{n2} & \cdots & x_{nk} \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_k \end{pmatrix} + \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix}$,

or shortly

(9)   $y = Xb + u$,

where $b = (\beta_0, \beta_1, \ldots, \beta_k)'$ is the parameter vector and $u = (u_1, u_2, \ldots, u_n)'$ is the error vector.

The normal equations for the first-order conditions of the minimization of (4) are in matrix form simply

(10)   $X'X\hat b = X'y$,

which gives the explicit solution for the OLS estimator of $b$ as

(11)   $\hat b = (X'X)^{-1}X'y$,

where $\hat b = (\hat\beta_0, \hat\beta_1, \ldots, \hat\beta_k)'$ and the existence of $(X'X)^{-1}$ is guaranteed by assumption 3. The fitted model is

(12)   $\hat y_i = \hat\beta_0 + \hat\beta_1 x_{i1} + \cdots + \hat\beta_k x_{ik}, \quad i = 1, \ldots, n$.
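
A minimal numpy sketch of (10)-(12) is shown below: it forms the normal equations and solves them for the OLS estimates. The function name and variables are illustrative only.

```python
# Sketch of equations (10)-(12): solve the normal equations X'X b = X'y.
import numpy as np

def ols(X, y):
    """OLS estimates; X is assumed to already contain a column of ones."""
    XtX = X.T @ X
    Xty = X.T @ y
    b_hat = np.linalg.solve(XtX, Xty)   # numerically safer than forming (X'X)^{-1} explicitly
    y_hat = X @ b_hat                   # fitted values, equation (12)
    return b_hat, y_hat
```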

Remark 3.1: Single and multiple regression do not in general produce the same parameter estimates for the same independent variables. For example, if you fit the simple regression $\tilde y = \tilde\beta_0 + \tilde\beta_1 x_1$, where $\tilde\beta_0$ and $\tilde\beta_1$ are the OLS estimators, and fit the multiple regression $\hat y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2$, then it turns out that

$\tilde\beta_1 = \hat\beta_1 + \hat\beta_2 \tilde\delta_1$,

where $\tilde\delta_1$ is the slope coefficient from regressing $x_2$ on $x_1$. This implies that $\tilde\beta_1 \ne \hat\beta_1$ unless $\hat\beta_2 = 0$ or $x_1$ and $x_2$ are uncorrelated.
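
This relationship can be verified numerically. The following Python sketch uses simulated data with illustrative names; the identity holds exactly in-sample for OLS with an intercept.

```python
# Numerical check of Remark 3.1 on simulated data (illustrative names only).
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)                  # correlated regressors
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def coefs(Z, v):
    return np.linalg.lstsq(Z, v, rcond=None)[0]

ones = np.ones(n)
b1_tilde = coefs(np.column_stack([ones, x1]), y)[1]     # simple-regression slope
b_hat = coefs(np.column_stack([ones, x1, x2]), y)       # multiple-regression estimates
d1_tilde = coefs(np.column_stack([ones, x1]), x2)[1]    # slope of x2 on x1

print(np.isclose(b1_tilde, b_hat[1] + b_hat[2] * d1_tilde))   # True
```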

Goodness-of-Fit

Again, in the same manner as with the simple regression, we have

(13)   $\sum_{i=1}^{n}(y_i - \bar y)^2 = \sum_{i=1}^{n}(\hat y_i - \bar y)^2 + \sum_{i=1}^{n}(y_i - \hat y_i)^2$,

or

(14)   $\mathrm{SST} = \mathrm{SSE} + \mathrm{SSR}$,

where

(15)   $\hat y_i = \hat\beta_0 + \hat\beta_1 x_{i1} + \cdots + \hat\beta_k x_{ik}$.

R-square: The R-square again measures the sample variation of the fitted $\hat y_i$ as a proportion of the sample variation of the original $y_i$:

(16)   $R^2 = \dfrac{\mathrm{SSE}}{\mathrm{SST}} = 1 - \dfrac{\mathrm{SSR}}{\mathrm{SST}}$.

Again, as in the case of the simple regression, $R^2$ can be shown to be the squared correlation coefficient between the actual $y_i$ and the fitted $\hat y_i$. This correlation is called the multiple correlation,

(17)   $R = \dfrac{\sum (y_i - \bar y)(\hat y_i - \bar{\hat y})}{\sqrt{\sum (y_i - \bar y)^2 \sum (\hat y_i - \bar{\hat y})^2}}$.

Remark 3.2: $\bar{\hat y} = \bar y$.

Remark 3.3: $R^2$ never decreases when an explanatory variable is added to the model.
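
The decomposition (13)-(14) and the equivalence between (16) and the squared multiple correlation are easy to compute directly, as in the small numpy sketch below (function name illustrative; the equivalence requires OLS with an intercept).

```python
# Sketch of (13)-(17): SST = SSE + SSR decomposition and R^2 as a squared correlation.
import numpy as np

def goodness_of_fit(y, y_hat):
    sst = np.sum((y - y.mean()) ** 2)         # total sum of squares
    sse = np.sum((y_hat - y.mean()) ** 2)     # explained sum of squares
    ssr = np.sum((y - y_hat) ** 2)            # residual sum of squares
    r2 = 1.0 - ssr / sst                      # equation (16)
    r_multiple = np.corrcoef(y, y_hat)[0, 1]  # equation (17)
    return r2, r_multiple ** 2                # the two coincide for OLS with an intercept
```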

Adjusted R-square:

(18)   $\bar R^2 = 1 - \dfrac{s_u^2}{s_y^2} = 1 - \dfrac{n-1}{n-k-1}(1 - R^2)$,

where $s_y^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar y)^2$ is the sample variance of $y$ and

(19)   $s_u^2 = \dfrac{1}{n-k-1}\sum_{i=1}^{n}(y_i - \hat y_i)^2 = \dfrac{1}{n-k-1}\sum_{i=1}^{n}\hat u_i^2$

is an estimator of the error variance $\sigma_u^2 = \mathrm{Var}[u_i]$. The square root of (19) is the so-called standard error of the regression.

3.3 Expected values of the OLS estimators

Given observation tuples $(x_{i1}, x_{i2}, \ldots, x_{ik}, y_i)$, $i = 1, \ldots, n$, the classical assumptions now read:

Assumptions (classical assumptions):

1. $y = \beta_0 + \sum_{j=1}^{k}\beta_j x_j + u$ in the population.
2. $\{(x_{i1}, x_{i2}, \ldots, x_{ik}, y_i)\}$ is a random sample from the model above, implying uncorrelated residuals: $\mathrm{Cov}(u_i, u_j) = 0$ for all $i \ne j$.
3. All independent variables, including the vector of constants, are linearly independent, implying that $(X'X)^{-1}$ exists.
4. $\mathrm{E}[u \mid x_1, \ldots, x_k] = 0$, implying $\mathrm{E}[u] = 0$ and $\mathrm{Cov}(u, x_1) = \cdots = \mathrm{Cov}(u, x_k) = 0$.
5. $\mathrm{Var}[u \mid x_1, \ldots, x_k] = \sigma_u^2$, implying $\mathrm{Var}[u] = \sigma_u^2$.

Under these assumptions we can show that the OLS estimators of the regression coefficients are unbiased. That is:

Theorem 3.1: Given observations on the x-variables,

(20)   $\mathrm{E}[\hat\beta_j] = \beta_j, \quad j = 0, 1, \ldots, k$.

Using matrix notation, the proof of Theorem 3.1 is pretty straightforward. To do this, write

(21)   $\hat b = (X'X)^{-1}X'y = (X'X)^{-1}X'(Xb + u) = (X'X)^{-1}X'Xb + (X'X)^{-1}X'u = b + (X'X)^{-1}X'u$.

Given $X$, the expected value of $\hat b$ is

(22)   $\mathrm{E}[\hat b] = b + (X'X)^{-1}X'\mathrm{E}[u] = b$,

because by Assumption 4, $\mathrm{E}[u] = 0$. Thus the OLS estimators of the regression coefficients are unbiased.

Remark 3.4: If $z = (z_1, \ldots, z_n)'$ is a random vector, then $\mathrm{E}[z] = (\mathrm{E}[z_1], \ldots, \mathrm{E}[z_n])'$. That is, the expected value of a vector is a vector whose components are the individual expected values.
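
Unbiasedness can be illustrated with a quick Monte Carlo experiment: holding the design matrix fixed, the average of the OLS estimates over many replications should be close to the true coefficient vector. The sketch below uses simulated data and assumed parameter values.

```python
# Monte Carlo illustration of Theorem 3.1 (unbiasedness of OLS), X held fixed.
import numpy as np

rng = np.random.default_rng(2)
n, k, reps = 100, 2, 5000
b_true = np.array([1.0, 0.5, -0.3])                          # assumed true parameters
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # fixed design matrix

estimates = np.empty((reps, k + 1))
for r in range(reps):
    u = rng.normal(size=n)                                   # errors with E[u] = 0
    y = X @ b_true + u
    estimates[r] = np.linalg.lstsq(X, y, rcond=None)[0]

print(estimates.mean(axis=0))                                # close to [1.0, 0.5, -0.3]
```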

Irrelevant variables in a regression

Suppose the correct model is

(23)   $y = \beta_0 + \beta_1 x_1 + u$,

but we estimate the model as

(24)   $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$.

Thus, $\beta_2 = 0$ in reality. The OLS estimation yields

(25)   $\hat y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2$.

By Theorem 3.1, $\mathrm{E}[\hat\beta_j] = \beta_j$, thus in particular $\mathrm{E}[\hat\beta_2] = \beta_2 = 0$, implying that including extra variables in a regression does not bias the results. However, as will be seen later, it decreases the accuracy of estimation by increasing the variances of the OLS estimates.

Omitted variable bias

Suppose now, as an example, that the correct model is

(26)   $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$,

but we misspecify the model as

(27)   $y = \beta_0 + \beta_1 x_1 + v$,

where the omitted variable is embedded in the residual term $v = \beta_2 x_2 + u$. The OLS estimator of $\beta_1$ for specification (27) is

(28)   $\tilde\beta_1 = \dfrac{\sum (x_{i1} - \bar x_1) y_i}{\sum (x_{i1} - \bar x_1)^2}$.

From Equation (2.37) we have

(29)   $\tilde\beta_1 = \beta_1 + \sum_{i=1}^{n} a_i v_i$,

where

(30)   $a_i = \dfrac{x_{i1} - \bar x_1}{\sum (x_{i1} - \bar x_1)^2}$.

Thus, because $\mathrm{E}[v_i] = \mathrm{E}[\beta_2 x_{i2} + u_i] = \beta_2 x_{i2}$,

(31)   $\mathrm{E}[\tilde\beta_1] = \beta_1 + \sum a_i \mathrm{E}[v_i] = \beta_1 + \sum a_i \beta_2 x_{i2} = \beta_1 + \beta_2 \underbrace{\dfrac{\sum (x_{i1} - \bar x_1) x_{i2}}{\sum (x_{i1} - \bar x_1)^2}}_{\tilde\delta_1}$,

i.e.,

(32)   $\mathrm{E}[\tilde\beta_1] = \beta_1 + \beta_2 \tilde\delta_1$,

where $\tilde\delta_1$ is the slope coefficient from regressing $x_2$ on $x_1$, implying that $\tilde\beta_1$ is biased for $\beta_1$ unless $x_1$ and $x_2$ are uncorrelated (or $\beta_2 = 0$). This is called the omitted variable bias.

The direction of the omitted variable bias is as follows:

                Corr(x_1, x_2) > 0    Corr(x_1, x_2) < 0
β_2 > 0         positive bias         negative bias
β_2 < 0         negative bias         positive bias
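
The bias formula (32) can be illustrated numerically: averaging the short-regression slope over many simulated samples (with the regressors held fixed) reproduces $\beta_1 + \beta_2\tilde\delta_1$. The Python sketch below uses assumed parameter values.

```python
# Monte Carlo illustration of the omitted variable bias in equation (32).
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 5000
beta0, beta1, beta2 = 1.0, 2.0, 3.0                    # assumed true parameters
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(scale=0.5, size=n)          # fixed, correlated regressors

sxx = np.sum((x1 - x1.mean()) ** 2)
delta1 = np.sum((x1 - x1.mean()) * x2) / sxx           # slope of x2 on x1

slopes = np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    slopes[r] = np.sum((x1 - x1.mean()) * y) / sxx     # short-regression slope, eq. (28)

print(slopes.mean(), beta1 + beta2 * delta1)           # the two should nearly coincide
```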

3.4 The variance of OLS estimators

Write the regression model

(33)   $y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik} + u_i, \quad i = 1, \ldots, n$,

in matrix form as

(34)   $y = Xb + u$.

Then we can write the OLS estimators compactly as

(35)   $\hat b = (X'X)^{-1}X'y$.

Under the classical assumptions 1-5, and assuming $X$ fixed, we can show that the variance-covariance matrix of $\hat b$ is

(36)   $\mathrm{Cov}[\hat b] = \sigma_u^2 (X'X)^{-1}$.

The variances of the individual coefficients are obtained from the main diagonal of this matrix and can be shown to be of the form

(37)   $\mathrm{Var}[\hat\beta_j] = \dfrac{\sigma_u^2}{(1 - R_j^2)\sum_{i=1}^{n}(x_{ij} - \bar x_j)^2}, \quad j = 1, \ldots, k$,

where $R_j^2$ is the R-square from regressing $x_j$ on the other explanatory variables and the constant term.
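
The equivalence between the matrix expression (36) and the scalar formula (37) can be checked numerically, as in the sketch below (simulated regressors, assumed error variance).

```python
# Check that the diagonal of sigma_u^2 (X'X)^{-1} matches formula (37).
import numpy as np

rng = np.random.default_rng(4)
n = 300
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
sigma2 = 1.5                                            # assumed error variance

cov_b = sigma2 * np.linalg.inv(X.T @ X)                 # equation (36)

# R_1^2: regress x1 on the remaining regressors (constant and x2)
Z = np.column_stack([np.ones(n), x2])
x1_hat = Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
R2_1 = 1 - np.sum((x1 - x1_hat) ** 2) / np.sum((x1 - x1.mean()) ** 2)

var_b1 = sigma2 / ((1 - R2_1) * np.sum((x1 - x1.mean()) ** 2))   # equation (37), j = 1
print(np.isclose(cov_b[1, 1], var_b1))                           # True
```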

Multicollinearity

In terms of linear algebra, we say that vectors $x_1, x_2, \ldots, x_k$ are linearly independent if

$a_1 x_1 + \cdots + a_k x_k = 0$

holds only if $a_1 = \cdots = a_k = 0$. Otherwise $x_1, \ldots, x_k$ are linearly dependent. In that case some $a_l \ne 0$, and we can write

$x_l = c_1 x_1 + \cdots + c_{l-1} x_{l-1} + c_{l+1} x_{l+1} + \cdots + c_k x_k$,

where $c_j = -a_j / a_l$; that is, $x_l$ can be represented as a linear combination of the other variables.

In statistics, the multiple correlation measures the degree of linear dependence. If the variables are perfectly linearly dependent, that is, if for example $x_j$ is a linear combination of the other variables, then the multiple correlation $R_j = 1$. Perfect linear dependence is rare between random variables. However, particularly between macroeconomic variables, dependencies are often high.

From the variance equation

(37)   $\mathrm{Var}[\hat\beta_j] = \dfrac{\sigma_u^2}{(1 - R_j^2)\sum_{i=1}^{n}(x_{ij} - \bar x_j)^2}$,

we see that $\mathrm{Var}[\hat\beta_j] \to \infty$ as $R_j^2 \to 1$. That is, the more linearly dependent the explanatory variables are, the larger the variance becomes. This implies that the coefficient estimates become increasingly unstable. High (but not perfect) correlation between two or more explanatory variables is called multicollinearity.

Symptoms of multicollinearity:

(1) High correlations between explanatory variables.
(2) $R^2$ is relatively high, but the coefficient estimates tend to be insignificant (see the section on hypothesis testing).
(3) Some of the coefficients are of the wrong sign, and some coefficients are at the same time unreasonably large.
(4) Coefficient estimates change a lot from one model alternative to another.
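
Since (37) shows that the damage from multicollinearity works through $R_j^2$, a direct numerical diagnostic is to compute $R_j^2$ for each explanatory variable. Below is a small numpy sketch (function name illustrative); values close to 1 signal severe multicollinearity.

```python
# Compute R_j^2 for each non-constant column of X by regressing it on the rest.
import numpy as np

def collinearity_r2(X):
    """R_j^2 for each explanatory variable; X's first column is assumed to be ones."""
    r2 = []
    for j in range(1, X.shape[1]):
        xj = X[:, j]
        Z = np.delete(X, j, axis=1)                        # the other columns, incl. constant
        xj_hat = Z @ np.linalg.lstsq(Z, xj, rcond=None)[0]
        r2.append(1 - np.sum((xj - xj_hat) ** 2) / np.sum((xj - xj.mean()) ** 2))
    return np.array(r2)
```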

Example 3.3: The variable $E_t$ denotes cost expenditures in a sample of Toyota Mark II cars at time point $t$, $M_t$ denotes the mileage, and $A_t$ the age. Consider the model alternatives:

Model A:   $E_t = \alpha_0 + \alpha_1 A_t + u_{1t}$
Model B:   $E_t = \beta_0 + \beta_1 M_t + u_{2t}$
Model C:   $E_t = \gamma_0 + \gamma_1 M_t + \gamma_2 A_t + u_{3t}$

Estimation results (t-values in parentheses):

Variable     Model A     Model B     Model C
Constant     -626.24     -796.07        7.29
             (-5.98)     (-5.91)      (0.06)
Age             7.35                   27.58
              (22.16)                 (9.58)
Miles                      53.45     -151.15
                         (18.27)     (-7.06)
df                55          55          54
R²             0.897       0.856       0.946
σ̂              368.6       437.0       268.3

Findings: A priori, the coefficients $\alpha_1$, $\beta_1$, $\gamma_1$, and $\gamma_2$ should all be positive. However, $\hat\gamma_1 = -151.15$ (!!?), even though $\hat\beta_1 = 53.45$. The correlation between the explanatory variables is $r_{M,A} = 0.996$!

Remedies: The problem with collinearity is that there is not enough information in the sample to reliably identify each variable's contribution as an explanatory variable in the model. Thus, in order to alleviate the problem:

(1) Use non-sample information, if available, to impose restrictions between the coefficients.
(2) Increase the sample size if possible.
(3) Drop the most collinear variables (on the basis of $R_j^2$).
(4) If a linear combination (usually a sum) of the most collinear variables is meaningful, replace the collinear variables by the linear combination.

Variances of misspecified models

Consider again, as in (26), the regression model

(38)   $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$.

Suppose the following models are estimated by OLS:

(39)   $\hat y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2$

and

(40)   $\tilde y = \tilde\beta_0 + \tilde\beta_1 x_1$.

Then by (37),

(41)   $\mathrm{Var}[\hat\beta_1] = \dfrac{\sigma_u^2}{(1 - r_{12}^2)\sum (x_{i1} - \bar x_1)^2}$,

and, in analogy to (2.40),

(42)   $\mathrm{Var}[\tilde\beta_1] = \dfrac{\mathrm{Var}[\beta_2 x_2 + u]}{\sum (x_{i1} - \bar x_1)^2} = \dfrac{\sigma_u^2}{\sum (x_{i1} - \bar x_1)^2}$

(the second equality holds because the x-values are treated as fixed), where $r_{12}$ is the sample correlation between $x_1$ and $x_2$.

Thus $\mathrm{Var}[\tilde\beta_1] \le \mathrm{Var}[\hat\beta_1]$, and the inequality is strict if $r_{12} \ne 0$. In summary (assuming $r_{12} \ne 0$):

(1) If $\beta_2 \ne 0$, then $\tilde\beta_1$ is biased, $\hat\beta_1$ is unbiased, and $\mathrm{Var}[\tilde\beta_1] < \mathrm{Var}[\hat\beta_1]$.
(2) If $\beta_2 = 0$, then both $\tilde\beta_1$ and $\hat\beta_1$ are unbiased, but $\mathrm{Var}[\tilde\beta_1] < \mathrm{Var}[\hat\beta_1]$.

Estimating the error variance $\sigma_u^2$

An unbiased estimator of the error variance $\mathrm{Var}[u] = \sigma_u^2$ is

(43)   $\hat\sigma_u^2 = \dfrac{1}{n-k-1}\sum_{i=1}^{n}\hat u_i^2$,

where

(44)   $\hat u_i = y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \cdots - \hat\beta_k x_{ik}$.

The term $n - k - 1$ in (43) is the degrees of freedom (df). It can be shown that

(45)   $\mathrm{E}[\hat\sigma_u^2] = \sigma_u^2$,

i.e., $\hat\sigma_u^2$ is an unbiased estimator of $\sigma_u^2$. $\hat\sigma_u$ is called the standard error of the regression.

Standard errors of $\hat\beta_j$

The standard deviation of $\hat\beta_j$ is the square root of (37), i.e.

(46)   $\sigma_{\hat\beta_j} = \sqrt{\mathrm{Var}[\hat\beta_j]} = \dfrac{\sigma_u}{\sqrt{(1 - R_j^2)\sum (x_{ij} - \bar x_j)^2}}$.

Substituting $\sigma_u$ by its estimate $\hat\sigma_u = \sqrt{\hat\sigma_u^2}$ gives the standard error of $\hat\beta_j$,

(47)   $\mathrm{se}(\hat\beta_j) = \dfrac{\hat\sigma_u}{\sqrt{(1 - R_j^2)\sum (x_{ij} - \bar x_j)^2}}$.
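
In practice (43)-(47) are computed together; the standard errors are the square roots of the diagonal of $\hat\sigma_u^2 (X'X)^{-1}$, which is equivalent to formula (47). A compact numpy sketch with illustrative names:

```python
# Sketch of (43)-(47): error-variance estimate, standard error of the regression,
# and coefficient standard errors from the diagonal of sigma_hat^2 (X'X)^{-1}.
import numpy as np

def ols_with_standard_errors(X, y):
    n, kp1 = X.shape                                    # kp1 = k + 1 columns, incl. constant
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    u_hat = y - X @ b_hat                               # residuals, equation (44)
    sigma2_hat = u_hat @ u_hat / (n - kp1)              # equation (43), df = n - k - 1
    se = np.sqrt(sigma2_hat * np.diag(np.linalg.inv(X.T @ X)))   # se(beta_j hat), cf. (47)
    return b_hat, np.sqrt(sigma2_hat), se               # estimates, s.e. of regression, s.e.'s
```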

3.5 The Gauss-Markov Theorem

Theorem 3.2: Under the classical assumptions 1-5, $\hat\beta_0, \hat\beta_1, \ldots, \hat\beta_k$ are the best linear unbiased estimators (BLUEs) of $\beta_0, \beta_1, \ldots, \beta_k$, respectively.

BLUE:
Best: the variance of the OLS estimator is smallest among all linear unbiased estimators of $\beta_j$.
Linear: $\hat\beta_j = \sum_{i=1}^{n} w_{ij} y_i$.
Unbiased: $\mathrm{E}[\hat\beta_j] = \beta_j$.