The Multiple Regression Model Estimation

Lesson 5. The Multiple Regression Model: Estimation
Pilar González and Susan Orbe
Dpt. Applied Economics III (Econometrics and Statistics)
OCW 2014

Learning objectives:
- To estimate a linear regression model by Ordinary Least Squares (OLS) using the available data.
- To interpret the estimated coefficients of a linear regression model.
- To derive the properties of the OLS estimator.
- To derive the properties of the sample regression function.
- To calculate a measure of goodness-of-fit.
- To estimate the linear regression model using non-sample information, by Restricted Least Squares.

Contents
1. The Ordinary Least Squares method
2. Sample Regression Function (SRF)
3. Sampling properties of the OLS estimator
4. Goodness-of-fit of the model
5. Use of non-sample information: Restricted Least Squares estimator (RLS); omitting and including relevant variables
6. Estimation results using Gretl
7. Presentation of results
8. Tasks: T5.1 and T5.2
9. Exercises: E5.1, E5.2 and E5.3

Estimation of the Linear Regression Model

Objective: to estimate the unknown parameters of the linear regression model

    Y_i = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki} + u_i,   i = 1, 2, …, N,   u_i ~ NID(0, σ²)

using the available sample, where:
- Y: dependent (endogenous) variable
- X_j, j = 2, …, k: regressors
- β: unknown coefficients
- u: non-observable error term
- i: index used with cross-section data (t in the case of time series data)
- N: sample size (T in the case of time series data)

Estimation of the Linear Regression Model

General linear regression model:

    Y_i = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki} + u_i,   i = 1, 2, …, N,   u_i ~ NID(0, σ²)

Some remarks:
- This is the expression that will be used for the GLRM throughout the theoretical exposition of the estimation methodology.
- This expression includes all the regression models linear in the coefficients discussed in Lesson 4.
- The regressors X_j, j = 2, …, k, can be quantitative variables, dummy variables or other terms, such as products of variables, or quadratic or logarithmic transformations. The interpretation of the coefficients depends on what X_j stands for.

Estimation of the Linear Regression Model

Consider the general linear regression model Y_i = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki} + u_i, i = 1, 2, …, N. Written for each observation:

    Y_1 = β_1 + β_2 X_{21} + β_3 X_{31} + ⋯ + β_k X_{k1} + u_1    (i = 1)
    Y_2 = β_1 + β_2 X_{22} + β_3 X_{32} + ⋯ + β_k X_{k2} + u_2    (i = 2)
    ⋮
    Y_i = β_1 + β_2 X_{2i} + β_3 X_{3i} + ⋯ + β_k X_{ki} + u_i    (i = i)
    ⋮
    Y_N = β_1 + β_2 X_{2N} + β_3 X_{3N} + ⋯ + β_k X_{kN} + u_N    (i = N)

Estimation of the Linear Regression Model

The linear regression model can also be written in matrix form; this expression will be used for some proofs and to derive some complex expressions.

The GLRM in matrix form:

    Y = Xβ + u

where Y = (Y_1, Y_2, …, Y_N)′ is the N×1 vector of observations of the dependent variable, X is the N×k observation matrix whose i-th row is (1, X_{2i}, X_{3i}, …, X_{ki}), β = (β_1, β_2, …, β_k)′ is the k×1 vector of coefficients, and u = (u_1, u_2, …, u_N)′ is the N×1 vector of error terms.

The OLS estimator

Objective: to estimate the unknown coefficients of the Population Regression Function:

    E(Y_i | X) = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki}

The Sample Regression Function (SRF) is obtained by substituting the estimated coefficients:

    Ŷ_i = β̂_1 + β̂_2 X_{2i} + ⋯ + β̂_k X_{ki}

where the Ŷ_i are called the fitted (or adjusted) values. The estimation error is referred to as the residual:

    û_i = Y_i − Ŷ_i = Y_i − β̂_1 − β̂_2 X_{2i} − ⋯ − β̂_k X_{ki}

The residual depends both on the error term and on the error due to the estimation of the coefficients:

    û_i = u_i + (β_1 − β̂_1) + (β_2 − β̂_2) X_{2i} + ⋯ + (β_k − β̂_k) X_{ki}

The OLS estimator

The OLS criterion: the Ordinary Least Squares (OLS) estimator is obtained by minimizing the sum of squared residuals:

    min_{β̂_1,…,β̂_k} Σ_{i=1}^N û_i² = min_{β̂_1,…,β̂_k} Σ_{i=1}^N (Y_i − β̂_1 − β̂_2 X_{2i} − ⋯ − β̂_k X_{ki})²

First order conditions: the first derivatives of the objective function are set equal to zero:

    ∂(Σ_{i=1}^N û_i²)/∂β̂_1 = 0, …, ∂(Σ_{i=1}^N û_i²)/∂β̂_k = 0,   evaluated at β̂ = β̂_OLS

The OLS estimator

The first order conditions are:

    ∂(Σ û_i²)/∂β̂_1 = −2 Σ_{i=1}^N (Y_i − β̂_1 − β̂_2 X_{2i} − ⋯ − β̂_k X_{ki}) = 0
    ∂(Σ û_i²)/∂β̂_2 = −2 Σ_{i=1}^N (Y_i − β̂_1 − β̂_2 X_{2i} − ⋯ − β̂_k X_{ki}) X_{2i} = 0
    ⋮
    ∂(Σ û_i²)/∂β̂_k = −2 Σ_{i=1}^N (Y_i − β̂_1 − β̂_2 X_{2i} − ⋯ − β̂_k X_{ki}) X_{ki} = 0

each evaluated at β̂ = β̂_OLS.

The OLS estimator

Normal equations:

    Σ Y_i        = N β̂_1 + β̂_2 Σ X_{2i} + ⋯ + β̂_k Σ X_{ki}
    Σ X_{2i} Y_i = β̂_1 Σ X_{2i} + β̂_2 Σ X_{2i}² + ⋯ + β̂_k Σ X_{2i} X_{ki}
    ⋮
    Σ X_{ki} Y_i = β̂_1 Σ X_{ki} + β̂_2 Σ X_{ki} X_{2i} + ⋯ + β̂_k Σ X_{ki}²

The first order conditions provide a system of k linearly independent equations in the k unknown parameters, which is called the normal equations. The OLS estimator is obtained by solving this system of equations. Given that the second order conditions for a minimum are always satisfied, the solution of the normal equations system is indeed a minimum.

The OLS estimator

The OLS estimator in matrix form:

    Objective function:      min_β̂ û′û = min_β̂ (Y − Xβ̂)′(Y − Xβ̂)
    First order conditions:  ∂(û′û)/∂β̂ |_{β̂ = β̂_OLS} = 0  ⟹  −2X′(Y − Xβ̂_OLS) = 0  ⟹  X′û_OLS = 0
    Normal equations:        X′Y = X′X β̂_OLS
    Second order conditions: ∂²(û′û)/∂β̂ ∂β̂′ = 2X′X, which is always a positive semidefinite matrix.

The OLS estimator

OLS estimator: solving this system of k linearly independent equations gives

    β̂_OLS = (X′X)⁻¹ X′Y

where

    X′X =
    | N          Σ X_{2i}         Σ X_{3i}         ⋯  Σ X_{ki}         |
    | Σ X_{2i}   Σ X_{2i}²        Σ X_{2i} X_{3i}  ⋯  Σ X_{2i} X_{ki}  |
    | Σ X_{3i}   Σ X_{3i} X_{2i}  Σ X_{3i}²        ⋯  Σ X_{3i} X_{ki}  |
    | ⋮                                                                |
    | Σ X_{ki}   Σ X_{ki} X_{2i}  Σ X_{ki} X_{3i}  ⋯  Σ X_{ki}²        |

    X′Y = (Σ Y_i, Σ X_{2i} Y_i, Σ X_{3i} Y_i, …, Σ X_{ki} Y_i)′
    β̂ = (β̂_1, β̂_2, β̂_3, …, β̂_k)′

See Examples 5.1 and 5.2 for applications.
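
As a quick illustration of this formula, here is a minimal Python/numpy sketch on simulated data (the variable names and true coefficient values are invented for the illustration, not taken from the lesson's examples). It builds the observation matrix X with a column of ones and solves the normal equations X′X β̂ = X′Y:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40
X2 = rng.normal(50, 10, N)            # first regressor
X3 = rng.normal(30, 5, N)             # second regressor
u = rng.normal(0, 5, N)               # error term
Y = 10 + 1.5 * X2 - 0.8 * X3 + u      # true coefficients: (10, 1.5, -0.8)

X = np.column_stack([np.ones(N), X2, X3])    # observation matrix (intercept first)
# OLS: solve the normal equations X'X b = X'Y (numerically safer than inverting X'X)
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_ols)                       # estimates close to (10, 1.5, -0.8)
```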

The OLS estimator

Some equivalences, for observation i and in matrix form:

    Y_i = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki} + u_i          Y = Xβ + u
    E(Y_i | X) = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki}         E(Y | X) = Xβ
    Ŷ_i = β̂_1 + β̂_2 X_{2i} + ⋯ + β̂_k X_{ki}               Ŷ = Xβ̂
    u_i = Y_i − β_1 − β_2 X_{2i} − ⋯ − β_k X_{ki}          u = Y − Xβ
    û_i = Y_i − Ŷ_i                                        û = Y − Ŷ
    û_i = Y_i − β̂_1 − β̂_2 X_{2i} − ⋯ − β̂_k X_{ki}         û = Y − Xβ̂
    û_i = u_i + (β_1 − β̂_1) + ⋯ + (β_k − β̂_k) X_{ki}      û = u + X(β − β̂)

GLRM estimation: the Sample Regression Function (SRF)

Some algebraic properties of the sample regression function Ŷ_i = β̂_1 + β̂_2 X_{2i} + ⋯ + β̂_k X_{ki}:

1. The sum of the residuals is zero: Σ_{i=1}^N û_i = 0.
2. The residuals are orthogonal to the regressors: Σ_{i=1}^N û_i X_{ji} = 0, j = 2, …, k.

Proof. From the normal equations system, X′(Y − Xβ̂) = X′û = 0, that is,

    (Σ û_i, Σ X_{2i} û_i, Σ X_{3i} û_i, …, Σ X_{ki} û_i)′ = (0, 0, 0, …, 0)′

which gives Σ_{i=1}^N û_i = 0 and Σ_{i=1}^N û_i X_{ji} = 0 for all j.

The Sample Regression Function

3. The residuals are orthogonal to the fitted values: Ŷ′û = 0.
   Proof: Ŷ′û = (Xβ̂)′û = β̂′ X′û = 0, since X′û = 0.

4. The sample means of Y and Ŷ are equal: Ȳ equals the sample mean of Ŷ.
   Proof: û_i = Y_i − Ŷ_i ⟹ Y_i = Ŷ_i + û_i ⟹ Σ Y_i = Σ Ŷ_i + Σ û_i = Σ Ŷ_i, since Σ û_i = 0. Dividing by N: (1/N) Σ Y_i = (1/N) Σ Ŷ_i.

The Sample Regression Function

5. The SRF passes through the vector of sample means (Ȳ, X̄_2, …, X̄_k).
   Proof:

    Σ û_i = 0 ⟹ Σ (Y_i − β̂_1 − β̂_2 X_{2i} − ⋯ − β̂_k X_{ki}) = 0
            ⟹ Σ Y_i = N β̂_1 + β̂_2 Σ X_{2i} + ⋯ + β̂_k Σ X_{ki}
            ⟹ (1/N) Σ Y_i = β̂_1 + β̂_2 (1/N) Σ X_{2i} + ⋯ + β̂_k (1/N) Σ X_{ki}
            ⟹ Ȳ = β̂_1 + β̂_2 X̄_2 + ⋯ + β̂_k X̄_k
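
All five algebraic properties can be verified numerically. A short sketch, again on simulated data (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40
X = np.column_stack([np.ones(N), rng.normal(50, 10, N), rng.normal(30, 5, N)])
Y = X @ np.array([10.0, 1.5, -0.8]) + rng.normal(0, 5, N)
beta = np.linalg.solve(X.T @ X, X.T @ Y)

Y_hat = X @ beta                         # fitted values
u_hat = Y - Y_hat                        # OLS residuals
print(u_hat.sum())                       # property 1: sum of residuals ~ 0
print(X.T @ u_hat)                       # property 2: orthogonal to every regressor
print(Y_hat @ u_hat)                     # property 3: orthogonal to the fitted values
print(Y.mean(), Y_hat.mean())            # property 4: equal sample means
print(Y.mean() - beta @ X.mean(axis=0))  # property 5: SRF passes through the means
```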

Properties of the OLS estimator

Properties of the OLS estimator β̂ = (X′X)⁻¹ X′Y:

1. Linearity. An estimator is linear if and only if it can be expressed as a linear function of the dependent variable, conditional on X. Note that, conditional on X, β̂ can also be written as a linear function of the error term.
   Proof. Under assumptions A1 and A2:

       β̂ = (X′X)⁻¹X′Y = (X′X)⁻¹X′(Xβ + u) = β + (X′X)⁻¹X′u

2. Unbiasedness. β̂ is unbiased; that is, the expected value of the OLS estimator is equal to the population value of the coefficients of the model.
   Proof. Under assumptions A1 through A3:

       E(β̂ | X) = E[β + (X′X)⁻¹X′u | X] = β + (X′X)⁻¹X′ E(u | X) = β

Properties of the OLS estimator

3. Variance: V(β̂ | X) = σ²(X′X)⁻¹.
   Proof. Under assumptions A1 through A5:

       V(β̂ | X) = E[(β̂ − E(β̂|X))(β̂ − E(β̂|X))′ | X] = E[(β̂ − β)(β̂ − β)′ | X]
                = E[(X′X)⁻¹X′u u′X(X′X)⁻¹ | X] = (X′X)⁻¹X′ σ²I_N X(X′X)⁻¹ = σ²(X′X)⁻¹

   For simplicity, this covariance matrix will be denoted by V(β̂):

       V(β̂) = σ²(X′X)⁻¹ =
       | V(β̂_1)          Cov(β̂_1, β̂_2)  ⋯  Cov(β̂_1, β̂_k) |
       | Cov(β̂_2, β̂_1)   V(β̂_2)         ⋯  Cov(β̂_2, β̂_k) |
       | ⋮                                                 |
       | Cov(β̂_k, β̂_1)   Cov(β̂_k, β̂_2)  ⋯  V(β̂_k)        |

Properties of the OLS estimator

This variance is the minimum within the class of linear unbiased estimators.

Gauss-Markov Theorem. Under assumptions A1 through A5, within the class of linear unbiased estimators, OLS has the smallest variance, conditional on X. That is, under assumptions A1 through A5, β̂_OLS is the best linear unbiased estimator (BLUE) of β. This theorem justifies the use of the OLS method instead of other competing estimators.

Note: all the analysis is conditional on X, even though, to simplify the notation, this will not be written explicitly from now on.

Estimation: OLS residuals

The OLS residuals can be written in terms of the error term as follows:

    û = Mu,   with M = I_N − X(X′X)⁻¹X′

where M is a square matrix of order N, symmetric (M = M′), idempotent (MM = M), with rank rg(M) = tr(M) = N − k, and orthogonal to X (MX = 0).

Proof:

    û = Y − Ŷ = Y − Xβ̂ = Y − X(X′X)⁻¹X′Y = [I_N − X(X′X)⁻¹X′] Y
      = [I_N − X(X′X)⁻¹X′] (Xβ + u) = Xβ − X(X′X)⁻¹X′Xβ + [I_N − X(X′X)⁻¹X′] u
      = [I_N − X(X′X)⁻¹X′] u = Mu
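
The stated properties of M are easy to confirm numerically; a brief sketch on a simulated X (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 40, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, k - 1))])
M = np.eye(N) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual-maker matrix

print(np.allclose(M, M.T))     # symmetric: M = M'
print(np.allclose(M @ M, M))   # idempotent: MM = M
print(np.allclose(M @ X, 0))   # orthogonal to X: MX = 0
print(np.trace(M))             # tr(M) = N - k = 37
```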

Estimation

Properties of the OLS residuals:

1. Expected value: E(û) = 0.
   Proof: E(û) = M E(u) = M · 0 = 0.

2. Variance: even in the case of homoskedastic and uncorrelated disturbances (u ~ (0, σ²I)), the OLS residuals are NOT homoskedastic and they ARE correlated.
   Proof: V(û) = E(ûû′) = E(Mu u′M′) = M σ²I_N M′ = σ²M, and M ≠ I.

Estimation

Estimator of the variance of the error term σ²:

    σ̂² = û′û / (N − k) = SSR / (N − k) = Σ û_i² / (N − k)

This estimator of the variance of the error term is unbiased:

    E(σ̂²) = E(û′û) / (N − k) = σ²(N − k) / (N − k) = σ²

given that:

    E(û′û) = E(u′Mu) = E(tr(u′Mu)) = E(tr(Mu u′)) = tr(E(Mu u′)) = tr(M σ²I_N) = σ² tr(M) = σ²(N − k)

Estimation

Estimator of the covariance matrix of β̂:

    V̂(β̂) = σ̂²(X′X)⁻¹

with diagonal elements V̂(β̂_j) and off-diagonal elements Ĉov(β̂_j, β̂_h), j ≠ h.
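
In code, both estimators follow directly from the residuals. A minimal sketch on simulated data (the true σ = 5 is an invented value for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 40, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, k - 1))])
Y = X @ np.array([10.0, 1.5, -0.8]) + rng.normal(0, 5, N)   # true sigma = 5

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ Y
u_hat = Y - X @ beta

sigma2_hat = (u_hat @ u_hat) / (N - k)       # unbiased estimator: SSR / (N - k)
V_beta_hat = sigma2_hat * XtX_inv            # estimated covariance matrix of beta-hat
std_errors = np.sqrt(np.diag(V_beta_hat))    # the standard errors Gretl reports
print(sigma2_hat, std_errors)                # sigma2_hat should be near 25
```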

GLRM estimation: goodness-of-fit

Sum of squares decomposition: SST = SSE + SSR

- Total Sum of Squares: SST = Σ_{i=1}^N (Y_i − Ȳ)². It is a measure of the total sample variation in Y.
- Explained Sum of Squares: SSE = Σ_{i=1}^N (Ŷ_i − Ȳ)². It is a measure of the sample variation in Ŷ.
- Residual Sum of Squares: SSR = Σ_{i=1}^N (Y_i − Ŷ_i)² = Σ_{i=1}^N û_i². It is the sample variation in the OLS residuals.

Goodness-of-fit

Regression model: Y_i = Ŷ_i + û_i, so each observation of Y is decomposed into two parts, a fitted value and a residual. Estimating by OLS, the following decomposition holds:

    SST = SSE + SSR:   Σ_{i=1}^N (Y_i − Ȳ)² = Σ_{i=1}^N (Ŷ_i − Ȳ)² + Σ_{i=1}^N û_i²

That is, the total sample variation in Y can be expressed as the sum of the variation in the fitted values Ŷ (the variation in Y explained by the regression) plus the variation in the OLS residuals û (the variation in Y NOT explained by the regression).

Goodness-of-fit

Proof. Writing the GLRM as û_i = Y_i − Ŷ_i, i.e. Y_i = Ŷ_i + û_i, the total sample variation in Y is:

    SST = Σ (Y_i − Ȳ)² = Σ (Ŷ_i + û_i − Ȳ)² = Σ [(Ŷ_i − Ȳ) + û_i]²
        = Σ (Ŷ_i − Ȳ)² + Σ û_i² + 2 Σ (Ŷ_i − Ȳ) û_i
        = Σ (Ŷ_i − Ȳ)² + Σ û_i²

since Σ Ŷ_i û_i = 0 and Σ û_i = 0. Therefore SST = SSE + SSR.

Goodness-of-fit

Goodness-of-fit measure: the coefficient of determination

    R² = SSE / SST

R² is the fraction of the sample variation in Y that is explained by the variation in the explanatory variables X. If the model contains an intercept, the coefficient of determination may equivalently be calculated as

    R² = 1 − SSR / SST ∈ (0, 1)

Gretl computes the coefficient of determination using the formula in terms of the Sum of Squared Residuals.
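
A numerical check of the decomposition and of the equivalence of the two R² formulas when the model has an intercept (sketch, simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40
X = np.column_stack([np.ones(N), rng.normal(50, 10, N)])
Y = X @ np.array([10.0, 1.5]) + rng.normal(0, 5, N)
beta = np.linalg.solve(X.T @ X, X.T @ Y)
Y_hat = X @ beta
u_hat = Y - Y_hat

SST = ((Y - Y.mean()) ** 2).sum()       # total sum of squares
SSE = ((Y_hat - Y.mean()) ** 2).sum()   # explained sum of squares
SSR = (u_hat ** 2).sum()                # residual sum of squares
print(np.isclose(SST, SSE + SSR))       # decomposition holds (intercept included)
print(SSE / SST, 1 - SSR / SST)         # the two R^2 formulas agree
```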

Goodness-of-fit

Some performance criteria. These criteria are useful to compare the performance of nested regression models. Rationale: to adjust the sum of squared residuals by the degrees of freedom. Given that they are based on the SSR, the smaller the value of the criterion, the better the performance of the model (except for the adjusted R², where larger is better):

- Adjusted R²:  R̄² = 1 − [(N − 1)/(N − k)] (1 − R²)
- Log-likelihood:  ℓ(θ̂) = −(N/2)(1 + ln 2π − ln N) − (N/2) ln SSR
- Akaike criterion:  AIC = −2ℓ(θ̂) + 2k
- Schwarz criterion:  BIC = −2ℓ(θ̂) + k ln N
- Hannan-Quinn criterion:  HQC = −2ℓ(θ̂) + 2k ln ln N
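
These criteria depend on the data only through N, k and the SSR (plus R² for the adjusted R²), so they can be reproduced directly. A Python sketch, checked against the pizza regression output shown later in this lesson:

```python
import numpy as np

def fit_criteria(SSR, N, k, R2):
    """Performance criteria with the conventions used above (as printed by Gretl)."""
    adj_R2 = 1 - (N - 1) / (N - k) * (1 - R2)
    loglik = -N / 2 * (1 + np.log(2 * np.pi) - np.log(N)) - N / 2 * np.log(SSR)
    AIC = -2 * loglik + 2 * k
    BIC = -2 * loglik + k * np.log(N)
    HQC = -2 * loglik + 2 * k * np.log(np.log(N))
    return adj_R2, loglik, AIC, BIC, HQC

# Pizza example from this lesson: N = 40, k = 3, SSR = 635636.7, R^2 = 0.329251
print(fit_criteria(635636.7, 40, 3, 0.329251))
# -> approx. (0.292994, -250.2276, 506.4552, 511.5218, 508.2870), matching Gretl
```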

Restricted Least Squares estimator

Objective: to estimate the linear regression model

    Y_i = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki} + u_i,   i = 1, 2, …, N,   u_i ~ NID(0, σ_u²)

including non-sample information on the coefficients. The Restricted Least Squares (RLS) estimator is

    β̂_RLS = (X_R′ X_R)⁻¹ X_R′ Y_R

where X_R and Y_R are the observation matrix and the dependent variable of the restricted model, that is, of the model that results from incorporating the non-sample information into the regression model.

See Example 5.3 for applications.

Restricted Least Squares estimator

Properties of β̂_RLS = (X_R′ X_R)⁻¹ X_R′ Y_R. Conditional on X, the RLS estimator is:
- Linear in u.
- Unbiased if the non-sample information included in the estimation is true.
- Of smaller variance: the variance of the RLS estimator is always smaller than the variance of the OLS estimator, even if the non-sample information included is false.

Therefore, if the non-sample information available is true, the RLS estimator should be used, because it is unbiased and has a smaller variance than the OLS estimator.

Restricted Least Squares estimator

Example 1. Consider the regression model:

    pizza_i = β_1 + β_2 income_i + β_3 age_i + u_i,   i = 1, 2, …, N,   subject to β_2 = −β_3

Restricted model: pizza_i = β_1 + β_2 (income_i − age_i) + u_i. Hence X_R is the N×2 matrix with rows (1, income_i − age_i), Y_R = (pizza_1, pizza_2, …, pizza_N)′, and

    β̂_RLS = (β̂_1, β̂_2)′,   with β̂_3^RLS = −β̂_2^RLS

Restricted Least Squares estimator

Example 2. Consider the regression model:

    pizza_i = β_1 + β_2 income_i + β_3 (age_i · income_i) + u_i,   i = 1, …, N,   subject to β_2 = 5

Restricted model: pizza_i − 5 income_i = β_1 + β_3 (age_i · income_i) + u_i. Hence X_R is the N×2 matrix with rows (1, age_i · income_i), Y_R = (pizza_1 − 5 income_1, pizza_2 − 5 income_2, …, pizza_N − 5 income_N)′, and

    β̂_RLS = (β̂_1, β̂_3)′,   with β̂_2^RLS = 5

Restricted Least Squares estimator

Example 3. Consider the regression model:

    Y_t = β_1 + β_2 X2_t + β_3 X3_t + β_4 X4_t + u_t,   t = 1, 2, …, T,   subject to β_3 + β_4 = 1

Restricted model: Y_t − X4_t = β_1 + β_2 X2_t + β_3 (X3_t − X4_t) + u_t. Hence X_R is the T×3 matrix with rows (1, X2_t, X3_t − X4_t), Y_R = (Y_1 − X4_1, Y_2 − X4_2, …, Y_T − X4_T)′, and

    β̂_RLS = (β̂_1, β̂_2, β̂_3)′,   with β̂_4^RLS = 1 − β̂_3^RLS

Restricted Least Squares estimator

Example 4. Consider the regression model:

    Y_t = β_1 + β_2 X3_t + β_3 X3_t² + β_4 time + u_t,   t = 1, 2, …, T,   subject to β_2 = β_3 = 0

Restricted model: Y_t = β_1 + β_4 time + u_t. Hence X_R is the T×2 matrix with rows (1, t), Y_R = (Y_1, Y_2, …, Y_T)′, and

    β̂_RLS = (β̂_1, β̂_4)′,   with β̂_2^RLS = β̂_3^RLS = 0
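
As a concrete sketch of Example 3 (restriction β_3 + β_4 = 1) in Python, on simulated data with invented coefficient values: transform the model, estimate the restricted model by OLS, and recover β̂_4 from the restriction:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 23
X2, X3, X4 = rng.normal(size=(3, T))
Y = 5 + 0.3 * X2 + 0.6 * X3 + 0.4 * X4 + rng.normal(0, 0.5, T)  # note 0.6 + 0.4 = 1

# Impose beta_3 + beta_4 = 1: regress (Y - X4) on const, X2 and (X3 - X4)
Y_R = Y - X4
X_R = np.column_stack([np.ones(T), X2, X3 - X4])
b1, b2, b3 = np.linalg.solve(X_R.T @ X_R, X_R.T @ Y_R)   # RLS estimates
b4 = 1 - b3                                              # recovered from the restriction
print(b1, b2, b3, b4)        # close to (5, 0.3, 0.6, 0.4)
```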

GLRM estimation: omitting a relevant variable

Assume that the regression model

    Y_i = β_1 + β_2 X_{2i} + β_3 X_{3i} + ⋯ + β_k X_{ki} + u_i,   i = 1, 2, …, N

satisfies all the assumptions. Omitting X_2 from this model amounts to including a false restriction (β_2 = 0). The restricted model (omitting X_2) is:

    Y_i = β_1 + β_3 X_{3i} + ⋯ + β_k X_{ki} + u_i,   i = 1, 2, …, N

The Restricted Least Squares estimator, β̂_RLS = (X_R′X_R)⁻¹X_R′Y_R, then has the following properties, conditional on X:
- Linear in u.
- BIASED, because the restriction included is NOT true.
- Its variance is smaller than the variance of the OLS estimator.

GLRM estimation: including an irrelevant variable

Consider the linear regression model:

    Y_i = β_1 + β_2 X_{2i} + ⋯ + β_k X_{ki} + u_i,   i = 1, 2, …, N    (1)

Assume that it is known that β_2 = 0, that is, that the regressor X_2 is irrelevant. In this case, it can be proved that the OLS estimator of model (1), conditional on X, is:
- Linear in u.
- Unbiased because, even though an irrelevant variable is included, E(u | X) = 0.
- Of smallest variance in the class of linear unbiased estimators. BUT it is possible to estimate the coefficients of model (1) with a smaller variance: by including in the model the true restriction β_2 = 0 and estimating by Restricted Least Squares.

Estimation results

Example: pizza consumption. The file pizza.gdt contains information on N individuals' annual pizza consumption (in dollars) and some characteristics of the individuals, such as:
- Annual income (in thousands of dollars)
- Age of the consumer (in years)
- Gender
- Highest level of studies (basic, high school, college, postgraduate)

The factors that determine pizza consumption will be analysed in detail in Example 5.1. The objective of this section is to estimate a very simple model of pizza consumption in order to explain the estimation output produced by Gretl.

Estimation results

Consider the linear regression model:

    pizza_i = β_1 + β_2 income_i + β_3 age_i + u_i,   i = 1, …, N

where pizza is the dependent variable (pizza consumption); income and age are quantitative explanatory variables; β_1, β_2, β_3 are the unknown coefficients; u is the error term; N is the sample size; and i is the index.

Estimation results: Gretl output

Model 1: OLS, using observations 1-40
Dependent variable: pizza

                Coefficient    Std. Error    t-ratio    p-value
    const       342.885        72.3434        4.7397    0.0000
    income        1.83248       0.464301      3.9467    0.0003
    age          −7.57556       2.31699      −3.2696    0.0023

    Mean dependent var    191.5500      S.D. dependent var    155.8806
    Sum squared resid     635636.7      S.E. of regression    131.0701
    R-squared             0.329251      Adjusted R-squared    0.292994
    F(2, 37)              9.081100      P-value(F)            0.000619
    Log-likelihood       −250.2276      Akaike criterion      506.4552
    Schwarz criterion     511.5218      Hannan-Quinn          508.2871

Estimation results: Gretl output

The information shown in the heading of the estimation output is:
- Model 1: this is the first model estimated since Gretl was started.
- OLS: the estimation method used.
- using observations 1-40: the sample size is 40.
- Dependent variable: pizza: the dependent variable of the estimated regression model is pizza.

Estimation results: Gretl output

The first column of the coefficient table lists the regressors included in the estimated regression model. These regressors may be quantitative variables, dummy variables, products or transformations of variables, etc. That is, they represent the columns of the observation matrix X: the constant (the column of 1s) and the regressors income and age.

Estimation results: Gretl output

The second column shows the estimates of the coefficients of the model, obtained using the OLS estimator: β̂_OLS = (X′X)⁻¹X′Y. The third column shows the standard errors of the OLS estimators of the coefficients. These standard errors are the square roots of the elements on the diagonal of the covariance matrix V̂(β̂) = σ̂²(X′X)⁻¹, where the estimator of the variance of the error term is given by σ̂² = SSR/(N − k).

Estimation results: Gretl output

The last two columns contain the t-ratios and the p-values. This information is useful to test the individual significance of the regressors (see Lesson 6).

Estimation results: Gretl output

At the bottom of the estimation output appears some further information of interest: the sum of squared residuals, the coefficient of determination, and the performance criteria explained in this lesson (adjusted R², log-likelihood, Akaike criterion, Schwarz criterion and Hannan-Quinn criterion).

Estimation results: Gretl output

In particular:
- Mean dependent var = 191.5500: the sample mean of the dependent variable, (1/N) Σ_{i=1}^N pizza_i.
- S.D. dependent var = 155.8806: the sample standard deviation, sqrt[ (1/(N−1)) Σ_{i=1}^N (pizza_i − mean)² ].
- S.E. of regression = 131.0701: σ̂ = sqrt[ SSR/(N − k) ].

Estimation results: Gretl output

The values F(2, 37) = 9.081100 and P-value(F) = 0.000619 are useful to test the joint significance of the regressors (see Lesson 6).

Estimation results: Gretl output (time series data)

The file chicken.gdt contains annual time series data from 1990 to 2012 on:
- Y: annual chicken consumption (in kilograms)
- X2: per capita real disposable income (in euros)
- X3: price of chicken (in euros/kilogram)
- X4: price of pork (in euros/kilogram)
- X5: price of beef (in euros/kilogram)
- Avian flu epidemic (1999-2003)

The factors that determine the evolution of chicken consumption will be analysed in detail in Example 5.2. In this section, a very simple model of chicken consumption is estimated in order to explain the Gretl estimation output, focusing on the specific results for time series data.

Estimation results

Consider the regression model:

    Y_t = β_1 + β_2 X2_t + β_3 X3_t + β_4 X4_t + u_t,   t = 1, …, T

where Y is the dependent variable (chicken consumption); X2, X3 and X4 are quantitative explanatory variables; β_1, β_2, β_3, β_4 are the unknown coefficients; u is the error term; T is the sample size; and t is the index.

Estimation results: Gretl output

Model 1: OLS, using observations 1990-2012 (T = 23)
Dependent variable: Y

                Coefficient    Std. Error     t-ratio    p-value
    const       387.207        36.1477       10.7118     0.0000
    X4            4.37920       1.55013       2.8250     0.0108
    X2            0.0109351     0.00236651    4.6208     0.0002
    X3          −13.6515        3.91865      −3.4837     0.0025

    Mean dependent var    39.66957      S.D. dependent var    7.372950
    Sum squared resid     74.77556      S.E. of regression    1.983824
    R-squared             0.937475      Adjusted R-squared    0.927603
    F(3, 19)              94.95932      P-value(F)            1.28e-11
    Log-likelihood       −46.19405      Akaike criterion      100.3881
    Schwarz criterion     104.9301      Hannan-Quinn          101.5304
    ρ̂                     0.563845      Durbin-Watson         0.882646

Estimation results: Gretl output

If the sample consists of time series data, the heading of the estimation output shows the sample frequency:
- Annual data: 1990-2012 (T = 23), i.e. 23 annual observations.
- Quarterly data: 1950:1-1955:3 (T = 23).
- Monthly data: 1980:01-1981:11 (T = 23).
- Daily data: 1950-01-01:1950-06-04 (T = 23).

Estimation results: Gretl output

Note that while the regression model is written as

    Y_t = β_1 + β_2 X2_t + β_3 X3_t + β_4 X4_t + u_t

the list of regressors in the estimation table is: const, X4, X2, X3. This is because the regressors appear in the table in the same order in which they are introduced in the Gretl specification window (see Example 5.1.1). This fact has to be taken into account when reading the results; that is, β̂_2 = 0.0109351 and β̂_4 = 4.37920.

Estimation results: Gretl output

    ρ̂ = Σ_{t=2}^T û_t û_{t−1} / Σ_{t=2}^T û²_{t−1}

    Durbin-Watson: DW = Σ_{t=2}^T (û_t − û_{t−1})² / Σ_{t=1}^T û_t²

These results appear in the output only when the data set structure is time series. The Durbin-Watson autocorrelation test will be studied in Lesson 7.
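
Both statistics are simple functions of the OLS residuals; a sketch (the residual series here is simulated, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# In practice u_hat = Y - X @ beta from a time series regression; simulated here
u_hat = rng.normal(size=23)

rho_hat = (u_hat[1:] @ u_hat[:-1]) / (u_hat[:-1] @ u_hat[:-1])    # first-order autocorrelation
DW = ((u_hat[1:] - u_hat[:-1]) ** 2).sum() / (u_hat ** 2).sum()   # Durbin-Watson statistic
print(rho_hat, DW)           # for large T, DW ~ 2 * (1 - rho_hat)
```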

Presentation of results

The estimation results are usually summarized by writing down the Sample Regression Function along with the standard errors of the estimators, the coefficient of determination and the Sum of Squared Residuals.

Summary:

    pizza_i = 342.885 + 1.83248 income_i − 7.57556 age_i      i = 1, …, 40
              (72.343)  (0.46430)          (2.3170)

    R² = 0.329    SSR = 635636.7    (standard errors in parentheses)

Presentation of results

If we are working with time series data, the same information is presented, adding the value of the Durbin-Watson statistic (see Lesson 7).

Summary:

    Ŷ_t = 387.207 + 4.37920 X4_t + 0.0109351 X2_t − 13.6515 X3_t      t = 1990, …, 2012
          (36.1448)  (1.55013)     (0.0023665)      (3.91865)

    R² = 0.937475    SSR = 74.77556    DW = 0.882646    (standard errors in parentheses)
