Properties of the least squares estimates


2019-01-18

Warmup

Let $a$ and $b$ be scalar constants, and $X$ be a scalar random variable. Fill in the blanks:

$$E(aX + b) = \_\_\_\_ \qquad\qquad \operatorname{Var}(aX + b) = \_\_\_\_$$

Goal

Recall that the least squares estimates are:

$$\hat\beta_{p \times 1} = (X^T X)^{-1} X^T y$$

Our goal today is to learn about the statistical properties of these estimates, in particular their expectation and variance.

Random Vectors

$\hat\beta$ is a vector-valued random variable, so we first need to cover a little more background. Let $U_1, \ldots, U_n$ be scalar random variables. Then the vector $U = (U_1, \ldots, U_n)^T$ is a vector-valued random variable, a.k.a. a random vector. The expectation of $U$ is the vector of expectations,

$$E(U) = \left( E(U_1), \ldots, E(U_n) \right)^T$$

and the variance-covariance matrix of $U$ is

$$\operatorname{Var}(U) = \operatorname{Cov}(U) = \begin{pmatrix}
\operatorname{Var}(U_1) & \operatorname{Cov}(U_1, U_2) & \cdots & \operatorname{Cov}(U_1, U_n) \\
\operatorname{Cov}(U_2, U_1) & \operatorname{Var}(U_2) & \cdots & \operatorname{Cov}(U_2, U_n) \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{Cov}(U_n, U_1) & \operatorname{Cov}(U_n, U_2) & \cdots & \operatorname{Var}(U_n)
\end{pmatrix}$$

For example, the errors in multiple linear regression, $\epsilon_i$, $i = 1, \ldots, n$, are independent with mean 0 and variance $\sigma^2$. Then

$$\epsilon_{n \times 1} = \begin{pmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{pmatrix}, \qquad
E(\epsilon) = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} = 0, \qquad
\operatorname{Var}(\epsilon) = \begin{pmatrix}
\sigma^2 & 0 & \cdots & 0 \\
0 & \sigma^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sigma^2
\end{pmatrix} = \sigma^2 I_n$$
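To make the example concrete, here is a minimal numpy sketch (the values of $n$ and $\sigma^2$ are arbitrary choices): it draws many realizations of the error vector and checks that the sample mean and sample variance-covariance matrix come out close to $E(\epsilon) = 0$ and $\operatorname{Var}(\epsilon) = \sigma^2 I_n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 4, 2.0  # arbitrary example values

# Each row is one realization of the error vector eps: n iid draws
# with mean 0 and variance sigma2.
draws = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(100_000, n))

print(draws.mean(axis=0))           # close to the zero vector, matching E(eps) = 0
print(np.cov(draws, rowvar=False))  # close to sigma2 * I_n, matching Var(eps)
```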

Properties of Expectation and Variance for random vectors

Let:

- $U_{n \times 1}$ be a random vector,
- $A_{m \times n}$ be a constant matrix,
- $b_{m \times 1}$ be a constant vector.

Then:

$$E(AU + b) = A\,E(U) + b$$

$$\operatorname{Var}(AU + b) = A \operatorname{Var}(U)\, A^T$$

These are the vector analogs of the properties you wrote down in the warmup.

Exercise: find $E(y)$ and $\operatorname{Var}(y)$, where $y_{n \times 1}$ satisfies the multiple linear regression equation

$$y = X\beta + \epsilon$$

with $E(\epsilon) = 0$ and $\operatorname{Var}(\epsilon) = \sigma^2 I$.
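To see both identities in action, here is a short simulation sketch (the matrix $A$, vector $b$, and $\operatorname{Var}(U)$ below are all made-up demo values): it forms $AU + b$ for many draws of $U$ and compares the sample mean and covariance against $A\,E(U) + b$ and $A \operatorname{Var}(U) A^T$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.normal(size=(m, n))           # constant matrix A, arbitrary demo values
b = rng.normal(size=m)                # constant vector b, arbitrary demo values
Sigma = np.diag([1.0, 4.0, 9.0])      # Var(U) chosen for the demo; E(U) = 0

U = rng.normal(size=(500_000, n)) * np.sqrt(np.diag(Sigma))  # rows are draws of U
V = U @ A.T + b                       # rows are draws of AU + b

print(V.mean(axis=0))                 # close to A E(U) + b, which is b here
print(np.cov(V, rowvar=False))        # close to A Var(U) A^T ...
print(A @ Sigma @ A.T)                # ... the matrix it should match
```

For the exercise, note that $y = I_n \epsilon + X\beta$ has exactly the $AU + b$ form, with $A = I_n$ and $b = X\beta$.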

Expectation of the least squares estimates

Assume the regression setup (with the usual dimensions):

$$y = X\beta + \epsilon$$

where $X$ is fixed with rank $p$, $E(\epsilon) = 0$, and $\operatorname{Var}(\epsilon) = \sigma^2 I_n$.

Fill in the blanks to show the least squares estimates are unbiased:

$$\begin{aligned}
E(\hat\beta) &= E\left( (X^T X)^{-1} X^T y \right) \\
&= E\left( (X^T X)^{-1} X^T (\_\_\_\_) \right) && \text{plug in the regression equation for } y \\
&= E\left( \_\_\_\_ + \_\_\_\_ \right) && \text{expand} \\
&= E\left( \_\_\_\_ + \_\_\_\_ \right) && \text{simplify term on the left, } A^{-1}A = I \\
&= \_\_\_\_ + E\left( \_\_\_\_ \right) && \text{property of expectation} \\
&= \_\_\_\_ && \text{regression assumptions} \\
&= \beta
\end{aligned}$$
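A Monte Carlo check of this unbiasedness result, as a minimal numpy sketch; the true $\beta$, the error scale, the sample size, and the uniform design below are all arbitrary example values. Refitting on many simulated responses and averaging $\hat\beta$ should land close to $\beta$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
beta = np.array([1.0, -2.0, 0.5])  # "true" coefficients, arbitrary example values
# Fixed design: an intercept column plus p - 1 uniform predictors (rank p).
X = np.column_stack([np.ones(n), rng.uniform(size=(n, p - 1))])

reps = 20_000
betahats = np.empty((reps, p))
for r in range(reps):
    y = X @ beta + rng.normal(scale=1.5, size=n)     # y = X beta + eps
    betahats[r] = np.linalg.solve(X.T @ X, X.T @ y)  # (X^T X)^{-1} X^T y

print(betahats.mean(axis=0))  # close to beta: the estimates are unbiased
```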

Variance-covariance matrix of the least squares estimates

Fill in the blanks to find the variance-covariance matrix of the least squares estimates:

$$\begin{aligned}
\operatorname{Var}(\hat\beta) &= \operatorname{Var}\left( (X^T X)^{-1} X^T y \right) \\
&= \operatorname{Var}\left( (X^T X)^{-1} X^T X\beta + (X^T X)^{-1} X^T \epsilon \right) && \text{plug in reg. eqn. and expand} \\
&= 0 + \left( \_\_\_\_ \right) \operatorname{Var}(\epsilon) \left( \_\_\_\_ \right)^T && \text{property of Var} \\
&= \left( \_\_\_\_ \right) \_\_\_\_ \left( \_\_\_\_ \right)^T && \text{regression assumption} \\
&= \_\_\_\_ \left( \_\_\_\_ \right) \left( \_\_\_\_ \right)^T && \text{move scalar to front} \\
&= \sigma^2 \left( \_\_\_\_ \right) && \text{distribute transpose} \\
&= \sigma^2 \,\_\_\_\_ && \text{since } A^{-1}A = I \\
&= \sigma^2 (X^T X)^{-1}
\end{aligned}$$

We can pull out the variance of a particular parameter estimate, say $\hat\beta_i$, from the diagonal of the matrix:

$$\operatorname{Var}(\hat\beta_i) = \sigma^2 \left[ (X^T X)^{-1} \right]_{(i+1)(i+1)}$$

where $A_{ij}$ indicates the element in the $i$th row and $j$th column of the matrix $A$. Why $i + 1$? The off-diagonal terms tell us about the covariance between parameter estimates.

Estimating σ²

To make use of the variance-covariance results we need to be able to estimate $\sigma^2$. An unbiased estimate is:

$$\hat\sigma^2 = \frac{1}{n - p} \sum_{i=1}^{n} e_i^2 = \frac{\|e\|^2}{n - p}$$
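The same style of simulation can check both results at once, again with made-up $\beta$, $\sigma$, and design: the empirical covariance of the $\hat\beta$ draws should approach $\sigma^2 (X^T X)^{-1}$, and the average of $\hat\sigma^2 = \|e\|^2 / (n - p)$ should approach $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma2 = 50, 3, 1.5 ** 2     # arbitrary example values
beta = np.array([1.0, -2.0, 0.5])
X = np.column_stack([np.ones(n), rng.uniform(size=(n, p - 1))])
XtX_inv = np.linalg.inv(X.T @ X)

reps = 20_000
betahats = np.empty((reps, p))
sigma2hats = np.empty(reps)
for r in range(reps):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    bh = XtX_inv @ X.T @ y           # least squares estimates
    e = y - X @ bh                   # residuals
    betahats[r] = bh
    sigma2hats[r] = e @ e / (n - p)  # ||e||^2 / (n - p)

print(np.cov(betahats, rowvar=False))  # close to ...
print(sigma2 * XtX_inv)                # ... sigma^2 (X^T X)^{-1}
print(sigma2hats.mean())               # close to sigma2: the estimate is unbiased
```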

The denominator $n - p$ is known as the model degrees of freedom.

Standard errors of particular parameters

The standard error of a particular parameter estimate is then the square root of its variance, with $\sigma^2$ replaced by its estimate (a numerical sketch follows the Gauss-Markov theorem below):

$$SE(\hat\beta_i) = \sqrt{ \hat\sigma^2 \left[ (X^T X)^{-1} \right]_{(i+1)(i+1)} }$$

Gauss-Markov Theorem

You might wonder if we can find estimates with better properties. The Gauss-Markov theorem says the least squares estimates are BLUE (Best Linear Unbiased Estimator): of all linear, unbiased estimates, the least squares estimates have the smallest variance. Of course, if you are willing to use a non-linear and/or biased estimate, you might be able to find one with smaller variance. For a proof, see Section 2.8 in Faraway.
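Putting the pieces together for a single fit, here is a sketch of the standard-error computation (same made-up design and parameter values as in the earlier sketches). Note the indexing: the $i + 1$ in the notes arises because $\hat\beta_0$ is the intercept while matrix entries are indexed from 1; with numpy's 0-based indexing, `np.diag(XtX_inv)[i]` already lines up with $\hat\beta_i$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 3
beta = np.array([1.0, -2.0, 0.5])  # arbitrary example values
X = np.column_stack([np.ones(n), rng.uniform(size=(n, p - 1))])
y = X @ beta + rng.normal(scale=1.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
betahat = XtX_inv @ X.T @ y
e = y - X @ betahat
sigma2hat = e @ e / (n - p)                 # unbiased estimate of sigma^2

se = np.sqrt(sigma2hat * np.diag(XtX_inv))  # SE(betahat_i) for each coefficient
for i, (est, s) in enumerate(zip(betahat, se)):
    print(f"beta_{i}: estimate {est: .3f}, SE {s:.3f}")
```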

Summary

For the linear regression model

$$y = X\beta + \epsilon$$

where $E(\epsilon) = 0_{n \times 1}$, $\operatorname{Var}(\epsilon) = \sigma^2 I_n$, and the matrix $X_{n \times p}$ is fixed with rank $p$, the least squares estimates are

$$\hat\beta = (X^T X)^{-1} X^T y$$

Furthermore, the least squares estimates are BLUE, and

$$E(\hat\beta) = \beta, \qquad \operatorname{Var}(\hat\beta) = \sigma^2 (X^T X)^{-1}, \qquad E(\hat\sigma^2) = E\left( \frac{1}{n - p} \sum_{i=1}^{n} e_i^2 \right) = \sigma^2$$

We have not used any Normality assumptions to show these properties.