Interval estimation and hypothesis tests

So far our focus has been on estimation of the parameter vector β in the linear model

y_i = β_1 x_1i + β_2 x_2i + ... + β_K x_Ki + u_i = x_i'β + u_i for i = 1, 2, ..., N

or y = Xβ + u. We have introduced the OLS estimator β̂_OLS and shown that this gives an unbiased and efficient estimator (in the class of linear, unbiased estimators) in the classical linear regression models.

The OLS estimator gives us point estimates of the β k parameters For example, if we estimate β 2 = 1.032 in a particular application, our best guess for the value of the β 2 parameter may well be 1.032 But we know that this guess will almost certainly be wrong Often we are more interested in characterizing an interval within which the true value of the parameter lies with a specified probability 2

For example, we may want to say that there is a 95% probability that β 2 lies in an interval like (0.988, 1.076) This is known as interval estimation, and the interval in this example is called a 95% confidence interval 3

Closely related to this, if we estimate β 2 = 1.032, we may want to test whether the true value of the β 2 parameter could plausibly be one If the 95% confidence interval is (0.988, 1.076), which includes the value one, we would conclude that a true value of one is quite likely, and we would not reject the hypothesis that β 2 = 1 (at the conventional 5% significance level) This is known as hypothesis testing 4

Our result that the conditional variance V(β̂_OLS | X) = σ²(X'X)^(-1) in the classical linear regression model with stochastic regressors will be useful for characterizing confidence intervals and developing hypothesis tests. But knowing the conditional expectation and variance of the OLS estimator is not sufficient. We also need results about the (conditional) distribution of the OLS estimator before we can use these to estimate confidence intervals and conduct hypothesis tests.

There are two approaches we can take here:

- adding the assumption that u | X (or equivalently y | X) has a Normal distribution will allow us to derive the exact, finite sample distribution of β̂_OLS | X in the classical linear regression model with stochastic regressors

- without specifying the form of the distribution for u | X (or y | X), we can still characterize the distribution of β̂_OLS | X in large samples (more precisely, as the sample size N tends to infinity), and we can hope that our sample size is large enough for this asymptotic result to provide a useful approximation

One motivation for assuming that the stochastic error terms are drawn from a Normal distribution may be the idea that the error term captures the combined effects of many different factors that we have left out of our model. We can then appeal to a central limit theorem to suggest that the sum of many independent random variables will have a Normal distribution. Adding the assumption that u | X has a Normal distribution to the assumptions we made to specify the classical linear regression model with stochastic regressors gives us the classical linear regression model with normally distributed errors (from now on, unless otherwise specified, we will be treating the explanatory variables as stochastic).

Classical linear regression model with normally distributed errors

This can be stated as

y = Xβ + u, u | X ~ N(0, σ²I), X is stochastic and full rank

or equivalently as

y | X ~ N(Xβ, σ²I), X is stochastic and full rank

The key result is that β̂_OLS | X ~ N(β, σ²(X'X)^(-1)). We already knew that E(β̂_OLS | X) = β and V(β̂_OLS | X) = σ²(X'X)^(-1), since we had established those results in the classical linear regression model without using the Normality assumption. The new ingredient is that the random vector β̂_OLS | X also has a Normal distribution.

This follows directly from the property that, conditional on X, β̂_OLS is a linear function of the random vector y | X, which we have now assumed has a Normal distribution. Here we use the fact that if the N×1 vector y is stochastic conditional on X, with y | X ~ N(µ, Σ), and if the K×1 vector a is non-stochastic conditional on X and the K×N matrix B is non-stochastic conditional on X, with full row rank (rank(B) = K), then the K×1 vector z = a + By is stochastic conditional on X, with z | X ~ N(a + Bµ, BΣB').

Since β̂_OLS = (X'X)^(-1)X'y = Ay, with A non-stochastic conditional on X and rank(A) = K, we can use this fact to obtain the conditional distribution of β̂_OLS | X. Some more useful facts about the multivariate Normal distribution are now collected. Let y ~ N(µ, Σ) where y and µ are N×1 vectors and Σ is an N×N matrix.

Partition y = (y_1', y_2')', µ = (µ_1', µ_2')' and

Σ = [ Σ_11  Σ_12 ; Σ_21  Σ_22 ]

where y_1 and µ_1 are N_1×1 vectors, y_2 and µ_2 are N_2×1 vectors, N_1 + N_2 = N, Σ_11 is an N_1×N_1 matrix, Σ_22 is an N_2×N_2 matrix, Σ_12 is an N_1×N_2 matrix, and Σ_21 = Σ_12' is an N_2×N_1 matrix.

Marginal distribution: y_1 ~ N(µ_1, Σ_11)

Conditional distribution: y_2 | y_1 ~ N(a + B'y_1, Σ_22 − B'Σ_11 B), where here B = (Σ_11)^(-1)Σ_12 is an N_1×N_2 matrix, and a = µ_2 − B'µ_1 is an N_2×1 vector.

Zero correlation implies independence: if Σ_12 = 0 then y_1 and y_2 are independent. This does not hold in general for two random vectors y_1 and y_2, but does hold if they are Normally distributed. Finally, if the scalar random variable z has a standard Normal distribution, z ~ N(0, 1), then Pr(−1.96 < z < 1.96) = 0.95.

An immediate implication of the Marginal Distribution property is that since β̂_OLS | X ~ N(β, σ²(X'X)^(-1)), we have

β̂_k | X ~ N(β_k, v_kk) for k = 1, ..., K

where β̂_k is again the k-th element of β̂_OLS, β_k is the k-th element of the true parameter vector β, and v_kk = V(β̂_k | X) is the element in the k-th row and the k-th column of V(β̂_OLS | X) = σ²(X'X)^(-1) (equivalently the k-th element of the main diagonal of V(β̂_OLS | X) = σ²(X'X)^(-1)). And the linear function of β̂_k

z_k = (β̂_k − β_k)/√v_kk = (−β_k/√v_kk) + (1/√v_kk)β̂_k

has a standard Normal distribution

z_k | X ~ N(0, 1) for k = 1, ..., K

Moreover, because the conditional distribution of z_k | X is the same distribution (standard Normal) for any realization of X, we also have that the unconditional distribution of z_k is standard Normal

z_k = (β̂_k − β_k)/√v_kk ~ N(0, 1) for k = 1, ..., K

One subtle point to note here is that the realized value of z_k does depend on the realized X, which affects both β̂_k and v_kk, but the conditional distribution does not depend on the realized X, which is sufficient to obtain the unconditional distribution result.

In contrast, since the conditional distribution β̂_k | X ~ N(β_k, v_kk) does depend on the realized X, we do not have an unconditional Normal distribution for the estimated parameter β̂_k itself (in the model with stochastic regressors considered here). But this does not matter, since we use the standardized variable z_k in the construction of confidence intervals and test statistics. If we knew the variance parameter, we could use these results directly. Imagine for a moment that we knew σ², and hence we knew V(β̂_OLS | X) = σ²(X'X)^(-1) and v_kk.

The scalar random variable z_k ~ N(0, 1), so that

Pr(−1.96 < z_k < 1.96) = 0.95
Pr(−1.96 < (β̂_k − β_k)/√v_kk < 1.96) = 0.95
Pr(−1.96√v_kk < β̂_k − β_k < 1.96√v_kk) = 0.95
Pr(−1.96√v_kk < β_k − β̂_k < 1.96√v_kk) = 0.95
Pr(β̂_k − 1.96√v_kk < β_k < β̂_k + 1.96√v_kk) = 0.95

The penultimate step uses the symmetry of the interval about zero. This would give our 95% confidence interval for the true value of β_k as (β̂_k − 1.96√v_kk, β̂_k + 1.96√v_kk), or equivalently as β̂_k ± 1.96√v_kk.
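
As a small numerical illustration (mine, not part of the original notes), the sketch below computes such an interval; the value of v_kk is invented and chosen so that the resulting interval roughly matches the (0.988, 1.076) example quoted earlier.

```python
# Hypothetical numbers only: a 95% interval when sigma^2 (and hence v_kk) is known,
# so the Normal critical value 1.96 can be used directly.
import numpy as np

beta_hat_k = 1.032          # estimated coefficient, as in the running example
v_kk = 0.0005               # assumed known conditional variance of beta_hat_k (made up)
half_width = 1.96 * np.sqrt(v_kk)
print(beta_hat_k - half_width, beta_hat_k + half_width)   # roughly (0.988, 1.076)
```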

To test the hypothesis that the true value of β_k is some candidate value β_k^0, we would form the test statistic

z_k = (β̂_k − β_k^0)/√v_kk

which would have a standard Normal distribution if this hypothesis is correct. If this value of z_k looks like an unlikely draw from a standard Normal distribution (e.g. if this value of z_k lies outside the interval (−1.96, 1.96)), it is unlikely that the hypothesis is correct, and we would reject this hypothesis (at the 5% significance level). Otherwise (e.g. if our test statistic z_k is in the interval (−1.96, 1.96)) we would not reject the hypothesis.

These approaches are related: if the hypothesized value β_k^0 lies outside the 95% confidence interval, we will reject the hypothesis that this is the true value of β_k at the 5% significance level. Similar ideas will be used in the more realistic case when we do not know the variance parameter σ² and so have to estimate this from the data to obtain estimates of V(β̂_OLS | X) and v_kk. We can use the OLS estimator and estimate V(β̂_OLS | X) using

σ̂² = û'û/(N − K)
V̂(β̂_OLS | X) = σ̂²(X'X)^(-1)

Letting v̂_kk denote the k-th element on the main diagonal of V̂(β̂_OLS | X), we will use the standardized statistic

t_k = (β̂_k − β_k)/√v̂_kk = (β̂_k − β_k)/se_k

where se_k = √v̂_kk is called the standard error of the estimated parameter β̂_k. However, replacing the unknown √v_kk by the estimated se_k = √v̂_kk changes the finite sample distribution of this statistic, so we now need to derive the distribution of t_k.
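
The short sketch below (not from the notes; the data-generating process and all numbers are invented) shows how σ̂², the standard errors se_k and the resulting t-ratios are computed from an OLS fit.

```python
# Minimal OLS sketch with invented data: point estimates, residual variance
# sigma^2-hat = u'u/(N-K), standard errors and t-ratios.
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])   # includes an intercept
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y            # OLS estimator
u_hat = y - X @ beta_hat                # residuals
sigma2_hat = u_hat @ u_hat / (N - K)    # unbiased estimate of sigma^2
V_hat = sigma2_hat * XtX_inv            # estimated V(beta_hat | X)
se = np.sqrt(np.diag(V_hat))            # standard errors se_k
print(beta_hat, se, beta_hat / se)      # estimates, standard errors, t-ratios
```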

To obtain the distribution of the t_k statistic, we use some more properties of Normally distributed random vectors. We start with properties of functions of a standard Normal vector. The N×1 random vector z ~ N(0, I), where I is an N×N identity matrix, is called a standard Normal vector, with elements z_i ~ N(0, 1) for i = 1, 2, ..., N that are independent standard Normal random variables. The density of z

f(z) = (2π)^(−N/2) exp(−z'z/2) = ∏_{i=1}^N [(1/√(2π)) exp(−z_i²/2)]

is the product of N standard Normal densities.

1) The scalar w = z'z = Σ_{i=1}^N z_i² ~ χ²(N). The sum of squares of N independent standard Normal random variables has a chi-squared distribution with N degrees of freedom. Special case (N = 1): z_i² ~ χ²(1) where z_i ~ N(0, 1).

2) If the random variables w_1 ~ χ²(m) and w_2 ~ χ²(n) and w_1 and w_2 are independent, then the scalar v = (w_1/m)/(w_2/n) ~ F(m, n). The ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom, has a Snedecor F-distribution with the degrees of freedom of the numerator and denominator variables respectively.

3) If the random variables z ~ N(0, 1) and w ~ χ²(n) are independent, then the scalar u = z/√(w/n) ~ t(n). The ratio of a standard Normal random variable to the square root of an independent chi-squared random variable divided by its degrees of freedom has a Student t-distribution with that number of degrees of freedom. Corollary: since z² ~ χ²(1), u² = z²/(w/n) = (z²/1)/(w/n) ~ F(1, n). The square of a random variable with a t(n) distribution has an F(1, n) distribution.

We also use two properties of quadratic forms of Normally distributed random vectors:

1) If the N×1 vector y ~ N(µ, Σ) and the scalar w = (y − µ)'Σ^(-1)(y − µ), then w ~ χ²(N)

2) If the N×1 vector z ~ N(0, I) and the non-stochastic N×N matrix G is symmetric and idempotent with rank(G) = r ≤ N, then the scalar w = z'Gz ~ χ²(r)

The result that z'z ~ χ²(N) follows as a special case of either of these two properties.

We can now find the distribution of the statistic

t_k = (β̂_k − β_k)/√v̂_kk = (β̂_k − β_k)/se_k

in the classical linear regression model with Normally distributed errors. First let u = σz, with z | X ~ N(0, I) and y = Xβ + σz. Recall that û = My where M = I − X(X'X)^(-1)X' is a symmetric, idempotent matrix with the property that MX = 0. Then

û = My = M[Xβ + σz] = MXβ + Mσz = σMz

And û'û = (σMz)'(σMz) = σ²(z'M')Mz = σ²z'MMz = σ²z'Mz

Moreover rank(m) = N K and z X is a standard Normal vector So the scalar w 0 = û û σ 2 has a conditional distribution = σ2 z Mz σ 2 = z Mz w 0 X χ 2 (N K) using property (2) of quadratic forms in Normal vectors, and the fact that M is not stochastic conditional on X This is one of the key results we need to establish the distribution of t k 27

Now write β̂_OLS = Ay = A[Xβ + σz] = AXβ + Aσz = β + σAz, since AX = I. Equivalently β̂_OLS − β = σAz. Consider the random vectors

d_1 = û/σ = σMz/σ = Mz and d_2 = (β̂_OLS − β)/σ = σAz/σ = Az

Since M and A are not stochastic conditional on X, and since z | X ~ N(0, I), we know that d_1 | X and d_2 | X are Normally distributed.

Also, since

AM = (X'X)^(-1)X'[I − X(X'X)^(-1)X'] = (X'X)^(-1)X' − (X'X)^(-1)X'X(X'X)^(-1)X' = (X'X)^(-1)X' − (X'X)^(-1)X' = 0,

we know that d_1 | X and d_2 | X are uncorrelated, and hence also independent. But û = σd_1 and β̂_OLS = β + σd_2, where β and σ are non-stochastic. Hence û | X and β̂_OLS | X are independent. Moreover any function of û | X and any function of β̂_OLS | X are also independent. This is another key result we need to find the distribution of t_k.

Now write

t_k = (β̂_k − β_k)/√v̂_kk = (√v_kk/√v̂_kk) · (β̂_k − β_k)/√v_kk = (√v_kk/√v̂_kk) z_k

where z_k | X ~ N(0, 1). Since V(β̂_OLS | X) = σ²(X'X)^(-1) = σ²Q^(-1), we can write v_kk = σ²q_kk where q_kk is the k-th element on the main diagonal of Q^(-1) = (X'X)^(-1), and √v_kk = σ√q_kk. Similarly, since V̂(β̂_OLS | X) = σ̂²(X'X)^(-1) = σ̂²Q^(-1), we can write v̂_kk = σ̂²q_kk, and √v̂_kk = σ̂√q_kk.

Then √v_kk/√v̂_kk = σ√q_kk/(σ̂√q_kk) = σ/σ̂. Now

σ̂²/σ² = (1/σ²)(û'û/(N − K)) = (1/(N − K))(û'û/σ²) = w_0/(N − K)

where w_0 = û'û/σ² and we showed that w_0 | X ~ χ²(N − K). Hence

t_k = (√v_kk/√v̂_kk) z_k = (σ/σ̂) z_k = z_k/(σ̂/σ) = z_k/√(σ̂²/σ²) = z_k/√(w_0/(N − K))

Consider t_k = z_k/√(w_0/(N − K)). We know that z_k | X ~ N(0, 1) and w_0 | X ~ χ²(N − K). Moreover z_k = (β̂_k − β_k)/√v_kk is a function of β̂_OLS, while w_0 = û'û/σ² is a function of û. Conditional on X, we showed that any function of û | X and any function of β̂_OLS | X are independent.

So conditional on X, the scalar

t_k = (β̂_k − β_k)/√v̂_kk = z_k/√(w_0/(N − K))

is the ratio of a standard Normal random variable to the square root of an independent chi-squared random variable divided by its degrees of freedom. Hence t_k | X ~ t(N − K). And since this conditional distribution of t_k | X is the same distribution (t-distribution with N − K degrees of freedom) for any realization of X, we again have that the unconditional distribution is the same, i.e. t_k ~ t(N − K).

t_k = (β̂_k − β_k)/√v̂_kk = (β̂_k − β_k)/se_k ~ t(N − K)

We can use this result to form confidence intervals and to construct hypothesis tests. The t-distributions are also symmetric about zero, with more mass in their tails ("fatter tails") compared to the standard Normal distribution.

To construct the 95% confidence interval, we now find the lower 2.5% percentile and the upper 97.5% percentile of the t-distribution with N − K degrees of freedom. Since the distribution is symmetric about zero, these critical values have the form

Pr(t_k > c_0.025(N − K)) = 0.025
Pr(t_k < −c_0.025(N − K)) = 0.025

or

Pr(−c_0.025(N − K) < t_k < c_0.025(N − K)) = 0.95

Pr(−c_0.025(N − K) < (β̂_k − β_k)/se_k < c_0.025(N − K)) = 0.95

Again we can rearrange to obtain the 95% confidence interval for β_k as

Pr(β̂_k − c_0.025(N − K) se_k < β_k < β̂_k + c_0.025(N − K) se_k) = 0.95

which may be written as the interval (β̂_k − c_0.025(N − K) se_k, β̂_k + c_0.025(N − K) se_k), or as β̂_k ± c_0.025(N − K) se_k. Notice that, all else equal, the 95% confidence interval for β_k will be narrower if the standard error se_k = √v̂_kk is smaller; a lower standard error means we have a more precise estimate of β_k.
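
A sketch of this construction with made-up numbers (not from the notes): beta_hat_k and se_k are hypothetical OLS output, and N and K are chosen so that N − K = 20, matching the next paragraph.

```python
# 95% confidence interval based on the t(N-K) critical value.
from scipy import stats

N, K = 25, 5
beta_hat_k, se_k = 1.032, 0.021                  # hypothetical estimate and standard error
c = stats.t.ppf(0.975, df=N - K)                 # c_0.025(N-K), about 2.086 here
print(beta_hat_k - c * se_k, beta_hat_k + c * se_k)
```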

Note that the critical value depends on the degrees of freedom, and hence on the sample size (N) relative to the number of parameters in β (K) For example, with N K = 20, we have c 0.025 (N K) = 2.086, which is bigger than the corresponding 97.5% percentile of the standard Normal distribution (1.96) However, as N K increases, c 0.025 (N K) approaches 1.96 from above For example, with N K = 40, we have c 0.025 (N K) = 2.021, and with N K = 100, we have c 0.025 (N K) = 1.984 37
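
A quick check of these quoted critical values (an illustration only, not part of the notes):

```python
# c_0.025(df) from the t-distribution approaches the Normal value 1.96 as df grows.
from scipy import stats

for df in (20, 40, 100, 100000):
    print(df, round(stats.t.ppf(0.975, df), 3))   # 2.086, 2.021, 1.984, ~1.960
```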

In the limit, as (N − K) → ∞, the t(N − K) distribution approaches the standard Normal distribution, so that c_0.025(N − K) → 1.96. In very large samples, relative to the dimension of the parameter vector, we could still construct confidence intervals based on the standard Normal distribution, even though we have estimated σ². Other confidence intervals can be constructed in the same way. Letting α = 0.05 = 5%, we have constructed the (1 − α) confidence interval, using c_0.025(N − K) = c_α/2(N − K) such that Pr(−c_0.025(N − K) < t_k < c_0.025(N − K)) = 0.95.

More generally, we use the critical value c α/2 (N K) such that Pr( c α/2 (N K) < t k < c α/2 (N K)) = 1 α to obtain the (1 α) confidence interval as β k ± c α/2 (N K)se k For example, in very large samples, we could use the 95% percentile of the standard Normal distribution to obtain the 90% confidence interval as β k ± 1.645se k Or we could use the 99.5% percentile of the standard Normal distribution to obtain the 99% confidence interval as β k ± 2.576se k 39

The same result is used to construct a simple hypothesis test about the true value of an individual parameter in the classical linear regression model with Normally distributed errors. The hypothesis we wish to test is called the null hypothesis. For example, H_0: β_2 = 0, or more generally H_0: β_k = β_k^0 for some k = 1, ..., K. For a two-sided test, the alternative hypothesis would be H_1: β_2 ≠ 0, or more generally H_1: β_k ≠ β_k^0.

We form the test statistic

t_k = (β̂_k − β_k^0)/se_k

If the null hypothesis is true, we know that

t_k = (β̂_k − β_k^0)/se_k = (β̂_k − β_k)/se_k ~ t(N − K) under H_0: β_k = β_k^0

We then ask whether the value of the test statistic t_k that we have constructed is an unlikely draw from a t-distribution with N − K degrees of freedom.

If the value of t_k is a sufficiently unlikely draw from the distribution of this statistic under the null hypothesis, we conclude that the null hypothesis is unlikely to be true. For example, we reject the null hypothesis at the 5% level of significance if the probability of drawing a test statistic with an absolute value as large as |t_k| from its null distribution (i.e. its distribution if the null hypothesis is true) is less than 5%. Otherwise we do not reject the null hypothesis at the 5% significance level.

Since we have

Pr(−c_0.025(N − K) < t_k < c_0.025(N − K)) = 0.95 under H_0: β_k = β_k^0

we reject the null hypothesis against the two-sided alternative at the 5% level of significance if the value of t_k lies outside the symmetric interval (−c_0.025(N − K), c_0.025(N − K)), or equivalently if |t_k| > c_0.025(N − K). Otherwise we do not reject the null hypothesis against the two-sided alternative at the 5% level.

While it is most common to conduct hypothesis tests at the 5% significance level, the same principle can be used to test hypotheses at any desired level of significance, and results are sometimes reported at the 1% level or at the 10% level In general, we reject H 0 : β k = β 0 k against H 1 : β k β 0 k at the α level of significance if the value of t k lies outside the interval ( c α/2 (N K), c α/2 (N K)) or equivalently if t k > c α/2 (N K) 44

Note that it is quite possible to reject the null hypothesis at the 10% level, but not at the 5% level In a very large sample, this would just correspond to a t-statistic with an absolute value between 1.645 and 1.96 In such cases, the evidence against the validity of the null hypothesis is sometimes described as weakly significant or marginally significant (depending on whether the author thinks the hypothesis is true or false, respectively) 45

An alternative to reporting whether the null hypothesis is rejected or not rejected at some arbitrary level of significance is to report the significance level at which the null hypothesis would just be on the borderline between being rejected and being not rejected For example, if we obtained a t-statistic with an absolute value of 1.96 in a very large sample, we know that the significance level for which we would just reject the null hypothesis is 5% That is, we would reject the null at the 5.1% significance level or higher, but not at the 4.9% significance level or lower 46

This is referred to as the marginal significance level or p-value of the test statistic. More generally, the p-value is the probability of drawing a test statistic with an absolute value greater than or equal to the one we have obtained, if the null hypothesis is true. Conventionally the null hypothesis is then rejected if the p-value is less than 0.05 or 5%. By reporting the p-value, we allow readers to decide whether a draw with a 5% probability is sufficiently unlikely to want to reject the null hypothesis.
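
As a small illustration (the statistic and degrees of freedom below are invented), the two-sided p-value can be computed from the tail probability of the t(N − K) distribution:

```python
# Two-sided p-value: Pr(|T| >= |t_k|) for T ~ t(N-K) under the null hypothesis.
from scipy import stats

N, K = 104, 4
t_k = 2.30                                    # hypothetical test statistic
p_value = 2 * stats.t.sf(abs(t_k), df=N - K)
print(p_value)                                # reject at the 5% level if p_value < 0.05
```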

There is a one-to-one correspondence between confidence intervals and hypothesis tests in this context A question in one of the problem sets asks you to show that we reject H 0 : β k = β 0 k against H 1 : β k β 0 k at the 5% significance level if and only if the hypothesized value β 0 k lies outside the 95% confidence interval for β k 48

A special case of the t-test we have considered arises when the null hypothesis of interest is that the coefficient on the k-th explanatory variable is zero, i.e. H_0: β_k = 0 (in which case we might choose to omit this explanatory variable from the model). The test statistic is then

t_k = (β̂_k − 0)/se_k = β̂_k/se_k ~ t(N − K) under H_0: β_k = 0

which is simply the ratio of the OLS estimate of this coefficient to its standard error. This ratio is referred to as the t-ratio.

We reject H_0: β_k = 0 against the two-sided alternative H_1: β_k ≠ 0 at the 5% significance level if the t-ratio lies outside the interval (−c_0.025(N − K), c_0.025(N − K)), or if the absolute value of the t-ratio is bigger than c_0.025(N − K). The estimated coefficient (not the variable) is then said to be significantly different from zero at the 5% level. Sometimes this statement gets abbreviated to "statistically significant at the 5% level", although it is better practice to be clear about the particular null hypothesis that has been rejected when making statements of this kind.

Sometimes we may want to test the null hypothesis H_0: β_k = β_k^0 against a one-sided alternative hypothesis H_1: β_k > β_k^0 (or perhaps H_1: β_k < β_k^0). For example, if the coefficient β_k has an interpretation as a mark-up parameter in a regression of price on marginal cost, we may wish to test H_0: β_k = 1 against the one-sided alternative H_1: β_k > 1. Here point estimates of β_k below one may be interpreted as being consistent with a perfectly competitive market structure (price = marginal cost), while evidence that β_k > 1 may be interpreted as indicating market power.

To test H 0 : β k = β 0 k against H 1 : β k > β 0 k, we conduct a one-sided test, asking whether the constructed test statistic t k is an unlikely draw from the upper tail of the null distribution of the test statistic Here we do not regard unlikely draws from the lower tail of the null distribution as providing evidence against the validity of the null hypothesis We reject H 0 against this one-sided alternative H 1 at the 5% significance level if t k > c 0.05 (N K), indicating a less than 5% probability of drawing the value t k if the null hypothesis is true [For H 1 : β k < β 0 k, we would focus instead on the lower tail] 52

We have considered how to test a hypothesis about the value taken by an individual element of the parameter vector β But often we are interested in testing a hypothesis about a linear combination of these parameters For example, we may have estimated a Cobb-Douglas production function of the form y i = β 1 + β 2 x 2i + β 3 x 3i + u i where y i is the log of value added for observation i, x 2i is the log of labour input for observation i, and x 3i is the log of capital input for observation i 53

In this case, we may want to test the constant returns to scale restriction β 2 + β 3 = 1 Also we may want to test a hypothesis that imposes more than one restriction on the vector of parameters in the model For example, the joint hypothesis that β 2 = 0 and β 3 = 0 imposes two linear restrictions To conduct tests of these more general linear restrictions in the classical Normal linear regression model, we first derive a distribution result for a more general test statistic 54

Consider the p×1 random vector θ̂_OLS = Hβ̂_OLS for some non-stochastic p×K matrix H with rank(H) = p (full row rank). For example, with K = 3, the 1×3 matrix

H = (0 1 1)

gives θ̂_OLS = β̂_2 + β̂_3, and the 2×3 matrix

H = [ 0 1 0 ; 0 0 1 ]

gives

θ̂_OLS = (β̂_2, β̂_3)'

Let E(θ̂_OLS | X) = HE(β̂_OLS | X) = Hβ = θ, such that θ̂_OLS is an unbiased estimator of θ. Also let V(θ̂_OLS | X) = HV(β̂_OLS | X)H' = H[σ²(X'X)^(-1)]H' = σ²H(X'X)^(-1)H' = σ²D^(-1), where D^(-1) = H(X'X)^(-1)H' is non-stochastic conditional on X. Then by linearity we have θ̂_OLS | X ~ N(θ, σ²D^(-1)). Similarly let V̂(θ̂_OLS | X) = HV̂(β̂_OLS | X)H' = H[σ̂²(X'X)^(-1)]H' = σ̂²H(X'X)^(-1)H' = σ̂²D^(-1).

Using property (1) of quadratic forms of Normal random vectors, we have w | X ~ χ²(p), where the scalar random variable

w = (θ̂_OLS − θ)'[σ²D^(-1)]^(-1)(θ̂_OLS − θ) = (1/σ²)(θ̂_OLS − θ)'D(θ̂_OLS − θ)

Consider

ŵ = (1/σ̂²)(θ̂_OLS − θ)'D(θ̂_OLS − θ)

and

v = ŵ/p = (1/(pσ̂²))(θ̂_OLS − θ)'D(θ̂_OLS − θ)

Write

v = ŵ/p = (σ²/σ̂²)(w/p) = (σ²/σ̂²)(1/(pσ²))(θ̂_OLS − θ)'D(θ̂_OLS − θ)

As before, we have σ̂²/σ² = w_0/(N − K), where w_0 = û'û/σ² and we showed that w_0 | X ~ χ²(N − K). Thus

v = (σ²/σ̂²)(w/p) = (w/p)/(σ̂²/σ²) = (w/p)/(w_0/(N − K))

is the ratio of two chi-squared random variables, each divided by their respective degrees of freedom.

v = (w/p)/(w_0/(N − K)), where w | X ~ χ²(p) and w_0 | X ~ χ²(N − K). Moreover w = (1/σ²)(θ̂_OLS − θ)'D(θ̂_OLS − θ) is a function of β̂_OLS, since θ̂_OLS = Hβ̂_OLS, while w_0 = û'û/σ² is a function of û. Conditional on X, we showed that any function of û | X and any function of β̂_OLS | X are independent.

So conditional on X, the scalar v = (w/p)/(w_0/(N − K)) is the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom. Hence we have v | X ~ F(p, N − K). And since this conditional distribution of v | X is the same distribution (F-distribution with p and N − K degrees of freedom) for any realization of X, we again have that the unconditional distribution is the same, i.e. v ~ F(p, N − K).

We have shown that the scalar random variable

v = (1/(pσ̂²))(θ̂_OLS − θ)'D(θ̂_OLS − θ) = (1/p)(θ̂_OLS − θ)'[V̂(θ̂_OLS | X)]^(-1)(θ̂_OLS − θ) ~ F(p, N − K)

where θ = Hβ and θ̂_OLS = Hβ̂_OLS. We can use this result to test p linear restrictions involving one or more of the elements of the parameter vector β. Notice that our earlier result is a special case.

Letting H be the 1×K matrix

H = (0 0 ... 1 ... 0 0)

with the scalar 1 as its k-th element and zeros elsewhere, we have θ = Hβ = β_k and θ̂_OLS = β̂_k. Then

v = (1/p)(θ̂_OLS − θ)'[V̂(θ̂_OLS | X)]^(-1)(θ̂_OLS − θ) = (β̂_k − β_k)²/v̂_kk = t_k² ~ F(1, N − K)

This holds, as the square of a random variable with a t(N − K) distribution was shown to have an F(1, N − K) distribution.

To test a hypothesis about p linear combinations of the β_k parameters, we formulate the null hypothesis in the form H_0: Hβ = θ_0, where H is a non-stochastic p×K matrix with rank(H) = p (full row rank). The alternative hypothesis is simply H_1: Hβ ≠ θ_0.

We use the estimator θ̂_OLS = Hβ̂_OLS and construct the scalar test statistic

v = (1/p)(θ̂_OLS − θ_0)'[V̂(θ̂_OLS | X)]^(-1)(θ̂_OLS − θ_0)

where V̂(θ̂_OLS | X) = HV̂(β̂_OLS | X)H' = σ̂²H(X'X)^(-1)H' in the classical linear regression model with Normally distributed errors. If the null hypothesis is true, we have shown that v ~ F(p, N − K). We ask whether the value of the test statistic v that we obtain is an unlikely draw from this null distribution.
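
The sketch below (invented data and restriction, not the lecture's own code) builds this F-statistic directly from the formula, here for the single restriction β_2 + β_3 = 1.

```python
# F-test of H0: H beta = theta_0 from the formula
# v = (theta_hat - theta_0)' [V_hat(theta_hat|X)]^{-1} (theta_hat - theta_0) / p.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, K = 120, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
y = X @ np.array([0.2, 0.6, 0.4]) + rng.normal(size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
u_hat = y - X @ beta_hat
sigma2_hat = u_hat @ u_hat / (N - K)

H = np.array([[0.0, 1.0, 1.0]])           # one restriction: beta_2 + beta_3 = 1
theta_0 = np.array([1.0])
p = H.shape[0]

theta_hat = H @ beta_hat
V_theta = sigma2_hat * H @ XtX_inv @ H.T  # estimated V(theta_hat | X)
diff = theta_hat - theta_0
v = (diff @ np.linalg.inv(V_theta) @ diff) / p

crit = stats.f.ppf(0.95, p, N - K)        # upper 95% percentile of F(p, N-K)
print(v, crit, v > crit)                  # reject H0 at the 5% level if v > crit
```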

Again we reject H_0: Hβ = θ_0 against H_1: Hβ ≠ θ_0 if the probability of drawing a test statistic with a value as high as v from an F-distribution with (p, N − K) degrees of freedom is below our chosen level of significance, conventionally 5%. Note that the quadratic form for v implies that the scalar v is positive, so here we are only concerned with the upper tail of the F(p, N − K) distribution. We reject H_0: Hβ = θ_0 at the 5% significance level if the test statistic v lies above the upper 95% percentile of the F(p, N − K) distribution.

Examples

Consider the model with K = 3 and x_1i = 1 for all i = 1, ..., N:

y_i = β_1 + β_2 x_2i + β_3 x_3i + u_i

1) To test the joint hypothesis that β_2 = 0 and β_3 = 0 against the alternative that β_2 ≠ 0 or β_3 ≠ 0, we formulate the null hypothesis as

H_0: Hβ = [ 0 1 0 ; 0 0 1 ] (β_1, β_2, β_3)' = (β_2, β_3)' = (0, 0)'

We use

θ̂_OLS = Hβ̂_OLS = (β̂_2, β̂_3)'

and

V̂(θ̂_OLS | X) = HV̂(β̂_OLS | X)H' = [ 0 1 0 ; 0 0 1 ] [ v̂_11 v̂_12 v̂_13 ; v̂_12 v̂_22 v̂_23 ; v̂_13 v̂_23 v̂_33 ] [ 0 0 ; 1 0 ; 0 1 ] = [ v̂_22 v̂_23 ; v̂_23 v̂_33 ]

To test H_0: θ = 0 against H_1: θ ≠ 0, we construct the test statistic

v = (1/p)(θ̂_OLS − θ_0)'[V̂(θ̂_OLS | X)]^(-1)(θ̂_OLS − θ_0)
  = (1/2)(β̂_2 − 0, β̂_3 − 0) [ v̂_22 v̂_23 ; v̂_23 v̂_33 ]^(-1) (β̂_2 − 0, β̂_3 − 0)'
  = (1/2)(β̂_2, β̂_3) [ v̂_22 v̂_23 ; v̂_23 v̂_33 ]^(-1) (β̂_2, β̂_3)' ~ F(2, N − K) under H_0: θ = 0

We reject H_0: θ = 0 at the 5% level of significance if the probability of drawing this value of v from the F(2, N − K) distribution is less than 0.05, i.e. if v lies above the upper 95% percentile of the F(2, N − K) distribution.

For example, if N − K = 30, the upper 95% percentile of the F(2, 30) distribution is 3.32, and values of the test statistic v > 3.32 would reject the null hypothesis at the 5% level.

2) To test the null hypothesis that β_2 + β_3 = 1 against the alternative hypothesis that β_2 + β_3 ≠ 1, we formulate the null hypothesis as

H_0: Hβ = (0 1 1)(β_1, β_2, β_3)' = β_2 + β_3 = 1
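
The quoted critical value is easy to check (illustration only):

```python
# Upper 95% percentile of the F(2, 30) distribution.
from scipy import stats
print(round(stats.f.ppf(0.95, 2, 30), 2))   # about 3.32
```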

We use θ̂_OLS = Hβ̂_OLS = β̂_2 + β̂_3 and

V̂(θ̂_OLS | X) = HV̂(β̂_OLS | X)H' = (0 1 1) [ v̂_11 v̂_12 v̂_13 ; v̂_12 v̂_22 v̂_23 ; v̂_13 v̂_23 v̂_33 ] (0, 1, 1)' = v̂_22 + 2v̂_23 + v̂_33 (as expected)

To test H_0: θ = β_2 + β_3 = 1 against H_1: θ = β_2 + β_3 ≠ 1, we construct the test statistic

v = (1/p)(θ̂_OLS − θ_0)'[V̂(θ̂_OLS | X)]^(-1)(θ̂_OLS − θ_0) = (β̂_2 + β̂_3 − 1)²/(v̂_22 + 2v̂_23 + v̂_33) ~ F(1, N − K) under H_0: θ = β_2 + β_3 = 1

Again we reject H_0: β_2 + β_3 = 1 against H_1: β_2 + β_3 ≠ 1 at the 5% level if the value of the test statistic v lies above the upper 95% percentile of the F(1, N − K) distribution.

In this special case with p = 1, we could also use the result that

(β̂_2 + β̂_3 − 1)/√(v̂_22 + 2v̂_23 + v̂_33) ~ t(N − K) under H_0: β_2 + β_3 = 1

(the signed square root of v). We reject H_0: β_2 + β_3 = 1 against the two-sided alternative H_1: β_2 + β_3 ≠ 1 at the 5% significance level if the absolute value of this statistic lies above the upper 97.5% percentile of the t(N − K) distribution. In this special case, one-sided tests can also be considered.

Using linear transformations to simplify hypothesis tests

In this last example, another possibility is to re-parameterize the model so that the hypothesis of interest can be tested using a simple t-test on a single regression coefficient. We have the model

y_i = β_1 + β_2 x_2i + β_3 x_3i + u_i

and wish to test H_0: β_2 + β_3 = 1 against H_1: β_2 + β_3 ≠ 1.

Subtracting x_3i from both sides of the model gives

y_i − x_3i = β_1 + β_2 x_2i + (β_3 − 1)x_3i + u_i

Now subtracting β_2 x_3i from the second term on the right-hand side, and adding β_2 x_3i to the third term on the right-hand side, gives

y_i − x_3i = β_1 + β_2(x_2i − x_3i) + (β_2 + β_3 − 1)x_3i + u_i

So if we regress (y_i − x_3i) on (x_2i − x_3i) and x_3i, and an intercept, we can test H_0: β_2 + β_3 = 1, i.e. β_2 + β_3 − 1 = 0, using a simple t-test of the null hypothesis that the coefficient on x_3i is equal to zero, in this re-parameterization of the model.
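
A sketch of this re-parameterization on invented data (none of the numbers come from the notes): regressing (y_i − x_3i) on an intercept, (x_2i − x_3i) and x_3i, the t-ratio on x_3i tests β_2 + β_3 = 1.

```python
# Test of beta_2 + beta_3 = 1 via the transformed regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 200
x2, x3 = rng.normal(size=N), rng.normal(size=N)
y = 1.0 + 0.7 * x2 + 0.3 * x3 + rng.normal(size=N)   # true beta_2 + beta_3 = 1

X = np.column_stack([np.ones(N), x2 - x3, x3])       # transformed regressors
y_star = y - x3                                      # transformed dependent variable
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y_star
u_hat = y_star - X @ b
sigma2_hat = u_hat @ u_hat / (N - X.shape[1])
se = np.sqrt(np.diag(sigma2_hat * XtX_inv))

t_stat = b[2] / se[2]                                # coefficient on x3 is beta_2 + beta_3 - 1
p_value = 2 * stats.t.sf(abs(t_stat), df=N - X.shape[1])
print(t_stat, p_value)
```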

If we do not reject this hypothesis and wish to impose it, this also indicates that the restricted model

y_i = β_1 + β_2 x_2i + β_3 x_3i + u_i subject to β_2 + β_3 = 1

can be estimated quite straightforwardly using the representation

y_i − x_3i = β_1 + β_2(x_2i − x_3i) + u_i

This gives the restricted estimates of β_1 and β_2 directly as β̃_1 and β̃_2. We obtain the restricted estimate of β_3 simply as β̃_3 = 1 − β̃_2.

Goodness of fit

Note that y = ŷ + û, and that

ŷ'û = (Xβ̂_OLS)'û = (β̂_OLS'X')û = β̂_OLS'(X'û) = 0

since the first order conditions used to obtain β̂_OLS imply that X'û = 0. The OLS residuals û are orthogonal to both the explanatory variables X and to the fitted values ŷ = Xβ̂_OLS in the sample. Consider

y'y = (ŷ + û)'(ŷ + û) = ŷ'ŷ + ŷ'û + û'ŷ + û'û = ŷ'ŷ + û'û

y'y = ŷ'ŷ + û'û

Or equivalently

Σ_{i=1}^N y_i² = Σ_{i=1}^N ŷ_i² + Σ_{i=1}^N û_i²

This is an analysis (or decomposition) of sums of squares. We also have y_i = ŷ_i + û_i for i = 1, ..., N, which implies

Σ_{i=1}^N y_i = Σ_{i=1}^N ŷ_i + Σ_{i=1}^N û_i

and hence (dividing by N) ȳ = ŷ̄ + ū, where a bar denotes a sample mean.

ȳ = ŷ̄ + ū

Now provided we have ū = 0, we obtain ȳ = ŷ̄. Subtracting Nȳ² from both sides of

Σ_{i=1}^N y_i² = Σ_{i=1}^N ŷ_i² + Σ_{i=1}^N û_i²

then gives

(Σ_{i=1}^N y_i² − Nȳ²) = (Σ_{i=1}^N ŷ_i² − Nȳ²) + Σ_{i=1}^N û_i²

Noting that Σ_{i=1}^N y_i² − Nȳ² = Σ_{i=1}^N (y_i − ȳ)², and that, provided again we have ū = 0, Σ_{i=1}^N ŷ_i² − Nȳ² = Σ_{i=1}^N (ŷ_i − ȳ)², we obtain

Σ_{i=1}^N (y_i − ȳ)² = Σ_{i=1}^N (ŷ_i − ȳ)² + Σ_{i=1}^N û_i²

TSS = ESS + RSS

This is now an analysis (or decomposition) of variation (squared deviations from sample means).

Note that we require the average of the sample residuals to be zero in order to equate the sample means of the predicted values ŷ_i and the actual values y_i. This condition will always be satisfied if the model includes an intercept term. Dividing both sides by Σ_{i=1}^N (y_i − ȳ)² gives

1 = [Σ_{i=1}^N (ŷ_i − ȳ)²] / [Σ_{i=1}^N (y_i − ȳ)²] + [Σ_{i=1}^N û_i²] / [Σ_{i=1}^N (y_i − ȳ)²]

or, rearranging,

R² = [Σ_{i=1}^N (ŷ_i − ȳ)²] / [Σ_{i=1}^N (y_i − ȳ)²] = ESS/TSS = 1 − [Σ_{i=1}^N û_i²] / [Σ_{i=1}^N (y_i − ȳ)²] = 1 − RSS/TSS

Note that 0 ≤ R² ≤ 1, provided we have ū = 0. R² can then also be shown to be the square of the correlation coefficient between y_i and ŷ_i. R² measures the proportion of the total variation in the dependent variable that is explained by the model.
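
A short sketch on invented data (not from the notes) confirming the decomposition and the R² formula for a model with an intercept:

```python
# TSS = ESS + RSS and R^2 for an intercept-included OLS fit.
import numpy as np

rng = np.random.default_rng(3)
N = 80
x = rng.normal(size=N)
y = 2.0 + 1.5 * x + rng.normal(size=N)
X = np.column_stack([np.ones(N), x])

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
y_fit = X @ beta_hat
u_hat = y - y_fit

TSS = np.sum((y - y.mean()) ** 2)
ESS = np.sum((y_fit - y.mean()) ** 2)
RSS = np.sum(u_hat ** 2)
print(np.isclose(TSS, ESS + RSS), 1 - RSS / TSS)   # decomposition holds; 0 <= R^2 <= 1
```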

R² = 1 (perfect fit) occurs when y = Xβ̂_OLS, so that y − ŷ = û = 0 and so û'û = Σ_{i=1}^N û_i² = 0. R² = 0 occurs when each element of ŷ = Xβ̂_OLS equals ȳ for i = 1, 2, ..., N, so that Σ_{i=1}^N (ŷ_i − ȳ)² = 0. Note that this outcome occurs when the model contains only an intercept term.

If X is an N×1 vector with each element equal to 1, then X'û = Σ_{i=1}^N û_i. Setting X'û = 0 is thus equivalent to setting (1/N)Σ_{i=1}^N û_i = 0. Letting the coefficient on the intercept be β_1, this gives

(1/N)Σ_{i=1}^N (y_i − β̂_1) = 0, so β̂_1 = (1/N)Σ_{i=1}^N y_i = ȳ

Hence ŷ_i = β̂_1 · 1 = ȳ for i = 1, 2, ..., N. R² thus measures the goodness of fit of a model with additional explanatory variables relative to the fit of a model that includes only an intercept term.

The inclusion of an intercept term in a linear model estimated using OLS ensures that Σ_{i=1}^N û_i = 0, i.e. ū = (1/N)Σ_{i=1}^N û_i = 0, which in turn ensures that R² lies between zero and one. For linear models that do not include an intercept term (or do not otherwise ensure that ū = (1/N)Σ_{i=1}^N û_i = 0), R² should not be used as a measure of goodness of fit.

Note that in a sample of size N, we could always obtain a perfect fit (R² = 1) simply by regressing y_i on a set of N linearly independent explanatory variables. More generally, increasing the number of explanatory variables cannot reduce the fit as measured by R². Adding an explanatory variable that is irrelevant in the sample (such that the estimated coefficient on this variable is zero) leaves the fit unchanged, while adding a variable that is relevant in the sample (such that the estimated coefficient is not zero) improves the fit.

Some researchers prefer to report an adjusted R², R̄², defined by

(1 − R̄²) = (N − 1)(1 − R²)/(N − K)

which discounts this improvement in the fit as K increases for a given sample size N. Note that there is no strong theoretical rationale for this particular adjustment. A disadvantage of the R² measure of goodness of fit is that the value of R² is not invariant to linear transformations of the model.

To see this, compare

y_i = β_1 + β_2 x_2i + β_3 x_3i + u_i

and

(y_i − x_3i) = y_i* = β_1 + β_2 x_2i + (β_3 − 1)x_3i + u_i

which is obtained by subtracting x_3i from both sides of the original model. These models are equivalent, and yield the same OLS estimates of (β_1, β_2, β_3), noting that the coefficient on x_3i is (β_3 − 1) in the transformed model. They have the same error term u_i, the same residuals û_i, and hence the same residual sum of squares RSS = Σ_{i=1}^N û_i².

y_i = β_1 + β_2 x_2i + β_3 x_3i + u_i
(y_i − x_3i) = y_i* = β_1 + β_2 x_2i + (β_3 − 1)x_3i + u_i

However they have different dependent variables, and in general

Σ_{i=1}^N (y_i − ȳ)² ≠ Σ_{i=1}^N (y_i* − ȳ*)²

Hence they have different values for R² = 1 − RSS/TSS. The implication is that values of the R² statistic should not be used to compare the fit of models that have different dependent variables.

y_i = β_1 + β_2 x_2i + β_3 x_3i + u_i
(y_i − x_3i) = y_i* = β_1 + β_2 x_2i + (β_3 − 1)x_3i + u_i

A more appropriate comparison in this situation would be provided by comparing estimates of σ̂² = û'û/(N − K) or σ̂, which provide unbiased estimates of the variance or standard deviation of the error term u_i.

Using R² to test exclusion restrictions

Consider the linear model with K = 4 and x_1i = 1 for all i = 1, ..., N:

y_i = β_1 + β_2 x_2i + β_3 x_3i + β_4 x_4i + u_i

Suppose we want to test the null hypothesis that β_3 = β_4 = 0. We can form an F(2, N − K) test statistic in the usual way. An equivalent test can be obtained by comparing the R² from the unrestricted model given above and the R² from the restricted model

y_i = β_1 + β_2 x_2i + u_i

which imposes these two exclusion restrictions.

Denote the R² obtained from the unrestricted model by R²_U and denote the R² obtained from the restricted model by R²_R, noting that R²_U ≥ R²_R. Then we can also write the test statistic as

v = ((N − K)/2) · (R²_U − R²_R)/(1 − R²_U) ~ F(2, N − K)

This extends naturally to testing p exclusion restrictions, replacing 2 by p:

v = ((N − K)/p) · (R²_U − R²_R)/(1 − R²_U) ~ F(p, N − K)

v = ((N − K)/p) · (R²_U − R²_R)/(1 − R²_U) ~ F(p, N − K)

This form of the F-test statistic is very intuitive. The null hypothesis is rejected if the exclusion of these p explanatory variables results in a sufficiently large fall in the R² goodness of fit measure, or in a sufficiently large deterioration in the fit of the model.
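
The sketch below (invented data; the helper name r_squared is mine) computes this R²-based F-statistic for p = 2 exclusion restrictions:

```python
# R^2 form of the F-test for exclusion restrictions beta_3 = beta_4 = 0.
import numpy as np
from scipy import stats

def r_squared(y, X):
    b = np.linalg.inv(X.T @ X) @ X.T @ y
    u = y - X @ b
    return 1 - (u @ u) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(4)
N, K, p = 150, 4, 2
x2, x3, x4 = rng.normal(size=(3, N))
y = 1.0 + 0.8 * x2 + 0.1 * x3 + rng.normal(size=N)

X_U = np.column_stack([np.ones(N), x2, x3, x4])   # unrestricted model
X_R = np.column_stack([np.ones(N), x2])           # restricted model
R2_U, R2_R = r_squared(y, X_U), r_squared(y, X_R)

v = ((N - K) / p) * (R2_U - R2_R) / (1 - R2_U)
crit = stats.f.ppf(0.95, p, N - K)
print(v, crit, v > crit)                          # reject at 5% if v > crit
```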

This can be used to test the restriction that all K − 1 of the slope coefficients in a linear model are equal to zero, i.e. to test the exclusion of all the explanatory variables except the intercept. The restricted model y_i = β_1 + u_i has R²_R = 0. The test statistic simplifies to

v = ((N − K)/(K − 1)) · R²/(1 − R²) ~ F(K − 1, N − K)

where R² = R²_U denotes the R² from the unrestricted model. This is sometimes referred to as the F-test for the model.

Asymptotic results

If t_k ~ t(n) then t_k →_d N(0, 1) as n → ∞. Write t_k = z/√(w/n), where z ~ N(0, 1) and w ~ χ²(n). We know that w = Σ_{i=1}^n z_i², where z_i ~ N(0, 1) for i = 1, 2, ..., n are independent standard Normal random variables. Note that E(w) = nE(z_i²) = n·1 = n and E(w/n) = E(w)/n = n/n = 1. Moreover w/n = (1/n)Σ_{i=1}^n z_i² →_p E(z_i²) = 1 as n → ∞ [applying a Law of Large Numbers].

Now since (w/n) →_p 1 as n → ∞, we have the result that t_k = z/√(w/n) has the same limit distribution as z ~ N(0, 1) [applying Slutsky's theorem]. In large samples (i.e. as N → ∞ treating the number of parameters K as fixed, so that (N − K) → ∞), to test one linear restriction, we can use critical values obtained from the standard Normal distribution. Similarly, if v ~ F(m, n) then mv →_d χ²(m) as n → ∞. In large samples, to test p linear restrictions, we can use the test statistic pv and critical values obtained from the χ²(p) distribution.

Write v = (w_1/m)/(w_2/n), so that mv = w_1/(w_2/n), where w_1 ~ χ²(m) and w_2 ~ χ²(n). Again w_2 = Σ_{i=1}^n z_i², with z_i ~ N(0, 1) for i = 1, 2, ..., n independent standard Normal random variables, so that E(w_2) = n and E(w_2/n) = 1. Again we have that (w_2/n) = (1/n)Σ_{i=1}^n z_i² →_p E(z_i²) = 1 as n → ∞, so that mv = w_1/(w_2/n) has the same limit distribution as w_1 ~ χ²(m) [again applying a LLN and Slutsky's theorem].
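
As a small numerical illustration (not in the notes), the F-based cutoff scaled by p is close to the χ²(p) cutoff when the denominator degrees of freedom are large; the numbers below are chosen only for this comparison.

```python
# Large-sample equivalence of the F(p, N-K) and chi-squared(p) critical values.
from scipy import stats

p, df = 2, 5000
print(stats.f.ppf(0.95, p, df) * p)   # F-based cutoff scaled by p
print(stats.chi2.ppf(0.95, p))        # chi-squared(p) cutoff, about 5.99
```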