FIRST MIDTERM EXAM ECON 7801 SPRING 2001

ECONOMICS DEPARTMENT, UNIVERSITY OF UTAH

Date of exam: Tuesday, February 20, 9–10:30 am

Problem 1. [2 points] Let y be an n-vector. (It may be a vector of observations of a random variable y, but it does not matter how the yᵢ were obtained.) Prove that the scalar α which minimizes the sum

(1) (y₁ − α)² + (y₂ − α)² + ⋯ + (yₙ − α)² = Σᵢ (yᵢ − α)²

is the arithmetic mean α = ȳ.

Answer: Use (??); in particular, Σᵢ (yᵢ − α)² = Σᵢ (yᵢ − ȳ)² + n(ȳ − α)², which is minimized exactly when α = ȳ.

Problem 2.

a. [2 points] Verify that the matrix D = I − (1/n)ιι⊤ (where ι is the n-vector of ones) is symmetric and idempotent.
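
The following is a minimal numeric sketch in R that checks the claim of Problem 1 with optimize() and the symmetry and idempotency of D from Problem 2a. The vector y, the sample size and the seed are made up for illustration; they are not part of the exam.

## Illustrative R check of Problem 1 and Problem 2a (made-up data)
set.seed(1)
n <- 5
y <- rnorm(n)

## Problem 1: the sum of squared deviations is minimized at the arithmetic mean
sse <- function(alpha) sum((y - alpha)^2)
opt <- optimize(sse, interval = range(y))
c(minimizer = opt$minimum, mean = mean(y))   # the two values agree up to tolerance

## Problem 2a: D = I - (1/n) iota iota' is symmetric and idempotent
iota <- rep(1, n)
D <- diag(n) - (1/n) * iota %*% t(iota)
all.equal(D, t(D))       # TRUE: symmetric
all.equal(D, D %*% D)    # TRUE: idempotent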

b. [1 point] Compute the trace tr D.

Answer: tr D = n − 1. One can see this either by writing down the matrix element by element, or by using the linearity of the trace plus the rule that tr(AB) = tr(BA): tr I = n and tr(ιι⊤) = tr(ι⊤ι) = tr(n) = n, hence tr D = n − (1/n)·n = n − 1.

c. [1 point] For any vector of observations y compute Dy.

Answer: Element by element one can write

(2) Dy = [ 1−1/n    −1/n   ⋯    −1/n ] [ y₁ ]   [ y₁ − ȳ ]
         [  −1/n   1−1/n   ⋯    −1/n ] [ y₂ ] = [ y₂ − ȳ ]
         [    ⋮        ⋮    ⋱      ⋮  ] [  ⋮ ]   [    ⋮   ]
         [  −1/n    −1/n   ⋯   1−1/n ] [ yₙ ]   [ yₙ − ȳ ]

There is also a more elegant matrix-theoretical proof available: Dy = y − (1/n)ι(ι⊤y) = y − ιȳ.

d. [1 point] Is there a vector a ≠ o for which Da = o? If so, give an example of such a vector.

Answer: ι is, up to a scalar factor, the only nonzero vector with Dι = o.

e. [1 point] Show that the sample variance of a vector of observations y can be written in matrix notation as

(3) (1/n) Σᵢ (yᵢ − ȳ)² = (1/n) y⊤Dy

Answer: Let's get rid of the factor 1/n which appears on both sides: we have to show that

(4) Σᵢ (yᵢ − ȳ)² = y⊤Dy = y⊤D⊤Dy

This is the squared length of the vector Dy which we computed in part c.

Problem 3. [1 point] Compute the matrix product

[ 1  2  4 ] [ 4  0 ]
[ 0  3  3 ] [ 2  1 ]
            [ 3  8 ]

Answer:

[ 1  2  4 ] [ 4  0 ]   [ 1·4 + 2·2 + 4·3    1·0 + 2·1 + 4·8 ]   [ 20  34 ]
[ 0  3  3 ] [ 2  1 ] = [ 0·4 + 3·2 + 3·3    0·0 + 3·1 + 3·8 ] = [ 15  27 ]
            [ 3  8 ]

Problem 4. [2 points] Assume that X has full column rank. Show that ε̂ = My where M = I − X(X⊤X)⁻¹X⊤. Show that M is symmetric and idempotent.

Answer: By definition, ε̂ = y − Xβ̂ = y − X(X⊤X)⁻¹X⊤y = (I − X(X⊤X)⁻¹X⊤)y = My. Symmetry is immediate since (X(X⊤X)⁻¹X⊤)⊤ = X(X⊤X)⁻¹X⊤. Idempotence, i.e. MM = M:

(5) MM = (I − X(X⊤X)⁻¹X⊤)(I − X(X⊤X)⁻¹X⊤)
       = I − X(X⊤X)⁻¹X⊤ − X(X⊤X)⁻¹X⊤ + X(X⊤X)⁻¹X⊤X(X⊤X)⁻¹X⊤
       = I − X(X⊤X)⁻¹X⊤ = M
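
A similar hedged R sketch (again with simulated data and an arbitrary seed) illustrates parts c and e of Problem 2 and the residual-maker matrix M of Problem 4; lm() is used only as an independent cross-check of ε̂ = My.

## Illustrative R check of Problems 2c, 2e and 4 (simulated data)
set.seed(2)
n <- 6
y <- rnorm(n); x <- rnorm(n)

iota <- rep(1, n)
D <- diag(n) - (1/n) * iota %*% t(iota)
all.equal(as.vector(D %*% y), y - mean(y))                     # Problem 2c: Dy = y - ybar
all.equal(drop(t(y) %*% D %*% y) / n, mean((y - mean(y))^2))   # Problem 2e: sample variance

## Problem 4: residual maker M = I - X (X'X)^(-1) X'
X <- cbind(1, x)                                   # design matrix with intercept
M <- diag(n) - X %*% solve(t(X) %*% X) %*% t(X)
all.equal(M, t(M))                                 # symmetric
all.equal(M, M %*% M)                              # idempotent
all.equal(as.vector(M %*% y), unname(resid(lm(y ~ x))))   # eps_hat = My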

Problem 5. [2 points] We are in the multiple regression model y = Xβ + ε with intercept, i.e., X is such that there is a vector a with ι = Xa. Define the row vector x̄⊤ = (1/n) ι⊤X, i.e., it has as its jth component the sample mean of the jth independent variable. Using the normal equations X⊤y = X⊤Xβ̂, show that ȳ = x̄⊤β̂ (i.e., the regression plane goes through the center of gravity of all data points).

Answer: Premultiply the normal equation by a⊤ to get ι⊤y − ι⊤Xβ̂ = 0. Premultiply by 1/n to get the result.

Problem 6. [2 points] Let θ be a vector of possibly random parameters, and θ̂ an estimator of θ. Show that

(6) MSE[θ̂; θ] = V[θ̂ − θ] + (E[θ̂ − θ])(E[θ̂ − θ])⊤

Don't assume the scalar result but make a proof that is good for vectors and scalars.

Answer: For any random vector x,

E[xx⊤] = E[(x − E[x] + E[x])(x − E[x] + E[x])⊤]
       = E[(x − E[x])(x − E[x])⊤] + E[(x − E[x]) E[x]⊤] + E[E[x] (x − E[x])⊤] + E[E[x] E[x]⊤]
       = V[x] + O + O + E[x] E[x]⊤

Setting x = θ̂ − θ, the statement follows.
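
As a hedged illustration (simulated data, arbitrary coefficients and seed), the R sketch below checks Problem 5's "center of gravity" property on a fitted lm(), and verifies the sample analogue of decomposition (6) for a deliberately biased estimator of a made-up parameter vector.

## Illustrative R check of Problems 5 and 6 (simulated data)
set.seed(3)
n <- 20
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 - x2 + rnorm(n)

## Problem 5: ybar = xbar' beta_hat
fit  <- lm(y ~ x1 + x2)
xbar <- c(1, mean(x1), mean(x2))          # (1/n) iota' X, including the constant column
all.equal(mean(y), drop(xbar %*% coef(fit)))

## Problem 6: empirical MSE = variance + bias bias' for a biased estimator of theta
reps  <- 5000
theta <- c(1, 2)
draws <- replicate(reps, theta + c(0.5, -0.3) + rnorm(2, sd = 0.2))
err   <- draws - theta                    # 2 x reps matrix of estimation errors
MSE   <- err %*% t(err) / reps
bias  <- rowMeans(err)
V     <- cov(t(err)) * (reps - 1) / reps  # variance matrix with divisor reps
all.equal(MSE, V + bias %*% t(bias))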

Problem 7. Consider two very simple-minded estimators of the unknown nonrandom parameter vector φ = (φ₁, φ₂)⊤. Neither of these estimators depends on any observations; they are constants. The first estimator is φ̂ = (11, 11)⊤, and the second is φ̃ = (12, 8)⊤.

a. [2 points] Compute the MSE-matrices of these two estimators if the true value of the parameter vector is φ = (10, 10)⊤. For which estimator is the trace of the MSE matrix smaller?

Answer: φ̂ has the smaller trace of the MSE-matrix:

φ̂ − φ = [1; 1],   MSE[φ̂; φ] = E[(φ̂ − φ)(φ̂ − φ)⊤] = E[ [1; 1][1  1] ] = [1  1; 1  1]

φ̃ − φ = [2; −2],   MSE[φ̃; φ] = [4  −4; −4  4]

Note that both MSE-matrices are singular, i.e., both estimators allow an error-free look at certain linear combinations of the parameter vector.

b. [1 point] Give two vectors g = (g₁, g₂)⊤ and h = (h₁, h₂)⊤ satisfying MSE[g⊤φ̂; g⊤φ] < MSE[g⊤φ̃; g⊤φ] and MSE[h⊤φ̂; h⊤φ] > MSE[h⊤φ̃; h⊤φ]. (g and h are not unique; there are many possibilities.)

Answer: With g = (1, −1)⊤ and h = (1, 1)⊤, for instance, we get g⊤φ̂ − g⊤φ = 0, g⊤φ̃ − g⊤φ = 4, h⊤φ̂ − h⊤φ = 2, h⊤φ̃ − h⊤φ = 0; therefore MSE[g⊤φ̂; g⊤φ] = 0, MSE[g⊤φ̃; g⊤φ] = 16, MSE[h⊤φ̂; h⊤φ] = 4, MSE[h⊤φ̃; h⊤φ] = 0. An alternative way to compute this is, e.g.,

MSE[g⊤φ̃; g⊤φ] = [1  −1] [4  −4; −4  4] [1; −1] = 16

c. [1 point] Show that neither MSE[φ̂; φ] − MSE[φ̃; φ] nor MSE[φ̃; φ] − MSE[φ̂; φ] is a nonnegative definite matrix. Hint: you are allowed to use the mathematical fact that if a matrix is nonnegative definite, then its determinant is nonnegative.

Answer:

(7) MSE[φ̃; φ] − MSE[φ̂; φ] = [3  −5; −5  3]

Its determinant is negative (9 − 25 = −16), and the determinant of its negative is also negative.
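
Since the numbers in Problem 7 are fixed constants, the whole problem can be re-done mechanically; the short R sketch below reproduces the MSE matrices, their traces, the quadratic forms for g and h, and the negative determinants. It is purely a numeric restatement of the answers above.

## Problem 7 in numbers
phi       <- c(10, 10)
phi_hat   <- c(11, 11)
phi_tilde <- c(12,  8)

mse <- function(est, truth) (est - truth) %*% t(est - truth)
M1 <- mse(phi_hat,   phi); M1; sum(diag(M1))   # trace 2
M2 <- mse(phi_tilde, phi); M2; sum(diag(M2))   # trace 8

## part b: the linear combinations g and h
g <- c(1, -1); h <- c(1, 1)
drop(t(g) %*% M1 %*% g); drop(t(g) %*% M2 %*% g)   # 0 versus 16
drop(t(h) %*% M1 %*% h); drop(t(h) %*% M2 %*% h)   # 4 versus 0

## part c: neither difference is nonnegative definite
det(M2 - M1); det(M1 - M2)                         # both -16 < 0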

Problem 8.

a. [2 points] Show that the sampling error of the OLS estimator is

(8) β̂ − β = (X⊤X)⁻¹X⊤ε

b. [2 points] Derive from this that β̂ is unbiased and that its MSE-matrix is

(9) MSE[β̂; β] = σ²(X⊤X)⁻¹

Problem 9.

a. [2 points] Show that

(10) SSE = ε⊤Mε   where M = I − X(X⊤X)⁻¹X⊤

Answer: SSE = ε̂⊤ε̂, where ε̂ = y − Xβ̂ = y − X(X⊤X)⁻¹X⊤y = My with M = I − X(X⊤X)⁻¹X⊤. From MX = O follows ε̂ = M(Xβ + ε) = Mε. Since M is idempotent and symmetric, it follows that ε̂⊤ε̂ = ε⊤M⊤Mε = ε⊤Mε.

b. [1 point] Is SSE observed? Is ε observed? Is M observed?

c. [3 points] Under the usual assumption that X has full column rank, show that

(11) E[SSE] = σ²(n − k)

Answer: E[ε̂⊤ε̂] = E[tr(ε⊤Mε)] = E[tr(Mεε⊤)] = σ² tr M = σ² tr(I − X(X⊤X)⁻¹X⊤) = σ²(n − tr((X⊤X)⁻¹X⊤X)) = σ²(n − k).
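
The following hedged R simulation (made-up design matrix, known σ and β, arbitrary seed) illustrates Problems 8 and 9: the average of β̂ over many draws is close to β, tr M equals n − k exactly, and the average SSE is close to σ²(n − k).

## Illustrative Monte Carlo for Problems 8 and 9 (simulated data)
set.seed(4)
n <- 50; k <- 3; sigma <- 2
X    <- cbind(1, matrix(rnorm(n * (k - 1)), n, k - 1))
beta <- c(1, 2, -1)

M <- diag(n) - X %*% solve(t(X) %*% X) %*% t(X)
sum(diag(M))                        # exactly n - k = 47

one_draw <- function() {
  y    <- X %*% beta + rnorm(n, sd = sigma)
  bhat <- solve(t(X) %*% X, t(X) %*% y)
  c(bhat, sum((y - X %*% bhat)^2))
}
sims <- replicate(2000, one_draw())
rowMeans(sims)[1:k]                 # close to beta: beta_hat is unbiased
mean(sims[k + 1, ])                 # close to sigma^2 * (n - k) = 188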

Problem 10. The prediction problem in the Ordinary Least Squares model can be formulated as follows:

(12) [y; y₀] = [X; X₀] β + [ε; ε₀],   E[ε; ε₀] = [o; o],   V[ε; ε₀] = σ² [I  O; O  I]

X and X₀ are known, y is observed, y₀ is not observed.

a. [4 points] Show that ŷ₀ = X₀β̂ is the Best Linear Unbiased Predictor (BLUP) of y₀ on the basis of y, where β̂ is the OLS estimate in the model y = Xβ + ε.

Answer: Take any other predictor ỹ₀ = By and write B = X₀(X⊤X)⁻¹X⊤ + D. Unbiasedness means E[ỹ₀ − y₀] = X₀(X⊤X)⁻¹X⊤Xβ + DXβ − X₀β = o, from which follows DX = O. Because of unbiasedness we know MSE[ỹ₀; y₀] = V[ỹ₀ − y₀]. Since the prediction error can be written

ỹ₀ − y₀ = [X₀(X⊤X)⁻¹X⊤ + D   −I] [y; y₀],

one obtains

V[ỹ₀ − y₀] = [X₀(X⊤X)⁻¹X⊤ + D   −I] V[ε; ε₀] [X₀(X⊤X)⁻¹X⊤ + D   −I]⊤
           = σ²(X₀(X⊤X)⁻¹X⊤ + D)(X₀(X⊤X)⁻¹X⊤ + D)⊤ + σ²I
           = σ²(X₀(X⊤X)⁻¹X₀⊤ + DD⊤ + I)

since DX = O.

This is smallest for D = O.

b. [2 points] The same ŷ₀ = X₀β̂ is also the Best Linear Unbiased Estimator of X₀β, which is the expected value of y₀. You are not required to re-prove this here, but you are asked to compute MSE[X₀β̂; X₀β] and compare it with MSE[ŷ₀; y₀]. Can you explain the difference?

Answer: Estimation error and MSE are

X₀β̂ − X₀β = X₀(β̂ − β) = X₀(X⊤X)⁻¹X⊤ε   due to (8)
MSE[X₀β̂; X₀β] = V[X₀β̂ − X₀β] = V[X₀(X⊤X)⁻¹X⊤ε] = σ²X₀(X⊤X)⁻¹X₀⊤

It differs from the prediction MSE matrix by σ²I, which is the uncertainty about the value of the new disturbance ε₀ about which the data have no information.

Problem 11. Given a regression with a constant term and two explanatory variables which we will call x and z, i.e.,

(13) yₜ = α + βxₜ + γzₜ + εₜ

a. [1 point] How will you estimate β and γ if it is known that β = γ?

Answer: Write

(14) yₜ = α + β(xₜ + zₜ) + εₜ

i.e., regress y on the single regressor x + z; its coefficient estimates the common value of β and γ.
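
Two hedged numeric illustrations of this page, with all inputs made up: the first evaluates the two MSE matrices from Problem 10 b for a small design and shows that they differ by exactly σ²I; the second fits the reparameterized regression (14) of Problem 11 a with lm().

## Problem 10 b: prediction MSE versus estimation MSE (made-up X, X0, sigma)
set.seed(5)
n <- 30; sigma <- 1.5
X  <- cbind(1, rnorm(n))            # observed design
X0 <- cbind(1, c(-1, 0, 2))         # three points to be predicted
XtXinv <- solve(t(X) %*% X)
mse_est  <- sigma^2 * X0 %*% XtXinv %*% t(X0)               # MSE[X0 beta_hat ; X0 beta]
mse_pred <- sigma^2 * (X0 %*% XtXinv %*% t(X0) + diag(3))   # MSE[y0_hat ; y0]
mse_pred - mse_est                                          # equals sigma^2 * diag(3)

## Problem 11 a: impose beta = gamma by regressing on x + z (simulated data)
x <- rnorm(n); z <- rnorm(n)
y <- 0.5 + 1.2 * x + 1.2 * z + rnorm(n)    # generated with beta = gamma = 1.2
coef(lm(y ~ I(x + z)))                     # the slope estimates the common coefficient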

b. [1 point] How will you estimate β and γ if it is known that β + γ = 1?

Answer: Setting γ = 1 − β gives the regression

(15) yₜ − zₜ = α + β(xₜ − zₜ) + εₜ

Problem 12. [2 points] Use the simple matrix differentiation rules ∂(w⊤β)/∂β⊤ = w⊤ and ∂(β⊤Mβ)/∂β⊤ = 2β⊤M to compute ∂L/∂β⊤ where L is the Lagrange function for the constrained OLS problem

(16) L(β) = (y − Xβ)⊤(y − Xβ) + (Rβ − u)⊤λ

Answer: Write the objective function as y⊤y − 2y⊤Xβ + β⊤X⊤Xβ + λ⊤Rβ − λ⊤u to get ∂L/∂β⊤ = −2y⊤X + 2β⊤X⊤X + λ⊤R.

Problem 13. Assume β̃ is the constrained least squares estimator subject to the constraint Rβ = o, and β̂ is the unconstrained least squares estimator.

a. [1 point] With the usual notation ỹ = Xβ̃ and ŷ = Xβ̂, show that

(17) y = ỹ + (ŷ − ỹ) + ε̂

Point out these vectors in the reggeom simulation.

Answer: In the reggeom simulation, y is the purple line; Xβ̃ is the red line starting at the origin, one could also call it ỹ; X(β̂ − β̃) = ŷ − ỹ is the light blue line, and ε̂ is the green line, which does not start at the origin. In other words: if one projects y on a plane, and also on a line in that plane, and then connects the footpoints of these two projections, one obtains a zig-zag line with two right angles.

b. [4 points] Show that in (17) the three vectors ỹ, ŷ − ỹ, and ε̂ are orthogonal. You are allowed to use, without proof, the following formula for β̃:

(18) β̃ = β̂ − (X⊤X)⁻¹R⊤(R(X⊤X)⁻¹R⊤)⁻¹(Rβ̂ − u)

Answer: One has to verify that the scalar products of the three vectors on the right-hand side of (17) are zero. ỹ⊤ε̂ = β̃⊤X⊤ε̂ = 0 and (ŷ − ỹ)⊤ε̂ = (β̂ − β̃)⊤X⊤ε̂ = 0 follow from X⊤ε̂ = o; geometrically one can simply say that ỹ and ŷ are in the space spanned by the columns of X, and ε̂ is orthogonal to that space. Finally, using (18) for β̂ − β̃ (with u = o here),

ỹ⊤(ŷ − ỹ) = β̃⊤X⊤X(β̂ − β̃)
          = β̃⊤X⊤X(X⊤X)⁻¹R⊤(R(X⊤X)⁻¹R⊤)⁻¹Rβ̂
          = β̃⊤R⊤(R(X⊤X)⁻¹R⊤)⁻¹Rβ̂ = 0

because β̃ satisfies the constraint Rβ̃ = o, hence β̃⊤R⊤ = o⊤.
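
A hedged R sketch of Problem 13 with simulated data: it computes the constrained estimator from formula (18) with u = o (the constraint matrix R below is just an arbitrary example), and confirms that the three pieces of decomposition (17) are pairwise orthogonal.

## Illustrative check of Problem 13 (simulated data; R beta = 0 with an example R)
set.seed(7)
n <- 25
X <- cbind(rnorm(n), rnorm(n), rnorm(n))
y <- X %*% c(1, 2, 3) + rnorm(n)
R <- matrix(c(1, -1, 0), nrow = 1)          # example constraint: beta1 - beta2 = 0

XtXinv <- solve(t(X) %*% X)
bhat   <- XtXinv %*% t(X) %*% y                          # unconstrained OLS
btilde <- bhat - XtXinv %*% t(R) %*%
          solve(R %*% XtXinv %*% t(R), R %*% bhat)       # formula (18) with u = 0
drop(R %*% btilde)                                       # essentially 0: constraint holds

yhat   <- X %*% bhat
ytilde <- X %*% btilde
ehat   <- y - yhat
## the three vectors in (17) are pairwise orthogonal (zero up to rounding)
c(crossprod(ytilde, ehat), crossprod(yhat - ytilde, ehat), crossprod(ytilde, yhat - ytilde))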

Problem 14. In the intermediate econometrics textbook [WW79], the following regression line is estimated:

(19) bₜ = 0.13 + 0.68yₜ + 0.23wₜ + ε̂ₜ,

where bₜ is the public purchase of Canadian government bonds (in billion $), yₜ is the national income, and wₜ is a dummy variable with the value wₜ = 1 for the war years 1940–45, and zero otherwise.

a. [1 point] This equation represents two regression lines, one for peace and one for war, both of which have the same slope, but which have different intercepts. What is the intercept of the peace time regression, and what is that of the war time regression line?

Answer: In peace, wₜ = 0, therefore the regression reads bₜ = 0.13 + 0.68yₜ + ε̂ₜ, therefore the intercept is 0.13. In war, wₜ = 1, therefore bₜ = 0.13 + 0.68yₜ + 0.23 + ε̂ₜ, therefore the intercept is 0.13 + 0.23 = 0.36.

b. [1 point] What would the estimated equation have been if, instead of wₜ, they had used a variable pₜ with the values pₜ = 0 during the war years, and pₜ = 1 otherwise? (Hint: the coefficient for pₜ will be negative, because the intercept in peace times is below the intercept in war times.)

Answer: Now the intercept of the whole equation is the intercept of the war regression line, which is 0.36, and the coefficient of pₜ is the difference between peace and war intercepts, which is −0.23:

(20) bₜ = 0.36 + 0.68yₜ − 0.23pₜ + ε̂ₜ

c. [1 point] What would the estimated equation have been if they had thrown in both wₜ and pₜ, but left out the intercept term?

Answer: Now the coefficient of wₜ is the intercept in the war years, which is 0.36, and the coefficient of pₜ is the intercept in the peace years, which is 0.13:

(21) bₜ = 0.36wₜ + 0.13pₜ + 0.68yₜ + ε̂ₜ

d. [2 points] What would the estimated equation have been, if bond sales and income had been measured in millions of dollars instead of billions of dollars? (1 billion = 1000 million)

Answer: From bₜ = 0.13 + 0.68yₜ + 0.23wₜ + ε̂ₜ follows 1000bₜ = 130 + 0.68 · 1000yₜ + 230wₜ + 1000ε̂ₜ, or

(22) bₜ^(m) = 130 + 0.68yₜ^(m) + 230wₜ + ε̂ₜ^(m),

where bₜ^(m) is bond sales in millions (i.e., bₜ^(m) = 1000bₜ), and yₜ^(m) is national income in millions (i.e., yₜ^(m) = 1000yₜ).
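
The algebra in Problem 14 holds for any data set, not just the [WW79] numbers; the hedged R sketch below generates an artificial series (coefficients chosen to echo (19), seed and sample size arbitrary) and shows how the fitted coefficients move under the three codings and under the rescaling of part d.

## Problem 14's recodings on simulated data (not the [WW79] data)
set.seed(8)
t_len <- 30
w     <- rbinom(t_len, 1, 0.3)         # war dummy
p     <- 1 - w                         # peace dummy
y_inc <- rnorm(t_len, mean = 10)       # "national income"
b     <- 0.13 + 0.68 * y_inc + 0.23 * w + rnorm(t_len, sd = 0.1)

coef(lm(b ~ y_inc + w))                # intercept = peace intercept; coef of w = war minus peace
coef(lm(b ~ y_inc + p))                # intercept = war intercept;   coef of p = peace minus war
coef(lm(b ~ 0 + w + p + y_inc))        # no intercept: coefs of w and p are the two intercepts

## part d: measuring b and income in "millions" multiplies intercept and dummy coef by 1000
coef(lm(I(1000 * b) ~ I(1000 * y_inc) + w))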

Problem 15. Regression models often incorporate seasonality by assuming that the intercept of the regression is different in every season, while the slopes remain the same. Assuming X contains quarterly data (but the constant term is not incorporated in X), this can be achieved in several different ways. You may write your model as

(23) y = Xβ + Cδ + ε,    C = [ 1  0  0  0 ]
                             [ 0  1  0  0 ]
                             [ 0  0  1  0 ]
                             [ 0  0  0  1 ]
                             [ 1  0  0  0 ]
                             [ 0  1  0  0 ]
                             [ 0  0  1  0 ]
                             [ 0  0  0  1 ]

Alternatively, you may write your model in the form

(24) y = ια + Xβ + Kδ + ε,    ι = [ 1 ]      K = [ 0  0  0 ]
                                  [ 1 ]          [ 1  0  0 ]
                                  [ 1 ]          [ 0  1  0 ]
                                  [ 1 ]          [ 0  0  1 ]
                                  [ 1 ]          [ 0  0  0 ]
                                  [ 1 ]          [ 1  0  0 ]
                                  [ 1 ]          [ 0  1  0 ]
                                  [ 1 ]          [ 0  0  1 ]

In R this is the default method to generate dummy variables from a seasonal factor variable (S-Plus has a different default). This is also the procedure shown in [Gre97, p. 383].

But the following third alternative is often preferable:

(25) y = ια + Xβ + Kδ + ε,    ι = [ 1 ]      K = [  1   0   0 ]
                                  [ 1 ]          [  0   1   0 ]
                                  [ 1 ]          [  0   0   1 ]
                                  [ 1 ]          [ −1  −1  −1 ]
                                  [ 1 ]          [  1   0   0 ]
                                  [ 1 ]          [  0   1   0 ]
                                  [ 1 ]          [  0   0   1 ]
                                  [ 1 ]          [ −1  −1  −1 ]

In R one gets these dummy variables from a seasonal factor variable if one specifies the contrast "contr.sum".

[3 points] What is the meaning of the seasonal dummies δ₁, δ₂, δ₃, and of the constant term α or the fourth seasonal dummy δ₄, in models (23), (24), and (25)?

Answer: Clearly, in model (23), δᵢ is the intercept in the ith season. For (24) and (25), it is best to write the regression equation for each season separately, filling in the values the dummies take for these seasons, in order to see the meaning of these dummies. Assuming X consists of one column only,

(24) becomes

[ y₁ ]        [ x₁ ]     [ 0  0  0 ]          [ ε₁ ]
[ y₂ ]        [ x₂ ]     [ 1  0  0 ]          [ ε₂ ]
[ y₃ ]        [ x₃ ]     [ 0  1  0 ] [ δ₁ ]   [ ε₃ ]
[ y₄ ] = ια + [ x₄ ] β + [ 0  0  1 ] [ δ₂ ] + [ ε₄ ]
[ y₅ ]        [ x₅ ]     [ 0  0  0 ] [ δ₃ ]   [ ε₅ ]
[ y₆ ]        [ x₆ ]     [ 1  0  0 ]          [ ε₆ ]
[ y₇ ]        [ x₇ ]     [ 0  1  0 ]          [ ε₇ ]
[ y₈ ]        [ x₈ ]     [ 0  0  1 ]          [ ε₈ ]

or, written element by element,

y₁ = α + x₁β + 0·δ₁ + 0·δ₂ + 0·δ₃ + ε₁    (winter)
y₂ = α + x₂β + 1·δ₁ + 0·δ₂ + 0·δ₃ + ε₂    (spring)
y₃ = α + x₃β + 0·δ₁ + 1·δ₂ + 0·δ₃ + ε₃    (summer)
y₄ = α + x₄β + 0·δ₁ + 0·δ₂ + 1·δ₃ + ε₄    (autumn)

therefore the overall intercept α is the intercept of the first quarter (winter); δ₁ is the difference between the spring intercept and the winter intercept, etc.

(25) becomes

[ y₁ ]        [ x₁ ]     [  1   0   0 ]          [ ε₁ ]
[ y₂ ]        [ x₂ ]     [  0   1   0 ]          [ ε₂ ]
[ y₃ ]        [ x₃ ]     [  0   0   1 ] [ δ₁ ]   [ ε₃ ]
[ y₄ ] = ια + [ x₄ ] β + [ −1  −1  −1 ] [ δ₂ ] + [ ε₄ ]
[ y₅ ]        [ x₅ ]     [  1   0   0 ] [ δ₃ ]   [ ε₅ ]
[ y₆ ]        [ x₆ ]     [  0   1   0 ]          [ ε₆ ]
[ y₇ ]        [ x₇ ]     [  0   0   1 ]          [ ε₇ ]
[ y₈ ]        [ x₈ ]     [ −1  −1  −1 ]          [ ε₈ ]

or, written element by element,

y₁ = α + x₁β + 1·δ₁ + 0·δ₂ + 0·δ₃ + ε₁    (winter)
y₂ = α + x₂β + 0·δ₁ + 1·δ₂ + 0·δ₃ + ε₂    (spring)
y₃ = α + x₃β + 0·δ₁ + 0·δ₂ + 1·δ₃ + ε₃    (summer)
y₄ = α + x₄β − δ₁ − δ₂ − δ₃ + ε₄          (autumn)

Here the winter intercept is α + δ₁, the spring intercept α + δ₂, the summer intercept α + δ₃, and the autumn intercept α − δ₁ − δ₂ − δ₃. Summing these and dividing by 4 shows that the constant term α is the arithmetic mean of all intercepts; therefore δ₁ is the difference between the winter intercept and the arithmetic mean of all intercepts, etc.

Maximum number of points: 52
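
To connect the three parameterizations of Problem 15 with the R behavior mentioned above, here is a hedged sketch on simulated quarterly data (two years, made-up seasonal intercepts 1–4 and slope 0.5, arbitrary seed); under these assumptions the three lm() calls below correspond to models (23), (24) and (25) respectively.

## Problem 15's three parameterizations via factor contrasts (simulated data)
set.seed(9)
quarters <- factor(rep(c("winter", "spring", "summer", "autumn"), times = 2),
                   levels = c("winter", "spring", "summer", "autumn"))
x <- rnorm(8)
y <- c(1, 2, 3, 4)[as.integer(quarters)] + 0.5 * x + rnorm(8, sd = 0.1)

## (23): one dummy per season, no constant -- coefficients are the four seasonal intercepts
coef(lm(y ~ 0 + quarters + x))

## (24): default treatment contrasts -- intercept is the winter (base) intercept,
##       each dummy is that season's intercept minus the winter intercept
coef(lm(y ~ quarters + x))

## (25): sum contrasts -- intercept is the mean of the four intercepts,
##       each dummy is that season's intercept minus this mean
coef(lm(y ~ C(quarters, contr.sum) + x))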

References

[Gre97] William H. Greene, Econometric analysis, third ed., Prentice Hall, Upper Saddle River, NJ, 1997.

[WW79] Ronald J. Wonnacott and Thomas H. Wonnacott, Econometrics, second ed., Wiley, New York, 1979.

Economics Department, University of Utah, 1645 Campus Center Drive, Salt Lake City, UT 84112-9300, USA
E-mail address: ehrbar@econ.utah.edu
URL: http://www.econ.utah.edu/ehrbar