ECON 3150/4150 spring term 2014: Exercise set for the first seminar and DIY exercises for the first few weeks of the course


Ragnar Nymoen
13 January 2014

Exercise set for seminar 1 (week 6, 3-7 Feb)

This first exercise set is too long for a single seminar. Therefore, only Question A will be discussed at the seminar meeting. The other exercises are for your own work (alone or together with others) while you wait for the seminars to begin in calendar week 6. Answers to the other exercises will be given during the first six lectures, or they will be posted on the web page of ECON 4150. Notice will be given about this!

Question A

Stock and Watson Chapter 2: 2.4; 2.8; 2.17; 2.24; 2.26
Stock and Watson Chapter 3: 3.17; 3.20; 3.21

As noted, the rest are DIY exercises (for calendar weeks 3, 4 and 5).

Question B

1. Consider the three stochastic variables X, Y and Z. The three variables are connected by the linear function

   $Y = a + bX + Z$

   where $a$ and $b$ are parameters. Assume that $E(Z) = 0$ and $\mathrm{Cov}(X, Z) = 0$, and show that the parameter $b$ can be written as

   $b = \dfrac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(X)}$

   Note: This is an example of a population regression. One interpretation of the regression model with a stochastic regressor in econometrics is that it is a method that, by conditioning, allows us to obtain probabilistic knowledge about the population slope parameter, written as $b$ here, without prior knowledge about $\mathrm{Cov}(X, Y)$ and $\mathrm{Var}(X)$. (A simulation that illustrates this identity is sketched below.)
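The population identity in B.1 can be made tangible by simulation. The following minimal sketch is an addition for self-study, not part of the original exercise set; it assumes Python with numpy and the illustrative values a = 1, b = 2:

```python
# Minimal simulation sketch for Question B.1 (illustrative values a=1, b=2).
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
a, b = 1.0, 2.0

X = rng.normal(loc=0.0, scale=3.0, size=n)   # stochastic regressor
Z = rng.normal(loc=0.0, scale=1.0, size=n)   # E(Z)=0, drawn independently of X, so Cov(X,Z)=0
Y = a + b * X + Z

# Sample analogue of Cov(X, Y) / Var(X); should be close to b = 2.
b_hat = np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1)
print(b_hat)  # approximately 2.0
```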

2. Assume that X and Y are two stochastic variables. The law of iterated expectations (also called the law of double expectation) states that

   (1)   $E\left[E(Y \mid X)\right] = E(Y)$

   (a) Interpret the two expectation operators on the left-hand side of the equality sign.

   (b) Try to prove (1) for the case where X and Y are discrete stochastic variables.

3. Consider the two stochastic variables X and Y. The conditional expectation $E(Y \mid x)$ is deterministic for any given value $X = x$. But we can also regard the expectation of Y for all possible values of X. In this interpretation $E(Y \mid X)$ is a stochastic variable with realization $E(Y \mid x)$ when $X = x$. Hence, the conditional expectation function $E(Y \mid X)$ is a function of the stochastic variable X, so that we can write $E(Y \mid X) = g_X(X)$, where $g_X(X)$ is a function. Consider the discrete probability distribution in Table 1.

   Table 1: A discrete probability distribution for X and Y

                   X = -8   X = 0   X = 8   f_Y(y_i)
       Y = -2       0.05     0.50    0.10     0.65
       Y =  6       0.05     0.20    0.10     0.35
       f_X(x_i)     0.10     0.70    0.20

   Note that the marginal probabilities are in the last row (for X) and in the right-hand column (for Y) of the table.

   (a) Obtain the conditional distribution for Y given X, shown in Table 2.

       Table 2: Conditional distribution for Y given X, based on Table 1

                   X = -8   X = 0   X = 8
       Y = -2       1/2      5/7     1/2
       Y =  6       1/2      2/7     1/2

   (b) Show that the conditional expectation function $E(Y \mid X)$ becomes:

       $E(Y \mid X = -8) = 2$,  $E(Y \mid X = 0) = \dfrac{2}{7}$,  $E(Y \mid X = 8) = 2$

   (c) Calculate $E\left[E(Y \mid X)\right]$ by using the law of iterated expectations, i.e., by summing the three conditional expectations multiplied by the marginal probabilities $f_X(x_i)$.

   (d) Use the marginal distribution for Y to show that $E(Y) = E\left[E(Y \mid X)\right]$, as a check of the law of iterated expectations. (A numerical version of (b)-(d) is sketched below, after item 4.)

4. Consider the stochastic variables X and Y and the conditional expectation function $E(Y \mid X)$. Let the stochastic variable $\varepsilon$ be implicitly defined by the equation

   $Y = E(Y \mid X) + \varepsilon$

   Show that $E(\varepsilon \mid X) = 0$. We know that $\mathrm{Cov}(X, \varepsilon) = E(X\varepsilon) = 0$ is a consequence of $E(\varepsilon \mid X) = 0$. Assume that $E(Y \mid X)$ is linear: $E(Y \mid X) = \beta_0 + \beta_1 X$. What are $E(\varepsilon \mid X)$ and $\mathrm{Cov}(X, \varepsilon)$ for the stochastic variable $\varepsilon$ defined by $Y = \beta_0 + \beta_1 X + \varepsilon$?
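To make 3(b)-(d) concrete, the following minimal sketch (an addition for self-study, assuming Python with numpy) encodes the joint distribution in Table 1 and verifies both the conditional expectation function and the law of iterated expectations:

```python
# Joint distribution from Table 1: rows are Y in {-2, 6}, columns are X in {-8, 0, 8}.
import numpy as np

x_vals = np.array([-8.0, 0.0, 8.0])
y_vals = np.array([-2.0, 6.0])
joint = np.array([[0.05, 0.50, 0.10],    # P(Y=-2, X=x)
                  [0.05, 0.20, 0.10]])   # P(Y= 6, X=x)

f_x = joint.sum(axis=0)                  # marginal of X: [0.1, 0.7, 0.2]
f_y = joint.sum(axis=1)                  # marginal of Y: [0.65, 0.35]

cond_y_given_x = joint / f_x             # column-wise conditional distribution (Table 2)
ce = y_vals @ cond_y_given_x             # E(Y|X=x) for each x: [2, 2/7, 2], as in 3(b)

lhs = ce @ f_x                           # E[E(Y|X)] by iterated expectations, as in 3(c)
rhs = y_vals @ f_y                       # E(Y) from the marginal of Y, as in 3(d)
print(ce, lhs, rhs)                      # lhs == rhs == 0.8
```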

Question C

1. For a given set of data observations $x_i$ ($i = 1, 2, \ldots, n$) and $y_i$ ($i = 1, 2, \ldots, n$), show that:

   (2)   $\sum_{i=1}^{n}(x_i - \bar{x}) = 0$

   (3)   $\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}) = \sum_{i=1}^{n}(x_i - \bar{x})y_i$

   (4)   $\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}) = \sum_{i=1}^{n} x_i(y_i - \bar{y})$

   (5)   $\sum_{i=1}^{n}(x_i - \bar{x})^2 = \sum_{i=1}^{n} x_i(x_i - \bar{x})$

   where $\bar{x}$ and $\bar{y}$ denote the arithmetic means.

2. The least squares estimation principle entails that we choose the estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ that minimize the following criterion function:

   (6)   $S(\beta_0, \beta_1) = \sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i)^2$

   where $\{x_i, y_i;\ i = 1, 2, \ldots, n\}$ are interpreted as given numbers (data). Show that the first order conditions for a minimum of $S(\beta_0, \beta_1)$ are:

   (7)   $\bar{y} - \hat{\beta}_0 - \hat{\beta}_1 \bar{x} = 0$

   (8)   $\sum_{i=1}^{n} x_i y_i - \hat{\beta}_0 \sum_{i=1}^{n} x_i - \hat{\beta}_1 \sum_{i=1}^{n} x_i^2 = 0$

   (A numerical spot-check of (2)-(5) and (7)-(8) is sketched below.)
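The identities in C.1 and the first order conditions in C.2 are pure algebra, so they hold for any numbers and can be spot-checked numerically. A minimal sketch (an addition for self-study, assuming numpy and hypothetical data):

```python
# Numerical spot-check of identities (2)-(5) and of the first order conditions (7)-(8).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.5 + 0.8 * x + rng.normal(size=50)   # any numbers work; the identities are algebraic
xbar, ybar = x.mean(), y.mean()

assert np.isclose((x - xbar).sum(), 0.0)                                    # (2)
assert np.isclose(((x - xbar) * (y - ybar)).sum(), ((x - xbar) * y).sum())  # (3)
assert np.isclose(((x - xbar) * (y - ybar)).sum(), (x * (y - ybar)).sum())  # (4)
assert np.isclose(((x - xbar) ** 2).sum(), (x * (x - xbar)).sum())          # (5)

# OLS estimates, then verify that they satisfy (7) and (8):
b1 = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()
b0 = ybar - b1 * xbar
assert np.isclose(ybar - b0 - b1 * xbar, 0.0)                               # (7)
assert np.isclose((x * y).sum() - b0 * x.sum() - b1 * (x ** 2).sum(), 0.0)  # (8)
print(b0, b1)
```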

3. Solve (7) and (8) for $\hat{\beta}_0$ and $\hat{\beta}_1$.

4. Show that the alternative formulation of the criterion function

   (9)   $S(\alpha, \beta_1) = \sum_{i=1}^{n}\left(y_i - \alpha - \beta_1(x_i - \bar{x})\right)^2$

   gives rise to the same OLS estimate $\hat{\beta}_1$ as in your answer to 3 above.

5. What is the algebraic relationship between the two OLS estimates $\hat{\beta}_0$ and $\hat{\alpha}$?

6. Assume that the empirical correlation coefficient between $x$ and $y$, $r_{X,Y}$, is 0.5. Does this imply that the regression coefficient $\hat{\beta}_1$ in the regression where X is the regressor is positive? Explain.

7. Consider the "inverse" regression, where X is the regressand and Y is the regressor. What is the expression for the OLS estimate of the slope coefficient (call it $\hat{\beta}_1^*$) in that regression? Are the two regression coefficients the same?

8. Assume that you run the two regressions on two sub-samples of time series data, and that you observe the following for the OLS estimate $\hat{\beta}_1$ (in the regression with X as the regressor) and the squared empirical correlation coefficient:

                      $\hat{\beta}_1$    $r_{X,Y}^2$
      First sample         0.5           0.5^2 = 0.25
      Second sample        0.5           0.9^2 = 0.81

   What are the two estimates of $\hat{\beta}_1^*$ in the inverse regression (where Y is the regressor)? (A numerical check of the link between the two slopes is sketched at the end of this page.)

9. How does the invariance of $\hat{\beta}_1$ with respect to the break in the correlation structure between the two sample periods affect your thinking about a possible causal relationship between X and Y?

Question D

Assume that $Y_i$ is explained by a single dummy (or indicator) variable $X_i$ that takes either the value 0 or 1. For concreteness, think of $Y_i$ as the number of recorded eye injuries during the New Year celebration in year $i$, and define $X_i$ as

   $X_i = \begin{cases} 1, & \text{if year } i \text{ is under the law against rockets in private fireworks} \\ 0, & \text{if not} \end{cases}$

1. What are the algebraic expressions for the OLS estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ in this case?
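For C.7 and C.8, a useful fact is that the direct slope (Y on X) and the inverse slope (X on Y) are tied together through the squared correlation coefficient: $\hat{\beta}_1 \hat{\beta}_1^* = r_{X,Y}^2$. A minimal numerical check of this link (an addition for self-study, assuming numpy and hypothetical data):

```python
# Check that the direct slope (Y on X) and the inverse slope (X on Y) satisfy
# b1 * b1_star = r_xy**2 (squared sample correlation coefficient).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)   # hypothetical data

def slope(regressand, regressor):
    """OLS slope from a regression with an intercept."""
    xc = regressor - regressor.mean()
    return (xc * (regressand - regressand.mean())).sum() / (xc ** 2).sum()

b1 = slope(y, x)                 # direct regression: Y on X
b1_star = slope(x, y)            # "inverse" regression: X on Y
r_xy = np.corrcoef(x, y)[0, 1]
print(b1 * b1_star, r_xy ** 2)   # the two numbers coincide
```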

2. In Norway, the use of rockets in private fireworks became illegal in 2008. Before this policy intervention, the number of injuries during the New Year festivities averaged 20. The numbers of injuries during the New Year celebrations after rockets were prohibited have been: 10, 10, 17, 16, 17, 15 (2013 celebration). What are the estimated values of $\hat{\beta}_0$ and $\hat{\beta}_1$ based on these data? [1]

3. Using the model above: What is your predicted number of injuries for the 2014 New Year celebration in Norway? What do you regard to be the main sources of forecast uncertainty in this case? (No calculations expected/required!)

Question E

Define the OLS fitted values for $Y_i$ by

   $\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i, \quad i = 1, 2, \ldots, n$

where $\hat{\beta}_0$ and $\hat{\beta}_1$ are the OLS estimates given by (7) and (8) above.

1. Show that, equivalently, $\hat{Y}_i$ can be defined by

   $\hat{Y}_i = \hat{\alpha} + \hat{\beta}_1(X_i - \bar{X}), \quad i = 1, 2, \ldots, n$

2. Define the OLS residuals by

   (10)   $\hat{\varepsilon}_i = Y_i - \hat{Y}_i, \quad i = 1, 2, \ldots, n$

   and show that

   (11)   $\bar{\hat{\varepsilon}} = \frac{1}{n}\sum_{i=1}^{n} \hat{\varepsilon}_i = 0$

   (12)   $\sum_{i=1}^{n}(\hat{\varepsilon}_i - \bar{\hat{\varepsilon}})(X_i - \bar{X}) = 0$

   (13)   $\bar{\hat{Y}} = \frac{1}{n}\sum_{i=1}^{n} \hat{Y}_i = \bar{Y}$

3. Explain intuitively why the following decomposition of the total variation in Y (the regressand) is true:

   (14)   $\underbrace{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}_{\text{total variation}} = \underbrace{\sum_{i=1}^{n}\hat{\varepsilon}_i^2}_{\text{residual variation}} + \underbrace{\sum_{i=1}^{n}(\hat{Y}_i - \bar{\hat{Y}})^2}_{\text{explained variation}}$

4. What is the relationship between the empirical correlation coefficient $r_{X,Y}$ between X and Y and the coefficient of determination ($R^2$)?

[1] http://www.helse-bergen.no/aktuelt/nyheter/sider/17-personar-med-alvorlege-augeskadar.aspx
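Two numerical companions for self-study (additions, assuming numpy): the first part computes the D.2 estimates, using only the figures quoted in the question and the fact (which D.1 asks you to derive) that with a single dummy regressor OLS reduces to group means; the second part verifies the residual identities (11)-(14) on hypothetical data:

```python
import numpy as np

# --- Question D.2: with a dummy regressor, OLS reduces to group means ---
post = np.array([10.0, 10, 17, 16, 17, 15])  # injuries after the 2008 rocket ban
pre_mean = 20.0                               # stated pre-ban average
b0 = pre_mean                                 # intercept = mean of the X_i = 0 years
b1 = post.mean() - pre_mean                   # slope = difference between group means
print(b0, b1, b0 + b1)                        # 20.0, about -5.83, and the fitted post-ban value

# --- Question E.2-E.3: residual identities (11)-(14) on hypothetical data ---
rng = np.random.default_rng(2)
X = rng.normal(size=100)
Y = 2.0 + 0.5 * X + rng.normal(size=100)

b1_hat = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
b0_hat = Y.mean() - b1_hat * X.mean()
Y_fit = b0_hat + b1_hat * X
e = Y - Y_fit

assert np.isclose(e.mean(), 0.0)                                      # (11)
assert np.isclose(((e - e.mean()) * (X - X.mean())).sum(), 0.0)       # (12)
assert np.isclose(Y_fit.mean(), Y.mean())                             # (13)
tss = ((Y - Y.mean()) ** 2).sum()
rss = (e ** 2).sum()
ess = ((Y_fit - Y_fit.mean()) ** 2).sum()
assert np.isclose(tss, rss + ess)                                     # (14)
print("identities (11)-(14) hold")
```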

5. Do (11), (12) and (14) hold if the fitted values are instead from an OLS-estimated linear relationship with no intercept (i.e., $\hat{\beta}_0$ is dropped)? What does this imply for the conventional $R^2$ statistic?

6. If we scale the variables $Y_i$ and $X_i$ in the way explained in Lecture 2, prior to OLS estimation, will $R^2$ be the same as in the unscaled case?

Question F

1. Assume that $Y_i$ and $X_i$, $i = 1, 2, \ldots, n$, are random variables. Assume that the conditional expectation function $E(Y_i \mid X_i)$ is linear: $E(Y_i \mid X_i) = \beta_0 + \beta_1 X_i$. Show that the OLS estimator $\hat{\beta}_1$ is unbiased: $E(\hat{\beta}_1 - \beta_1) = 0$. (A Monte Carlo illustration is sketched at the end of this document.)

The regular exercises for Seminars 2-10 will be posted separately.
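Unbiasedness in Question F is a statement about the distribution of $\hat{\beta}_1$ across repeated samples, which a small Monte Carlo experiment makes tangible. A minimal sketch (an addition for self-study, assuming numpy and illustrative parameter values):

```python
# Monte Carlo illustration of E(b1_hat) = beta1 under a linear conditional expectation function.
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1, n, reps = 1.0, 0.7, 40, 20_000
estimates = np.empty(reps)

for r in range(reps):
    X = rng.normal(size=n)
    eps = rng.normal(size=n)         # drawn independently of X, so E(eps | X) = 0
    Y = beta0 + beta1 * X + eps      # E(Y | X) = beta0 + beta1 * X
    Xc = X - X.mean()
    estimates[r] = (Xc * (Y - Y.mean())).sum() / (Xc ** 2).sum()

print(estimates.mean())  # close to beta1 = 0.7: the average estimation error is near zero
```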