Econ 2120: Section 2
Part I: Linear Predictor Loose Ends. Ashesh Rambachan. Fall 2018.
Outline

- Big Picture
- Matrix Version of the Linear Predictor and Least Squares Fit
  - Linear Predictor
  - Least Squares
- Omitted Variable Bias Formula
- Residual Regression
- Useful Trick
Orthogonality Conditions

Goal of the first half of the class: get you to think in terms of projections.
- Best linear predictor: projection onto the space of linear functions of X.
- Conditional expectation: projection onto the space of all functions of X.
Key: there is a one-to-one map between the projection problem and the orthogonality condition.
Orthogonality Conditions

We can always (trivially) write Y = g(X) + U, with U = Y - g(X). To understand what g(X) describes, you just need to know the orthogonality condition that U satisfies. Two cases:
(1) E[U] = E[UX] = 0 implies g(X) = β_0 + β_1 X is the best linear predictor.
(2) E[U h(X)] = 0 for all h(·) implies g(X) = E[Y | X].
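Case (2) can be checked numerically. Below is a minimal numpy sketch, using a made-up discrete joint distribution over (X, Y), showing that the conditional-expectation residual U = Y - E[Y | X] is orthogonal to any function h(X), not just linear ones:

```python
import numpy as np

# Hypothetical joint pmf over (X, Y) on a small support (numbers are illustrative)
xs = np.array([0, 0, 1, 1, 2, 2])
ys = np.array([1.0, 3.0, 2.0, 4.0, 0.0, 6.0])
p = np.full(6, 1 / 6)

def cond_mean(x):
    # E[Y | X = x], computed point by point from the joint pmf
    mask = xs == x
    return np.sum(p[mask] * ys[mask]) / np.sum(p[mask])

# Residual U = Y - E[Y | X]
U = ys - np.array([cond_mean(x) for x in xs])

# E[U h(X)] = 0 for several (nonlinear) choices of h
checks = [np.sum(p * U * h(xs)) for h in (np.sin, np.exp, lambda t: t ** 3)]
```

For the best linear predictor (case (1)), only E[U] = E[UX] = 0 would hold; the nonlinear checks above would generally fail.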
Orthogonality Conditions

Why is this useful? If you can transform your minimization problem into a projection problem, e.g.
min_{β_0, β_1} E[(Y - β_0 - β_1 X)^2] = min_{β_0, β_1} ||Y - β_0 - β_1 X||^2,
you have a whole new set of (very) useful tools. By the projection theorem, we can characterize the solution by a set of orthogonality conditions AND we have uniqueness.
Best Linear Predictor

Why are we emphasizing the best linear predictor E*[Y | X] so much?
- In some sense, it can always be computed. It requires very few assumptions aside from the existence of second moments of the joint distribution. You can always go to Stata, type reg y x, and get something that has the interpretation of a best linear predictor. Regression output = estimated coefficients of the best linear predictor.
- BUT: it is a completely separate question whether this is an interesting interpretation. To say more, you need to assume more.
Linear Predictor

X = (X_1, ..., X_K)' is a K×1 vector and β = (β_1, ..., β_K)' is the K×1 vector of coefficients of the best linear predictor. The coefficients are characterized by K orthogonality conditions:
E[X_j (Y - X'β)] = 0 for j = 1, ..., K.
Re-write in vector form to get E[X (Y - X'β)] = 0, a K×1 system of linear equations. Provided the K×K matrix E[XX'] is non-singular,
E[XY] - E[XX'] β = 0  so that  β = (E[XX'])^{-1} E[XY].
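As a concrete illustration, here is a small numpy sketch (the discrete joint distribution is made up) that computes β = (E[XX'])^{-1} E[XY] from population moments and verifies the K orthogonality conditions:

```python
import numpy as np

# Hypothetical discrete joint distribution over (X1, X2, Y);
# support points and probabilities are chosen for illustration only.
support = np.array([
    # X1 (constant), X2,  Y
    [1.0, 0.0, 1.0],
    [1.0, 1.0, 2.0],
    [1.0, 2.0, 2.5],
    [1.0, 3.0, 4.5],
])
probs = np.array([0.25, 0.25, 0.25, 0.25])

X = support[:, :2]   # K = 2 regressors (including a constant)
Y = support[:, 2]

# Population moments E[XX'] and E[XY] under this distribution
EXX = (X * probs[:, None]).T @ X
EXY = X.T @ (probs * Y)

# Best linear predictor coefficients: beta = E[XX']^{-1} E[XY]
beta = np.linalg.solve(EXX, EXY)

# Check the K orthogonality conditions E[X (Y - X'beta)] = 0
U = Y - X @ beta
orth = X.T @ (probs * U)
```

Solving the linear system directly (rather than inverting E[XX']) mirrors the characterization above: β is whatever makes the residual orthogonal to every regressor.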
Least Squares

Data are (y_i, x_i1, ..., x_iK) for i = 1, ..., n. Let x_i = (x_i1, ..., x_iK) be the 1×K vector of covariates for the i-th observation and let x^j = (x_1j, ..., x_nj)' be the n×1 vector of observations of the j-th covariate. Define the n×1 vector y = (y_1, ..., y_n)', the K×1 vector b = (b_1, ..., b_K)', and the n×K matrix x with rows x_1, ..., x_n (equivalently, columns x^1, ..., x^K). The n×K matrix x is sometimes referred to as the design matrix.
Least Squares

The least-squares coefficients b_j are defined by K orthogonality conditions:
(x^j)'(y - xb) = 0 for j = 1, ..., K.
Stack these orthogonality conditions to get x'(y - xb) = 0, a K×1 system of linear equations. Provided the K×K matrix x'x is non-singular,
b = (x'x)^{-1} x'y.
Least Squares

Can show (just algebra) that
x'x = Σ_{i=1}^n x_i' x_i,  x'y = Σ_{i=1}^n x_i' y_i.
Can also write the least-squares coefficients as
b = (Σ_{i=1}^n x_i' x_i)^{-1} (Σ_{i=1}^n x_i' y_i) = ((1/n) Σ_{i=1}^n x_i' x_i)^{-1} ((1/n) Σ_{i=1}^n x_i' y_i).
This formula is extremely useful when we discuss asymptotics.
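A quick numpy sketch (simulated data; the sample size, dimension, and coefficients are arbitrary choices) confirming that the matrix form, the summation form, and the in-sample orthogonality conditions all agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 200, 3
# Design matrix with a constant plus K-1 simulated covariates
x = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = x @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

# Matrix form: b = (x'x)^{-1} x'y, via a linear solve
b = np.linalg.solve(x.T @ x, x.T @ y)

# Summation form: (sum_i x_i' x_i)^{-1} (sum_i x_i' y_i), row by row
xx = sum(np.outer(x[i], x[i]) for i in range(n))
xy = sum(x[i] * y[i] for i in range(n))
b_sum = np.linalg.solve(xx, xy)

# The K orthogonality conditions x'(y - xb) = 0 hold exactly in sample
resid = y - x @ b
```

The summation form is the one that matters for asymptotics: each factor is a sample average to which a law of large numbers applies.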
OVB formula (again)

Short linear predictor of Y:
E*[Y | X_1, ..., X_{K-1}] = α_1 X_1 + ... + α_{K-1} X_{K-1}.
Long linear predictor of Y:
E*[Y | X_1, ..., X_K] = β_1 X_1 + ... + β_K X_K.
Auxiliary linear predictor of X_K:
E*[X_K | X_1, ..., X_{K-1}] = γ_1 X_1 + ... + γ_{K-1} X_{K-1}.
OVB formula (again)

Let X̃_K = X_K - γ_1 X_1 - ... - γ_{K-1} X_{K-1}. So
X_K = γ_1 X_1 + ... + γ_{K-1} X_{K-1} + X̃_K,
Y = β̃_1 X_1 + ... + β̃_{K-1} X_{K-1} + β_K X̃_K + U,
where U = Y - E*[Y | X_1, ..., X_K] and β̃_j = β_j + γ_j β_K for j = 1, ..., K-1.
OVB formula (again)

Note that
⟨Y - β̃_1 X_1 - ... - β̃_{K-1} X_{K-1}, X_j⟩ = ⟨β_K X̃_K + U, X_j⟩ = 0 for j = 1, ..., K-1.
These are the same orthogonality conditions as those of the short linear predictor. So
α_j = β̃_j = β_j + γ_j β_K for j = 1, ..., K-1.
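The OVB formula is an exact algebraic identity for least-squares fits, so it can be verified on any dataset. A numpy sketch with simulated data (all coefficients and the correlation structure are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)   # X2 correlated with X1
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

ones = np.ones(n)
long_X = np.column_stack([ones, x1, x2])
short_X = np.column_stack([ones, x1])

beta = np.linalg.lstsq(long_X, y, rcond=None)[0]     # long: (b0, b1, b2)
alpha = np.linalg.lstsq(short_X, y, rcond=None)[0]   # short: (a0, a1)
gamma = np.linalg.lstsq(short_X, x2, rcond=None)[0]  # auxiliary: x2 on (1, x1)

# OVB formula: alpha_j = beta_j + gamma_j * beta_K for the included regressors
ovb_pred = beta[:2] + gamma * beta[2]
```

Note the identity holds exactly in sample, not just in expectation: it is the same projection algebra applied to the empirical distribution.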
Residual Regression

A very useful trick that is used all the time. Consider the best linear predictor with K covariates:
E*[Y | X_1, ..., X_K] = β_1 X_1 + ... + β_K X_K.
Focus on β_K. Is there a simple closed-form expression for β_K? Yes! Use residual regression.
Residual Regression

Form the auxiliary linear predictor of X_K given X_1, ..., X_{K-1}. Denote this as
E*[X_K | X_1, ..., X_{K-1}] = γ_1 X_1 + ... + γ_{K-1} X_{K-1}.
The associated residual is X̃_K = X_K - γ_1 X_1 - ... - γ_{K-1} X_{K-1}.

Theorem (Residual Regression): β_K can be written as
β_K = E[Y X̃_K] / E[X̃_K²].
Residual Regression: proof

By definition, X_K = γ_1 X_1 + ... + γ_{K-1} X_{K-1} + X̃_K. Substitute into E*[Y | X_1, ..., X_K] to get
E*[Y | X_1, ..., X_K] = β̃_1 X_1 + ... + β̃_{K-1} X_{K-1} + β_K X̃_K,
where β̃_j = β_j + γ_j β_K for j = 1, ..., K-1.
Residual Regression: proof

X̃_K is a linear combination of X_1, ..., X_K, so it is orthogonal to the residual Y - E*[Y | X_1, ..., X_K]:
⟨Y - β̃_1 X_1 - ... - β̃_{K-1} X_{K-1} - β_K X̃_K, X̃_K⟩ = 0.
X̃_K is also orthogonal to X_1, ..., X_{K-1}, so the above simplifies to
⟨Y - β_K X̃_K, X̃_K⟩ = 0, and so β_K = E[Y X̃_K] / E[X̃_K²].
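The same argument goes through when least-squares fits stand in for population projections, so the formula can be checked in sample. A numpy sketch with simulated data (K = 2 plus a constant; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x1 = rng.normal(size=n)
xK = 0.4 * x1 + rng.normal(size=n)   # last covariate, correlated with x1
y = 0.5 + 1.2 * x1 + 2.0 * xK + rng.normal(size=n)

# Auxiliary fit: residualize x_K against the other covariates (here: 1, x1)
X_other = np.column_stack([np.ones(n), x1])
gamma = np.linalg.lstsq(X_other, xK, rcond=None)[0]
xK_tilde = xK - X_other @ gamma

# Long regression coefficient on x_K ...
X_long = np.column_stack([X_other, xK])
beta = np.linalg.lstsq(X_long, y, rcond=None)[0]

# ... equals the residual-regression formula b_K = (y'x̃_K) / (x̃_K'x̃_K)
b_K = (y @ xK_tilde) / (xK_tilde @ xK_tilde)
```

This is exactly the sample analog discussed below: the last coefficient of the long fit is a univariate regression of y on the residualized covariate.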
Residual Regression: intuition

The coefficient β_K on X_K is the coefficient of the best linear predictor of Y given the residual of X_K. If the conditional expectation is linear, β_K is the "partial effect" of X_K on Y holding all else constant.
Exercise 1

Consider the long and short predictors in the population:
E*[Y | 1, X_1, X_2, X_3] = β_0 + β_1 X_1 + β_2 X_2 + β_3 X_3,
E*[Y | 1, X_1, X_2] = α_0 + α_1 X_1 + α_2 X_2.
(1) Provide a formula relating α_2 and β_2.
(2) Suppose that X_3 is uncorrelated with X_1 and X_2. Does α_2 = β_2?
(3) Suppose that Cov(X_2, X_3) = 0, Cov(X_1, X_3) ≠ 0, and Cov(X_1, X_2) ≠ 0. Does α_2 = β_2?
Exercise 2

The partial covariance between the random variables Y, X_K given X_1, ..., X_{K-1} is
Cov*(Y, X_K | X_1, ..., X_{K-1}) = E[Ỹ X̃_K],
where
Ỹ = Y - E*[Y | X_1, ..., X_{K-1}],
X̃_K = X_K - E*[X_K | X_1, ..., X_{K-1}].
The partial variance of Y is
V*(Y | X_1, ..., X_{K-1}) = E[Ỹ²].
The partial correlation between Y, X_K is
Cor*(Y, X_K | X_1, ..., X_{K-1}) = E[Ỹ X̃_K] / (E[Ỹ²] E[X̃_K²])^{1/2}.
Exercise 2 (continued)

(1) Consider the linear predictor
E*[Y | 1, X_1, ..., X_K] = β_0 + β_1 X_1 + ... + β_K X_K.
Use our residual regression results to show that
β_K = Cov*(Y, X_K | 1, X_1, ..., X_{K-1}) / V*(X_K | 1, X_1, ..., X_{K-1}).
Residual Regression: sample analog

Consider the least-squares fit using K covariates,
ŷ_i = b_1 x_i1 + ... + b_K x_iK.
Following a similar argument, one can show that b_K is the coefficient of the least-squares fit of y on x̃^K, the n×1 vector of residuals from the least-squares fit of x^K on x^1, ..., x^{K-1}. That is, b_K can be written as
b_K = ((1/n) Σ_{i=1}^n y_i x̃_iK) / ((1/n) Σ_{i=1}^n x̃_iK²).
Frisch-Waugh-Lovell Theorem

Residual regression is typically known as the Frisch-Waugh-Lovell theorem. We are interested in the regression
Y = X_1 β_1 + X_2 β_2 + u,
where Y is an n×1 vector, X_1 is an n×K_1 matrix, X_2 is an n×K_2 matrix, and u is an n×1 vector of residuals. How can we write the least-squares estimate of β_2?
Frisch-Waugh-Lovell Theorem

It is the same as the estimate in the modified regression
M_{X_1} Y = M_{X_1} X_2 β_2 + M_{X_1} u,
where
M_{X_1} = I - X_1 (X_1' X_1)^{-1} X_1'
is the complement of the projection matrix X_1 (X_1' X_1)^{-1} X_1': it projects onto the orthogonal complement of the space spanned by the columns of X_1. So β_2 can be written as the coefficient in the regression of the residuals of Y on the residuals of X_2.
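A numpy sketch of the theorem in its matrix form (simulated data; the dimensions n, K_1, K_2 and coefficients are arbitrary choices), comparing the β_2 block of the full regression with the regression of M_{X_1} Y on M_{X_1} X_2:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])  # n x K1, K1 = 2
X2 = rng.normal(size=(n, 2))                            # n x K2, K2 = 2
y = X1 @ np.array([1.0, -1.0]) + X2 @ np.array([0.5, 2.0]) + rng.normal(size=n)

# Annihilator M_X1 = I - X1 (X1'X1)^{-1} X1': projects off the column span of X1
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)

# beta_2 block from the full regression ...
full = np.linalg.lstsq(np.column_stack([X1, X2]), y, rcond=None)[0]
beta2_full = full[2:]

# ... equals the coefficients from regressing M1 y on M1 X2
beta2_fwl = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0]
```

Forming the n×n matrix M1 explicitly is fine for illustration; in practice one residualizes by running the auxiliary regressions directly rather than materializing M1.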
Useful Trick - Orthogonal Decomposition

Notice we used the same trick in both the OVB and the residual regression results: decompose a variable into two pieces, one piece that lives in a simple space and another that is orthogonal to that space.
- Example: decompose the random variable Y into
Y = E[Y | X] + (Y - E[Y | X]) = E[Y | X] + U.
E[Y | X] lives in the space of functions of X, and U is orthogonal to any function of X.
- Example: decompose the random variable Y into
Y = E*[Y | X_1, ..., X_K] + (Y - E*[Y | X_1, ..., X_K]) = E*[Y | X_1, ..., X_K] + V.
E*[Y | X_1, ..., X_K] lives in the space of linear functions of X_1, ..., X_K, and V is orthogonal to any linear function of X_1, ..., X_K.
University of Regina Statistics 252 Mathematical Statistics Lecture Notes Winter 2005 Michael Kozdron kozdron@math.uregina.ca www.math.uregina.ca/ kozdron Contents 1 The Basic Idea of Statistics: Estimating
More informationconditional cdf, conditional pdf, total probability theorem?
6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random
More informationEcon 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A )
Econ 60 Matrix Differentiation Let a and x are k vectors and A is an k k matrix. a x a x = a = a x Ax =A + A x Ax x =A + A x Ax = xx A We don t want to prove the claim rigorously. But a x = k a i x i i=
More informationEconometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018
Econometrics I KS Module 2: Multivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: April 16, 2018 Alexander Ahammer (JKU) Module 2: Multivariate
More informationExercises * on Principal Component Analysis
Exercises * on Principal Component Analysis Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 207 Contents Intuition 3. Problem statement..........................................
More informationWLS and BLUE (prelude to BLUP) Prediction
WLS and BLUE (prelude to BLUP) Prediction Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark April 21, 2018 Suppose that Y has mean X β and known covariance matrix V (but Y need
More informationPROBABILITY THEORY REVIEW
PROBABILITY THEORY REVIEW CMPUT 466/551 Martha White Fall, 2017 REMINDERS Assignment 1 is due on September 28 Thought questions 1 are due on September 21 Chapters 1-4, about 40 pages If you are printing,
More informationThe Regression Tool. Yona Rubinstein. July Yona Rubinstein (LSE) The Regression Tool 07/16 1 / 35
The Regression Tool Yona Rubinstein July 2016 Yona Rubinstein (LSE) The Regression Tool 07/16 1 / 35 Regressions Regression analysis is one of the most commonly used statistical techniques in social and
More informationECE 650 Lecture 4. Intro to Estimation Theory Random Vectors. ECE 650 D. Van Alphen 1
EE 650 Lecture 4 Intro to Estimation Theory Random Vectors EE 650 D. Van Alphen 1 Lecture Overview: Random Variables & Estimation Theory Functions of RV s (5.9) Introduction to Estimation Theory MMSE Estimation
More information1.12 Multivariate Random Variables
112 MULTIVARIATE RANDOM VARIABLES 59 112 Multivariate Random Variables We will be using matrix notation to denote multivariate rvs and their distributions Denote by X (X 1,,X n ) T an n-dimensional random
More informationRegression. Econometrics II. Douglas G. Steigerwald. UC Santa Barbara. D. Steigerwald (UCSB) Regression 1 / 17
Regression Econometrics Douglas G. Steigerwald UC Santa Barbara D. Steigerwald (UCSB) Regression 1 / 17 Overview Reference: B. Hansen Econometrics Chapter 2.20-2.23, 2.25-2.29 Best Linear Projection is
More informationEconometrics I Lecture 3: The Simple Linear Regression Model
Econometrics I Lecture 3: The Simple Linear Regression Model Mohammad Vesal Graduate School of Management and Economics Sharif University of Technology 44716 Fall 1397 1 / 32 Outline Introduction Estimating
More informationChapter 4 continued. Chapter 4 sections
Chapter 4 sections Chapter 4 continued 4.1 Expectation 4.2 Properties of Expectations 4.3 Variance 4.4 Moments 4.5 The Mean and the Median 4.6 Covariance and Correlation 4.7 Conditional Expectation SKIP:
More informationLecture Notes Special: Using Neural Networks for Mean-Squared Estimation. So: How can we evaluate E(X Y )? What is a neural network? How is it useful?
Lecture Notes Special: Using Neural Networks for Mean-Squared Estimation The Story So Far... So: How can we evaluate E(X Y )? What is a neural network? How is it useful? EE 278: Using Neural Networks for
More informationUnsupervised Learning: Dimensionality Reduction
Unsupervised Learning: Dimensionality Reduction CMPSCI 689 Fall 2015 Sridhar Mahadevan Lecture 3 Outline In this lecture, we set about to solve the problem posed in the previous lecture Given a dataset,
More informationProjection. Ping Yu. School of Economics and Finance The University of Hong Kong. Ping Yu (HKU) Projection 1 / 42
Projection Ping Yu School of Economics and Finance The University of Hong Kong Ping Yu (HKU) Projection 1 / 42 1 Hilbert Space and Projection Theorem 2 Projection in the L 2 Space 3 Projection in R n Projection
More informationLinear Algebra. Maths Preliminaries. Wei CSE, UNSW. September 9, /29
September 9, 2018 1/29 Introduction This review focuses on, in the context of COMP6714. Key take-away points Matrices as Linear mappings/functions 2/29 Note You ve probability learned from matrix/system
More informationLecture 13: Simple Linear Regression in Matrix Format. 1 Expectations and Variances with Vectors and Matrices
Lecture 3: Simple Linear Regression in Matrix Format To move beyond simple regression we need to use matrix algebra We ll start by re-expressing simple linear regression in matrix form Linear algebra is
More informationLECTURE 2 LINEAR REGRESSION MODEL AND OLS
SEPTEMBER 29, 2014 LECTURE 2 LINEAR REGRESSION MODEL AND OLS Definitions A common question in econometrics is to study the effect of one group of variables X i, usually called the regressors, on another
More informationSingular Value Decomposition and Principal Component Analysis (PCA) I
Singular Value Decomposition and Principal Component Analysis (PCA) I Prof Ned Wingreen MOL 40/50 Microarray review Data per array: 0000 genes, I (green) i,i (red) i 000 000+ data points! The expression
More informationInstrumental Variables
Instrumental Variables Department of Economics University of Wisconsin-Madison September 27, 2016 Treatment Effects Throughout the course we will focus on the Treatment Effect Model For now take that to
More informationECON2228 Notes 2. Christopher F Baum. Boston College Economics. cfb (BC Econ) ECON2228 Notes / 47
ECON2228 Notes 2 Christopher F Baum Boston College Economics 2014 2015 cfb (BC Econ) ECON2228 Notes 2 2014 2015 1 / 47 Chapter 2: The simple regression model Most of this course will be concerned with
More information