REVIEW (MULTIVARIATE LINEAR REGRESSION): the LS estimator ($\hat{b}$) of the vector of coefficients ($b$)
- Marybeth Lester
1 REVIEW (MULTIVARIATE LINEAR REGRESSION)

- Explain/obtain the LS estimator ($\hat{b}$) of the vector of coefficients ($b$)
- Explain/obtain the variance-covariance matrix of $\hat{b}$
- Both in the bivariate case (two regressors) and in the multivariate case ($k$ regressors)
- Decompose the residual sum of squares
- The coefficient of multiple correlation
- Information criteria
2 BIVARIATE REGRESSION

Consider the following bivariate regression:
$$Y_i = b_1 + b_2 X_i + e_i, \qquad i = 1, \dots, N.$$
The above expression can be written in matrix form as
$$\underset{(N\times 1)}{Y} = \underset{(N\times 2)}{X}\,\underset{(2\times 1)}{b} + \underset{(N\times 1)}{e},$$
or
$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_N \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_N \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} + \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_N \end{bmatrix}.$$
3 SUM OF THE SQUARED ERRORS

The sum of the squared errors can be written as
$$\sum_{i=1}^{N} e_i^2 = e'e = \begin{bmatrix} e_1 & e_2 & \cdots & e_N \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_N \end{bmatrix} = e_1^2 + e_2^2 + \cdots + e_N^2.$$
The vector of the errors is given by
$$\underset{(N\times 1)}{e} = \underset{(N\times 1)}{Y} - \underset{(N\times 2)}{X}\,\underset{(2\times 1)}{b}.$$
4 X'X FOR THE BIVARIATE CASE

$X'X$ is given by
$$\underset{(2\times 2)}{X'X} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ X_1 & X_2 & \cdots & X_N \end{bmatrix} \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_N \end{bmatrix} = \begin{bmatrix} N & \sum X \\ \sum X & \sum X^2 \end{bmatrix}.$$
From the above equation it follows that
$$(X'X)^{-1} = \frac{1}{N\sum X^2 - (\sum X)^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix},$$
where lower-case letters denote deviations from means, $x_i = X_i - \bar{X}$, so that $N\sum x^2 = N\sum X^2 - (\sum X)^2$.
5 X'Y FOR THE BIVARIATE CASE

Similarly, $X'Y$ is given by
$$\underset{(2\times 1)}{X'Y} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ X_1 & X_2 & \cdots & X_N \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_N \end{bmatrix} = \begin{bmatrix} \sum Y \\ \sum XY \end{bmatrix}.$$
6 LS ESTIMATOR OF THE SLOPE COEFFICIENT

The LS estimator is
$$\hat{b} = \begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \end{bmatrix} = (X'X)^{-1}X'Y = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix} \begin{bmatrix} \sum Y \\ \sum XY \end{bmatrix} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 \sum Y - \sum X \sum XY \\ N\sum XY - \sum X \sum Y \end{bmatrix}.$$
Thus, since $N\sum xy = N\sum XY - \sum X \sum Y$:
$$\hat{b}_2 = \frac{N\sum XY - \sum X \sum Y}{N\sum x^2} = \frac{\sum xy}{\sum x^2}.$$
7 LS ESTIMATOR OF THE CONSTANT

From the previous slide,
$$\begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \end{bmatrix} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 \sum Y - \sum X \sum XY \\ N\sum XY - \sum X \sum Y \end{bmatrix}.$$
Thus
$$\hat{b}_1 = \frac{\sum X^2 \sum Y - \sum X \sum XY}{N\sum x^2}.$$
It can be shown that
$$\hat{b}_1 = \bar{Y} - \hat{b}_2 \bar{X}.$$
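The two routes to the bivariate estimates, the matrix formula $(X'X)^{-1}X'Y$ and the closed-form expressions $\hat{b}_2 = \sum xy / \sum x^2$, $\hat{b}_1 = \bar{Y} - \hat{b}_2\bar{X}$, can be checked against each other numerically. A minimal NumPy sketch on a small invented data set (the numbers are not from the lecture):

```python
import numpy as np

# Small invented data set, purely for illustration.
X_col = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
N = len(Y)

# Route 1: the matrix formula b_hat = (X'X)^{-1} X'Y, with a column of ones.
X = np.column_stack([np.ones(N), X_col])
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Route 2: the closed-form bivariate expressions in deviations
# x_i = X_i - Xbar, y_i = Y_i - Ybar.
x = X_col - X_col.mean()
y = Y - Y.mean()
b2 = (x * y).sum() / (x * x).sum()   # slope: sum(xy) / sum(x^2)
b1 = Y.mean() - b2 * X_col.mean()    # constant: Ybar - b2_hat * Xbar

agree = np.allclose(beta_hat, [b1, b2])
```

Both routes are algebraically identical, so `agree` should hold for any data set with a non-constant regressor.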
8 LS ESTIMATOR: k VARIABLES

Consider the: i) $N\times 1$ vector of the $Y$'s, ii) $N\times k$ matrix of the regressors ($X$'s), iii) $k\times 1$ vector of the parameters ($b$'s) and, iv) $N\times 1$ vector of the errors ($e$'s):
$$\underset{(N\times 1)}{Y} = \begin{bmatrix} Y_1 \\ \vdots \\ Y_N \end{bmatrix}, \quad \underset{(k\times 1)}{b} = \begin{bmatrix} b_1 \\ \vdots \\ b_k \end{bmatrix}, \quad \underset{(N\times k)}{X} = \begin{bmatrix} 1 & X_{21} & \cdots & X_{k1} \\ \vdots & \vdots & & \vdots \\ 1 & X_{2N} & \cdots & X_{kN} \end{bmatrix}, \quad \underset{(N\times 1)}{e} = \begin{bmatrix} e_1 \\ \vdots \\ e_N \end{bmatrix}.$$
9 The multiple regression is
$$Y_i = b_1 + b_2 X_{2i} + \cdots + b_k X_{ki} + e_i.$$
In matrix form it can be written as
$$\underset{(N\times 1)}{Y} = \underset{(N\times k)}{X}\,\underset{(k\times 1)}{b} + \underset{(N\times 1)}{e}.$$
The LS estimator of the vector $b$, as in the bivariate case, is given by
$$\underset{(k\times 1)}{\hat{b}} = \underset{(k\times k)}{(X'X)^{-1}}\,\underset{(k\times 1)}{X'Y}.$$
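With $k$ regressors the formula is the same, only the dimensions grow. The sketch below, on simulated data with assumed "true" coefficients, solves the normal equations $(X'X)\hat{b} = X'Y$ directly and compares the result with NumPy's built-in least-squares routine:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 50, 3                                  # N observations, k coefficients (incl. constant)
X = np.column_stack([np.ones(N), rng.normal(size=(N, k - 1))])
b_true = np.array([1.0, 2.0, -0.5])           # invented "true" coefficients
Y = X @ b_true + 0.1 * rng.normal(size=N)

# Normal equations: (X'X) b_hat = X'Y
beta_normal = np.linalg.solve(X.T @ X, X.T @ Y)

# NumPy's least-squares routine, for comparison.
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

agree = np.allclose(beta_normal, beta_lstsq)
```

Solving the linear system with `np.linalg.solve` is preferred in practice to forming $(X'X)^{-1}$ explicitly, since it is cheaper and numerically more stable.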
10 VARIANCE-COVARIANCE MATRIX OF THE ERRORS: TWO ERRORS

Consider the $2\times 2$ matrix $ee'$:
$$\underset{(2\times 2)}{ee'} = \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} \begin{bmatrix} e_1 & e_2 \end{bmatrix} = \begin{bmatrix} e_1^2 & e_1 e_2 \\ e_2 e_1 & e_2^2 \end{bmatrix}.$$
Since $E(e_i) = 0$, $E(e_i^2) = \sigma^2$ (the errors are homoskedastic) and $E(e_1 e_2) = 0$ (the errors are serially uncorrelated), the variance-covariance matrix of the vector of errors is given by
$$\underset{(2\times 2)}{E(ee')} = E\begin{bmatrix} e_1^2 & e_1 e_2 \\ e_2 e_1 & e_2^2 \end{bmatrix} = \begin{bmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{bmatrix} = \sigma^2 I,$$
where $I$ is the identity matrix.
11 VARIANCE-COVARIANCE MATRIX OF THE ERRORS: N ERRORS

If we have $N$ observations:
$$\underset{(N\times N)}{ee'} = \begin{bmatrix} e_1^2 & e_1 e_2 & \cdots & e_1 e_N \\ e_2 e_1 & e_2^2 & \cdots & e_2 e_N \\ \vdots & \vdots & \ddots & \vdots \\ e_N e_1 & e_N e_2 & \cdots & e_N^2 \end{bmatrix}.$$
Accordingly, the $N\times N$ variance-covariance matrix of the errors is
$$\underset{(N\times N)}{E(ee')} = \sigma^2 I.$$
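The statement $E(ee') = \sigma^2 I$ can be made concrete by simulation: averaging the outer product $ee'$ over many independent draws of a homoskedastic, serially uncorrelated error vector should approach $\sigma^2 I$. A sketch, with all numbers chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
N, reps, sigma = 4, 50_000, 1.5
# Homoskedastic, serially uncorrelated errors: each row is one draw of the N-vector e.
e = sigma * rng.normal(size=(reps, N))

# Empirical E(ee'): average the outer products e e' over the draws.
emp_cov = e.T @ e / reps
target = sigma**2 * np.eye(N)          # sigma^2 I
max_err = np.abs(emp_cov - target).max()
```

With 50,000 draws the largest entry-wise deviation from $\sigma^2 I$ is small; it shrinks further as the number of draws grows.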
12 VARIANCE-COVARIANCE MATRIX OF THE LS ESTIMATOR VECTOR: BIVARIATE CASE

The $2\times 2$ variance-covariance matrix of the vector $\hat{b}$ is defined as
$$\underset{(2\times 2)}{Var(\hat{b})} = E[\underset{(2\times 1)}{(\hat{b} - b)}\,\underset{(1\times 2)}{(\hat{b} - b)'}].$$
Note that if $\hat{b}$ were a scalar with expected value $b$, then $Var(\hat{b}) = E(\hat{b} - b)^2$. Since $\hat{b}$ is a $2\times 1$ vector:
$$\underset{(2\times 2)}{Var(\hat{b})} = \begin{bmatrix} Var(\hat{b}_1) & Cov(\hat{b}_1, \hat{b}_2) \\ Cov(\hat{b}_1, \hat{b}_2) & Var(\hat{b}_2) \end{bmatrix}.$$
It can be shown that
$$\underset{(2\times 2)}{Var(\hat{b})} = \sigma^2 (X'X)^{-1}.$$
13 VARIANCE OF THE ESTIMATOR OF THE SLOPE COEFFICIENT

Recall that
$$(X'X)^{-1} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix}.$$
Thus, $Var(\hat{b}) = \sigma^2 (X'X)^{-1}$ is given by
$$\begin{bmatrix} Var(\hat{b}_1) & Cov(\hat{b}_1, \hat{b}_2) \\ Cov(\hat{b}_1, \hat{b}_2) & Var(\hat{b}_2) \end{bmatrix} = \frac{\sigma^2}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix}.$$
Moreover, the above expression implies
$$Var(\hat{b}_2) = \frac{\sigma^2 N}{N\sum x^2} = \frac{\sigma^2}{\sum x^2}.$$
14 VARIANCE OF THE ESTIMATOR OF THE CONSTANT

Since
$$Var(\hat{b}) = \frac{\sigma^2}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix},$$
it can be shown that
$$Var(\hat{b}_1) = \sigma^2 \left[ \frac{1}{N} + \frac{\bar{X}^2}{\sum x^2} \right].$$
15 BIVARIATE CASE: COVARIANCE BETWEEN THE TWO ESTIMATORS

$$Var(\hat{b}) = \frac{\sigma^2}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix}.$$
Finally,
$$Cov(\hat{b}_1, \hat{b}_2) = -\frac{\sigma^2 \sum X}{N\sum x^2} = -\frac{\sigma^2 \bar{X}}{\sum x^2}.$$
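The three bivariate formulas above (variance of the constant, variance of the slope, and their covariance) are simply the entries of $\sigma^2(X'X)^{-1}$, which can be confirmed numerically. A sketch with an arbitrary regressor vector and an assumed value of $\sigma^2$:

```python
import numpy as np

X_col = np.array([1.0, 2.0, 4.0, 7.0, 11.0])   # arbitrary regressor values
N = len(X_col)
sigma2 = 2.0                                   # assumed error variance

X = np.column_stack([np.ones(N), X_col])
V = sigma2 * np.linalg.inv(X.T @ X)            # sigma^2 (X'X)^{-1}

xbar = X_col.mean()
Sxx = ((X_col - xbar) ** 2).sum()              # sum x^2 (squared deviations)
var_b1 = sigma2 * (1 / N + xbar**2 / Sxx)      # Var(b1_hat)
var_b2 = sigma2 / Sxx                          # Var(b2_hat)
cov_b12 = -sigma2 * xbar / Sxx                 # Cov(b1_hat, b2_hat)

ok = np.allclose([V[0, 0], V[1, 1], V[0, 1]], [var_b1, var_b2, cov_b12])
```

Note the sign of the covariance: with $\bar{X} > 0$, overestimating the slope forces an underestimate of the constant, and vice versa.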
16 VARIANCE-COVARIANCE MATRIX OF THE LS VECTOR: MULTIVARIATE CASE

For the multiple linear regression we also have
$$\underset{(k\times k)}{Var(\hat{b})} = \sigma^2 \underset{(k\times k)}{(X'X)^{-1}}.$$
In other words, in this case we have $k(k+1)/2$ distinct unknowns: the $k$ variances and the $k(k-1)/2$ covariances.
17 DECOMPOSITION OF RESIDUAL SUM OF SQUARES (RSS)

It can be shown that
$$\underbrace{\sum y^2}_{TSS} = \underbrace{\hat{b}'X'X\hat{b} - N\bar{Y}^2}_{ESS} + \underbrace{\hat{e}'\hat{e}}_{RSS},$$
where $y_i = Y_i - \bar{Y}$. TSS: total sum of squares; ESS: explained sum of squares; RSS: residual sum of squares.
18 R²

We define the $R^2$ of the regression (the coefficient of multiple correlation) as
$$R^2 = \frac{ESS}{TSS} = \frac{TSS - RSS}{TSS} = 1 - \frac{RSS}{TSS}.$$
The adjusted $R^2$ ($\bar{R}^2$) is given by
$$\bar{R}^2 = 1 - \frac{RSS/(N-k)}{TSS/(N-1)}.$$
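The decomposition $TSS = ESS + RSS$ and both goodness-of-fit measures can be verified on simulated data. A sketch (the data-generating coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 40, 2
X = np.column_stack([np.ones(N), rng.normal(size=N)])
Y = X @ np.array([0.5, 1.5]) + rng.normal(size=N)   # invented data

beta = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ beta

TSS = ((Y - Y.mean()) ** 2).sum()                   # sum y^2
ESS = beta @ X.T @ X @ beta - N * Y.mean() ** 2     # b_hat' X'X b_hat - N*Ybar^2
RSS = resid @ resid                                 # e_hat' e_hat

R2 = 1 - RSS / TSS
R2_adj = 1 - (RSS / (N - k)) / (TSS / (N - 1))

decomp_ok = np.isclose(TSS, ESS + RSS)
```

Since $(N-1)/(N-k) > 1$ whenever $k > 1$, the adjusted measure always sits below the raw $R^2$, penalizing extra regressors.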
19 INFORMATION CRITERIA

Next we define two information criteria, the Schwarz and Akaike information criteria (SIC and AIC, respectively):
$$SIC = \ln\frac{\hat{e}'\hat{e}}{N} + \frac{k}{N}\ln N, \qquad AIC = \ln\frac{\hat{e}'\hat{e}}{N} + \frac{2k}{N}.$$
The preferred model is the one with the minimum information criterion.
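A sketch of the two criteria as defined above. The data and the two candidate specifications are made up, and `sic_aic` is a hypothetical helper written for this illustration, not a function from the lecture:

```python
import numpy as np

def sic_aic(Y, X):
    """SIC = ln(e'e/N) + (k/N) ln N,  AIC = ln(e'e/N) + 2k/N  (slide definitions)."""
    N, k = X.shape
    e = Y - X @ np.linalg.solve(X.T @ X, X.T @ Y)   # LS residuals
    fit = np.log(e @ e / N)
    return fit + (k / N) * np.log(N), fit + 2 * k / N

rng = np.random.default_rng(2)
N = 100
x = rng.normal(size=N)
Y = 1.0 + 2.0 * x + rng.normal(size=N)              # the true model is linear in x

X_small = np.column_stack([np.ones(N), x])                          # k = 2
X_big = np.column_stack([np.ones(N), x, rng.normal(size=(N, 5))])   # k = 7, 5 irrelevant regressors

sic_small, aic_small = sic_aic(Y, X_small)
sic_big, aic_big = sic_aic(Y, X_big)
```

Adding regressors always lowers the fit term $\ln(\hat{e}'\hat{e}/N)$, so the penalty term, $(k/N)\ln N$ for SIC or $2k/N$ for AIC, is what lets the criteria reject over-fitted specifications; one computes the criterion for each candidate model and keeps the smallest.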
20 SUMMARY (MULTIPLE LINEAR REGRESSION)

We obtained the least squares estimator of the vector of coefficients:
$$\hat{b} = (X'X)^{-1}X'Y.$$
In the bivariate case this gives us:
$$\hat{b}_1 = \bar{Y} - \hat{b}_2\bar{X}, \qquad \hat{b}_2 = \frac{\sum xy}{\sum x^2}.$$
21 We obtained the variance-covariance matrix of $\hat{b}$:
$$Var(\hat{b}) = \sigma^2 (X'X)^{-1}.$$
In the bivariate case this gives us
$$Var(\hat{b}_1) = \sigma^2\left[\frac{1}{N} + \frac{\bar{X}^2}{\sum x^2}\right], \qquad Var(\hat{b}_2) = \frac{\sigma^2}{\sum x^2}, \qquad Cov(\hat{b}_1, \hat{b}_2) = -\frac{\sigma^2\bar{X}}{\sum x^2}.$$
22 We decomposed the TSS:
$$\underbrace{\sum y^2}_{TSS} = \underbrace{\hat{b}'X'X\hat{b} - N\bar{Y}^2}_{ESS} + \underbrace{\hat{e}'\hat{e}}_{RSS}.$$
We defined the $R^2$ of the regression:
$$R^2 = 1 - \frac{RSS}{TSS}.$$
We defined two information criteria:
$$SIC = \ln\frac{\hat{e}'\hat{e}}{N} + \frac{k}{N}\ln N, \qquad AIC = \ln\frac{\hat{e}'\hat{e}}{N} + \frac{2k}{N}.$$
23 PROOFS FROM WEEK 7 ONWARDS: ONLY FOR THE EC5501

SUM OF THE SQUARED ERRORS

The sum of the squared errors can be written as
$$\sum_{i=1}^{N} e_i^2 = e'e = e_1^2 + e_2^2 + \cdots + e_N^2,$$
where the vector of the errors is $e = Y - Xb$. Thus the sum of the squared errors can be written as
$$\sum_{i=1}^{N} e_i^2 = e'e = (Y - Xb)'(Y - Xb) = (Y' - b'X')(Y - Xb) = Y'Y - b'X'Y - Y'Xb + b'X'Xb.$$
24 SUM OF THE SQUARED ERRORS

$$\sum_{i=1}^{N} e_i^2 = e'e = Y'Y - b'X'Y - Y'Xb + b'X'Xb.$$
Next note that since $\underset{(1\times N)}{Y'}\,\underset{(N\times 2)}{X}\,\underset{(2\times 1)}{b}$ is a scalar, we have $(Y'Xb)' = b'X'Y = Y'Xb$. Thus
$$\sum_{i=1}^{N} e_i^2 = e'e = Y'Y - 2b'X'Y + b'X'Xb.$$
25 LS ESTIMATOR OF THE VECTOR b

$$\sum_{i=1}^{N} e_i^2 = e'e = Y'Y - 2b'X'Y + b'X'Xb.$$
The least squares (LS) principle is to choose the vector of the parameters $b$ in order to minimize $\sum_{i=1}^{N} e_i^2$. Thus we take the first derivative of $e'e$ with respect to $b$ and set it equal to zero:
$$\frac{\partial(e'e)}{\partial b} = -2X'Y + 2X'Xb = 0.$$
The above equation implies that the LS estimator of $b$, denoted by $\hat{b}$, is given by
$$X'X\hat{b} = X'Y \;\Rightarrow\; \hat{b} = (X'X)^{-1}X'Y.$$
In the bivariate case this is a system of two equations and two unknowns ($\hat{b}_1$, $\hat{b}_2$), because $X'X$ is $2\times 2$ and $X'Y$ is $2\times 1$.
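The first-order condition can be checked numerically: at $\hat{b}$ the gradient $-2X'Y + 2X'Xb$ vanishes, and perturbing $\hat{b}$ in any direction raises the sum of squared errors. A sketch on invented data:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 30
X = np.column_stack([np.ones(N), rng.normal(size=N)])
Y = X @ np.array([1.0, -2.0]) + rng.normal(size=N)   # invented data

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Gradient of e'e = Y'Y - 2 b'X'Y + b'X'Xb with respect to b, evaluated at b = beta_hat:
grad = -2 * X.T @ Y + 2 * X.T @ X @ beta_hat
grad_zero = np.allclose(grad, 0)

def sse(b):
    e = Y - X @ b
    return e @ e

# beta_hat minimizes the sum of squared errors: perturbing it makes the fit worse.
worse = sse(beta_hat + np.array([0.1, -0.1])) > sse(beta_hat)
```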
26 X'X FOR THE BIVARIATE CASE

$X'X$ is given by
$$\underset{(2\times 2)}{X'X} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ X_1 & X_2 & \cdots & X_N \end{bmatrix} \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_N \end{bmatrix} = \begin{bmatrix} N & \sum X \\ \sum X & \sum X^2 \end{bmatrix}.$$
From the above equation it follows that
$$(X'X)^{-1} = \frac{1}{N\sum X^2 - (\sum X)^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix}.$$
27 X'Y FOR THE BIVARIATE CASE

Similarly, $X'Y$ is given by
$$\underset{(2\times 1)}{X'Y} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ X_1 & X_2 & \cdots & X_N \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_N \end{bmatrix} = \begin{bmatrix} \sum Y \\ \sum XY \end{bmatrix}.$$
28 LS ESTIMATOR OF THE SLOPE COEFFICIENT

The LS estimator is
$$\hat{b} = \begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \end{bmatrix} = (X'X)^{-1}X'Y = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix} \begin{bmatrix} \sum Y \\ \sum XY \end{bmatrix} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 \sum Y - \sum X \sum XY \\ N\sum XY - \sum X \sum Y \end{bmatrix}.$$
Thus, since $N\sum xy = N\sum XY - \sum X \sum Y$:
$$\hat{b}_2 = \frac{N\sum XY - \sum X \sum Y}{N\sum x^2} = \frac{\sum xy}{\sum x^2}.$$
29 LS ESTIMATOR OF THE CONSTANT

$$\begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \end{bmatrix} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 \sum Y - \sum X \sum XY \\ N\sum XY - \sum X \sum Y \end{bmatrix}.$$
Further,
$$\hat{b}_1 = \frac{\sum X^2 \sum Y - \sum X \sum XY}{N\sum x^2} = \frac{\bar{Y}\sum X^2 - \bar{X}\sum XY}{\sum x^2}.$$
Next, in the numerator we add and subtract $N\bar{Y}\bar{X}^2$ to get
$$\hat{b}_1 = \frac{\bar{Y}(\sum X^2 - N\bar{X}^2) - \bar{X}(\sum XY - N\bar{Y}\bar{X})}{\sum x^2} = \frac{\bar{Y}\sum x^2 - \bar{X}\sum xy}{\sum x^2} = \bar{Y} - \hat{b}_2\bar{X}.$$
30 PROPERTIES OF THE LS RESIDUALS

From the least squares methodology we have $(X'X)\hat{b} = X'Y$. Since $Y = X\hat{b} + \hat{e}$, it follows that
$$(X'X)\hat{b} = X'(X\hat{b} + \hat{e}) \;\Rightarrow\; X'\hat{e} = 0.$$
In other words,
$$\begin{bmatrix} 1 & 1 & \cdots & 1 \\ X_1 & X_2 & \cdots & X_N \end{bmatrix} \begin{bmatrix} \hat{e}_1 \\ \hat{e}_2 \\ \vdots \\ \hat{e}_N \end{bmatrix} = \begin{bmatrix} \sum \hat{e}_i \\ \sum X_i \hat{e}_i \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
That is, the residuals $\hat{e}_i$ have mean zero, $\sum \hat{e}_i = 0$, and are orthogonal to the regressor $X_i$: $\sum X_i \hat{e}_i = 0$. Recall also that $\bar{Y} = \hat{b}_1 + \hat{b}_2\bar{X}$. In other words, $(\bar{X}, \bar{Y})$ satisfy the estimated linear relationship.
31 The predicted values for $Y$, denoted by $\hat{Y}$, are given by $\hat{Y} = X\hat{b}$. Thus
$$\hat{Y}'\hat{e} = (X\hat{b})'\hat{e} = \hat{b}'X'\hat{e} = 0.$$
In other words, the vector of regression fitted values for $Y$ is uncorrelated with $\hat{e}$.
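The residual properties derived on the last two slides ($\sum \hat{e}_i = 0$, $\sum X_i\hat{e}_i = 0$, $\hat{Y}'\hat{e} = 0$, and the fitted line passing through $(\bar{X}, \bar{Y})$) hold exactly in every sample, so they can be confirmed on arbitrary simulated data. A sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 25
X_col = rng.normal(size=N)
X = np.column_stack([np.ones(N), X_col])
Y = 2.0 + 0.5 * X_col + rng.normal(size=N)      # invented data

beta = np.linalg.solve(X.T @ X, X.T @ Y)
e_hat = Y - X @ beta                            # residuals
Y_hat = X @ beta                                # fitted values

mean_zero = np.isclose(e_hat.sum(), 0)          # sum of residuals is zero
orthogonal = np.isclose(X_col @ e_hat, 0)       # residuals orthogonal to the regressor
fitted_orth = np.isclose(Y_hat @ e_hat, 0)      # fitted values orthogonal to residuals
through_means = np.isclose(Y.mean(), beta[0] + beta[1] * X_col.mean())
```

These are algebraic consequences of the normal equations, not statistical assumptions: they hold whether or not the model is correctly specified, as long as a constant is included.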
32 EXPECTED VALUE OF THE LS ESTIMATOR

Recall that the LS vector estimator is $\hat{b} = (X'X)^{-1}X'Y$. Since $Y = Xb + e$, it follows that
$$\hat{b} = (X'X)^{-1}X'(Xb + e) = (X'X)^{-1}X'Xb + (X'X)^{-1}X'e = b + (X'X)^{-1}X'e. \quad (1)$$
Taking expectations on both sides of the above equation gives
$$E(\hat{b}) = b + (X'X)^{-1}X'E(e) = b,$$
since $E(e) = 0$. That is, $\hat{b}$ is an unbiased estimator of $b$.
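Unbiasedness is a statement about the average of $\hat{b}$ across repeated samples drawn with the same $X$. A Monte Carlo sketch (sample size, replication count, and coefficients are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
N, reps = 30, 2000
b = np.array([1.0, 2.0])                               # invented true coefficients
X = np.column_stack([np.ones(N), rng.normal(size=N)])  # regressors held fixed across samples
A = np.linalg.solve(X.T @ X, X.T)                      # (X'X)^{-1} X'

# b_hat = b + (X'X)^{-1} X' e, so with E(e) = 0 the estimates should average out to b.
estimates = np.array([A @ (X @ b + rng.normal(size=N)) for _ in range(reps)])
bias = estimates.mean(axis=0) - b
```

Any single $\hat{b}$ scatters around $b$; it is the mean over the 2000 replications that sits close to the true vector.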
33 VARIANCE-COVARIANCE MATRIX OF THE LS ESTIMATOR VECTOR: BIVARIATE CASE

The $2\times 2$ variance-covariance matrix of the vector $\hat{b}$ is defined as
$$\underset{(2\times 2)}{Var(\hat{b})} = E[\underset{(2\times 1)}{(\hat{b} - b)}\,\underset{(1\times 2)}{(\hat{b} - b)'}].$$
Note that if $\hat{b}$ were a scalar with expected value $b$, then $Var(\hat{b}) = E(\hat{b} - b)^2$. Since $\hat{b}$ is a $2\times 1$ vector:
$$\underset{(2\times 2)}{Var(\hat{b})} = \begin{bmatrix} Var(\hat{b}_1) & Cov(\hat{b}_1, \hat{b}_2) \\ Cov(\hat{b}_1, \hat{b}_2) & Var(\hat{b}_2) \end{bmatrix}.$$
34 From equation (1) we have $\hat{b} - b = (X'X)^{-1}X'e$. Hence,
$$E[(\hat{b} - b)(\hat{b} - b)'] = E\{(X'X)^{-1}X'e\,[(X'X)^{-1}X'e]'\} = E[(X'X)^{-1}X'ee'X(X'X)^{-1}],$$
since $[(X'X)^{-1}]' = (X'X)^{-1}$ and $(X')' = X$. Next, treating $X$ as nonstochastic, we have
$$E[(\hat{b} - b)(\hat{b} - b)'] = (X'X)^{-1}X'E(ee')X(X'X)^{-1}.$$
Using the fact that $E(ee') = \sigma^2 I$, we get
$$\underset{(2\times 2)}{Var(\hat{b})} = \sigma^2(X'X)^{-1}X'X(X'X)^{-1} = \sigma^2(X'X)^{-1}.$$
35 VARIANCE OF THE ESTIMATOR OF THE CONSTANT

Recall that
$$(X'X)^{-1} = \frac{1}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix}.$$
Thus, $Var(\hat{b}) = \sigma^2(X'X)^{-1}$ is given by
$$\begin{bmatrix} Var(\hat{b}_1) & Cov(\hat{b}_1, \hat{b}_2) \\ Cov(\hat{b}_1, \hat{b}_2) & Var(\hat{b}_2) \end{bmatrix} = \frac{\sigma^2}{N\sum x^2} \begin{bmatrix} \sum X^2 & -\sum X \\ -\sum X & N \end{bmatrix}.$$
Thus,
$$Var(\hat{b}_1) = \frac{\sigma^2 \sum X^2}{N\sum x^2}.$$
36 $$Var(\hat{b}_1) = \frac{\sigma^2 \sum X^2}{N\sum x^2}.$$
Using $\sum x^2 = \sum X^2 - N\bar{X}^2 \;\Rightarrow\; \sum X^2 = \sum x^2 + N\bar{X}^2$, we get
$$Var(\hat{b}_1) = \frac{\sigma^2(\sum x^2 + N\bar{X}^2)}{N\sum x^2} = \sigma^2\left[\frac{1}{N} + \frac{\bar{X}^2}{\sum x^2}\right].$$
37 DECOMPOSITION OF RESIDUAL SUM OF SQUARES (RSS)

Recall that
$$Y = X\hat{b} + \hat{e} = \hat{Y} + \hat{e}.$$
In other words, we decompose the $Y$ vector into: the part explained by the regression, $\hat{Y} = X\hat{b}$ (the predicted values of $Y$), and the unexplained part $\hat{e}$ (the residuals).
38 Thus
$$\sum Y^2 = Y'Y = (\hat{Y} + \hat{e})'(\hat{Y} + \hat{e}) = \hat{Y}'\hat{Y} + \hat{e}'\hat{e} = \sum \hat{Y}_i^2 + \sum_{i=1}^{N} \hat{e}_i^2,$$
since $\hat{Y}'\hat{e} = \hat{e}'\hat{Y} = 0$ (from the first order conditions). Note that
$$\hat{Y}'\hat{Y} = (X\hat{b})'X\hat{b} = \hat{b}'X'X\hat{b}.$$
39 $$\sum Y^2 = \hat{Y}'\hat{Y} + \hat{e}'\hat{e} = \hat{b}'X'X\hat{b} + \hat{e}'\hat{e}.$$
Recall that $\sum y^2 = \sum Y^2 - N\bar{Y}^2 = N\,\widehat{Var}(Y)$. Thus, subtracting $N\bar{Y}^2$ from both sides,
$$\underbrace{\sum y^2}_{TSS} = \underbrace{\hat{b}'X'X\hat{b} - N\bar{Y}^2}_{ESS} + \underbrace{\hat{e}'\hat{e}}_{RSS}.$$
TSS: total sum of squares; ESS: explained sum of squares; RSS: residual sum of squares.
More informationLecture 9 SLR in Matrix Form
Lecture 9 SLR in Matrix Form STAT 51 Spring 011 Background Reading KNNL: Chapter 5 9-1 Topic Overview Matrix Equations for SLR Don t focus so much on the matrix arithmetic as on the form of the equations.
More informationInstrumental Variables and Two-Stage Least Squares
Instrumental Variables and Two-Stage Least Squares Generalised Least Squares Professor Menelaos Karanasos December 2011 Generalised Least Squares: Assume that the postulated model is y = Xb + e, (1) where
More information10. Time series regression and forecasting
10. Time series regression and forecasting Key feature of this section: Analysis of data on a single entity observed at multiple points in time (time series data) Typical research questions: What is the
More informationMBF1923 Econometrics Prepared by Dr Khairul Anuar
MBF1923 Econometrics Prepared by Dr Khairul Anuar L4 Ordinary Least Squares www.notes638.wordpress.com Ordinary Least Squares The bread and butter of regression analysis is the estimation of the coefficient
More informationLecture Notes on Measurement Error
Steve Pischke Spring 2000 Lecture Notes on Measurement Error These notes summarize a variety of simple results on measurement error which I nd useful. They also provide some references where more complete
More information36-309/749 Experimental Design for Behavioral and Social Sciences. Dec 1, 2015 Lecture 11: Mixed Models (HLMs)
36-309/749 Experimental Design for Behavioral and Social Sciences Dec 1, 2015 Lecture 11: Mixed Models (HLMs) Independent Errors Assumption An error is the deviation of an individual observed outcome (DV)
More informationRegression Models - Introduction
Regression Models - Introduction In regression models there are two types of variables that are studied: A dependent variable, Y, also called response variable. It is modeled as random. An independent
More informationSTAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method.
STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method. Rebecca Barter May 5, 2015 Linear Regression Review Linear Regression Review
More informationEconometrics Midterm Examination Answers
Econometrics Midterm Examination Answers March 4, 204. Question (35 points) Answer the following short questions. (i) De ne what is an unbiased estimator. Show that X is an unbiased estimator for E(X i
More informationReview of Econometrics
Review of Econometrics Zheng Tian June 5th, 2017 1 The Essence of the OLS Estimation Multiple regression model involves the models as follows Y i = β 0 + β 1 X 1i + β 2 X 2i + + β k X ki + u i, i = 1,...,
More informationMA 575 Linear Models: Cedric E. Ginestet, Boston University Midterm Review Week 7
MA 575 Linear Models: Cedric E. Ginestet, Boston University Midterm Review Week 7 1 Random Vectors Let a 0 and y be n 1 vectors, and let A be an n n matrix. Here, a 0 and A are non-random, whereas y is
More informationEconometrics Homework 1
Econometrics Homework Due Date: March, 24. by This problem set includes questions for Lecture -4 covered before midterm exam. Question Let z be a random column vector of size 3 : z = @ (a) Write out z
More informationCorrelation and Regression Theory 1) Multivariate Statistics
Correlation and Regression Theory 1) Multivariate Statistics What is a multivariate data set? How to statistically analyze this data set? Is there any kind of relationship between different variables in
More information
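The bivariate LS formulas derived earlier in this review — the slope as the ratio of cross-products to sums of squares in deviations from the means, and the intercept as Ȳ − β̂₂X̄ — can be sketched in a few lines of plain Python. This is a minimal illustration, not part of the original lecture; the data values are assumed for the example (they satisfy Y = 1 + 2X exactly, so the fit recovers those coefficients).

```python
# Bivariate least squares via the deviation-from-means formulas:
#   slope     = sum(x_i * y_i) / sum(x_i^2),  with x_i, y_i in deviations
#   intercept = mean(Y) - slope * mean(X)
# Illustrative data (assumed, not from the text): Y = 1 + 2X exactly.
X = [1.0, 2.0, 3.0, 4.0]
Y = [3.0, 5.0, 7.0, 9.0]

mean_x = sum(X) / len(X)
mean_y = sum(Y) / len(Y)

# Deviations from the sample means
x = [xi - mean_x for xi in X]
y = [yi - mean_y for yi in Y]

slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
intercept = mean_y - slope * mean_x

print(slope, intercept)  # exact fit: slope 2.0, intercept 1.0
```

For k regressors the same estimator is computed in one step as (X′X)⁻¹X′Y, which in practice is obtained with a numerically stable solver rather than an explicit inverse.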