Financial Econometrics
Material: solutions
Class teacher(s): Zacharias Psaradakis, Marian Vavra

Example 1.1: Consider the linear regression model

y = Xβ + u,  (1)

where y is an (n × 1) vector of observations on the dependent variable, X is an (n × k) matrix of nonstochastic explanatory variables such that rk(X) = k < n, β is a (k × 1) vector of unknown parameters, and u is an (n × 1) vector of unobserved disturbances with E(u) = 0 and E(uu′) = σ²I_n.

1. Prove that the OLS estimator β̂ = (X′X)^{-1}X′y is the Best Linear Unbiased Estimator [BLUE] of β. Explain what role each of the assumptions about X and u plays in your proof.

2. Show that c′β̂ is the BLUE of c′β, where c is a (k × 1) vector of constants.

Solution 1.1: Recall that an estimator is the BLUE if it is unbiased and has minimum variance among all linear unbiased estimators (this result is the Gauss-Markov theorem). The proof of the BLUE property therefore consists of two parts: unbiasedness and minimum variance. An estimator is said to be unbiased if E(β̂) = β:

E(β̂) = E[(X′X)^{-1}X′y] = E[(X′X)^{-1}X′(Xβ + u)] = β + E[(X′X)^{-1}X′u] = β + (X′X)^{-1}X′E(u) = β.  (2)

As the derivation shows, the proof of unbiasedness rests on the assumptions that E(u) = 0 and that X is a nonstochastic matrix (this ensures that X and u are always independent). In the case of a stochastic matrix X, we would have to impose the extra restriction that the error vector u is strictly independent of X. An estimator is called a minimum variance estimator if every other linear unbiased estimator has a variance at least as large.
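The unbiasedness argument can be checked by simulation: hold X fixed (nonstochastic) across replications, draw errors with E(u) = 0, and the Monte Carlo average of β̂ settles at β. A minimal NumPy sketch; all the numbers here (n, k, β, σ, the seed) are illustrative assumptions, not from the exercise:

```python
import numpy as np

# Monte Carlo check that E(beta_hat) = beta when E(u) = 0 and X is
# non-stochastic (drawn once, then held fixed across replications).
rng = np.random.default_rng(0)
n, k = 50, 3                          # illustrative sample size and regressors
X = rng.normal(size=(n, k))           # fixed design matrix
beta = np.array([1.0, -2.0, 0.5])     # illustrative true parameters
sigma = 1.0

reps = 5000
estimates = np.empty((reps, k))
for r in range(reps):
    u = sigma * rng.normal(size=n)    # E(u) = 0, E(uu') = sigma^2 I_n
    y = X @ beta + u
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)  # OLS beta_hat

mean_hat = estimates.mean(axis=0)     # should be close to the true beta
```

With 5000 replications the simulation averages sit within a couple of standard errors of β, illustrating that the bias is zero for the fixed-X design.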
The variance of the OLS estimator β̂ is

V(β̂) = E[(β̂ − β)(β̂ − β)′]
     = E[(X′X)^{-1}X′uu′X(X′X)^{-1}]
     = (X′X)^{-1}X′E(uu′)X(X′X)^{-1}
     = σ²(X′X)^{-1}X′X(X′X)^{-1}
     = σ²(X′X)^{-1},  (3)

where we apply the fact that E(uu′) = σ²I_n and that rk(X) = k < n, which implies that (X′X)^{-1} exists. To prove the second BLUE property, the minimum variance of the OLS estimator, we define another linear estimator and check its variance. Let the new estimator take the form β̃ = Cy = CXβ + Cu. Recall, just for completeness, that the OLS estimator can be rewritten as β̂ = (X′X)^{-1}X′y = Ay, where A = (X′X)^{-1}X′. It is very important to point out that C is some other matrix with a different structure from A. We only require that CX = I; this assumption ensures that the new estimator β̃ is also unbiased. Moreover, let us define a matrix D such that C = D + A = D + (X′X)^{-1}X′. The variance of β̃ is then

V(β̃) = E[(β̃ − β)(β̃ − β)′]
     = E[(D + A)uu′(D + A)′]
     = (D + A)E(uu′)(D + A)′
     = σ²(D + A)(D + A)′
     = σ²(DD′ + AD′ + DA′ + AA′)
     = σ²(DD′ + (X′X)^{-1})
     = V(β̂) + σ²DD′,  (4)

where DD′ is a positive semidefinite matrix. So it holds that V(β̃_i) ≥ V(β̂_i) for all i = 1,...,k, with equality only when D = 0, i.e. when β̃ coincides with β̂. Therefore, the OLS estimator β̂ is said to be BLUE.

Note: The derivation uses the fact that CX = I, which can be rewritten as (D + A)X = DX + AX = I; since AX = I, this implies DX = 0, and therefore AD′ = DA′ = 0, so these cross terms disappear from equation (4).

As for the second question, let c be a (k × 1) vector of constants. Then it holds that

E(c′β̂) = c′E(β̂) = c′β,  (5)

which means that c′β̂ is also unbiased. Before checking the minimum variance of c′β̂, we have to find the variance of this modified estimator:

V(c′β̂) = E[c′(β̂ − β)(β̂ − β)′c] = c′E[(β̂ − β)(β̂ − β)′]c = c′V(β̂)c.  (6)
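The minimum-variance step can also be verified numerically: take any D with DX = 0, form the alternative weights C = D + A, and check that the excess variance V(β̃) − V(β̂) equals σ²DD′ and is positive semidefinite. A sketch with arbitrary illustrative choices of X, σ², and D (none of them from the exercise):

```python
import numpy as np

# Numerical check of the Gauss-Markov step: any other linear unbiased
# estimator beta_tilde = Cy with CX = I has variance exceeding the OLS
# variance by the positive semidefinite matrix sigma^2 DD'.
rng = np.random.default_rng(1)
n, k, sigma2 = 30, 3, 2.0            # illustrative dimensions and sigma^2
X = rng.normal(size=(n, k))

XtX_inv = np.linalg.inv(X.T @ X)
A = XtX_inv @ X.T                     # OLS weights: beta_hat = A y

# Build D with DX = 0 by projecting a random matrix off the columns of X;
# then C = D + A satisfies CX = DX + AX = 0 + I = I (unbiasedness).
M = np.eye(n) - X @ XtX_inv @ X.T     # annihilator of X, MX = 0
D = rng.normal(size=(k, n)) @ M
C = D + A

V_ols = sigma2 * XtX_inv              # sigma^2 (X'X)^{-1}
V_alt = sigma2 * C @ C.T              # sigma^2 (D + A)(D + A)'
diff = V_alt - V_ols                  # should equal sigma^2 DD', PSD
```

The cross terms AD′ and DA′ vanish exactly because DX = 0, which is why `diff` matches σ²DD′ up to floating-point error.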
The variance of the estimator c′β̃ is

V(c′β̃) = c′V(β̃)c = c′[V(β̂) + σ²DD′]c = V(c′β̂) + σ²c′DD′c.  (7)

This means that the modified OLS estimator c′β̂ is also BLUE, since DD′ is positive semidefinite, so c′DD′c ≥ 0 for all c.

Note: Make sure you know all the properties of a positive (semi)definite matrix. Moreover, you should understand what is meant by the variance-covariance matrix V(β̂): what the diagonal and off-diagonal elements are!

Example 1.2: For the regression model

y_t = β_1 x_{1t} + β_2 x_{2t} + β_3 x_{3t} + u_t, for t = 1,...,n,  (8)

a sample of n = 33 observations yields

Σ x_{1t}² = 2,   Σ x_{1t}x_{2t} = 1,   Σ x_{1t}x_{3t} = 1,   Σ x_{1t}y_t = 5,
Σ x_{2t}² = 10,  Σ x_{2t}x_{3t} = 0,   Σ x_{2t}y_t = 10,
Σ x_{3t}² = 1,   Σ x_{3t}y_t = 4,      Σ y_t² = 35,

where all summations are over t from 1 to n.

1. Compute the OLS estimates β̂ and σ̂² = V(û_t).
2. Obtain an estimate of the variance-covariance matrix of β̂. How would you obtain the estimated standard errors for each element of β̂?
3. Test the hypothesis H_0: β_2 = 0 versus H_1: β_2 > 0.
4. Test the hypothesis H_0: β_1 + β_2 = 2 versus H_1: β_1 + β_2 ≠ 2.

Solution 1.2: From lectures you certainly know that the regression model

y_t = β_1 x_{1t} + β_2 x_{2t} + β_3 x_{3t} + u_t  (9)

can be written in matrix form as

y = Xβ + u,  (10)

where y is a (33 × 1) vector of observations on the dependent variable, X is a (33 × 3) matrix of nonstochastic explanatory variables such that rk(X) = 3 < 33, β is a (3 × 1) vector of unknown parameters, and u is a (33 × 1) vector of unobserved disturbances with E(u) = 0 and E(uu′) = σ²I_n.
1. The OLS estimate of β = (β_1, β_2, β_3)′ can be calculated directly from the very well known formula β̂ = (X′X)^{-1}X′y, where

X′X = [ Σx_{1t}²       Σx_{1t}x_{2t}   Σx_{1t}x_{3t} ]   [ 2   1   1 ]
      [ Σx_{1t}x_{2t}  Σx_{2t}²        Σx_{2t}x_{3t} ] = [ 1  10   0 ].  (11)
      [ Σx_{1t}x_{3t}  Σx_{2t}x_{3t}   Σx_{3t}²      ]   [ 1   0   1 ]

We also know that |X′X| = 9. So, the inverse of the matrix X′X is

(X′X)^{-1} = (1/9) [  10  −1  −10 ]
                   [  −1   1    1 ].  (12)
                   [ −10   1   19 ]

For the OLS estimate we also need the following result:

X′y = (Σx_{1t}y_t, Σx_{2t}y_t, Σx_{3t}y_t)′ = (5, 10, 4)′.  (13)

Therefore, the OLS estimate can be calculated as

β̂ = (X′X)^{-1}X′y = (0, 1, 4)′.  (14)

For the estimation of the variance of u_t we use an unbiased estimator of σ², which is defined as σ̂² = û′û/(n − k), where n denotes the sample size and k the number of estimated parameters in the regression. In our case the fitted model is ŷ_t = x_{2t} + 4x_{3t}, so the estimated variance is

σ̂² = (1/30) Σ û_t²
   = (1/30) Σ (y_t − x_{2t} − 4x_{3t})²
   = (1/30) Σ (y_t² − 2y_t x_{2t} − 8y_t x_{3t} + x_{2t}² + 8x_{2t}x_{3t} + 16x_{3t}²)
   = (1/30)(35 − 20 − 32 + 10 + 0 + 16)
   = 9/30 = 0.3.  (15)

2. Estimated standard errors of the parameters β̂_1, β̂_2, and β̂_3 can be obtained as the square roots of the diagonal elements of the variance-covariance matrix V̂(β̂), which estimates E[(β̂ − β)(β̂ − β)′] and is given by

V̂(β̂) = σ̂²(X′X)^{-1} = (1/30) [  10  −1  −10 ]
                               [  −1   1    1 ].  (16)
                               [ −10   1   19 ]
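The computations above work entirely from cross-product sums, never from raw data, and can be sketched the same way in NumPy. The sums below (Σx_{1t}² = 2, Σx_{1t}x_{2t} = 1, Σx_{1t}x_{3t} = 1, Σx_{2t}² = 10, Σx_{2t}x_{3t} = 0, Σx_{3t}² = 1, X′y = (5, 10, 4)′, Σy_t² = 35) are those used in the solution; variable names are illustrative:

```python
import numpy as np

# Reproduce the Example 1.2 computations from the cross-product sums
# X'X, X'y, and y'y, as in the solution, rather than from raw data.
XtX = np.array([[ 2.0,  1.0, 1.0],
                [ 1.0, 10.0, 0.0],
                [ 1.0,  0.0, 1.0]])
Xty = np.array([5.0, 10.0, 4.0])
yty = 35.0
n, k = 33, 3

beta_hat = np.linalg.solve(XtX, Xty)     # (X'X)^{-1} X'y = (0, 1, 4)'
rss = yty - beta_hat @ Xty               # u'u = y'y - beta_hat' X'y = 9
sigma2_hat = rss / (n - k)               # 9 / 30 = 0.3

V_hat = sigma2_hat * np.linalg.inv(XtX)  # estimated var-cov matrix of beta_hat
std_errors = np.sqrt(np.diag(V_hat))     # SEs of beta_1, beta_2, beta_3
```

The identity û′û = y′y − β̂′X′y used for `rss` follows from the orthogonality of the OLS residuals to the columns of X.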
So, the estimated standard errors of the parameters are

Ŝ(β̂_1) = √V̂(β̂_1) = √(1/3) ≈ 0.577,
Ŝ(β̂_2) = √V̂(β̂_2) = √(1/30) ≈ 0.183,  (17)
Ŝ(β̂_3) = √V̂(β̂_3) = √(19/30) ≈ 0.796.

3. To apply an exact test, the t-test in our case, we have to impose some restriction on the distribution of the error terms. In the context of OLS, we usually assume that u ∼ N(0, σ²I_n). If we do not impose any distributional restriction, tests are only asymptotically valid, but NOT exact. But even when a distribution is specified, we can have some difficulties in finding an appropriate exact test; you will see this in upcoming weeks. So, let us assume that u ∼ N(0, σ²I_n). In our case, we test the hypothesis H_0: β_2 = 0 against the alternative hypothesis H_1: β_2 > 0. Two things still have to be specified. First, the significance level α, denoting the probability of a type I error (we usually use α ∈ {0.1, 0.05, 0.01}). Second, which test we apply and why; different tests can lead to different results! In our case, the hypothesis involves a single parameter, which leads us to the standard t-test:

t = (β̂_2 − 0)/Ŝ(β̂_2) = 1/√(1/30) = √30 ≈ 5.48.  (18)

The critical value of the t(30) distribution for α = 0.05 (one-sided) is 1.70, which is smaller than t ≈ 5.48, therefore we do REJECT the null hypothesis! It means that the estimated parameter β̂_2 is significantly greater than zero at the given significance level.

Note: Make sure you understand the difference between the large-sample and small-sample properties of β̂ for hypothesis testing purposes. Moreover, you have to understand the logic of the t-test!

4. In the second case, the null is specified as H_0: β_1 + β_2 = 2 against the alternative hypothesis H_1: β_1 + β_2 ≠ 2. So, we have a multiple-parameter hypothesis, which is usually tested by the Wald test:

W = (c′β̂ − c′β)′[c′(X′X)^{-1}c]^{-1}(c′β̂ − c′β)/σ² ∼ χ²(p),  (19)

where p = rk(c). The problem with this test is that it is based on the unknown quantity σ², of which we have only the unbiased estimate σ̂². Therefore, we have to rewrite the W-test in the form of an F-test:

F = (c′β̂ − c′β)′[c′(X′X)^{-1}c]^{-1}(c′β̂ − c′β)/(p σ̂²) ∼ F(p, n − k).  (20)
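The single-parameter t-test can be reproduced from the same cross-product sums. The 5% one-sided critical value for t(30), approximately 1.70, is taken as given from statistical tables; the sums are those used in the solution:

```python
import numpy as np

# t-test of H0: beta_2 = 0 against H1: beta_2 > 0, built from the
# cross-product sums of Example 1.2.
XtX = np.array([[2.0, 1.0, 1.0],
                [1.0, 10.0, 0.0],
                [1.0, 0.0, 1.0]])
Xty = np.array([5.0, 10.0, 4.0])
n, k, yty = 33, 3, 35.0

beta_hat = np.linalg.solve(XtX, Xty)
sigma2_hat = (yty - beta_hat @ Xty) / (n - k)
se = np.sqrt(np.diag(sigma2_hat * np.linalg.inv(XtX)))

t_stat = (beta_hat[1] - 0.0) / se[1]   # = sqrt(30), about 5.48
crit = 1.697                           # t(30), alpha = 0.05, one-sided (tables)
reject = t_stat > crit                 # True: reject H0
```

With SciPy available, `crit` could instead be computed as `scipy.stats.t.ppf(0.95, 30)`; the hard-coded table value is used here to keep the sketch dependency-free.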
In our case, with c = (1, 1, 0)′ and c′β̂ = β̂_1 + β̂_2 = 1, the F-test statistic takes the form

F = (1 − 2)[c′(X′X)^{-1}c]^{-1}(1 − 2)/(1 × 0.3) = 1/0.3 ≈ 3.33,  (21)

using c′(X′X)^{-1}c = (10 − 1 − 1 + 1)/9 = 1. The critical value of the F(1, 30) distribution for α = 0.05 is 4.17. The value of the test statistic (3.33) is smaller than the critical value (4.17), therefore we CANNOT REJECT the null hypothesis at the given significance level.

Note: Make sure you understand that a different significance level α can lead to a different conclusion! It is worth noting that the F-test can also be expressed using the sums of squared errors of the unrestricted and restricted models, instead of a vector of restrictions c as in our case.
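The F-test of the linear restriction can be sketched the same way; for a single restriction the quadratic form collapses to a scalar. The 5% critical value F(1, 30) ≈ 4.17 is taken as given from tables, and the sums are again those used in the solution:

```python
import numpy as np

# F-test of H0: beta_1 + beta_2 = 2, i.e. c'beta = 2 with c = (1, 1, 0)',
# using the cross-product sums of Example 1.2.
XtX = np.array([[2.0, 1.0, 1.0],
                [1.0, 10.0, 0.0],
                [1.0, 0.0, 1.0]])
Xty = np.array([5.0, 10.0, 4.0])
n, k, yty = 33, 3, 35.0
XtX_inv = np.linalg.inv(XtX)

beta_hat = XtX_inv @ Xty
sigma2_hat = (yty - beta_hat @ Xty) / (n - k)

c = np.array([1.0, 1.0, 0.0])
p = 1                                            # number of restrictions, rk(c)
dev = c @ beta_hat - 2.0                         # c'beta_hat - 2 = 1 - 2 = -1
F = dev**2 / (p * sigma2_hat * (c @ XtX_inv @ c))  # = 1 / 0.3, about 3.33
reject = F > 4.17                                # F(1, 30) at 5% (tables): False
```

For a single restriction, F equals the square of the corresponding t-statistic for c′β̂ = 2, which is why the scalar form above suffices.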
Example 1.3: Consider the regression model from question 1, where X = [X_1 : X_2] is partitioned into the first k_1 and the remaining k_2 = k − k_1 columns.

1. Show that

(X′X)^{-1} =
[ (X_1′M_2X_1)^{-1}                        −(X_1′X_1)^{-1}X_1′X_2(X_2′M_1X_2)^{-1} ]
[ −(X_2′X_2)^{-1}X_2′X_1(X_1′M_2X_1)^{-1}   (X_2′M_1X_2)^{-1}                      ]
=
[ (X_1′M_2X_1)^{-1}                        −(X_1′M_2X_1)^{-1}X_1′X_2(X_2′X_2)^{-1} ]
[ −(X_2′M_1X_2)^{-1}X_2′X_1(X_1′X_1)^{-1}   (X_2′M_1X_2)^{-1}                      ],  (22)

where M_i = I_n − X_i(X_i′X_i)^{-1}X_i′, for i = 1, 2.

2. Show that

β̂ = [ β̂_1 ] = [ (X_1′M_2X_1)^{-1}X_1′M_2y ]
    [ β̂_2 ]   [ (X_2′M_1X_2)^{-1}X_2′M_1y ].  (23)

3. Show that V(β̂_1) = σ²[(X_1′X_1)^{-1} + (X_1′X_1)^{-1}X_1′X_2(X_2′M_1X_2)^{-1}X_2′X_1(X_1′X_1)^{-1}].

Solution 1.3: Let us start with the partition of the matrix X′X, which is given by

X′X = [X_1 : X_2]′[X_1 : X_2] = [ X_1′X_1  X_1′X_2 ]
                                [ X_2′X_1  X_2′X_2 ].  (24)

Check the dimensions of the whole matrix X′X and of each block of this matrix.

1. The inverse of X′X is given by the identity (X′X)(X′X)^{-1} = I, provided that X′X is a nonsingular matrix. For the purpose of the solution, this can be rewritten as

[ X_1′X_1  X_1′X_2 ] [ A_11  A_12 ]   [ I  0 ]
[ X_2′X_1  X_2′X_2 ] [ A_21  A_22 ] = [ 0  I ],  (25)

which leads to a system of 4 equations in 4 unknown matrices:

(X_1′X_1)A_11 + (X_1′X_2)A_21 = I;  (26)
(X_1′X_1)A_12 + (X_1′X_2)A_22 = 0;  (27)
(X_2′X_1)A_11 + (X_2′X_2)A_21 = 0;  (28)
(X_2′X_1)A_12 + (X_2′X_2)A_22 = I.  (29)

From equation (28) it follows that

A_21 = −(X_2′X_2)^{-1}(X_2′X_1)A_11,  (30)
which we plug into equation (26); this gives a closed-form solution for A_11:

(X_1′X_1)A_11 + (X_1′X_2)A_21 = I
(X_1′X_1)A_11 − (X_1′X_2)(X_2′X_2)^{-1}(X_2′X_1)A_11 = I
{X_1′[I − X_2(X_2′X_2)^{-1}X_2′]X_1}A_11 = I
(X_1′M_2X_1)A_11 = I
A_11 = (X_1′M_2X_1)^{-1},

where M_2 = I − X_2(X_2′X_2)^{-1}X_2′. We then insert the solution for A_11 back into equation (30), which gives a closed-form solution for A_21:

A_21 = −(X_2′X_2)^{-1}(X_2′X_1)(X_1′M_2X_1)^{-1}.  (31)

Using the same procedure on equations (27) and (29), we get closed-form solutions for the remaining matrices:

A_22 = (X_2′M_1X_2)^{-1},  A_12 = −(X_1′X_1)^{-1}(X_1′X_2)(X_2′M_1X_2)^{-1},

where M_1 = I − X_1(X_1′X_1)^{-1}X_1′. Since X′X is a symmetric matrix, (X′X)^{-1} must also be symmetric: [(X′X)^{-1}]′ = [(X′X)′]^{-1} = (X′X)^{-1}. So, in the case of the partitioned matrix, A_12 = A_21′, and

(X′X)^{-1} = [ A_11  A_12 ]
             [ A_21  A_22 ]
=
[ (X_1′M_2X_1)^{-1}                        −(X_1′X_1)^{-1}X_1′X_2(X_2′M_1X_2)^{-1} ]
[ −(X_2′X_2)^{-1}X_2′X_1(X_1′M_2X_1)^{-1}   (X_2′M_1X_2)^{-1}                      ]
=
[ (X_1′M_2X_1)^{-1}                        −(X_1′M_2X_1)^{-1}X_1′X_2(X_2′X_2)^{-1} ]
[ −(X_2′M_1X_2)^{-1}X_2′X_1(X_1′X_1)^{-1}   (X_2′M_1X_2)^{-1}                      ],  (32)

where M_i = I − X_i(X_i′X_i)^{-1}X_i′, for i = 1, 2.

2. We know that β̂ = (X′X)^{-1}X′y, which can be rewritten (using the last line of equation (32)) as

β̂ = (X′X)^{-1}X′y
 = [ (X_1′M_2X_1)^{-1}                        −(X_1′M_2X_1)^{-1}X_1′X_2(X_2′X_2)^{-1} ] [ X_1′y ]
   [ −(X_2′M_1X_2)^{-1}X_2′X_1(X_1′X_1)^{-1}   (X_2′M_1X_2)^{-1}                      ] [ X_2′y ]
 = [ (X_1′M_2X_1)^{-1}X_1′y − (X_1′M_2X_1)^{-1}X_1′X_2(X_2′X_2)^{-1}X_2′y ]
   [ (X_2′M_1X_2)^{-1}X_2′y − (X_2′M_1X_2)^{-1}X_2′X_1(X_1′X_1)^{-1}X_1′y ]
 = [ (X_1′M_2X_1)^{-1}X_1′[I − X_2(X_2′X_2)^{-1}X_2′]y ]
   [ (X_2′M_1X_2)^{-1}X_2′[I − X_1(X_1′X_1)^{-1}X_1′]y ]
 = [ (X_1′M_2X_1)^{-1}X_1′M_2y ]
   [ (X_2′M_1X_2)^{-1}X_2′M_1y ].  (33)
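This partitioned-regression result (the Frisch-Waugh-Lovell theorem) is easy to confirm numerically: the coefficients on X_1 from the full regression equal (X_1′M_2X_1)^{-1}X_1′M_2y, and symmetrically for X_2. A sketch on arbitrary illustrative data (dimensions and draws are assumptions, not from the exercise):

```python
import numpy as np

# Check that partitioned OLS reproduces the full-regression coefficients:
# beta_1 = (X1' M2 X1)^{-1} X1' M2 y and beta_2 = (X2' M1 X2)^{-1} X2' M1 y.
rng = np.random.default_rng(2)
n, k1, k2 = 40, 2, 3                  # illustrative dimensions
X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
y = rng.normal(size=n)

X = np.hstack([X1, X2])
beta_full = np.linalg.solve(X.T @ X, X.T @ y)   # full OLS on [X1 : X2]

# Annihilator matrices M_i = I - X_i (X_i'X_i)^{-1} X_i'
M2 = np.eye(n) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)

beta1_fwl = np.linalg.solve(X1.T @ M2 @ X1, X1.T @ M2 @ y)
beta2_fwl = np.linalg.solve(X2.T @ M1 @ X2, X2.T @ M1 @ y)
```

Equivalently, β̂_1 can be obtained by regressing the M_2-residuals of y on the M_2-residuals of X_1, which is how the result is usually stated.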
3. From previous classes we already know that V(β̂) = σ²(X′X)^{-1}. We also know that V(β̂_1) = σ²(X_1′M_2X_1)^{-1}, which is the first block of the (X′X)^{-1} matrix in our case (see the previous part). The matrix (X_1′M_2X_1)^{-1} can be rewritten as

(X_1′M_2X_1)^{-1} = [X_1′(I − X_2(X_2′X_2)^{-1}X_2′)X_1]^{-1} = [X_1′X_1 − X_1′X_2(X_2′X_2)^{-1}X_2′X_1]^{-1}.  (34)

In the next step we have to apply the matrix inversion theorem (a proof can be found in Lütkepohl (1997): Handbook of Matrices) in order to decompose the matrix (X_1′M_2X_1)^{-1}. The theorem states that for all conformable matrices A, B, and C,

(A − BC^{-1}B′)^{-1} = A^{-1} + A^{-1}B(C − B′A^{-1}B)^{-1}B′A^{-1}.  (35)

In our case, A = X_1′X_1, B = X_1′X_2, and finally C = X_2′X_2. Plugging all these matrices into equation (35) gives

(X_1′M_2X_1)^{-1} = [X_1′X_1 − X_1′X_2(X_2′X_2)^{-1}X_2′X_1]^{-1}
 = (X_1′X_1)^{-1} + (X_1′X_1)^{-1}X_1′X_2[X_2′X_2 − X_2′X_1(X_1′X_1)^{-1}X_1′X_2]^{-1}X_2′X_1(X_1′X_1)^{-1}
 = (X_1′X_1)^{-1} + (X_1′X_1)^{-1}X_1′X_2(X_2′M_1X_2)^{-1}X_2′X_1(X_1′X_1)^{-1}.  (36)

Finally, we plug this result from equation (36) back into the formula for V(β̂_1), which gives

V(β̂_1) = σ²(X_1′M_2X_1)^{-1} = σ²[(X_1′X_1)^{-1} + (X_1′X_1)^{-1}X_1′X_2(X_2′M_1X_2)^{-1}X_2′X_1(X_1′X_1)^{-1}].  (37)

Keywords for revision: You should be absolutely clear about these terms: size and power of a test, consistent test, unbiased test, most powerful test. Moreover, you are strongly advised to revise the matrix differentiation needed for the derivation of the OLS estimator!
More information. a m1 a mn. a 1 a 2 a = a n
Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by
More informationLecture 07 Hypothesis Testing with Multivariate Regression
Lecture 07 Hypothesis Testing with Multivariate Regression 23 September 2015 Taylor B. Arnold Yale Statistics STAT 312/612 Goals for today 1. Review of assumptions and properties of linear model 2. The
More information3. Linear Regression With a Single Regressor
3. Linear Regression With a Single Regressor Econometrics: (I) Application of statistical methods in empirical research Testing economic theory with real-world data (data analysis) 56 Econometrics: (II)
More informationXβ is a linear combination of the columns of X: Copyright c 2010 Dan Nettleton (Iowa State University) Statistics / 25 X =
The Gauss-Markov Linear Model y Xβ + ɛ y is an n random vector of responses X is an n p matrix of constants with columns corresponding to explanatory variables X is sometimes referred to as the design
More informationMultiple Regression Analysis
Chapter 4 Multiple Regression Analysis The simple linear regression covered in Chapter 2 can be generalized to include more than one variable. Multiple regression analysis is an extension of the simple
More informationMultiple Regression Analysis. Part III. Multiple Regression Analysis
Part III Multiple Regression Analysis As of Sep 26, 2017 1 Multiple Regression Analysis Estimation Matrix form Goodness-of-Fit R-square Adjusted R-square Expected values of the OLS estimators Irrelevant
More information14 Multiple Linear Regression
B.Sc./Cert./M.Sc. Qualif. - Statistics: Theory and Practice 14 Multiple Linear Regression 14.1 The multiple linear regression model In simple linear regression, the response variable y is expressed in
More informationAdvanced Econometrics
Based on the textbook by Verbeek: A Guide to Modern Econometrics Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna May 16, 2013 Outline Univariate
More informationIEOR 165 Lecture 7 1 Bias-Variance Tradeoff
IEOR 165 Lecture 7 Bias-Variance Tradeoff 1 Bias-Variance Tradeoff Consider the case of parametric regression with β R, and suppose we would like to analyze the error of the estimate ˆβ in comparison to
More informationProbability and Statistics Notes
Probability and Statistics Notes Chapter Seven Jesse Crawford Department of Mathematics Tarleton State University Spring 2011 (Tarleton State University) Chapter Seven Notes Spring 2011 1 / 42 Outline
More informationPanel Data Models. James L. Powell Department of Economics University of California, Berkeley
Panel Data Models James L. Powell Department of Economics University of California, Berkeley Overview Like Zellner s seemingly unrelated regression models, the dependent and explanatory variables for panel
More informationApplied Statistics Preliminary Examination Theory of Linear Models August 2017
Applied Statistics Preliminary Examination Theory of Linear Models August 2017 Instructions: Do all 3 Problems. Neither calculators nor electronic devices of any kind are allowed. Show all your work, clearly
More informationPart 1.) We know that the probability of any specific x only given p ij = p i p j is just multinomial(n, p) where p k1 k 2
Problem.) I will break this into two parts: () Proving w (m) = p( x (m) X i = x i, X j = x j, p ij = p i p j ). In other words, the probability of a specific table in T x given the row and column counts
More informationIntermediate Econometrics
Intermediate Econometrics Heteroskedasticity Text: Wooldridge, 8 July 17, 2011 Heteroskedasticity Assumption of homoskedasticity, Var(u i x i1,..., x ik ) = E(u 2 i x i1,..., x ik ) = σ 2. That is, the
More informationEconometrics I Lecture 3: The Simple Linear Regression Model
Econometrics I Lecture 3: The Simple Linear Regression Model Mohammad Vesal Graduate School of Management and Economics Sharif University of Technology 44716 Fall 1397 1 / 32 Outline Introduction Estimating
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationThe Gauss-Markov Model. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 61
The Gauss-Markov Model Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 61 Recall that Cov(u, v) = E((u E(u))(v E(v))) = E(uv) E(u)E(v) Var(u) = Cov(u, u) = E(u E(u)) 2 = E(u 2
More informationModel Mis-specification
Model Mis-specification Carlo Favero Favero () Model Mis-specification 1 / 28 Model Mis-specification Each specification can be interpreted of the result of a reduction process, what happens if the reduction
More informationMultiple Linear Regression
Multiple Linear Regression Asymptotics Asymptotics Multiple Linear Regression: Assumptions Assumption MLR. (Linearity in parameters) Assumption MLR. (Random Sampling from the population) We have a random
More informationBasic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance:
8. PROPERTIES OF LEAST SQUARES ESTIMATES 1 Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = 0. 2. The errors are uncorrelated with common variance: These assumptions
More informationPanel Data Models. Chapter 5. Financial Econometrics. Michael Hauser WS17/18 1 / 63
1 / 63 Panel Data Models Chapter 5 Financial Econometrics Michael Hauser WS17/18 2 / 63 Content Data structures: Times series, cross sectional, panel data, pooled data Static linear panel data models:
More information