Ma 3/103: Lecture 24 Linear Regression I: Estimation
1 Ma 3/103: Lecture 24 Linear Regression I: Estimation. March 3, 2017. KC Border. (32 slides)
2 Regression analysis
Estimate and test E(Y | X) = f(X).
f is the regression function; the components of X = (X_1, ..., X_K) are regressors.
3 The standard linear model
Y = Xβ + ε
or
y_t = x_{t,1} β_1 + ... + x_{t,K} β_K + ε_t    (t = 1, ..., N)
4 The linear model is more general than you might think
Kepler's 3rd Law: the square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit:
P^2 = c A^3, or 2 ln P = ln c + 3 ln A.
Hubble's Law: red shift = c × distance.
Newton's Law of Gravity: F = G M_1 M_2 / d^2, or ln F = ln G + ln M_1 + ln M_2 − 2 ln d.
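The log transform above can be checked numerically. A minimal sketch (with hypothetical planetary data, in units where c = 1, so these numbers are illustrative rather than from the lecture): regressing ln P on ln A should recover a slope of 3/2, since 2 ln P = ln c + 3 ln A.

```python
import numpy as np

# Hypothetical semi-major axes A (in AU); the periods P obey Kepler's
# third law P^2 = c A^3 exactly, with c = 1 in these units.
A = np.array([0.387, 0.723, 1.0, 1.524, 5.203, 9.537])
P = A ** 1.5

# 2 ln P = ln c + 3 ln A, so a regression of ln P on ln A has
# slope 3/2 and intercept (ln c)/2 = 0.
slope, intercept = np.polyfit(np.log(A), np.log(P), 1)
print(slope, intercept)
```

Because the fake data satisfy the law exactly, the fit is exact up to rounding; with real observations the errors ε_t would appear in the log equation.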
5 Polynomials: y = b_0 + b_1 x + b_2 x^2 + ... + b_K x^K.
Geometric means: y = b_0 x_1^{b_1} x_2^{b_2} ... x_K^{b_K}, so ln y = ln b_0 + b_1 ln x_1 + ... + b_K ln x_K.
Dummy variables, or indicators: e.g.,
X_1 = 1 if Honda, 0 otherwise;
X_2 = 1 if Kawasaki, 0 otherwise;
...
X_l = 1 if Ducati, 0 otherwise.
6 Variates
The variates X_k may be fixed constants chosen by an experimenter, or they may themselves be random variables. They are called regressors. A constant variate is almost always included.
7 Data
N observations of the values x_1, ..., x_K and y:
y_t = x_{t,1} β_1 + ... + x_{t,K} β_K + ε_t    (t = 1, ..., N),
where the ε_t's are unobserved errors. In matrix form: y = Xβ + ε.
8 y = (y_1, ..., y_N)' is an N × 1 column vector,
X = [x_{t,k}] is an N × K matrix,
β = (β_1, ..., β_K)' is a K × 1 column vector, and
ε = (ε_1, ..., ε_N)' is an N × 1 column vector.
9 The estimation problem
The problem is to estimate (β_1, ..., β_K). Statistical assumptions of the standard model:
E(ε | X) = 0,
Var(ε | X) = E(εε' | X) = σ^2 I_{N×N}.
This last assumption is known as homoskedasticity.
10 The Least Squares approach
11 Sum of squared residuals
The vector of residuals, as a function of b, is y − Xb. The sum of squared residuals (SSR) is (y − Xb)'(y − Xb). Expanding yields
SSR(b) = y'y − 2y'Xb + b'X'Xb,
which is a convex quadratic function of the components of b.
12 Minimizing the sum of squared residuals
By convexity, the minimum occurs wherever the gradient equals zero. The gradient of this function is
∇SSR(b) = −2X'y + 2X'Xb.
Thus the minimizer β̂_OLS satisfies the first-order condition ∇SSR(β̂_OLS) = 0:
X'y = X'X β̂_OLS.
This matrix equation is known as the normal equation for β̂_OLS.
13 Least Squares Estimator
On the hypothesis that X'X (a K × K matrix) is nonsingular, we then have that
β̂_OLS = (X'X)^{-1} X'y
minimizes the sum of squared residuals. This β̂_OLS is called the ordinary least squares (OLS) estimator of β.
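A minimal numerical sketch of the estimator (the data are simulated and the variable names are ours, not the lecture's). In practice one solves the normal equation X'X b = X'y directly rather than forming the inverse; the result agrees with numpy's least-squares routine, which minimizes the same SSR.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 3
# Design matrix with a constant regressor in the first column.
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + 0.1 * rng.normal(size=N)

# Solve the normal equation X'X b = X'y (numerically preferable
# to computing (X'X)^{-1} explicitly).
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# lstsq minimizes the same sum of squared residuals.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
print(np.allclose(beta_hat, beta_lstsq))
```

With a small error variance the estimate lands close to the true β, as the unbiasedness and variance results later in the lecture predict.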
14 The singular case
What if X'X is singular? Then
a_1 X_1 + ... + a_K X_K = 0,
where not all a_k are zero. Then for any value of c,
y = β_1 X_1 + ... + β_K X_K + ε + c (a_1 X_1 + ... + a_K X_K)    [the term in parentheses is 0]
  = (β_1 + c a_1) X_1 + ... + (β_K + c a_K) X_K + ε.
Whenever a_k is nonzero, the coefficient on X_k can be whatever we want. That is, the data cannot tell us what the coefficient β_k is, even if every error term is zero.
15 Properties
β̂_OLS = (X'X)^{-1} X'y = (X'X)^{-1} X'(Xβ + ε) = β + (X'X)^{-1} X'ε.
This is a random vector.
Set e = y − X β̂_OLS. The vector e of residuals is orthogonal to each column of X, that is, to the vector of sample values of each regressor X_k: X'e = 0, since
X'e = X'(y − X β̂_OLS) = X'y − X'X β̂_OLS = X'y − X'X(X'X)^{-1} X'y = X'y − X'y = 0.
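The orthogonality X'e = 0 can be verified on any simulated dataset (a sketch with made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 50, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta_hat          # residual vector

# X'e = 0: the residuals are orthogonal to every column of X
# (zero up to floating-point rounding).
print(X.T @ e)
```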
16 If the regressors include a constant term, then the fitted plane passes through the sample means. That is,
ȳ = x̄_1 β̂_1 + ... + x̄_K β̂_K.
Proof: y = X β̂_OLS + e, so
1'y = 1'X β̂_OLS + 1'e,
where 1 is an N-vector of ones. Since 1 is one of the regressors, 1'e = 0. Dividing by N gives ȳ = x̄_1 β̂_1 + ... + x̄_K β̂_K.
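A quick numerical check of this property on simulated data (our numbers, not the lecture's): the sample mean of y equals the fitted value evaluated at the regressor sample means.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 60
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
y = X @ np.array([3.0, 1.0, -1.0]) + rng.normal(size=N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# ybar equals the fitted plane evaluated at the sample means
# (the first column of X is the constant regressor).
ybar = y.mean()
fitted_at_means = X.mean(axis=0) @ beta_hat
print(ybar, fitted_at_means)
```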
17 The Geometry of LSE
[Figure: y is projected orthogonally onto the plane spanned by the regressors x_1 and x_2; the fitted value is ŷ = β̂_1 x_1 + β̂_2 x_2, and the residual e = y − ŷ is orthogonal to that plane.]
18 OLS and MLE
When the error vector ε has a multivariate normal N(0, σ^2 I) distribution, the OLS estimator of β is also the maximum likelihood estimator.
19 MLE of β
The density of ε = y − Xβ is the multivariate normal N(0, σ^2 I) density
(2π)^{-N/2} (det σ^2 I)^{-1/2} e^{−(1/2)(y − Xβ)'(σ^2 I)^{-1}(y − Xβ)} = (2π)^{-N/2} (σ^2)^{-N/2} e^{−(y − Xβ)'(y − Xβ)/(2σ^2)}.
Taking logs, we find the log likelihood function is
−(N/2) log(2π) − (N/2) log σ^2 − (1/(2σ^2)) (y − Xβ)'(y − Xβ).
Maximizing this with respect to β amounts to minimizing (y − Xβ)'(y − Xβ), which is exactly what OLS does.
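A sketch of this equivalence on simulated data (our construction): at any fixed σ^2, the Gaussian log likelihood evaluated at β̂_OLS is at least as large as at any perturbed coefficient vector, since perturbing β̂ can only increase the SSR.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 80
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=N)

def loglik(b, sigma2=1.0):
    # Gaussian log likelihood from the slide, at a fixed sigma^2.
    r = y - X @ b
    return (-N / 2 * np.log(2 * np.pi)
            - N / 2 * np.log(sigma2)
            - r @ r / (2 * sigma2))

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Any perturbation of beta_hat lowers the likelihood.
perturbations = [np.array([0.1, 0.0]), np.array([0.0, -0.1]),
                 np.array([0.05, 0.05])]
worse = max(loglik(beta_hat + d) for d in perturbations)
print(loglik(beta_hat) > worse)
```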
20 MLE of σ^2
The first-order condition for the maximum with respect to σ^2 is
−(N/2)(1/σ^2) + (1/2) (y − Xβ)'(y − Xβ) / (σ^2)^2 = 0.
Then multiply by 2(σ^2)^2 to get
−N σ^2 + (y − Xβ)'(y − Xβ) = 0,
so
σ̂^2_MLE = e'e / N, where e = y − X β̂.
21 β̂_OLS is unbiased
β̂_OLS = (X'X)^{-1} X'y = (X'X)^{-1} X'(Xβ + ε) = β + (X'X)^{-1} X'ε,
so β̂_OLS − β = (X'X)^{-1} X'ε, and
E(β̂_OLS − β) = E[(X'X)^{-1} X'ε] = (X'X)^{-1} X' E(ε) = 0.
Thus β̂_OLS is unbiased: E β̂_OLS = β.
22 Variance-covariance matrix of β̂_OLS
(β̂_OLS − β)(β̂_OLS − β)' = (X'X)^{-1} X' εε' X (X'X)^{-1},
so
Var(β̂_OLS) = E (β̂_OLS − β)(β̂_OLS − β)' = (X'X)^{-1} X' (σ^2 I) X (X'X)^{-1} = σ^2 (X'X)^{-1}.
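The formula Var(β̂_OLS) = σ^2 (X'X)^{-1} can be checked by Monte Carlo with X held fixed across replications (a sketch; the replication count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, R = 50, 5000
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta = np.array([1.0, -1.0])
sigma = 1.0

# R replications of the model with the same fixed X;
# each column of beta_hats is one OLS estimate.
Y = X @ beta[:, None] + sigma * rng.normal(size=(N, R))
beta_hats = np.linalg.solve(X.T @ X, X.T @ Y)

emp_cov = np.cov(beta_hats)                    # empirical covariance
theo_cov = sigma**2 * np.linalg.inv(X.T @ X)   # sigma^2 (X'X)^{-1}
print(emp_cov)
print(theo_cov)
```

The two matrices agree up to Monte Carlo noise, which shrinks as R grows.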
23 Gauss–Markov Theorem
In the standard linear model, if X has rank K, then the OLS estimator β̂_OLS is the Best Linear Unbiased Estimator (BLUE) of β in the following sense. Given any other estimator b of β that is linear in y and satisfies E b = β for every possible value of β, we have
Var b = Var β̂_OLS + P,
where P is positive semidefinite. This implies that for any vector w of weights,
Var w'b ≥ Var w'β̂_OLS.
24 Proof of Gauss–Markov
Let b = Ay. Define
D = A − (X'X)^{-1} X'.
Then
b = Ay = (D + (X'X)^{-1} X') y = (D + (X'X)^{-1} X')(Xβ + ε) = DXβ + β + (D + (X'X)^{-1} X') ε,
so
b − β = DXβ + (D + (X'X)^{-1} X') ε.    (1)
In expectation, since E ε = 0,
E b − β = DXβ.
25 Proof of Gauss–Markov, continued
Now b is unbiased if and only if DXβ = 0 for all β. Therefore DX = 0, so (1) becomes
b − β = (D + (X'X)^{-1} X') ε.
26 Proof of Gauss–Markov, continued
So for an unbiased linear estimator b,
Var b = E (b − β)(b − β)'
      = (D + (X'X)^{-1} X') E(εε') (D + (X'X)^{-1} X')'
      = σ^2 (D + (X'X)^{-1} X')(D' + X(X'X)^{-1})
      = σ^2 (DD' + DX(X'X)^{-1} + (X'X)^{-1} X'D' + (X'X)^{-1})    [the two middle terms vanish since DX = 0]
      = σ^2 DD' + Var β̂_OLS.
But P = σ^2 DD' is positive semidefinite, since w'DD'w = (D'w)'(D'w) ≥ 0.    q.e.d.
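A concrete illustration of the theorem (our construction, not the lecture's): the estimator that runs OLS on only the first half of the sample is still linear in y and unbiased, and its variance exceeds that of full-sample OLS by a positive semidefinite matrix, as Gauss–Markov requires.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 40
X = np.column_stack([np.ones(N), rng.normal(size=N)])
X1 = X[: N // 2]          # design matrix of the first half of the sample

# Variance matrices up to the common factor sigma^2:
V_ols = np.linalg.inv(X.T @ X)      # full-sample OLS
V_half = np.linalg.inv(X1.T @ X1)   # half-sample OLS, also linear and unbiased

# P = V_half - V_ols should be positive semidefinite.
P = V_half - V_ols
eigs = np.linalg.eigvalsh(P)
print(eigs)
```

This matches the algebra: X'X = X1'X1 + X2'X2, so X'X dominates X1'X1 in the positive semidefinite order, and taking inverses reverses the order.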
27 Estimating σ^2
e = My = Mε, where M = I − X(X'X)^{-1} X'.
e'e = ε'M'Mε = ε'Mε.
Since ε'Mε is 1 × 1, it equals its trace; and since trace is a linear operator, the expected value of the trace of a random matrix is the trace of the expected matrix. Thus, by the magic of linear algebra,
E(e'e) = E(ε'Mε) = (N − K) σ^2.
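A Monte Carlo sketch of E(e'e) = (N − K)σ^2, and hence of the downward bias in the maximum likelihood estimator e'e/N (simulated data; the replication count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
N, K, R = 30, 3, 4000
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
sigma = 2.0

# Residual maker: e = M eps, with M symmetric and idempotent.
M = np.eye(N) - X @ np.linalg.solve(X.T @ X, X.T)

eps = sigma * rng.normal(size=(R, N))
ssr = np.sum((eps @ M) ** 2, axis=1)   # e'e for each replication

print(ssr.mean(), (N - K) * sigma**2)  # the two should be close
print(ssr.mean() / N, sigma**2)        # ML estimate: biased downward
```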
28 Estimating σ^2, continued
Define
s^2 = e'e / (N − K),    s = √(e'e / (N − K)).
Theorem. If ε ~ N(0, σ^2 I), then
β̂_OLS ~ N(β, σ^2 (X'X)^{-1})  and  (N − K) s^2 / σ^2 ~ χ^2(N − K).
Also, β̂_OLS and s^2 are independent.
29 Test statistics
If ε is jointly Normal, then for any K-vector w of weights,
w'(β̂_OLS − β) ~ N(0, σ^2 w'(X'X)^{-1} w),
so
w'(β̂_OLS − β) / (s √(w'(X'X)^{-1} w)) ~ t(N − K).    (2)
30 Standard error of β̂_{k,OLS}
Special case: w is the k-th unit coordinate vector, so
(β̂_k − β_k) / (s √((X'X)^{-1}_{kk})) ~ t(N − K).
Since σ^2 (X'X)^{-1}_{kk} = Var β̂_{k,OLS}, the quantity s √((X'X)^{-1}_{kk}) is the estimated standard deviation of β̂_{k,OLS}, and is called the standard error of β̂_{k,OLS}.
31 Confidence intervals for β_k
The 1 − α confidence interval for β_k is
( β̂_k − t_{α/2, N−K} s √((X'X)^{-1}_{kk}),  β̂_k + t_{α/2, N−K} s √((X'X)^{-1}_{kk}) ),
where t_{α/2, N−K} is the upper α/2 critical value of the t(N − K) distribution.
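A sketch of the interval on simulated data, using scipy for the t critical value (the 95% level, the data, and the variable names are ours):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N, K = 40, 2
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta = np.array([0.5, 1.5])
y = X @ beta + rng.normal(size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e = y - X @ beta_hat
s = np.sqrt(e @ e / (N - K))          # estimate of sigma

k = 1                                  # interval for the slope coefficient
se = s * np.sqrt(XtX_inv[k, k])        # standard error of beta_hat[k]
tcrit = stats.t.ppf(1 - 0.05 / 2, N - K)  # upper alpha/2 critical value
lo, hi = beta_hat[k] - tcrit * se, beta_hat[k] + tcrit * se
print(lo, hi)
```

The interval is centered at β̂_k, so the point estimate always lies strictly inside it; over repeated samples it covers the true β_k with probability 1 − α.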
32 Testing β_k
To test
H_0: β_k = β_k^0  versus  H_1: β_k ≠ β_k^0,
compute
t = (β̂_{k,OLS} − β_k^0) / (s √((X'X)^{-1}_{kk})).
We reject the null hypothesis if |t| > t_{α/2, N−K}. For the null hypothesis H_0: β_k = 0, we have
t = β̂_{k,OLS} / (s √((X'X)^{-1}_{kk})).
It is this value of t that statistical software reports as the t-value for β_k.
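A sketch of the test for H_0: β_k = 0 on simulated data (our numbers; the true slope is 2, so the test should reject decisively at the 5% level):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
N, K = 100, 2
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([1.0, 2.0]) + 0.5 * rng.normal(size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e = y - X @ beta_hat
s = np.sqrt(e @ e / (N - K))

k = 1
# t-value for H0: beta_k = 0, as reported by statistical software.
t_value = beta_hat[k] / (s * np.sqrt(XtX_inv[k, k]))
p_value = 2 * stats.t.sf(abs(t_value), N - K)   # two-sided p-value
print(t_value, p_value)
```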
More informationThis model of the conditional expectation is linear in the parameters. A more practical and relaxed attitude towards linear regression is to say that
Linear Regression For (X, Y ) a pair of random variables with values in R p R we assume that E(Y X) = β 0 + with β R p+1. p X j β j = (1, X T )β j=1 This model of the conditional expectation is linear
More informationChapter 3: Multiple Regression. August 14, 2018
Chapter 3: Multiple Regression August 14, 2018 1 The multiple linear regression model The model y = β 0 +β 1 x 1 + +β k x k +ǫ (1) is called a multiple linear regression model with k regressors. The parametersβ
More informationLecture 16 Solving GLMs via IRWLS
Lecture 16 Solving GLMs via IRWLS 09 November 2015 Taylor B. Arnold Yale Statistics STAT 312/612 Notes problem set 5 posted; due next class problem set 6, November 18th Goals for today fixed PCA example
More information14 Multiple Linear Regression
B.Sc./Cert./M.Sc. Qualif. - Statistics: Theory and Practice 14 Multiple Linear Regression 14.1 The multiple linear regression model In simple linear regression, the response variable y is expressed in
More informationSimple Linear Regression: The Model
Simple Linear Regression: The Model task: quantifying the effect of change X in X on Y, with some constant β 1 : Y = β 1 X, linear relationship between X and Y, however, relationship subject to a random
More informationEmpirical Economic Research, Part II
Based on the text book by Ramanathan: Introductory Econometrics Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna December 7, 2011 Outline Introduction
More informationTopic 7 - Matrix Approach to Simple Linear Regression. Outline. Matrix. Matrix. Review of Matrices. Regression model in matrix form
Topic 7 - Matrix Approach to Simple Linear Regression Review of Matrices Outline Regression model in matrix form - Fall 03 Calculations using matrices Topic 7 Matrix Collection of elements arranged in
More informationHeteroskedasticity and Autocorrelation
Lesson 7 Heteroskedasticity and Autocorrelation Pilar González and Susan Orbe Dpt. Applied Economics III (Econometrics and Statistics) Pilar González and Susan Orbe OCW 2014 Lesson 7. Heteroskedasticity
More informationSchool of Education, Culture and Communication Division of Applied Mathematics
School of Education, Culture and Communication Division of Applied Mathematics MASTER THESIS IN MATHEMATICS / APPLIED MATHEMATICS Estimation and Testing the Quotient of Two Models by Marko Dimitrov Masterarbete
More informationEconomics 620, Lecture 4: The K-Variable Linear Model I. y 1 = + x 1 + " 1 y 2 = + x 2 + " 2 :::::::: :::::::: y N = + x N + " N
1 Economics 620, Lecture 4: The K-Variable Linear Model I Consider the system y 1 = + x 1 + " 1 y 2 = + x 2 + " 2 :::::::: :::::::: y N = + x N + " N or in matrix form y = X + " where y is N 1, X is N
More informationLecture 11: Regression Methods I (Linear Regression)
Lecture 11: Regression Methods I (Linear Regression) Fall, 2017 1 / 40 Outline Linear Model Introduction 1 Regression: Supervised Learning with Continuous Responses 2 Linear Models and Multiple Linear
More information