The Gauss-Markov Model
Copyright © 2012 Dan Nettleton (Iowa State University), Statistics 611
Recall that for random variables $u$ and $v$,
$$\mathrm{Cov}(u, v) = E\big((u - E(u))(v - E(v))\big) = E(uv) - E(u)E(v)$$
and
$$\mathrm{Var}(u) = \mathrm{Cov}(u, u) = E\big((u - E(u))^2\big) = E(u^2) - (E(u))^2.$$
If $u$ is an $m \times 1$ random vector and $v$ is an $n \times 1$ random vector, then
$$\mathrm{Cov}(u, v) = [\mathrm{Cov}(u_i, v_j)]_{m \times n} \quad\text{and}\quad \mathrm{Var}(u) = [\mathrm{Cov}(u_i, u_j)]_{m \times m}.$$
It follows that
$$\mathrm{Cov}(u, v) = E\big((u - E(u))(v - E(v))'\big) = E(uv') - E(u)E(v')$$
and
$$\mathrm{Var}(u) = E\big((u - E(u))(u - E(u))'\big) = E(uu') - E(u)E(u)'.$$
(Note that if $Z = [z_{ij}]$, then $E(Z) \equiv [E(z_{ij})]$; i.e., the expected value of a matrix is defined to be the matrix of the expected values of the elements of the original matrix.)
From these basic definitions, it is straightforward to show the following: if $a$, $b$, $A$, and $B$ are fixed and $u$ and $v$ are random, then
$$\mathrm{Cov}(a + Au,\ b + Bv) = A\,\mathrm{Cov}(u, v)\,B'.$$
Some common special cases are
$$\mathrm{Cov}(Au, Bv) = A\,\mathrm{Cov}(u, v)\,B',$$
$$\mathrm{Cov}(Au, Bu) = A\,\mathrm{Cov}(u, u)\,B' = A\,\mathrm{Var}(u)\,B',$$
$$\mathrm{Var}(Au) = \mathrm{Cov}(Au, Au) = A\,\mathrm{Cov}(u, u)\,A' = A\,\mathrm{Var}(u)\,A'.$$
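These identities are easy to check numerically. The following is a minimal simulation sketch, not part of the original slides, that estimates $\mathrm{Cov}(Au, Bv)$ from Monte Carlo draws of dependent random vectors and compares it with $A\,\mathrm{Cov}(u, v)\,B'$; the dimensions and the matrices $A$, $B$, $L$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N = 3, 2, 200_000

# Fixed matrices (arbitrary illustration values).
A = rng.normal(size=(4, m))
B = rng.normal(size=(5, n))

# Generate dependent u (m x 1) and v (n x 1): both are linear in a common z.
z = rng.normal(size=(N, m + n))
L = rng.normal(size=(m + n, m + n))
w = z @ L.T                      # each row is one draw of (u', v')'
u, v = w[:, :m], w[:, m:]

# Population covariance of (u', v')' is L L'; extract the u-v block.
Sigma = L @ L.T
Cov_uv = Sigma[:m, m:]

# Monte Carlo estimate of Cov(Au, Bv) versus the identity A Cov(u, v) B'.
Au, Bv = u @ A.T, v @ B.T
Au_c, Bv_c = Au - Au.mean(axis=0), Bv - Bv.mean(axis=0)
mc = Au_c.T @ Bv_c / (N - 1)
err = np.abs(mc - A @ Cov_uv @ B.T).max()
print("max abs Monte Carlo error:", err)   # small relative to the matrix entries
```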
The Gauss-Markov Model:
$$y = X\beta + \varepsilon, \quad\text{where } E(\varepsilon) = 0 \text{ and } \mathrm{Var}(\varepsilon) = \sigma^2 I \text{ for some unknown } \sigma^2.$$
Mean zero, constant variance, uncorrelated errors.
A more succinct statement of the Gauss-Markov Model:
$$E(y) = X\beta, \quad \mathrm{Var}(y) = \sigma^2 I$$
or
$$E(y) \in \mathcal{C}(X), \quad \mathrm{Var}(y) = \sigma^2 I.$$
Note that the Gauss-Markov Model (GMM) is a special case of the GLM that we have been studying. Hence, all previous results for the GLM apply to the GMM.
Suppose $c'\beta$ is an estimable function. Find $\mathrm{Var}(c'\hat\beta)$, the variance of the LSE of $c'\beta$.
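The slide leaves this as an exercise; one way to carry out the computation, using the covariance rules above and $\hat\beta = (X'X)^{-}X'y$, is sketched here:
$$\mathrm{Var}(c'\hat\beta) = \mathrm{Var}\big(c'(X'X)^{-}X'y\big) = c'(X'X)^{-}X'\,\mathrm{Var}(y)\,X\big((X'X)^{-}\big)'c = \sigma^2\, c'(X'X)^{-}X'X\big((X'X)^{-}\big)'c.$$
Estimability gives $c' = a'X$ for some $a$, and $P_X = X(X'X)^{-}X'$ is symmetric, idempotent, and invariant to the choice of generalized inverse, so
$$\sigma^2\, c'(X'X)^{-}X'X\big((X'X)^{-}\big)'c = \sigma^2\, a'P_X P_X a = \sigma^2\, a'P_X a = \sigma^2\, c'(X'X)^{-}c,$$
the same value for every choice of generalized inverse.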
Example: Two-treatment ANCOVA.
$$y_{ij} = \mu + \tau_i + \gamma x_{ij} + \varepsilon_{ij}, \quad i = 1, 2;\ j = 1, \ldots, m,$$
where $E(\varepsilon_{ij}) = 0$ for all $i, j$ and
$$\mathrm{Cov}(\varepsilon_{ij}, \varepsilon_{st}) = \begin{cases} \sigma^2 & \text{if } (i, j) = (s, t) \\ 0 & \text{if } (i, j) \neq (s, t). \end{cases}$$
Here $x_{ij}$ is the known value of a covariate for treatment $i$ and observation $j$ ($i = 1, 2$; $j = 1, \ldots, m$).
Under what conditions is $\gamma$ estimable?
To find the LSE of $\gamma$, recall that
$$X = \begin{bmatrix} 1 & 1 & 0 & x_1 \\ 1 & 0 & 1 & x_2 \end{bmatrix},$$
written in blocks of $m$ rows, where $1$ and $0$ denote $m \times 1$ vectors of ones and zeros and $x_i = (x_{i1}, \ldots, x_{im})'$. Thus, with $x = (x_1', x_2')'$,
$$X'X = \begin{bmatrix} 2m & m & m & 1'x \\ m & m & 0 & 1'x_1 \\ m & 0 & m & 1'x_2 \\ 1'x & 1'x_1 & 1'x_2 & x'x \end{bmatrix}.$$
When $x \notin \mathcal{C}\!\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right)$, i.e., when the covariate is not constant within each treatment group, $\mathrm{rank}(X) = \mathrm{rank}(X'X) = 3$. Thus, a generalized inverse of $X'X$ is
$$\begin{bmatrix} 0 & 0' \\ 0 & A^{-1} \end{bmatrix}, \quad\text{where } A = \begin{bmatrix} m & 0 & 1'x_1 \\ 0 & m & 1'x_2 \\ 1'x_1 & 1'x_2 & x'x \end{bmatrix}.$$
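As a concrete check, not part of the slides, the following sketch builds the two-treatment ANCOVA design matrix for an arbitrary covariate and verifies that $\mathrm{rank}(X) = \mathrm{rank}(X'X) = 3$ when the covariate is not constant within each treatment group; the value of $m$ and the covariate values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5                                    # observations per treatment (illustrative)
x1, x2 = rng.normal(size=m), rng.normal(size=m)

ones, zeros = np.ones(m), np.zeros(m)
# Columns: intercept, treatment-1 indicator, treatment-2 indicator, covariate.
X = np.column_stack([
    np.concatenate([ones, ones]),
    np.concatenate([ones, zeros]),
    np.concatenate([zeros, ones]),
    np.concatenate([x1, x2]),
])

print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(X.T @ X))   # 3 3
```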
Because
$$\begin{bmatrix} m & 0 & 1'x_1 \\ 0 & m & 1'x_2 \\ 1'x_1 & 1'x_2 & x'x \end{bmatrix}$$
is not so easy to invert, let's consider a different strategy.
To find the LSE of $\gamma$ and its variance in an alternative way, consider
$$W = \begin{bmatrix} 1 & 0 & x_1 - \bar{x}_{1\cdot}1 \\ 0 & 1 & x_2 - \bar{x}_{2\cdot}1 \end{bmatrix},$$
where $\bar{x}_{i\cdot} = \frac{1}{m}\sum_{j=1}^m x_{ij}$. This matrix arises by dropping the first column of $X$ and applying Gram-Schmidt orthogonalization to the remaining columns.
Note that $XT = W$ for
$$T = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -\bar{x}_{1\cdot} \\ 0 & 1 & -\bar{x}_{2\cdot} \\ 0 & 0 & 1 \end{bmatrix}:
\qquad
\begin{bmatrix} 1 & 1 & 0 & x_1 \\ 1 & 0 & 1 & x_2 \end{bmatrix}
\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -\bar{x}_{1\cdot} \\ 0 & 1 & -\bar{x}_{2\cdot} \\ 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & x_1 - \bar{x}_{1\cdot}1 \\ 0 & 1 & x_2 - \bar{x}_{2\cdot}1 \end{bmatrix} = W.$$
Likewise, $WS = X$ for
$$S = \begin{bmatrix} 1 & 1 & 0 & \bar{x}_{1\cdot} \\ 1 & 0 & 1 & \bar{x}_{2\cdot} \\ 0 & 0 & 0 & 1 \end{bmatrix}:
\qquad
\begin{bmatrix} 1 & 0 & x_1 - \bar{x}_{1\cdot}1 \\ 0 & 1 & x_2 - \bar{x}_{2\cdot}1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 0 & \bar{x}_{1\cdot} \\ 1 & 0 & 1 & \bar{x}_{2\cdot} \\ 0 & 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 1 & 0 & x_1 \\ 1 & 0 & 1 & x_2 \end{bmatrix} = X.$$
Next,
$$W'W = \begin{bmatrix} 1 & 0 & x_1 - \bar{x}_{1\cdot}1 \\ 0 & 1 & x_2 - \bar{x}_{2\cdot}1 \end{bmatrix}'
\begin{bmatrix} 1 & 0 & x_1 - \bar{x}_{1\cdot}1 \\ 0 & 1 & x_2 - \bar{x}_{2\cdot}1 \end{bmatrix}
= \begin{bmatrix} m & 0 & 1'(x_1 - \bar{x}_{1\cdot}1) \\ 0 & m & 1'(x_2 - \bar{x}_{2\cdot}1) \\ (x_1 - \bar{x}_{1\cdot}1)'1 & (x_2 - \bar{x}_{2\cdot}1)'1 & \sum_{i=1}^2\sum_{j=1}^m (x_{ij} - \bar{x}_{i\cdot})^2 \end{bmatrix}.$$
Because $1'(x_i - \bar{x}_{i\cdot}1) = 0$ for $i = 1, 2$, this reduces to
$$W'W = \begin{bmatrix} m & 0 & 0 \\ 0 & m & 0 \\ 0 & 0 & \sum_{i=1}^2\sum_{j=1}^m (x_{ij} - \bar{x}_{i\cdot})^2 \end{bmatrix}
\quad\text{and}\quad
(W'W)^{-1} = \begin{bmatrix} \frac{1}{m} & 0 & 0 \\ 0 & \frac{1}{m} & 0 \\ 0 & 0 & \dfrac{1}{\sum_{i=1}^2\sum_{j=1}^m (x_{ij} - \bar{x}_{i\cdot})^2} \end{bmatrix}.$$
Also, writing $y = (y_1', y_2')'$ with $y_i = (y_{i1}, \ldots, y_{im})'$,
$$W'y = \begin{bmatrix} 1 & 0 & x_1 - \bar{x}_{1\cdot}1 \\ 0 & 1 & x_2 - \bar{x}_{2\cdot}1 \end{bmatrix}' \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}
= \begin{bmatrix} y_{1\cdot} \\ y_{2\cdot} \\ \sum_{i=1}^2\sum_{j=1}^m y_{ij}(x_{ij} - \bar{x}_{i\cdot}) \end{bmatrix},
\quad\text{where } y_{i\cdot} = \sum_{j=1}^m y_{ij}.$$
Thus,
$$\hat\alpha = (W'W)^{-1}W'y = \begin{bmatrix} \bar{y}_{1\cdot} \\ \bar{y}_{2\cdot} \\ \dfrac{\sum_{i=1}^2\sum_{j=1}^m y_{ij}(x_{ij} - \bar{x}_{i\cdot})}{\sum_{i=1}^2\sum_{j=1}^m (x_{ij} - \bar{x}_{i\cdot})^2} \end{bmatrix}.$$
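A small numerical sketch (illustrative data, not from the slides) that forms $W$, computes $\hat\alpha = (W'W)^{-1}W'y$, and, anticipating the next slides, checks that its third component agrees with the covariate coefficient from a least squares fit of the full model; the parameter values used to simulate $y$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
x1, x2 = rng.normal(size=m), rng.normal(size=m)
# Simulate from the ANCOVA model with arbitrary illustrative parameters.
mu, tau1, tau2, gamma, sigma = 1.0, 0.5, -0.5, 2.0, 0.3
y1 = mu + tau1 + gamma * x1 + sigma * rng.normal(size=m)
y2 = mu + tau2 + gamma * x2 + sigma * rng.normal(size=m)
y = np.concatenate([y1, y2])

ones, zeros = np.ones(m), np.zeros(m)
W = np.column_stack([
    np.concatenate([ones, zeros]),
    np.concatenate([zeros, ones]),
    np.concatenate([x1 - x1.mean(), x2 - x2.mean()]),
])
alpha_hat = np.linalg.solve(W.T @ W, W.T @ y)

# Compare the third component with the covariate coefficient from a least
# squares fit of the (rank-deficient) full design; c'beta_hat is invariant
# to the particular solution of the normal equations because gamma is estimable.
X = np.column_stack([np.concatenate([ones, ones]),
                     np.concatenate([ones, zeros]),
                     np.concatenate([zeros, ones]),
                     np.concatenate([x1, x2])])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.isclose(alpha_hat[2], beta_hat[3]))   # True: same LSE of gamma
```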
Recall that $E(y) = X\beta = WS\beta = W\alpha$, where $\alpha = S\beta$ (and $W = XT$, so $\mathcal{C}(W) = \mathcal{C}(X)$). If $c'\beta$ is estimable, say $c' = a'X$, then $c'\beta = a'X\beta = a'W\alpha = c'T\alpha$; i.e., $c'T\alpha$ is estimable, and its LSE is $c'T\hat\alpha = c'\hat\beta$.
Thus, with $c = (0, 0, 0, 1)'$ so that $c'\beta = \gamma$, the LSE of $\gamma$ is
$$c'T\hat\alpha = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \bar{y}_{1\cdot} \\ \bar{y}_{2\cdot} \\ \dfrac{\sum_{i=1}^2\sum_{j=1}^m y_{ij}(x_{ij} - \bar{x}_{i\cdot})}{\sum_{i=1}^2\sum_{j=1}^m (x_{ij} - \bar{x}_{i\cdot})^2} \end{bmatrix}
= \frac{\sum_{i=1}^2\sum_{j=1}^m y_{ij}(x_{ij} - \bar{x}_{i\cdot})}{\sum_{i=1}^2\sum_{j=1}^m (x_{ij} - \bar{x}_{i\cdot})^2}.$$
Note that in this case $c'\hat\beta = [0\ \ 0\ \ 1]\,\hat\alpha = \hat\alpha_3$, so
$$\mathrm{Var}(c'\hat\beta) = \mathrm{Var}\big([0\ \ 0\ \ 1]\,\hat\alpha\big) = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \sigma^2 (W'W)^{-1} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \frac{\sigma^2}{\sum_{i=1}^2\sum_{j=1}^m (x_{ij} - \bar{x}_{i\cdot})^2}.$$
Theorem 4.1 (Gauss-Markov Theorem): Suppose the GMM holds. If $c'\beta$ is estimable, then the LSE $c'\hat\beta$ is the best (minimum-variance) linear unbiased estimator (BLUE) of $c'\beta$.
Proof of Theorem 4.1: $c'\beta$ is estimable $\iff$ $c' = a'X$ for some vector $a$. The LSE of $c'\beta$ is
$$c'\hat\beta = a'X\hat\beta = a'X(X'X)^{-}X'y = a'P_X y.$$
Now suppose $u + v'y$ is any other linear unbiased estimator of $c'\beta$.
Then
$$E(u + v'y) = c'\beta \ \ \forall\,\beta \in \mathbb{R}^p \iff u + v'X\beta = c'\beta \ \ \forall\,\beta \in \mathbb{R}^p \iff u = 0 \text{ and } v'X = c'.$$
Thus,
$$\mathrm{Var}(u + v'y) = \mathrm{Var}(v'y) = \mathrm{Var}(v'y - c'\hat\beta + c'\hat\beta) = \mathrm{Var}(v'y - a'P_X y + c'\hat\beta) = \mathrm{Var}\big((v' - a'P_X)y + c'\hat\beta\big)$$
$$= \mathrm{Var}(c'\hat\beta) + \mathrm{Var}\big((v' - a'P_X)y\big) + 2\,\mathrm{Cov}\big((v' - a'P_X)y,\ c'\hat\beta\big).$$
Now
$$\mathrm{Cov}\big((v' - a'P_X)y,\ c'\hat\beta\big) = \mathrm{Cov}\big((v' - a'P_X)y,\ a'P_X y\big) = (v' - a'P_X)\,\mathrm{Var}(y)\,P_X a = \sigma^2 (v' - a'P_X)P_X a$$
$$= \sigma^2 (v' - a'P_X)X(X'X)^{-}X'a = \sigma^2 (v'X - a'P_X X)(X'X)^{-}X'a = \sigma^2 (v'X - a'X)(X'X)^{-}X'a = 0$$
because $v'X = c' = a'X$.
Thus, we have
$$\mathrm{Var}(u + v'y) = \mathrm{Var}(c'\hat\beta) + \mathrm{Var}\big((v' - a'P_X)y\big).$$
It follows that $\mathrm{Var}(c'\hat\beta) \le \mathrm{Var}(u + v'y)$, with equality iff $\mathrm{Var}\big((v' - a'P_X)y\big) = 0$;
i.e., iff
$$\sigma^2 (v' - a'P_X)(v - P_X a) = \sigma^2 (v - P_X a)'(v - P_X a) = \sigma^2 \|v - P_X a\|^2 = 0;$$
i.e., iff $v = P_X a$; i.e., iff $u + v'y$ is $a'P_X y = c'\hat\beta$, the LSE of $c'\beta$.
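As an illustration of the theorem (not part of the slides), the following simulation compares the sampling variance of the LSE of an estimable function with that of another linear unbiased estimator of the same function; the design, the function $c'\beta$, and the competing estimator are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 30, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # full-rank design
beta = np.array([1.0, 2.0, -1.0])
c = np.array([0.0, 1.0, 0.0])                                # estimable: c'beta = beta_2

# The LSE of c'beta is v_ls'y; v_alt'y is another linear unbiased estimator,
# obtained by adding a vector orthogonal to C(X), so v_alt'X = c' still holds.
v_ls = X @ np.linalg.solve(X.T @ X, c)
P = X @ np.linalg.pinv(X)
v_alt = v_ls + 0.5 * (np.eye(n) - P)[:, 0]

reps = 20_000
Y = X @ beta + sigma * rng.normal(size=(reps, n))            # each row is one simulated y
lse, alt = Y @ v_ls, Y @ v_alt
print(lse.mean(), alt.mean())    # both close to beta_2 = 2 (unbiased)
print(lse.var(), alt.var())      # the LSE has the smaller variance
```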
Result 4.1: The BLUE of an estimable $c'\beta$ is uncorrelated with all linear unbiased estimators of zero.
Suppose $c_1'\beta, \ldots, c_q'\beta$ are $q$ estimable functions and $c_1, \ldots, c_q$ are linearly independent. Let
$$\underset{q \times p}{C} = \begin{bmatrix} c_1' \\ \vdots \\ c_q' \end{bmatrix}.$$
Note that $\mathrm{rank}(C) = q$.
Then
$$C\beta = \begin{bmatrix} c_1'\beta \\ \vdots \\ c_q'\beta \end{bmatrix}$$
is a vector of estimable functions, and the vector of BLUEs is the vector of LSEs:
$$\begin{bmatrix} c_1'\hat\beta \\ \vdots \\ c_q'\hat\beta \end{bmatrix} = C\hat\beta.$$
Because $c_1'\beta, \ldots, c_q'\beta$ are estimable, there exist vectors $a_1, \ldots, a_q$ with $a_i'X = c_i'$ for $i = 1, \ldots, q$. If we let
$$A = \begin{bmatrix} a_1' \\ \vdots \\ a_q' \end{bmatrix},$$
then $AX = C$. Find $\mathrm{Var}(C\hat\beta)$.
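The slide poses this as an exercise; using $C\hat\beta = AX\hat\beta = AP_X y$ and the covariance rules from the beginning of these notes, one way the calculation can go is
$$\mathrm{Var}(C\hat\beta) = \mathrm{Var}(AP_X y) = AP_X\,\mathrm{Var}(y)\,P_X'A' = \sigma^2 AP_X P_X A' = \sigma^2 AP_X A' = \sigma^2 AX(X'X)^{-}X'A' = \sigma^2 C(X'X)^{-}C'.$$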
37 Find rank(var(cˆβ)). Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 61
Note that $\mathrm{Var}(C\hat\beta) = \sigma^2 C(X'X)^{-}C'$ is a $q \times q$ matrix of rank $q$ and is thus nonsingular.
We know that each component of $C\hat\beta$ is the BLUE of the corresponding component of $C\beta$; i.e., $c_i'\hat\beta$ is the BLUE of $c_i'\beta$ for $i = 1, \ldots, q$.
Likewise, we can show that $C\hat\beta$ is the BLUE of $C\beta$ in the sense that $\mathrm{Var}(s + Ty) - \mathrm{Var}(C\hat\beta)$ is nonnegative definite for all linear unbiased estimators $s + Ty$ of $C\beta$.
Nonnegative Definite and Positive Definite Matrices: A symmetric $n \times n$ matrix $A$ is nonnegative definite (NND) if and only if $x'Ax \ge 0$ for all $x \in \mathbb{R}^n$. A symmetric $n \times n$ matrix $A$ is positive definite (PD) if and only if $x'Ax > 0$ for all $x \in \mathbb{R}^n \setminus \{0\}$.
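For symmetric matrices these definitions can be checked numerically through the eigenvalues: NND corresponds to all eigenvalues being nonnegative and PD to all being strictly positive. A minimal helper, added here only for illustration, is:

```python
import numpy as np

def is_nnd(A, tol=1e-10):
    """True if the symmetric matrix A is nonnegative definite (all eigenvalues >= -tol)."""
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

def is_pd(A, tol=1e-10):
    """True if the symmetric matrix A is positive definite (all eigenvalues > tol)."""
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

M = np.array([[1.0, 1.0], [1.0, 1.0]])       # eigenvalues 0 and 2
print(is_nnd(M), is_pd(M))                   # True False
```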
Proof that $\mathrm{Var}(s + Ty) - \mathrm{Var}(C\hat\beta)$ is NND: $s + Ty$ is a linear unbiased estimator of $C\beta$
$$\iff E(s + Ty) = C\beta \ \ \forall\,\beta \in \mathbb{R}^p \iff s + TX\beta = C\beta \ \ \forall\,\beta \in \mathbb{R}^p \iff s = 0 \text{ and } TX = C.$$
Thus, we may write any linear unbiased estimator of $C\beta$ as $Ty$, where $TX = C$.
Now, for all $w \in \mathbb{R}^q$, consider
$$w'\big[\mathrm{Var}(Ty) - \mathrm{Var}(C\hat\beta)\big]w = w'\,\mathrm{Var}(Ty)\,w - w'\,\mathrm{Var}(C\hat\beta)\,w = \mathrm{Var}(w'Ty) - \mathrm{Var}(w'C\hat\beta) \ge 0$$
by the Gauss-Markov Theorem, because...
(i) $w'C\beta$ is an estimable function: $w'C = w'AX$, i.e., $C'w = X'A'w$, so $C'w \in \mathcal{C}(X')$.
(ii) $w'Ty$ is a linear unbiased estimator of $w'C\beta$: $E(w'Ty) = w'TX\beta = w'C\beta$ for all $\beta \in \mathbb{R}^p$.
(iii) $w'C\hat\beta$ is the LSE of $w'C\beta$ and is thus the BLUE of $w'C\beta$.
We have shown that $w'\big[\mathrm{Var}(Ty) - \mathrm{Var}(C\hat\beta)\big]w \ge 0$ for all $w \in \mathbb{R}^q$. Thus, $\mathrm{Var}(Ty) - \mathrm{Var}(C\hat\beta)$ is nonnegative definite (NND).
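A numerical illustration of this result (illustrative design and estimator, not from the slides): build a linear unbiased estimator $Ty$ of $C\beta$ other than the LSE and check that the eigenvalues of $\mathrm{Var}(Ty) - \mathrm{Var}(C\hat\beta)$ are all nonnegative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma2 = 20, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # full-rank design
C = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])             # q = 2 estimable functions

XtX_inv = np.linalg.inv(X.T @ X)
P = X @ XtX_inv @ X.T                                        # projection onto C(X)
T_lse = C @ XtX_inv @ X.T                                    # C beta_hat = T_lse y
T_alt = T_lse + rng.normal(size=(2, n)) @ (np.eye(n) - P)    # still satisfies T_alt X = C

var_lse = sigma2 * C @ XtX_inv @ C.T                         # Var(C beta_hat)
var_alt = sigma2 * T_alt @ T_alt.T                           # Var(T_alt y)
print(np.linalg.eigvalsh(var_alt - var_lse))                 # all >= 0 (up to rounding)
```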
Is it true that $\mathrm{Var}(Ty) - \mathrm{Var}(C\hat\beta)$ is positive definite whenever $Ty$ is a linear unbiased estimator of $C\beta$ that is not the BLUE of $C\beta$?