Estimable Functions and Their Least Squares Estimators. Copyright © 2012 Dan Nettleton (Iowa State University). Statistics 611.


1 Estimable Functions and Their Least Squares Estimators

2 Consider the GLM y = Xβ + ε, where y is n × 1, X is n × p, β is p × 1, ε is n × 1, and E(ε) = 0. Suppose we wish to estimate c′β for some fixed and known c ∈ ℝᵖ.

3 An estimator t(y) is an unbiased estimator of the function c′β iff E[t(y)] = c′β ∀ β ∈ ℝᵖ.

4 An estimator t(y) is a linear estimator in y iff t(y) = d + a′y for some known constants d, a₁, …, aₙ (where a = [a₁, …, aₙ]′).

5 A function c′β is linearly estimable iff there exists a linear estimator that is an unbiased estimator of c′β.

6 Henceforth, we will use estimable as a synonym for linearly estimable.

7 A function c′β is said to be nonestimable if there does not exist a linear estimator that is an unbiased estimator of c′β.

8 Result 3.1: Under the GLM, c′β is estimable iff the following equivalent conditions hold:
(i) ∃ a ∋ E(a′y) = c′β ∀ β ∈ ℝᵖ;
(ii) ∃ a ∋ c′ = a′X (equivalently, X′a = c);
(iii) c ∈ C(X′).
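Condition (iii) is easy to check numerically: c′β is estimable exactly when the system X′a = c is consistent. Below is a minimal sketch in Python/NumPy; the function name and tolerance are illustrative choices, not part of the slides.

    import numpy as np

    def is_estimable(X, c, tol=1e-8):
        """Check whether c'beta is estimable under E(y) = X beta.

        By Result 3.1, c'beta is estimable iff c is in C(X'),
        i.e., iff the system X'a = c has at least one solution a.
        """
        # Least squares solution of X'a = c; the residual is zero
        # (up to rounding) exactly when the system is consistent.
        a, *_ = np.linalg.lstsq(X.T, c, rcond=None)
        return bool(np.linalg.norm(X.T @ a - c) < tol)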

9 Show conditions (i), (ii), and (iii) are equivalent.

10 (i) ⟺ (ii) ⟺ (iii):
∃ a ∋ E(a′y) = c′β ∀ β ∈ ℝᵖ
⟺ ∃ a ∋ a′E(y) = c′β ∀ β ∈ ℝᵖ
⟺ ∃ a ∋ a′Xβ = c′β ∀ β ∈ ℝᵖ
⟺ ∃ a ∋ a′X = c′
⟺ ∃ a ∋ X′a = c
⟺ c ∈ C(X′).

11 Show that any of the equivalent conditions is equivalent to c′β estimable.

12 c′β estimable
⟺ ∃ d, a ∋ E(d + a′y) = c′β ∀ β ∈ ℝᵖ
⟺ ∃ d, a ∋ d + a′Xβ = c′β ∀ β ∈ ℝᵖ
⟺ ∃ a ∋ a′Xβ = c′β ∀ β ∈ ℝᵖ (take β = 0 to see that d must be 0)
⟺ ∃ a ∋ a′X = c′.

13 Example: Suppose that when team i competes against team j, the expected margin of victory for team i over team j is µᵢ − µⱼ, where µ₁, …, µ₅ are unknown parameters.

14 Suppose we observe the following outcomes. Team 1 beats Team 2 by 7 points

15 Determine y, X, and β.

16 y = [y₁, …, y₆]′ is the vector of observed margins of victory (y₁ = 7), β = [µ₁, µ₂, µ₃, µ₄, µ₅]′, and

X =
[  1  −1   0   0   0 ]
[  1   0  −1   0   0 ]
[  0  −1   1   0   0 ]
[  0   0   1   0  −1 ]
[  0   0   0   1  −1 ]
[ −1   0   0   1   0 ]
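For concreteness, here is that design matrix entered in Python/NumPy (a sketch that assumes the game ordering reconstructed above); the rank check confirms that X is not of full column rank:

    import numpy as np

    # Rows correspond to the six games, columns to teams 1-5; row i has
    # +1 and -1 in the columns of the two teams, so E(y_i) = mu_i - mu_j.
    X = np.array([
        [ 1, -1,  0,  0,  0],   # Team 1 vs. Team 2
        [ 1,  0, -1,  0,  0],   # Team 1 vs. Team 3
        [ 0, -1,  1,  0,  0],   # Team 3 vs. Team 2
        [ 0,  0,  1,  0, -1],   # Team 3 vs. Team 5
        [ 0,  0,  0,  1, -1],   # Team 4 vs. Team 5
        [-1,  0,  0,  1,  0],   # Team 4 vs. Team 1
    ])
    print(np.linalg.matrix_rank(X))   # 4 < 5 columns, so some c'beta are nonestimable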

17 Is µ₁ − µ₂ estimable?

18 Yes, µ₁ − µ₂ is estimable. µ₁ − µ₂ = c′β, where c′ = [1, −1, 0, 0, 0] = [1, 0, 0, 0, 0, 0]X = a′X.

19 Thus, a′y = [1, 0, 0, 0, 0, 0]y = y₁ is an unbiased estimator of c′β = µ₁ − µ₂.

20 Is µ₁ − µ₃ estimable?

21 Yes, µ₁ − µ₃ is estimable. µ₁ − µ₃ = c′β, where c′ = [1, 0, −1, 0, 0] = [0, 1, 0, 0, 0, 0]X = a′X. Thus, y₂ is an unbiased estimator of µ₁ − µ₃.

22 Is µ₁ − µ₅ estimable?

23 Yes, µ₁ − µ₅ is estimable. µ₁ − µ₅ = c′β, where c′ = [1, 0, 0, 0, −1] = [0, 1, 0, 1, 0, 0]X = a′X. Thus, y₂ + y₄ is an unbiased estimator of µ₁ − µ₅.

24 Is µ₁ estimable?

25 µ₁ = c′β, where c′ = [1, 0, 0, 0, 0]. So, does there exist a ∋ a′X = [1, 0, 0, 0, 0]?

26 a′X = [a₁, a₂, a₃, a₄, a₅, a₆]X = [a₁ + a₂ − a₆, −a₁ − a₃, −a₂ + a₃ + a₄, a₅ + a₆, −a₄ − a₅] ?= [1, 0, 0, 0, 0].

27
a₁ + a₂ − a₆ = 1 (1)
−a₁ − a₃ = 0 (2)
−a₂ + a₃ + a₄ = 0 (3)
a₅ + a₆ = 0 (4)
−a₄ − a₅ = 0 (5)
(1) and (2) imply a₂ − a₃ − a₆ = 1. (4) and (5) imply a₄ = a₆, which together with (3) implies −a₂ + a₃ + a₆ = 0, i.e., a₂ − a₃ − a₆ = 0, a contradiction. There does not exist a ∋ a′X = [1, 0, 0, 0, 0] ⟹ µ₁ is nonestimable.
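Applying the is_estimable sketch from earlier to this X reproduces the conclusions of the last several slides numerically:

    print(is_estimable(X, np.array([1, -1, 0, 0, 0])))   # True:  mu_1 - mu_2
    print(is_estimable(X, np.array([1, 0, -1, 0, 0])))   # True:  mu_1 - mu_3
    print(is_estimable(X, np.array([1, 0, 0, 0, -1])))   # True:  mu_1 - mu_5
    print(is_estimable(X, np.array([1, 0, 0, 0, 0])))    # False: mu_1 alone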

28 Result 3.1 tells us that c′β is estimable iff ∃ a ∋ c′β = a′Xβ ∀ β ∈ ℝᵖ. Recall that E(y) = Xβ. Thus, c′β is estimable iff it is a LC of the elements of E(y).

29 This leads to Method 3.1: LCs of expected values of observations are estimable. c′β is estimable iff c′β is a LC of the elements of E(y); i.e., c′β = a₁E(y₁) + ⋯ + aₙE(yₙ) for some a₁, …, aₙ.

30 Use Method 3.1 to show that µ₂ − µ₄ is estimable in our previous example.

31
E(y) = Xβ = [µ₁ − µ₂, µ₁ − µ₃, µ₃ − µ₂, µ₃ − µ₅, µ₄ − µ₅, µ₄ − µ₁]′.
µ₂ − µ₄ = −(µ₁ − µ₂) − (µ₄ − µ₁) = −E(y₁) − E(y₆).

32 Method 3.2: c′β is estimable iff c ∈ C(X′). Thus, find a basis for C(X′), say {v₁, …, vᵣ}, and determine whether c = d₁v₁ + ⋯ + dᵣvᵣ for some d₁, …, dᵣ.

33 Method 3.3: By Result A.5, we know that C(X′) and N(X) are orthogonal complements in ℝᵖ. Thus, c ∈ C(X′) iff c′d = 0 ∀ d ∈ N(X), which is equivalent to Xd = 0 ⟹ c′d = 0.
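Method 3.3 also translates directly into code: compute a basis for N(X) (here via the SVD, one standard approach) and check that c is orthogonal to every basis vector. The helper names below are illustrative, not from the slides.

    import numpy as np

    def null_space_basis(X, tol=1e-8):
        # Right singular vectors paired with (near-)zero singular values span N(X).
        _, s, Vt = np.linalg.svd(X)
        rank = int(np.sum(s > tol))
        return Vt[rank:].T                  # p x (p - rank) basis matrix

    def is_estimable_m33(X, c, tol=1e-8):
        # c'beta is estimable iff c'd = 0 for every d with Xd = 0.
        N = null_space_basis(X)
        return bool(np.all(np.abs(c @ N) < tol))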

34 Reconsider our previous example. Use Method 3.3 to show that µ₁ is nonestimable.

35 Xd = 0 ⟺ d₁ − d₂ = 0, d₁ − d₃ = 0, d₃ − d₂ = 0, d₃ − d₅ = 0, d₄ − d₅ = 0, d₄ − d₁ = 0 ⟺ d₁ = d₂ = d₃ = d₄ = d₅. For example, X1 = 0, where 1 = [1, 1, 1, 1, 1]′. Note that [1, 0, 0, 0, 0]1 = 1 ≠ 0. Thus, µ₁ is not estimable.

36 Now use Method 3.3 to establish that c′β = c₁µ₁ + c₂µ₂ + c₃µ₃ + c₄µ₄ + c₅µ₅ is estimable iff c₁ + c₂ + c₃ + c₄ + c₅ = 0.

37 {d ∈ ℝ⁵ : Xd = 0} = {d1 : d ∈ ℝ}. Thus
c′d = 0 ∀ d ∈ N(X) ⟺ c′(d1) = 0 ∀ d ∈ ℝ ⟺ d(c₁ + ⋯ + c₅) = 0 ∀ d ∈ ℝ ⟺ c₁ + c₂ + c₃ + c₄ + c₅ = 0.
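For this X the null space is spanned by 1, so the code above reduces to testing whether c₁ + ⋯ + c₅ = 0. A quick consistency check, continuing the session above (the random contrasts are purely illustrative):

    rng = np.random.default_rng(0)
    for _ in range(5):
        c = rng.normal(size=5)
        contrast = c - c.mean()                  # forces sum(contrast) = 0
        assert not is_estimable_m33(X, c)        # sum(c) != 0, so nonestimable
        assert is_estimable_m33(X, contrast)     # sum = 0, hence estimable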

38 The least squares estimator of an estimable function c′β is c′β̂, where β̂ is any solution to the NE X′Xb = X′y.

39 Result 3.2: If c′β is estimable, then c′β̂ is the same for all solutions β̂ to the NE.

40 Proof of Result 3.2: Suppose β̂₁ and β̂₂ are any two solutions to the NE. From Corollary 2.3, we know Xβ̂₁ = Xβ̂₂. Now c′β estimable ⟹ ∃ a ∋ c′ = a′X. Thus, c′β̂₁ = a′Xβ̂₁ = a′Xβ̂₂ = c′β̂₂.
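Result 3.2 can also be seen numerically: take one NE solution, perturb it by an element of N(X) to get another, and compare c′β̂. The margins below are hypothetical (the slides' full y is not reproduced here; only y₁ = 7 is given), and only the equality of the two printed values matters:

    y = np.array([7.0, 3.0, 2.0, 5.0, 1.0, -4.0])   # hypothetical margins; only y_1 = 7 is from the slides

    beta1 = np.linalg.pinv(X.T @ X) @ X.T @ y       # one solution to X'Xb = X'y
    beta2 = beta1 + 2.5 * np.ones(5)                # another solution, since X @ ones(5) = 0

    c = np.array([1, -1, 0, 0, 0])                  # estimable: mu_1 - mu_2
    print(c @ beta1, c @ beta2)                     # equal

    e1 = np.array([1, 0, 0, 0, 0])                  # nonestimable: mu_1
    print(e1 @ beta1, e1 @ beta2)                   # differ by 2.5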

41 Result 3.3: The least squares estimator of an estimable function c′β is a linear unbiased estimator of c′β.

42 Proof of Result 3.3: From Result 3.2, we know c′β̂ is the same for every solution β̂ to the NE. We know (X′X)⁻X′y is a solution to the NE. Thus, c′β̂ = c′(X′X)⁻X′y. This is a linear estimator.

43 Furthermore, c′β estimable ⟹ ∃ a ∋ c′ = a′X. Thus, E(c′β̂) = E(c′(X′X)⁻X′y) = c′(X′X)⁻X′E(y) = c′(X′X)⁻X′Xβ = a′X(X′X)⁻X′Xβ = a′Xβ = c′β.

44 Consider again our previous example. Recall that y₁ is a linear unbiased estimator of µ₁ − µ₂. Is this the least squares estimator?

45 It can be shown that the least squares estimator of µ₁ − µ₂ is
(7/11)y₁ + (3/11)y₂ + (4/11)y₃ − (1/11)y₄ + (1/11)y₅ − (1/11)y₆.
Based on the observed y, the least squares estimate is 8.0.
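These coefficients can be reproduced under the reconstructed X: if u solves X′Xu = c, then c′β̂ = u′X′Xβ̂ = u′X′y = (Xu)′y, so the weight vector is a = Xu. A sketch continuing the session above:

    c = np.array([1, -1, 0, 0, 0])
    u = np.linalg.pinv(X.T @ X) @ c      # one solution of X'X u = c (c is estimable)
    a = X @ u                            # weights of the least squares estimator a'y
    print(np.round(11 * a, 6))           # [ 7.  3.  4. -1.  1. -1.] -> coefficients k/11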

46 One of infinitely many solutions to the NE is β̂ = (X′X)⁻X′y. Which team is best?

47 Suppose y = Xβ + ε, where E(ε) = 0 and rank(X) = p for the n × p matrix X. Show that c′β is estimable ∀ c ∈ ℝᵖ.

48 Proof: rank(X) = p ⟹ X′ has p LI columns, and each column of X′ is an element of ℝᵖ. Thus, by Fact V4, those p LI columns of X′ form a basis for ℝᵖ. ∴ C(X′) = ℝᵖ. This also follows from Result A.7: C(X′) ⊆ ℝᵖ and dim(C(X′)) = dim(ℝᵖ) = p ⟹ C(X′) = ℝᵖ.

49 Now from Result 3.1, we know c′β is estimable iff c ∈ C(X′). ∵ C(X′) = ℝᵖ, c′β is estimable ∀ c ∈ ℝᵖ.

50 Alternatively, we can prove the result by noting the following: rank(X) = p ⟹ rank(X′X) = p by Corollary 2.2. Now X′X is p × p of rank p, so (X′X)⁻¹ exists. Thus, β̂ = (X′X)⁻¹X′y is the unique solution to the NE X′Xb = X′y.

51 Note that ∀ c ∈ ℝᵖ, E(c′β̂) = E(c′(X′X)⁻¹X′y) = c′(X′X)⁻¹X′E(y) = c′(X′X)⁻¹X′Xβ = c′β. Thus, c′β̂ = c′(X′X)⁻¹X′y is a linear unbiased estimator of c′β ∀ c ∈ ℝᵖ. ∴ c′β is estimable ∀ c ∈ ℝᵖ.
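By contrast with the rank-deficient teams example, in the full-rank case every c′β is estimable and β̂ is unique. A minimal sketch with a hypothetical full-column-rank design (data and names are illustrative), reusing is_estimable from earlier:

    rng = np.random.default_rng(1)
    Xf = np.column_stack([np.ones(10), rng.normal(size=(10, 2))])   # rank 3 with probability 1
    yf = rng.normal(size=10)

    beta_hat = np.linalg.solve(Xf.T @ Xf, Xf.T @ yf)   # unique solution to the NE
    c = np.array([0.0, 1.0, -1.0])
    print(is_estimable(Xf, c))                         # True: every c in R^p is estimable
    print(c @ beta_hat)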
