Gauss Markov & Predictive Distributions

Merlise Clyde, STA721 Linear Models, Duke University. September 14, 2017

Outline

Topics: Gauss-Markov Theorem; Estimability and Prediction.
Readings: Christensen Chapter 2, Chapter 6.3 (Appendix A and Appendix B as needed).

Gauss-Markov Theorem

Theorem. Under the assumptions

$E[Y] = \mu = X\beta$
$Cov(Y) = \sigma^2 I_n$,

every estimable function $\psi = \lambda^T \beta$ has a unique unbiased linear estimator $\hat{\psi}$ which has minimum variance in the class of all unbiased linear estimators: $\hat{\psi} = \lambda^T \hat{\beta}$, where $\hat{\beta}$ is any set of ordinary least squares estimators.

Unique Unbiased Estimator

Lemma. If $\psi = \lambda^T \beta$ is estimable, there exists a unique linear unbiased estimator $a_*^T Y$ of $\psi$ with $a_* \in C(X)$. If $a^T Y$ is any unbiased linear estimator of $\psi$, then $a_*$ is the projection of $a$ onto $C(X)$, i.e. $a_* = P_X a$.

Proof. Since $\psi$ is estimable, there exists an $a \in \mathbb{R}^n$ for which $E[a^T Y] = \lambda^T \beta = \psi$, with $\lambda^T = a^T X$. Let $a = a_* + u$ where $a_* \in C(X)$ and $u \in C(X)^\perp$. Then

$\psi = E[a^T Y] = E[a_*^T Y] + E[u^T Y] = E[a_*^T Y] + 0,$

since $E[u^T Y] = u^T X \beta = 0$ when $u \perp C(X)$ (i.e. $u \in C(X)^\perp$). Thus $a_*^T Y$ is also an unbiased linear estimator of $\psi$ with $a_* \in C(X)$.

Uniqueness

Proof. Suppose that there is another $v \in C(X)$ such that $E[v^T Y] = \psi$. Then for all $\beta$

$0 = E[a_*^T Y] - E[v^T Y] = (a_* - v)^T X \beta.$

Since this holds for all $\beta$, $(a_* - v)^T X = 0$, which implies $(a_* - v) \in C(X)^\perp$. But by assumption $(a_* - v) \in C(X)$ ($C(X)$ is a vector space), and the only vector in BOTH is $0$, so $a_* = v$. Therefore $a_*^T Y$ is the unique linear unbiased estimator of $\psi$ with $a_* \in C(X)$.
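
As a concrete check of the lemma, here is a minimal R sketch; the rank-deficient design and all names are hypothetical illustrations, not from the slides. Two different unbiased coefficient vectors for the same estimable $\psi$ project onto the same $a_* = P_X a$.

library(MASS)                              # ginv(): Moore-Penrose generalized inverse
set.seed(42)
X  = cbind(1, 1:5, 1 + (1:5))              # 5 x 3 design of rank 2 (col 3 = col 1 + col 2)
P  = X %*% ginv(crossprod(X)) %*% t(X)     # P_X, orthogonal projection onto C(X)
a1 = c(1, 0, 0, 0, 0)                      # E[a1' Y] = x_1' beta, an estimable psi
u  = c((diag(5) - P) %*% rnorm(5))         # any vector in C(X)-perp
a2 = a1 + u                                # a second unbiased vector: a2' X = a1' X
all.equal(c(t(a1) %*% X), c(t(a2) %*% X))  # TRUE: both give the same lambda' = a' X
all.equal(c(P %*% a1), c(P %*% a2))        # TRUE: both project to the same a*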

Proof of Minimum Variance (G-M)

Let $a_*^T Y$ be the unique unbiased linear estimator of $\psi$ with $a_* \in C(X)$, and let $a^T Y$ be any unbiased linear estimator of $\psi$; $a = a_* + u$ with $a_* \in C(X)$ and $u \in C(X)^\perp$. Then

$Var(a^T Y) = a^T Cov(Y)\, a = \sigma^2 \|a\|^2$
$\qquad = \sigma^2 (\|a_*\|^2 + \|u\|^2 + 2 a_*^T u) = \sigma^2 (\|a_*\|^2 + \|u\|^2)$
$\qquad = Var(a_*^T Y) + \sigma^2 \|u\|^2 \geq Var(a_*^T Y),$

where the cross term $2 a_*^T u$ vanishes because $a_* \perp u$, with equality if and only if $a = a_*$. Hence $a_*^T Y$ is the unique linear unbiased estimator of $\psi$ with minimum variance.

BLUE = Best Linear Unbiased Estimator
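
The variance decomposition is easy to see numerically. A small simulation sketch (toy design, coefficients, and sample size all assumed for illustration): the competing unbiased estimator's variance exceeds the BLUE's by about $\sigma^2 \|u\|^2$.

set.seed(123)
n = 20
X = cbind(1, rnorm(n))                     # toy full-rank design
beta = c(2, 3); sigma = 1
P = X %*% solve(crossprod(X)) %*% t(X)     # projection onto C(X)
a.star = c(P %*% c(1, rep(0, n - 1)))      # BLUE coefficients for psi = x_1' beta
u = c((diag(n) - P) %*% rnorm(n))          # perturbation in C(X)-perp
a = a.star + u                             # another unbiased linear estimator of psi
est = replicate(1e4, {
  y = X %*% beta + sigma * rnorm(n)
  c(blue = sum(a.star * y), other = sum(a * y))
})
apply(est, 1, var)                         # empirical variances: other > blue
sigma^2 * sum(u^2)                         # theoretical gap between the two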

Continued

Proof. Show that $\hat{\psi} = a_*^T Y = \lambda^T \hat{\beta}$. Since $a_* \in C(X)$ we have $a_* = P_X a_*$, and for $\lambda^T = a_*^T X$ (equivalently $\lambda = X^T a_*$),

$a_*^T Y = a_*^T P_X^T Y = a_*^T P_X Y = a_*^T X \hat{\beta} = \lambda^T \hat{\beta}.$
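
A quick numerical confirmation of this identity, reusing the hypothetical rank-deficient design from the sketch above; $\hat{\beta}$ is computed from a generalized inverse, so it is just one of many OLS solutions.

library(MASS)
set.seed(7)
X = cbind(1, 1:5, 1 + (1:5))                       # rank-deficient design again
P = X %*% ginv(crossprod(X)) %*% t(X)              # P_X
a = c(1, 0, 0, 0, 0)
a.star = c(P %*% a)                                # a* = P_X a
lambda = c(t(X) %*% a)                             # lambda = X' a
y = X %*% c(1, 1, 1) + rnorm(5)                    # hypothetical response
beta.hat = ginv(crossprod(X)) %*% crossprod(X, y)  # one OLS solution
all.equal(sum(a.star * y), sum(lambda * beta.hat)) # TRUE: psi-hat = lambda' beta-hat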

MVUE

The Gauss-Markov Theorem says that OLS has minimum variance in the class of all linear unbiased estimators, and it requires just the first and second moments. Under the additional assumption of normality, the OLS estimates = MLEs have minimum variance out of ALL unbiased estimators (MVUE), not just linear estimators (requires Completeness and the Rao-Blackwell Theorem; next semester).

Prediction

For predicting at a new $x_*$, is there always a unique unbiased estimator of $E[Y \mid x_*]$? If one does exist, how do we recognize that, given $\lambda$?

Existence

$x_*^T \beta$ has a unique unbiased estimator if $x_* \equiv \lambda = X^T a$ for some $a$. Clearly if $x_* = x_i$ (the $i$th row of the observed data) then it is estimable, with $a$ equal to the vector with a 1 in the $i$th position, even if $X$ is not full rank! What about out-of-sample prediction?

Example

x1 = -4:4
x2 = c(-2, 1, -1, 2, 0, 2, -1, 1, -2)
x3 = 3*x1 - 2*x2
x4 = x2 - x1 + 4
Y = 1 + x1 + x2 + x3 + x4 + c(-.5, .5, .5, -.5, 0, .5, -.5, -.5, .5)
dev.set = data.frame(Y, x1, x2, x3, x4)
lm1234 = lm(Y ~ x1 + x2 + x3 + x4, data = dev.set)
round(coefficients(lm1234), 4)
## (Intercept)          x1          x2          x3          x4
##           5           3           0          NA          NA
lm3412 = lm(Y ~ x3 + x4 + x1 + x2, data = dev.set)
round(coefficients(lm3412), 4)
## (Intercept)          x3          x4          x1          x2
##         -19           3           6          NA          NA

In Sample Predictions

cbind(dev.set, predict(lm1234), predict(lm3412))
##      Y x1 x2  x3 x4 predict(lm1234) predict(lm3412)
## 1 -7.5 -4 -2  -8  6              -7              -7
## 2 -3.5 -3  1 -11  8              -4              -4
## 3 -0.5 -2 -1  -4  5              -1              -1
## 4  1.5 -1  2  -7  7               2               2
## 5  5.0  0  0   0  4               5               5
## 6  8.5  1  2  -1  5               8               8
## 7 10.5  2 -1   8  1              11              11
## 8 13.5  3  1   7  2              14              14
## 9 17.5  4 -2  16 -2              17              17

Both models agree for estimating the mean at the observed X points!
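
A direct way to confirm this agreement with the fitted objects above:

all.equal(predict(lm1234), predict(lm3412))   # TRUE at the observed design points

Both fits project $Y$ onto the same column space $C(X)$, so the fitted means must coincide even though the coefficient estimates differ.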

Out of Sample

out = data.frame(test.set,
                 Y1234 = predict(lm1234, new = test.set),
                 Y3412 = predict(lm3412, new = test.set))
out
##   x1 x2 x3 x4 Y1234 Y3412
##   [six rows of test points and their predictions]

Agreement for cases 1, 3, and 4 only! Can we determine that without finding the predictions and comparing?

Determining Estimable λ

Estimable means that $\lambda^T = a^T X$ for some $a$ (WLOG $a \in C(X)$). Transposing, $\lambda = X^T a$, so $\lambda \in C(X^T)$ (i.e. $\lambda \in R(X)$, the row space of $X$), and hence $\lambda \perp C(X^T)^\perp$.

$C(X^T)^\perp$ is the null space of $X$: $v \in C(X^T)^\perp \iff Xv = 0 \iff v \in N(X)$. So $\lambda \perp N(X)$.

If $P$ is a projection onto $C(X^T)$ then $I - P$ is a projection onto $N(X)$, and therefore $(I - P)\lambda = 0$ if $\lambda$ is estimable.

Take $P_{X^T} = (X^T X)(X^T X)^-$ as a projection onto $C(X^T)$ and show that $(I - P_{X^T})\lambda = 0_p$.
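
Here is a sketch of that recipe in R, reusing the design from the example and a Moore-Penrose generalized inverse from MASS; the helper name and the two trial $\lambda$ vectors are hypothetical, one satisfying the column redundancies $x_3 = 3x_1 - 2x_2$ and $x_4 = x_2 - x_1 + 4$ and one not.

library(MASS)
x1 = -4:4; x2 = c(-2, 1, -1, 2, 0, 2, -1, 1, -2)
X  = cbind(1, x1, x2, 3*x1 - 2*x2, x2 - x1 + 4)   # model matrix from the example
PXt = crossprod(X) %*% ginv(crossprod(X))         # (X'X)(X'X)^-, projection onto C(X')
estimable = function(lambda, tol = 1e-8)
  max(abs((diag(ncol(X)) - PXt) %*% lambda)) < tol
estimable(c(1, 1, 1, 1, 4))   # TRUE: x3 = 3(1) - 2(1) and x4 = 1 - 1 + 4
estimable(c(1, 1, 1, 0, 0))   # FALSE: not in the row space of X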

Example

library("estimability")
cbind(epredict(lm1234, test.set), epredict(lm3412, test.set))
##   [,1] [,2]
## 1  ...  ...
## 2   NA   NA
## 3  ...  ...
## 4  ...  ...
## 5   NA   NA
## 6   NA   NA

Rows 2, 5, and 6 are not estimable! No linear unbiased estimator exists for those cases.

Summary

When BLUEs exist, under normality they are MVUE (ditto for prediction: BLUP). BLUE/BLUP do not always exist for estimation/prediction if $X$ is not full rank; this may occur with redundancies for modest $p < n$, and of course for $p > n$. Eliminate redundancies by removing variables (variable selection), or consider alternative estimators (Bayes and related).

Other Estimators

What about some estimator $g(Y)$ that is not unbiased? The mean squared error of an estimator $g(Y)$ of $\lambda^T \beta$ is

$E[g(Y) - \lambda^T \beta]^2 = Var(g(Y)) + Bias^2(g(Y)), \quad \text{where} \quad Bias = E[g(Y)] - \lambda^T \beta.$

Bias vs variance tradeoff: we can have smaller MSE if we allow some bias!
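
For completeness, the decomposition follows in one line by adding and subtracting $E[g(Y)]$:

$E[g(Y) - \lambda^T \beta]^2 = E\big[(g(Y) - E[g(Y)]) + (E[g(Y)] - \lambda^T \beta)\big]^2 = Var(g(Y)) + Bias^2(g(Y)),$

since the cross term $2\,(E[g(Y)] - \lambda^T \beta)\, E\big[g(Y) - E[g(Y)]\big]$ is zero.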

Bayes: Next Class

Bayes Theorem & conjugate Normal-Gamma prior/posterior distributions. Read Chapter 2 in Christensen or Wakefield 5.7. Review the multivariate Normal and Gamma distributions.
