THE ANOVA APPROACH TO THE ANALYSIS OF LINEAR MIXED EFFECTS MODELS


1 We begin with a relatively simple special case. Suppose

y_{ijk} = μ + τ_i + u_{ij} + e_{ijk},  (i = 1, ..., t; j = 1, ..., n; k = 1, ..., m),

where β = (μ, τ_1, ..., τ_t)′ ∈ ℝ^{t+1} is an unknown parameter vector,
u = (u_{11}, u_{12}, ..., u_{tn})′, e = (e_{111}, e_{112}, ..., e_{tnm})′, and

[u]     ( [0]   [σ²_u I    0   ] )
[e]  ~ N( [0] , [  0    σ²_e I ] ),

where σ²_u, σ²_e ∈ ℝ⁺ are unknown variance components.

Copyright © 2012 (Iowa State University) Statistics / 47

2 This is the standard model for a CRD with t treatments, n experimental units per treatment, and m observations per experimental unit. We can write the model as y = Xβ + Zu + e, where

X = [1_{tnm×1}, I_{t×t} ⊗ 1_{nm×1}]  and  Z = I_{tn×tn} ⊗ 1_{m×1}.

3 The ANOVA Table

Source                              DF
treatments                          t − 1
exp.units(treatments)               t(n − 1)
obs.units(exp.units, treatments)    tn(m − 1)
c.total                             tnm − 1

4 The ANOVA Table

Source        DF          Sum of Squares
trt           t − 1       Σᵢ Σⱼ Σₖ (ȳ_{i··} − ȳ_{···})²
xu(trt)       t(n − 1)    Σᵢ Σⱼ Σₖ (ȳ_{ij·} − ȳ_{i··})²
ou(xu, trt)   tn(m − 1)   Σᵢ Σⱼ Σₖ (y_{ijk} − ȳ_{ij·})²
c.total       tnm − 1     Σᵢ Σⱼ Σₖ (y_{ijk} − ȳ_{···})²

(All sums run over i = 1, ..., t; j = 1, ..., n; k = 1, ..., m.)

5
Source        DF          Sum of Squares                   Mean Square
trt           t − 1       nm Σᵢ (ȳ_{i··} − ȳ_{···})²
xu(trt)       tn − t      m Σᵢ Σⱼ (ȳ_{ij·} − ȳ_{i··})²     SS/DF
ou(xu, trt)   tnm − tn    Σᵢ Σⱼ Σₖ (y_{ijk} − ȳ_{ij·})²
c.total       tnm − 1     Σᵢ Σⱼ Σₖ (y_{ijk} − ȳ_{···})²
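The table above can be computed directly from the treatment, experimental-unit, and grand means. A minimal numpy sketch; the data are simulated, and the choices of t, n, m and the variance values are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n, m = 3, 4, 5                        # treatments, exp. units per trt, obs per exp. unit
tau = np.array([0.0, 1.0, 2.0])          # treatment effects (mu = 0)
u = rng.normal(0.0, 1.0, size=(t, n))            # u_ij with sigma_u = 1
e = rng.normal(0.0, 0.5, size=(t, n, m))         # e_ijk with sigma_e = 0.5
y = tau[:, None, None] + u[:, :, None] + e       # y_ijk = mu + tau_i + u_ij + e_ijk

ybar_i = y.mean(axis=(1, 2))             # treatment means
ybar_ij = y.mean(axis=2)                 # experimental-unit means
ybar = y.mean()                          # grand mean

ss_trt = n * m * ((ybar_i - ybar) ** 2).sum()
ss_xu = m * ((ybar_ij - ybar_i[:, None]) ** 2).sum()
ss_ou = ((y - ybar_ij[:, :, None]) ** 2).sum()
ss_tot = ((y - ybar) ** 2).sum()

df = {"trt": t - 1, "xu(trt)": t * n - t, "ou(xu,trt)": t * n * m - t * n}
ms = {"trt": ss_trt / df["trt"],
      "xu(trt)": ss_xu / df["xu(trt)"],
      "ou(xu,trt)": ss_ou / df["ou(xu,trt)"]}

# the three sums of squares decompose the corrected total
assert np.isclose(ss_trt + ss_xu + ss_ou, ss_tot)
```

The final assertion checks the ANOVA decomposition: the three component sums of squares add to the corrected total, and the degrees of freedom add to tnm − 1.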

6 Expected Mean Squares

E(MS_trt) = nm/(t − 1) Σᵢ E(ȳ_{i··} − ȳ_{···})²
          = nm/(t − 1) Σᵢ E(μ + τ_i + ū_{i·} + ē_{i··} − μ − τ̄_· − ū_{··} − ē_{···})²
          = nm/(t − 1) Σᵢ E(τ_i − τ̄_· + ū_{i·} − ū_{··} + ē_{i··} − ē_{···})²
          = nm/(t − 1) Σᵢ [(τ_i − τ̄_·)² + E(ū_{i·} − ū_{··})² + E(ē_{i··} − ē_{···})²]
          = nm/(t − 1) [Σᵢ (τ_i − τ̄_·)² + E{Σᵢ (ū_{i·} − ū_{··})²} + E{Σᵢ (ē_{i··} − ē_{···})²}]

7 To simplify this expression further, note that

ū_{1·}, ..., ū_{t·} ~ iid N(0, σ²_u/n).

Thus,

E{Σᵢ (ū_{i·} − ū_{··})²} = (t − 1) σ²_u/n.

Similarly,

ē_{1··}, ..., ē_{t··} ~ iid N(0, σ²_e/(nm)),

so

E{Σᵢ (ē_{i··} − ē_{···})²} = (t − 1) σ²_e/(nm).

It follows that

E(MS_trt) = nm/(t − 1) Σᵢ (τ_i − τ̄_·)² + mσ²_u + σ²_e.

8 Similar calculations allow us to add an Expected Mean Squares (EMS) column to our ANOVA table.

Source        EMS
trt           σ²_e + mσ²_u + nm/(t − 1) Σᵢ (τ_i − τ̄_·)²
xu(trt)       σ²_e + mσ²_u
ou(xu, trt)   σ²_e

9 The entire table could also be derived using matrices

X₁ = 1,  X₂ = I_{t×t} ⊗ 1_{nm×1},  X₃ = I_{tn×tn} ⊗ 1_{m×1}.

Source        Sum of Squares    DF                        MS
trt           y′(P₂ − P₁)y      rank(X₂) − rank(X₁)
xu(trt)       y′(P₃ − P₂)y      rank(X₃) − rank(X₂)       SS/DF
ou(xu, trt)   y′(I − P₃)y       tnm − rank(X₃)
c.total       y′(I − P₁)y       tnm − 1
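This projection-matrix form of the table can be checked numerically. A sketch under the same balanced layout, using numpy's kron for the Kronecker products and pinv to supply a generalized inverse when forming each projection:

```python
import numpy as np

def proj(X):
    # orthogonal projection onto the column space of X
    return X @ np.linalg.pinv(X.T @ X) @ X.T

t, n, m = 2, 3, 2
rng = np.random.default_rng(1)
y = rng.normal(size=t * n * m)           # y_ijk stacked in lexicographic order

X1 = np.ones((t * n * m, 1))
X2 = np.kron(np.eye(t), np.ones((n * m, 1)))
X3 = np.kron(np.eye(t * n), np.ones((m, 1)))
P1, P2, P3 = proj(X1), proj(X2), proj(X3)

ss_trt = y @ (P2 - P1) @ y
ss_xu = y @ (P3 - P2) @ y
ss_ou = y @ (np.eye(t * n * m) - P3) @ y

# compare against the summation formulas in the earlier table
Y = y.reshape(t, n, m)
ybar_i, ybar_ij = Y.mean(axis=(1, 2)), Y.mean(axis=2)
assert np.isclose(ss_trt, n * m * ((ybar_i - Y.mean()) ** 2).sum())
assert np.isclose(ss_xu, m * ((ybar_ij - ybar_i[:, None]) ** 2).sum())
assert np.isclose(ss_ou, ((Y - ybar_ij[:, :, None]) ** 2).sum())
```

The column spaces are nested, C(X₁) ⊂ C(X₂) ⊂ C(X₃), which is what makes the differences of projections themselves projections.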

10 Expected Mean Squares (EMS) could be computed using

E(y′Ay) = tr(AΣ) + E(y)′ A E(y),

where

Σ = Var(y) = ZGZ′ + R = σ²_u I_{tn×tn} ⊗ 1_{m×1} 1′_{m×1} + σ²_e I_{tnm×tnm}

and

E(y) = [μ + τ_1, μ + τ_2, ..., μ + τ_t]′ ⊗ 1_{nm×1}.

11 Furthermore, it can be shown that

y′(P₂ − P₁)y / (σ²_e + mσ²_u) ~ χ²_{t−1}( nm/(σ²_e + mσ²_u) Σᵢ (τ_i − τ̄_·)² ),

y′(P₃ − P₂)y / (σ²_e + mσ²_u) ~ χ²_{tn−t},

y′(I − P₃)y / σ²_e ~ χ²_{tnm−tn},

and that these three χ² random variables are independent.

12 It follows that

F₁ = MS_trt / MS_xu(trt) ~ F_{t−1, tn−t}( nm/(σ²_e + mσ²_u) Σᵢ (τ_i − τ̄_·)² )

and

F₂ = MS_xu(trt) / MS_ou(xu,trt) ~ ((σ²_e + mσ²_u)/σ²_e) F_{tn−t, tnm−tn}.

Thus, we can use F₁ to test H₀: τ_1 = ... = τ_t and F₂ to test H₀: σ²_u = 0.
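Both tests can be carried out with scipy.stats once the mean squares are in hand. A sketch in which data are simulated under H₀: τ_1 = ... = τ_t (so F₁ is central F); the variance values are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t, n, m = 3, 5, 4
u = rng.normal(size=(t, n))              # sigma_u = 1
e = rng.normal(size=(t, n, m))           # sigma_e = 1
y = u[:, :, None] + e                    # no treatment differences

ybar_i, ybar_ij, ybar = y.mean(axis=(1, 2)), y.mean(axis=2), y.mean()
ms_trt = n * m * ((ybar_i - ybar) ** 2).sum() / (t - 1)
ms_xu = m * ((ybar_ij - ybar_i[:, None]) ** 2).sum() / (t * n - t)
ms_ou = ((y - ybar_ij[:, :, None]) ** 2).sum() / (t * n * m - t * n)

F1 = ms_trt / ms_xu                      # tests H0: tau_1 = ... = tau_t
F2 = ms_xu / ms_ou                       # tests H0: sigma_u^2 = 0
p1 = stats.f.sf(F1, t - 1, t * n - t)
p2 = stats.f.sf(F2, t * n - t, t * n * m - t * n)
```

Note the denominators: MS_trt is tested against MS_xu(trt), not against MS_ou(xu,trt), because the EMS for xu(trt) matches the EMS for trt under H₀.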

13 Estimating σ²_u

Note that

E( (MS_xu(trt) − MS_ou(xu,trt)) / m ) = ((σ²_e + mσ²_u) − σ²_e)/m = σ²_u.

Thus,

(MS_xu(trt) − MS_ou(xu,trt)) / m

is an unbiased estimator of σ²_u.

14 Although (MS_xu(trt) − MS_ou(xu,trt))/m is an unbiased estimator of σ²_u, this estimator can take negative values. This is undesirable because σ²_u, the variance of the u random effects, cannot be negative.
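With hypothetical mean-square values chosen to make the point, the ANOVA estimator can indeed come out negative; truncating at zero is a common remedy, at the cost of introducing bias:

```python
# hypothetical values: MS_xu(trt) smaller than MS_ou(xu,trt)
ms_xu, ms_ou, m = 1.2, 2.0, 4
sigma2_u_hat = (ms_xu - ms_ou) / m       # unbiased for sigma_u^2, but negative here
sigma2_u_trunc = max(sigma2_u_hat, 0.0)  # truncated (biased) estimator
```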

15 As we have seen previously,

Σ ≡ Var(y) = σ²_u I_{tn×tn} ⊗ 1_{m×1} 1′_{m×1} + σ²_e I_{tnm×tnm}.

It turns out that

β̂_Σ = (X′Σ⁻¹X)⁻X′Σ⁻¹y = (X′X)⁻X′y = β̂.

Thus, the GLS estimator of any estimable Cβ is equal to the OLS estimator in this special case.
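The GLS = OLS equality for estimable functions can be verified numerically. A sketch with arbitrary variance values (1.5 and 0.7 below) and the rank-deficient X handled through a generalized inverse; because Xβ is estimable, comparing fitted values compares the two estimators of every estimable Cβ at once:

```python
import numpy as np

t, n, m = 2, 2, 3
N = t * n * m
X = np.hstack([np.ones((N, 1)), np.kron(np.eye(t), np.ones((n * m, 1)))])
Sigma = 1.5 * np.kron(np.eye(t * n), np.ones((m, m))) + 0.7 * np.eye(N)

rng = np.random.default_rng(3)
y = rng.normal(size=N)

b_gls = np.linalg.pinv(X.T @ np.linalg.solve(Sigma, X)) @ X.T @ np.linalg.solve(Sigma, y)
b_ols = np.linalg.pinv(X.T @ X) @ X.T @ y

# estimable Xb agrees exactly: GLS = OLS in this special case
assert np.allclose(X @ b_gls, X @ b_ols)
```

The equality holds because C(ΣX) ⊆ C(X) for this Σ, so the GLS and OLS projections of y coincide.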

16 An Analysis Based on the Average for Each Experimental Unit

Recall that our model is

y_{ijk} = μ + τ_i + u_{ij} + e_{ijk},  (i = 1, ..., t; j = 1, ..., n; k = 1, ..., m).

The average of the observations for experimental unit ij is

ȳ_{ij·} = μ + τ_i + u_{ij} + ē_{ij·}.

17 If we define

ε_{ij} = u_{ij} + ē_{ij·}  for all i, j  and  σ² = σ²_u + σ²_e/m,

we have

ȳ_{ij·} = μ + τ_i + ε_{ij},

where the ε_{ij} terms are iid N(0, σ²). Thus, averaging the same number (m) of multiple observations per experimental unit results in a normal theory Gauss-Markov linear model for the averages {ȳ_{ij·} : i = 1, ..., t; j = 1, ..., n}.

18 Inferences about estimable functions of β obtained by analyzing these averages are identical to the results obtained using the ANOVA approach, as long as the number of multiple observations per experimental unit is the same for all experimental units. When using the averages as data, our estimate of σ² is an estimate of σ²_u + σ²_e/m. We can't separately estimate σ²_u and σ²_e, but this doesn't matter if our focus is on inference for estimable functions of β.

19 Because

E(y) = [μ + τ_1, μ + τ_2, ..., μ + τ_t]′ ⊗ 1_{nm×1},

the only estimable quantities are linear combinations of the treatment means μ + τ_1, μ + τ_2, ..., μ + τ_t, whose Best Linear Unbiased Estimators are ȳ_{1··}, ȳ_{2··}, ..., ȳ_{t··}, respectively.

20 Thus, any estimable Cβ can always be written as

A [μ + τ_1, μ + τ_2, ..., μ + τ_t]′

for some matrix A. It follows that the BLUE of Cβ can be written as

A [ȳ_{1··}, ȳ_{2··}, ..., ȳ_{t··}]′.

21 Now note that

Var(ȳ_{i··}) = Var(μ + τ_i + ū_{i·} + ē_{i··})
             = Var(ū_{i·} + ē_{i··})
             = Var(ū_{i·}) + Var(ē_{i··})
             = σ²_u/n + σ²_e/(nm)
             = (1/n)(σ²_u + σ²_e/m)
             = σ²/n.

22 Thus

Var([ȳ_{1··}, ..., ȳ_{t··}]′) = (σ²/n) I_{t×t},

which implies that the variance of the BLUE of Cβ is

Var(A [ȳ_{1··}, ..., ȳ_{t··}]′) = A (σ²/n) I_{t×t} A′ = (σ²/n) AA′.

23 Thus, we don't need separate estimates of σ²_u and σ²_e to carry out inference for estimable Cβ. We do need to estimate σ² = σ²_u + σ²_e/m. This can equivalently be estimated by MS_xu(trt)/m or by the MSE in an analysis of the experimental unit means {ȳ_{ij·} : i = 1, ..., t; j = 1, ..., n}.
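The equivalence of the two routes can be seen on simulated data: the MSE from a one-way analysis of the experimental-unit averages equals MS_xu(trt)/m, so both routes give the same standard error for a treatment difference. A sketch (arbitrary simulated data):

```python
import numpy as np

rng = np.random.default_rng(5)
t, n, m = 3, 4, 5
y = rng.normal(size=(t, n, m))                      # any balanced data set

ybar_ij = y.mean(axis=2)                            # experimental-unit averages
ybar_i = ybar_ij.mean(axis=1)

# route 1: full data, mean square for exp. units within treatments
ms_xu = m * ((ybar_ij - ybar_i[:, None]) ** 2).sum() / (t * (n - 1))
se_full = np.sqrt(2 * ms_xu / (m * n))

# route 2: one-way ANOVA on the averages; its MSE estimates sigma^2
mse_means = ((ybar_ij - ybar_i[:, None]) ** 2).sum() / (t * (n - 1))
se_means = np.sqrt(2 * mse_means / n)

assert np.isclose(se_full, se_means)                # identical inference
```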

24 For example, suppose we want to estimate τ_1 − τ_2. The BLUE is ȳ_{1··} − ȳ_{2··}, whose variance is

Var(ȳ_{1··} − ȳ_{2··}) = Var(ȳ_{1··}) + Var(ȳ_{2··}) = 2σ²/n
                       = 2(σ²_u/n + σ²_e/(mn))
                       = (2/(mn))(σ²_e + mσ²_u)
                       = (2/(mn)) E(MS_xu(trt)).

Thus, an unbiased estimator of this variance is

V̂ar(ȳ_{1··} − ȳ_{2··}) = 2 MS_xu(trt)/(mn).

25 A 100(1 − α)% confidence interval for τ_1 − τ_2 is

ȳ_{1··} − ȳ_{2··} ± t_{t(n−1), 1−α/2} √(2 MS_xu(trt)/(mn)).

A test of H₀: τ_1 = τ_2 can be based on

t = (ȳ_{1··} − ȳ_{2··}) / √(2 MS_xu(trt)/(mn)) ~ t_{t(n−1)}( (τ_1 − τ_2) / √(2(σ²_e + mσ²_u)/(mn)) ).
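The interval is straightforward to compute once MS_xu(trt) is available. A sketch on simulated data (τ_1 − τ_2 = −1 and the variance values are arbitrary choices), using scipy's t quantile:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
t, n, m = 2, 6, 3
tau = np.array([0.0, 1.0])
y = (tau[:, None, None]
     + rng.normal(0.0, 1.0, size=(t, n, 1))       # u_ij
     + rng.normal(0.0, 0.5, size=(t, n, m)))      # e_ijk

ybar_i = y.mean(axis=(1, 2))
ybar_ij = y.mean(axis=2)
ms_xu = m * ((ybar_ij - ybar_i[:, None]) ** 2).sum() / (t * (n - 1))

est = ybar_i[0] - ybar_i[1]                       # BLUE of tau_1 - tau_2
se = np.sqrt(2 * ms_xu / (m * n))
tcrit = stats.t.ppf(0.975, t * (n - 1))           # 95% interval
ci = (est - tcrit * se, est + tcrit * se)
```

The degrees of freedom come from MS_xu(trt), not from the observation-level error, reflecting that experimental units, not observations, are the replicates for treatment comparisons.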

26 What if the number of observations per experimental unit is not the same for all experimental units? Let us look at two miniature examples to understand how this type of unbalancedness affects estimation and inference.

27 First Example

y = (y_111, y_121, y_211, y_212)′,

X = [1 0
     1 0
     0 1
     0 1],

Z = [1 0 0
     0 1 0
     0 0 1
     0 0 1].

28 X₁ = 1, X₂ = X, X₃ = Z.

MS_trt = y′(P₂ − P₁)y = 2(ȳ_{1··} − ȳ_{···})² + 2(ȳ_{2··} − ȳ_{···})² = (ȳ_{1··} − ȳ_{2··})²

MS_xu(trt) = y′(P₃ − P₂)y = (y_111 − ȳ_{1··})² + (y_121 − ȳ_{1··})² = ½(y_111 − y_121)²

MS_ou(xu,trt) = y′(I − P₃)y = (y_211 − ȳ_{2··})² + (y_212 − ȳ_{2··})² = ½(y_211 − y_212)²
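These closed forms can be checked numerically with the projection matrices; the data values below are arbitrary illustrations:

```python
import numpy as np

def proj(X):
    # orthogonal projection onto the column space of X
    return X @ np.linalg.pinv(X.T @ X) @ X.T

y = np.array([3.0, 5.0, 2.0, 6.0])       # (y_111, y_121, y_211, y_212)
X2 = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])                  # X
X3 = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [0., 0., 1.]])  # Z
P1, P2, P3 = proj(np.ones((4, 1))), proj(X2), proj(X3)

ms_trt = y @ (P2 - P1) @ y               # each mean square has 1 DF here
ms_xu = y @ (P3 - P2) @ y
ms_ou = y @ (np.eye(4) - P3) @ y

ybar1, ybar2 = y[:2].mean(), y[2:].mean()
assert np.isclose(ms_trt, (ybar1 - ybar2) ** 2)
assert np.isclose(ms_xu, 0.5 * (y[0] - y[1]) ** 2)
assert np.isclose(ms_ou, 0.5 * (y[2] - y[3]) ** 2)
```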

29 E(MS_trt) = E(ȳ_{1··} − ȳ_{2··})²
= E(τ_1 − τ_2 + ū_{1·} − u_21 + ē_{1··} − ē_{2··})²
= (τ_1 − τ_2)² + Var(ū_{1·}) + Var(u_21) + Var(ē_{1··}) + Var(ē_{2··})
= (τ_1 − τ_2)² + σ²_u/2 + σ²_u + σ²_e/2 + σ²_e/2
= (τ_1 − τ_2)² + 1.5σ²_u + σ²_e

30 E(MS_xu(trt)) = ½ E(y_111 − y_121)² = ½ E(u_11 − u_12 + e_111 − e_121)² = ½(2σ²_u + 2σ²_e) = σ²_u + σ²_e

E(MS_ou(xu,trt)) = ½ E(y_211 − y_212)² = ½ E(e_211 − e_212)² = σ²_e

31
SOURCE        EMS
trt           (τ_1 − τ_2)² + 1.5σ²_u + σ²_e
xu(trt)       σ²_u + σ²_e
ou(xu, trt)   σ²_e

F = [MS_trt / (1.5σ²_u + σ²_e)] / [MS_xu(trt) / (σ²_u + σ²_e)] ~ F_{1,1}( (τ_1 − τ_2)² / (1.5σ²_u + σ²_e) )

32 The test statistic that we used to test H₀: τ_1 = ... = τ_t in the balanced case is not F distributed in this unbalanced case:

MS_trt / MS_xu(trt) ~ [(1.5σ²_u + σ²_e)/(σ²_u + σ²_e)] F_{1,1}( (τ_1 − τ_2)² / (1.5σ²_u + σ²_e) ).

33 A Statistic with an Approximate F Distribution

We'd like our denominator to be an unbiased estimator of 1.5σ²_u + σ²_e in this case. Consider

1.5 MS_xu(trt) − 0.5 MS_ou(xu,trt).

The expectation is 1.5(σ²_u + σ²_e) − 0.5σ²_e = 1.5σ²_u + σ²_e. The ratio

MS_trt / (1.5 MS_xu(trt) − 0.5 MS_ou(xu,trt))

can be used as an approximate F statistic with 1 numerator DF and a denominator DF obtained using the Cochran-Satterthwaite method.
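A sketch of the computation with hypothetical mean-square values, using the usual Satterthwaite-style degrees-of-freedom formula for a linear combination of independent mean squares (the details of the method are in the next set of notes; both mean squares carry 1 DF in this example):

```python
import numpy as np
from scipy import stats

# hypothetical mean squares for the first example (each with 1 DF)
ms_trt, ms_xu, ms_ou = 9.0, 2.0, 1.0

denom = 1.5 * ms_xu - 0.5 * ms_ou        # unbiased for 1.5*sigma_u^2 + sigma_e^2
# Satterthwaite-style approximate denominator degrees of freedom
nu = denom ** 2 / ((1.5 * ms_xu) ** 2 / 1 + (0.5 * ms_ou) ** 2 / 1)
F = ms_trt / denom
p = stats.f.sf(F, 1, nu)                 # approximate p-value
```

With single-DF mean squares, the approximate denominator DF comes out below 1, which quantifies the warning on the next slide about the unreliability of this test for so small a data set.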

34 The Cochran-Satterthwaite method will be explained in the next set of notes. We should not expect this approximate F-test to be reliable in this case because of our pitifully small dataset.

35 Best Linear Unbiased Estimates in this First Example

What do the BLUEs of the treatment means look like in this case? Recall

β = [μ_1
     μ_2],

X = [1 0
     1 0
     0 1
     0 1],

Z = [1 0 0
     0 1 0
     0 0 1
     0 0 1].

36 Σ = Var(y) = ZGZ′ + R = σ²_u ZZ′ + σ²_e I

       [1 0 0 0]
= σ²_u [0 1 0 0] + σ²_e I
       [0 0 1 1]
       [0 0 1 1]

  [σ²_u + σ²_e   0             0             0          ]
= [0             σ²_u + σ²_e   0             0          ]
  [0             0             σ²_u + σ²_e   σ²_u       ]
  [0             0             σ²_u          σ²_u + σ²_e]

37 It follows that

β̂_Σ = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y = [ȳ_{1··}
                           ȳ_{2··}].

Fortunately, this is a linear estimator that does not depend on unknown variance components.

38 Second Example

y = (y_111, y_112, y_121, y_211)′,

X = [1 0
     1 0
     1 0
     0 1],

Z = [1 0 0
     1 0 0
     0 1 0
     0 0 1].

39 In this case, it can be shown that

β̂_Σ = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y

     = [ (2σ²_e + 2σ²_u)/(3σ²_e + 4σ²_u) ȳ_{11·} + (σ²_e + 2σ²_u)/(3σ²_e + 4σ²_u) y_121 ]
       [ y_211                                                                          ].

40 It is straightforward to show that the weights on ȳ_{11·} and y_121 are

[1/Var(ȳ_{11·})] / [1/Var(ȳ_{11·}) + 1/Var(y_121)]  and  [1/Var(y_121)] / [1/Var(ȳ_{11·}) + 1/Var(y_121)],

respectively. This is a special case of a more general phenomenon: the BLUE is a weighted average of independent linear unbiased estimators, with weights proportional to the inverse variances of those estimators.
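The inverse-variance weights can be checked against the closed-form GLS weights from the previous slide; the variance-component values below are arbitrary:

```python
import numpy as np

s2u, s2e = 0.8, 1.3                      # arbitrary example variance components
v_mean11 = s2u + s2e / 2                 # Var(ybar_{11.}), based on two observations
v_y121 = s2u + s2e                       # Var(y_{121}), a single observation

total = 1 / v_mean11 + 1 / v_y121
w11, w121 = (1 / v_mean11) / total, (1 / v_y121) / total

# matches the closed-form GLS weights
assert np.isclose(w11, (2 * s2e + 2 * s2u) / (3 * s2e + 4 * s2u))
assert np.isclose(w121, (s2e + 2 * s2u) / (3 * s2e + 4 * s2u))
```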

41 Of course, in this case and in many others,

β̂_Σ = [ (2σ²_e + 2σ²_u)/(3σ²_e + 4σ²_u) ȳ_{11·} + (σ²_e + 2σ²_u)/(3σ²_e + 4σ²_u) y_121 ]
       [ y_211                                                                          ]

is not an estimator because it is a function of unknown parameters. Thus, we use β̂_Σ̂ as our estimator (i.e., we replace σ²_e and σ²_u by estimates in the expression above).

42 β̂_Σ̂ is an approximation to the BLUE β̂_Σ. β̂_Σ̂ is not even a linear estimator in this case, and its exact distribution is unknown. When sample sizes are large, it is reasonable to assume that the distribution of β̂_Σ̂ is approximately the same as the distribution of β̂_Σ.

43 Var(β̂_Σ) = Var[(X′Σ⁻¹X)⁻¹X′Σ⁻¹y]
= (X′Σ⁻¹X)⁻¹X′Σ⁻¹ Var(y) [(X′Σ⁻¹X)⁻¹X′Σ⁻¹]′
= (X′Σ⁻¹X)⁻¹X′Σ⁻¹ Σ Σ⁻¹X (X′Σ⁻¹X)⁻¹
= (X′Σ⁻¹X)⁻¹X′Σ⁻¹X (X′Σ⁻¹X)⁻¹
= (X′Σ⁻¹X)⁻¹

Var(β̂_Σ̂) = Var[(X′Σ̂⁻¹X)⁻¹X′Σ̂⁻¹y] = ????  ≈ (X′Σ̂⁻¹X)⁻¹
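The sandwich collapse in the derivation above can be verified numerically for the second example (variance-component values are again arbitrary):

```python
import numpy as np

# second example design
X = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])
Z = np.array([[1., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
s2u, s2e = 0.8, 1.3
Sigma = s2u * Z @ Z.T + s2e * np.eye(4)

M = X.T @ np.linalg.solve(Sigma, X)                  # X' Sigma^{-1} X
A = np.linalg.inv(M) @ X.T @ np.linalg.inv(Sigma)    # beta_hat_Sigma = A y
sandwich = A @ Sigma @ A.T                           # Var(A y) = A Var(y) A'

# the sandwich collapses to (X' Sigma^{-1} X)^{-1}
assert np.allclose(sandwich, np.linalg.inv(M))
```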

44 Summary of Main Points

Many of the concepts we have seen by examining special cases hold in greater generality. For many of the linear mixed models commonly used in practice, balanced data are nice because...

45
1. It is relatively easy to determine degrees of freedom, sums of squares, and expected mean squares in an ANOVA table.
2. Ratios of appropriate mean squares can be used to obtain exact F-tests.
3. For estimable Cβ, Cβ̂_Σ = Cβ̂ (GLS = OLS).
4. When Var(c′β̂) = constant × E(MS), exact inferences about c′β can be obtained by constructing t-tests or confidence intervals based on

t = (c′β̂ − c′β) / √(constant × MS) ~ t_{DF(MS)}.

5. Simple analysis based on experimental unit averages gives the same results as those obtained by linear mixed model analysis of the full data set.

46 When data are unbalanced, the analysis of linear mixed models may be considerably more complicated.
1. Approximate F-tests can be obtained by forming linear combinations of mean squares to obtain denominators for test statistics.
2. The estimator Cβ̂_Σ̂ may be a nonlinear estimator of Cβ whose exact distribution is unknown.
3. Approximate inference for Cβ is often obtained by using the distribution of Cβ̂_Σ, with unknowns in that distribution replaced by estimates.

47 Whether data are balanced or unbalanced, unbiased estimators of variance components can be obtained using linear combinations of mean squares from the ANOVA table.


More information

Lecture 15. Hypothesis testing in the linear model

Lecture 15. Hypothesis testing in the linear model 14. Lecture 15. Hypothesis testing in the linear model Lecture 15. Hypothesis testing in the linear model 1 (1 1) Preliminary lemma 15. Hypothesis testing in the linear model 15.1. Preliminary lemma Lemma

More information

Least Squares Estimation-Finite-Sample Properties

Least Squares Estimation-Finite-Sample Properties Least Squares Estimation-Finite-Sample Properties Ping Yu School of Economics and Finance The University of Hong Kong Ping Yu (HKU) Finite-Sample 1 / 29 Terminology and Assumptions 1 Terminology and Assumptions

More information

When is the OLSE the BLUE? Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 40

When is the OLSE the BLUE? Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 40 When is the OLSE the BLUE? Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 40 When is the Ordinary Least Squares Estimator (OLSE) the Best Linear Unbiased Estimator (BLUE)? Copyright

More information

Econometrics A. Simple linear model (2) Keio University, Faculty of Economics. Simon Clinet (Keio University) Econometrics A October 16, / 11

Econometrics A. Simple linear model (2) Keio University, Faculty of Economics. Simon Clinet (Keio University) Econometrics A October 16, / 11 Econometrics A Keio University, Faculty of Economics Simple linear model (2) Simon Clinet (Keio University) Econometrics A October 16, 2018 1 / 11 Estimation of the noise variance σ 2 In practice σ 2 too

More information

Heteroskedasticity. Part VII. Heteroskedasticity

Heteroskedasticity. Part VII. Heteroskedasticity Part VII Heteroskedasticity As of Oct 15, 2015 1 Heteroskedasticity Consequences Heteroskedasticity-robust inference Testing for Heteroskedasticity Weighted Least Squares (WLS) Feasible generalized Least

More information

Weighted Least Squares

Weighted Least Squares Weighted Least Squares The standard linear model assumes that Var(ε i ) = σ 2 for i = 1,..., n. As we have seen, however, there are instances where Var(Y X = x i ) = Var(ε i ) = σ2 w i. Here w 1,..., w

More information

20.1. Balanced One-Way Classification Cell means parametrization: ε 1. ε I. + ˆɛ 2 ij =

20.1. Balanced One-Way Classification Cell means parametrization: ε 1. ε I. + ˆɛ 2 ij = 20. ONE-WAY ANALYSIS OF VARIANCE 1 20.1. Balanced One-Way Classification Cell means parametrization: Y ij = µ i + ε ij, i = 1,..., I; j = 1,..., J, ε ij N(0, σ 2 ), In matrix form, Y = Xβ + ε, or 1 Y J

More information

Topic 25 - One-Way Random Effects Models. Outline. Random Effects vs Fixed Effects. Data for One-way Random Effects Model. One-way Random effects

Topic 25 - One-Way Random Effects Models. Outline. Random Effects vs Fixed Effects. Data for One-way Random Effects Model. One-way Random effects Topic 5 - One-Way Random Effects Models One-way Random effects Outline Model Variance component estimation - Fall 013 Confidence intervals Topic 5 Random Effects vs Fixed Effects Consider factor with numerous

More information

STAT 705 Chapter 19: Two-way ANOVA

STAT 705 Chapter 19: Two-way ANOVA STAT 705 Chapter 19: Two-way ANOVA Adapted from Timothy Hanson Department of Statistics, University of South Carolina Stat 705: Data Analysis II 1 / 41 Two-way ANOVA This material is covered in Sections

More information

Matrix Approach to Simple Linear Regression: An Overview

Matrix Approach to Simple Linear Regression: An Overview Matrix Approach to Simple Linear Regression: An Overview Aspects of matrices that you should know: Definition of a matrix Addition/subtraction/multiplication of matrices Symmetric/diagonal/identity matrix

More information

ECON The Simple Regression Model

ECON The Simple Regression Model ECON 351 - The Simple Regression Model Maggie Jones 1 / 41 The Simple Regression Model Our starting point will be the simple regression model where we look at the relationship between two variables In

More information

Stat 511 Outline (Spring 2008 Revision of 2004 Version)

Stat 511 Outline (Spring 2008 Revision of 2004 Version) Stat 511 Outline (Spring 2008 Revision of 2004 Version) Steve Vardeman Iowa State University February 1, 2008 Abstract This outline summarizes the main points of lectures based on Ken Koehler s class notes

More information

STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method.

STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method. STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method. Rebecca Barter May 5, 2015 Linear Regression Review Linear Regression Review

More information

Two-Way Factorial Designs

Two-Way Factorial Designs 81-86 Two-Way Factorial Designs Yibi Huang 81-86 Two-Way Factorial Designs Chapter 8A - 1 Problem 81 Sprouting Barley (p166 in Oehlert) Brewer s malt is produced from germinating barley, so brewers like

More information

Ch 3: Multiple Linear Regression

Ch 3: Multiple Linear Regression Ch 3: Multiple Linear Regression 1. Multiple Linear Regression Model Multiple regression model has more than one regressor. For example, we have one response variable and two regressor variables: 1. delivery

More information

Inference in Regression Analysis

Inference in Regression Analysis Inference in Regression Analysis Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 4, Slide 1 Today: Normal Error Regression Model Y i = β 0 + β 1 X i + ǫ i Y i value

More information

Quantitative Analysis of Financial Markets. Summary of Part II. Key Concepts & Formulas. Christopher Ting. November 11, 2017

Quantitative Analysis of Financial Markets. Summary of Part II. Key Concepts & Formulas. Christopher Ting. November 11, 2017 Summary of Part II Key Concepts & Formulas Christopher Ting November 11, 2017 christopherting@smu.edu.sg http://www.mysmu.edu/faculty/christophert/ Christopher Ting 1 of 16 Why Regression Analysis? Understand

More information

Econometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018

Econometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018 Econometrics I KS Module 2: Multivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: April 16, 2018 Alexander Ahammer (JKU) Module 2: Multivariate

More information

Business Statistics. Tommaso Proietti. Linear Regression. DEF - Università di Roma 'Tor Vergata'

Business Statistics. Tommaso Proietti. Linear Regression. DEF - Università di Roma 'Tor Vergata' Business Statistics Tommaso Proietti DEF - Università di Roma 'Tor Vergata' Linear Regression Specication Let Y be a univariate quantitative response variable. We model Y as follows: Y = f(x) + ε where

More information

Stat 511 Outline (Spring 2009 Revision of 2008 Version)

Stat 511 Outline (Spring 2009 Revision of 2008 Version) Stat 511 Outline Spring 2009 Revision of 2008 Version Steve Vardeman Iowa State University September 29, 2014 Abstract This outline summarizes the main points of lectures based on Ken Koehler s class notes

More information

DESAIN EKSPERIMEN Analysis of Variances (ANOVA) Semester Genap 2017/2018 Jurusan Teknik Industri Universitas Brawijaya

DESAIN EKSPERIMEN Analysis of Variances (ANOVA) Semester Genap 2017/2018 Jurusan Teknik Industri Universitas Brawijaya DESAIN EKSPERIMEN Analysis of Variances (ANOVA) Semester Jurusan Teknik Industri Universitas Brawijaya Outline Introduction The Analysis of Variance Models for the Data Post-ANOVA Comparison of Means Sample

More information

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015 Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Questions and Answers on Heteroskedasticity, Autocorrelation and Generalized Least Squares

Questions and Answers on Heteroskedasticity, Autocorrelation and Generalized Least Squares Questions and Answers on Heteroskedasticity, Autocorrelation and Generalized Least Squares L Magee Fall, 2008 1 Consider a regression model y = Xβ +ɛ, where it is assumed that E(ɛ X) = 0 and E(ɛɛ X) =

More information

The Statistical Property of Ordinary Least Squares

The Statistical Property of Ordinary Least Squares The Statistical Property of Ordinary Least Squares The linear equation, on which we apply the OLS is y t = X t β + u t Then, as we have derived, the OLS estimator is ˆβ = [ X T X] 1 X T y Then, substituting

More information

Multiple Regression Analysis. Part III. Multiple Regression Analysis

Multiple Regression Analysis. Part III. Multiple Regression Analysis Part III Multiple Regression Analysis As of Sep 26, 2017 1 Multiple Regression Analysis Estimation Matrix form Goodness-of-Fit R-square Adjusted R-square Expected values of the OLS estimators Irrelevant

More information

PROBLEM TWO (ALKALOID CONCENTRATIONS IN TEA) 1. Statistical Design

PROBLEM TWO (ALKALOID CONCENTRATIONS IN TEA) 1. Statistical Design PROBLEM TWO (ALKALOID CONCENTRATIONS IN TEA) 1. Statistical Design The purpose of this experiment was to determine differences in alkaloid concentration of tea leaves, based on herb variety (Factor A)

More information

Multiple Linear Regression

Multiple Linear Regression Multiple Linear Regression Simple linear regression tries to fit a simple line between two variables Y and X. If X is linearly related to Y this explains some of the variability in Y. In most cases, there

More information

Lecture 2: Linear and Mixed Models

Lecture 2: Linear and Mixed Models Lecture 2: Linear and Mixed Models Bruce Walsh lecture notes Introduction to Mixed Models SISG, Seattle 18 20 July 2018 1 Quick Review of the Major Points The general linear model can be written as y =

More information

coefficients n 2 are the residuals obtained when we estimate the regression on y equals the (simple regression) estimated effect of the part of x 1

coefficients n 2 are the residuals obtained when we estimate the regression on y equals the (simple regression) estimated effect of the part of x 1 Review - Interpreting the Regression If we estimate: It can be shown that: where ˆ1 r i coefficients β ˆ+ βˆ x+ βˆ ˆ= 0 1 1 2x2 y ˆβ n n 2 1 = rˆ i1yi rˆ i1 i= 1 i= 1 xˆ are the residuals obtained when

More information

11 Hypothesis Testing

11 Hypothesis Testing 28 11 Hypothesis Testing 111 Introduction Suppose we want to test the hypothesis: H : A q p β p 1 q 1 In terms of the rows of A this can be written as a 1 a q β, ie a i β for each row of A (here a i denotes

More information

General Linear Model: Statistical Inference

General Linear Model: Statistical Inference Chapter 6 General Linear Model: Statistical Inference 6.1 Introduction So far we have discussed formulation of linear models (Chapter 1), estimability of parameters in a linear model (Chapter 4), least

More information

The general linear regression with k explanatory variables is just an extension of the simple regression as follows

The general linear regression with k explanatory variables is just an extension of the simple regression as follows 3. Multiple Regression Analysis The general linear regression with k explanatory variables is just an extension of the simple regression as follows (1) y i = β 0 + β 1 x i1 + + β k x ik + u i. Because

More information

Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance:

Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = The errors are uncorrelated with common variance: 8. PROPERTIES OF LEAST SQUARES ESTIMATES 1 Basic Distributional Assumptions of the Linear Model: 1. The errors are unbiased: E[ε] = 0. 2. The errors are uncorrelated with common variance: These assumptions

More information

Regression Analysis. y t = β 1 x t1 + β 2 x t2 + β k x tk + ϵ t, t = 1,..., T,

Regression Analysis. y t = β 1 x t1 + β 2 x t2 + β k x tk + ϵ t, t = 1,..., T, Regression Analysis The multiple linear regression model with k explanatory variables assumes that the tth observation of the dependent or endogenous variable y t is described by the linear relationship

More information

Stat 217 Final Exam. Name: May 1, 2002

Stat 217 Final Exam. Name: May 1, 2002 Stat 217 Final Exam Name: May 1, 2002 Problem 1. Three brands of batteries are under study. It is suspected that the lives (in weeks) of the three brands are different. Five batteries of each brand are

More information

Review of Econometrics

Review of Econometrics Review of Econometrics Zheng Tian June 5th, 2017 1 The Essence of the OLS Estimation Multiple regression model involves the models as follows Y i = β 0 + β 1 X 1i + β 2 X 2i + + β k X ki + u i, i = 1,...,

More information

Quick Review on Linear Multiple Regression

Quick Review on Linear Multiple Regression Quick Review on Linear Multiple Regression Mei-Yuan Chen Department of Finance National Chung Hsing University March 6, 2007 Introduction for Conditional Mean Modeling Suppose random variables Y, X 1,

More information

One-way ANOVA (Single-Factor CRD)

One-way ANOVA (Single-Factor CRD) One-way ANOVA (Single-Factor CRD) STAT:5201 Week 3: Lecture 3 1 / 23 One-way ANOVA We have already described a completed randomized design (CRD) where treatments are randomly assigned to EUs. There is

More information

Analysis of Variance and Design of Experiments-I

Analysis of Variance and Design of Experiments-I Analysis of Variance and Design of Experiments-I MODULE VIII LECTURE - 35 ANALYSIS OF VARIANCE IN RANDOM-EFFECTS MODEL AND MIXED-EFFECTS MODEL Dr. Shalabh Department of Mathematics and Statistics Indian

More information

COMPLETELY RANDOM DESIGN (CRD) -Design can be used when experimental units are essentially homogeneous.

COMPLETELY RANDOM DESIGN (CRD) -Design can be used when experimental units are essentially homogeneous. COMPLETELY RANDOM DESIGN (CRD) Description of the Design -Simplest design to use. -Design can be used when experimental units are essentially homogeneous. -Because of the homogeneity requirement, it may

More information

Theorem A: Expectations of Sums of Squares Under the two-way ANOVA model, E(X i X) 2 = (µ i µ) 2 + n 1 n σ2

Theorem A: Expectations of Sums of Squares Under the two-way ANOVA model, E(X i X) 2 = (µ i µ) 2 + n 1 n σ2 identity Y ijk Ȳ = (Y ijk Ȳij ) + (Ȳi Ȳ ) + (Ȳ j Ȳ ) + (Ȳij Ȳi Ȳ j + Ȳ ) Theorem A: Expectations of Sums of Squares Under the two-way ANOVA model, (1) E(MSE) = E(SSE/[IJ(K 1)]) = (2) E(MSA) = E(SSA/(I

More information

STAT 705 Chapter 19: Two-way ANOVA

STAT 705 Chapter 19: Two-way ANOVA STAT 705 Chapter 19: Two-way ANOVA Timothy Hanson Department of Statistics, University of South Carolina Stat 705: Data Analysis II 1 / 38 Two-way ANOVA Material covered in Sections 19.2 19.4, but a bit

More information

ECON Introductory Econometrics. Lecture 5: OLS with One Regressor: Hypothesis Tests

ECON Introductory Econometrics. Lecture 5: OLS with One Regressor: Hypothesis Tests ECON4150 - Introductory Econometrics Lecture 5: OLS with One Regressor: Hypothesis Tests Monique de Haan (moniqued@econ.uio.no) Stock and Watson Chapter 5 Lecture outline 2 Testing Hypotheses about one

More information

STAT22200 Spring 2014 Chapter 8A

STAT22200 Spring 2014 Chapter 8A STAT22200 Spring 2014 Chapter 8A Yibi Huang May 13, 2014 81-86 Two-Way Factorial Designs Chapter 8A - 1 Problem 81 Sprouting Barley (p166 in Oehlert) Brewer s malt is produced from germinating barley,

More information

Introduction to Estimation Methods for Time Series models. Lecture 1

Introduction to Estimation Methods for Time Series models. Lecture 1 Introduction to Estimation Methods for Time Series models Lecture 1 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 1 SNS Pisa 1 / 19 Estimation

More information

Generalized Linear Mixed-Effects Models. Copyright c 2015 Dan Nettleton (Iowa State University) Statistics / 58

Generalized Linear Mixed-Effects Models. Copyright c 2015 Dan Nettleton (Iowa State University) Statistics / 58 Generalized Linear Mixed-Effects Models Copyright c 2015 Dan Nettleton (Iowa State University) Statistics 510 1 / 58 Reconsideration of the Plant Fungus Example Consider again the experiment designed to

More information

Asymptotic Theory. L. Magee revised January 21, 2013

Asymptotic Theory. L. Magee revised January 21, 2013 Asymptotic Theory L. Magee revised January 21, 2013 1 Convergence 1.1 Definitions Let a n to refer to a random variable that is a function of n random variables. Convergence in Probability The scalar a

More information