REPEATED MEASURES

Copyright © 2012 (Iowa State University)



Repeated Measures Example

In an exercise therapy study, subjects were assigned to one of three weightlifting programs:

i = 1: the number of repetitions of weightlifting was increased as subjects became stronger (RI)
i = 2: the amount of weight was increased as subjects became stronger (WI)
i = 3: subjects did not participate in weightlifting (XCont)

Measurements of strength (y) were taken on days 2, 4, 6, 8, 10, 12, and 14 for each subject.

Source: Littell, Freund, and Spector (1991), SAS System for Linear Models. R code: RepeatedMeasuresR.

A Linear Mixed-Effect Model

$$y_{ijk} = \mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk}$$

where
$y_{ijk}$ = strength measurement for program $i$, subject $j$, time point $k$,
$\alpha_i$ = fixed program effect,
$s_{ij}$ = random subject effect,
$\tau_k$ = fixed time effect,
$\gamma_{ik}$ = fixed program $\times$ time interaction effect,
$e_{ijk}$ = random error.

Initially, we will assume $s_{ij} \overset{iid}{\sim} N(0, \sigma_s^2)$, independent of $e_{ijk} \overset{iid}{\sim} N(0, \sigma_e^2)$.
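The course references an accompanying R script (RepeatedMeasuresR), which is not reproduced here. As a minimal sketch of how this model can be fit, assuming a hypothetical long-format data frame `wtlift` with factor columns `program`, `subject`, `time` and a numeric response `strength`:

```r
## Minimal sketch (not the course's RepeatedMeasuresR script).
## Assumes a hypothetical long-format data frame `wtlift` with factor columns
## program (3 levels), subject (unique label per subject), time (7 levels),
## and a numeric response strength.
library(lme4)

fit <- lmer(strength ~ program * time + (1 | subject), data = wtlift)

summary(fit)   # fixed-effect estimates and variance components
VarCorr(fit)   # estimated sigma_s (subject) and sigma_e (residual)
```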

Average strength after $2k$ days on the $i$th program is

$$\begin{aligned}
\mu_{ik} = E(y_{ijk}) &= E(\mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk}) \\
&= \mu + \alpha_i + E(s_{ij}) + \tau_k + \gamma_{ik} + E(e_{ijk}) \\
&= \mu + \alpha_i + \tau_k + \gamma_{ik}
\end{aligned}$$

for $i = 1, 2, 3$ and $k = 1, 2, \ldots, 7$.

The variance of any single observation is

$$\begin{aligned}
\mathrm{Var}(y_{ijk}) &= \mathrm{Var}(\mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk}) \\
&= \mathrm{Var}(s_{ij} + e_{ijk}) = \mathrm{Var}(s_{ij}) + \mathrm{Var}(e_{ijk}) = \sigma_s^2 + \sigma_e^2.
\end{aligned}$$

The covariance between any two different observations from the same subject is

$$\begin{aligned}
\mathrm{Cov}(y_{ijk}, y_{ijl}) &= \mathrm{Cov}(\mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk},\ \mu + \alpha_i + s_{ij} + \tau_l + \gamma_{il} + e_{ijl}) \\
&= \mathrm{Cov}(s_{ij} + e_{ijk},\ s_{ij} + e_{ijl}) \\
&= \mathrm{Cov}(s_{ij}, s_{ij}) + \mathrm{Cov}(s_{ij}, e_{ijl}) + \mathrm{Cov}(e_{ijk}, s_{ij}) + \mathrm{Cov}(e_{ijk}, e_{ijl}) \\
&= \mathrm{Var}(s_{ij}) = \sigma_s^2.
\end{aligned}$$

The correlation between $y_{ijk}$ and $y_{ijl}$ ($k \neq l$) is

$$\frac{\sigma_s^2}{\sigma_s^2 + \sigma_e^2} \equiv \rho.$$

Observations taken on different subjects are uncorrelated.
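This within-subject correlation can be estimated from the fitted variance components; a sketch continuing from the hypothetical `fit` object created above:

```r
## Sketch: estimate rho = sigma_s^2 / (sigma_s^2 + sigma_e^2) from the
## hypothetical lmer fit created earlier.
vc <- as.data.frame(VarCorr(fit))
sigma2_s_hat <- vc$vcov[vc$grp == "subject"]
sigma2_e_hat <- vc$vcov[vc$grp == "Residual"]
rho_hat <- sigma2_s_hat / (sigma2_s_hat + sigma2_e_hat)
rho_hat
```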

For the set of observations taken on a single subject, we have

$$\mathrm{Var}\begin{bmatrix} y_{ij1} \\ y_{ij2} \\ \vdots \\ y_{ij7} \end{bmatrix}
= \begin{bmatrix}
\sigma_e^2 + \sigma_s^2 & \sigma_s^2 & \cdots & \sigma_s^2 \\
\sigma_s^2 & \sigma_e^2 + \sigma_s^2 & \cdots & \sigma_s^2 \\
\vdots & & \ddots & \vdots \\
\sigma_s^2 & \sigma_s^2 & \cdots & \sigma_e^2 + \sigma_s^2
\end{bmatrix}
= \sigma_e^2 I + \sigma_s^2 \mathbf{1}\mathbf{1}'.$$

This is known as a compound symmetric covariance structure.
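As an illustration, the 7 x 7 compound symmetric matrix can be built directly from (hypothetical) variance component values:

```r
## Sketch: construct sigma_e^2 * I + sigma_s^2 * 11' for hypothetical values.
sigma2_s <- 9   # hypothetical subject variance
sigma2_e <- 4   # hypothetical error variance

V <- sigma2_e * diag(7) + sigma2_s * matrix(1, 7, 7)
V[1:3, 1:3]        # off-diagonal entries all equal sigma2_s
cov2cor(V)[1, 2]   # common correlation rho = sigma2_s / (sigma2_s + sigma2_e)
```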

Using $n_i$ to denote the number of subjects in the $i$th program, we can write this model in the form $y = X\beta + Zu + e$ as follows.

The responses are stacked by program, subject, and time, and the fixed-effect vector collects the overall mean, the program effects, the time effects, and the program $\times$ time interaction effects ($X$ contains the intercept column, indicator columns for program and time, and columns for the interaction):

$$y = \begin{bmatrix} y_{111} \\ y_{112} \\ \vdots \\ y_{117} \\ y_{121} \\ y_{122} \\ \vdots \\ y_{127} \\ \vdots \\ y_{3n_3 1} \\ y_{3n_3 2} \\ \vdots \\ y_{3n_3 7} \end{bmatrix},
\qquad
\beta = \begin{bmatrix} \mu \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \tau_1 \\ \tau_2 \\ \vdots \\ \tau_7 \\ \gamma_{11} \\ \gamma_{12} \\ \vdots \\ \gamma_{37} \end{bmatrix}.$$

The random subject effects and errors are stacked in the same order:

$$u = \begin{bmatrix} s_{11} \\ s_{12} \\ \vdots \\ s_{3n_3} \end{bmatrix},
\qquad
e = \begin{bmatrix} e_{111} \\ e_{112} \\ \vdots \\ e_{117} \\ e_{121} \\ e_{122} \\ \vdots \\ e_{127} \\ \vdots \\ e_{3n_3 1} \\ e_{3n_3 2} \\ \vdots \\ e_{3n_3 7} \end{bmatrix},$$

and $Z$ has one column for each subject, equal to 1 in the rows corresponding to that subject's seven observations and 0 elsewhere.

In this case, $G = \mathrm{Var}(u) = \sigma_s^2 I_{m \times m}$ and $R = \mathrm{Var}(e) = \sigma_e^2 I_{(7m) \times (7m)}$, where $m = n_1 + n_2 + n_3$ is the total number of subjects.

$\Sigma = \mathrm{Var}(y) = ZGZ' + R$ is a block-diagonal matrix with one $7 \times 7$ block of the form $\sigma_s^2 \mathbf{1}\mathbf{1}' + \sigma_e^2 I$ for each subject.
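A sketch of this construction in R, using the subject counts from these notes and hypothetical variance components, makes the block-diagonal structure concrete:

```r
## Sketch: build Z, G, R, and Sigma = Z G Z' + R for this design.
n <- c(16, 21, 20)                 # subjects per program (from the notes)
m <- sum(n)                        # m = 57 subjects in total
Z <- kronecker(diag(m), matrix(1, nrow = 7, ncol = 1))  # one column of ones per subject
sigma2_s <- 9; sigma2_e <- 4       # hypothetical variance components

G <- sigma2_s * diag(m)
R <- sigma2_e * diag(7 * m)
Sigma <- Z %*% G %*% t(Z) + R

Sigma[1:7, 1:7]    # block for subject 1: sigma_s^2 * 11' + sigma_e^2 * I
Sigma[1:7, 8:14]   # zero block: different subjects are uncorrelated
```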

LSMEAN for Program i and Time k

$$\bar{y}_{i \cdot k} = \frac{1}{n_i} \sum_{j=1}^{n_i} y_{ijk} = \mu + \alpha_i + \bar{s}_{i \cdot} + \tau_k + \gamma_{ik} + \bar{e}_{i \cdot k}$$

$$\mathrm{Var}(\bar{y}_{i \cdot k}) = \frac{\sigma_s^2 + \sigma_e^2}{n_i},
\qquad
\mathrm{se} = \sqrt{\left(\tfrac{6}{7}\, MS_{error} + \tfrac{1}{7}\, MS_{subj(prog)}\right) \frac{1}{n_i}}$$

(the coefficients 6/7 and 1/7 arise because $E(MS_{error}) = \sigma_e^2$ and $E(MS_{subj(prog)}) = \sigma_e^2 + 7\sigma_s^2$, so this combination of mean squares is unbiased for $\sigma_s^2 + \sigma_e^2$).

Cochran-Satterthwaite degrees of freedom = 65.8
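The Cochran-Satterthwaite approximation used here can be computed directly from the mean squares; a sketch with hypothetical mean-square values (the coefficients and degrees of freedom are those from these notes):

```r
## Sketch: Cochran-Satterthwaite approximate df for a linear combination
## sum(coef * MS) of independent mean squares, each with its own df.
satterthwaite_df <- function(coef, MS, df) {
  sum(coef * MS)^2 / sum((coef * MS)^2 / df)
}

## Hypothetical mean squares; df = 324 (error) and 54 (subj(prog)) as in the notes.
satterthwaite_df(coef = c(6/7, 1/7), MS = c(10, 300), df = c(324, 54))
```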

LSMEAN for Program i

$$\bar{y}_{i \cdot \cdot} = \frac{1}{7} \sum_{k=1}^{7} \bar{y}_{i \cdot k} = \mu + \alpha_i + \bar{s}_{i \cdot} + \bar{\tau}_{\cdot} + \bar{\gamma}_{i \cdot} + \bar{e}_{i \cdot \cdot}$$

$$\mathrm{Var}(\bar{y}_{i \cdot \cdot}) = \frac{7\sigma_s^2 + \sigma_e^2}{7 n_i},
\qquad
\mathrm{se} = \sqrt{\frac{MS_{subj(prog)}}{7 n_i}}$$

Degrees of freedom = 54 = (16 - 1) + (21 - 1) + (20 - 1), because there are $n_1 = 16$, $n_2 = 21$, $n_3 = 20$ subjects in the three programs.

LSMEAN for Time k

$$\frac{1}{3} \sum_{i=1}^{3} \bar{y}_{i \cdot k} = \mu + \bar{\alpha}_{\cdot} + \frac{1}{3} \sum_{i=1}^{3} \bar{s}_{i \cdot} + \tau_k + \bar{\gamma}_{\cdot k} + \frac{1}{3} \sum_{i=1}^{3} \bar{e}_{i \cdot k}$$

$$\mathrm{Var}\left(\frac{1}{3} \sum_{i=1}^{3} \bar{y}_{i \cdot k}\right) = \frac{1}{9} \sum_{i=1}^{3} \frac{\sigma_e^2 + \sigma_s^2}{n_i},
\qquad
\mathrm{se} = \sqrt{\left(\tfrac{6}{7}\, MS_{error} + \tfrac{1}{7}\, MS_{subj(prog)}\right) \frac{1}{9} \sum_{i=1}^{3} \frac{1}{n_i}}$$

Cochran-Satterthwaite degrees of freedom = 65.8

Difference between Strength Means at Times k and l (Averaging across Programs)

$$\frac{1}{3} \sum_{i=1}^{3} (\bar{y}_{i \cdot k} - \bar{y}_{i \cdot l}) = \frac{1}{3} \sum_{i=1}^{3} \frac{1}{n_i} \sum_{j=1}^{n_i} (y_{ijk} - y_{ijl}) = \tau_k - \tau_l + \bar{\gamma}_{\cdot k} - \bar{\gamma}_{\cdot l} + \frac{1}{3} \sum_{i=1}^{3} \frac{1}{n_i} \sum_{j=1}^{n_i} (e_{ijk} - e_{ijl})$$

Note that the subject effects cancel out.

The variance of the estimator is

$$\frac{2\sigma_e^2}{9} \sum_{i=1}^{3} \frac{1}{n_i}.$$

Thus, the standard error of the estimator is

$$\sqrt{\frac{2\, MS_{error}}{9} \sum_{i=1}^{3} \frac{1}{n_i}}.$$

Degrees of freedom = 324 (the $MS_{error}$ degrees of freedom).

Difference between Strength Means for Programs i and l at Time k

$$\bar{y}_{i \cdot k} - \bar{y}_{l \cdot k} = \alpha_i - \alpha_l + \gamma_{ik} - \gamma_{lk} + \bar{s}_{i \cdot} - \bar{s}_{l \cdot} + \bar{e}_{i \cdot k} - \bar{e}_{l \cdot k}$$

$$\mathrm{Var}(\bar{y}_{i \cdot k} - \bar{y}_{l \cdot k}) = (\sigma_e^2 + \sigma_s^2) \left( \frac{1}{n_i} + \frac{1}{n_l} \right),
\qquad
\mathrm{se} = \sqrt{\left(\tfrac{6}{7}\, MS_{error} + \tfrac{1}{7}\, MS_{subj(prog)}\right) \left( \frac{1}{n_i} + \frac{1}{n_l} \right)}$$

Cochran-Satterthwaite degrees of freedom = 65.8

Difference between Strength Means at Times k and l within Program i

$$\bar{y}_{i \cdot k} - \bar{y}_{i \cdot l} = \tau_k - \tau_l + \gamma_{ik} - \gamma_{il} + \bar{e}_{i \cdot k} - \bar{e}_{i \cdot l}$$

$$\mathrm{Var}(\bar{y}_{i \cdot k} - \bar{y}_{i \cdot l}) = \mathrm{Var}\left( \frac{1}{n_i} \sum_{j=1}^{n_i} (y_{ijk} - y_{ijl}) \right) = \frac{2\sigma_e^2}{n_i},
\qquad
\mathrm{se} = \sqrt{\frac{2\, MS_{error}}{n_i}}$$

Degrees of freedom = 324

Difference in Strength Means between Programs i and l (Averaging across Time)

$$\bar{y}_{i \cdot \cdot} - \bar{y}_{l \cdot \cdot} = \alpha_i - \alpha_l + \bar{\gamma}_{i \cdot} - \bar{\gamma}_{l \cdot} + \bar{s}_{i \cdot} - \bar{s}_{l \cdot} + \bar{e}_{i \cdot \cdot} - \bar{e}_{l \cdot \cdot}$$

$$\mathrm{Var}(\bar{y}_{i \cdot \cdot} - \bar{y}_{l \cdot \cdot}) = (7\sigma_s^2 + \sigma_e^2) \left( \frac{1}{7 n_i} + \frac{1}{7 n_l} \right),
\qquad
\mathrm{se} = \sqrt{MS_{subj(prog)} \left( \frac{1}{7 n_i} + \frac{1}{7 n_l} \right)}$$

Degrees of freedom = 54
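All of the least-squares means and differences above can be reproduced from a fitted model; a hedged sketch using lmerTest and emmeans, continuing with the hypothetical `wtlift` data frame from the earlier sketch:

```r
## Sketch: LSMEANs and pairwise differences with Satterthwaite df,
## using the hypothetical `wtlift` data frame introduced earlier.
library(lmerTest)   # refit with lmerTest::lmer so Satterthwaite df are available
library(emmeans)

fit <- lmer(strength ~ program * time + (1 | subject), data = wtlift)
emm_options(lmer.df = "satterthwaite")

emmeans(fit, ~ program)                 # program LSMEANs (compare with df = 54 above)
emmeans(fit, ~ time)                    # time LSMEANs
pairs(emmeans(fit, ~ program | time))   # program differences at each time
pairs(emmeans(fit, ~ time | program))   # time differences within each program
pairs(emmeans(fit, ~ program))          # program differences, averaging over time
```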

Other Variance Matrices

We began with the model

$$y_{ijk} = \mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk},$$

where $s_{ij} \overset{iid}{\sim} N(0, \sigma_s^2)$, independent of $e_{ijk} \overset{iid}{\sim} N(0, \sigma_e^2)$.

This model was expressed in the form $y = X\beta + Zu + e$, where $u = (s_{11}, s_{12}, \ldots, s_{3n_3})'$ contained the random subject effects.

Here $G = \mathrm{Var}(u) = \sigma_s^2 I$, $R = \mathrm{Var}(e) = \sigma_e^2 I$, and

$$\Sigma = \mathrm{Var}(y) = ZGZ' + R
= \begin{bmatrix}
\sigma_s^2 \mathbf{1}\mathbf{1}' & & \\
& \ddots & \\
& & \sigma_s^2 \mathbf{1}\mathbf{1}'
\end{bmatrix} + \sigma_e^2 I
= \begin{bmatrix}
\sigma_e^2 I + \sigma_s^2 \mathbf{1}\mathbf{1}' & & \\
& \ddots & \\
& & \sigma_e^2 I + \sigma_s^2 \mathbf{1}\mathbf{1}'
\end{bmatrix}.$$

If you are not interested in predicting the subject effects (the random subject effects are included only to introduce correlation among repeated measures on the same subject), you can work with an alternative expression of the same model, the general linear model $y = X\beta + \epsilon$, where

$$\mathrm{Var}(y) = \mathrm{Var}(\epsilon)
= \begin{bmatrix}
\sigma_e^2 I + \sigma_s^2 \mathbf{1}\mathbf{1}' & & \\
& \ddots & \\
& & \sigma_e^2 I + \sigma_s^2 \mathbf{1}\mathbf{1}'
\end{bmatrix}.$$

More generally, we can replace the mixed model $y = X\beta + Zu + e$ with the model $y = X\beta + \epsilon$, where

$$\mathrm{Var}(y) = \mathrm{Var}(\epsilon)
= \begin{bmatrix}
W & 0 & \cdots & 0 \\
0 & W & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & W
\end{bmatrix}.$$

We can choose a structure for $W$ that seems appropriate based on the design and the data.

One choice for $W$ is a compound symmetric matrix like the one we have considered previously. Another choice for $W$ is an unstructured positive definite matrix. A common choice for $W$ when repeated measures are equally spaced in time is the first-order autoregressive structure known as AR(1).
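These marginal covariance structures can be fit directly; a hedged sketch using nlme::gls on the hypothetical `wtlift` data frame, with rows assumed ordered by time within each subject:

```r
## Sketch: marginal model y = X beta + epsilon with different choices of W,
## fit by REML with nlme::gls. Uses the hypothetical `wtlift` data frame;
## rows are assumed to be ordered by time within each subject.
library(nlme)

fit_cs <- gls(strength ~ program * time,
              correlation = corCompSymm(form = ~ 1 | subject), data = wtlift)

fit_un <- gls(strength ~ program * time,
              correlation = corSymm(form = ~ 1 | subject),
              weights = varIdent(form = ~ 1 | time), data = wtlift)

fit_ar1 <- gls(strength ~ program * time,
               correlation = corAR1(form = ~ 1 | subject), data = wtlift)

AIC(fit_cs, fit_un, fit_ar1)   # compare the candidate covariance structures
```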

AR(1): First-Order Autoregressive Covariance Structure

$$W = \sigma^2 \begin{bmatrix}
1 & \rho & \rho^2 & \rho^3 & \rho^4 & \rho^5 & \rho^6 \\
\rho & 1 & \rho & \rho^2 & \rho^3 & \rho^4 & \rho^5 \\
\rho^2 & \rho & 1 & \rho & \rho^2 & \rho^3 & \rho^4 \\
\rho^3 & \rho^2 & \rho & 1 & \rho & \rho^2 & \rho^3 \\
\rho^4 & \rho^3 & \rho^2 & \rho & 1 & \rho & \rho^2 \\
\rho^5 & \rho^4 & \rho^3 & \rho^2 & \rho & 1 & \rho \\
\rho^6 & \rho^5 & \rho^4 & \rho^3 & \rho^2 & \rho & 1
\end{bmatrix},$$

where $\sigma^2 \in (0, \infty)$ and $\rho \in (-1, 1)$ are unknown parameters.
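Since the $(k, l)$ entry is $\sigma^2 \rho^{|k-l|}$, the whole matrix can be generated in one line; a sketch with hypothetical parameter values:

```r
## Sketch: construct the 7x7 AR(1) covariance matrix W = sigma^2 * rho^|k - l|
## for hypothetical parameter values.
sigma2 <- 4
rho    <- 0.6
W <- sigma2 * rho^abs(outer(1:7, 1:7, "-"))
round(W, 2)
```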
