REPEATED MEASURES

Statistics 511. Copyright © 2012 (Iowa State University).

Repeated Measures Example

In an exercise therapy study, subjects were assigned to one of three weightlifting programs:

i = 1: The number of repetitions of weightlifting was increased as subjects became stronger (RI)
i = 2: The amount of weight was increased as subjects became stronger (WI)
i = 3: Subjects did not participate in weightlifting (CONT)

Measurements of strength (y) were taken on days 2, 4, 6, 8, 10, 12, and 14 for each subject.

Source: Littell, Freund, and Spector (1991), SAS System for Linear Models.

R code: RepeatedMeasures.R

A Linear Mixed-Effects Model

$$y_{ijk} = \mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk}$$

$y_{ijk}$: strength measurement for program $i$, subject $j$, time point $k$
$\alpha_i$: fixed program effect
$s_{ij}$: random subject effect
$\tau_k$: fixed time effect
$\gamma_{ik}$: fixed program $\times$ time interaction effect
$e_{ijk}$: random error

Initially, we will assume $s_{ij} \stackrel{iid}{\sim} N(0, \sigma_s^2)$, independent of $e_{ijk} \stackrel{iid}{\sim} N(0, \sigma_e^2)$.
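As a concrete illustration of fitting this model in R, here is a minimal sketch using lme4. It is not the course's RepeatedMeasures.R script; the data frame name `strength` and the variables `y`, `program`, `time`, and `subject` are assumptions.

```r
## Minimal sketch of fitting y_ijk = mu + alpha_i + tau_k + gamma_ik + s_ij + e_ijk.
## Assumes a data frame `strength` with numeric y and factors program, time,
## and subject (subject labels unique across programs).
library(lme4)

fit <- lmer(y ~ program * time + (1 | subject), data = strength)

summary(fit)   # "subject" variance = sigma_s^2, "Residual" variance = sigma_e^2
VarCorr(fit)   # the same variance components, also as standard deviations
```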

Average strength after $2k$ days on the $i$th program is
$$\mu_{ik} = E(y_{ijk}) = E(\mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk}) = \mu + \alpha_i + E(s_{ij}) + \tau_k + \gamma_{ik} + E(e_{ijk}) = \mu + \alpha_i + \tau_k + \gamma_{ik}$$
for $i = 1, 2, 3$ and $k = 1, 2, \ldots, 7$.

The variance of any single observation is
$$\mathrm{Var}(y_{ijk}) = \mathrm{Var}(\mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk}) = \mathrm{Var}(s_{ij} + e_{ijk}) = \mathrm{Var}(s_{ij}) + \mathrm{Var}(e_{ijk}) = \sigma_s^2 + \sigma_e^2.$$

The covariance between any two different observations from the same subject is
$$\begin{aligned}
\mathrm{Cov}(y_{ijk}, y_{ijl}) &= \mathrm{Cov}(\mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk},\; \mu + \alpha_i + s_{ij} + \tau_l + \gamma_{il} + e_{ijl}) \\
&= \mathrm{Cov}(s_{ij} + e_{ijk},\; s_{ij} + e_{ijl}) \\
&= \mathrm{Cov}(s_{ij}, s_{ij}) + \mathrm{Cov}(s_{ij}, e_{ijl}) + \mathrm{Cov}(e_{ijk}, s_{ij}) + \mathrm{Cov}(e_{ijk}, e_{ijl}) \\
&= \mathrm{Var}(s_{ij}) = \sigma_s^2.
\end{aligned}$$

The correlation between $y_{ijk}$ and $y_{ijl}$ is
$$\frac{\sigma_s^2}{\sigma_s^2 + \sigma_e^2} \equiv \rho.$$
Observations taken on different subjects are uncorrelated.

For the set of observations taken on a single subject, we have
$$\mathrm{Var}\begin{bmatrix} y_{ij1} \\ y_{ij2} \\ \vdots \\ y_{ij7} \end{bmatrix}
= \begin{bmatrix}
\sigma_e^2 + \sigma_s^2 & \sigma_s^2 & \cdots & \sigma_s^2 \\
\sigma_s^2 & \sigma_e^2 + \sigma_s^2 & \cdots & \sigma_s^2 \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_s^2 & \sigma_s^2 & \cdots & \sigma_e^2 + \sigma_s^2
\end{bmatrix}
= \sigma_e^2 I_{7 \times 7} + \sigma_s^2 \mathbf{1}\mathbf{1}'_{7 \times 7}.$$
This is known as a compound symmetric covariance structure.
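To make the structure concrete, here is a small sketch that builds this 7 x 7 compound symmetric matrix from made-up variance components (the values are illustrative, not estimates from the exercise therapy data).

```r
## Compound symmetric covariance matrix sigma_e^2 I_7 + sigma_s^2 J_7.
sigma2_s <- 9   # hypothetical subject variance
sigma2_e <- 4   # hypothetical error variance

V <- sigma2_e * diag(7) + sigma2_s * matrix(1, 7, 7)

V[1, 1]            # variance of a single observation: sigma_s^2 + sigma_e^2 = 13
V[1, 2]            # covariance of two times on the same subject: sigma_s^2 = 9
V[1, 2] / V[1, 1]  # within-subject correlation rho = 9/13
```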

Using $n_i$ to denote the number of subjects in the $i$th program, we can write this model in the form
$$y = X\beta + Zu + e$$
as follows.

$$
\underbrace{\begin{bmatrix}
y_{111}\\ y_{112}\\ \vdots\\ y_{117}\\ y_{121}\\ y_{122}\\ \vdots\\ y_{127}\\ \vdots\\ y_{3n_3 1}\\ y_{3n_3 2}\\ \vdots\\ y_{3n_3 7}
\end{bmatrix}}_{y}
=
\underbrace{\begin{bmatrix}
1 & 1 & 0 & 0 & 1 & 0 & \cdots & 0 & \cdots\\
1 & 1 & 0 & 0 & 0 & 1 & \cdots & 0 & \cdots\\
\vdots & & & & & & & & \\
1 & 1 & 0 & 0 & 0 & 0 & \cdots & 1 & \cdots\\
\vdots & & & & & & & & \\
1 & 0 & 1 & 0 & 1 & 0 & \cdots & 0 & \cdots\\
\vdots & & & & & & & & \\
1 & 0 & 0 & 1 & 0 & 0 & \cdots & 1 & \cdots
\end{bmatrix}}_{X\ \text{(intercept, 3 program columns, 7 time columns, 21 columns for the interaction)}}
\underbrace{\begin{bmatrix}
\mu\\ \alpha_1\\ \alpha_2\\ \alpha_3\\ \tau_1\\ \tau_2\\ \vdots\\ \tau_7\\ \gamma_{11}\\ \gamma_{12}\\ \vdots\\ \gamma_{37}
\end{bmatrix}}_{\beta}
+
\underbrace{\begin{bmatrix}
\mathbf{1}_7 & \mathbf{0} & \cdots & \mathbf{0}\\
\mathbf{0} & \mathbf{1}_7 & \cdots & \mathbf{0}\\
\vdots & \vdots & \ddots & \vdots\\
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{1}_7
\end{bmatrix}}_{Z}
\underbrace{\begin{bmatrix}
s_{11}\\ s_{12}\\ \vdots\\ s_{3 n_3}
\end{bmatrix}}_{u}
+
\underbrace{\begin{bmatrix}
e_{111}\\ e_{112}\\ \vdots\\ e_{117}\\ e_{121}\\ e_{122}\\ \vdots\\ e_{127}\\ \vdots\\ e_{3 n_3 1}\\ e_{3 n_3 2}\\ \vdots\\ e_{3 n_3 7}
\end{bmatrix}}_{e}
$$

In this case, $G = \mathrm{Var}(u) = \sigma_s^2 I_{m \times m}$ and $R = \mathrm{Var}(e) = \sigma_e^2 I_{(7m) \times (7m)}$, where $m = n_1 + n_2 + n_3$ is the total number of subjects.

$\Sigma = \mathrm{Var}(y) = ZGZ' + R$ is a block diagonal matrix with one block of the form $\sigma_s^2 \mathbf{1}\mathbf{1}'_{7 \times 7} + \sigma_e^2 I_{7 \times 7}$ for each subject.
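A short sketch of how $\Sigma = ZGZ' + R$ comes out block diagonal; $m = 3$ subjects and the same made-up variance components are used purely for illustration.

```r
## Sigma = Z G Z' + R for m subjects, each measured at 7 time points.
m <- 3                                    # illustrative; m = n1 + n2 + n3 in the slides
sigma2_s <- 9; sigma2_e <- 4              # hypothetical variance components

Z <- kronecker(diag(m), matrix(1, 7, 1))  # (7m) x m subject indicator matrix
G <- sigma2_s * diag(m)                   # Var(u)
R <- sigma2_e * diag(7 * m)               # Var(e)

Sigma <- Z %*% G %*% t(Z) + R             # block diagonal
Sigma[1:7, 1:7]                           # one block: sigma_s^2 J_7 + sigma_e^2 I_7
Sigma[1:7, 8:14]                          # different subjects: all zeros
```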

LSMEAN for Program i and Time k

$$\bar{y}_{i \cdot k} = \frac{1}{n_i}\sum_{j=1}^{n_i} y_{ijk} = \mu + \alpha_i + \bar{s}_{i\cdot} + \tau_k + \gamma_{ik} + \bar{e}_{i \cdot k}$$

$$\mathrm{Var}(\bar{y}_{i \cdot k}) = \frac{\sigma_s^2 + \sigma_e^2}{n_i}, \qquad
\mathrm{se} = \sqrt{\left(\tfrac{6}{7}\,\mathrm{MS}_{\text{error}} + \tfrac{1}{7}\,\mathrm{MS}_{\text{subj(prog)}}\right)\frac{1}{n_i}}$$

Cochran-Satterthwaite degrees of freedom = 65.8
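The standard error and its Cochran-Satterthwaite degrees of freedom are functions of the two mean squares alone. The sketch below shows the arithmetic with hypothetical mean squares, using the error and subject(program) degrees of freedom (324 and 54) given on later slides.

```r
## Cochran-Satterthwaite SE and df for the LSMEAN of program i at time k.
## The mean squares are hypothetical; df_error = 324 and df_subj = 54 as in the slides.
ms_error <- 1.5; df_error <- 324
ms_subj  <- 20;  df_subj  <- 54
n_i <- 16                                 # subjects in program i

se <- sqrt((6/7 * ms_error + 1/7 * ms_subj) / n_i)

## Satterthwaite df = (sum_j a_j MS_j)^2 / sum_j (a_j MS_j)^2 / df_j
u1 <- (6/7) * ms_error / n_i
u2 <- (1/7) * ms_subj  / n_i
df_cs <- (u1 + u2)^2 / (u1^2 / df_error + u2^2 / df_subj)

c(se = se, df = df_cs)
```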

LSMEAN for Program i

$$\bar{y}_{i \cdot \cdot} = \frac{1}{7}\sum_{k=1}^{7} \bar{y}_{i \cdot k} = \mu + \alpha_i + \bar{s}_{i\cdot} + \bar{\tau}_{\cdot} + \bar{\gamma}_{i\cdot} + \bar{e}_{i \cdot \cdot}$$

$$\mathrm{Var}(\bar{y}_{i \cdot \cdot}) = \frac{7\sigma_s^2 + \sigma_e^2}{7 n_i}, \qquad
\mathrm{se} = \sqrt{\frac{\mathrm{MS}_{\text{subj(prog)}}}{7 n_i}}$$

Degrees of freedom = 54 because there are $n_1 = 16$, $n_2 = 21$, and $n_3 = 20$ subjects in the three programs ($57 - 3 = 54$).

LSMEAN for Time k

$$\frac{1}{3}\sum_{i=1}^{3} \bar{y}_{i \cdot k} = \mu + \bar{\alpha}_{\cdot} + \frac{1}{3}\sum_{i=1}^{3} \bar{s}_{i\cdot} + \tau_k + \bar{\gamma}_{\cdot k} + \frac{1}{3}\sum_{i=1}^{3} \bar{e}_{i \cdot k}$$

$$\mathrm{Var}\!\left(\frac{1}{3}\sum_{i=1}^{3} \bar{y}_{i \cdot k}\right) = \frac{1}{9}\sum_{i=1}^{3}\frac{\sigma_e^2 + \sigma_s^2}{n_i}, \qquad
\mathrm{se} = \frac{1}{3}\sqrt{\left(\tfrac{6}{7}\,\mathrm{MS}_{\text{error}} + \tfrac{1}{7}\,\mathrm{MS}_{\text{subj(prog)}}\right)\sum_{i=1}^{3}\frac{1}{n_i}}$$

Cochran-Satterthwaite degrees of freedom = 65.8

Difference between Strength Means at Times k and l (Averaging across Programs)

$$\frac{1}{3}\sum_{i=1}^{3}(\bar{y}_{i \cdot k} - \bar{y}_{i \cdot l}) = \frac{1}{3}\sum_{i=1}^{3}\frac{1}{n_i}\sum_{j=1}^{n_i}(y_{ijk} - y_{ijl}) = \tau_k - \tau_l + \bar{\gamma}_{\cdot k} - \bar{\gamma}_{\cdot l} + \frac{1}{3}\sum_{i=1}^{3}\frac{1}{n_i}\sum_{j=1}^{n_i}(e_{ijk} - e_{ijl})$$

Note that the subject effects cancel out.

The variance of the estimator is
$$\frac{2\sigma_e^2}{9}\sum_{i=1}^{3}\frac{1}{n_i}.$$
Thus, the standard error of the estimator is
$$\sqrt{\frac{2\,\mathrm{MS}_{\text{error}}}{9}\sum_{i=1}^{3}\frac{1}{n_i}}.$$
Degrees of freedom = $7(16 + 21 + 20) - (1 + 2 + 54 + 6 + 12) = 324$.
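Because the subject effects cancel, only MS_error enters this standard error. A minimal sketch of the arithmetic, with a hypothetical value of MS_error:

```r
## SE and df for the time-k vs time-l difference averaged over programs.
ms_error <- 1.5                 # hypothetical
n <- c(16, 21, 20)              # subjects per program

se <- sqrt(2 * ms_error / 9 * sum(1 / n))

df_error <- 7 * sum(n) - (1 + 2 + 54 + 6 + 12)   # 399 - 75 = 324
c(se = se, df = df_error)
```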

Difference between Strength Means for Programs i and l at Time k

$$\bar{y}_{i \cdot k} - \bar{y}_{l \cdot k} = \alpha_i - \alpha_l + \gamma_{ik} - \gamma_{lk} + \bar{s}_{i\cdot} - \bar{s}_{l\cdot} + \bar{e}_{i \cdot k} - \bar{e}_{l \cdot k}$$

$$\mathrm{Var}(\bar{y}_{i \cdot k} - \bar{y}_{l \cdot k}) = (\sigma_e^2 + \sigma_s^2)\left(\frac{1}{n_i} + \frac{1}{n_l}\right), \qquad
\mathrm{se} = \sqrt{\left(\tfrac{6}{7}\,\mathrm{MS}_{\text{error}} + \tfrac{1}{7}\,\mathrm{MS}_{\text{subj(prog)}}\right)\left(\frac{1}{n_i} + \frac{1}{n_l}\right)}$$

Cochran-Satterthwaite degrees of freedom = 65.8

Difference between Strength Means at Times k and l within Program i

$$\bar{y}_{i \cdot k} - \bar{y}_{i \cdot l} = \tau_k - \tau_l + \gamma_{ik} - \gamma_{il} + \bar{e}_{i \cdot k} - \bar{e}_{i \cdot l}$$

$$\mathrm{Var}(\bar{y}_{i \cdot k} - \bar{y}_{i \cdot l}) = \mathrm{Var}\!\left(\frac{1}{n_i}\sum_{j=1}^{n_i}(y_{ijk} - y_{ijl})\right) = \frac{2\sigma_e^2}{n_i}, \qquad
\mathrm{se} = \sqrt{\frac{2\,\mathrm{MS}_{\text{error}}}{n_i}}$$

Degrees of freedom = 324

Difference in Strength Means between Programs i and l (Averaging across Time)

$$\bar{y}_{i \cdot \cdot} - \bar{y}_{l \cdot \cdot} = \alpha_i - \alpha_l + \bar{\gamma}_{i\cdot} - \bar{\gamma}_{l\cdot} + \bar{s}_{i\cdot} - \bar{s}_{l\cdot} + \bar{e}_{i \cdot \cdot} - \bar{e}_{l \cdot \cdot}$$

$$\mathrm{Var}(\bar{y}_{i \cdot \cdot} - \bar{y}_{l \cdot \cdot}) = (7\sigma_s^2 + \sigma_e^2)\left(\frac{1}{7 n_i} + \frac{1}{7 n_l}\right), \qquad
\mathrm{se} = \sqrt{\mathrm{MS}_{\text{subj(prog)}}\left(\frac{1}{7 n_i} + \frac{1}{7 n_l}\right)}$$

Degrees of freedom = 54
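In contrast with the within-subject comparisons above, this between-program comparison involves only MS_subj(prog). A sketch with a hypothetical mean square:

```r
## SE and df for the program i vs program l difference averaged over time.
ms_subj <- 20                   # hypothetical
n_i <- 16; n_l <- 21

se <- sqrt(ms_subj * (1 / (7 * n_i) + 1 / (7 * n_l)))

df_subj <- (16 + 21 + 20) - 3   # 57 subjects minus 3 program degrees of freedom
c(se = se, df = df_subj)
```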

Other Variance Matrices

We began with the model
$$y_{ijk} = \mu + \alpha_i + s_{ij} + \tau_k + \gamma_{ik} + e_{ijk},$$
where $s_{ij} \stackrel{iid}{\sim} N(0, \sigma_s^2)$ independent of $e_{ijk} \stackrel{iid}{\sim} N(0, \sigma_e^2)$.

This model was expressed in the form
$$y = X\beta + Zu + e,$$
where $u = (s_{11}, s_{12}, \ldots, s_{3 n_3})'$ contained the random subject effects.

Here $G = \mathrm{Var}(u) = \sigma_s^2 I$, $R = \mathrm{Var}(e) = \sigma_e^2 I$, and
$$\Sigma = \mathrm{Var}(y) = ZGZ' + R = \begin{bmatrix}
\sigma_s^2 \mathbf{1}\mathbf{1}' + \sigma_e^2 I & & \\
& \ddots & \\
& & \sigma_s^2 \mathbf{1}\mathbf{1}' + \sigma_e^2 I
\end{bmatrix},$$
with one $7 \times 7$ block for each subject.

If you are not interested in predicting subject effects (the random subject effects are included only to introduce correlation among repeated measures on the same subject), you can work with an alternative expression of the same model by using the general linear model $y = X\beta + \varepsilon$, where
$$\mathrm{Var}(y) = \mathrm{Var}(\varepsilon) = \begin{bmatrix}
\sigma_e^2 I + \sigma_s^2 \mathbf{1}\mathbf{1}' & & \\
& \ddots & \\
& & \sigma_e^2 I + \sigma_s^2 \mathbf{1}\mathbf{1}'
\end{bmatrix}.$$

More generally, we can replace the mixed model $y = X\beta + Zu + e$ with the model $y = X\beta + \varepsilon$, where
$$\mathrm{Var}(y) = \mathrm{Var}(\varepsilon) = \begin{bmatrix}
W & 0 & \cdots & 0 \\
0 & W & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & W
\end{bmatrix}.$$

We can choose a structure for $W$ that seems appropriate based on the design and the data.

One choice for $W$ is a compound symmetric matrix like the one we considered previously.

Another choice for $W$ is an unstructured positive definite matrix.

A common choice for $W$ when repeated measures are equally spaced in time is the first-order autoregressive structure, known as AR(1).
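One way to fit the marginal model $y = X\beta + \varepsilon$ under these choices of $W$ is nlme::gls. The sketch below uses the same assumed data frame and variable names as the earlier lme4 sketch; it is not the course's code, and it assumes each subject's rows are ordered by time.

```r
## Marginal-model fits with three choices of the within-subject covariance W.
library(nlme)

## Compound symmetry (matches the random subject intercept model)
fit_cs <- gls(y ~ program * time, data = strength,
              correlation = corCompSymm(form = ~ 1 | subject))

## Unstructured: general correlation plus a separate variance for each time
fit_un <- gls(y ~ program * time, data = strength,
              correlation = corSymm(form = ~ 1 | subject),
              weights = varIdent(form = ~ 1 | time))

## First-order autoregressive in time, AR(1)
fit_ar1 <- gls(y ~ program * time, data = strength,
               correlation = corAR1(form = ~ as.numeric(time) | subject))

AIC(fit_cs, fit_un, fit_ar1)   # one way to compare the candidate structures
```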

AR(1): First-Order Autoregressive Covariance Structure

$$W = \sigma^2 \begin{bmatrix}
1 & \rho & \rho^2 & \rho^3 & \rho^4 & \rho^5 & \rho^6 \\
\rho & 1 & \rho & \rho^2 & \rho^3 & \rho^4 & \rho^5 \\
\rho^2 & \rho & 1 & \rho & \rho^2 & \rho^3 & \rho^4 \\
\rho^3 & \rho^2 & \rho & 1 & \rho & \rho^2 & \rho^3 \\
\rho^4 & \rho^3 & \rho^2 & \rho & 1 & \rho & \rho^2 \\
\rho^5 & \rho^4 & \rho^3 & \rho^2 & \rho & 1 & \rho \\
\rho^6 & \rho^5 & \rho^4 & \rho^3 & \rho^2 & \rho & 1
\end{bmatrix},$$

where $\sigma^2 \in (0, \infty)$ and $\rho \in (-1, 1)$ are unknown parameters.
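Since $W$ depends only on $\sigma^2$ and $\rho$, the whole $7 \times 7$ matrix can be built from the exponent $|k - l|$; a one-line construction with illustrative parameter values:

```r
## AR(1) covariance matrix: W[k, l] = sigma^2 * rho^|k - l|.
sigma2 <- 10    # hypothetical
rho    <- 0.6   # hypothetical

W <- sigma2 * rho^abs(outer(1:7, 1:7, "-"))
round(W, 2)
```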