Review of Multilevel Models for Longitudinal Data


Review of Multilevel Models for Longitudinal Data

Topics:
- Concepts in longitudinal multilevel modeling
- Describing within-person fluctuation using ACS models
- Describing within-person change using random effects
- Likelihood estimation in random effects models
- Describing nonlinear patterns of change
- Time-invariant predictors

PSYC 945: Lecture 1

What is a Multilevel Model (MLM)?

Same as other terms you have heard of:
- General Linear Mixed Model (if you are from statistics); "Mixed" = Fixed and Random effects
- Random Coefficients Model (also if you are from statistics); Random coefficients = Random effects
- Hierarchical Linear Model (if you are from education); NOT the same as hierarchical regression

Special cases of MLM:
- Random Effects ANOVA or Repeated Measures ANOVA
- (Latent) Growth Curve Model (where "Latent" implies SEM)
- Within-Person Fluctuation Model (e.g., for daily diary data)
- Clustered/Nested Observations Model (e.g., for kids in schools)
- Cross-Classified Models (e.g., value-added models)

The Two Sides of Any Model

Model for the Means (aka Fixed Effects, the Structural part of the model):
- What you are used to caring about for testing hypotheses
- How the expected outcome for a given observation varies as a function of values on predictor variables

Model for the Variance (aka Random Effects and Residuals, the Stochastic part of the model):
- What you are used to making assumptions about instead
- How residuals are distributed and related across observations (persons, groups, time, etc.); these relationships are called "dependency," and this is the primary way that multilevel models differ from general linear models (e.g., regression)

Review: Variances and Covariances

Variance: dispersion of y
  Variance(y_t) = Σ_{i=1}^{N} (y_ti − ŷ_ti)² / (N − k)

Covariance: how y's go together, unstandardized
  Covariance(y_1, y_2) = Σ_{i=1}^{N} (y_1i − ŷ_1i)(y_2i − ŷ_2i) / (N − k)

Correlation: how y's go together, standardized (−1 to 1)
  Correlation(y_1, y_2) = Covariance(y_1, y_2) / [ √Variance(y_1) × √Variance(y_2) ]

where N = # people, t = time, i = person, k = # fixed effects, and ŷ_ti = y predicted from the fixed effects.
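As a quick numeric sketch of these three formulas (hypothetical data, and Python/NumPy rather than the SAS used later in the lecture), with the grand mean as the only fixed effect so k = 1:

```python
import numpy as np

# Hypothetical scores for N = 5 people measured at two occasions
y1 = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
y2 = np.array([9.0, 11.0, 11.0, 15.0, 14.0])

N, k = len(y1), 1  # k = 1 fixed effect (the grand mean) in an empty model

# Deviations from each occasion's predicted value (here, its own mean)
d1, d2 = y1 - y1.mean(), y2 - y2.mean()

var1 = np.sum(d1**2) / (N - k)         # Variance(y1)
var2 = np.sum(d2**2) / (N - k)         # Variance(y2)
cov12 = np.sum(d1 * d2) / (N - k)      # Covariance(y1, y2)
corr12 = cov12 / np.sqrt(var1 * var2)  # Correlation(y1, y2)

print(var1, var2, cov12)   # 10.0 6.0 7.0
print(round(corr12, 3))    # 0.904
```

Note that the (N − k) denominators cancel in the correlation, so it matches what np.corrcoef would report.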

Dimensions for Organizing Models

Outcome type: General (normal) vs. Generalized (not normal)
Dimensions of sampling: One (so one variance term per outcome) vs. Multiple (so multiple variance terms per outcome)

- General Linear Models: conditionally normal outcome distribution, fixed effects (identity link; only one dimension of sampling); note: Least Squares is only for GLM
- Generalized Linear Models: any conditional outcome distribution, fixed effects through link functions, no random effects (one dimension)
- General Linear Mixed Models (OUR WORLD): conditionally normal outcome distribution, fixed and random effects (identity link, but multiple sampling dimensions)
- Generalized Linear Mixed Models: any conditional outcome distribution, fixed and random effects through link functions (multiple dimensions)

"Linear" means the fixed effects predict the link-transformed DV in a linear combination of (effect * predictor) + (effect * predictor)...

What can MLM do for you?

1. Model dependency across observations: longitudinal, clustered, and/or cross-classified data? No problem! Tailor your model of sources of correlation to your data.
2. Include categorical or continuous predictors at any level: time-varying, person-level, and group-level predictors for each variance. Explore reasons for dependency; don't just control for dependency.
3. Does not require the same data structure for each person: unbalanced or missing data? No problem!
4. You already know how (or you will soon)! Use SPSS Mixed, SAS Mixed, Stata, Mplus, R, HLM, MLwiN... What's an intercept? What's a slope? What's a pile of variance?

Levels of Analysis in Longitudinal Data

Between-Person (BP) Variation:
- Level-2; INTER-individual differences; time-invariant
- All longitudinal studies begin as cross-sectional studies

Within-Person (WP) Variation:
- Level-1; INTRA-individual differences; time-varying
- Only longitudinal studies can provide this extra information

Longitudinal studies allow examination of both types of relationships simultaneously (and their interactions). Any variable measured over time usually has both BP and WP variation: BP = more/less than other people; WP = more/less than one's own average. I use "person" here, but level-2 can be anything that is measured repeatedly (like animals, schools, countries...).

A Longitudinal Data Continuum

Within-Person Change: systematic change
- Magnitude or direction of change can be different across individuals
- Growth curve models; time is meaningfully sampled

Within-Person Fluctuation: no systematic change
- Outcome just varies/fluctuates over time (e.g., emotion, stress)
- Time is just a way to get lots of data per individual

[Figure: two panels of outcome plotted against time, illustrating pure WP change vs. pure WP fluctuation]

The Two Sides of a (BP) Model

Model for the Means (Predicted Values):
- Each person's expected (predicted) outcome is a weighted linear function of his/her values on X and Z (and here, their interaction), each measured once per person (i.e., this is a between-person model)
- Estimated parameters are called fixed effects (here, β0, β1, β2, and β3)

Model for the Variance ("Piles" of Variance):
- e_i ∼ N(0, σ²_e) → ONE residual (unexplained) deviation; our focus today
- e_i has a mean of 0 with some estimated constant variance σ²_e, is normally distributed, is unrelated to X and Z, and is unrelated across people (across all observations, which are just persons here)
- The residual variance is the only estimated parameter in the above BP model

Empty +Within-Person Model

Start off with the mean of Y as the best guess for any value: = Grand Mean = Fixed Intercept

Can make a better guess by taking advantage of repeated observations: = Person Mean → Random Intercept

[Figure: individual outcome values plotted over time around the grand mean]

Empty +Within-Person Model

Variance of Y has two sources:

Between-Person (BP) Variance: differences from the GRAND mean; INTER-individual differences

Within-Person (WP) Variance: differences from one's OWN mean; INTRA-individual differences. This part is only observable through longitudinal data.

Now we have two "piles" of variance in Y to predict.

[Figure: trajectories showing BP deviations from the grand mean and WP deviations from each person's own mean]

Hypothetical Longitudinal Data

[Figure: example longitudinal data for several persons across multiple occasions]

Error in a BP Model for the Variance: Single-Level Model

e_ti represents ALL Y variance: e_1i, e_2i, e_3i, e_4i, e_5i

[Figure: one person's data with every deviation taken from the grand mean]

Error in a +WP Model for the Variance: Multilevel Model

U_0i = random intercept that represents BP variance in mean Y
e_ti = residual that represents WP variance in Y: e_1i, e_2i, e_3i, e_4i, e_5i

U_0i also represents constant dependency (covariance) due to mean differences in Y across persons.

[Figure: one person's data with U_0i as the deviation of the person's mean from the grand mean, and e_ti deviations from the person's own mean]

Empty +Within-Person Model

Variance of Y has two sources:

Level-2 Random Intercept Variance (of U_0i, as τ²U0):
- Between-Person Variance; differences from the GRAND mean; INTER-individual differences

Level-1 Residual Variance (of e_ti, as σ²e):
- Within-Person Variance; differences from one's OWN mean; INTRA-individual differences

BP vs. +WP Empty Models

Empty Between-Person Model (used for 1 occasion):
  y_i = β0 + e_i
- β0 = fixed intercept = grand mean
- e_i = residual deviation from the GRAND mean

Empty +Within-Person Model (for >1 occasions):
  y_ti = β0 + U_0i + e_ti
- β0 = fixed intercept = grand mean
- U_0i = random intercept = individual deviation from the GRAND mean
- e_ti = time-specific residual deviation from one's OWN mean
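A small simulation can make the +WP empty model concrete. This is a hypothetical sketch in Python/NumPy (the parameter values, sample sizes, and seed are all made up for illustration): generate y_ti = β0 + U_0i + e_ti and check that occasions from the same person stay correlated because they share U_0i.

```python
import numpy as np

# Simulate the empty +within-person model: y_ti = beta0 + U_0i + e_ti
# (hypothetical values: fixed intercept 100, BP SD 8, WP SD 4)
rng = np.random.default_rng(945)
beta0, tau_u0, sigma_e = 100.0, 8.0, 4.0
n_persons, n_times = 500, 4

U0 = rng.normal(0, tau_u0, size=(n_persons, 1))        # one deviation per PERSON
e = rng.normal(0, sigma_e, size=(n_persons, n_times))  # one deviation per occasion
y = beta0 + U0 + e

# Grand mean recovers beta0; occasions within a person are correlated
print(round(y.mean(), 1))                             # near 100
print(round(np.corrcoef(y[:, 0], y[:, 1])[0, 1], 2))  # near 64/(64+16) = .80
```

The observed time-1/time-2 correlation approximates τ²U0 / (τ²U0 + σ²e), previewing the ICC on the next slide.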

Intraclass Correlation (ICC)

ICC = Corr(y_1, y_2) = Cov(y_1, y_2) / √[Var(y_1) × Var(y_2)]
    = τ²U0 / (τ²U0 + σ²e)
    = BP / (BP + WP)
    = Intercept Variance / (Intercept Variance + Residual Variance)

The total R matrix has τ²U0 + σ²e on the diagonal and τ²U0 off the diagonal; standardized, this is the RCORR matrix:

  [ 1    ICC  ICC ]
  [ ICC  1    ICC ]
  [ ICC  ICC  1   ]

- ICC = proportion of total variance that is between persons
- ICC = average correlation among occasions (in RCORR)
- ICC is a standardized way of expressing how much we need to worry about dependency due to person mean differences (i.e., ICC is an effect size for constant person dependency)
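The ICC arithmetic and the predicted RCORR matrix it implies can be sketched directly (hypothetical variance estimates chosen for round numbers):

```python
import numpy as np

tau2_u0, sigma2_e = 64.0, 16.0        # hypothetical BP and WP variance estimates
icc = tau2_u0 / (tau2_u0 + sigma2_e)  # proportion BP = predicted correlation

n = 4  # occasions
# Predicted total correlation matrix (RCORR): 1's on diagonal, ICC elsewhere
rcorr = np.full((n, n), icc)
np.fill_diagonal(rcorr, 1.0)

print(icc)          # 0.8 -> 80% of total variance is between persons
print(rcorr[0, 1])  # 0.8 -> predicted correlation between any two occasions
```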

Counter-Intuitive: Between-Person Variance is in the numerator, but the ICC is the correlation over time!

ICC = BP / (BP + WP)
- Large ICC → large correlation over time
- Small ICC → small correlation over time

BP and +WP Conditional Models

Multiple Regression, Between-Person ANOVA: 1 PILE
  y_i = (β0 + β1*X_i + β2*Z_i) + e_i
- e_i → ONE residual, assumed uncorrelated with equal variance across observations (here, just persons) → BP (all) variation

Repeated Measures, Within-Person ANOVA: 2 PILES
  y_ti = (β0 + β1*X_i + β2*Z_i) + U_0i + e_ti
- U_0i → a random intercept for differences in person means, assumed uncorrelated with equal variance across persons → BP (mean) variation, which is now leftover after the predictors
- e_ti → a residual that represents remaining time-to-time variation, usually assumed uncorrelated with equal variance across observations (now, persons and time) → WP variation, which is also now leftover after the predictors

ANOVA for longitudinal data?

There are 3 possible kinds of ANOVAs we could use: Between-Persons/Groups, Univariate RM, and Multivariate RM.

NONE OF THEM ALLOW:
- Missing occasions (they do listwise deletion due to least squares)
- Time-varying predictors (covariates are BP predictors only)

Each includes the same model for the means for time: all possible mean differences (so 4 parameters to get to 4 means), the "saturated means model": β0 + β1(T1) + β2(T2) + β3(T3). The Time variable must be balanced and discrete in ANOVA!

These ANOVAs differ by what they predict for the correlation across outcomes from the same person in the model for the variance; that is, how they handle dependency due to persons, or what they say the variance and covariance of the y_ti residuals should look like.

1. Between-Groups ANOVA

Uses e_ti only (total variance = a single variance term of σ²e). Assumes no covariance at all among observations from the same person: Dependency? What dependency?
- Will usually be very, very wrong for longitudinal data → WP effects tested against the wrong residual variance (significance tests will often be way too conservative)
- Will also tend to be wrong for clustered data, but less so (because the correlation among persons from the same group is not as strong as the correlation among occasions from the same person)

Predicts a variance-covariance matrix over time (here, 4 occasions) like this, called Variance Components (R matrix is TYPE=VC on REPEATED):

  [ σ²e  0    0    0   ]
  [ 0    σ²e  0    0   ]
  [ 0    0    σ²e  0   ]
  [ 0    0    0    σ²e ]

2a. Univariate Repeated Measures ANOVA

Separates total variance into two sources:
- Between-Person (mean differences due to U_0i, or τ²U0)
- Within-Person (remaining variance due to e_ti, or σ²e)

Predicts a variance-covariance matrix over time (here, 4 occasions) like this, called Compound Symmetry (R matrix is TYPE=CS on REPEATED):

  [ τ²U0+σ²e  τ²U0      τ²U0      τ²U0     ]
  [ τ²U0      τ²U0+σ²e  τ²U0      τ²U0     ]
  [ τ²U0      τ²U0      τ²U0+σ²e  τ²U0     ]
  [ τ²U0      τ²U0      τ²U0      τ²U0+σ²e ]

Mean differences from U_0i are the only reason why occasions are correlated. This will usually be at least somewhat wrong for longitudinal data: if people change at different rates, the variances and covariances over time have to change, too.
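The two predicted matrices so far (VC from Between-Groups ANOVA, CS from Univariate RM ANOVA) can be built explicitly; this is a hypothetical-numbers sketch in Python/NumPy, not the SAS the deck uses:

```python
import numpy as np

n = 4
sigma2_e, tau2_u0 = 16.0, 64.0  # hypothetical WP and BP variances

# 1. Between-Groups ANOVA: Variance Components (TYPE=VC) -> diagonal only
vc = sigma2_e * np.eye(n)

# 2a. Univariate RM ANOVA: Compound Symmetry (TYPE=CS)
#     diagonal = tau2_u0 + sigma2_e, every off-diagonal = tau2_u0
cs = np.full((n, n), tau2_u0)
np.fill_diagonal(cs, tau2_u0 + sigma2_e)

print(vc[0, 0], vc[0, 1])  # 16.0 0.0  (no dependency at all)
print(cs[0, 0], cs[0, 3])  # 80.0 64.0 (constant dependency at every lag)
```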

The Problem with Univariate RM ANOVA

Univariate RM ANOVA (τ²U0 + σ²e) predicts compound symmetry: all variances and all covariances are equal across occasions.
- In other words, the amount of "error" observed should be the same at any occasion, so a single, pooled error variance term makes sense
- If not, tests of fixed effects may be biased (i.e., sometimes tested against too much or too little error, if error is not really constant over time)
- COMPOUND SYMMETRY RARELY FITS FOR LONGITUDINAL DATA

But to get the correct tests of the fixed effects, the data must only meet a less restrictive assumption of sphericity. In English: pairwise differences between adjacent occasions have equal variance and covariance (satisfied by default with only 2 occasions).
- If compound symmetry is satisfied, so is sphericity (but see above)
- A significance test is provided in ANOVA for whether the data meet the sphericity assumption
- Other RM ANOVA approaches are used when sphericity fails

The Other Repeated Measures ANOVAs

2b. Univariate RM ANOVA with sphericity corrections
- Based on ε̂ = how far off from sphericity (from 0 to 1, where 1 = spherical)
- Applies an overall correction for model df based on the estimated ε̂, but it doesn't really address the problem that the data ≠ the model

3. Multivariate Repeated Measures ANOVA
- All variances and covariances are estimated separately over time (here, 4 occasions), called Unstructured (R matrix is TYPE=UN on REPEATED); it's not a model, it IS the data:

  [ σ²1   σ12   σ13   σ14 ]
  [ σ21   σ²2   σ23   σ24 ]
  [ σ31   σ32   σ²3   σ34 ]
  [ σ41   σ42   σ43   σ²4 ]

- Parameters = n variances + n(n−1)/2 covariances, so it can be hard to estimate
- Because it can never be wrong, UN can be useful for complete and balanced longitudinal data with few occasions (e.g., 2-4)
- Unstructured can also be specified to include random intercept variance τ²U0
- Every other model for the variances is nested within Unstructured (we can do model comparisons to see if all other models are NOT WORSE)
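The parameter-count arithmetic for Unstructured, which is what makes it hard to estimate with many occasions, can be checked directly:

```python
def un_params(n):
    """Total parameters in an Unstructured (UN) covariance matrix:
    n variances + n*(n-1)/2 covariances = n*(n+1)/2."""
    return n + n * (n - 1) // 2

print(un_params(2))  # 3
print(un_params(4))  # 10
print(un_params(8))  # 36  -> grows quickly with more occasions
```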

Summary: ANOVA approaches for longitudinal data are "one size fits most"

Saturated Model for the Means (balanced time required):
- All possible mean differences
- Unparsimonious, but best-fitting (it is a description, not a model)

3 kinds of Models for the Variances (complete data required):
- BP ANOVA (σ²e only) → assumes independence and constant variance over time
- Univariate RM ANOVA (τ²U0 + σ²e) → assumes constant variance and covariance
- Multivariate RM ANOVA (whatever) → no assumptions; it is a description, not a model (there is no structure that shows up in a scalar equation the way U_0i + e_ti does)

MLM will give us more flexibility in both parts of the model:
- Fixed effects that predict the pattern of means (polynomials, pieces)
- Random intercepts and slopes and/or alternative covariance structures that predict intermediate patterns of variance and covariance over time

Review of Multilevel Models for Longitudinal Data

Topics:
- Concepts in longitudinal multilevel modeling
- Describing within-person fluctuation using ACS models
- Describing within-person change using random effects
- Likelihood estimation in random effects models
- Describing nonlinear patterns of change
- Time-invariant predictors

Modeling Change vs. Fluctuation

[Figure: pure WP change (our focus right now) vs. pure WP fluctuation, each plotted over time]

Model for the Means:
- WP Change → describe the pattern of average change (over "time")
- WP Fluctuation → *may* not need anything (if no systematic change)

Model for the Variances:
- WP Change → describe individual differences in change (random effects); this allows variances and covariances to differ over time
- WP Fluctuation → describe the pattern of variances and covariances over time

Big Picture Framework: Models for the Variance in Longitudinal Data

NAME... THAT STRUCTURE! What is the pattern of variance and covariance over time?

At the parsimony end, Compound Symmetry (CS), as in Univariate RM ANOVA:

  [ τ²U0+σ²e  τ²U0      τ²U0      τ²U0     ]
  [ τ²U0      τ²U0+σ²e  τ²U0      τ²U0     ]
  [ τ²U0      τ²U0      τ²U0+σ²e  τ²U0     ]
  [ τ²U0      τ²U0      τ²U0      τ²U0+σ²e ]

At the good-fit end, Unstructured (UN), as in Multivariate RM ANOVA:

  [ σ²T1   σT12   σT13   σT14 ]
  [ σT21   σ²T2   σT23   σT24 ]
  [ σT31   σT32   σ²T3   σT34 ]
  [ σT41   σT42   σT43   σ²T4 ]

The most useful model is likely somewhere in between! CS and UN are just two of the many, many options available within MLM, including random effects models (for change) and alternative covariance structure models (for fluctuation).

Alternative Covariance Structure Models

Useful in predicting patterns of variance and covariance that arise from fluctuation in the outcome over time:
- Variances: same (homogeneous) or different (heterogeneous)?
- Covariances: same or different? If different, what is the pattern?
- Models with heterogeneous variances predict correlation instead of covariance
- Often you don't need any fixed effects for systematic effects of time in the model for the means (although this is always an empirical question)

Limitations for most of the ACS models:
- Require equal-interval occasions (they are based on the idea of "time lag")
- Require balanced time across persons (no intermediate time values)
- But they do NOT require complete data (unlike when CS and UN are estimated via least squares in ANOVA instead of ML/REML in MLM)

ACS models do require some new terminology to introduce...

Two Families of ACS Models

So far, we've referred to the variance and covariance matrix of the longitudinal outcomes as the R matrix:
- We now refer to these as "R-only" models (use the REPEATED statement only)
- Although the R matrix is actually specified per individual, ACS models usually assume the same R matrix for everyone
- R matrix is symmetric with dimensions n x n, in which n = # occasions per person (although people can have missing data, the same set of possible occasions is required across people to use most R-only models)

3 other matrices we'll see in "G and R combined" ACS models:
- G = matrix of random effects variances and covariances (stay tuned)
- Z = matrix of values for predictors that have random effects (stay tuned)
- V = symmetric n x n matrix of total variance and covariance over time
- If the model includes random effects, then G and Z get combined with R to make V as V = Z*G*Z^T + R (accomplished by adding the RANDOM statement)
- If the model does NOT include random effects in G, then V = R, so "R-only"

R-Only ACS Models

The R-only models to be presented next are all specified using the REPEATED statement only (no RANDOM statement). They are explained by showing their predicted R matrix, which provides the total variances and covariances across occasions:
- Total variance per occasion on the diagonal
- Total covariances across occasions on the off-diagonals
- I've included the labels SAS uses for each parameter

Correlations across occasions can be calculated given the variances and covariances, and would be shown in the RCORR matrix (available in SAS PROC MIXED): 1's on the diagonal (standardized variables), correlations on the off-diagonals.

Unstructured (TYPE=UN) will always fit best by LL:
- All ACS models are nested within Unstructured (UN = the data)
- Goal: find an ACS model that is simpler but NOT WORSE fitting than UN

R-Only ACS Models: CS/CSH

Compound Symmetry: TYPE=CS, 2 parameters:
- 1 residual variance σ²e
- 1 CS covariance across occasions
- Constant total variance: CS + σ²e
- Constant total covariance: CS

  [ CS+σ²e  CS      CS      CS     ]
  [ CS      CS+σ²e  CS      CS     ]
  [ CS      CS      CS+σ²e  CS     ]
  [ CS      CS      CS      CS+σ²e ]

Compound Symmetry Heterogeneous: TYPE=CSH, n+1 parameters:
- n separate Var(n) total variances, estimated directly
- 1 CSH total correlation across occasions
- Still a constant total correlation, CSH (but now non-constant covariances)

  [ σ²T1         CSH*σT1*σT2  CSH*σT1*σT3  CSH*σT1*σT4 ]
  [ CSH*σT2*σT1  σ²T2         CSH*σT2*σT3  CSH*σT2*σT4 ]
  [ CSH*σT3*σT1  CSH*σT3*σT2  σ²T3         CSH*σT3*σT4 ]
  [ CSH*σT4*σT1  CSH*σT4*σT2  CSH*σT4*σT3  σ²T4        ]

R-Only ACS Models: AR1/ARH1

1st Order Auto-Regressive: TYPE=AR(1), 2 parameters:
- 1 constant total variance σ²T (mislabeled "residual")
- 1 AR1 total auto-correlation r_T across occasions
- r is the lag-1 correlation, r² is the lag-2 correlation, r³ is the lag-3 correlation...

  [ σ²T     r*σ²T   r²*σ²T  r³*σ²T ]
  [ r*σ²T   σ²T     r*σ²T   r²*σ²T ]
  [ r²*σ²T  r*σ²T   σ²T     r*σ²T  ]
  [ r³*σ²T  r²*σ²T  r*σ²T   σ²T    ]

1st Order Auto-Regressive Heterogeneous: TYPE=ARH(1), n+1 parameters:
- n separate Var(n) total variances
- 1 ARH1 total auto-correlation r_T across occasions
- r is the lag-1 correlation, r² is the lag-2 correlation, r³ is the lag-3 correlation...

  [ σ²T1          r*σT1*σT2    r²*σT1*σT3   r³*σT1*σT4 ]
  [ r*σT2*σT1     σ²T2         r*σT2*σT3    r²*σT2*σT4 ]
  [ r²*σT3*σT1    r*σT3*σT2    σ²T3         r*σT3*σT4  ]
  [ r³*σT4*σT1    r²*σT4*σT2   r*σT4*σT3    σ²T4       ]
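The AR(1) and ARH(1) predictions follow one rule: the correlation between occasions t and t' is r^|t−t'|. As a sketch with hypothetical parameter values (Python/NumPy, not SAS):

```python
import numpy as np

n, sigma2_t, r = 4, 80.0, 0.5  # hypothetical total variance and lag-1 autocorrelation

# AR(1): covariance between occasions t and t' is r**|t - t'| * sigma2_t
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
ar1 = sigma2_t * r ** lags
print(ar1[0, 1], ar1[0, 2], ar1[0, 3])  # 40.0 20.0 10.0 (decaying with lag)

# ARH(1): same correlations, but separate (heterogeneous) total variances
sds = np.sqrt(np.array([60.0, 70.0, 80.0, 90.0]))  # hypothetical variances
arh1 = (r ** lags) * np.outer(sds, sds)
print(round(arh1[0, 0], 1))  # 60.0 (variances recovered on the diagonal)
```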

R-Only ACS Models: TOEPn/TOEPHn

Toeplitz(n): TYPE=TOEP(n), n parameters:
- 1 constant total variance σ²T (mislabeled "residual")
- n−1 TOEP(lag) banded total covariances c_Tn across occasions
- c1 is the lag-1 covariance, c2 is the lag-2 covariance, c3 is the lag-3 covariance...

  [ σ²T  c1   c2   c3  ]
  [ c1   σ²T  c1   c2  ]
  [ c2   c1   σ²T  c1  ]
  [ c3   c2   c1   σ²T ]

Toeplitz Heterogeneous(n): TYPE=TOEPH(n), n + (n−1) parameters:
- n separate Var(n) total variances
- n−1 TOEPH(lag) banded total correlations r_Tn across occasions
- r1 is the lag-1 correlation, r2 is the lag-2 correlation, r3 is the lag-3 correlation...

  [ σ²T1         r1*σT1*σT2  r2*σT1*σT3  r3*σT1*σT4 ]
  [ r1*σT2*σT1   σ²T2        r1*σT2*σT3  r2*σT2*σT4 ]
  [ r2*σT3*σT1   r1*σT3*σT2  σ²T3        r1*σT3*σT4 ]
  [ r3*σT4*σT1   r2*σT4*σT2  r1*σT4*σT3  σ²T4       ]
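A Toeplitz matrix is "banded": each entry depends only on the lag |t − t'|, with a freely estimated value per band. A hypothetical-numbers sketch:

```python
import numpy as np

n, sigma2_t = 4, 80.0
cov_by_lag = [sigma2_t, 30.0, 20.0, 10.0]  # hypothetical: variance, then lag-1..lag-3 covariances

lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
toep = np.array(cov_by_lag)[lags]          # entry (t, t') = cov_by_lag[|t - t'|]

print(toep[0, 0], toep[1, 3], toep[0, 3])  # 80.0 20.0 10.0
print(len(cov_by_lag))                     # 4 = n parameters for TYPE=TOEP(n)
```

Unlike AR(1), the bands need not decay geometrically, which is why TOEP can become AR(1) or CS through restrictions but not vice versa.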

Comparing R-only ACS Models

Baseline models: CS = simplest, UN = most complex:
- Relative to CS, more complex models fit better or not better
- Relative to UN, less complex models fit worse or not worse

Other rules of nesting and model comparisons:
- Homogeneous variance models are nested within heterogeneous variance models (e.g., CS in CSH, AR1 in ARH1, TOEP in TOEPH)
- CS and AR1 are each nested within TOEP (i.e., TOEP can become CS or AR1 through restrictions of its covariance patterns)
- CS and AR1 are not nested (because both have 2 parameters)

R-only models differ in unbounded parameters, so they can be compared using regular LL tests (instead of mixture LL tests):
- It is a good idea to start by assuming heterogeneous variances until you settle on the covariance pattern, then test whether heterogeneous variances are still necessary
- When in doubt, just compare AIC and BIC (useful even with LL tests)

The Other Family of ACS Models

R-only models directly predict the total variance and covariance. "G and R" models indirectly predict the total variance and covariance through between-person (BP) and within-person (WP) sources of variance and covariance. So, for this model: y_ti = β0 + U_0i + e_ti

- BP = G matrix of level-2 random effect (U_0i) variances and covariances. Which effects get to be random (whose variances and covariances are then included in G) is specified using the RANDOM statement (always TYPE=UN). Our ACS models have a random intercept only, so G is a 1x1 "scalar" of τ²U0.
- WP = R matrix of level-1 (e_ti) residual variances and covariances. The n x n R matrix of residual variances and covariances that remain after controlling for random intercept variance is then modeled with REPEATED.
- Total = V = n x n matrix of total variance and covariance over time that results from putting G and R together: V = Z*G*Z^T + R. Z is a matrix that holds the values of predictors with random effects, but Z will be an n x 1 column of 1's for now (random intercept only).

A Random Intercept (G and R) Model

The total predicted data matrix is called the V matrix:

  [ τ²U0+σ²e  τ²U0      τ²U0      τ²U0     ]
  [ τ²U0      τ²U0+σ²e  τ²U0      τ²U0     ]
  [ τ²U0      τ²U0      τ²U0+σ²e  τ²U0     ]
  [ τ²U0      τ²U0      τ²U0      τ²U0+σ²e ]

Level-2, BP Variance: Unstructured G matrix (RANDOM statement)
- Each person has the same 1 x 1 G matrix (no covariance across persons in a two-level model)
- Random intercept variance only: G = [ τ²U0 ]

Level-1, WP Variance: Diagonal (VC) R matrix (REPEATED statement)
- Each person has the same n x n R matrix: equal variances and 0 covariances across time (no covariance across persons)
- Residual variance only:

  [ σ²e  0    0    0   ]
  [ 0    σ²e  0    0   ]
  [ 0    0    σ²e  0   ]
  [ 0    0    0    σ²e ]

CS as a Random Intercept Model

RI and DIAG: the total predicted data matrix is called the V matrix, created from the G [TYPE=UN] and R [TYPE=VC] matrices as follows, where Z represents an n x 1 column of 1's per person:

  V = Z * G * Z^T + R
    = [1 1 1 1]^T * [τ²U0] * [1 1 1 1] + σ²e * I

    = [ τ²U0+σ²e  τ²U0      τ²U0      τ²U0     ]
      [ τ²U0      τ²U0+σ²e  τ²U0      τ²U0     ]
      [ τ²U0      τ²U0      τ²U0+σ²e  τ²U0     ]
      [ τ²U0      τ²U0      τ²U0      τ²U0+σ²e ]

Does the end result V look familiar? It should: it is compound symmetry, with CS = τ²U0.

So if the R-only CS model (the simplest baseline) can be specified equivalently using G and R, can we do the same for the R-only UN model (the most complex baseline)? Absolutely!... with one small catch.
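The V = Z*G*Z^T + R construction, and its equivalence to R-only compound symmetry, can be verified numerically. A hypothetical-numbers sketch in Python/NumPy:

```python
import numpy as np

n = 4
tau2_u0, sigma2_e = 64.0, 16.0  # hypothetical G and R parameters

Z = np.ones((n, 1))             # random intercept only: n x 1 column of 1's
G = np.array([[tau2_u0]])       # 1 x 1 level-2 matrix (TYPE=UN)
R = sigma2_e * np.eye(n)        # diagonal (VC) level-1 matrix

V = Z @ G @ Z.T + R             # total variance-covariance matrix

# Identical to the R-only compound symmetry prediction with CS = tau2_u0:
cs = np.full((n, n), tau2_u0)
np.fill_diagonal(cs, tau2_u0 + sigma2_e)
print(np.allclose(V, cs))       # True
```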

UN via a Random Intercept Model

RI and UN(n−1): the total predicted data matrix is the V matrix, created from the G [TYPE=UN] and R [TYPE=UN(n−1)] matrices as follows, where R now holds separate residual variances σ²e1 ... σ²e4 and separate residual covariances, with the covariance for the first and last occasions fixed to 0:

  V = Z * G * Z^T + R
    = [1 1 1 1]^T * [τ²U0] * [1 1 1 1] +
      [ σ²e1  σe12  σe13  0    ]
      [ σe21  σ²e2  σe23  σe24 ]
      [ σe31  σe32  σ²e3  σe34 ]
      [ 0     σe42  σe43  σ²e4 ]

This RI and UN(n−1) model is equivalent to (makes the same predictions as) the R-only UN model, but it shows the residual (not total) covariances. Because we can't estimate all possible variances and covariances in the R matrix AND also estimate the random intercept variance τ²U0 in the G matrix, we have to eliminate the last R matrix covariance by setting it to 0. Accordingly, in the RI and UN(n−1) model, the random intercept variance τ²U0 takes on the value of the covariance for the first and last occasions.

Rationale for G and R ACS models

Modeling WP fluctuation traditionally involves using R only (no G):
- Total BP + WP variance is described by just the R matrix (so R = V)
- Correlations would still be expected even at distant time lags because of constant individual differences (i.e., the BP random intercept)
- The resulting R-only model may require lots of estimated parameters as a result; e.g., 8 time points? You'd probably need a 7-lag Toeplitz(8) model

Why not take out the primary reason for the covariance across occasions (the random intercept variance) and see what's left?
- Random intercept variance in G → controls for person mean differences
- THEN predict just the residual variance/covariance in R, not the total
- The resulting model may be more parsimonious (e.g., maybe only lag-1 or lag-2 occasions are still related after removing τ²U0 as a source of covariance)
- Has the advantage of still distinguishing BP from WP variance (useful for descriptive purposes and for calculating effect sizes later)

Random Intercept + Diagonal R Models RI and DIAG: V is created from G [TYPE=UN] and R [TYPE=VC]: homogeneous residual variances; no residual covariances. Same fit as R-only CS.

$$\mathbf{V} = \begin{bmatrix}1\\1\\1\\1\end{bmatrix}\begin{bmatrix}\tau^2_{U_0}\end{bmatrix}\begin{bmatrix}1&1&1&1\end{bmatrix} + \begin{bmatrix}\sigma^2_e&0&0&0\\0&\sigma^2_e&0&0\\0&0&\sigma^2_e&0\\0&0&0&\sigma^2_e\end{bmatrix} = \begin{bmatrix}\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e\end{bmatrix}$$

RI and DIAGH: V is created from G [TYPE=UN] and R [TYPE=UN(1)]: heterogeneous residual variances; no residual covariances. NOT the same fit as R-only CSH.

$$\mathbf{V} = \begin{bmatrix}1\\1\\1\\1\end{bmatrix}\begin{bmatrix}\tau^2_{U_0}\end{bmatrix}\begin{bmatrix}1&1&1&1\end{bmatrix} + \begin{bmatrix}\sigma^2_{e1}&0&0&0\\0&\sigma^2_{e2}&0&0\\0&0&\sigma^2_{e3}&0\\0&0&0&\sigma^2_{e4}\end{bmatrix} = \begin{bmatrix}\tau^2_{U_0}+\sigma^2_{e1}&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_{e2}&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_{e3}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_{e4}\end{bmatrix}$$

PSYC 945: Lecture 1 41

Random Intercept + AR1 R Models RI and AR1: V is created from G [TYPE=UN] and R [TYPE=AR(1)]: homogeneous residual variances; auto-regressive lagged residual covariances.

$$\mathbf{R} = \sigma^2_e\begin{bmatrix}1&r_e&r_e^2&r_e^3\\ r_e&1&r_e&r_e^2\\ r_e^2&r_e&1&r_e\\ r_e^3&r_e^2&r_e&1\end{bmatrix}, \qquad \mathbf{V}_{jk} = \tau^2_{U_0} + \sigma^2_e\,r_e^{|j-k|}$$

RI and ARH1: V is created from G [TYPE=UN] and R [TYPE=ARH(1)]: heterogeneous residual variances; auto-regressive lagged residual covariances.

$$\mathbf{R}_{jk} = r_e^{|j-k|}\,\sigma_{ej}\,\sigma_{ek}, \qquad \mathbf{V}_{jk} = \tau^2_{U_0} + r_e^{|j-k|}\,\sigma_{ej}\,\sigma_{ek}$$

PSYC 945: Lecture 1 42
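The RI and AR1 combination can be sketched the same way as the random intercept V matrix. A minimal numerical illustration, with made-up parameter values:

```python
import numpy as np

# V for the "RI and AR1" model: random intercept variance tau2 in G plus an
# AR(1) R matrix with residual variance sigma2 and autocorrelation r.
# Values are illustrative, not from the lecture data.
tau2, sigma2, r = 3.0, 2.0, 0.5
n = 4
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
R = sigma2 * r ** lags         # AR(1): residual covariance sigma2 * r^|lag|
Z = np.ones((n, 1))
G = np.array([[tau2]])
V = Z @ G @ Z.T + R

# Total covariance at lag 1 = tau2 + sigma2*r; at lag 3 = tau2 + sigma2*r^3,
# so correlations shrink with lag but are floored by the random intercept.
print(V)
```

The point the slide makes is visible in V: the AR(1) part decays with lag, but τ²U0 keeps even distant occasions correlated.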

Random Intercept + TOEP(n-1) R Models RI and TOEP(n-1): V is created from G [TYPE=UN] and R [TYPE=TOEP(n-1)]: homogeneous residual variances; banded residual covariances. Same fit as R-only TOEP(n).

$$\mathbf{R} = \begin{bmatrix}\sigma^2_e&c_{e1}&c_{e2}&0\\ c_{e1}&\sigma^2_e&c_{e1}&c_{e2}\\ c_{e2}&c_{e1}&\sigma^2_e&c_{e1}\\ 0&c_{e2}&c_{e1}&\sigma^2_e\end{bmatrix}$$

RI and TOEPH(n-1): V is created from G [TYPE=UN] and R [TYPE=TOEPH(n-1)]: heterogeneous residual variances; banded residual covariances. NOT the same fit as R-only TOEPH(n).

$$\mathbf{R} = \begin{bmatrix}\sigma^2_{e1}&r_{e1}\sigma_{e1}\sigma_{e2}&r_{e2}\sigma_{e1}\sigma_{e3}&0\\ r_{e1}\sigma_{e2}\sigma_{e1}&\sigma^2_{e2}&r_{e1}\sigma_{e2}\sigma_{e3}&r_{e2}\sigma_{e2}\sigma_{e4}\\ r_{e2}\sigma_{e3}\sigma_{e1}&r_{e1}\sigma_{e3}\sigma_{e2}&\sigma^2_{e3}&r_{e1}\sigma_{e3}\sigma_{e4}\\ 0&r_{e2}\sigma_{e4}\sigma_{e2}&r_{e1}\sigma_{e4}\sigma_{e3}&\sigma^2_{e4}\end{bmatrix}$$

Because of τ²U0, the highest-lag covariance in R must be set to 0 for the model to be identified. PSYC 945: Lecture 1 43

Random Intercept + TOEP2 R Models RI and TOEP2: V is created from G [TYPE=UN] and R [TYPE=TOEP(2)]: homogeneous residual variances; banded residual covariance at lag 1 only.

$$\mathbf{R} = \begin{bmatrix}\sigma^2_e&c_{e1}&0&0\\ c_{e1}&\sigma^2_e&c_{e1}&0\\ 0&c_{e1}&\sigma^2_e&c_{e1}\\ 0&0&c_{e1}&\sigma^2_e\end{bmatrix}$$

RI and TOEPH2: V is created from G [TYPE=UN] and R [TYPE=TOEPH(2)]: heterogeneous residual variances; banded residual covariance at lag 1 only. Now we can test the need for residual covariances at higher lags.

$$\mathbf{R} = \begin{bmatrix}\sigma^2_{e1}&r_{e1}\sigma_{e1}\sigma_{e2}&0&0\\ r_{e1}\sigma_{e2}\sigma_{e1}&\sigma^2_{e2}&r_{e1}\sigma_{e2}\sigma_{e3}&0\\ 0&r_{e1}\sigma_{e3}\sigma_{e2}&\sigma^2_{e3}&r_{e1}\sigma_{e3}\sigma_{e4}\\ 0&0&r_{e1}\sigma_{e4}\sigma_{e3}&\sigma^2_{e4}\end{bmatrix}$$

PSYC 945: Lecture 1 44

Map of R-only and G and R ACS Models (Arrows in the original figure indicate nesting; the end of each arrow is the more complex model.)

Homogeneous variance models (number of parameters):
- R-only Compound Symmetry = Random Intercept & Diagonal R (2)
- R-only First-Order Auto-Regressive (2); Random Intercept & First-Order Auto-Regressive R (3)
- Random Intercept & 1-Lag Toeplitz R (3); & 2-Lag Toeplitz R (4); ...; & (n-2)-Lag Toeplitz R (n)
- R-only (n-1)-Lag Toeplitz (n) = Random Intercept & (n-2)-Lag Toeplitz R

Heterogeneous (Het.) variance models (number of parameters):
- R-only Compound Symmetry Het. (n+1); Random Intercept & Diagonal Het. R (n+1)
- R-only First-Order Auto-Regressive Het. (n+1); Random Intercept & First-Order Auto-Regressive Het. R (n+2)
- Random Intercept & 1-Lag Toeplitz Het. R (n+2); & 2-Lag Toeplitz Het. R (n+3); ...; & (n-2)-Lag Toeplitz Het. R (n+n-1)
- R-only (n-1)-Lag Toeplitz Het. (n+n-1)

Both families converge at the top:
- R-only n-order Unstructured = Random Intercept & (n-1)-order Unstructured R (n*(n+1)/2)

PSYC 945: Lecture 1 45

Stuff to Watch Out For If using a random intercept, don't forget to drop 1 parameter in: UN(n-1) R (can't get all possible elements in R, plus τ²U0 in G) and TOEP(n-1) (have to eliminate the last lag covariance). If using a random intercept, you can't do RI + CS R: you can't get a constant covariance in R, and then another constant in G. You can often test whether the random intercept helps (e.g., AR1 is nested within RI + AR1). If time is treated as continuous in the fixed effects, you will need another variable for time that is categorical to use in the syntax: continuous time on the MODEL statement, categorical time on the CLASS and REPEATED statements. Most alternative covariance structure models assume time is balanced across persons with equal intervals across occasions. If not, holding correlations of the same lag equal doesn't make sense. Other structures can be used for unbalanced time: SP(POW)(time) = AR1 for unbalanced time (see the SAS REPEATED statement for others). PSYC 945: Lecture 1 46

Summary: Two Families of ACS Models R-only models: Specify the R model on the REPEATED statement without any random effects variances in G (so no RANDOM statement is used). These include UN, CS, CSH, AR1, ARH1, TOEPn, TOEPHn (among others). Total variance and total covariance are kept in R, so R = V. Other than CS, these do not partition total variance into BP vs. WP. G and R combined models (so G and R combine to create V): Specify the random intercept variance τ²U0 in G using the RANDOM statement, then specify the R model using the REPEATED statement. G matrix = Level-2 BP variance and covariance due to U0i; R matrix = Level-1 WP variance and covariance of the e ti residuals. R models what's left after accounting for mean differences between persons (via the random intercept variance τ²U0 in G). PSYC 945: Lecture 1 47

Syntax for Models for the Variance Does your model include a random intercept variance (for U 0i)? Use the RANDOM statement (G matrix). Random intercept models BP interindividual differences in mean Y. What about residual variance (for e ti)? Use the REPEATED statement (R matrix). WITHOUT a RANDOM statement: R is BP and WP variance together = total variances and covariances (to model all variation, so R = V). WITH a RANDOM statement: R is WP variance only = residual variances and covariances to model WP intraindividual variation; G and R put back together = the V matrix of total variances and covariances. The REPEATED statement is always there implicitly: any model always has at least one residual variance in the R matrix. But the RANDOM statement is only there if you write it; the G matrix isn't always necessary (you don't always need a random intercept). PSYC 945: Lecture 1 48

Wrapping Up: ACS Models Even if you just expect fluctuation over time rather than change, you should still be concerned about accurately predicting the variances and covariances across occasions. Baseline models (from ANOVA least squares) are CS & UN: Compound Symmetry (equal variance and covariance over time) and Unstructured (all variances & covariances estimated separately). CS and UN via ML or REML estimation allow missing data. MLM gives us choices in the middle. Goal: Get as close to UN as parsimoniously as possible. R-only: Structure TOTAL variation in one matrix (R only). G+R: Put the constant covariance due to the random intercept in G, then structure the RESIDUAL covariance in R (so that G and R combine to create the TOTAL V). PSYC 945: Lecture 1 49

Review of Multilevel Models for Longitudinal Data Topics: Concepts in longitudinal multilevel modeling Describing within-person fluctuation using ACS models Describing within-person change using random effects Likelihood estimation in random effects models Describing nonlinear patterns of change Time-invariant predictors PSYC 945: Lecture 1 50

Modeling Change vs. Fluctuation Pure WP Change: our focus for today, using random effects models. Pure WP Fluctuation: uses alternative covariance structure models instead. [Figure: example trajectories over time for each type.] Model for the Means: WP Change describes the pattern of average change (over time); WP Fluctuation *may* not need anything (if no systematic change). Model for the Variance: WP Change describes individual differences in change (random effects), which allows variances and covariances to differ over time; WP Fluctuation describes the pattern of variances and covariances over time. PSYC 945: Lecture 1 51

The Big Picture of Longitudinal Data: Models for the Means What kind of change occurs on average over time? There are two baseline models to consider: Empty: only a fixed intercept (predicts no change over time); 1 fixed effect. Saturated means: all occasion mean differences from time 0 (ANOVA model that uses # fixed effects = n); reproduces the mean at each occasion; *** may not be possible in unbalanced data. The empty model is the most parsimonious; the saturated means model gives the best fit. In-between options ("Name that Trajectory!"): polynomial slopes, piecewise slopes, nonlinear models. PSYC 945: Lecture 1 52

The Big Picture of Longitudinal Data: Models for the Variance What is the pattern of variance and covariance over time? Two baselines ("Name that Structure!"), with the most useful model likely somewhere in between.

Compound Symmetry (CS): most parsimonious; univariate RM ANOVA:
$$\begin{bmatrix}\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e\end{bmatrix}$$

Unstructured (UN): best fit; multivariate RM ANOVA:
$$\begin{bmatrix}\sigma^2_{T1}&\sigma_{T12}&\sigma_{T13}&\sigma_{T14}\\ \sigma_{T21}&\sigma^2_{T2}&\sigma_{T23}&\sigma_{T24}\\ \sigma_{T31}&\sigma_{T32}&\sigma^2_{T3}&\sigma_{T34}\\ \sigma_{T41}&\sigma_{T42}&\sigma_{T43}&\sigma^2_{T4}\end{bmatrix}$$

CS and UN are just two of the many, many options available within MLM, including random effects models (for change) and alternative covariance structure models (for fluctuation). PSYC 945: Lecture 1 53

Empty Means, Random Intercept Model GLM Empty Model: y i = β 0 + e i. MLM Empty Model: Level 1: y ti = β 0i + e ti; Level 2: β 0i = γ 00 + U 0i. 3 Total Parameters: Model for the Means (1): Fixed Intercept γ 00 = grand mean (because there are no predictors yet). Model for the Variance (2): Level-1 variance of e ti and Level-2 variance of U 0i. Residual e ti = time-specific deviation from the individual's predicted outcome. Random Intercept U 0i = individual-specific deviation from the predicted intercept. Composite equation: y ti = (γ 00 + U 0i ) + e ti. PSYC 945: Lecture 1 54

Augmenting the empty means, random intercept model with time: 2 questions about the possible effects of time: 1. Is there an effect of time on average? Is the line describing the sample means not flat? If so, there is a significant FIXED effect of time. 2. Does the average effect of time vary across individuals? Does each individual need his or her own line? If so, there is a significant RANDOM effect of time. PSYC 945: Lecture 1 55

Fixed and Random Effects of Time (Note: The intercept is random in every figure) [Figure: four panels of individual trajectories: No Fixed, No Random; Yes Fixed, No Random; No Fixed, Yes Random; Yes Fixed, Yes Random effects of time.] PSYC 945: Lecture 1 56

Fixed Linear Time, Random Intercept Model (4 total parameters: effect of time is FIXED only) Multilevel Model: Level 1: y ti = β 0i + β 1i (Time ti ) + e ti; Level 2: β 0i = γ 00 + U 0i; β 1i = γ 10. Fixed Intercept = predicted mean outcome at time 0. Fixed Linear Time Slope = predicted mean rate of change per unit time. Residual e ti = time-specific deviation from the individual's predicted outcome (estimated variance σ²e). Random Intercept U 0i = individual-specific deviation from the fixed intercept (estimated variance τ²U0). Composite Model: y ti = (γ 00 + U 0i ) + (γ 10 )(Time ti ) + e ti. Because the effect of time is fixed, everyone is predicted to change at exactly the same rate. PSYC 945: Lecture 1 57

Explained Variance from Fixed Linear Time The most common measure of effect size in MLM is Pseudo-R². It is supposed to be the variance accounted for by predictors. Multiple piles of variance mean multiple possible values of pseudo-R² (it can be calculated per variance component or per model level). A fixed linear effect of time will reduce the level-1 residual variance σ²e in R. By how much is the residual variance σ²e reduced?

$$\text{Pseudo-}R^2_{e} = \frac{\text{residual variance}_{\text{fewer}} - \text{residual variance}_{\text{more}}}{\text{residual variance}_{\text{fewer}}}$$

If time varies between persons, then the level-2 random intercept variance τ²U0 in G may also be reduced:

$$\text{Pseudo-}R^2_{U_0} = \frac{\text{random intercept variance}_{\text{fewer}} - \text{random intercept variance}_{\text{more}}}{\text{random intercept variance}_{\text{fewer}}}$$

But you are likely to see a (net) INCREASE in τ²U0 instead. Here's why: PSYC 945: Lecture 1 58
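The pseudo-R² formula is simple enough to sketch directly. The variance values below come from the worked example on the next slide (σ²e drops from 7.06 to 2.17 after adding a fixed linear time slope):

```python
# Pseudo-R^2: proportional reduction in a variance component when
# predictors are added ("fewer" = model with fewer parameters).
def pseudo_r2(var_fewer, var_more):
    return (var_fewer - var_more) / var_fewer

# Level-1 residual variance before/after adding a fixed linear time slope
r2_e = pseudo_r2(7.06, 2.17)
print(round(r2_e, 2))  # prints 0.69
```

The same function applies to the random intercept variance, with the caveat on the next slide that τ²U0 may show a net increase instead.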

Increases in Random Intercept Variance The level-2 random intercept variance τ²U0 will often increase as a consequence of reducing the level-1 residual variance σ²e. The observed level-2 τ²U0 is NOT just between-person variance; it also has a small part of within-person variance (level-1 σ²e), or: Observed τ²U0 = True τ²U0 + (σ²e/n). As the number of occasions n increases, the bias from level-1 σ²e is minimized. Likelihood-based estimates of the true τ²U0 use (σ²e/n) as a correction factor: True τ²U0 = Observed τ²U0 - (σ²e/n). For example: observed level-2 τ²U0 = 4.65, level-1 σ²e = 7.06, n = 4. True τ²U0 = 4.65 - (7.06/4) = 2.88 in the empty means model. Add a fixed linear time slope: σ²e is reduced from 7.06 to 2.17 (R² = .69). But now True τ²U0 = 4.65 - (2.17/4) = 4.10 in the fixed linear time model. PSYC 945: Lecture 1 59
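The correction on this slide is a one-liner, shown here with the slide's own numbers:

```python
# True tau2_U0 = Observed tau2_U0 - sigma2_e / n  (n = number of occasions)
def true_tau2(observed_tau2, sigma2_e, n):
    return observed_tau2 - sigma2_e / n

# Empty means model: observed tau2 = 4.65, sigma2 = 7.06, n = 4
t_empty = true_tau2(4.65, 7.06, 4)    # about 2.88
# Fixed linear time model: sigma2 falls to 2.17, so the true tau2 rises
t_linear = true_tau2(4.65, 2.17, 4)   # about 4.11
print(t_empty, t_linear)
```

Shrinking σ²e shrinks the correction term, which is why the true τ²U0 appears to increase even though the observed value did not change.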

Random Intercept Models Imply: People differ from each other systematically in only ONE way, in intercept (U 0i ), which implies ONE kind of BP variance, which translates to ONE source of person dependency (covariance or correlation in the outcomes from the same person). If so, after controlling for BP intercept differences (by estimating the variance of U 0i as τ²U0 in the G matrix), the e ti residuals (whose variance and covariance are estimated in the R matrix) should be uncorrelated with homogeneous variance across time, as shown:

Level-2 G matrix (RANDOM TYPE=UN): $\begin{bmatrix}\tau^2_{U_0}\end{bmatrix}$; Level-1 R matrix (REPEATED TYPE=VC):
$$\begin{bmatrix}\sigma^2_e&0&0&0\\0&\sigma^2_e&0&0\\0&0&\sigma^2_e&0\\0&0&0&\sigma^2_e\end{bmatrix}$$

The G and R matrices combine to create a total V matrix with a CS pattern:
$$\begin{bmatrix}\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e\end{bmatrix}$$

PSYC 945: Lecture 1 60

Matrices in a Random Intercept Model The total predicted data matrix is called the V matrix, created from the G [TYPE=UN] and R [TYPE=VC] matrices as follows:

$$\mathbf{V} = \mathbf{Z}\,\mathbf{G}\,\mathbf{Z}^{T} + \mathbf{R} = \begin{bmatrix}1\\1\\1\\1\end{bmatrix}\begin{bmatrix}\tau^2_{U_0}\end{bmatrix}\begin{bmatrix}1&1&1&1\end{bmatrix} + \begin{bmatrix}\sigma^2_e&0&0&0\\0&\sigma^2_e&0&0\\0&0&\sigma^2_e&0\\0&0&0&\sigma^2_e\end{bmatrix}$$

VCORR then provides the intraclass correlation, calculated as ICC = τ²U0 / (τ²U0 + σ²e):

$$\begin{bmatrix}1&\text{ICC}&\text{ICC}&\text{ICC}\\ \text{ICC}&1&\text{ICC}&\text{ICC}\\ \text{ICC}&\text{ICC}&1&\text{ICC}\\ \text{ICC}&\text{ICC}&\text{ICC}&1\end{bmatrix}$$

This assumes a constant correlation over time. For any random effects model: G matrix = BP variances/covariances; R matrix = WP variances/covariances; Z matrix = values of predictors with random effects (just the intercept here), which can vary per person; V matrix = total variances/covariances. PSYC 945: Lecture 1 61
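What VCORR reports can be sketched by rescaling V to a correlation matrix; every off-diagonal element then equals the ICC. Illustrative values only:

```python
import numpy as np

# Random intercept V matrix, then its correlation version (what VCORR shows).
tau2, sigma2, n = 4.0, 2.0, 4
Z = np.ones((n, 1))
V = Z @ np.array([[tau2]]) @ Z.T + sigma2 * np.eye(n)

sd = np.sqrt(np.diag(V))
VCORR = V / np.outer(sd, sd)      # rescale covariances to correlations

icc = tau2 / (tau2 + sigma2)      # ICC = tau2 / (tau2 + sigma2)
print(icc, VCORR[0, 1])           # the two values match
```

This makes concrete why a random intercept model assumes a constant correlation over time: every pair of occasions gets the same ICC.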

Random Linear Time Model (6 total parameters) Multilevel Model: Level 1: y ti = β 0i + β 1i (Time ti ) + e ti; Level 2: β 0i = γ 00 + U 0i; β 1i = γ 10 + U 1i. Fixed Intercept = predicted mean outcome at time 0. Fixed Linear Time Slope = predicted mean rate of change per unit time. Residual e ti = time-specific deviation from the individual's predicted outcome (estimated variance σ²e). Random Intercept U 0i = individual-specific deviation from the fixed intercept at time 0 (estimated variance τ²U0). Random Linear Time Slope U 1i = individual-specific deviation from the fixed linear time slope (estimated variance τ²U1). There is also an estimated covariance of the random intercepts and slopes, τU01. Composite Model: y ti = (γ 00 + U 0i ) + (γ 10 + U 1i )(Time ti ) + e ti. PSYC 945: Lecture 1 62

Random Linear Time Model y ti = (γ 00 + U 0i ) + (γ 10 + U 1i )(Time ti ) + e ti = Fixed Intercept + Random Intercept Deviation + (Fixed Slope + Random Slope Deviation)(Time) + error for person i at time t. [Figure: outcome (4 to 32) plotted against time (0 to 3) for the sample mean trajectory and person P's trajectory, with γ 00 = 10, γ 10 = 6, U 0i = -4, U 1i = +2, and e ti = -1 at one occasion.] 6 Parameters: Fixed Effects: γ 00 Intercept, γ 10 Slope. Random Effects Variances: U 0i Intercept Variance τ²U0, U 1i Slope Variance τ²U1, Intercept-Slope Covariance τU01. e ti Residual Variance σ²e. PSYC 945: Lecture 1 63

Quantification of Random Effects Variances We can test whether a random effect variance is significant, but the variance estimates are not likely to have inherent meaning. e.g., I have a significant fixed linear time effect of γ 10 = 1.72, so people increase by 1.72 per unit time on average. I also have a significant random linear time slope variance of τ²U1 = 0.91, so people need their own slopes (people change differently). But how much is a variance of 0.91, really? 95% Random Effects Confidence Intervals can tell you. They can be calculated for each effect that is random in your model, and provide the range around the fixed effect within which 95% of your sample is predicted to fall, based on your random effect variance: Random Effect 95% CI = fixed effect ± 1.96*√(Random Variance). Linear Time Slope 95% CI = γ 10 ± 1.96*√(τ²U1) = 1.72 ± 1.96*√(0.91) = -0.15 to 3.59. So although people improve on average, individual slopes are predicted to range from -0.15 to 3.59 (so some people may actually decline). PSYC 945: Lecture 1 64
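The 95% random effects confidence interval is easy to verify with the slide's numbers (fixed slope 1.72, slope variance 0.91):

```python
import math

# 95% random effects CI: fixed effect +/- 1.96 * sqrt(random effect variance)
def random_effect_ci(fixed, variance):
    half = 1.96 * math.sqrt(variance)
    return fixed - half, fixed + half

lo, hi = random_effect_ci(1.72, 0.91)
print(round(lo, 2), round(hi, 2))   # prints -0.15 3.59
```

Note that 1.96 multiplies the square root of the variance (the SD of the random effect), not the variance itself.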

Random Linear Time Models Imply: People differ from each other systematically in TWO ways, in intercept (U 0i ) and slope (U 1i ), which implies TWO kinds of BP variance, which translates to TWO sources of person dependency (covariance or correlation in the outcomes from the same person). If so, after controlling for both BP intercept and slope differences (by estimating the τ²U0 and τ²U1 variances in the G matrix), the e ti residuals (whose variance and covariance are estimated in the R matrix) should be uncorrelated with homogeneous variance across time, as shown:

Level-2 G matrix (RANDOM TYPE=UN):
$$\begin{bmatrix}\tau^2_{U_0}&\tau_{U_{01}}\\ \tau_{U_{01}}&\tau^2_{U_1}\end{bmatrix}$$

Level-1 R matrix (REPEATED TYPE=VC):
$$\begin{bmatrix}\sigma^2_e&0&0&0\\0&\sigma^2_e&0&0\\0&0&\sigma^2_e&0\\0&0&0&\sigma^2_e\end{bmatrix}$$

G and R combine to create a total V matrix whose per-person structure depends on the specific time occasions each person has (very flexible for unbalanced time). PSYC 945: Lecture 1 65

Random Linear Time Model (6 total parameters: effect of time is now RANDOM) How the model predicts each element of the V matrix: Level 1: y ti = β 0i + β 1i (Time ti ) + e ti; Level 2: β 0i = γ 00 + U 0i; β 1i = γ 10 + U 1i. Composite Model: y ti = (γ 00 + U 0i ) + (γ 10 + U 1i )(Time ti ) + e ti. Predicted time-specific variance:

$$\begin{aligned}
\text{Var}\left(y_{ti}\right) &= \text{Var}\left(\gamma_{00} + U_{0i} + (\gamma_{10} + U_{1i})\,\text{Time}_{ti} + e_{ti}\right)\\
&= \text{Var}\left(U_{0i} + U_{1i}\,\text{Time}_{ti} + e_{ti}\right)\\
&= \text{Var}\left(U_{0i}\right) + \text{Time}_{ti}^2\,\text{Var}\left(U_{1i}\right) + 2\,\text{Time}_{ti}\,\text{Cov}\left(U_{0i},U_{1i}\right) + \text{Var}\left(e_{ti}\right)\\
&= \tau^2_{U_0} + \text{Time}_{ti}^2\,\tau^2_{U_1} + 2\,\text{Time}_{ti}\,\tau_{U_{01}} + \sigma^2_e
\end{aligned}$$

PSYC 945: Lecture 1 66

Random Linear Time Model (6 total parameters: effect of time is now RANDOM) How the model predicts each element of the V matrix: Level 1: y ti = β 0i + β 1i (Time ti ) + e ti; Level 2: β 0i = γ 00 + U 0i; β 1i = γ 10 + U 1i. Composite Model: y ti = (γ 00 + U 0i ) + (γ 10 + U 1i )(Time ti ) + e ti. Predicted time-specific covariances (Time A with Time B):

$$\begin{aligned}
\text{Cov}\left(y_{Ai},y_{Bi}\right) &= \text{Cov}\left(\gamma_{00} + U_{0i} + (\gamma_{10}+U_{1i})A + e_{Ai},\;\; \gamma_{00} + U_{0i} + (\gamma_{10}+U_{1i})B + e_{Bi}\right)\\
&= \text{Cov}\left(U_{0i} + U_{1i}A,\;\; U_{0i} + U_{1i}B\right)\\
&= \text{Cov}\left(U_{0i},U_{0i}\right) + B\,\text{Cov}\left(U_{0i},U_{1i}\right) + A\,\text{Cov}\left(U_{1i},U_{0i}\right) + A\,B\,\text{Cov}\left(U_{1i},U_{1i}\right)\\
&= \tau^2_{U_0} + (A + B)\,\tau_{U_{01}} + A\,B\,\tau^2_{U_1}
\end{aligned}$$

PSYC 945: Lecture 1 67

Random Linear Time Model (6 total parameters: effect of time is now RANDOM) Scalar mixed model equation per person:

$$\mathbf{Y}_i = \mathbf{X}_i\,\boldsymbol{\gamma} + \mathbf{Z}_i\,\mathbf{U}_i + \mathbf{E}_i$$

$$\begin{bmatrix}y_{0i}\\y_{1i}\\y_{2i}\\y_{3i}\end{bmatrix} = \begin{bmatrix}1&0\\1&1\\1&2\\1&3\end{bmatrix}\begin{bmatrix}\gamma_{00}\\\gamma_{10}\end{bmatrix} + \begin{bmatrix}1&0\\1&1\\1&2\\1&3\end{bmatrix}\begin{bmatrix}U_{0i}\\U_{1i}\end{bmatrix} + \begin{bmatrix}e_{0i}\\e_{1i}\\e_{2i}\\e_{3i}\end{bmatrix} = \begin{bmatrix}\gamma_{00}+\gamma_{10}(0)+U_{0i}+U_{1i}(0)+e_{0i}\\ \gamma_{00}+\gamma_{10}(1)+U_{0i}+U_{1i}(1)+e_{1i}\\ \gamma_{00}+\gamma_{10}(2)+U_{0i}+U_{1i}(2)+e_{2i}\\ \gamma_{00}+\gamma_{10}(3)+U_{0i}+U_{1i}(3)+e_{3i}\end{bmatrix}$$

X i = n x k values of predictors with fixed effects, so can differ per person (k = 2: intercept, linear time). γ = k x 1 estimated fixed effects, so will be the same for all persons (γ 00 = intercept, γ 10 = linear time). Z i = n x u values of predictors with random effects, so can differ per person (u = 2: intercept, linear time). U i = u x 1 estimated individual random effects, so can differ per person. E i = n x 1 time-specific residuals, so can differ per person. PSYC 945: Lecture 1 68

Random Linear Time Model (6 total parameters: effect of time is now RANDOM) Predicted total variances and covariances per person:

$$\mathbf{V}_i = \mathbf{Z}_i\,\mathbf{G}_i\,\mathbf{Z}_i^{T} + \mathbf{R}_i = \begin{bmatrix}1&0\\1&1\\1&2\\1&3\end{bmatrix}\begin{bmatrix}\tau^2_{U_0}&\tau_{U_{01}}\\ \tau_{U_{01}}&\tau^2_{U_1}\end{bmatrix}\begin{bmatrix}1&1&1&1\\0&1&2&3\end{bmatrix} + \begin{bmatrix}\sigma^2_e&0&0&0\\0&\sigma^2_e&0&0\\0&0&\sigma^2_e&0\\0&0&0&\sigma^2_e\end{bmatrix}$$

V i matrix elements: Variance(y at time A) = τ²U0 + 2A τU01 + A² τ²U1 + σ²e; Covariance(y A, y B) = τ²U0 + (A+B) τU01 + AB τ²U1. The full V i matrix is complicated. Z i = n x u values of predictors with random effects, so can differ per person (u = 2: intercept, time slope). Z i transposed = u x n values of predictors with random effects. G i = u x u estimated random effects variances and covariances, so will be the same for all persons (τ²U0 = intercept variance, τ²U1 = slope variance). R i = n x n time-specific residual variances and covariances, so will be the same for all persons (here, just diagonal σ²e). PSYC 945: Lecture 1 69
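The V = Z*G*Z' + R computation for a random linear time model can be checked against the scalar variance and covariance formulas on the preceding slides. The parameter values below are made up for illustration:

```python
import numpy as np

# V = Z*G*Z' + R for a random linear time model, occasions t = 0,1,2,3.
# tau00 = intercept variance, tau11 = slope variance, tau01 = covariance,
# sigma2 = residual variance (illustrative values).
tau00, tau11, tau01, sigma2 = 4.0, 0.5, 0.3, 2.0
t = np.arange(4.0)
Z = np.column_stack([np.ones_like(t), t])   # each row: [1, time]
G = np.array([[tau00, tau01], [tau01, tau11]])
R = sigma2 * np.eye(4)
V = Z @ G @ Z.T + R

# Scalar formulas from the slides, for times A and B:
A, B = 1.0, 3.0
var_A = tau00 + 2 * A * tau01 + A**2 * tau11 + sigma2   # variance at time A
cov_AB = tau00 + (A + B) * tau01 + A * B * tau11        # covariance of A, B
print(V[1, 1], var_A, V[1, 3], cov_AB)                  # matching pairs
```

Unlike the random intercept model, the diagonal of V now grows quadratically with time, which is the "variances differ over time" behavior random slopes buy you.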

Building V across persons: Random Linear Time Model V for two persons with unbalanced time observations (person 1 measured at times 0.0, 1.0, 2.0, 3.0; person 2 at times 0.2, 1.4, 2.3, 3.5): [Figure: the combined V = Z G Z' + R, with each person's Z matrix holding that person's own time values, and 0s in the off-diagonal blocks between persons.] The giant combined V matrix across persons is how the multilevel or mixed model is actually estimated. This is known as a block diagonal structure: predictions are given for each person, but 0s are given for the elements that describe relationships between persons (because persons are supposed to be independent here!). PSYC 945: Lecture 1 70

Building V across persons: Random Linear Time Model V for two persons, now also with different n per person (person 1 has 4 occasions at times 0.0, 1.0, 2.0, 3.0; person 2 has 3 occasions at times 0.2, 1.4, 3.5): [Figure: block-diagonal V = Z G Z' + R with a 4x4 block for person 1 and a 3x3 block for person 2.] The block diagonal does not need to be the same size or contain the same time observations per person. The R matrix can also include non-0 covariances or differential residual variance across time (as in ACS models), although the models based on the idea of a lag won't work for unbalanced or unequal-interval time. PSYC 945: Lecture 1 71

G, R, and V: The Take-Home Point The partitioning of variance into piles: Level 2 = BP, the G matrix of random effects variances/covariances; Level 1 = WP, the R matrix of residual variances/covariances; G and R combine via Z to create the V matrix of total variances/covariances. There are many flexible options that allow the variances and covariances to vary in a time-dependent way that better matches the actual data. Differing variance and covariance can also be allowed due to other predictors.

Compound Symmetry (CS): most parsimonious; univariate RM ANOVA:
$$\begin{bmatrix}\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e&\tau^2_{U_0}\\ \tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}&\tau^2_{U_0}+\sigma^2_e\end{bmatrix}$$

Unstructured (UN): best fit; multivariate RM ANOVA:
$$\begin{bmatrix}\sigma^2_{T1}&\sigma_{T12}&\sigma_{T13}&\sigma_{T14}\\ \sigma_{T21}&\sigma^2_{T2}&\sigma_{T23}&\sigma_{T24}\\ \sigma_{T31}&\sigma_{T32}&\sigma^2_{T3}&\sigma_{T34}\\ \sigma_{T41}&\sigma_{T42}&\sigma_{T43}&\sigma^2_{T4}\end{bmatrix}$$

Random effects models use G and R to predict something in-between! PSYC 945: Lecture 1 72

How MLM "Handles" Dependency A common description of the purpose of MLM is that it "addresses" or "handles" correlated (dependent) data. But where does this correlation come from? 3 places (here, an example with health as an outcome): 1. Mean differences across persons: some people are just healthier than others (at every time point). This is what a random intercept is for. 2. Differences in effects of predictors across persons: does time (or stress) affect health more in some persons than in others? This is what random slopes are for. 3. Non-constant within-person correlation for unknown reasons: occasions closer together may just be more related. This is what alternative covariance structure models are for. PSYC 945: Lecture 1 73

MLM "Handles" Dependency Where does each kind of person dependency go? Into a new random effects variance component (or "pile" of variance). [Figure: starting from one pile of Level-1 residual variance (σ²e), adding a random intercept carves out BP Intercept Variance (τ²U0) at Level 2 (between-person differences); adding a random slope further carves out BP Slope Variance (τ²U1), with the τU01 covariance between the two Level-2 piles.] PSYC 945: Lecture 1 74

"Piles" of Variance By adding a random slope, we carve up our total variance into 3 piles: BP (error) variance around the intercept (τ²U0), BP (error) variance around the slope (τ²U1), and WP (error) residual variance (σ²e). These 3 piles are 1 pile of "error variance" in univariate RM ANOVA. But making piles does NOT make error variance go away. Level-1 (one source of) within-person variation gets accounted for by time-level predictors; Level-2 (two sources of) between-person variation gets accounted for by person-level predictors. FIXED effects make variance go away (explain variance). RANDOM effects just make a new pile of variance. PSYC 945: Lecture 1 75

Fixed vs. Random Effects of Persons Person dependency: via fixed effects in the model for the means, or via random effects in the model for the variance? Individual intercept differences can be included as N-1 person dummy-code fixed main effects, OR as 1 random U 0i. Individual time slope differences can be included as N-1 person-dummy-code-by-time interactions, OR as 1 random U 1i *Time ti. Either approach would appropriately control for dependency (fixed effects are used in some programs that "control" SEs for sampling). Two important advantages of random effects: Quantification: a direct measure of how much of the outcome variance is due to person differences (in intercept or in effects of predictors). Prediction: person differences (main effects and effects of time) then become predictable quantities; this can't happen using fixed effects. Summary: Random effects give you predictable control of dependency. PSYC 945: Lecture 1 76

Review of Multilevel Models for Longitudinal Data Topics: Concepts in longitudinal multilevel modeling Describing within-person fluctuation using ACS models Describing within-person change using random effects Likelihood estimation in random effects models Describing nonlinear patterns of change Time-invariant predictors PSYC 945: Lecture 1 77

3 Decision Points for Model Comparisons 1. Are the models nested or non-nested? Nested: you have to add OR subtract effects to go from one to the other; you can conduct significance tests for improvement in fit. Non-nested: you have to add AND subtract effects; no significance tests are available for these comparisons. 2. Do the models differ in the model for the means, the variances, or both? Means? Can only use ML -2ΔLL tests (or the p-value of each fixed effect). Variances? Can use ML (or preferably REML) -2ΔLL tests; no p-values. Both sides? Can only use ML -2ΔLL tests. 3. Models estimated using ML or REML? ML: all model comparisons are ok. REML: model comparisons are ok for the variance parameters only. PSYC 945: Lecture 1 78

Likelihood-Based Model Comparisons Relative model fit is indexed by a "deviance" statistic, -2LL. -2LL indicates BADNESS of fit, so smaller values = better models. Two estimation flavors (given as -2 log likelihood in SAS and SPSS, but given as LL instead in STATA): Maximum Likelihood (ML) or Restricted (Residual) ML (REML). Nested models are compared using their deviance values via the -2ΔLL Test (i.e., Likelihood Ratio Test, Deviance Difference Test): 1. Calculate -2ΔLL: (-2LL fewer ) - (-2LL more ). 2. Calculate Δdf: (# Parms more ) - (# Parms fewer ). (1. and 2. must both be positive values!) 3. Compare -2ΔLL to the χ² distribution with df = Δdf. CHIDIST in Excel will give exact p-values for the difference test; so will STATA. Nested or non-nested models can also be compared by Information Criteria that reflect -2LL AND the # of parameters used and/or sample size (a penalty for complexity): AIC = Akaike IC = -2LL + 2*(#parameters); BIC = Bayesian IC = -2LL + log(N)*(#parameters). No significance tests or critical values; just smaller is better. PSYC 945: Lecture 1 79
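The three steps of the deviance difference test can be sketched directly. The -2LL values below are made up for illustration; for Δdf = 2 the chi-square upper-tail p-value has the closed form exp(-x/2), used here to stay dependency-free:

```python
import math

# Deviance difference (-2 delta LL) test for nested models.
neg2ll_fewer, parms_fewer = 512.4, 4   # e.g., random intercept model
neg2ll_more, parms_more = 503.2, 6     # e.g., adding a random slope variance
                                       # and intercept-slope covariance

diff = neg2ll_fewer - neg2ll_more      # step 1: must be positive
df = parms_more - parms_fewer          # step 2: must be positive (here 2)
p = math.exp(-diff / 2)                # step 3: chi-square tail for df = 2
print(diff, df, p)
```

For other df values you would use a chi-square survival function (e.g., CHIDIST in Excel, as the slide notes) instead of the df = 2 shortcut.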

ML vs. REML Remember the population vs. sample formulas for calculating a variance?

$$\text{Population: } \sigma^2_e = \frac{\sum_{i=1}^{N}\left(y_i-\mu\right)^2}{N} \qquad \text{Sample: } \sigma^2_e = \frac{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)^2}{N-1}$$

All comparisons must have the same N!!!

ML (to select, type METHOD=ML; gives -2 log likelihood): In estimating variances, it treats fixed effects as known (the df for having to also estimate fixed effects is not factored in). So, in small samples, ML variances will be too small (by a factor of (N - k)/N, N = # persons). Because it indexes the fit of the entire model (means + variances), you can compare models differing in fixed and/or random effects (either/both).

REML (METHOD=REML, the default; gives -2 res log likelihood): In estimating variances, it treats fixed effects as unknown (the df for having to estimate fixed effects is factored in). So, in small samples, REML variances will be unbiased (correct). Because it indexes the fit of the variances model only, you can compare models differing in random effects only (same fixed effects). PSYC 945: Lecture 1 80
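The slide's analogy can be made concrete: ML corresponds to the population formula (divide by N, treating the mean/fixed effects as known), REML to the sample formula (divide by N-1, accounting for estimating them). A minimal sketch:

```python
# Population-style (ML analogue) vs. sample-style (REML analogue) variance.
def var_ml(y):
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y) / len(y)        # divide by N

def var_reml(y):
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y) / (len(y) - 1)  # divide by N - 1

y = [2.0, 4.0, 6.0, 8.0]
print(var_ml(y), var_reml(y))   # ML is smaller by a factor of (N-1)/N
```

With N = 4 the ML estimate is 3/4 of the REML estimate; as N grows the two converge, which is why the ML-vs-REML distinction matters most in small samples.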

Rules for Comparing Multilevel Models All observations must be the same across models!

Compare models differing in the means model (fixed) only:
- Nested? YES, can do significance tests via fixed effect p-values from ML or REML, OR via ML -2ΔLL (NO REML -2ΔLL)
- Non-nested? NO significance tests; instead see ML AIC, BIC (NO REML AIC, BIC)

Compare models differing in the variance model (random) only:
- Nested? NO p-values; use REML -2ΔLL (ML -2ΔLL is ok if big N)
- Non-nested? REML AIC, BIC (ML ok if big N)

Compare models differing in both the means and variance models (fixed and random):
- Nested? ML -2ΔLL only (NO REML -2ΔLL)
- Non-nested? ML AIC, BIC only (NO REML AIC, BIC)

Nested = one model is a direct subset of the other. Non-nested = one model is not a direct subset of the other. PSYC 945: Lecture 1 81

Summary: Model Comparisons Significance of fixed effects can be tested with EITHER their p-values OR ML -2ΔLL (LRT, deviance difference) tests. p-value: Is EACH of these effects significant? (fine under ML or REML). ML -2ΔLL test: Does this SET of predictors make my model better? REML -2ΔLL tests are WRONG for comparing models differing in fixed effects. Significance of random effects can only be tested with -2ΔLL tests (preferably using REML; here ML is not wrong, but it results in too-small variance components and fixed effect SEs in smaller samples). You can get p-values as part of the output, but you *shouldn't* use them. The # of parms added (Δdf) should always include the random effect covariances. My recommended approach to building models: stay in REML (for the best estimates) and test new fixed effects with their p-values; THEN add new random effects, testing -2ΔLL against the previous model. PSYC 945: Lecture 1 82

Two Sides of Any Model: Estimation Fixed Effects in the Model for the Means: how the expected outcome for a given observation varies as a function of values on known predictor variables. Fixed effects predict the Y values per se, but are not parameters that are solved for iteratively in maximum likelihood estimation. Random Effects in the Model for the Variances: how model residuals are related across observations (persons, groups, time, etc.): unknown things due to sampling. Random effects variances and covariances are a mechanism by which complex patterns of variance and covariance among the Y residuals can be predicted (not the Y values, but their dispersion). Anything besides the level-1 residual variance σ²e must be solved for iteratively, which increases the dimensionality of the estimation process. Estimation utilizes the predicted V matrix for each person. In the material that follows, V will be based on a random linear time model. PSYC 945: Lecture 1 83

End Goals of Maximum Likelihood Estimation 1. Obtain the most likely values for each unknown model parameter (random effects variances and covariances, residual variances and covariances, which are then used to calculate the fixed effects): the estimates. 2. Obtain an index of how likely each parameter value actually is (i.e., really likely, or pretty much just a guess?): the standard error (SE) of the estimates. 3. Obtain an index of how well the model we've specified actually describes the data: the model fit indices. How does all this happen? The magic of multivariate normal (but let's start with univariate normal first). PSYC 945: Lecture 1 84

Univariate Normal Likelihood of y i This function tells us how likely any value of y i is given two pieces of info: the predicted value ŷ i and the residual variance σ²e. Univariate Normal PDF (two ways):

$$f\left(y_i\right) = \frac{1}{\sqrt{2\pi\sigma^2_e}}\,\exp\left[-\frac{1}{2}\,\frac{\left(y_i-\hat{y}_i\right)^2}{\sigma^2_e}\right]$$

$$f\left(y_i\right) = \left(2\pi\sigma^2_e\right)^{-1/2}\exp\left[-\frac{1}{2}\left(y_i-\hat{y}_i\right)\left(\sigma^2_e\right)^{-1}\left(y_i-\hat{y}_i\right)\right]$$

Example: regression

$$y_i = \beta_0 + \beta_1 X_i + e_i, \qquad \hat{y}_i = \beta_0 + \beta_1 X_i, \qquad \sigma^2_e = \frac{\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2}{N}$$

PSYC 945: Lecture 1 85
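The univariate normal PDF on this slide is a one-line function. A minimal sketch, with illustrative values:

```python
import math

# Likelihood of y given its predicted value y_hat and residual variance sigma2.
def normal_pdf(y, y_hat, sigma2):
    return (1.0 / math.sqrt(2 * math.pi * sigma2)) * \
           math.exp(-((y - y_hat) ** 2) / (2 * sigma2))

# A residual of 0 is the most likely outcome; larger residuals are less likely.
print(normal_pdf(5.0, 5.0, 2.0), normal_pdf(8.0, 5.0, 2.0))
```

Maximum likelihood estimation picks the parameter values (here, the ones producing ŷ and σ²e) that make the observed data collectively most likely, i.e., that maximize the product of these densities across observations.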