Web-based training

Introduction (Alex Dmitrienko, Lilly)

Web-based training program: http://www.amstat.org/sections/sbiop/webinarseries.html
Four-part web-based training series:
- Geert Verbeke (Katholieke Universiteit Leuven, Belgium)
- Geert Molenberghs (Universiteit Hasselt, Belgium)
Presentation slides: http://www.biopharmnet.com/doc/doc03002.html
Discussion thread: http://biopharmnet.com/forum/viewtopic.php?p=70#70

Longitudinal and Incomplete Data
Part III: Generalized Linear Mixed Models

Geert Verbeke
Biostatistical Centre, K.U.Leuven, Belgium
geert.verbeke@med.kuleuven.be, www.kuleuven.ac.be/biostat/

Geert Molenberghs
Center for Statistics, Universiteit Hasselt, Belgium
geert.molenberghs@uhasselt.be, www.censtat.uhasselt.be

American Statistical Association, Webinar, Spring 2007

Contents

Part I: Generalized Linear Mixed Models for Non-Gaussian Longitudinal Data
  1. Generalized Linear Mixed Models (GLMM)
  2. Fitting GLMMs in SAS
Part II: Marginal Versus Random-effects Models
  3. Marginal Versus Random-effects Models

Part I: Generalized Linear Mixed Models for Non-Gaussian Longitudinal Data

Chapter 1: Generalized Linear Mixed Models (GLMM)

- Introduction: LMM Revisited
- Generalized Linear Mixed Models (GLMM)
- Fitting Algorithms
- Example

1.1 Introduction: LMM Revisited

We reconsider the linear mixed model:

  Y_i | b_i ~ N(X_i β + Z_i b_i, Σ_i),   b_i ~ N(0, D)

The implied marginal model equals

  Y_i ~ N(X_i β, Z_i D Z_i' + Σ_i)

Hence, even under conditional independence, i.e., all Σ_i equal to σ² I_{n_i}, a marginal association structure is implied through the random effects. The same ideas can now be applied in the context of GLMs to model association between discrete repeated measures.
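The implied marginal covariance can be made concrete with a small numerical sketch (not part of the slides; the dimensions and variance values below are illustrative):

```python
# Implied marginal covariance Z_i D Z_i' + sigma^2 I_{n_i} for a
# random-intercept LMM: a single random effect already induces a
# compound-symmetry association structure between repeated measures.
import numpy as np

n_i = 4                     # number of repeated measures for subject i
Z_i = np.ones((n_i, 1))     # random-intercept design matrix
D = np.array([[2.0]])       # random-effects covariance (here a scalar variance)
sigma2 = 1.0                # residual variance under conditional independence

V_i = Z_i @ D @ Z_i.T + sigma2 * np.eye(n_i)
print(V_i)
# Diagonal entries equal D + sigma^2 = 3.0, off-diagonals equal D = 2.0,
# so any two repeated measures have marginal correlation 2/3.
```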

1.2 Generalized Linear Mixed Models (GLMM)

Given a vector b_i of random effects for cluster i, it is assumed that all responses Y_ij are independent, with density

  f(y_ij | θ_ij, φ) = exp{ φ⁻¹ [y_ij θ_ij − ψ(θ_ij)] + c(y_ij, φ) }

θ_ij is now modelled as θ_ij = x_ij' β + z_ij' b_i. As before, it is assumed that b_i ~ N(0, D).

Let f_ij(y_ij | b_i, β, φ) denote the conditional density of Y_ij given b_i; the conditional density of Y_i then equals

  f_i(y_i | b_i, β, φ) = ∏_{j=1..n_i} f_ij(y_ij | b_i, β, φ)

The marginal distribution of Y_i is given by

  f_i(y_i | β, D, φ) = ∫ f_i(y_i | b_i, β, φ) f(b_i | D) db_i
                     = ∫ ∏_{j=1..n_i} f_ij(y_ij | b_i, β, φ) f(b_i | D) db_i

where f(b_i | D) is the density of the N(0, D) distribution.

The likelihood function for β, D, and φ now equals

  L(β, D, φ) = ∏_{i=1..N} f_i(y_i | β, D, φ)
             = ∏_{i=1..N} ∫ ∏_{j=1..n_i} f_ij(y_ij | b_i, β, φ) f(b_i | D) db_i
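For a random-intercept logistic model, the marginal likelihood contribution of a single cluster can be evaluated by brute-force numerical integration. A sketch (not from the slides; the data and parameter values are hypothetical):

```python
# Marginal density f_i(y_i | beta, D) = ∫ ∏_j f_ij(y_ij | b, beta) f(b | D) db
# for one cluster, approximated with a Riemann sum over a fine grid for b_i.
import numpy as np

y = np.array([1, 1, 0])            # binary responses for cluster i
x = np.array([0.0, 1.0, 2.0])      # a single covariate, e.g. measurement time
beta0, beta1 = -0.5, 0.3           # fixed effects (illustrative values)
d = 2.0                            # random-intercept variance D

b = np.linspace(-12.0, 12.0, 4001)                        # grid for b_i
phi = np.exp(-b**2 / (2 * d)) / np.sqrt(2 * np.pi * d)    # N(0, d) density

eta = beta0 + b[:, None] + beta1 * x                      # linear predictors
p = 1.0 / (1.0 + np.exp(-eta))                            # conditional P(Y_ij = 1)
cond_lik = np.prod(p**y * (1 - p)**(1 - y), axis=1)       # ∏_j f_ij(y_ij | b)

f_i = np.sum(cond_lik * phi) * (b[1] - b[0])              # marginal density
print(f_i)
```

The full likelihood is the product of such contributions over all N clusters; the rest of this chapter is about approximating this integral efficiently.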

Under the normal linear model, the integral can be worked out analytically. In general, approximations are required:
- approximation of the integrand
- approximation of the data
- approximation of the integral

Predictions of random effects can be based on the posterior distribution f(b_i | Y_i = y_i). The empirical Bayes (EB) estimate is the posterior mode, with unknown parameters replaced by their MLEs.

1.3 Laplace Approximation of Integrand

Integrals in L(β, D, φ) can be written in the form

  I = ∫ e^{Q(b)} db

A second-order Taylor expansion of Q(b) around its mode b̂ yields (the first-order term vanishes at the mode):

  Q(b) ≈ Q(b̂) + ½ (b − b̂)' Q''(b̂) (b − b̂)

The quadratic term leads to a re-scaled normal density. Hence,

  I ≈ (2π)^{q/2} |−Q''(b̂)|^{−1/2} e^{Q(b̂)}

- Exact in the case of normal kernels
- A good approximation in the case of many repeated measures per subject
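The Laplace approximation is easy to prototype in one dimension. A sketch (not from the slides) checking the claim of exactness for a normal kernel, with a crude grid search standing in for a proper mode-finding step:

```python
# Laplace approximation of I = ∫ exp(Q(b)) db in one dimension (q = 1):
#   I ≈ sqrt(2*pi) * |-Q''(b_hat)|^(-1/2) * exp(Q(b_hat))
import numpy as np

def laplace_1d(Q, Qpp, grid):
    b_hat = grid[np.argmax(Q(grid))]     # mode of Q found by grid search
    return np.sqrt(2 * np.pi / abs(Qpp(b_hat))) * np.exp(Q(b_hat))

sigma = 1.5
Q = lambda b: -b**2 / (2 * sigma**2)     # normal kernel: exp(Q) ∝ N(0, sigma^2)
Qpp = lambda b: -1.0 / sigma**2          # its (constant) second derivative

approx = laplace_1d(Q, Qpp, np.linspace(-10, 10, 100001))
exact = sigma * np.sqrt(2 * np.pi)       # closed form of ∫ exp(-b^2/(2 sigma^2)) db
print(approx, exact)                     # agree: Laplace is exact for normal kernels
```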

1.4 Approximation of Data

1.4.1 General Idea

Re-write the GLMM as

  Y_ij = μ_ij + ε_ij = h(x_ij' β + z_ij' b_i) + ε_ij

with error variance Var(Y_ij | b_i) = φ v(μ_ij).

A linear Taylor expansion is used for μ_ij:
- Penalized quasi-likelihood (PQL): around the current estimates β̂ and b̂_i
- Marginal quasi-likelihood (MQL): around the current β̂ and b_i = 0

1.4.2 Penalized quasi-likelihood (PQL)

Linear Taylor expansion around the current β̂ and b̂_i:

  Y_ij ≈ h(x_ij' β̂ + z_ij' b̂_i)
         + h'(x_ij' β̂ + z_ij' b̂_i) x_ij' (β − β̂)
         + h'(x_ij' β̂ + z_ij' b̂_i) z_ij' (b_i − b̂_i) + ε_ij
       = μ̂_ij + v(μ̂_ij) x_ij' (β − β̂) + v(μ̂_ij) z_ij' (b_i − b̂_i) + ε_ij

In vector notation:

  Y_i ≈ μ̂_i + V̂_i X_i (β − β̂) + V̂_i Z_i (b_i − b̂_i) + ε_i

Re-ordering terms yields the pseudo responses

  Y_i* ≡ V̂_i⁻¹ (Y_i − μ̂_i) + X_i β̂ + Z_i b̂_i ≈ X_i β + Z_i b_i + ε_i*

Model fitting iterates between updating the pseudo responses Y_i* and fitting the above linear mixed model to them.
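One PQL working-response update can be sketched for the logistic case, where h is the expit function and v(μ) = μ(1 − μ). This is not from the slides; all values are illustrative "current" estimates:

```python
# PQL pseudo responses Y* = V^{-1}(Y - mu) + X beta + Z b for one subject,
# with V the diagonal matrix of variances v(mu) at the current estimates.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])   # covariate values for subject i
y = np.array([1, 0, 0, 0])           # observed binary responses
beta0, beta1 = -0.2, -0.4            # current fixed-effect estimates
b_i = 0.5                            # current random-intercept prediction

eta = beta0 + b_i + beta1 * x        # current linear predictor x'beta + z'b
mu = 1.0 / (1.0 + np.exp(-eta))      # current conditional means h(eta)
v = mu * (1.0 - mu)                  # variance function v(mu) = h'(eta)

y_star = (y - mu) / v + eta          # pseudo responses, fed to an LMM fit
print(y_star)
# Iterating between this update and a linear mixed model fit for y_star
# is the PQL algorithm.
```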

1.4.3 Marginal quasi-likelihood (MQL)

Linear Taylor expansion around the current β̂ and b_i = 0:

  Y_ij ≈ h(x_ij' β̂) + h'(x_ij' β̂) x_ij' (β − β̂) + h'(x_ij' β̂) z_ij' b_i + ε_ij
       = μ̂_ij + v(μ̂_ij) x_ij' (β − β̂) + v(μ̂_ij) z_ij' b_i + ε_ij

In vector notation:

  Y_i ≈ μ̂_i + V̂_i X_i (β − β̂) + V̂_i Z_i b_i + ε_i

Re-ordering terms yields the pseudo responses

  Y_i* ≡ V̂_i⁻¹ (Y_i − μ̂_i) + X_i β̂ ≈ X_i β + Z_i b_i + ε_i*

Model fitting again iterates between updating the pseudo responses Y_i* and fitting the above linear mixed model to them.

1.4.4 PQL versus MQL

- MQL only performs reasonably well if the random-effects variance is (very) small.
- Both perform badly for binary outcomes with few repeated measurements per cluster.
- With an increasing number of measurements per subject, MQL remains biased while PQL becomes consistent.
- Improvements are possible with higher-order Taylor expansions.

1.5 Approximation of Integral

The likelihood contribution of every subject is of the form ∫ f(z) φ(z) dz, where φ(z) is the density of the (multivariate) normal distribution.

Gaussian quadrature methods replace the integral by a weighted sum:

  ∫ f(z) φ(z) dz ≈ Σ_{q=1..Q} w_q f(z_q)

Q is the order of the approximation; the higher Q, the more accurate the approximation.

- The nodes (or quadrature points) z_q are the roots of the Qth-order Hermite polynomial.
- The w_q are well-chosen weights.
- The nodes z_q and weights w_q are tabulated; alternatively, an algorithm is available for calculating all z_q and w_q for any value of Q.
- With (classical) Gaussian quadrature, the nodes and weights are fixed, independent of f(z) φ(z).
- With adaptive Gaussian quadrature, the nodes and weights are adapted to the support of f(z) φ(z).
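A sketch (not from the slides) of the quadrature rule, using numpy's Gauss-Hermite nodes and weights. hermgauss targets the weight function exp(−z²), so a change of variable z → √2·z plus a factor 1/√π rescales the rule to the standard-normal density φ(z):

```python
# Gauss-Hermite quadrature for ∫ f(z) φ(z) dz = E[f(Z)], Z ~ N(0, 1).
import numpy as np

def gh_expectation(f, Q):
    """Approximate E[f(Z)], Z ~ N(0,1), with a Q-point Gauss-Hermite rule."""
    z, w = np.polynomial.hermite.hermgauss(Q)     # nodes/weights for exp(-z^2)
    return np.sum(w * f(np.sqrt(2.0) * z)) / np.sqrt(np.pi)

# Example with a known answer: E[exp(Z)] = exp(1/2) for Z ~ N(0, 1)
for Q in (3, 5, 10):
    print(Q, gh_expectation(np.exp, Q))
# the approximation tightens toward exp(0.5) ≈ 1.6487 as Q grows
```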

Graphically (Q = 10): [figure: classical versus adaptive quadrature nodes, not reproduced in this transcription]

Typically, adaptive Gaussian quadrature needs (many) fewer quadrature points than classical Gaussian quadrature. On the other hand, adaptive Gaussian quadrature is much more time consuming, since the nodes and weights must be recomputed for every subject. Adaptive Gaussian quadrature of order one is equivalent to the Laplace approximation.
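The difference between fixed and adapted nodes can be demonstrated on a toy integral (not from the slides): I = ∫ g(z) dz with g the N(3, 0.5²) density, so I = 1 exactly. The integrand's mass sits far from 0, which is precisely where classical quadrature struggles; here the mode (3.0) and scale (0.5) are plugged in by hand rather than estimated:

```python
# Classical vs adaptive Gauss-Hermite quadrature for I = ∫ g(z) dz.
import numpy as np

def g(z):                                  # N(3, 0.5^2) density: ∫ g = 1 exactly
    return np.exp(-(z - 3.0)**2 / (2 * 0.25)) / np.sqrt(2 * np.pi * 0.25)

def classical(Q):
    # nodes fixed around 0, independent of the integrand
    z, w = np.polynomial.hermite.hermgauss(Q)
    return np.sqrt(2.0) * np.sum(w * np.exp(z**2) * g(np.sqrt(2.0) * z))

def adaptive(Q, b_hat=3.0, s_hat=0.5):
    # nodes recentred at the mode and rescaled to the curvature there
    z, w = np.polynomial.hermite.hermgauss(Q)
    t = b_hat + np.sqrt(2.0) * s_hat * z
    return np.sqrt(2.0) * s_hat * np.sum(w * np.exp(z**2) * g(t))

print(classical(5), adaptive(5))
# adaptive(5) recovers 1.0 to machine precision (exact for a normal kernel);
# classical(5) misses badly because all of its nodes sit near 0.
```

With Q = 1, the adaptive rule reduces to the Laplace approximation, which is also exact here.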

1.6 Example: Toenail Data

Y_ij is the binary severity indicator for subject i at visit j. Model:

  Y_ij | b_i ~ Bernoulli(π_ij),   log[π_ij / (1 − π_ij)] = β_0 + b_i + β_1 T_i + β_2 t_ij + β_3 T_i t_ij

Notation:
- T_i: treatment indicator for subject i
- t_ij: time point at which the jth measurement is taken for the ith subject

Both adaptive and non-adaptive Gaussian quadrature were used, for various Q.

Results, non-adaptive Gaussian quadrature (estimate (s.e.)):

         Q = 3          Q = 5          Q = 10         Q = 20         Q = 50
β_0     -1.52 (0.31)   -2.49 (0.39)   -0.99 (0.32)   -1.54 (0.69)   -1.65 (0.43)
β_1     -0.39 (0.38)    0.19 (0.36)    0.47 (0.36)   -0.43 (0.80)   -0.09 (0.57)
β_2     -0.32 (0.03)   -0.38 (0.04)   -0.38 (0.05)   -0.40 (0.05)   -0.40 (0.05)
β_3     -0.09 (0.05)   -0.12 (0.07)   -0.15 (0.07)   -0.14 (0.07)   -0.16 (0.07)
σ        2.26 (0.12)    3.09 (0.21)    4.53 (0.39)    3.86 (0.33)    4.04 (0.39)
−2ℓ   1344.1         1259.6         1254.4         1249.6         1247.7

Results, adaptive Gaussian quadrature:

         Q = 3          Q = 5          Q = 10         Q = 20         Q = 50
β_0     -2.05 (0.59)   -1.47 (0.40)   -1.65 (0.45)   -1.63 (0.43)   -1.63 (0.44)
β_1     -0.16 (0.64)   -0.09 (0.54)   -0.12 (0.59)   -0.11 (0.59)   -0.11 (0.59)
β_2     -0.42 (0.05)   -0.40 (0.04)   -0.41 (0.05)   -0.40 (0.05)   -0.40 (0.05)
β_3     -0.17 (0.07)   -0.16 (0.07)   -0.16 (0.07)   -0.16 (0.07)   -0.16 (0.07)
σ        4.51 (0.62)    3.70 (0.34)    4.07 (0.43)    4.01 (0.38)    4.02 (0.38)
−2ℓ   1259.1         1257.1         1248.2         1247.8         1247.8

Conclusions:
- (Log-)likelihoods are not comparable across values of Q.
- Different Q can lead to considerable differences in estimates and standard errors.
- For example, using non-adaptive quadrature with Q = 3, we found no significant difference in the time effect between the two treatment groups (t = −0.09/0.05, p = 0.0833). Using adaptive quadrature with Q = 50, we find a significant interaction between the time effect and the treatment (t = −0.16/0.07, p = 0.0255).
- Assuming that Q = 50 is sufficient, the final results are well approximated with smaller Q under adaptive quadrature, but not under non-adaptive quadrature.

Comparison of fitting algorithms: adaptive Gaussian quadrature (Q = 50) versus MQL and PQL. Summary of results (estimate (s.e.)):

Parameter                      QUAD            PQL            MQL
Intercept group A             -1.63 (0.44)   -0.72 (0.24)   -0.56 (0.17)
Intercept group B             -1.75 (0.45)   -0.72 (0.24)   -0.53 (0.17)
Slope group A                 -0.40 (0.05)   -0.29 (0.03)   -0.17 (0.02)
Slope group B                 -0.57 (0.06)   -0.40 (0.04)   -0.26 (0.03)
Var. random intercepts (τ²)   15.99 (3.02)    4.71 (0.60)    2.49 (0.29)

There are severe differences between QUAD (the gold standard?) and MQL/PQL: MQL and PQL may yield (very) biased results, especially for binary data.

Chapter 2: Fitting GLMMs in SAS

- PROC GLIMMIX for PQL and MQL
- PROC NLMIXED for Gaussian quadrature

2.1 Procedure GLIMMIX for PQL and MQL

Reconsider the logistic model with random intercepts for the toenail data. SAS code (PQL):

  proc glimmix data=test method=rspl;
    class idnum;
    model onyresp (event='1') = treatn time treatn*time / dist=binary solution;
    random intercept / subject=idnum;
  run;

MQL is obtained with the option method=rmpl. Inclusion of random slopes:

  random intercept time / subject=idnum type=un;

2.2 Procedure NLMIXED for Gaussian Quadrature

Reconsider the logistic model with random intercepts for the toenail data. SAS program (non-adaptive, Q = 3):

  proc nlmixed data=test noad qpoints=3;
    parms beta0=-1.6 beta1=0 beta2=-0.4 beta3=-0.5 sigma=3.9;
    teta = beta0 + b + beta1*treatn + beta2*time + beta3*timetr;
    expteta = exp(teta);
    p = expteta/(1+expteta);
    model onyresp ~ binary(p);
    random b ~ normal(0,sigma**2) subject=idnum;
  run;

- Adaptive Gaussian quadrature is obtained by omitting the option noad.
- When the option qpoints= is omitted, the procedure searches automatically for an optimal value of Q.
- Good starting values are needed!

The inclusion of random slopes can be specified as follows:

  proc nlmixed data=test noad qpoints=3;
    parms beta0=-1.6 beta1=0 beta2=-0.4 beta3=-0.5 d11=3.9 d12=0 d22=0.1;
    teta = beta0 + b1 + beta1*treatn + beta2*time + b2*time + beta3*timetr;
    expteta = exp(teta);
    p = expteta/(1+expteta);
    model onyresp ~ binary(p);
    random b1 b2 ~ normal([0,0],[d11,d12,d22]) subject=idnum;
  run;

Part II: Marginal Versus Random-effects Models

Chapter 3: Marginal Versus Random-effects Models

- Interpretation of GLMM parameters
- Marginalization of the GLMM
- Example: Toenail data

3.1 Interpretation of GLMM Parameters: Toenail Data

We compare our GLMM results for the toenail data with those from fitting GEEs (unstructured working correlation):

                       GLMM               GEE
Parameter              Estimate (s.e.)    Estimate (s.e.)
Intercept group A     -1.6308 (0.4356)   -0.7219 (0.1656)
Intercept group B     -1.7454 (0.4478)   -0.6493 (0.1671)
Slope group A         -0.4043 (0.0460)   -0.1409 (0.0277)
Slope group B         -0.5657 (0.0601)   -0.2548 (0.0380)

The strong differences can be explained as follows. Consider the following GLMM:

  Y_ij | b_i ~ Bernoulli(π_ij),   log[π_ij / (1 − π_ij)] = β_0 + b_i + β_1 t_ij

The conditional means E(Y_ij | b_i), as functions of t_ij, are given by

  E(Y_ij | b_i) = exp(β_0 + b_i + β_1 t_ij) / [1 + exp(β_0 + b_i + β_1 t_ij)]

The marginal average evolution is now obtained by averaging over the random effects:

  E(Y_ij) = E[E(Y_ij | b_i)] = E[ exp(β_0 + b_i + β_1 t_ij) / (1 + exp(β_0 + b_i + β_1 t_ij)) ]
          ≠ exp(β_0 + β_1 t_ij) / (1 + exp(β_0 + β_1 t_ij))

Hence, the parameter vector β in the GEE model needs to be interpreted completely differently from the parameter vector β in the GLMM:
- GEE: marginal interpretation
- GLMM: conditional interpretation, conditionally on the level of the random effects

In general, the model for the marginal average is not of the same parametric form as the conditional average in the GLMM. For logistic mixed models with normally distributed random intercepts, it can be shown that the marginal model is well approximated by another logistic model, but with parameters approximately satisfying

  β^RE / β^M = √(c² σ² + 1) > 1,   σ² = variance of the random intercepts,   c = 16√3 / (15π)

where β^RE and β^M denote the random-effects (GLMM) and marginal parameters, respectively.
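The attenuation factor is a one-line computation; a sketch reproducing the number reported below for the toenail fit (σ̂ = 4.0164):

```python
# Ratio between conditional (random-effects) and marginal logistic parameters:
# beta_RE / beta_M ≈ sqrt(c^2 * sigma^2 + 1), with c = 16*sqrt(3)/(15*pi).
import numpy as np

c = 16.0 * np.sqrt(3.0) / (15.0 * np.pi)
sigma = 4.0164                      # estimated random-intercept s.d., toenail data
ratio = np.sqrt(c**2 * sigma**2 + 1.0)
print(ratio)                        # ≈ 2.5649, as reported for the toenail data
```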

For the toenail application, σ was estimated as 4.0164, such that the ratio equals √(c² σ² + 1) = 2.5649. The ratios between the GLMM and GEE estimates are:

                       GLMM               GEE                Ratio
Parameter              Estimate (s.e.)    Estimate (s.e.)
Intercept group A     -1.6308 (0.4356)   -0.7219 (0.1656)   2.2590
Intercept group B     -1.7454 (0.4478)   -0.6493 (0.1671)   2.6881
Slope group A         -0.4043 (0.0460)   -0.1409 (0.0277)   2.8694
Slope group B         -0.5657 (0.0601)   -0.2548 (0.0380)   2.2202

Note that this problem does not occur in linear mixed models:
- Conditional mean: E(Y_i | b_i) = X_i β + Z_i b_i
- Specifically: E(Y_i | b_i = 0) = X_i β
- Marginal mean: E(Y_i) = X_i β

The problem arises from the fact that, in general, E[g(Y)] ≠ g[E(Y)]. So, whenever the random effects enter the conditional mean in a non-linear way, the regression parameters in the marginal model need to be interpreted differently from the regression parameters in the mixed model.

In practice, the marginal mean can be derived from the GLMM output by integrating out the random effects. This can be done numerically via Gaussian quadrature, or based on sampling methods.

3.2 Marginalization of GLMM: Toenail Data

As an example, we plot the average evolutions based on the GLMM output obtained in the toenail example:

  P(Y_ij = 1) = E[ exp(−1.6308 + b_i − 0.4043 t_ij) / (1 + exp(−1.6308 + b_i − 0.4043 t_ij)) ]   (group A)
  P(Y_ij = 1) = E[ exp(−1.7454 + b_i − 0.5657 t_ij) / (1 + exp(−1.7454 + b_i − 0.5657 t_ij)) ]   (group B)

Average evolutions obtained from the GEE analyses:

  P(Y_ij = 1) = exp(−0.7219 − 0.1409 t_ij) / (1 + exp(−0.7219 − 0.1409 t_ij))   (group A)
  P(Y_ij = 1) = exp(−0.6493 − 0.2548 t_ij) / (1 + exp(−0.6493 − 0.2548 t_ij))   (group B)
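Numerically integrating out the random effects is straightforward with Gauss-Hermite quadrature. A sketch (not the slides' own code) of the marginal curve for treatment group A, with σ̂ = 4.0164:

```python
# Marginal P(Y_ij = 1) for group A: average the GLMM conditional probability
# expit(-1.6308 + b - 0.4043 t) over b ~ N(0, sigma^2) by quadrature.
import numpy as np

z, w = np.polynomial.hermite.hermgauss(50)
sigma = 4.0164                              # estimated random-intercept s.d.

def marginal_prob(t, beta0=-1.6308, beta1=-0.4043):
    b = np.sqrt(2.0) * sigma * z            # nodes rescaled to N(0, sigma^2)
    p = 1.0 / (1.0 + np.exp(-(beta0 + b + beta1 * t)))
    return np.sum(w * p) / np.sqrt(np.pi)

for t in (0.0, 3.0, 12.0):
    print(t, marginal_prob(t), 1.0 / (1.0 + np.exp(-(-1.6308 - 0.4043 * t))))
# At each t, the marginal probability (middle column) differs from the
# b_i = 0 "average subject" profile (right column): E[g(Y)] ≠ g(E[Y]).
```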

In a GLMM context, rather than plotting the marginal averages, one can also plot the profile for an "average" subject, i.e., a subject with random effect b_i = 0:

  P(Y_ij = 1 | b_i = 0) = exp(−1.6308 − 0.4043 t_ij) / (1 + exp(−1.6308 − 0.4043 t_ij))   (group A)
  P(Y_ij = 1 | b_i = 0) = exp(−1.7454 − 0.5657 t_ij) / (1 + exp(−1.7454 − 0.5657 t_ij))   (group B)

3.3 Example: Toenail Data Revisited

Overview of all analyses of the toenail data (estimate (s.e.)):

Parameter                      QUAD            PQL            MQL            GEE
Intercept group A             -1.63 (0.44)   -0.72 (0.24)   -0.56 (0.17)   -0.72 (0.17)
Intercept group B             -1.75 (0.45)   -0.72 (0.24)   -0.53 (0.17)   -0.65 (0.17)
Slope group A                 -0.40 (0.05)   -0.29 (0.03)   -0.17 (0.02)   -0.14 (0.03)
Slope group B                 -0.57 (0.06)   -0.40 (0.04)   -0.26 (0.03)   -0.25 (0.04)
Var. random intercepts (τ²)   15.99 (3.02)    4.71 (0.60)    2.49 (0.29)

Conclusion (in absolute value of the estimates): |GEE| < |MQL| < |PQL| < |QUAD|