A Parameter Expansion Approach to Bayesian SEM Estimation


Ed Merkle and Yves Rosseel
Utrecht University, 24 June 2016

overview

goal: Bayesian structural equation modeling (SEM) methods that satisfy three properties:
1. easy model specification
2. extensible to novel situations
3. relatively fast

strategy:
- develop general Bayesian SEM estimation methods that can be used with JAGS or Stan
- tie them to the model specification/summarization methods in lavaan

cfa

some Bayesian SEMs have received heavy consideration:

[path diagram: three-factor CFA with visual measured by x1-x3, textual by x4-x6, and speed by x7-x9]

cfa equation

measurement model:

\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8 \\ x_9 \end{pmatrix}
=
\begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \\ \nu_4 \\ \nu_5 \\ \nu_6 \\ \nu_7 \\ \nu_8 \\ \nu_9 \end{pmatrix}
+
\begin{pmatrix}
1 & 0 & 0 \\
\lambda_2 & 0 & 0 \\
\lambda_3 & 0 & 0 \\
0 & 1 & 0 \\
0 & \lambda_5 & 0 \\
0 & \lambda_6 & 0 \\
0 & 0 & 1 \\
0 & 0 & \lambda_8 \\
0 & 0 & \lambda_9
\end{pmatrix}
\begin{pmatrix} \text{visual} \\ \text{textual} \\ \text{speed} \end{pmatrix}
+ \epsilon

the covariance matrix associated with ε is diagonal: Var(ε) = Θ = diagonal

cfa equation

structural model:

\begin{pmatrix} \text{visual} \\ \text{textual} \\ \text{speed} \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
+ \zeta

the covariance matrix associated with ζ is unrestricted: Var(ζ) = Ψ = (unrestricted, symmetric)
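For reference, a minimal sketch of this three-factor CFA in lavaan syntax, assuming the Holzinger-Swineford variable names used later in the talk (fit.cfa is an illustrative object name):

HS.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit.cfa <- bcfa(HS.model, data = HolzingerSwineford1939)   # blavaan call with default priors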

typical priors
- normal distributions for the parameters in ν, Λ, and α
- inverse gamma distributions for the (diagonal) elements of Θ
- an inverse Wishart distribution for the covariance matrix of the exogenous latent variables
- inverse gamma distributions for the (diagonal) elements of the covariance matrix of the endogenous latent variables
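Written compactly (a sketch; the hyperparameter symbols below are generic placeholders, not values from the slides):

\nu_j,\ \lambda_{jk},\ \alpha_k \sim N(\mu_0, \sigma_0^2), \qquad
\theta_{jj} \sim \mathrm{IG}(a_\theta, b_\theta), \qquad
\Psi_{\mathrm{exo}} \sim \mathrm{IW}(S, d), \qquad
\psi_{kk}^{\mathrm{endo}} \sim \mathrm{IG}(a_\psi, b_\psi)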

sem

other models have received less consideration:

[path diagram of the political democracy model: ind60 measured by x1-x3; dem60 by y1-y4 (loadings 1, λ5, λ6, λ7); dem65 by y5-y8 (the same loadings 1, λ5, λ6, λ7); dem60 regressed on ind60, and dem65 regressed on ind60 and dem60]

sem equation

measurement model:

\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \\ y_8 \end{pmatrix}
=
\begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \\ \nu_4 \\ \nu_5 \\ \nu_6 \\ \nu_7 \\ \nu_8 \\ \nu_9 \\ \nu_{10} \\ \nu_{11} \end{pmatrix}
+
\begin{pmatrix}
1 & 0 & 0 \\
\lambda_2 & 0 & 0 \\
\lambda_3 & 0 & 0 \\
0 & 1 & 0 \\
0 & \lambda_5 & 0 \\
0 & \lambda_6 & 0 \\
0 & \lambda_7 & 0 \\
0 & 0 & 1 \\
0 & 0 & \lambda_5 \\
0 & 0 & \lambda_6 \\
0 & 0 & \lambda_7
\end{pmatrix}
\begin{pmatrix} \text{ind60} \\ \text{dem60} \\ \text{dem65} \end{pmatrix}
+ \epsilon

sem equation

structural model:

\begin{pmatrix} \text{ind60} \\ \text{dem60} \\ \text{dem65} \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
+
\begin{pmatrix}
0 & 0 & 0 \\
b_1 & 0 & 0 \\
b_2 & b_3 & 0
\end{pmatrix}
\begin{pmatrix} \text{ind60} \\ \text{dem60} \\ \text{dem65} \end{pmatrix}
+ \zeta

sem equation

non-diagonal theta matrix (lower triangle shown; X marks a free parameter; the free residual covariances are the pairs y1-y5, y2-y4, y2-y6, y3-y7, y4-y8, and y6-y8):

Var(ε) = Θ =

        x1 x2 x3 y1 y2 y3 y4 y5 y6 y7 y8
  x1     X
  x2     .  X
  x3     .  .  X
  y1     .  .  .  X
  y2     .  .  .  .  X
  y3     .  .  .  .  .  X
  y4     .  .  .  .  X  .  X
  y5     .  .  .  X  .  .  .  X
  y6     .  .  .  .  X  .  .  .  X
  y7     .  .  .  .  .  X  .  .  .  X
  y8     .  .  .  .  .  .  X  .  X  .  X

Mplus may be the only major piece of software that can come close to dealing with this model from a Bayesian standpoint.

parameter expansion

SEM parameter expansion methods:
- sample from a "working" model that is easy
- translate the sampled parameters to the desired "inferential" model

problem: how to specify priors on the working-model parameters that are meaningful in the inferential model?

- general parameter expansion approaches to Bayesian inference are described by Gelman (2004, 2006); applications to factor analysis models are described by Ghosh and Dunson (2009)
- our approach is related to that of Palomo, Dunson, & Bollen (2007), but they do not address prior distribution specification

parameter expansion

Palomo, Dunson, & Bollen (2007) specify this working model:

[path diagram: the political democracy model with phantom latent variables D1-D6 replacing the six residual covariances among y1-y8]

parameter expansion
- for each non-zero (residual) covariance in Θ, we create a phantom latent variable (D_1, D_2, ...)
- the original residual vector ε is reparameterized as

  \epsilon = \Lambda_D D + \epsilon^*, \qquad D \sim N(0, \Psi_D), \qquad \epsilon^* \sim N(0, \Theta^*)

- by carefully choosing the nonzero entries of Λ_D, both Ψ_D and Θ* are diagonal
- the original covariance matrix Θ can be re-obtained via

  \Theta = \Lambda_D \Psi_D \Lambda_D^\top + \Theta^*

- Palomo, Dunson, & Bollen (2007) set the nonzero entries of Λ_D to 1 (not allowing for negative covariances)

parameter expansion
- our approach: define the working-model parameters in such a way that we can set priors on the inferential-model parameters
- this involves separately specifying priors on correlation and variance parameters

original formulation (inferential model):

[path diagram: observed variables X1 and X2 with residual variances θ11 and θ22 and residual covariance θ12]

parameter expansion

working model:

[path diagram: X1 and X2 each loading on a phantom variable D with loadings λ1 and λ2, Var(D) = ψ_D, and residual variances θ11* and θ22*]

parameterization:

\psi_D = 1, \qquad
\lambda_1 = \sqrt{|\rho_{12}|\,\theta_{11}}, \qquad
\lambda_2 = \mathrm{sign}(\rho_{12})\,\sqrt{|\rho_{12}|\,\theta_{22}}, \qquad
\theta_{11}^* = \theta_{11} - |\rho_{12}|\,\theta_{11}, \qquad
\theta_{22}^* = \theta_{22} - |\rho_{12}|\,\theta_{22}

priors:

\theta_{11} \sim \mathrm{IG}(\cdot,\cdot), \qquad
\theta_{22} \sim \mathrm{IG}(\cdot,\cdot), \qquad
\rho_{12} \sim \mathrm{Beta}_{(-1,1)}(\cdot,\cdot)

these priors are related to those used by Muthén and Asparouhov (2012), based on results from Barnard, McCulloch, and Meng (2000)
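A small numeric check of this reparameterization (a sketch with illustrative values, not from the slides): the working-model quantities reproduce the inferential-model covariance matrix Θ.

# Sketch: verify that the working-model parameterization above recovers Theta.
theta11 <- 2.0; theta22 <- 1.5; rho12 <- -0.4      # hypothetical inferential-model values

psiD     <- 1
lambda1  <- sqrt(abs(rho12) * theta11)
lambda2  <- sign(rho12) * sqrt(abs(rho12) * theta22)
theta11s <- theta11 - abs(rho12) * theta11
theta22s <- theta22 - abs(rho12) * theta22

lambdaD    <- c(lambda1, lambda2)
Theta_star <- diag(c(theta11s, theta22s))
Theta      <- lambdaD %*% t(lambdaD) * psiD + Theta_star   # Lambda_D Psi_D Lambda_D' + Theta*

Theta
# should equal: diag = (theta11, theta22), off-diagonal = rho12 * sqrt(theta11 * theta22)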

blavaan

this approach is implemented in the R package blavaan:
- model specification via lavaan syntax
- automatic translation to JAGS
- Bayes-specific model summaries and statistics
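Minimal setup sketch (assumes a CRAN build of blavaan is available):

install.packages("blavaan")   # assumption: installed from CRAN
library(blavaan)              # also loads lavaan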

blavaan

most of blavaan is similar to lavaan; new features:
- easy, flexible specification of prior distributions, for individual model parameters or for classes of parameters
- intuitive specification of priors for covariance parameters: separately for correlations and for precisions (or variances or standard deviations)
- ability to save the JAGS code and data for study or extension
- use of novel statistics through other R packages

political democracy

[path diagram of the political democracy model, as shown earlier: ind60 measured by x1-x3, dem60 by y1-y4, dem65 by y5-y8, with cross-time residual covariances]

political democracy

model <- '
  # latent variable definitions
  ind60 =~ x1 + x2 + x3
  dem60 =~ y1 + a*y2 + b*y3 + c*y4
  dem65 =~ y5 + a*y6 + b*y7 + c*y8
  # regressions
  dem60 ~ ind60
  dem65 ~ ind60 + dem60
  # residual correlations
  y1 ~~ y5
  y2 ~~ y4 + y6
  y3 ~~ y7
  y4 ~~ y8
  y6 ~~ y8
'

fit <- bsem(model, data = PoliticalDemocracy)

political democracy (more options)

model <- '
  # latent variable definitions
  ind60 =~ x1 + x2 + x3
  dem60 =~ y1 + a*y2 + b*y3 + c*y4
  dem65 =~ y5 + a*y6 + b*y7 + c*y8
  # regressions
  dem60 ~ ind60
  dem65 ~ ind60 + prior("dnorm(0.8,.1)")*dem60
  # residual correlations
  y1 ~~ y5
  y2 ~~ y4 + y6
  y3 ~~ y7
  y4 ~~ y8
  y6 ~~ y8
'

fit <- bsem(model, data = PoliticalDemocracy,
            jagcontrol = list(method = "rjparallel"),
            dp = dpriors(nu = "dnorm(5,1e-2)", itheta = "dlnorm(1,.1)[sd]"),
            n.chains = 4, burnin = 5000, sample = 5000,
            jagfile = TRUE)

rjags

Loading required package: lavaan
This is lavaan 0.5-21.968
lavaan is BETA software! Please report any bugs.
This is blavaan 0.1-4
blavaan is more BETA than lavaan!

Compiling rjags model...
Calling the simulation using the rjags method...
Adapting the model for 1000 iterations...
  ++++++++++++++++++++++++++++++++++++++++++++++++++ 100%
Burning in the model for 4000 iterations...
  ************************************************** 100%
Running the model for 10000 iterations...
  ************************************************** 100%
Simulation complete
Calculating summary statistics...
Calculating the Gelman-Rubin statistic for 48 variables...
Note: Unable to calculate the multivariate psrf
Finished running the simulation
Computing posterior predictives...

convergence
- the potential scale reduction factors (PSRF) and effective sample size statistics are available via summary(fit, psrf=TRUE, neff=TRUE)
- a warning is given when a PSRF exceeds 1.2 (suggesting that longer chains are needed)
- a large discrepancy between the effective sample size and the simulation sample size indicates poor mixing
- runjags provides many plots; e.g., traceplots for the first four parameters can be obtained for each chain via plot(fit, pars=1:4, plot.type="trace")
- other options for plot.type: a character vector of plots to produce, from trace, density, ecdf, histogram, autocorr, crosscorr, key, or all; these are all based on the equivalent plots from the lattice package, with some modifications
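Collected into one small workflow, using only the calls named above (a sketch):

summary(fit, psrf = TRUE, neff = TRUE)          # PSRFs and effective sample sizes
plot(fit, pars = 1:4, plot.type = "trace")      # traceplots, per chain
plot(fit, pars = 1:4, plot.type = "autocorr")   # autocorrelation plots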

traceplot (lambda[2,1])

[traceplot of lambda[2,1] for 3 chains over iterations 6000-15000; sampled values range roughly from 1.8 to 2.8]

autocorrelation (lambda[2,1])

[autocorrelation plot of lambda[2,1] for lags 0-40]

summary(fit)

blavaan (0.1-4) results of 10000 samples after 5000 adapt+burnin iterations

  Number of observations                    75
  Number of missing patterns                 1

  Statistic                         MargLogLik         PPP
  Value                              -1680.859       0.524

Parameter Estimates:

  Information                                 MCMC
  Standard Errors                             MCMC

Latent Variables:
               Post.Mean  Post.SD  HPD.025  HPD.975   PSRF  Prior
  ind60 =~
    x1             1.000
    x2             2.207    0.153    1.917    2.512  1.001  dnorm(0,1e-2)
    x3             1.866    0.168    1.535    2.195  1.000  dnorm(0,1e-2)
  dem60 =~
    y1             1.000
    y2       (a)   1.217    0.151    0.925    1.519  1.002  dnorm(0,1e-2)
    y3       (b)   1.201    0.129    0.955    1.457  1.002  dnorm(0,1e-2)
    y4       (c)   1.288    0.138    1.028    1.565  1.005  dnorm(0,1e-2)
  dem65 =~

    y5             1.000
    y6       (a)   1.217    0.151    0.925    1.519  1.002
    y7       (b)   1.201    0.129    0.955    1.457  1.002
    y8       (c)   1.288    0.138    1.028    1.565  1.005

Regressions:
               Post.Mean  Post.SD  HPD.025  HPD.975   PSRF  Prior
  dem60 ~
    ind60          1.480    0.399    0.704    2.263  1.000  dnorm(0,1e-2)
  dem65 ~
    ind60          0.602    0.256    0.093    1.098  1.000  dnorm(0,1e-2)
    dem60          0.856    0.079    0.702    1.011  1.001  dnorm(0,1e-2)

Covariances:
               Post.Mean  Post.SD  HPD.025  HPD.975   PSRF  Prior
  y1 ~~
    y5             0.650    0.371   -0.026    1.406  1.002  dbeta(1,1)
  y2 ~~
    y4             1.335    0.704    0.047    2.775  1.008  dbeta(1,1)
    y6             2.099    0.731    0.728    3.567  1.001  dbeta(1,1)
  y3 ~~
    y7             0.850    0.622    -0.33    2.108  1.000  dbeta(1,1)
  y4 ~~
    y8             0.404    0.459    -0.46    1.347  1.009  dbeta(1,1)
  y6 ~~
    y8             1.175    0.571    0.084    2.291  1.001  dbeta(1,1)

Intercepts:

               Post.Mean  Post.SD  HPD.025  HPD.975   PSRF  Prior
    x1             5.054    0.084    4.887    5.218  1.002  dnorm(0,1e-3)
    x2             4.792    0.172     4.45    5.129  1.004  dnorm(0,1e-3)
    x3             3.557    0.162    3.248    3.887  1.003  dnorm(0,1e-3)
    y1             5.455    0.303    4.884    6.059  1.003  dnorm(0,1e-3)
    y2             4.239    0.450    3.363    5.122  1.003  dnorm(0,1e-3)
    y3             6.549    0.402    5.752    7.326  1.002  dnorm(0,1e-3)
    y4             4.438    0.390    3.695    5.226  1.003  dnorm(0,1e-3)
    y5             5.124    0.310    4.525    5.739  1.003  dnorm(0,1e-3)
    y6             2.958    0.404     2.19    3.754  1.002  dnorm(0,1e-3)
    y7             6.179    0.375    5.438      6.9  1.003  dnorm(0,1e-3)
    y8             4.023    0.388    3.264    4.776  1.002  dnorm(0,1e-3)
    ind60          0.000
    dem60          0.000
    dem65          0.000

Variances:
               Post.Mean  Post.SD  HPD.025  HPD.975   PSRF  Prior
    x1             0.100    0.021    0.062    0.142  1.000  dgamma(1,.5)
    x2             0.176    0.059    0.072    0.292  1.000  dgamma(1,.5)
    x3             0.478    0.096      0.3     0.67  1.000  dgamma(1,.5)
    y1             1.953    0.501    1.021    2.975  1.003  dgamma(1,.5)
    y2             7.766    1.420    5.173   10.604  1.003  dgamma(1,.5)
    y3             5.137    1.024    3.333    7.217  1.000  dgamma(1,.5)
    y4             3.259    0.810    1.782    4.914  1.004  dgamma(1,.5)
    y5             2.447    0.524    1.463    3.468  1.001  dgamma(1,.5)
    y6             4.990    0.921    3.357      6.9  1.000  dgamma(1,.5)
    y7             3.634    0.758    2.274    5.181  1.000  dgamma(1,.5)

    y8             3.246    0.750    1.851    4.746  1.001  dgamma(1,.5)
    ind60          0.451    0.090    0.283    0.628  1.000  dgamma(1,.5)
    dem60          3.863    0.933    2.132    5.703  1.001  dgamma(1,.5)
    dem65          0.370    0.185    0.076    0.729  1.000  dgamma(1,.5)

lavaan extractor functions

parameter estimates:

parameterestimates(fit, ci = FALSE)

    lhs  op rhs label   est    se
1 ind60  =~  x1         1.000 0.000
2 ind60  =~  x2         2.207 0.153
3 ind60  =~  x3         1.866 0.168
4 dem60  =~  y1         1.000 0.000
5 dem60  =~  y2     a   1.217 0.151
...

estimated parameter values:

coef(fit)

  ind60=~x2   ind60=~x3           a           b           c
      2.207       1.866       1.217       1.201       1.288
          a           b           c dem60~ind60 dem65~ind60
      1.217       1.201       1.288       1.480       0.602
dem65~dem60      y1~~y5      y2~~y4      y2~~y6      y3~~y7
      0.856       0.650       1.335       2.099       0.850
     y4~~y8      y6~~y8      x1~~x1      x2~~x2      x3~~x3
      0.404       1.175       0.100       0.176       0.478
...

priors
- the default priors can be seen via dpriors():

             nu           alpha          lambda            beta
"dnorm(0,1e-3)" "dnorm(0,1e-2)" "dnorm(0,1e-2)" "dnorm(0,1e-2)"
         itheta            ipsi             rho           ibpsi
 "dgamma(1,.5)"  "dgamma(1,.5)"    "dbeta(1,1)" "dwish(iden,3)"

- note: these prior distributions use JAGS parameterizations (similar to R, but not the same); JAGS uses precisions instead of variances/standard deviations
- the prior("...")* modifier can be used in the model syntax to pass a custom prior for a specific parameter to JAGS
- note: for covariances, this must be a prior for the correlation; the distribution should have support on (0,1), and blavaan automatically translates it to an equivalent distribution with support on (-1,1)
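A hedged sketch of a custom correlation prior; the x7-x8 residual covariance and the dbeta(2,2) choice are illustrative, not taken from the slides:

model2 <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
  # custom prior on the x7-x8 residual correlation; dbeta has support on (0,1),
  # which blavaan rescales to (-1,1) as described above
  x7 ~~ prior("dbeta(2,2)")*x8
'
fit2 <- bcfa(model2, data = HolzingerSwineford1939)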

Bayesian model evaluation

fitmeasures(fit)

      npar       logl        ppp        bic        dic      p_dic       waic
    39.000  -1551.015      0.527   3269.889   3174.238     36.104   3178.799
    p_waic      looic      p_loo margloglik
    37.986   3179.870     38.522  -1680.822

- blavaan computes its own log-likelihood after model estimation (it does not rely on JAGS)
- posterior predictive p-value (ppp): larger values indicate better fit, values near 0 signal misfit
- Deviance Information Criterion (DIC)
- Widely Applicable Information Criterion (WAIC)
- leave-one-out cross-validation statistic (loo)
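A small usage sketch, assuming fitmeasures() accepts a vector of measure names as in lavaan:

fitmeasures(fit, c("ppp", "dic", "waic", "looic"))   # request a subset of the criteria above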

Bayes Factor

two types of approximations:
- the Laplace approximation, obtained via BF() (experimental)
- the Savage-Dickey approximation, obtained via summary(fit, bf=TRUE): one-parameter-at-a-time Bayes factors comparing a model with that parameter fixed to 0 vs. a model with that parameter free; it assumes the posterior is normal
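A sketch of the calls named above; the argument list of BF() is not shown on the slide, so the commented call is an assumption:

summary(fit, bf = TRUE)   # Savage-Dickey Bayes factors, one per parameter
# BF(fit, fit0)           # Laplace approximation, assumed to compare two fitted models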

cross-loadings
- Muthén and Asparouhov (2012) describe the use of cross-loadings in Bayesian structural equation models
- instead of fixing many loadings to zero, we place high-precision prior distributions on the loadings that would otherwise be fixed to zero
- example using the classic Holzinger & Swineford (1939) model:

HS.model <- '
  visual  =~ x1 + prior("dnorm(0,.1)")*x2 + prior("dnorm(0,.1)")*x3 +
             x4 + x5 + x6 + x7 + x8 + x9
  textual =~ x4 + prior("dnorm(0,.1)")*x5 + prior("dnorm(0,.1)")*x6 +
             x1 + x2 + x3 + x7 + x8 + x9
  speed   =~ x7 + prior("dnorm(0,.1)")*x8 + prior("dnorm(0,.1)")*x9 +
             x1 + x2 + x3 + x4 + x5 + x6
'

fit <- bcfa(HS.model, data = HolzingerSwineford1939, group = "school",
            dp = dpriors(lambda = "dnorm(0,100)"))

- regular loadings: variance = 10; cross-loadings: variance = 0.01

future

some plans:
- discrete data
- increased sampling efficiency (reducing autocorrelation)
- Stan export
- novel models (new distributions, etc.)

Thank you! (questions?)

https://faculty.missouri.edu/~merklee/blavaan/

JAGS code (somewhat beautified)

model {
  for(i in 1:N) {
    # ov y
    for(j in 1:11) {
      y[i,j] ~ dnorm(mu[i,j], invthetstar[j,g[i]])
    }

    # mu
    mu[i, 1] <- nu[ 1,g[i]] + lambda[ 1,g[i]]*eta[i,1]
    mu[i, 2] <- nu[ 2,g[i]] + lambda[ 2,g[i]]*eta[i,1]
    mu[i, 3] <- nu[ 3,g[i]] + lambda[ 3,g[i]]*eta[i,1]
    mu[i, 4] <- nu[ 4,g[i]] + lambda[ 4,g[i]]*eta[i,2] + lambda[12,g[i]]*eta[i,4]
    mu[i, 5] <- nu[ 5,g[i]] + lambda[ 5,g[i]]*eta[i,2] + lambda[14,g[i]]*eta[i,5] + lambda[16,g[i]]*eta[i,6]
    mu[i, 6] <- nu[ 6,g[i]] + lambda[ 6,g[i]]*eta[i,2] + lambda[18,g[i]]*eta[i,7]
    mu[i, 7] <- nu[ 7,g[i]] + lambda[ 7,g[i]]*eta[i,2] + lambda[15,g[i]]*eta[i,5] + lambda[20,g[i]]*eta[i,8]
    mu[i, 8] <- nu[ 8,g[i]] + lambda[ 8,g[i]]*eta[i,3] + lambda[13,g[i]]*eta[i,4]
    mu[i, 9] <- nu[ 9,g[i]] + lambda[ 9,g[i]]*eta[i,3] + lambda[17,g[i]]*eta[i,6] + lambda[22,g[i]]*eta[i,9]
    mu[i,10] <- nu[10,g[i]] + lambda[10,g[i]]*eta[i,3] + lambda[19,g[i]]*eta[i,7]
    mu[i,11] <- nu[11,g[i]] + lambda[11,g[i]]*eta[i,3] + lambda[21,g[i]]*eta[i,8] + lambda[23,g[i]]*eta[i,9]

    # lvs
    eta[i,1] ~ dnorm(mu.eta[i,1], invpsistar[1,g[i]])
    eta[i,2] ~ dnorm(mu.eta[i,2], invpsistar[2,g[i]])
    eta[i,3] ~ dnorm(mu.eta[i,3], invpsistar[3,g[i]])
    eta[i,4] ~ dnorm(mu.eta[i,4], invpsistar[4,g[i]])
    eta[i,5] ~ dnorm(mu.eta[i,5], invpsistar[5,g[i]])
    eta[i,6] ~ dnorm(mu.eta[i,6], invpsistar[6,g[i]])
    eta[i,7] ~ dnorm(mu.eta[i,7], invpsistar[7,g[i]])
    eta[i,8] ~ dnorm(mu.eta[i,8], invpsistar[8,g[i]])
    eta[i,9] ~ dnorm(mu.eta[i,9], invpsistar[9,g[i]])

    mu.eta[i,1] <- alpha[1,g[i]]
    mu.eta[i,2] <- alpha[2,g[i]] + beta[1,g[i]]*eta[i,1]
    mu.eta[i,3] <- alpha[3,g[i]] + beta[2,g[i]]*eta[i,1] + beta[3,g[i]]*eta[i,2]
    mu.eta[i,4] <- 0
    mu.eta[i,5] <- 0
    mu.eta[i,6] <- 0
    mu.eta[i,7] <- 0
    mu.eta[i,8] <- 0
    mu.eta[i,9] <- 0
  }

  # Priors/constraints
  nu[1,1] ~ dnorm(0,1e-3)
  lambda[1,1] <- 1
  nu[2,1] ~ dnorm(0,1e-3)
  lambda[2,1] ~ dnorm(0,1e-2)
  nu[3,1] ~ dnorm(0,1e-3)
  lambda[3,1] ~ dnorm(0,1e-2)
  nu[4,1] ~ dnorm(0,1e-3)
  lambda[4,1] <- 1
  lambda[12,1] <- sqrt(abs(rho[1,1])*theta[4,1])
  nu[5,1] ~ dnorm(0,1e-3)
  lambda[5,1] ~ dnorm(0,1e-2)
  lambda[14,1] <- sqrt(abs(rho[2,1])*theta[5,1])
  lambda[16,1] <- sqrt(abs(rho[3,1])*theta[5,1])
  nu[6,1] ~ dnorm(0,1e-3)
  lambda[6,1] ~ dnorm(0,1e-2)
  lambda[18,1] <- sqrt(abs(rho[4,1])*theta[6,1])
  nu[7,1] ~ dnorm(0,1e-3)

  lambda[7,1] ~ dnorm(0,1e-2)
  lambda[15,1] <- (-1 + 2*step(rho[2,1]))*sqrt(abs(rho[2,1])*theta[7,1])
  lambda[20,1] <- sqrt(abs(rho[5,1])*theta[7,1])
  nu[8,1] ~ dnorm(0,1e-3)
  lambda[8,1] <- 1
  lambda[13,1] <- (-1 + 2*step(rho[1,1]))*sqrt(abs(rho[1,1])*theta[8,1])
  nu[9,1] ~ dnorm(0,1e-3)
  lambda[9,1] <- lambda[5,1]
  lambda[17,1] <- (-1 + 2*step(rho[3,1]))*sqrt(abs(rho[3,1])*theta[9,1])
  lambda[22,1] <- sqrt(abs(rho[6,1])*theta[9,1])
  nu[10,1] ~ dnorm(0,1e-3)
  lambda[10,1] <- lambda[6,1]
  lambda[19,1] <- (-1 + 2*step(rho[4,1]))*sqrt(abs(rho[4,1])*theta[10,1])
  nu[11,1] ~ dnorm(0,1e-3)
  lambda[11,1] <- lambda[7,1]
  lambda[21,1] <- (-1 + 2*step(rho[5,1]))*sqrt(abs(rho[5,1])*theta[11,1])
  lambda[23,1] <- (-1 + 2*step(rho[6,1]))*sqrt(abs(rho[6,1])*theta[11,1])

  alpha[1,1] <- 0
  alpha[2,1] <- 0
  beta[1,1] ~ dnorm(0,1e-2)
  alpha[3,1] <- 0
  beta[2,1] ~ dnorm(0,1e-2)
  beta[3,1] ~ dnorm(0,1e-2)

  invtheta[1,1] ~ dgamma(1,.5)
  invtheta[2,1] ~ dgamma(1,.5)
  invtheta[3,1] ~ dgamma(1,.5)
  invtheta[4,1] ~ dgamma(1,.5)
  invtheta[5,1] ~ dgamma(1,.5)
  invtheta[6,1] ~ dgamma(1,.5)
  invtheta[7,1] ~ dgamma(1,.5)
  invtheta[8,1] ~ dgamma(1,.5)
  invtheta[9,1] ~ dgamma(1,.5)
  invtheta[10,1] ~ dgamma(1,.5)
  invtheta[11,1] ~ dgamma(1,.5)

  for(j in 1:11) {
    for(k in 1:1) {
      theta[j,k] <- 1/invtheta[j,k]
    }
  }

  invpsi[1,1] ~ dgamma(1,.5)
  invpsi[2,1] ~ dgamma(1,.5)
  invpsi[3,1] ~ dgamma(1,.5)
  invpsi[4,1] <- 1
  invpsi[5,1] <- 1
  invpsi[6,1] <- 1
  invpsi[7,1] <- 1
  invpsi[8,1] <- 1
  invpsi[9,1] <- 1

  for(j in 1:9) {
    for(k in 1:1) {
      psi[j,k] <- 1/invpsi[j,k]
    }
  }

  # correlations/covariances
  rho[1,1] <- -1 + 2*rstar[1,1]
  rstar[1,1] ~ dbeta(1,1)
  rho[2,1] <- -1 + 2*rstar[2,1]
  rstar[2,1] ~ dbeta(1,1)
  rho[3,1] <- -1 + 2*rstar[3,1]
  rstar[3,1] ~ dbeta(1,1)
  rho[4,1] <- -1 + 2*rstar[4,1]
  rstar[4,1] ~ dbeta(1,1)
  rho[5,1] <- -1 + 2*rstar[5,1]
  rstar[5,1] ~ dbeta(1,1)

  rho[6,1] <- -1 + 2*rstar[6,1]
  rstar[6,1] ~ dbeta(1,1)

  # variances & covariances
  invthetstar[1,1] <- 1/(theta[1,1])
  invthetstar[2,1] <- 1/(theta[2,1])
  invthetstar[3,1] <- 1/(theta[3,1])
  invthetstar[4,1] <- 1/(theta[4,1] - (lambda[12,1]^2/invpsi[4,1]))
  invthetstar[5,1] <- 1/(theta[5,1] - (lambda[14,1]^2/invpsi[5,1]) - (lambda[16,1]^2/invpsi[6,1]))
  invthetstar[6,1] <- 1/(theta[6,1] - (lambda[18,1]^2/invpsi[7,1]))
  invthetstar[7,1] <- 1/(theta[7,1] - (lambda[15,1]^2/invpsi[5,1]) - (lambda[20,1]^2/invpsi[8,1]))
  invthetstar[8,1] <- 1/(theta[8,1] - (lambda[13,1]^2/invpsi[4,1]))
  invthetstar[9,1] <- 1/(theta[9,1] - (lambda[17,1]^2/invpsi[6,1]) - (lambda[22,1]^2/invpsi[9,1]))
  invthetstar[10,1] <- 1/(theta[10,1] - (lambda[19,1]^2/invpsi[7,1]))
  invthetstar[11,1] <- 1/(theta[11,1] - (lambda[21,1]^2/invpsi[8,1]) - (lambda[23,1]^2/invpsi[9,1]))

  invpsistar[1,1] <- 1/(psi[1,1])
  invpsistar[2,1] <- 1/(psi[2,1])
  invpsistar[3,1] <- 1/(psi[3,1])
  invpsistar[4,1] <- 1/(psi[4,1])
  invpsistar[5,1] <- 1/(psi[5,1])
  invpsistar[6,1] <- 1/(psi[6,1])
  invpsistar[7,1] <- 1/(psi[7,1])

  invpsistar[8,1] <- 1/(psi[8,1])
  invpsistar[9,1] <- 1/(psi[9,1])

  cov[1,1] <- psi[4,1]*lambda[12,1]*lambda[13,1]
  cov[2,1] <- psi[5,1]*lambda[14,1]*lambda[15,1]
  cov[3,1] <- psi[6,1]*lambda[16,1]*lambda[17,1]
  cov[4,1] <- psi[7,1]*lambda[18,1]*lambda[19,1]
  cov[5,1] <- psi[8,1]*lambda[20,1]*lambda[21,1]
  cov[6,1] <- psi[9,1]*lambda[22,1]*lambda[23,1]

} # End of model
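If the JAGS syntax and data are exported (jagfile = TRUE), they can also be run by hand; a hedged sketch with rjags, where the file name "sem.jag" and the objects jagdata/jaginits are illustrative names, not necessarily what blavaan writes to disk:

library(rjags)
jm   <- jags.model("sem.jag", data = jagdata, inits = jaginits, n.chains = 3)
update(jm, 5000)                                   # burn-in
samp <- coda.samples(jm,
                     variable.names = c("nu", "lambda", "beta",
                                        "theta", "psi", "rho"),
                     n.iter = 10000)
summary(samp)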

some elements in jagtrans

$coefvec
         jlabel          plabel          prior
1   lambda[2,1]            .p2.  dnorm(0,1e-2)
2   lambda[3,1]            .p3.  dnorm(0,1e-2)
3   lambda[5,1]            .p5.  dnorm(0,1e-2)
4   lambda[6,1]            .p6.  dnorm(0,1e-2)
5   lambda[7,1]            .p7.  dnorm(0,1e-2)
6   lambda[9,1]            .p9.
7  lambda[10,1]           .p10.
8  lambda[11,1]           .p11.
9     beta[1,1]           .p12.  dnorm(0,1e-2)
10    beta[2,1]           .p13.  dnorm(0,1e-2)
11    beta[3,1]           .p14.  dnorm(0,1e-2)
12     cov[1,1]  .p15.@rho[1,1]     dbeta(1,1)
13     cov[2,1]  .p16.@rho[2,1]     dbeta(1,1)
14     cov[3,1]  .p17.@rho[3,1]     dbeta(1,1)
15     cov[4,1]  .p18.@rho[4,1]     dbeta(1,1)
16     cov[5,1]  .p19.@rho[5,1]     dbeta(1,1)
17     cov[6,1]  .p20.@rho[6,1]     dbeta(1,1)
18   theta[1,1]           .p21.   dgamma(1,.5)
19   theta[2,1]           .p22.   dgamma(1,.5)
20   theta[3,1]           .p23.   dgamma(1,.5)
21   theta[4,1]           .p24.   dgamma(1,.5)
22   theta[5,1]           .p25.   dgamma(1,.5)
23   theta[6,1]           .p26.   dgamma(1,.5)
24   theta[7,1]           .p27.   dgamma(1,.5)

25   theta[8,1]           .p28.   dgamma(1,.5)
26   theta[9,1]           .p29.   dgamma(1,.5)
27  theta[10,1]           .p30.   dgamma(1,.5)
28  theta[11,1]           .p31.   dgamma(1,.5)
29     psi[1,1]           .p32.   dgamma(1,.5)
30     psi[2,1]           .p33.   dgamma(1,.5)
31     psi[3,1]           .p34.   dgamma(1,.5)
32      nu[1,1]           .p35.  dnorm(0,1e-3)
33      nu[2,1]           .p36.  dnorm(0,1e-3)
34      nu[3,1]           .p37.  dnorm(0,1e-3)
35      nu[4,1]           .p38.  dnorm(0,1e-3)
36      nu[5,1]           .p39.  dnorm(0,1e-3)
37      nu[6,1]           .p40.  dnorm(0,1e-3)
38      nu[7,1]           .p41.  dnorm(0,1e-3)
39      nu[8,1]           .p42.  dnorm(0,1e-3)
40      nu[9,1]           .p43.  dnorm(0,1e-3)
41     nu[10,1]           .p44.  dnorm(0,1e-3)
42     nu[11,1]           .p45.  dnorm(0,1e-3)
43     rho[1,1]           .p15.     dbeta(1,1)
44     rho[2,1]           .p16.     dbeta(1,1)
45     rho[3,1]           .p17.     dbeta(1,1)
46     rho[4,1]           .p18.     dbeta(1,1)
47     rho[5,1]           .p19.     dbeta(1,1)
48     rho[6,1]           .p20.     dbeta(1,1)

$inits
$inits$c1

$inits$c1$invtheta
           [,1]
 [1,] 1.9557533
 [2,] 0.9671030
 [3,] 0.5619125
 [4,] 1.5149037
 [5,] 4.4426965
 [6,] 0.4827693
 [7,] 0.4929958
 [8,] 0.9739004
 [9,] 0.4149735
[10,] 2.1122322
[11,] 0.7178149

$inits$c1$invpsi
          [,1]
[1,] 1.0789384
[2,] 2.1748271
[3,] 0.1565811
[4,]        NA
[5,]        NA
[6,]        NA
[7,]        NA
[8,]        NA
[9,]        NA

$inits$c1$rstar
          [,1]

[1,] 0.4640470
[2,] 0.4874579
[3,] 0.5318592
[4,] 0.5454525
[5,] 0.4432848
[6,] 0.4398773

$inits$c1$nu
             [,1]
 [1,]  0.20174677
 [2,] -0.39573091
 [3,]  0.18335548
 [4,]  0.09869364
 [5,]  1.45794347
 [6,] -1.68639560
 [7,]  0.82112432
 [8,]  2.03342908
 [9,] -0.96879889
[10,]  0.80418652
[11,]  0.39538881

$inits$c1$lambda
          [,1]
 [1,]       NA
 [2,] 1.956661
 [3,] 1.403942
 [4,]       NA
 [5,] 1.594785

 [6,] 1.377266
 [7,] 1.796208
 [8,]       NA
 [9,]       NA
[10,]       NA
[11,]       NA
[12,]       NA
[13,]       NA
[14,]       NA
[15,]       NA
[16,]       NA
[17,]       NA
[18,]       NA
[19,]       NA
[20,]       NA
[21,]       NA
[22,]       NA
[23,]       NA

$inits$c1$beta
         [,1]
[1,] 1.869905
[2,] 0.906948
[3,] 1.129366

$inits$c2
$inits$c2$invtheta

           [,1]
 [1,] 0.3805187
 [2,] 0.2536417
 [3,] 0.6088813
 [4,] 2.8322974
 [5,] 0.7341873
 [6,] 0.4878023
 [7,] 1.9332653
 [8,] 1.4647736
 [9,] 0.9350006
[10,] 0.7716892
[11,] 0.4442128
...

$inits$c3
$inits$c3$invtheta
             [,1]
 [1,] 1.154777482
 [2,] 0.823284066
 [3,] 0.564649482
 [4,] 1.082572247
 [5,] 2.637141568
 [6,] 1.572037018
 [7,] 1.212736063
 [8,] 0.739781788
 [9,] 0.087279519
[10,] 0.007934238

[11,] 2.683140417
...

$data$g
 [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[39] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

$data$N
[1] 75