Online Appendices for: Modeling Latent Growth With Multiple Indicators: A Comparison of Three Approaches


Online Appendices for: Modeling Latent Growth With Multiple Indicators: A Comparison of Three Approaches

Jacob Bishop and Christian Geiser
Utah State University

David A. Cole
Vanderbilt University

Contents

First-Order Autoregressive Structure Among Latent State Residuals
Mean and Covariance Structure in the SGM, GSGM, and ISGM
    SGM Mean and Covariance Structure
    GSGM Mean and Covariance Structure
    ISGM Mean and Covariance Structure
    Autoregressive (AR) Models
    Summary
Proportionality Constraint
    General to Specific Variance Ratio for the SGM
    General to Specific Variance Ratio for the GSGM
    General to Specific Variance Ratio for the ISGM
Degrees of Freedom in the SGM, GSGM, and ISGM
    Computing df
    Known Parameters
    Estimated Parameters and Degrees of Freedom
Model Identification with Two Indicators per Time Point

Author Note. Jacob Bishop and Christian Geiser, Department of Psychology, Utah State University. David A. Cole, Department of Psychology and Human Development, Vanderbilt University. Correspondence concerning this article should be addressed to Jacob Bishop, 2810 Old Main Hill, Logan, UT. E-mail: jacob.bishop@usu.edu

Variance Components and Coefficients
    SGM
    GSGM
    ISGM
Mplus Model Inputs for SGM, GSGM, and ISGM
    Model 1a: SGM
    Model 1b: SGM, α, λ-mi
    Model 2a: GSGM
    Model 2b: GSGM, α, λ-mi
    Model 2c: GSGM, α, λ, γ-mi
    Model 3a: ISGM
    Model 3b: ISGM, γ-mi
Monte-Carlo Simulation for the ISGM
    Method
    Criteria for Evaluating the Performance of the Models
    Results
    Summary and Discussion
References

Appendix A

First-Order Autoregressive Structure Among Latent State Residual Variables

Steyer and Schmitt (1994) were among the first to propose the inclusion of an autoregressive structure in LST models to capture stability beyond what can be accounted for by a common trait factor. Cole, Martin, and Steiger (2005) examined the meaning of autoregressive structures in LST models in detail and noted that it is often appropriate to include autoregressive effects between latent state residual variables to model short-term stability or carry-over effects in these models. In a similar vein, Bollen and Curran (2004) presented so-called autoregressive latent trajectory (ALT) models that include autoregressive paths between observed variables in single-indicator LGC models. Murphy et al. (2011) recently studied the meaning of autocorrelations in the SGM.

A first-order autoregressive structure implies a time-dependence among adjacent latent state residuals, with an additional unpredictable error component δ_t:

    ζ_t = β_t(t-1) ζ_(t-1) + δ_t,  (A.1)

where β_t(t-1) is a real constant and the δ_t variables have a mean of zero and are uncorrelated with each other as well as with all other latent and error variables in the model. Given that we assume a first-order autoregressive structure, the notation for the regression coefficient may be simplified by defining

    β_t ≡ β_t(t-1).  (A.2)

We illustrate a first-order autoregressive structure graphically for the SGM in Figure A1A, the GSGM in Figure A1B, and the ISGM in Figure A1C. Including first-order autoregressive effects among the latent state residual variables relaxes the assumption that all of the across-time stability is entirely due to the trait and trait-change process. Instead, part of the (short-term) stability can now be captured by the autoregressive effects, accounting for the fact that adjacent measurements are often more highly correlated than measurements that are further apart in time. In practical applications, it is common to assume time-invariance of the autoregressive effect, that is, β_t = β_s = β for all s and t.
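Readers who wish to add such a structure to the Mplus inputs in Appendix G could do so with ON statements among the state residual factors. The lines below are only a sketch; they assume the factor names zeta1-zeta4 used in Appendix G, and the shared label (beta) imposes the time-invariance constraint β_t = β discussed above.

! Sketch only: first-order autoregressive effects among the latent
! state residual factors of the Appendix G inputs. The shared label
! (beta) constrains the autoregressive effect to be time-invariant.
zeta2 ON zeta1 (beta);
zeta3 ON zeta2 (beta);
zeta4 ON zeta3 (beta);
! With these paths added, the estimated variances of zeta2-zeta4
! correspond to the residual variances var(delta_t) rather than var(zeta_t).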

Figure A1. LST models with first-order autoregressive structure among latent state residual variables. A: SGM post-transformation with a first-order autoregressive structure among latent state residual factors. B: GSGM post-transformation with a first-order autoregressive state residual structure. C: ISGM post-transformation with a first-order autoregressive state residual structure.

Appendix B

Mean and Covariance Structure in the SGM, GSGM, and ISGM

SGM Mean and Covariance Structure

The combined equations for the SGM are given in Equation (13). Recognizing that we set α_1t = 0 and λ_1t = γ_1t = 1 to define the metric of the latent factors, it is sufficient to work with only the final line of these equations. This is given by:

    Y_it = α_it + λ_it ξ_int + λ_it (t - 1) ξ_lin + λ_it ζ_t + ε_it.  (B.1)

Mean Structure. We derive the mean structure by substituting Equation (B.1) into the expected value operator and simplifying this expression according to the algebraic rules for expected values:

    E(Y_it) = E(α_it + λ_it ξ_int + λ_it (t - 1) ξ_lin + λ_it ζ_t + ε_it)
            = α_it + λ_it E(ξ_int) + λ_it (t - 1) E(ξ_lin) + λ_it E(ζ_t) + E(ε_it)
            = α_it + λ_it E(ξ_int) + λ_it (t - 1) E(ξ_lin),

because E(ζ_t) = E(ε_it) = 0.

Covariance Structure. First, we expand the covariance by substituting Equation (B.1) into the covariance operator for Y_it and Y_js:

    cov(Y_it, Y_js) = cov(α_it + λ_it ξ_int + λ_it (t - 1) ξ_lin + λ_it ζ_t + ε_it,
                          α_js + λ_js ξ_int + λ_js (s - 1) ξ_lin + λ_js ζ_s + ε_js).

Next, we expand using the rules of covariance algebra (e.g., Kenny). Because cov(k, X) = 0 when k is a constant, the terms involving the intercepts α_it and α_js are zero, so that

    cov(Y_it, Y_js) = cov(λ_it (ξ_int + (t - 1) ξ_lin) + λ_it ζ_t + ε_it,
                          λ_js (ξ_int + (s - 1) ξ_lin) + λ_js ζ_s + ε_js).

Next, we decompose the covariance according to the covariance addition rules. In this step, we separate the error terms ε_it and ε_js.

    cov(Y_it, Y_js) = cov(λ_it (ξ_int + (t - 1) ξ_lin) + λ_it ζ_t,
                          λ_js (ξ_int + (s - 1) ξ_lin) + λ_js ζ_s)
                      + cov(ε_it, λ_js (ξ_int + (s - 1) ξ_lin)) + cov(λ_it (ξ_int + (t - 1) ξ_lin), ε_js)
                      + cov(ε_it, λ_js ζ_s) + cov(λ_it ζ_t, ε_js) + cov(ε_it, ε_js).

The subsequent step focuses on expanding the first covariance term, the covariance between the latent trait and state residual components:

    cov(Y_it, Y_js) = cov(λ_it (ξ_int + (t - 1) ξ_lin), λ_js (ξ_int + (s - 1) ξ_lin))
                      + cov(λ_it (ξ_int + (t - 1) ξ_lin), λ_js ζ_s) + cov(λ_it ζ_t, λ_js (ξ_int + (s - 1) ξ_lin))
                      + cov(λ_it ζ_t, λ_js ζ_s)
                      + cov(ε_it, λ_js (ξ_int + (s - 1) ξ_lin)) + cov(λ_it (ξ_int + (t - 1) ξ_lin), ε_js)
                      + cov(ε_it, λ_js ζ_s) + cov(λ_it ζ_t, ε_js) + cov(ε_it, ε_js).

The first term in this expression, which represents the covariance among the latent trait variables, is now expanded. At the same time, the constant terms λ_it, λ_js, (t - 1), and (s - 1) can be moved outside the covariance expression according to the rule cov(kX, Y) = k cov(X, Y).

    cov(Y_it, Y_js) = λ_it λ_js [var(ξ_int) + [(s - 1) + (t - 1)] cov(ξ_int, ξ_lin) + (t - 1)(s - 1) var(ξ_lin)]
                      + λ_it λ_js cov(ξ_int + (t - 1) ξ_lin, ζ_s) + λ_it λ_js cov(ζ_t, ξ_int + (s - 1) ξ_lin)
                      + λ_it λ_js cov(ζ_t, ζ_s)
                      + λ_js cov(ε_it, ξ_int + (s - 1) ξ_lin) + λ_it cov(ξ_int + (t - 1) ξ_lin, ε_js)
                      + λ_js cov(ε_it, ζ_s) + λ_it cov(ζ_t, ε_js) + cov(ε_it, ε_js).

Finally, this expression is simplified because the following restrictions apply for this model:

    cov(ξ_int, ζ_t) = cov(ξ_lin, ζ_t) = cov(ξ_int, ε_it) = cov(ξ_lin, ε_it) = cov(ζ_t, ε_it) = 0 for all i, s, t,  (B.2)

    cov(ζ_t, ζ_s) = var(ζ_t) when t = s, and 0 otherwise,  (B.3)

    cov(ε_it, ε_js) = var(ε_it) when i = j and t = s, and 0 otherwise.  (B.4)

The resulting general covariance equation is thus:

    cov(Y_it, Y_js) = λ_it λ_js [var(ξ_int) + [(s - 1) + (t - 1)] cov(ξ_int, ξ_lin) + (t - 1)(s - 1) var(ξ_lin)]
                      + λ_it λ_js cov(ζ_t, ζ_s) + cov(ε_it, ε_js).

Note that the final two terms are zero except as given in Equations (B.3) and (B.4).

GSGM Mean and Covariance Structure

The combined equations for the GSGM are given in Equation (18). Recognizing that we set α_1t = 0 and λ_1t = γ_1t = 1, it is sufficient to work with only the final line of these equations. Thus, the general equation for Y_it is given by

    Y_it = α_it + λ_it ξ_int + λ_it (t - 1) ξ_lin + γ_it ζ_t + ε_it.  (B.5)

Mean Structure. We derive the mean structure by substituting Equation (B.5) into the expected value operator, and simplifying this expression according to the algebraic rules for expected values as follows.

    E(Y_it) = E(α_it + λ_it ξ_int + λ_it (t - 1) ξ_lin + γ_it ζ_t + ε_it)
            = α_it + λ_it E(ξ_int) + λ_it (t - 1) E(ξ_lin) + γ_it E(ζ_t) + E(ε_it)
            = α_it + λ_it E(ξ_int) + λ_it (t - 1) E(ξ_lin).

Note that this is equivalent to the mean structure for the SGM, since neither the latent state residual variable ζ_t nor the corresponding loading γ_it appears in the final simplified equation.

Covariance Structure. We now derive the covariance structure in a similar way to the SGM. First, we expand the covariance by substituting the above for Y_it and Y_js:

    cov(Y_it, Y_js) = cov(α_it + λ_it ξ_int + λ_it (t - 1) ξ_lin + γ_it ζ_t + ε_it,
                          α_js + λ_js ξ_int + λ_js (s - 1) ξ_lin + γ_js ζ_s + ε_js).

Because the intercepts are constants, the terms involving α_it and α_js are again zero (cov(k, X) = 0 for constant k), so that

    cov(Y_it, Y_js) = cov(λ_it (ξ_int + (t - 1) ξ_lin) + γ_it ζ_t + ε_it,
                          λ_js (ξ_int + (s - 1) ξ_lin) + γ_js ζ_s + ε_js).

Next, we decompose the covariance according to the rules of covariance addition. In this step, we separate the error terms ε_it and ε_js.

    cov(Y_it, Y_js) = cov(λ_it (ξ_int + (t - 1) ξ_lin) + γ_it ζ_t,
                          λ_js (ξ_int + (s - 1) ξ_lin) + γ_js ζ_s)
                      + cov(ε_it, λ_js (ξ_int + (s - 1) ξ_lin)) + cov(λ_it (ξ_int + (t - 1) ξ_lin), ε_js)
                      + cov(ε_it, γ_js ζ_s) + cov(γ_it ζ_t, ε_js) + cov(ε_it, ε_js).

The subsequent step focuses on expanding the first covariance term, the covariance between the latent trait and state residual components:

    cov(Y_it, Y_js) = cov(λ_it (ξ_int + (t - 1) ξ_lin), λ_js (ξ_int + (s - 1) ξ_lin))
                      + cov(λ_it (ξ_int + (t - 1) ξ_lin), γ_js ζ_s) + cov(γ_it ζ_t, λ_js (ξ_int + (s - 1) ξ_lin))
                      + cov(γ_it ζ_t, γ_js ζ_s)
                      + cov(ε_it, λ_js (ξ_int + (s - 1) ξ_lin)) + cov(λ_it (ξ_int + (t - 1) ξ_lin), ε_js)
                      + cov(ε_it, γ_js ζ_s) + cov(γ_it ζ_t, ε_js) + cov(ε_it, ε_js).

The term that represents the covariance among the latent trait variables is now expanded. At the same time, the constant terms λ_it, λ_js, γ_it, γ_js, (t - 1), and (s - 1) can be moved outside the covariance expression according to the rule cov(kX, Y) = k cov(X, Y).

    cov(Y_it, Y_js) = λ_it λ_js [var(ξ_int) + [(s - 1) + (t - 1)] cov(ξ_int, ξ_lin) + (t - 1)(s - 1) var(ξ_lin)]
                      + λ_it γ_js cov(ξ_int + (t - 1) ξ_lin, ζ_s) + γ_it λ_js cov(ζ_t, ξ_int + (s - 1) ξ_lin)
                      + γ_it γ_js cov(ζ_t, ζ_s)
                      + λ_js cov(ε_it, ξ_int + (s - 1) ξ_lin) + λ_it cov(ξ_int + (t - 1) ξ_lin, ε_js)
                      + γ_js cov(ε_it, ζ_s) + γ_it cov(ζ_t, ε_js) + cov(ε_it, ε_js).

Finally, this expression is simplified because of the restrictions for this model, which are listed in Equations (B.2), (B.3), and (B.4). The resulting general covariance equation for the GSGM is thus:

    cov(Y_it, Y_js) = λ_it λ_js [var(ξ_int) + [(s - 1) + (t - 1)] cov(ξ_int, ξ_lin) + (t - 1)(s - 1) var(ξ_lin)]
                      + γ_it γ_js cov(ζ_t, ζ_s) + cov(ε_it, ε_js).

As before, the final two terms of this equation are zero except as given in Equations (B.3) and (B.4).

ISGM Mean and Covariance Structure

The combined equations for the ISGM are given in Equation (23). Recognizing that we set γ_1t = 1 to define the metric of each ζ_t, it is sufficient to work with only the final line of these equations. Thus, the general equation for Y_it is given by:

    Y_it = α_it + ξ_int_i + (t - 1) ξ_lin_i + γ_it ζ_t + ε_it.  (B.6)

Mean Structure. The mean structure is derived in a similar way to the SGM and GSGM. Equation (B.6) is substituted into the expected value operator, and the expression is simplified according to the algebraic rules for expected values:

    E(Y_it) = E(α_it + ξ_int_i + (t - 1) ξ_lin_i + γ_it ζ_t + ε_it)
            = α_it + E(ξ_int_i) + (t - 1) E(ξ_lin_i) + γ_it E(ζ_t) + E(ε_it)
            = α_it + E(ξ_int_i) + (t - 1) E(ξ_lin_i).

Covariance Structure. The covariance structure for the ISGM is derived in a like manner by substituting Equation (B.6) for Y_it and Y_js in the covariance operator:

    cov(Y_it, Y_js) = cov(ξ_int_i + (t - 1) ξ_lin_i + γ_it ζ_t + ε_it,
                          ξ_int_j + (s - 1) ξ_lin_j + γ_js ζ_s + ε_js).

Next, we expand according to the covariance algebra rule cov(X, Y + Z) = cov(X, Y) + cov(X, Z). This is used to separate the error terms ε_it and ε_js:

    cov(Y_it, Y_js) = cov(ξ_int_i + (t - 1) ξ_lin_i + γ_it ζ_t,
                          ξ_int_j + (s - 1) ξ_lin_j + γ_js ζ_s)
                      + cov(ε_it, ξ_int_j + (s - 1) ξ_lin_j) + cov(ξ_int_i + (t - 1) ξ_lin_i, ε_js)
                      + cov(ε_it, γ_js ζ_s) + cov(γ_it ζ_t, ε_js) + cov(ε_it, ε_js).

The subsequent step focuses on expanding the first covariance term, the covariance between the latent trait and state residual components:

    cov(Y_it, Y_js) = cov(ξ_int_i + (t - 1) ξ_lin_i, ξ_int_j + (s - 1) ξ_lin_j)
                      + cov(ξ_int_i + (t - 1) ξ_lin_i, γ_js ζ_s) + cov(γ_it ζ_t, ξ_int_j + (s - 1) ξ_lin_j)
                      + cov(γ_it ζ_t, γ_js ζ_s)
                      + cov(ε_it, ξ_int_j + (s - 1) ξ_lin_j) + cov(ξ_int_i + (t - 1) ξ_lin_i, ε_js)
                      + cov(ε_it, γ_js ζ_s) + cov(γ_it ζ_t, ε_js) + cov(ε_it, ε_js).

The term that represents the covariance among the latent trait variables is now expanded. At the same time, the constant terms γ_it, γ_js, (t - 1), and (s - 1) can be moved outside the covariance expression according to the rule cov(kX, Y) = k cov(X, Y).

    cov(Y_it, Y_js) = cov(ξ_int_i, ξ_int_j) + (t - 1) cov(ξ_lin_i, ξ_int_j) + (s - 1) cov(ξ_int_i, ξ_lin_j)
                      + (t - 1)(s - 1) cov(ξ_lin_i, ξ_lin_j)
                      + γ_js cov(ξ_int_i + (t - 1) ξ_lin_i, ζ_s) + γ_it cov(ζ_t, ξ_int_j + (s - 1) ξ_lin_j)
                      + γ_it γ_js cov(ζ_t, ζ_s)
                      + cov(ε_it, ξ_int_j + (s - 1) ξ_lin_j) + cov(ξ_int_i + (t - 1) ξ_lin_i, ε_js)
                      + γ_js cov(ε_it, ζ_s) + γ_it cov(ζ_t, ε_js) + cov(ε_it, ε_js).

Finally, this expression is simplified because for this model the following restrictions apply: cov(ξ_int_i, ζ_s) = cov(ξ_lin_i, ζ_s) = cov(ξ_int_i, ε_js) = cov(ξ_lin_i, ε_js) = cov(ζ_s, ε_it) = 0 for all i, j, s, and t. The resulting general covariance equation is thus:

    cov(Y_it, Y_js) = cov(ξ_int_i, ξ_int_j) + (t - 1) cov(ξ_lin_i, ξ_int_j) + (s - 1) cov(ξ_int_i, ξ_lin_j)
                      + (t - 1)(s - 1) cov(ξ_lin_i, ξ_lin_j) + γ_it γ_js cov(ζ_t, ζ_s) + cov(ε_it, ε_js).

As before, the final two terms of this equation are zero except as given in Equations (B.3) and (B.4).

Autoregressive (AR) Models

In all three instances, the derivation of the autoregressive (AR) model begins where the corresponding non-AR model left off. This is accomplished by substituting the quantity β_t ζ_(t-1) + δ_t wherever ζ_t appears. Recall that β_t(t-1) is a real constant and that the δ_t variables have a mean of zero and are uncorrelated with each other as well as with all other latent and error variables in the model. Thus, the mean structure for the AR model is identical in each case to the mean structure for the non-AR model. It is not possible to rewrite the equations for the covariance structure explicitly in general form because the autoregressive substitution is recursive. Nevertheless, this substitution and expansion may be done for any specific case in which t and s are known. See Appendix E, where we illustrate this expansion for the specific case of two indicators, i = {1, 2}, and three time points, t = {1, 2, 3}.

The restrictions for the AR model are slightly different from those of the non-AR model. For the AR model, the following restrictions apply:

    cov(ξ_it, ζ_s) = cov(ξ_it, ε_js) = cov(ζ_t, ε_js) = 0, and
    cov(δ_t, ζ_s) = cov(ξ_it, δ_s) = cov(ε_it, δ_s) = 0 (with s < t for the term involving δ_t and ζ_s), for all i, j, s, t,  (B.7)

    cov(δ_t, δ_s) = var(ζ_1) when t = s = 1, var(δ_t) when t = s > 1, and 0 otherwise,  (B.8)

    cov(ε_it, ε_js) = var(ε_it) when i = j and t = s, and 0 otherwise.  (B.9)

Summary

The mean and covariance structure for the three models is summarized in Table B1.

Table B1
Mean and Covariance Equations for Non-Autoregressive SGM, GSGM, and ISGM Models

SGM, non-AR
    Mean:       E(Y_it) = α_it + λ_it E(ξ_int) + λ_it (t - 1) E(ξ_lin)
    Covariance: cov(Y_it, Y_js) = λ_it λ_js [(s - 1) + (t - 1)] cov(ξ_int, ξ_lin) + λ_it λ_js (t - 1)(s - 1) var(ξ_lin)
                + λ_it λ_js var(ξ_int) + λ_it λ_js cov(ζ_t, ζ_s) + cov(ε_it, ε_js)

GSGM, non-AR
    Mean:       E(Y_it) = α_it + λ_it E(ξ_int) + λ_it (t - 1) E(ξ_lin)
    Covariance: cov(Y_it, Y_js) = λ_it λ_js [(s - 1) + (t - 1)] cov(ξ_int, ξ_lin) + λ_it λ_js (t - 1)(s - 1) var(ξ_lin)
                + λ_it λ_js var(ξ_int) + γ_it γ_js cov(ζ_t, ζ_s) + cov(ε_it, ε_js)

ISGM, non-AR
    Mean:       E(Y_it) = α_it + E(ξ_int_i) + (t - 1) E(ξ_lin_i)
    Covariance: cov(Y_it, Y_js) = cov(ξ_int_i, ξ_int_j) + (t - 1) cov(ξ_lin_i, ξ_int_j) + (s - 1) cov(ξ_int_i, ξ_lin_j)
                + (t - 1)(s - 1) cov(ξ_lin_i, ξ_lin_j) + γ_it γ_js cov(ζ_t, ζ_s) + cov(ε_it, ε_js)

Appendix C

Proportionality Constraint

The proportionality constraint refers to an implicit constraint on the ratio of general to specific variance that is present in the SGM, but not the GSGM or ISGM. The ratio of general to specific variance for the first time point (t = 1) was given in the main text, while this ratio for the general case (for an arbitrary number of time points) is given below. Note that even for the general case, only the ratio for the SGM is subject to the so-called proportionality constraint.

General to Specific Variance Ratio for the SGM

We begin with the decomposition of variance for an observed variable, var(Y_it), in the SGM (the general mean and covariance equations for all three models are derived in Appendix B):

    var(Y_it) = λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + λ_it² var(ζ_t) + var(ε_it).  (C.1)

For any two indicators Y_it and Y_jt at the same time point t, the ratio of general (trait and growth) to specific (state residual) variance is constrained to be identical:

    [λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin)] / [λ_it² var(ζ_t)]
    = [λ_jt² var(ξ_int) + (t - 1)² λ_jt² var(ξ_lin) + 2(t - 1) λ_jt² cov(ξ_int, ξ_lin)] / [λ_jt² var(ζ_t)]
    = [var(ξ_int) + (t - 1)² var(ξ_lin) + 2(t - 1) cov(ξ_int, ξ_lin)] / var(ζ_t).  (C.2)

General to Specific Variance Ratio for the GSGM

In the GSGM, the variance decomposition is given by:

    var(Y_it) = λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + γ_it² var(ζ_t) + var(ε_it),  (C.3)

and the ratio of general to specific variance is given by

    [λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin)] / [γ_it² var(ζ_t)] for indicator i,
    [λ_jt² var(ξ_int) + (t - 1)² λ_jt² var(ξ_lin) + 2(t - 1) λ_jt² cov(ξ_int, ξ_lin)] / [γ_jt² var(ζ_t)] for indicator j.  (C.4)

The ratio of these variances is not constrained to be equal across indicators.
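As a purely hypothetical numeric illustration of this difference (the values below are not estimates from the paper): suppose that at t = 1 we have var(ξ_int) = 0.60 and var(ζ_1) = 0.30. In the SGM, Equation (C.2) forces the general-to-specific ratio at that occasion to be 0.60/0.30 = 2 for every indicator, regardless of its loading λ_i1, because λ_i1² cancels from the numerator and the denominator. In the GSGM, an indicator with λ_i1 = 1 and γ_i1 = 0.5 has the ratio 0.60/(0.5² × 0.30) = 8, whereas an indicator with λ_j1 = γ_j1 = 1 has the ratio 2, so the ratios are free to differ across indicators.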

General to Specific Variance Ratio for the ISGM

We now consider the general variance equation for the ISGM:

    var(Y_it) = var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i) + γ_it² var(ζ_t) + var(ε_it),  (C.5)

which leads to the following ratio of general to specific variance:

    [var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i)] / [γ_it² var(ζ_t)].  (C.6)

Obviously, the ISGM also does not impose a proportionality constraint because, in general, the state residual loadings γ_it in the denominator can be estimated freely for all indicators.¹

¹ Note that there are specific identification conditions in certain designs that may preclude the state residual (γ) loadings from being freely estimated in the GSGM and ISGM. For example, designs with only two indicators per time point would not allow the state residual (γ) loadings to be freely estimated unless (1) a significant autoregressive process is present and estimated for the state residual factors or (2) each state residual factor is significantly related to at least one external variable.

Appendix D

Degrees of Freedom in the SGM, GSGM, and ISGM

The degrees of freedom (df) of a model are an important consideration when choosing or comparing models: df matter when choosing a model because insufficient df may result in model non-identification, and they matter when comparing models because they provide the context within which fit statistics are interpreted (for more detail see, e.g., Walker, 1940). For simplicity, we compare the df for the restricted case in which latent state residuals are assumed to be uncorrelated and all conditions of MI are met for all models (i.e., all loadings and intercepts are time-invariant). The df for this situation, expressed as an algebraic function of the input parameters m and n, are shown in Table D3, and for specific numeric values of m and n in Table D1. The derivation of the formulas in Table D3 is based on the known and estimated parameters given in Table D2. The technical details for this derivation are given below. Examining these tables reveals that, for a given set of variables, the SGM has the most df, followed by the GSGM, and finally the ISGM.

Computing df

The degrees of freedom (df) for each model are computed as:

    df = #known - #estimated,  (D.1)

where #known is the number of known parameters (means, variances, and non-redundant covariances of the observed variables Y_it) and #estimated is the number of free parameters estimated in the model.

Known Parameters

The total number of known parameters is equal to the number of uniquely identified non-redundant elements in the measurement covariance matrix (including both variances and covariances) plus the total number of expected values that can be computed from the measurements (see, e.g., Bollen). In the present context, the number of known parameters is #known = (m²n² + 3mn)/2.

Table D1
Degrees of Freedom for SGM, GSGM, and ISGM Models for Specific Values of m and n
Note. SGM = second-order growth model; GSGM = generalized second-order growth model; ISGM = indicator-specific growth model; m = number of indicators; n = number of time points. Here, we assume that measurement invariance across time holds for all intercepts and factor loadings.
a The two-indicator GSGM and ISGM are subject to additional constraints needed for model identification.

Table D2
Number of Estimated Model Parameters in Different Multi-Indicator Linear Growth Models

Intercept and growth factor means, E(ξ_int_i), E(ξ_lin_i): SGM 2; GSGM 2; ISGM 2m
Intercept and growth factor variances, var(ξ_int_i), var(ξ_lin_i): SGM 2; GSGM 2; ISGM 2m
Error variances, var(ε_it): SGM mn; GSGM mn; ISGM mn
State residual variances, var(ζ_t) or var(δ_t): SGM n; GSGM n; ISGM n
Intercept and growth factor covariances, cov(ξ_int_i, ξ_int_j), cov(ξ_int_i, ξ_lin_j), cov(ξ_lin_i, ξ_lin_j): SGM 1; GSGM 1; ISGM m(2m - 1)
Intercepts, α_it (SGM, GSGM): n(m - 1) if α_it ≠ α_is; (m - 1) if α_it = α_is
Factor loadings, λ_it (SGM, GSGM): n(m - 1) if λ_it ≠ λ_is; (m - 1) if λ_it = λ_is
Autoregressive effects, β_t: 0 if cov(ζ_t, ζ_s) = 0; 1 if ζ_t = β_t ζ_(t-1) + δ_t and β_t = β_s; (n - 1) if ζ_t = β_t ζ_(t-1) + δ_t and β_t ≠ β_s
State residual factor loadings, γ_it: SGM constrained to equal λ_it; GSGM and ISGM n(m - 1) if γ_it ≠ γ_is, (m - 1) if γ_it = γ_is

Note. SGM = second-order growth model; GSGM = generalized second-order growth model; ISGM = indicator-specific growth model; m = number of indicators; n = number of time points.

Table D3
Algebraic Degrees of Freedom for SGM, GSGM, and ISGM Models

Model    #known             #estimated                df
SGM      (m²n² + 3mn)/2     2m + mn + n + 3           (m²n² + mn - 4m - 2n - 6)/2
GSGM     (m²n² + 3mn)/2     3m + mn + n + 2           (m²n² + mn - 6m - 2n - 4)/2
ISGM     (m²n² + 3mn)/2     2m² + 4m + mn + n - 1     (m²n² - 4m² + mn - 8m - 2n + 2)/2

Note. #known = number of known means, variances, and covariances; #estimated = number of estimated parameters; SGM = second-order growth model; GSGM = generalized second-order growth model; ISGM = indicator-specific growth model; m = number of indicators; n = number of time points. Here, we assume that all latent state residual factors are uncorrelated and that measurement invariance across time holds for all intercepts and factor loadings.

Estimated Parameters and Degrees of Freedom

In contrast to the number of known parameters, the number of parameters estimated varies across models. The number of degrees of freedom is then calculated by subtracting the number of estimated parameters from the number of known parameters, as given in Equation (D.1). For simplicity, we assume linear growth and that all intercepts (α_it), trait loadings (λ_it), and state residual loadings (γ_it) are time-invariant. We also assume that the state residual factors are all uncorrelated (no autoregressive structure). For less restrictive models, or models with other forms of growth, more parameters will be estimated.

SGM. The estimated parameters are E(ξ_int) and E(ξ_lin) (2 parameters), var(ξ_int) and var(ξ_lin) (2), cov(ξ_int, ξ_lin) (1), the intercepts α_i (m - 1), the loadings λ_i (m - 1), the error variances var(ε_it) (mn), and the state residual variances var(ζ_t) (n). Thus,

    #estimated = 2 + 2 + 1 + 2(m - 1) + mn + n = 2m + mn + n + 3 = (4m + 2mn + 2n + 6)/2.
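As a quick arithmetic check of these counts, derived from the formulas above rather than taken from Table D1: with the m = 3 indicators and n = 4 time points used in the Appendix G inputs, #known = (12 × 13)/2 + 12 = 78 + 12 = 90, and the SGM estimates 2 + 2 + 1 + 2 + 2 + 12 + 4 = 25 parameters.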

Thus, the degrees of freedom for the SGM are given by

    df = #known - #estimated
       = (m²n² + 3mn)/2 - (4m + 2mn + 2n + 6)/2
       = (m²n² + 3mn - 4m - 2mn - 2n - 6)/2
       = (m²n² + mn - 4m - 2n - 6)/2.

GSGM. The GSGM estimates the same parameters as the SGM plus the state residual loadings γ_i (m - 1), so that

    #estimated = 2 + 2 + 1 + 3(m - 1) + mn + n = 3m + mn + n + 2 = (6m + 2mn + 2n + 4)/2,

and the degrees of freedom for the GSGM are given by

    df = #known - #estimated
       = (m²n² + 3mn)/2 - (6m + 2mn + 2n + 4)/2
       = (m²n² + 3mn - 6m - 2mn - 2n - 4)/2
       = (m²n² + mn - 6m - 2n - 4)/2.

ISGM. The estimated parameters are the trait factor means E(ξ_int_i) and E(ξ_lin_i) (2m), the trait factor variances var(ξ_int_i) and var(ξ_lin_i) (2m), the error variances var(ε_it) (mn), the state residual variances var(ζ_t) (n), the covariances among the 2m trait factors, cov(ξ_it, ξ_js) (2m(2m - 1)/2 = 2m² - m), and the state residual loadings γ_i (m - 1). Thus,

    #estimated = 2m + 2m + mn + n + (2m² - m) + (m - 1) = 2m² + 4m + mn + n - 1 = (4m² + 8m + 2mn + 2n - 2)/2.

Thus, the degrees of freedom for the ISGM are given by

    df = #known - #estimated
       = (m²n² + 3mn)/2 - (4m² + 8m + 2mn + 2n - 2)/2
       = (m²n² - 4m² + mn - 8m - 2n + 2)/2.

Note that models with autoregressive parameters (β_t), as well as models with partial or complete non-invariance of the measurement parameters (α_it, λ_it, γ_it), would have fewer df.
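Returning to the m = 3, n = 4 check from above: the SGM yields df = 90 - 25 = 65; the GSGM estimates two additional state residual loadings, or 27 parameters in total, so df = 90 - 27 = 63; and the ISGM estimates 6 + 6 + 12 + 4 + 15 + 2 = 45 parameters, so df = 90 - 45 = 45. The same numbers follow directly from the df expressions in Table D3.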

Appendix E

Model Identification with Two Indicators per Time Point

To demonstrate the limitations of a growth model with only two indicators per time point, it is sufficient to consider only the case with three time points. That is, i = {1, 2}, t = {1, 2, 3}. The covariance matrix for this case, in terms of Y_it for all i and t, is shown in Table E1.

Table E1
Covariance Matrix for 2 Indicators and 3 Time Points

          Y_11             Y_21             Y_12             Y_22             Y_13             Y_23
Y_11   var(Y_11)
Y_21   cov(Y_11,Y_21)   var(Y_21)
Y_12   cov(Y_11,Y_12)   cov(Y_21,Y_12)   var(Y_12)
Y_22   cov(Y_11,Y_22)   cov(Y_21,Y_22)   cov(Y_12,Y_22)   var(Y_22)
Y_13   cov(Y_11,Y_13)   cov(Y_21,Y_13)   cov(Y_12,Y_13)   cov(Y_22,Y_13)   var(Y_13)
Y_23   cov(Y_11,Y_23)   cov(Y_21,Y_23)   cov(Y_12,Y_23)   cov(Y_22,Y_23)   cov(Y_13,Y_23)   var(Y_23)

To show the general case, we first write out the complete model for the GSGM. This is accomplished by starting with the generic covariance relation for the autoregressive GSGM given in Table B1. The substitution of values for the indices into the general covariance equation is very straightforward. However, as noted in Appendix B, β_t ζ_(t-1) + δ_t must be recursively substituted for ζ_t (if t > 1 or s > 1). Then, the resulting covariance relation must be expanded and simplified. We demonstrate this for all cases except the trivial instance where t = s = 1. Note that the order in which the values (t, s) appear is not important, because cov(ζ_t, ζ_s) = cov(ζ_s, ζ_t).

(1, 2) = (2, 1):
    cov(ζ_1, ζ_2) = cov(ζ_1, β_2 ζ_1 + δ_2) = cov(ζ_1, β_2 ζ_1) + cov(ζ_1, δ_2) = β_2 var(ζ_1)

(1, 3) = (3, 1):
    cov(ζ_1, ζ_3) = cov(ζ_1, β_3 [β_2 ζ_1 + δ_2] + δ_3) = cov(ζ_1, β_3 β_2 ζ_1 + β_3 δ_2 + δ_3) = β_3 β_2 var(ζ_1)

(2, 2):
    var(ζ_2) = cov(β_2 ζ_1 + δ_2, β_2 ζ_1 + δ_2)
             = cov(β_2 ζ_1, β_2 ζ_1) + cov(δ_2, β_2 ζ_1) + cov(β_2 ζ_1, δ_2) + cov(δ_2, δ_2)
             = β_2² var(ζ_1) + var(δ_2)

(2, 3) = (3, 2):
    cov(ζ_2, ζ_3) = cov(β_2 ζ_1 + δ_2, β_3 [β_2 ζ_1 + δ_2] + δ_3)
                  = cov(β_2 ζ_1, β_3 β_2 ζ_1 + β_3 δ_2 + δ_3) + cov(δ_2, β_3 β_2 ζ_1 + β_3 δ_2 + δ_3)
                  = β_3 β_2² var(ζ_1) + β_3 var(δ_2)

(3, 3):
    var(ζ_3) = cov(β_3 [β_2 ζ_1 + δ_2] + δ_3, β_3 [β_2 ζ_1 + δ_2] + δ_3)
             = cov(β_3 β_2 ζ_1 + β_3 δ_2, β_3 β_2 ζ_1 + β_3 δ_2) + 2 cov(β_3 β_2 ζ_1 + β_3 δ_2, δ_3) + cov(δ_3, δ_3)
             = β_3² β_2² var(ζ_1) + β_3² var(δ_2) + var(δ_3)

The results of substituting all possible combinations of the specific values of i and t into this covariance equation, simplified according to the model restrictions given in Equations (B.7)-(B.9), are shown in Table E2. Similarly, the results of this substitution for the case of uncorrelated state residual variables, simplified according to the restrictions given in Equations (B.2)-(B.4), are shown in Table E3.

We assume MI and uncorrelated state residual variables in order to examine when the model is not identified. The identification problem comes from the state residual (ζ_t) parameters. This is evident from the fact that the latent state residual variable for the first time point, ζ_1, appears in exactly three equations from Table E3:

    var(Y_11) = var(ξ_int) + var(ζ_1) + var(ε_11)
    cov(Y_11, Y_21) = λ_21 var(ξ_int) + γ_21 var(ζ_1)
    var(Y_21) = λ_21² var(ξ_int) + γ_21² var(ζ_1) + var(ε_21)

These are also the only equations in which the parameter γ_21 appears. In these equations, var(Y_11), var(Y_21), and cov(Y_11, Y_21) are known, while var(ξ_int) and λ_21 can be identified from the other model equations. The parameters γ_21, var(ζ_1), var(ε_11), and var(ε_21) are not known. These parameters do not appear elsewhere and thus cannot be estimated from other information. Without formal proof, we note that the situation remains unchanged regardless of the number of time points present. This leads to the conclusion that

Table E2
Variance and Covariance Equations for the GSGM-AR With Two Indicators and Three Time Points

var(Y_11) = var(ξ_int) + γ_11² var(ζ_1) + var(ε_11)
cov(Y_11, Y_21) = λ_21 var(ξ_int) + γ_21 var(ζ_1)
cov(Y_11, Y_12) = cov(ξ_int, ξ_lin) + var(ξ_int) + β_2 var(ζ_1)
cov(Y_11, Y_22) = λ_22 [cov(ξ_int, ξ_lin) + var(ξ_int)] + γ_22 β_2 var(ζ_1)
cov(Y_11, Y_13) = 2 cov(ξ_int, ξ_lin) + var(ξ_int) + β_3 β_2 var(ζ_1)
cov(Y_11, Y_23) = λ_23 [2 cov(ξ_int, ξ_lin) + var(ξ_int)] + γ_23 β_3 β_2 var(ζ_1)
var(Y_21) = λ_21² var(ξ_int) + γ_21² var(ζ_1) + var(ε_21)
cov(Y_21, Y_12) = λ_21 [cov(ξ_int, ξ_lin) + var(ξ_int)] + γ_21 β_2 var(ζ_1)
cov(Y_21, Y_22) = λ_21 λ_22 [cov(ξ_int, ξ_lin) + var(ξ_int)] + γ_21 γ_22 β_2 var(ζ_1)
cov(Y_21, Y_13) = λ_21 [2 cov(ξ_int, ξ_lin) + var(ξ_int)] + γ_21 β_3 β_2 var(ζ_1)
cov(Y_21, Y_23) = λ_21 λ_23 [2 cov(ξ_int, ξ_lin) + var(ξ_int)] + γ_21 γ_23 β_3 β_2 var(ζ_1)
var(Y_12) = 2 cov(ξ_int, ξ_lin) + var(ξ_lin) + var(ξ_int) + β_2² var(ζ_1) + var(δ_2) + var(ε_12)
cov(Y_12, Y_22) = λ_22 [2 cov(ξ_int, ξ_lin) + var(ξ_lin) + var(ξ_int)] + γ_22 [β_2² var(ζ_1) + var(δ_2)]
cov(Y_12, Y_13) = 3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int) + β_3 β_2² var(ζ_1) + β_3 var(δ_2)
cov(Y_12, Y_23) = λ_23 [3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int)] + γ_23 [β_3 β_2² var(ζ_1) + β_3 var(δ_2)]
var(Y_22) = λ_22² [2 cov(ξ_int, ξ_lin) + var(ξ_lin) + var(ξ_int)] + γ_22² [β_2² var(ζ_1) + var(δ_2)] + var(ε_22)
cov(Y_22, Y_13) = λ_22 [3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int)] + γ_22 [β_3 β_2² var(ζ_1) + β_3 var(δ_2)]
cov(Y_22, Y_23) = λ_22 λ_23 [3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int)] + γ_22 γ_23 [β_3 β_2² var(ζ_1) + β_3 var(δ_2)]
var(Y_13) = 4 cov(ξ_int, ξ_lin) + 4 var(ξ_lin) + var(ξ_int) + β_3² β_2² var(ζ_1) + β_3² var(δ_2) + var(δ_3) + var(ε_13)
cov(Y_13, Y_23) = λ_23 [4 cov(ξ_int, ξ_lin) + 4 var(ξ_lin) + var(ξ_int)] + γ_23 [β_3² β_2² var(ζ_1) + β_3² var(δ_2) + var(δ_3)]
var(Y_23) = λ_23² [4 cov(ξ_int, ξ_lin) + 4 var(ξ_lin) + var(ξ_int)] + γ_23² [β_3² β_2² var(ζ_1) + β_3² var(δ_2) + var(δ_3)] + var(ε_23)

Table E3
Variance and Covariance Equations for the GSGM With Two Indicators and Three Time Points

var(Y_11) = var(ξ_int) + var(ζ_1) + var(ε_11)
cov(Y_11, Y_21) = λ_21 var(ξ_int) + γ_21 var(ζ_1)
cov(Y_11, Y_12) = cov(ξ_int, ξ_lin) + var(ξ_int)
cov(Y_11, Y_22) = λ_22 [cov(ξ_int, ξ_lin) + var(ξ_int)]
cov(Y_11, Y_13) = 2 cov(ξ_int, ξ_lin) + var(ξ_int)
cov(Y_11, Y_23) = λ_23 [2 cov(ξ_int, ξ_lin) + var(ξ_int)]
var(Y_21) = λ_21² var(ξ_int) + γ_21² var(ζ_1) + var(ε_21)
cov(Y_21, Y_12) = λ_21 [cov(ξ_int, ξ_lin) + var(ξ_int)]
cov(Y_21, Y_22) = λ_21 λ_22 [cov(ξ_int, ξ_lin) + var(ξ_int)]
cov(Y_21, Y_13) = λ_21 [2 cov(ξ_int, ξ_lin) + var(ξ_int)]
cov(Y_21, Y_23) = λ_21 λ_23 [2 cov(ξ_int, ξ_lin) + var(ξ_int)]
var(Y_12) = 2 cov(ξ_int, ξ_lin) + var(ξ_lin) + var(ξ_int) + var(ζ_2) + var(ε_12)
cov(Y_12, Y_22) = λ_22 [2 cov(ξ_int, ξ_lin) + var(ξ_lin) + var(ξ_int)] + γ_22 var(ζ_2)
cov(Y_12, Y_13) = 3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int)
cov(Y_12, Y_23) = λ_23 [3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int)]
var(Y_22) = λ_22² [2 cov(ξ_int, ξ_lin) + var(ξ_lin) + var(ξ_int)] + γ_22² var(ζ_2) + var(ε_22)
cov(Y_22, Y_13) = λ_22 [3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int)]
cov(Y_22, Y_23) = λ_22 λ_23 [3 cov(ξ_int, ξ_lin) + 2 var(ξ_lin) + var(ξ_int)]
var(Y_13) = 4 cov(ξ_int, ξ_lin) + 4 var(ξ_lin) + var(ξ_int) + var(ζ_3) + var(ε_13)
cov(Y_13, Y_23) = λ_23 [4 cov(ξ_int, ξ_lin) + 4 var(ξ_lin) + var(ξ_int)] + γ_23 var(ζ_3)
var(Y_23) = λ_23² [4 cov(ξ_int, ξ_lin) + 4 var(ξ_lin) + var(ξ_int)] + γ_23² var(ζ_3) + var(ε_23)

the GSGM is not generally identified with only two indicators at each time point, unless further constraints are imposed or effects are added to the model. If the parameters γ_2t are known, the remaining unknown parameters in these equations can be estimated. This can be accomplished by fixing the state residual loadings to some value (e.g., γ_1t = γ_2t = 1). Alternatively, the SGM, which we have shown to be a special case of the GSGM in which γ_it = λ_it, can be used to adequately constrain this model, since the λ_it parameters can be estimated from other model information. Another case in which the model information is sufficient for the parameters γ_2t and var(ζ_t) to be estimated occurs when there is a significant autoregressive structure. The γ_2t loadings in this case are related by the β_t parameters. This can be seen by comparing the covariance relations in Table E2 to those in Table E3, paying particular attention to the terms containing ζ_1 and δ_t.
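As a hedged Mplus sketch of the first remedy (fixing the state residual loadings), the lines below show state residual factors for a hypothetical two-indicator, three-wave design with variables y11-y23; they are not one of the inputs in Appendix G.

! Sketch only: state residual factors for a two-indicator design.
! Fixing both loadings per factor to 1 (gamma_1t = gamma_2t = 1)
! identifies var(zeta_t), as discussed above.
zeta1 BY y11@1 y21@1;
zeta2 BY y12@1 y22@1;
zeta3 BY y13@1 y23@1;
[zeta1-zeta3@0];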

Appendix F

Variance Components and Coefficients

In each of the three growth models, the observed variance can be partitioned into variance due to the growth process, variance due to state residual variability, and measurement error variance, as shown in Figure F1: the observed variance var(Y_it) splits into latent state variance var(τ_it) and error variance var(ε_it), and the latent state variance in turn comprises the trait and growth variance plus the state residual variance.

Figure F1. Variance decomposition in the SGM, GSGM, and ISGM.

Based on the variance decomposition in each model (see Table F1), several coefficients can be defined (Eid, Courvoisier, & Lischetzke, 2012):

    Coefficient of reliability: Rel(Y_it) = State Variance / Observed Variance = var(τ_it) / var(Y_it).
    Coefficient of consistency: Con(τ_it) = Trait & Growth Variance / State Variance.
    Coefficient of occasion-specificity: OSpec(τ_it) = State Residual Variance / State Variance.

Table F1
Variance Decomposition

SGM
    Trait & growth variance:  λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin)
    State residual variance:  λ_it² var(ζ_t)
    Error variance:           var(ε_it)

GSGM
    Trait & growth variance:  λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin)
    State residual variance:  γ_it² var(ζ_t)
    Error variance:           var(ε_it)

ISGM
    Trait & growth variance:  var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i)
    State residual variance:  γ_it² var(ζ_t)
    Error variance:           var(ε_it)

Note that whereas the reliability coefficient is defined for the observed variables Y_it, the Con and OSpec coefficients are defined for the true score variables τ_it, so as to partition the total systematic variance into growth versus state residual variance. Below we show the specific calculations for each model.

SGM

Total variance:
    var(Y_it) = λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + λ_it² var(ζ_t) + var(ε_it).

Reliability coefficient:
    Rel(Y_it) = [λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + λ_it² var(ζ_t)] / var(Y_it).

Consistency coefficient:
    Con(τ_t) = [var(ξ_int) + (t - 1)² var(ξ_lin) + 2(t - 1) cov(ξ_int, ξ_lin)] / var(τ_t)
             = [var(ξ_int) + (t - 1)² var(ξ_lin) + 2(t - 1) cov(ξ_int, ξ_lin)]
               / [var(ξ_int) + (t - 1)² var(ξ_lin) + 2(t - 1) cov(ξ_int, ξ_lin) + var(ζ_t)].

Occasion-specificity coefficient:
    OSpec(τ_t) = var(ζ_t) / var(τ_t)
               = var(ζ_t) / [var(ξ_int) + (t - 1)² var(ξ_lin) + 2(t - 1) cov(ξ_int, ξ_lin) + var(ζ_t)].

GSGM

Total variance:
    var(Y_it) = λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + γ_it² var(ζ_t) + var(ε_it).

Reliability coefficient:
    Rel(Y_it) = [λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + γ_it² var(ζ_t)] / var(Y_it).

Consistency coefficient:
    Con(τ_it) = [λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin)]
                / [λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + γ_it² var(ζ_t)].

Occasion-specificity coefficient:
    OSpec(τ_it) = γ_it² var(ζ_t)
                  / [λ_it² var(ξ_int) + (t - 1)² λ_it² var(ξ_lin) + 2(t - 1) λ_it² cov(ξ_int, ξ_lin) + γ_it² var(ζ_t)].

ISGM

Total variance:
    var(Y_it) = var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i) + γ_it² var(ζ_t) + var(ε_it).

Reliability coefficient:
    Rel(Y_it) = [var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i) + γ_it² var(ζ_t)] / var(Y_it).

Consistency coefficient:
    Con(τ_it) = [var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i)]
                / [var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i) + γ_it² var(ζ_t)].

Occasion-specificity coefficient:
    OSpec(τ_it) = γ_it² var(ζ_t)
                  / [var(ξ_int_i) + (t - 1)² var(ξ_lin_i) + 2(t - 1) cov(ξ_int_i, ξ_lin_i) + γ_it² var(ζ_t)].

Note that for the SGM, the consistency coefficient and its complement (OSpec) depend only on τ_t. Thus, these coefficients are the same for all indicators measured at the same time point in this model. However, for both the GSGM and ISGM, Con and OSpec are indicator-specific (dependent on τ_it) because there is not a common latent state factor for each time point in these two models. Additional coefficients can be defined in the case of an autoregressive structure among the state residuals, as shown in Eid et al. (2012).
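As a hedged sketch, the SGM coefficients could also be computed directly in Mplus by adding NEW parameters to the MODEL CONSTRAINT section of Model 1a in Appendix G, where the labels xi_intv, zeta1var, and ev11 are defined. The example below is for the first indicator at the first wave only, where λ_11 = 1 and t = 1 so that the growth terms drop out.

! Sketch only: reliability, consistency, and occasion-specificity
! for Y11 in the SGM, using the parameter labels of Model 1a.
NEW(rel11 con11 ospec11);
rel11 = (xi_intv + zeta1var) / (xi_intv + zeta1var + ev11);
con11 = xi_intv / (xi_intv + zeta1var);
ospec11 = zeta1var / (xi_intv + zeta1var);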

Appendix G

Mplus Model Inputs for SGM, GSGM, and ISGM

The order and names used for the following model specifications match the specification in Table 3 of the main paper. The data file used for this analysis is provided for practice purposes only. It is available for download at psycarticles/supplemental/met18/met18_supp.html. Publication of these data in any form, including results from analyzing the data, is expressly forbidden. For requests to use the data for publication or other purposes, please contact Dr. David Cole at david.cole@vanderbilt.edu.

Model 1a: SGM

filename: 1a_SGM_LinearGrowth_AlphaLamInv.inp

TITLE: Second-Order Multiple-Indicator Growth Model.
    Linear Growth. First-order representation.
    (after Schmid-Leiman Transformation).
DATA: file = Anxiety_Data_3_Indicators_4_Waves.dat;
VARIABLE: names = grpid
    y11 y21 y31 y12 y22 y32 y13 y23 y33 y14 y24 y34;
    usevariables = y11 y21 y31 y12 y22 y32
    y13 y23 y33 y14 y24 y34;
    missing = all(-99);
MODEL:
! Common xi. This is the trait intercept.
! Set the first trait loading (lambda) to 1.
! Estimate the other two.
! Measurement Invariance on lambda not assumed.
xi_int by y11@1 (lambda11)
    y21 (lambda21)
    y31 (lambda31)
    y12@1 (lambda12)
    y22 (lambda22)
    y32 (lambda32)
    y13@1 (lambda13)
    y23 (lambda23)
    y33 (lambda33)
    y14@1 (lambda14)
    y24 (lambda24)

    y34 (lambda34);
! Estimate the mean of the trait factor.
! (zero by default in Mplus).
[xi_int*];
! Variance of the trait factor (xi_int).
xi_int (xi_intv);
! Growth Model. This is the trait slope.
! Wave 1 is missing, because lambda would be multiplied by 0.
! Multiply by 1*lambda for the second wave, by 2*lambda for
! the third, and 3*lambda for the fourth.
xi_lin by y12@1 (lambda12)
    y22 (lambda22)
    y32 (lambda32)
    y13@2 (lambda13d)
    y23 (lambda23d)
    y33 (lambda33d)
    y14@3 (lambda14t)
    y24 (lambda24t)
    y34 (lambda34t);
! Estimate the mean of the growth factor.
! (zero by default in Mplus).
[xi_lin*];
! Variance of the growth factor (xi_lin).
xi_lin (xi_linv);
! These loadings must be the same as the trait
! loadings given above...a constraint of the SGM.
zeta1 by y11@1 (lambda11)
    y21 (lambda21)
    y31 (lambda31);
zeta2 by y12@1 (lambda12)
    y22 (lambda22)
    y32 (lambda32);
zeta3 by y13@1 (lambda13)
    y23 (lambda23)
    y33 (lambda33);
zeta4 by y14@1 (lambda14)
    y24 (lambda24)
    y34 (lambda34);
! Set intercepts of state residual factors to zero.
! (would otherwise be estimated as the default in Mplus)
[zeta1-zeta4@0];
! Delta error variances on each zeta.
zeta1 (zeta1var);
zeta2 (zeta2var);

zeta3 (zeta3var);
zeta4 (zeta4var);
! Constant intercepts (alpha_it).
! Set the intercept on the first indicator
! (alpha1) to 0, and estimate the other
! two (alpha2, alpha3).
[y11@0];
[y21*] (alpha21);
[y31*] (alpha31);
[y12@0];
[y22*] (alpha22);
[y32*] (alpha32);
[y13@0];
[y23*] (alpha23);
[y33*] (alpha33);
[y14@0];
[y24*] (alpha24);
[y34*] (alpha34);
! Estimate the Error Variances.
y11 (ev11);
y21 (ev21);
y31 (ev31);
y12 (ev12);
y22 (ev22);
y32 (ev32);
y13 (ev13);
y23 (ev23);
y33 (ev33);
y14 (ev14);
y24 (ev24);
y34 (ev34);
! Non-admissible correlations.
xi_int with zeta1-zeta4@0;
xi_lin with zeta1-zeta4@0;
zeta1 with zeta2-zeta4@0;
zeta2 with zeta3-zeta4@0;
zeta3 with zeta4@0;
MODEL CONSTRAINT:
! The d-labeled loadings are double the corresponding lambdas.
lambda23d = lambda23*2;
lambda33d = lambda33*2;
! The t-labeled loadings are triple the corresponding lambdas.
lambda24t = lambda24*3;
lambda34t = lambda34*3;
OUTPUT: sampstat stdyx;

Model 1b: SGM, α, λ-mi

filename: 1b_SGM_LinearGrowth_AlphaLamInv.inp

TITLE: Second-Order Multiple-Indicator Growth Model.
    Measurement invariance assumed on alpha and lambda.
    Linear Growth. First-order representation.
    (after Schmid-Leiman Transformation).
DATA: file = Anxiety_Data_3_Indicators_4_Waves.dat;
VARIABLE: names = grpid y11 y21 y31 y12 y22 y32 y13 y23 y33

    y14 y24 y34;
    usevariables = y11 y21 y31 y12 y22 y32
    y13 y23 y33 y14 y24 y34;
    missing = all(-99);
MODEL:
! Common xi. This is the trait intercept.
! Set the first trait loading (lambda) to 1.
! Estimate the other two.
! This assumes Measurement Invariance on lambda.
! Do this if the lambdas have to be the same
! for a given test across testing occasions.
xi_int by y11@1 y21 y31 (lambda1-lambda3)
    y12@1 y22 y32 (lambda1-lambda3)
    y13@1 y23 y33 (lambda1-lambda3)
    y14@1 y24 y34 (lambda1-lambda3);
! Estimate the mean of the trait factor.
! (zero by default in Mplus).
[xi_int*];
! Variance of the trait factor (xi_int).
xi_int (xi_intv);
! Growth Model. This is the trait slope.
! Wave 1 is missing, because lambda would be multiplied by 0.
! Multiply by 1*lambda for the second wave, by 2*lambda for
! the third, and 3*lambda for the fourth.
xi_lin by y12@1 (lambda1)
    y22 (lambda2)
    y32 (lambda3)
    y13@2 (lambda1d)
    y23 (lambda2d)
    y33 (lambda3d)
    y14@3 (lambda1t)
    y24 (lambda2t)
    y34 (lambda3t);
! Estimate the mean of the growth factor.
! (zero by default in Mplus).
[xi_lin*];
! Variance of the growth factor (xi_lin).
xi_lin (xi_linv);
! These loadings must be the same as the trait
! loadings given above...a constraint of the SGM.
zeta1 by y11@1 y21 y31 (lambda1-lambda3);
zeta2 by y12@1 y22 y32 (lambda1-lambda3);
zeta3 by y13@1 y23 y33 (lambda1-lambda3);

zeta4 by y14@1 y24 y34 (lambda1-lambda3);
! Set intercepts of state residual factors to zero.
! (would otherwise be estimated as the default in Mplus)
[zeta1-zeta4@0];
! Delta error variances on each zeta.
zeta1 (zeta1var);
zeta2 (zeta2var);
zeta3 (zeta3var);
zeta4 (zeta4var);
! Constant intercepts (alpha_it).
! Set the intercept on the first indicator
! (alpha1) to 0, and estimate the other
! two (alpha2, alpha3).
! This assumes alpha-mi.
[y11@0];
[y21*] (alpha2);
[y31*] (alpha3);
[y12@0];
[y22*] (alpha2);
[y32*] (alpha3);
[y13@0];
[y23*] (alpha2);
[y33*] (alpha3);
[y14@0];
[y24*] (alpha2);
[y34*] (alpha3);
! Estimate the Error Variances.
y11 (ev11);
y21 (ev21);
y31 (ev31);
y12 (ev12);
y22 (ev22);
y32 (ev32);
y13 (ev13);
y23 (ev23);
y33 (ev33);
y14 (ev14);
y24 (ev24);
y34 (ev34);
! Non-admissible correlations.
xi_int with zeta1-zeta4@0;
xi_lin with zeta1-zeta4@0;
zeta1 with zeta2-zeta4@0;
zeta2 with zeta3-zeta4@0;
zeta3 with zeta4@0;
MODEL CONSTRAINT:
! The d-labeled loadings are double the corresponding lambdas.
lambda2d = lambda2*2;
lambda3d = lambda3*2;
! The t-labeled loadings are triple the corresponding lambdas.
lambda2t = lambda2*3;
lambda3t = lambda3*3;
OUTPUT: sampstat stdyx;

Model 2a: GSGM

filename: 2a_gsgm_lineargrowth.inp

TITLE: Generalized Second-Order Multiple-Indicator Growth Model.
    Linear Growth. First-order model representation.

    (after Schmid-Leiman Transformation).
DATA: file = Anxiety_Data_3_Indicators_4_Waves.dat;
VARIABLE: names = grpid
    y11 y21 y31 y12 y22 y32 y13 y23 y33 y14 y24 y34;
    usevariables = y11 y21 y31 y12 y22 y32
    y13 y23 y33 y14 y24 y34;
    missing = all(-99);
MODEL:
! Common xi. This is the trait intercept.
! Set the first trait loading (lambda) to 1.
! Estimate the other two.
! Measurement Invariance on lambda not assumed.
xi_int by y11@1 (lambda11)
    y21 (lambda21)
    y31 (lambda31)
    y12@1 (lambda12)
    y22 (lambda22)
    y32 (lambda32)
    y13@1 (lambda13)
    y23 (lambda23)
    y33 (lambda33)
    y14@1 (lambda14)
    y24 (lambda24)
    y34 (lambda34);
! Estimate the mean of the trait factor.
! (zero by default in Mplus).
[xi_int*];
! Variance of the trait factor (xi_int).
xi_int (xi_intv);
! Growth Model. This is the trait slope.
! Wave 1 is missing, because lambda would be multiplied by 0.
! Multiply by 1*lambda for the second wave, by 2*lambda for
! the third, and 3*lambda for the fourth.
xi_lin by y12@1 (lambda12)
    y22 (lambda22)
    y32 (lambda32)
    y13@2 (lambda13d)
    y23 (lambda23d)
    y33 (lambda33d)

    y14@3 (lambda14t)
    y24 (lambda24t)
    y34 (lambda34t);
! Estimate the mean of the growth factor.
! (zero by default in Mplus).
[xi_lin*];
! Variance of the growth factor (xi_lin).
xi_lin (xi_linv);
! These loadings are NOT the same as the trait
! loadings given above. This is the difference
! between the GSGM and the SGM.
! Do not assume measurement invariance of the state residual
! factor loadings (gamma_it = gamma_is).
zeta1 by y11@1 (gamma11)
    y21 (gamma21)
    y31 (gamma31);
zeta2 by y12@1 (gamma12)
    y22 (gamma22)
    y32 (gamma32);
zeta3 by y13@1 (gamma13)
    y23 (gamma23)
    y33 (gamma33);
zeta4 by y14@1 (gamma14)
    y24 (gamma24)
    y34 (gamma34);
! Set intercepts of state residual factors to zero.
! (would otherwise be estimated as the default in Mplus)
[zeta1-zeta4@0];
! Delta error variances on each zeta.
zeta1 (zeta1var);
zeta2 (zeta2var);
zeta3 (zeta3var);
zeta4 (zeta4var);
! Constant intercepts (alpha_it).
! Set the intercept on the first indicator
! (alpha1) to 0, and estimate the other
! two (alpha2, alpha3).
[y11@0];
[y21*] (alpha21);
[y31*] (alpha31);
[y12@0];
[y22*] (alpha22);
[y32*] (alpha32);
[y13@0];
[y23*] (alpha23);
[y33*] (alpha33);
[y14@0];
[y24*] (alpha24);
[y34*] (alpha34);
! Estimate the Error Variances.
y11 (ev11);
y21 (ev21);
y31 (ev31);

y12 (ev12);
y22 (ev22);
y32 (ev32);
y13 (ev13);
y23 (ev23);
y33 (ev33);
y14 (ev14);
y24 (ev24);
y34 (ev34);
! Non-admissible correlations.
xi_int with zeta1-zeta4@0;
xi_lin with zeta1-zeta4@0;
zeta1 with zeta2-zeta4@0;
zeta2 with zeta3-zeta4@0;
zeta3 with zeta4@0;
MODEL CONSTRAINT:
! The d-labeled loadings are double the corresponding lambdas.
lambda23d = lambda23*2;
lambda33d = lambda33*2;
! The t-labeled loadings are triple the corresponding lambdas.
lambda24t = lambda24*3;
lambda34t = lambda34*3;
OUTPUT: sampstat stdyx;

Model 2b: GSGM, α, λ-mi

filename: 2b_gsgm_lineargrowth_alphalaminv.inp

TITLE: Generalized Second-Order Multiple-Indicator Growth Model.
    Measurement invariance assumed on alpha and lambda.
    Linear Growth. First-order model representation.
    (after Schmid-Leiman Transformation).
DATA: file = Anxiety_Data_3_Indicators_4_Waves.dat;
VARIABLE: names = grpid y11 y21 y31 y12 y22 y32 y13 y23 y33
    y14 y24 y34;
    usevariables = y11 y21 y31 y12 y22 y32
    y13 y23 y33 y14 y24 y34;
    missing = all(-99);
MODEL:
! Common xi. This is the trait intercept.
! Set the first trait loading (lambda) to 1.
! Estimate the other two.
! This assumes Measurement Invariance on lambda.
! Do this if the lambdas have to be the same
! for a given test across testing occasions.


More information

Multilevel Structural Equation Modeling

Multilevel Structural Equation Modeling Multilevel Structural Equation Modeling Joop Hox Utrecht University j.hox@uu.nl http://www.joophox.net 14_15_mlevsem Multilevel Regression Three level data structure Groups at different levels may have

More information

An Introduction to SEM in Mplus

An Introduction to SEM in Mplus An Introduction to SEM in Mplus Ben Kite Saturday Seminar Series Quantitative Training Program Center for Research Methods and Data Analysis Goals Provide an introduction to Mplus Demonstrate Mplus with

More information

Principal Component Analysis-I Geog 210C Introduction to Spatial Data Analysis. Chris Funk. Lecture 17

Principal Component Analysis-I Geog 210C Introduction to Spatial Data Analysis. Chris Funk. Lecture 17 Principal Component Analysis-I Geog 210C Introduction to Spatial Data Analysis Chris Funk Lecture 17 Outline Filters and Rotations Generating co-varying random fields Translating co-varying fields into

More information

Regression Models - Introduction

Regression Models - Introduction Regression Models - Introduction In regression models there are two types of variables that are studied: A dependent variable, Y, also called response variable. It is modeled as random. An independent

More information

Formula for the t-test

Formula for the t-test Formula for the t-test: How the t-test Relates to the Distribution of the Data for the Groups Formula for the t-test: Formula for the Standard Error of the Difference Between the Means Formula for the

More information

CHAPTER 4 THE COMMON FACTOR MODEL IN THE SAMPLE. From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum

CHAPTER 4 THE COMMON FACTOR MODEL IN THE SAMPLE. From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum CHAPTER 4 THE COMMON FACTOR MODEL IN THE SAMPLE From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum 1997 65 CHAPTER 4 THE COMMON FACTOR MODEL IN THE SAMPLE 4.0. Introduction In Chapter

More information

The Matrix Algebra of Sample Statistics

The Matrix Algebra of Sample Statistics The Matrix Algebra of Sample Statistics James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) The Matrix Algebra of Sample Statistics

More information

Chapter 4: Factor Analysis

Chapter 4: Factor Analysis Chapter 4: Factor Analysis In many studies, we may not be able to measure directly the variables of interest. We can merely collect data on other variables which may be related to the variables of interest.

More information

SEM for Categorical Outcomes

SEM for Categorical Outcomes This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike License. Your use of this material constitutes acceptance of that license and the conditions of use of materials on this

More information

Key Algebraic Results in Linear Regression

Key Algebraic Results in Linear Regression Key Algebraic Results in Linear Regression James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) 1 / 30 Key Algebraic Results in

More information

UNIVERSITY OF TORONTO MISSISSAUGA April 2009 Examinations STA431H5S Professor Jerry Brunner Duration: 3 hours

UNIVERSITY OF TORONTO MISSISSAUGA April 2009 Examinations STA431H5S Professor Jerry Brunner Duration: 3 hours Name (Print): Student Number: Signature: Last/Surname First /Given Name UNIVERSITY OF TORONTO MISSISSAUGA April 2009 Examinations STA431H5S Professor Jerry Brunner Duration: 3 hours Aids allowed: Calculator

More information

Categorical and Zero Inflated Growth Models

Categorical and Zero Inflated Growth Models Categorical and Zero Inflated Growth Models Alan C. Acock* Summer, 2009 *Alan C. Acock, Department of Human Development and Family Sciences, Oregon State University, Corvallis OR 97331 (alan.acock@oregonstate.edu).

More information

Interaction effects for continuous predictors in regression modeling

Interaction effects for continuous predictors in regression modeling Interaction effects for continuous predictors in regression modeling Testing for interactions The linear regression model is undoubtedly the most commonly-used statistical model, and has the advantage

More information

Measurement Theory. Reliability. Error Sources. = XY r XX. r XY. r YY

Measurement Theory. Reliability. Error Sources. = XY r XX. r XY. r YY Y -3 - -1 0 1 3 X Y -10-5 0 5 10 X Measurement Theory t & X 1 X X 3 X k Reliability e 1 e e 3 e k 1 The Big Picture Measurement error makes it difficult to identify the true patterns of relationships between

More information

An Efficient State Space Approach to Estimate Univariate and Multivariate Multilevel Regression Models

An Efficient State Space Approach to Estimate Univariate and Multivariate Multilevel Regression Models An Efficient State Space Approach to Estimate Univariate and Multivariate Multilevel Regression Models Fei Gu Kristopher J. Preacher Wei Wu 05/21/2013 Overview Introduction: estimate MLM as SEM (Bauer,

More information

An Introduction to Path Analysis

An Introduction to Path Analysis An Introduction to Path Analysis PRE 905: Multivariate Analysis Lecture 10: April 15, 2014 PRE 905: Lecture 10 Path Analysis Today s Lecture Path analysis starting with multivariate regression then arriving

More information

SMA 6304 / MIT / MIT Manufacturing Systems. Lecture 10: Data and Regression Analysis. Lecturer: Prof. Duane S. Boning

SMA 6304 / MIT / MIT Manufacturing Systems. Lecture 10: Data and Regression Analysis. Lecturer: Prof. Duane S. Boning SMA 6304 / MIT 2.853 / MIT 2.854 Manufacturing Systems Lecture 10: Data and Regression Analysis Lecturer: Prof. Duane S. Boning 1 Agenda 1. Comparison of Treatments (One Variable) Analysis of Variance

More information

Longitudinal Data Analysis of Health Outcomes

Longitudinal Data Analysis of Health Outcomes Longitudinal Data Analysis of Health Outcomes Longitudinal Data Analysis Workshop Running Example: Days 2 and 3 University of Georgia: Institute for Interdisciplinary Research in Education and Human Development

More information

Wooldridge, Introductory Econometrics, 4th ed. Chapter 2: The simple regression model

Wooldridge, Introductory Econometrics, 4th ed. Chapter 2: The simple regression model Wooldridge, Introductory Econometrics, 4th ed. Chapter 2: The simple regression model Most of this course will be concerned with use of a regression model: a structure in which one or more explanatory

More information

Heteroskedasticity in Panel Data

Heteroskedasticity in Panel Data Essex Summer School in Social Science Data Analysis Panel Data Analysis for Comparative Research Heteroskedasticity in Panel Data Christopher Adolph Department of Political Science and Center for Statistics

More information

Heteroskedasticity in Panel Data

Heteroskedasticity in Panel Data Essex Summer School in Social Science Data Analysis Panel Data Analysis for Comparative Research Heteroskedasticity in Panel Data Christopher Adolph Department of Political Science and Center for Statistics

More information

Introduction to Regression

Introduction to Regression Regression Introduction to Regression If two variables covary, we should be able to predict the value of one variable from another. Correlation only tells us how much two variables covary. In regression,

More information

Three-Level Modeling for Factorial Experiments With Experimentally Induced Clustering

Three-Level Modeling for Factorial Experiments With Experimentally Induced Clustering Three-Level Modeling for Factorial Experiments With Experimentally Induced Clustering John J. Dziak The Pennsylvania State University Inbal Nahum-Shani The University of Michigan Copyright 016, Penn State.

More information

I used college textbooks because they were the only resource available to evaluate measurement uncertainty calculations.

I used college textbooks because they were the only resource available to evaluate measurement uncertainty calculations. Introduction to Statistics By Rick Hogan Estimating uncertainty in measurement requires a good understanding of Statistics and statistical analysis. While there are many free statistics resources online,

More information

ECON2228 Notes 2. Christopher F Baum. Boston College Economics. cfb (BC Econ) ECON2228 Notes / 47

ECON2228 Notes 2. Christopher F Baum. Boston College Economics. cfb (BC Econ) ECON2228 Notes / 47 ECON2228 Notes 2 Christopher F Baum Boston College Economics 2014 2015 cfb (BC Econ) ECON2228 Notes 2 2014 2015 1 / 47 Chapter 2: The simple regression model Most of this course will be concerned with

More information

ECON 4551 Econometrics II Memorial University of Newfoundland. Panel Data Models. Adapted from Vera Tabakova s notes

ECON 4551 Econometrics II Memorial University of Newfoundland. Panel Data Models. Adapted from Vera Tabakova s notes ECON 4551 Econometrics II Memorial University of Newfoundland Panel Data Models Adapted from Vera Tabakova s notes 15.1 Grunfeld s Investment Data 15.2 Sets of Regression Equations 15.3 Seemingly Unrelated

More information

Appendix A: The time series behavior of employment growth

Appendix A: The time series behavior of employment growth Unpublished appendices from The Relationship between Firm Size and Firm Growth in the U.S. Manufacturing Sector Bronwyn H. Hall Journal of Industrial Economics 35 (June 987): 583-606. Appendix A: The time

More information

Föreläsning /31

Föreläsning /31 1/31 Föreläsning 10 090420 Chapter 13 Econometric Modeling: Model Speci cation and Diagnostic testing 2/31 Types of speci cation errors Consider the following models: Y i = β 1 + β 2 X i + β 3 X 2 i +

More information

Testing Indirect Effects for Lower Level Mediation Models in SAS PROC MIXED

Testing Indirect Effects for Lower Level Mediation Models in SAS PROC MIXED Testing Indirect Effects for Lower Level Mediation Models in SAS PROC MIXED Here we provide syntax for fitting the lower-level mediation model using the MIXED procedure in SAS as well as a sas macro, IndTest.sas

More information

ANCOVA. Lecture 9 Andrew Ainsworth

ANCOVA. Lecture 9 Andrew Ainsworth ANCOVA Lecture 9 Andrew Ainsworth What is ANCOVA? Analysis of covariance an extension of ANOVA in which main effects and interactions are assessed on DV scores after the DV has been adjusted for by the

More information

Estimation of Curvilinear Effects in SEM. Rex B. Kline, September 2009

Estimation of Curvilinear Effects in SEM. Rex B. Kline, September 2009 Estimation of Curvilinear Effects in SEM Supplement to Principles and Practice of Structural Equation Modeling (3rd ed.) Rex B. Kline, September 009 Curvlinear Effects of Observed Variables Consider the

More information

R = µ + Bf Arbitrage Pricing Model, APM

R = µ + Bf Arbitrage Pricing Model, APM 4.2 Arbitrage Pricing Model, APM Empirical evidence indicates that the CAPM beta does not completely explain the cross section of expected asset returns. This suggests that additional factors may be required.

More information

Confirmatory Factor Analysis: Model comparison, respecification, and more. Psychology 588: Covariance structure and factor models

Confirmatory Factor Analysis: Model comparison, respecification, and more. Psychology 588: Covariance structure and factor models Confirmatory Factor Analysis: Model comparison, respecification, and more Psychology 588: Covariance structure and factor models Model comparison 2 Essentially all goodness of fit indices are descriptive,

More information

Supplementary materials for: Exploratory Structural Equation Modeling

Supplementary materials for: Exploratory Structural Equation Modeling Supplementary materials for: Exploratory Structural Equation Modeling To appear in Hancock, G. R., & Mueller, R. O. (Eds.). (2013). Structural equation modeling: A second course (2nd ed.). Charlotte, NC:

More information

MULTILEVEL RELIABILITY 1. Online Appendix A. Single Level Reliability as a Function of ICC, Reliability Within, and Reliability Between

MULTILEVEL RELIABILITY 1. Online Appendix A. Single Level Reliability as a Function of ICC, Reliability Within, and Reliability Between MULTILEVEL RELIABILITY 1 Online Appendix A Single Level Reliability as a Function of ICC, Reliability Within, and Reliability Between Let: = The total variance of scale X when it is considered at a single-level

More information

2.2 Classical Regression in the Time Series Context

2.2 Classical Regression in the Time Series Context 48 2 Time Series Regression and Exploratory Data Analysis context, and therefore we include some material on transformations and other techniques useful in exploratory data analysis. 2.2 Classical Regression

More information

Business Economics BUSINESS ECONOMICS. PAPER No. : 8, FUNDAMENTALS OF ECONOMETRICS MODULE No. : 3, GAUSS MARKOV THEOREM

Business Economics BUSINESS ECONOMICS. PAPER No. : 8, FUNDAMENTALS OF ECONOMETRICS MODULE No. : 3, GAUSS MARKOV THEOREM Subject Business Economics Paper No and Title Module No and Title Module Tag 8, Fundamentals of Econometrics 3, The gauss Markov theorem BSE_P8_M3 1 TABLE OF CONTENTS 1. INTRODUCTION 2. ASSUMPTIONS OF

More information

Theoretical and Simulation-guided Exploration of the AR(1) Model

Theoretical and Simulation-guided Exploration of the AR(1) Model Theoretical and Simulation-guided Exploration of the AR() Model Overview: Section : Motivation Section : Expectation A: Theory B: Simulation Section : Variance A: Theory B: Simulation Section : ACF A:

More information

Linear Model Under General Variance Structure: Autocorrelation

Linear Model Under General Variance Structure: Autocorrelation Linear Model Under General Variance Structure: Autocorrelation A Definition of Autocorrelation In this section, we consider another special case of the model Y = X β + e, or y t = x t β + e t, t = 1,..,.

More information

Multiple Regression Analysis

Multiple Regression Analysis Multiple Regression Analysis y = β 0 + β 1 x 1 + β 2 x 2 +... β k x k + u 2. Inference 0 Assumptions of the Classical Linear Model (CLM)! So far, we know: 1. The mean and variance of the OLS estimators

More information

Longitudinal Data Analysis Using Stata Paul D. Allison, Ph.D. Upcoming Seminar: May 18-19, 2017, Chicago, Illinois

Longitudinal Data Analysis Using Stata Paul D. Allison, Ph.D. Upcoming Seminar: May 18-19, 2017, Chicago, Illinois Longitudinal Data Analysis Using Stata Paul D. Allison, Ph.D. Upcoming Seminar: May 18-19, 217, Chicago, Illinois Outline 1. Opportunities and challenges of panel data. a. Data requirements b. Control

More information

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M.

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M. TIME SERIES ANALYSIS Forecasting and Control Fifth Edition GEORGE E. P. BOX GWILYM M. JENKINS GREGORY C. REINSEL GRETA M. LJUNG Wiley CONTENTS PREFACE TO THE FIFTH EDITION PREFACE TO THE FOURTH EDITION

More information

Introduction to Confirmatory Factor Analysis

Introduction to Confirmatory Factor Analysis Introduction to Confirmatory Factor Analysis Multivariate Methods in Education ERSH 8350 Lecture #12 November 16, 2011 ERSH 8350: Lecture 12 Today s Class An Introduction to: Confirmatory Factor Analysis

More information

PubH 7405: REGRESSION ANALYSIS. MLR: INFERENCES, Part I

PubH 7405: REGRESSION ANALYSIS. MLR: INFERENCES, Part I PubH 7405: REGRESSION ANALYSIS MLR: INFERENCES, Part I TESTING HYPOTHESES Once we have fitted a multiple linear regression model and obtained estimates for the various parameters of interest, we want to

More information

CHAPTER 8 MODEL DIAGNOSTICS. 8.1 Residual Analysis

CHAPTER 8 MODEL DIAGNOSTICS. 8.1 Residual Analysis CHAPTER 8 MODEL DIAGNOSTICS We have now discussed methods for specifying models and for efficiently estimating the parameters in those models. Model diagnostics, or model criticism, is concerned with testing

More information

INTRODUCTION TO DESIGN AND ANALYSIS OF EXPERIMENTS

INTRODUCTION TO DESIGN AND ANALYSIS OF EXPERIMENTS GEORGE W. COBB Mount Holyoke College INTRODUCTION TO DESIGN AND ANALYSIS OF EXPERIMENTS Springer CONTENTS To the Instructor Sample Exam Questions To the Student Acknowledgments xv xxi xxvii xxix 1. INTRODUCTION

More information

CAUSAL MEDIATION ANALYSIS WITH MULTIPLE MEDIATORS: WEB APPENDICES 1

CAUSAL MEDIATION ANALYSIS WITH MULTIPLE MEDIATORS: WEB APPENDICES 1 CAUSAL MEDIATION ANALYSIS WITH MULTIPLE MEDIATORS: WEB APPENDICES 1 This document includes five web appendices to the paper: Nguyen TQ, Webb-Vargas Y, Koning IM, Stuart EA. Causal mediation analysis with

More information

Factor Analysis. Qian-Li Xue

Factor Analysis. Qian-Li Xue Factor Analysis Qian-Li Xue Biostatistics Program Harvard Catalyst The Harvard Clinical & Translational Science Center Short course, October 7, 06 Well-used latent variable models Latent variable scale

More information

Using SPSS for One Way Analysis of Variance

Using SPSS for One Way Analysis of Variance Using SPSS for One Way Analysis of Variance This tutorial will show you how to use SPSS version 12 to perform a one-way, between- subjects analysis of variance and related post-hoc tests. This tutorial

More information

Moderation 調節 = 交互作用

Moderation 調節 = 交互作用 Moderation 調節 = 交互作用 Kit-Tai Hau 侯傑泰 JianFang Chang 常建芳 The Chinese University of Hong Kong Based on Marsh, H. W., Hau, K. T., Wen, Z., Nagengast, B., & Morin, A. J. S. (in press). Moderation. In Little,

More information

Optimal design of experiments

Optimal design of experiments Optimal design of experiments Session 4: Some theory Peter Goos / 40 Optimal design theory continuous or approximate optimal designs implicitly assume an infinitely large number of observations are available

More information

Introduction to Random Effects of Time and Model Estimation

Introduction to Random Effects of Time and Model Estimation Introduction to Random Effects of Time and Model Estimation Today s Class: The Big Picture Multilevel model notation Fixed vs. random effects of time Random intercept vs. random slope models How MLM =

More information

STRUCTURAL EQUATION MODELING. Khaled Bedair Statistics Department Virginia Tech LISA, Summer 2013

STRUCTURAL EQUATION MODELING. Khaled Bedair Statistics Department Virginia Tech LISA, Summer 2013 STRUCTURAL EQUATION MODELING Khaled Bedair Statistics Department Virginia Tech LISA, Summer 2013 Introduction: Path analysis Path Analysis is used to estimate a system of equations in which all of the

More information

AN INTRODUCTION TO STRUCTURAL EQUATION MODELING WITH AN APPLICATION TO THE BLOGOSPHERE

AN INTRODUCTION TO STRUCTURAL EQUATION MODELING WITH AN APPLICATION TO THE BLOGOSPHERE AN INTRODUCTION TO STRUCTURAL EQUATION MODELING WITH AN APPLICATION TO THE BLOGOSPHERE Dr. James (Jim) D. Doyle March 19, 2014 Structural equation modeling or SEM 1971-1980: 27 1981-1990: 118 1991-2000:

More information

Unit Root and Cointegration

Unit Root and Cointegration Unit Root and Cointegration Carlos Hurtado Department of Economics University of Illinois at Urbana-Champaign hrtdmrt@illinois.edu Oct 7th, 016 C. Hurtado (UIUC - Economics) Applied Econometrics On the

More information

Supplemental Materials. In the main text, we recommend graphing physiological values for individual dyad

Supplemental Materials. In the main text, we recommend graphing physiological values for individual dyad 1 Supplemental Materials Graphing Values for Individual Dyad Members over Time In the main text, we recommend graphing physiological values for individual dyad members over time to aid in the decision

More information

1. You have data on years of work experience, EXPER, its square, EXPER2, years of education, EDUC, and the log of hourly wages, LWAGE

1. You have data on years of work experience, EXPER, its square, EXPER2, years of education, EDUC, and the log of hourly wages, LWAGE 1. You have data on years of work experience, EXPER, its square, EXPER, years of education, EDUC, and the log of hourly wages, LWAGE You estimate the following regressions: (1) LWAGE =.00 + 0.05*EDUC +

More information

Goals for the Morning

Goals for the Morning Introduction to Growth Curve Modeling: An Overview and Recommendations for Practice Patrick J. Curran & Daniel J. Bauer University of North Carolina at Chapel Hill Goals for the Morning Brief review of

More information

Higher-Order Factor Models

Higher-Order Factor Models Higher-Order Factor Models Topics: The Big Picture Identification of higher-order models Measurement models for method effects Equivalent models CLP 948: Lecture 8 1 Sequence of Steps in CFA or IFA 1.

More information

Outline. Mixed models in R using the lme4 package Part 3: Longitudinal data. Sleep deprivation data. Simple longitudinal data

Outline. Mixed models in R using the lme4 package Part 3: Longitudinal data. Sleep deprivation data. Simple longitudinal data Outline Mixed models in R using the lme4 package Part 3: Longitudinal data Douglas Bates Longitudinal data: sleepstudy A model with random effects for intercept and slope University of Wisconsin - Madison

More information

Obtaining Uncertainty Measures on Slope and Intercept

Obtaining Uncertainty Measures on Slope and Intercept Obtaining Uncertainty Measures on Slope and Intercept of a Least Squares Fit with Excel s LINEST Faith A. Morrison Professor of Chemical Engineering Michigan Technological University, Houghton, MI 39931

More information

1 The Multiple Regression Model: Freeing Up the Classical Assumptions

1 The Multiple Regression Model: Freeing Up the Classical Assumptions 1 The Multiple Regression Model: Freeing Up the Classical Assumptions Some or all of classical assumptions were crucial for many of the derivations of the previous chapters. Derivation of the OLS estimator

More information

G. S. Maddala Kajal Lahiri. WILEY A John Wiley and Sons, Ltd., Publication

G. S. Maddala Kajal Lahiri. WILEY A John Wiley and Sons, Ltd., Publication G. S. Maddala Kajal Lahiri WILEY A John Wiley and Sons, Ltd., Publication TEMT Foreword Preface to the Fourth Edition xvii xix Part I Introduction and the Linear Regression Model 1 CHAPTER 1 What is Econometrics?

More information

Review of CLDP 944: Multilevel Models for Longitudinal Data

Review of CLDP 944: Multilevel Models for Longitudinal Data Review of CLDP 944: Multilevel Models for Longitudinal Data Topics: Review of general MLM concepts and terminology Model comparisons and significance testing Fixed and random effects of time Significance

More information

Overview. 1. Terms and Definitions. 2. Model Identification. 3. Path Coefficients

Overview. 1. Terms and Definitions. 2. Model Identification. 3. Path Coefficients 2. The Basics Overview 1. Terms and Definitions 2. Model Identification 3. Path Coefficients 2.1 Terms and Definitions 2.1 Terms & Definitions. Structural equation model = observed, latent, composite Direct

More information

MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010

MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010 MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010 Part 1 of this document can be found at http://www.uvm.edu/~dhowell/methods/supplements/mixed Models for Repeated Measures1.pdf

More information

MLMED. User Guide. Nicholas J. Rockwood The Ohio State University Beta Version May, 2017

MLMED. User Guide. Nicholas J. Rockwood The Ohio State University Beta Version May, 2017 MLMED User Guide Nicholas J. Rockwood The Ohio State University rockwood.19@osu.edu Beta Version May, 2017 MLmed is a computational macro for SPSS that simplifies the fitting of multilevel mediation and

More information

FERMENTATION BATCH PROCESS MONITORING BY STEP-BY-STEP ADAPTIVE MPCA. Ning He, Lei Xie, Shu-qing Wang, Jian-ming Zhang

FERMENTATION BATCH PROCESS MONITORING BY STEP-BY-STEP ADAPTIVE MPCA. Ning He, Lei Xie, Shu-qing Wang, Jian-ming Zhang FERMENTATION BATCH PROCESS MONITORING BY STEP-BY-STEP ADAPTIVE MPCA Ning He Lei Xie Shu-qing Wang ian-ming Zhang National ey Laboratory of Industrial Control Technology Zhejiang University Hangzhou 3007

More information

Model Invariance Testing Under Different Levels of Invariance. W. Holmes Finch Brian F. French

Model Invariance Testing Under Different Levels of Invariance. W. Holmes Finch Brian F. French Model Invariance Testing Under Different Levels of Invariance W. Holmes Finch Brian F. French Measurement Invariance (MI) MI is important. Construct comparability across groups cannot be assumed Must be

More information

Chapter 4: Regression Models

Chapter 4: Regression Models Sales volume of company 1 Textbook: pp. 129-164 Chapter 4: Regression Models Money spent on advertising 2 Learning Objectives After completing this chapter, students will be able to: Identify variables,

More information

Serial Correlation. Edps/Psych/Stat 587. Carolyn J. Anderson. Fall Department of Educational Psychology

Serial Correlation. Edps/Psych/Stat 587. Carolyn J. Anderson. Fall Department of Educational Psychology Serial Correlation Edps/Psych/Stat 587 Carolyn J. Anderson Department of Educational Psychology c Board of Trustees, University of Illinois Fall 017 Model for Level 1 Residuals There are three sources

More information

SEM with observed variables: parameterization and identification. Psychology 588: Covariance structure and factor models

SEM with observed variables: parameterization and identification. Psychology 588: Covariance structure and factor models SEM with observed variables: parameterization and identification Psychology 588: Covariance structure and factor models Limitations of SEM as a causal modeling 2 If an SEM model reflects the reality, the

More information

Principal Component Analysis (PCA) Principal Component Analysis (PCA)

Principal Component Analysis (PCA) Principal Component Analysis (PCA) Recall: Eigenvectors of the Covariance Matrix Covariance matrices are symmetric. Eigenvectors are orthogonal Eigenvectors are ordered by the magnitude of eigenvalues: λ 1 λ 2 λ p {v 1, v 2,..., v n } Recall:

More information