CONFIRMATORY FACTOR ANALYSIS

The purpose of confirmatory factor analysis (CFA) is to explain the pattern of associations among a set of observed variables in terms of a smaller number of underlying latent variables (or factors).

Figures 1 and 2: [path diagrams of a two-factor CFA model with eight indicators: the factors ξ1 and ξ2 are correlated (φ21), x1 through x4 load on ξ1 (loadings λ11, λ21, λ31, λ41), x5 through x8 load on ξ2 (loadings λ52, λ62, λ72, λ82), and each indicator xi has a unique factor δi with variance θii]

In general, the goal of CFA is similar to that of exploratory factor analysis (EFA). However, in EFA the number of factors is not known a priori, and it is also unknown which observed variables load on which factor(s). In contrast, in CFA the number of factors is determined a priori, and the researcher also specifies which observed variables load on which factor(s).

I. Specification:

x = Λx ξ + δ

where:
x = q x 1 vector of observed variables in deviation form,
Λx = q x n matrix of factor loadings,
ξ = n x 1 vector of common factors in deviation form,
δ = q x 1 vector of unique (specific plus random error) factors.

Assuming that E(δ) = 0 and that Cov(ξ, δ') = 0, this specification of the model implies the following structure for the variance-covariance matrix of x:

Σ = Cov(x, x') = Σ(Λx, Φ, Θδ) = Λx Φ Λx' + Θδ

where Φ and Θδ are the variance-covariance matrices of ξ and δ, respectively (see Appendix A for the specification of the model in Figure 1).
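To make this covariance structure concrete, here is a worked example using the notation of Figure 1 (with the factor variances fixed to one, as in Appendix A): the model implies, for instance,

Var(x1) = λ11² + θ11   and   Cov(x1, x5) = λ11 λ52 φ21,

so the covariance between indicators of different factors is carried entirely by their loadings and the factor correlation (Appendix C works through a similar derivation for a smaller model).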

Several types of CFA models can be distinguished:

(1) the basic (congeneric) CFA model: in this model we specify that each row of Λx has only one non-zero entry and that Θδ = Cov(δ, δ') is a diagonal matrix;

(2) variations on the basic model:
- additional restrictions may be imposed on Λx (e.g., that certain factor loadings are equal across items); if all items load equally on a given factor, we speak of (essentially) tau-equivalent measurement;
- certain restrictions on Λx may be relaxed (e.g., an item may be allowed to load on multiple factors);
- in the basic model the factors are allowed to be correlated (i.e., they are specified to be oblique); one could test whether the factors are uncorrelated (orthogonal) or perfectly correlated;
- additional restrictions may be imposed on Θδ (e.g., that the error variances are equal across items); if all items load equally on a given factor and all error variances are identical, we speak of parallel measurement;
- certain restrictions on Θδ may be relaxed (e.g., correlated errors of measurement may be introduced); [for a fairly complicated model used in MTMM analysis see Appendix B]

Model identification:

A factor analysis model is said to be (globally) identified if Σ(Λ1, Φ1, Θ1) = Σ(Λ2, Φ2, Θ2) implies that Λ1 = Λ2, Φ1 = Φ2, and Θ1 = Θ2.

Note that in order for a model to be identified, each free parameter has to be identified; a model is exactly (or just) identified if all the restrictions imposed on the model are needed to identify the model; if there are redundant restrictions, the model is said to be overidentified (i.e., it has a positive number of degrees of freedom).

In order to achieve identification, we have to fix the scale of the latent variables:
- the coefficient relating δ to x has already been set to one in the specification of the model;
- the scale of the factors is fixed by setting their variances to one (i.e., Φ has unit diagonal elements) or by constraining one loading per factor to unity (which corresponds to equating the scale of a factor to that of one of its indicators);

In addition, various other constraints have to be imposed on Λx, Φ, and Θδ in order to identify the model.

Identification procedures: [see Appendix C for an example]

(i) a necessary condition for identification is that the number of freely estimated parameters not be greater than the number of distinct elements in the variance-covariance matrix of x;

(ii) to show that a model is identified, one has to show that every free parameter in Λx, Φ, and Θδ can be expressed as a unique function of the variances and covariances of the observed variables (i.e., the elements of Σ);

(iii) identification rules for some special cases:
- three-indicator rule [sufficient but not necessary]: if there are at least three indicators per factor, each indicator loads on one and only one factor, and Θδ is diagonal, the factor model is identified;
- two-indicator rule [sufficient but not necessary]: if there are at least two factors and two indicators per factor, each indicator loads on one and only one factor, the factors are allowed to freely correlate, and Θδ is diagonal, the factor model is identified;

(iv) if it is too difficult to check identification formally, empirical tests of identification may be used; empirical tests are based on the concept of local identifiability; one common test is based on whether the inverse of the estimated information matrix exists;

Note: In CFA, the factor correlations (contained in the Φ matrix, if the factor variances in the diagonal have been standardized to 1) are corrected for attenuation due to measurement error.
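As a worked illustration of the counting rule in (i), consider the model in Figure 1 with the factor variances fixed to one: there are q(q + 1)/2 = 8(9)/2 = 36 distinct elements in the variance-covariance matrix of x, and 8 factor loadings + 1 factor covariance + 8 unique variances = 17 free parameters, so the necessary condition is satisfied and the model has 36 - 17 = 19 degrees of freedom (these are the "Number of Moments", "Number of Parameters", and chi-square degrees of freedom reported in the CALIS output for the coupon example later in this handout).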

II. Estimation:

The goal of estimation is to find values for the unknown parameters in Λx, Φ, and Θδ (i.e., the λ's, φ's, and θ's), based on S (the observed variance-covariance matrix of x), such that the variance-covariance matrix Σ̂ = Σ(Λ̂x, Φ̂, Θ̂δ) implied by the estimated model parameters is as close as possible to the observed variance-covariance matrix S.

To make the concept of closeness between Σ and S operational, we have to choose a discrepancy function F(S; Σ), which is a scalar-valued function with the following properties (see Browne 1982):
(i) F(S; Σ) ≥ 0
(ii) F(S; Σ) = 0 iff S = Σ
(iii) F(S; Σ) is continuous in S and Σ

Some common discrepancy functions are the following:

(i) Unweighted Least Squares (ULS):

F_ULS = ½ tr[(S − Σ)²]

this expression minimizes one-half the sum of squared residuals between S and Σ; provided the model is identified, ULS produces consistent estimates regardless of the distribution of x; however, the estimates are not asymptotically efficient and they are not scale free; in addition, F_ULS is not scale invariant (see the discussion below);

(ii) Maximum Likelihood (ML):

F_ML = log|Σ| + tr(S Σ⁻¹) − log|S| − q

this discrepancy function is based on the assumption that x has a multivariate normal distribution, which implies that S has a Wishart distribution;

under very general conditions, ML estimators are consistent, asymptotically efficient, and asymptotically normally distributed; in addition, F_ML is scale invariant and ML estimates are scale free (in most cases); this means that

F_ML(S, Σ(Λx, Φ, Θδ)) = F_ML(DSD, Σ(DΛx, Φ, DΘδD))

where D is a diagonal, nonsingular matrix with positive diagonal elements [see Appendix D for an example]; for instance, if D contains the reciprocals of the observed standard deviations, DSD is the sample correlation matrix, which is why the covariance and correlation analyses in Appendix D yield the same χ² value and rescaled versions of the same estimates;

(iii) Generalized Least Squares (GLS):

F_GLS = ½ tr[((S − Σ) S⁻¹)²]

the assumptions underlying GLS estimation are slightly less restrictive than those necessary for ML estimation (namely, that fourth-order cumulants are zero so that there is no excess kurtosis); GLS estimates are also consistent, asymptotically efficient, and asymptotically normally distributed; in addition, F_GLS is scale invariant and GLS estimates are scale free (in most cases);

(iv) other estimation methods: other estimation methods are available, including asymptotically distribution-free (ADF) procedures;

Estimation problems:

- nonconvergence: no solution can be found in a given number of iterations or within a given time limit; it is important that estimation begin at good starting values (usually supplied automatically); causes of nonconvergence may be poorly specified models and small sample sizes with few indicators per factor;

- improper solutions: values of sample estimates that are not possible in the population (e.g., negative error variances, also referred to as Heywood cases); the causes of improper solutions are similar to those of nonconvergence;

III. Testing:

1. Global fit measures:

(a) χ² goodness-of-fit test:

H0: Σ = Σ(Λx, Φ, Θδ)   [perfect fit]
HA: Σ ≠ Σ(Λx, Φ, Θδ)   [departure from perfect fit]

based on the likelihood ratio criterion, one compares the likelihood of the hypothesized model (L0) to the likelihood of a model with perfect fit (L1):

−2 ln(L0 / L1) ~ χ²(f)

where f is equal to the number of overidentifying restrictions; since (N − 1) times the minimum of the fit function (e.g., F_ML) equals −2 ln(L0 / L1), where N is the sample size, we can use a χ² test based on the minimum of the fit function to investigate the null hypothesis that the estimated variance-covariance matrix deviates from the sample variance-covariance matrix only because of sampling error;

note that in order for the χ² test to be applicable and valid, the model has to have a positive number of overidentifying restrictions, the assumptions underlying the application of the chosen estimation procedure (e.g., multivariate normality in the case of maximum likelihood estimation) have to be satisfied, and the sample size has to be large (because it is an asymptotic test);

in practice, the χ² test is often of limited usefulness because of the following reasons (see Bentler 1990):
- the assumptions on which its appropriateness is based may not be met, and there is evidence that the χ² test is not robust to violations of these assumptions;
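As a worked check against the CALIS output for the coupon data shown later in this handout: for the congeneric model the minimized fit function is F_ML = 0.2931 with N = 250, so the test statistic is (N − 1) × F_ML = 249 × 0.2931 ≈ 72.98, which is the reported χ² with 36 − 17 = 19 degrees of freedom (p < .0001).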

- the test is only asymptotically valid and the sample size may be too small to yield a valid test of model adequacy;
- the sample size may be too large so that the test is powerful enough to detect relatively minor or even trivial discrepancies between the estimated and observed covariance matrices;

note the following points:
(i) if S = Σ̂, then F_ULS = F_ML = F_GLS = 0;
(ii) application of the χ² goodness-of-fit test requires that the model be overidentified; essentially the test assesses the appropriateness of the overidentifying restrictions;

(b) alternative fit indices: there are many alternative fit indices which assess the fit of the model in an absolute sense (stand-alone fit indices) or relative to a baseline model (incremental fit indices); we will discuss these indices in a separate handout;

2. Model modification:

(a) modification indices and expected parameter changes: modification indices (MIs) show the predicted decrease in the χ² statistic when a fixed parameter is freed or an equality constraint is relaxed; expected parameter changes (EPCs) show the predicted estimate of the parameter when it is freely estimated; standardized EPCs are available as well;

(b) residual analysis: the size of the residuals, (s_ij − σ̂_ij), is dependent on the appropriateness of the hypothesized model, the scale in which the observed variables are measured, and sampling fluctuation; correlation residuals (residuals based on the completely standardized solution) remove scale dependencies; standardized residuals (residuals divided by the square root of the estimated asymptotic variance) correct for differences in scale and sample size effects; the pattern of over- and underfitting might suggest model modifications;
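As an illustration of how these diagnostics are used, consider the CALIS output for the congeneric model below: the largest asymptotically standardized residual is 4.90 for the pair (aa3t1, aa3t2), and the largest LM statistic (modification index) among the error covariances is 27.88 for the same pair, with an expected parameter change of 0.24; freeing the four test-retest error covariances in the second model yields an estimated covariance of 0.23 for this pair and reduces the χ² from 72.98 (df = 19) to 26.76 (df = 15).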

LISREL and other programs also provide a summary statistic based on the residuals called the root mean square residual (or RMR) as well as a standardized RMR;

3. Local fit measures:

(a) parameter estimates: check whether the estimates are proper and whether they make substantive sense; also, investigate the significance of the parameter estimates based on asymptotic standard errors;

(b) reliability/convergent validity:

individual-item reliability (squared correlation between a construct ξj and one of its indicators xi):

IIR_xi = λ²ij Var(ξj) / [λ²ij Var(ξj) + Var(δi)]

note: in LISREL individual-item reliabilities are called squared multiple correlations;

average variance extracted or AVE (proportion of the total variance in all indicators of a construct accounted for by the construct, or the average individual-item reliability across all indicators of a construct; see Fornell and Larcker 1981):

AVE(ξj) = (Σ λ²ij) Var(ξj) / [(Σ λ²ij) Var(ξj) + Σ Var(δi)]

or, more simply,

AVE(ξj) = (Σ IIR_xi) / K

where K is the number of indicators for the construct in question.
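A worked check against the congeneric-model CALIS output below: the completely standardized loading of aa1t1 is 0.8067, so IIR = 0.8067² ≈ 0.65, which is the squared multiple correlation (R-Square = 0.6507) reported for that item; averaging the four R-squares for the AAT1 indicators, (0.65 + 0.71 + 0.54 + 0.73) / 4 ≈ 0.66, which (apart from rounding and the use of the correlated-errors estimates there) is the AVE of .66 reported for AAT1 in the measurement analysis table at the end of this handout.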

composite reliability (squared correlation between a construct and an unweighted composite of its indicators xi):

CR(ξj) = (Σ λij)² Var(ξj) / [(Σ λij)² Var(ξj) + Σ Var(δi)]

(c) discriminant validity:
- factor correlations should be significantly different from unity (based on the confidence interval around the estimated factor correlation or a χ² difference test);
- average variance extracted should be greater than the squared correlation between the factors (see Fornell and Larcker 1981);
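A worked check using the rounded AAT1 estimates from the measurement analysis table at the end of this handout (the model with correlated errors): (1.08 + 1.11 + 0.92 + 1.22)² = 4.33² ≈ 18.75 and Σ Var(δi) = 0.66 + 0.48 + 0.76 + 0.55 = 2.45, so with Var(ξ1) = 1, CR ≈ 18.75 / (18.75 + 2.45) ≈ .88, which matches the composite reliability reported there.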

Appendix A: Specification of the model in Figure 1

In matrix form, x = Λx ξ + δ:

[x1]   [λ11   0  ]          [δ1]
[x2]   [λ21   0  ]          [δ2]
[x3]   [λ31   0  ]          [δ3]
[x4] = [λ41   0  ]  [ξ1]  + [δ4]
[x5]   [ 0   λ52 ]  [ξ2]    [δ5]
[x6]   [ 0   λ62 ]          [δ6]
[x7]   [ 0   λ72 ]          [δ7]
[x8]   [ 0   λ82 ]          [δ8]

or, in scalar form:

x1 = λ11 ξ1 + δ1
x2 = λ21 ξ1 + δ2
x3 = λ31 ξ1 + δ3
x4 = λ41 ξ1 + δ4
x5 = λ52 ξ2 + δ5
x6 = λ62 ξ2 + δ6
x7 = λ72 ξ2 + δ7
x8 = λ82 ξ2 + δ8

with

Var(ξ1) = 1, Var(ξ2) = 1, Cov(ξ1, ξ2) = φ21
Var(δi) = θii, Cov(δi, δj) = 0 for i ≠ j (i.e., Θδ = Diag(θ11, ..., θ88))

Appendix B: A model for MTMM (multi-trait multi-method) analysis

[path diagram: three trait factors (T1, T2, T3) and three method factors (M1, M2, M3); each of the nine observed measures TiMj loads on its trait factor Ti and its method factor Mj]

Appendix C: Identification of a simple CFA model

[path diagram: two correlated factors ξ1 and ξ2 (covariance φ21); x1 and x2 load on ξ1 (loadings λ11, λ21), x3 and x4 load on ξ2 (loadings λ32, λ42); unique factors δ1 through δ4]

Rules of covariance algebra: Let X1, X2, and X3 be random variables and a, b, c and d constants. Then

COV(a + bX1, c + dX2) = bd COV(X1, X2)
COV(X1 + X2, X3) = COV(X1, X3) + COV(X2, X3)

With the factor variances fixed to one and Θδ diagonal, the implied variance-covariance matrix is (lower triangle):

σ11 = λ11² + θ11
σ21 = λ21 λ11          σ22 = λ21² + θ22
σ31 = λ32 λ11 φ21      σ32 = λ32 λ21 φ21      σ33 = λ32² + θ33
σ41 = λ42 λ11 φ21      σ42 = λ42 λ21 φ21      σ43 = λ42 λ32      σ44 = λ42² + θ44
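One way to complete the identification argument (a sketch of the algebra, using only the expressions above) is to solve these equations for the free parameters:

λ11² = σ21 σ31 / σ32,   λ21² = σ21 σ32 / σ31,   λ32² = σ43 σ31 / σ41,   λ42² = σ43 σ41 / σ31,
θ11 = σ11 − λ11²,   θ22 = σ22 − λ21²,   θ33 = σ33 − λ32²,   θ44 = σ44 − λ42²,   φ21 = σ31 / (λ11 λ32).

Thus every free parameter can be written as a function of the elements of Σ (up to the usual sign indeterminacy of the loadings), so the model is identified, with 4(5)/2 − 9 = 1 degree of freedom.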

Appendix D: Scale invariance and scale freeness

                 covariances                  correlations
           λ(ξ1)   λ(ξ2)   θδ           λ(ξ1)   λ(ξ2)   θδ
aa1t1      1.10     .00    .65           .81     .00    .35
aa2t1      1.10     .00    .50           .84     .00    .29
aa3t1       .94     .00    .74           .74     .00    .46
aa4t1      1.21     .00    .56           .85     .00    .27
aa1t2       .00    1.20    .55           .00     .85    .28
aa2t2       .00    1.16    .41           .00     .88    .23
aa3t2       .00     .99    .55           .00     .80    .36
aa4t2       .00    1.23    .49           .00     .87    .24
χ²(19)            72.98                        72.98

Fitted covariance matrix:
aa1t1  1.86
aa2t1  1.21  1.71
aa3t1  1.03  1.03  1.61
aa4t1  1.33  1.34  1.14  2.03
aa1t2  1.19  1.19  1.01  1.31  1.98
aa2t2  1.15  1.16   .98  1.27  1.39  1.76
aa3t2   .98   .98   .83  1.08  1.18  1.15  1.52
aa4t2  1.22  1.23  1.04  1.35  1.48  1.43  1.22  2.01

Fitted correlation matrix:
aa1t1  1.00
aa2t1   .68  1.00
aa3t1   .59   .62  1.00
aa4t1   .69   .72   .63  1.00
aa1t2   .62   .65   .57   .65  1.00
aa2t2   .64   .67   .58   .67   .75  1.00
aa3t2   .58   .61   .53   .61   .68   .70  1.00
aa4t2   .63   .66   .58   .67   .74   .76   .70  1.00

Fitted correlation matrix pre- and post-multiplied by standard deviations:
aa1t1  1.87
aa2t1  1.21  1.70
aa3t1  1.04  1.03  1.62
aa4t1  1.34  1.33  1.14  2.02
aa1t2  1.19  1.18  1.01  1.30  1.98
aa2t2  1.16  1.15   .99  1.27  1.40  1.77
aa3t2   .98   .97   .83  1.07  1.18  1.15  1.52
aa4t2  1.22  1.22  1.04  1.34  1.47  1.44  1.21  2.00
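As a worked illustration of what the last matrix shows, using the values above: multiplying a fitted correlation by the corresponding standard deviations essentially reproduces the fitted covariance, e.g., for (aa2t1, aa1t1), .68 × √1.86 × √1.71 ≈ 1.21, which matches the fitted covariance of 1.21 (the small discrepancies in the third matrix are due to rounding). This is the scale-freeness property at work: the solution from the correlation analysis can be rescaled into the solution from the covariance analysis, and both analyses yield the same χ²(19) = 72.98.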

EXPLAINING CONSUMERS' USAGE OF COUPONS FOR GROCERY SHOPPING
(Bagozzi, Baumgartner, and Yi, JCR 1992)

Procedure

Female staff members at two American universities completed two questionnaires that were sent to them via campus mail. The first questionnaire contained measures of seven beliefs about the consequences of using coupons and corresponding evaluations, as well as measures of attitude toward using coupons, behavioral intentions, and the personality variable of state-/action-orientation. One week later a second questionnaire was sent to those people who had participated in the first wave of data collection. This questionnaire assessed some of the same variables as in wave one as well as people's self-reported coupon usage during the past week. Specifically, participants were presented with a table that had 21 product categories as its rows (e.g., cereal, juice drinks, paper towels, snack foods, canned goods) and six sources of coupons as its columns (i.e., direct mail, newspapers, magazines, in or on packages, from store displays or flyers, from relatives or friends). An additional row was included for other products so that respondents could indicate usage in categories not covered by the 21 listed. Participants were asked to state how many coupons they had used for each category and source combination.

Measures

(1) Beliefs: perceived likelihood of the following consequences of using coupons (rated on 7-point unlikely-likely scales):
- inconveniences:
  o searching for, gathering, and organizing coupons takes much time and effort;
  o planning the use of and actually redeeming coupons in the supermarket takes much time and effort;
- rewards:
  o using coupons saves much money on the grocery bill;
  o using coupons leads to feelings of being a thrifty shopper;
- encumbrances:
  o in order to obtain coupons one has to subscribe to extra newspapers, magazines, etc.;
  o in order to take advantage of coupon offers one has to purchase nonpreferred brands;
  o in order to take advantage of coupon offers one has to shop at multiple supermarkets;

(2) Evaluations: how each of the seven consequences of using coupons makes the respondent feel, rated on 7-point good-bad scales;

(3) Aact: attitude toward using coupons for shopping in the supermarket during the upcoming week (assessed on four semantic differential scales, i.e., unpleasant-pleasant, bad-good, foolish-wise, and unfavorable-favorable); measured twice (week 1, week 2);

(4) BI: behavioral intentions to use coupons for shopping in the supermarket during the upcoming week (measured with a 7-point unlikely-likely scale assessing intentions to use coupons and an 11-point no chance-certain scale asking about plans to use coupons);

(5) Actual coupon usage: the total number of coupons used across product categories and sources; a square root transformation was used to normalize the variable;

DATA coupon;
  INFILE 'd:\ipss\cfa\factor.dat';
  INPUT id aa1t1 aa2t1 aa3t1 aa4t1 aa1t2 aa2t2 aa3t2 aa4t2;

title 'Confirmatory Factor Model: Congeneric Model';
title2 '(using the PATH specification in CALIS)';

PROC CALIS DATA=coupon MODIFICATION RESIDUAL;
  PATH
    aa1t1 <--- AAT1 = L11,
    aa2t1 <--- AAT1 = L21,
    aa3t1 <--- AAT1 = L31,
    aa4t1 <--- AAT1 = L41,
    aa1t2 <--- AAT2 = L12,
    aa2t2 <--- AAT2 = L22,
    aa3t2 <--- AAT2 = L32,
    aa4t2 <--- AAT2 = L42;
  PVAR
    AAT1 = 1.,
    AAT2 = 1.,
    aa1t1 = th11,
    aa2t1 = th22,
    aa3t1 = th33,
    aa4t1 = th44,
    aa1t2 = th55,
    aa2t2 = th66,
    aa3t2 = th77,
    aa4t2 = th88;
  PCOV
    AAT1 AAT2 = CovAAT1AAT2(0.);
run;
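As a side note, the (essentially) tau-equivalent variant mentioned in the model-specification section could be obtained from the congeneric program above by reusing a single parameter name for the loadings within each factor, which constrains them to be equal in the PATH modeling language. A minimal sketch (this run is not part of the analyses reported below):

title 'Confirmatory Factor Model: Tau-equivalent Model (illustrative sketch)';
PROC CALIS DATA=coupon;
  PATH
    aa1t1 <--- AAT1 = LT1,   /* same parameter name LT1: loadings on AAT1 constrained equal */
    aa2t1 <--- AAT1 = LT1,
    aa3t1 <--- AAT1 = LT1,
    aa4t1 <--- AAT1 = LT1,
    aa1t2 <--- AAT2 = LT2,   /* same parameter name LT2: loadings on AAT2 constrained equal */
    aa2t2 <--- AAT2 = LT2,
    aa3t2 <--- AAT2 = LT2,
    aa4t2 <--- AAT2 = LT2;
  PVAR
    AAT1 = 1.,
    AAT2 = 1.,
    aa1t1 = th11,
    aa2t1 = th22,
    aa3t1 = th33,
    aa4t1 = th44,
    aa1t2 = th55,
    aa2t2 = th66,
    aa3t2 = th77,
    aa4t2 = th88;
  PCOV
    AAT1 AAT2 = CovAAT1AAT2(0.);
run;

A χ² difference test between this model and the congeneric model (3 degrees of freedom per factor) would assess whether the equality constraints are tenable.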

Confirmatory Factor Model: Congeneric Model
(using the PATH specification in CALIS)

Fit Summary

Modeling Info
  Number of Observations                 250
  Number of Variables                      8
  Number of Moments                       36
  Number of Parameters                    17
  Number of Active Constraints             0
  Baseline Model Function Value       6.2487
  Baseline Model Chi-Square        1555.9289
  Baseline Model Chi-Square DF            28
  Pr > Baseline Model Chi-Square      <.0001
Absolute Index
  Fit Function                        0.2931
  Chi-Square                         72.9846
  Chi-Square DF                           19
  Pr > Chi-Square                     <.0001
  Z-Test of Wilson & Hilferty         5.3429
  Hoelter Critical N                     103
  Root Mean Square Residual (RMR)     0.0549
  Standardized RMR (SRMR)             0.0323
  Goodness of Fit Index (GFI)         0.9361
Parsimony Index
  Adjusted GFI (AGFI)                 0.8790
  Parsimonious GFI                    0.6352
  RMSEA Estimate                      0.1068
  RMSEA Lower 90% Confidence Limit    0.0816
  RMSEA Upper 90% Confidence Limit    0.1333
  Probability of Close Fit            0.0002
  ECVI Estimate                       0.4348
  ECVI Lower 90% Confidence Limit     0.3432
  ECVI Upper 90% Confidence Limit     0.5579
  Akaike Information Criterion      106.9846
  Bozdogan CAIC                     183.8494
  Schwarz Bayesian Criterion        166.8494
  McDonald Centrality                 0.8977
Incremental Index
  Bentler Comparative Fit Index       0.9647
  Bentler-Bonett NFI                  0.9531
  Bentler-Bonett Non-normed Index     0.9479
  Bollen Normed Index Rho1            0.9309
  Bollen Non-normed Index Delta2      0.9649
  James et al. Parsimonious NFI       0.6467

The CALIS Procedure
Covariance Structure Analysis: Maximum Likelihood Estimation

Asymptotically Standardized Residual Matrix

          aa1t1     aa2t1     aa3t1     aa4t1
aa1t1   0.00000  -0.56966   0.53320  -0.19988
aa2t1  -0.56966   0.00000  -0.52432   2.06889
aa3t1   0.53320  -0.52432   0.00000  -1.57779
aa4t1  -0.19988   2.06889  -1.57779   0.00000
aa1t2   3.83654   2.17105  -0.00219  -0.89603
aa2t2  -1.12736   0.16444  -0.14489  -1.16498
aa3t2  -1.26127  -1.89428   4.89617   0.80859
aa4t2  -0.73767  -2.71243  -0.10044   0.40943

          aa1t2     aa2t2     aa3t2     aa4t2
aa1t1   3.83654  -1.12736  -1.26127  -0.73767
aa2t1   2.17105   0.16444  -1.89428  -2.71243
aa3t1  -0.00219  -0.14489   4.89617  -0.10044
aa4t1  -0.89603  -1.16498   0.80859   0.40943
aa1t2   0.00000  -0.48697  -1.63345  -0.45168
aa2t2  -0.48697   0.00000   0.25810   1.42051
aa3t2  -1.63345   0.25810   0.00000   0.71416
aa4t2  -0.45168   1.42051   0.71416   0.00000

Average Standardized Residual                 0.910177
Average Off-diagonal Standardized Residual    1.170228

Rank Order of the 10 Largest Asymptotically Standardized Residuals

Var1    Var2    Residual
aa3t2   aa3t1    4.89617
aa1t2   aa1t1    3.83654
aa4t2   aa2t1   -2.71243
aa1t2   aa2t1    2.17105
aa4t1   aa2t1    2.06889
aa3t2   aa2t1   -1.89428
aa3t2   aa1t2   -1.63345
aa4t1   aa3t1   -1.57779
aa4t2   aa2t2    1.42051
aa3t2   aa1t1   -1.26127

The CALIS Procedure
Covariance Structure Analysis: Maximum Likelihood Estimation

PATH List

                                              Standard
--------Path---------   Parameter   Estimate     Error    t Value
aa1t1 <=== AAT1         L11          1.09933   0.07318   15.02212
aa2t1 <=== AAT1         L21          1.10150   0.06860   16.05781
aa3t1 <=== AAT1         L31          0.93530   0.07097   13.17952
aa4t1 <=== AAT1         L41          1.21395   0.07434   16.32954
aa1t2 <=== AAT2         L12          1.19634   0.07283   16.42693
aa2t2 <=== AAT2         L22          1.16366   0.06755   17.22650
aa3t2 <=== AAT2         L32          0.98549   0.06593   14.94759
aa4t2 <=== AAT2         L42          1.23314   0.07236   17.04090

Variance Parameters

Variance                                          Standard
Type       Variable   Parameter   Estimate           Error    t Value
Exogenous  AAT1                    1.00000
           AAT2                    1.00000
Error      aa1t1      th11         0.64876         0.07031    9.22763
           aa2t1      th22         0.49593         0.05763    8.60475
           aa3t1      th33         0.73596         0.07405    9.93828
           aa4t1      th44         0.55838         0.06648    8.39946
           aa1t2      th55         0.54634         0.06122    8.92468
           aa2t2      th66         0.40989         0.04921    8.32996
           aa3t2      th77         0.54527         0.05649    9.65259
           aa4t2      th88         0.48676         0.05737    8.48535

Covariances Among Exogenous Variables

                                              Standard
Var1    Var2    Parameter      Estimate          Error    t Value
AAT1    AAT2    CovAAT1AAT2     0.90204        0.01998   45.14230

Squared Multiple Correlations

            Error       Total
Variable    Variance    Variance    R-Square
aa1t1       0.64876     1.85729     0.6507
aa1t2       0.54634     1.97757     0.7237
aa2t1       0.49593     1.70924     0.7099
aa2t2       0.40989     1.76400     0.7676
aa3t1       0.73596     1.61075     0.5431
aa3t2       0.54527     1.51647     0.6404
aa4t1       0.55838     2.03206     0.7252
aa4t2       0.48676     2.00741     0.7575

Standardized Results for PATH List

                                              Standard
--------Path---------   Parameter   Estimate     Error    t Value
aa1t1 <=== AAT1         L11          0.80666   0.02577   31.30356
aa2t1 <=== AAT1         L21          0.84253   0.02237   37.66482
aa3t1 <=== AAT1         L31          0.73695   0.03224   22.85720
aa4t1 <=== AAT1         L41          0.85160   0.02152   39.57206
aa1t2 <=== AAT2         L12          0.85072   0.02063   41.22889
aa2t2 <=== AAT2         L22          0.87615   0.01814   48.30288
aa3t2 <=== AAT2         L32          0.80027   0.02560   31.25636
aa4t2 <=== AAT2         L42          0.87035   0.01870   46.53660

Standardized Results for Variance Parameters

Variance                                          Standard
Type       Variable   Parameter   Estimate           Error    t Value
Exogenous  AAT1                    1.00000
           AAT2                    1.00000
Error      aa1t1      th11         0.34931         0.04157    8.40221
           aa2t1      th22         0.29015         0.03769    7.69753
           aa3t1      th33         0.45691         0.04752    9.61502
           aa4t1      th44         0.27479         0.03665    7.49699
           aa1t2      th55         0.27627         0.03511    7.86913
           aa2t2      th66         0.23236         0.03178    7.31056
           aa3t2      th77         0.35957         0.04098    8.77439
           aa4t2      th88         0.24248         0.03256    7.44830

Standardized Results for Covariances Among Exogenous Variables

                                              Standard
Var1    Var2    Parameter      Estimate          Error    t Value
AAT1    AAT2    CovAAT1AAT2     0.90204        0.01998   45.14230

Rank Order of the 10 Largest LM Stat for Path Relations

                                               Parm
To       From       LM Stat    Pr > ChiSq    Change
aa3t1    aa3t2     26.54522        <.0001    0.37512
aa3t2    aa3t1     24.20291        <.0001    0.27320
aa1t2    aa1t1     18.29197        <.0001    0.26008
aa1t1    aa1t2     12.18041        0.0005    0.24573
aa1t2    aa2t1      8.00565        0.0047    0.19349
aa4t2    aa2t1      7.20363        0.0073   -0.17961
aa2t1    aa3t2      6.53191        0.0106   -0.16822
aa2t1    aa4t2      6.24929        0.0124   -0.17046
aa1t2    AAT1       5.94004        0.0148    0.47711
AAT1     aa1t2      5.93991        0.0148    0.16272

NOTE: No LM statistic in the default test set for the covariances of exogenous variables is nonsingular. Ranking is not displayed.

Rank Order of the 10 Largest LM Stat for Error Variances and Covariances

Error    Error                                 Parm
of       of         LM Stat    Pr > ChiSq    Change
aa3t2    aa3t1     27.88271        <.0001    0.24025
aa1t2    aa1t1     16.26376        <.0001    0.18627
aa4t2    aa2t1      6.31071        0.0120   -0.10198
aa3t2    aa2t1      6.09423        0.0136   -0.09889
aa2t1    aa1t2      5.44930        0.0196    0.09770
aa4t1    aa1t2      4.96998        0.0258   -0.10021
aa4t1    aa2t1      4.28039        0.0386    0.10811
aa3t2    aa1t1      4.05955        0.0439   -0.08927
aa4t2    aa4t1      2.77045        0.0960    0.07260
aa3t2    aa1t2      2.66822        0.1024   -0.07274

title 'Confirmatory Factor Model: Model with correlated errors';
title2 '(using the PATH specification in CALIS)';

PROC CALIS DATA=coupon MODIFICATION RESIDUAL;
  PATH
    aa1t1 <--- AAT1 = L11,
    aa2t1 <--- AAT1 = L21,
    aa3t1 <--- AAT1 = L31,
    aa4t1 <--- AAT1 = L41,
    aa1t2 <--- AAT2 = L12,
    aa2t2 <--- AAT2 = L22,
    aa3t2 <--- AAT2 = L32,
    aa4t2 <--- AAT2 = L42;
  PVAR
    AAT1 = 1.,
    AAT2 = 1.,
    aa1t1 = th11,
    aa2t1 = th22,
    aa3t1 = th33,
    aa4t1 = th44,
    aa1t2 = th55,
    aa2t2 = th66,
    aa3t2 = th77,
    aa4t2 = th88;
  PCOV
    aa1t1 aa1t2 = th51(0.),
    aa2t1 aa2t2 = th62(0.),
    aa3t1 aa3t2 = th73(0.),
    aa4t1 aa4t2 = th84(0.),
    AAT1 AAT2 = CovAAT1AAT2(0.);
run;
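The discriminant-validity test summarized on the last page (whether the correlation between AAT1 and AAT2 equals one) can be set up as a third run in which the factor covariance is fixed at 1 and the change in χ² (1 df) relative to the model above is evaluated. A minimal sketch, assuming that supplying the constant 1.0 in the PCOV statement fixes the covariance in the same way that 1. in PVAR fixes the factor variances above (this run is not shown in the output that follows):

title 'Confirmatory Factor Model: Factor correlation fixed at one (illustrative sketch)';
PROC CALIS DATA=coupon;
  PATH
    aa1t1 <--- AAT1 = L11,
    aa2t1 <--- AAT1 = L21,
    aa3t1 <--- AAT1 = L31,
    aa4t1 <--- AAT1 = L41,
    aa1t2 <--- AAT2 = L12,
    aa2t2 <--- AAT2 = L22,
    aa3t2 <--- AAT2 = L32,
    aa4t2 <--- AAT2 = L42;
  PVAR
    AAT1 = 1.,
    AAT2 = 1.,
    aa1t1 = th11,
    aa2t1 = th22,
    aa3t1 = th33,
    aa4t1 = th44,
    aa1t2 = th55,
    aa2t2 = th66,
    aa3t2 = th77,
    aa4t2 = th88;
  PCOV
    aa1t1 aa1t2 = th51(0.),
    aa2t1 aa2t2 = th62(0.),
    aa3t1 aa3t2 = th73(0.),
    aa4t1 aa4t2 = th84(0.),
    AAT1 AAT2 = 1.0;   /* factor variances are fixed at 1, so this fixes the factor correlation at 1 */
run;

Because the factor variances are fixed at 1 in PVAR, the covariance between AAT1 and AAT2 is their correlation, so fixing it at 1.0 imposes the constraint φ21 = 1; the χ² difference between this constrained run and the preceding model is the test reported in the measurement analysis at the end of this handout.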

Confirmatory Factor Model: Model with correlated errors
(using the PATH specification in CALIS)

Fit Summary

Modeling Info
  Number of Observations                 250
  Number of Variables                      8
  Number of Moments                       36
  Number of Parameters                    21
  Number of Active Constraints             0
  Baseline Model Function Value       6.2487
  Baseline Model Chi-Square        1555.9289
  Baseline Model Chi-Square DF            28
  Pr > Baseline Model Chi-Square      <.0001
Absolute Index
  Fit Function                        0.1075
  Chi-Square                         26.7607
  Chi-Square DF                           15
  Pr > Chi-Square                     0.0307
  Z-Test of Wilson & Hilferty         1.8703
  Hoelter Critical N                     233
  Root Mean Square Residual (RMR)     0.0309
  Standardized RMR (SRMR)             0.0172
  Goodness of Fit Index (GFI)         0.9743
Parsimony Index
  Adjusted GFI (AGFI)                 0.9383
  Parsimonious GFI                    0.5219
  RMSEA Estimate                      0.0561
  RMSEA Lower 90% Confidence Limit    0.0170
  RMSEA Upper 90% Confidence Limit    0.0900
  Probability of Close Fit            0.3485
  ECVI Estimate                       0.2825
  ECVI Lower 90% Confidence Limit     0.2395
  ECVI Upper 90% Confidence Limit     0.3581
  Akaike Information Criterion       68.7607
  Bozdogan CAIC                     163.7114
  Schwarz Bayesian Criterion        142.7114
  McDonald Centrality                 0.9768
Incremental Index
  Bentler Comparative Fit Index       0.9923
  Bentler-Bonett NFI                  0.9828
  Bentler-Bonett Non-normed Index     0.9856
  Bollen Normed Index Rho1            0.9679
  Bollen Non-normed Index Delta2      0.9924
  James et al. Parsimonious NFI       0.5265

Asymptotically Standardized Residual Matrix

          aa1t1     aa2t1     aa3t1     aa4t1
aa1t1   1.97572  -0.19095   1.24302   0.24033
aa2t1  -0.19095  -0.77162  -0.28972   1.57707
aa3t1   1.24302  -0.28972   0.24828  -1.32861
aa4t1   0.24033   1.57707  -1.32861  -1.31971
aa1t2   1.32756   2.77540   0.80368  -0.21847
aa2t2  -0.26690  -1.01115   0.47952  -0.88541
aa3t2  -0.27529  -1.27887   0.74035   1.41520
aa4t2   0.19387  -2.33774   0.58228  -1.05335

          aa1t2     aa2t2     aa3t2     aa4t2
aa1t1   1.32756  -0.26690  -0.27529   0.19387
aa2t1   2.77540  -1.01115  -1.27887  -2.33774
aa3t1   0.80368   0.47952   0.74035   0.58228
aa4t1  -0.21847  -0.88541   1.41520  -1.05335
aa1t2   0.20419  -0.44921  -0.88205  -0.17119
aa2t2  -0.44921  -0.68444   0.47606   0.89372
aa3t2  -0.88205   0.47606   0.85426   1.08711
aa4t2  -0.17119   0.89372   1.08711  -0.21039

Average Standardized Residual                 0.853963
Average Off-diagonal Standardized Residual    0.874074

Rank Order of the 10 Largest Asymptotically Standardized Residuals

Var1    Var2    Residual
aa1t2   aa2t1    2.77540
aa4t2   aa2t1   -2.33774
aa1t1   aa1t1    1.97572
aa4t1   aa2t1    1.57707
aa3t2   aa4t1    1.41520
aa4t1   aa3t1   -1.32861
aa1t2   aa1t1    1.32756
aa4t1   aa4t1   -1.31971
aa3t2   aa2t1   -1.27887
aa3t1   aa1t1    1.24302

The CALIS Procedure
Covariance Structure Analysis: Maximum Likelihood Estimation

PATH List

                                              Standard
--------Path---------   Parameter   Estimate     Error    t Value
aa1t1 <=== AAT1         L11          1.08399   0.07333   14.78294
aa2t1 <=== AAT1         L21          1.10790   0.06893   16.07203
aa3t1 <=== AAT1         L31          0.92348   0.07115   12.97842
aa4t1 <=== AAT1         L41          1.22064   0.07473   16.33317
aa1t2 <=== AAT2         L12          1.18841   0.07313   16.25016
aa2t2 <=== AAT2         L22          1.17032   0.06768   17.29264
aa3t2 <=== AAT2         L32          0.97598   0.06585   14.82135
aa4t2 <=== AAT2         L42          1.23600   0.07261   17.02182

Variance Parameters

Variance                                          Standard
Type       Variable   Parameter   Estimate           Error    t Value
Exogenous  AAT1                    1.00000
           AAT2                    1.00000
Error      aa1t1      th11         0.66486         0.07294    9.11526
           aa2t1      th22         0.48315         0.05994    8.06038
           aa3t1      th33         0.75515         0.07646    9.87676
           aa4t1      th44         0.54593         0.06941    7.86565
           aa1t2      th55         0.56342         0.06392    8.81385
           aa2t2      th66         0.39549         0.05090    7.77051
           aa3t2      th77         0.55508         0.05795    9.57931
           aa4t2      th88         0.48025         0.05967    8.04791

Covariances Among Exogenous Variables

                                              Standard
Var1    Var2    Parameter      Estimate          Error    t Value
AAT1    AAT2    CovAAT1AAT2     0.88535        0.02045   43.29748

Covariances Among Errors

Error    Error                               Standard
of       of       Parameter    Estimate        Error    t Value
aa1t1    aa1t2    th51          0.17316      0.04991    3.46932
aa2t1    aa2t2    th62          0.03086      0.03909    0.78947
aa3t1    aa3t2    th73          0.23185      0.04966    4.66854
aa4t1    aa4t2    th84          0.05040      0.04569    1.10301

Squared Multiple Correlations

            Error       Total
Variable    Variance    Variance    R-Square
aa1t1       0.66486     1.83988     0.6386
aa1t2       0.56342     1.97574     0.7148
aa2t1       0.48315     1.71058     0.7176
aa2t2       0.39549     1.76514     0.7759
aa3t1       0.75515     1.60796     0.5304
aa3t2       0.55508     1.50762     0.6318
aa4t1       0.54593     2.03589     0.7318
aa4t2       0.48025     2.00796     0.7608

Standardized Results for PATH List

                                              Standard
--------Path---------   Parameter   Estimate     Error    t Value
aa1t1 <=== AAT1         L11          0.79915   0.02693   29.67202
aa2t1 <=== AAT1         L21          0.84709   0.02290   36.98794
aa3t1 <=== AAT1         L31          0.72826   0.03327   21.89182
aa4t1 <=== AAT1         L41          0.85548   0.02209   38.73116
aa1t2 <=== AAT2         L12          0.84548   0.02151   39.31369
aa2t2 <=== AAT2         L22          0.88088   0.01842   47.81655
aa3t2 <=== AAT2         L32          0.79487   0.02633   30.18808
aa4t2 <=== AAT2         L42          0.87225   0.01920   45.42772

Standardized Results for Variance Parameters

Variance                                          Standard
Type       Variable   Parameter   Estimate           Error    t Value
Exogenous  AAT1                    1.00000
           AAT2                    1.00000
Error      aa1t1      th11         0.36136         0.04305    8.39462
           aa2t1      th22         0.28245         0.03880    7.27967
           aa3t1      th33         0.46963         0.04845    9.69248
           aa4t1      th44         0.26815         0.03779    7.09558
           aa1t2      th55         0.28517         0.03637    7.84181
           aa2t2      th66         0.22405         0.03246    6.90349
           aa3t2      th77         0.36819         0.04186    8.79594
           aa4t2      th88         0.23917         0.03350    7.14034

Standardized Results for Covariances Among Exogenous Variables

                                              Standard
Var1    Var2    Parameter      Estimate          Error    t Value
AAT1    AAT2    CovAAT1AAT2     0.88535        0.02045   43.29748

Standardized Results for Covariances Among Errors

Error    Error                               Standard
of       of       Parameter    Estimate        Error    t Value
aa1t1    aa1t2    th51          0.09082      0.02619    3.46705
aa2t1    aa2t2    th62          0.01776      0.02251    0.78919
aa3t1    aa3t2    th73          0.14891      0.03137    4.74694
aa4t1    aa4t2    th84          0.02493      0.02261    1.10225

Rank Order of the 10 Largest LM Stat for Path Relations

                                               Parm
To       From       LM Stat    Pr > ChiSq    Change
aa1t2    aa2t1     13.33343        0.0003    0.24954
aa4t2    aa2t1      6.76310        0.0093   -0.17972
aa2t1    aa1t2      6.06394        0.0138    0.16052
aa2t1    aa4t2      5.25571        0.0219   -0.16084
aa3t1    aa4t1      5.15771        0.0231   -0.21435
aa3t2    aa4t1      4.35874        0.0368    0.12062
aa1t2    AAT1       3.90384        0.0482    0.33821
aa1t2    aa1t1      3.90360        0.0482    0.31200
aa4t1    aa2t1      3.87241        0.0491    0.22811
AAT1     aa1t2      3.74085        0.0531    0.12768

NOTE: No LM statistic in the default test set for the covariances of exogenous variables is nonsingular. Ranking is not displayed.

Rank Order of the 10 Largest LM Stat for Error Variances and Covariances

Error    Error                                 Parm
of       of         LM Stat    Pr > ChiSq    Change
aa2t1    aa1t2     13.68072        0.0002    0.16071
aa4t2    aa2t1      7.74921        0.0054   -0.12541
aa4t1    aa3t2      6.72887        0.0095    0.10899
aa4t1    aa3t1      5.27312        0.0217   -0.11611
aa4t1    aa2t1      4.01318        0.0451    0.10955
aa3t2    aa2t1      2.98185        0.0842   -0.06778
aa2t1    aa1t1      2.49270        0.1144   -0.07812
aa4t1    aa1t2      2.20517        0.1375   -0.06910
aa3t1    aa1t1      1.63814        0.2006    0.06255
aa4t2    aa2t2      1.49541        0.2214    0.05762

MEASUREMENT ANALYSIS FOR CFA MODEL

                           Parameter   z-value of            Individual-item   Composite reliability
Construct    Parameter     estimate    parameter estimate    reliability       (average variance extracted)
AAT1                                                                           .88 (.66)
             λ11           1.08        14.78                 0.64
             λ21           1.11        16.07                 0.72
             λ31           0.92        12.98                 0.53
             λ41           1.22        16.33                 0.73
             θ11           0.66         9.12                 --
             θ22           0.48         8.06                 --
             θ33           0.76         9.88                 --
             θ44           0.55         7.87                 --
AAT2                                                                           .91 (.72)
             λ52           1.19        16.25                 0.71
             λ62           1.17        17.29                 0.78
             λ72           0.98        14.82                 0.63
             λ82           1.24        17.02                 0.76
             θ55           0.56         8.81                 --
             θ66           0.40         7.77                 --
             θ77           0.56         9.58                 --
             θ88           0.48         8.05                 --

Assessment of discriminant validity:

(1) test of whether φ21 = 1:
    chi-square difference test: χ²(1) = 79.85
    confidence interval: .89 ± .04 = [.85; .93]
    Lagrange multiplier test: χ²(1) = 85.51

(2) Fornell and Larcker criterion: not satisfied here