A simulation study to investigate the use of cutoff values for assessing model fit in covariance structure models
Journal of Business Research 58 (2005)

Subhash Sharma a,*, Soumen Mukherjee b, Ajith Kumar c, William R. Dillon d

a Moore School of Business, University of South Carolina, Columbia, SC 29208, USA
b MAPS Inc., Waltham, MA, USA
c Arizona State University, Tempe, AZ, USA
d Southern Methodist University, Dallas, TX, USA

Received 3 January 2002; accepted 14 October 2003

Abstract

In this paper, we used simulations to investigate the effect of sample size, number of indicators, factor loadings, and factor correlations on frequencies of the acceptance/rejection of models (true and misspecified) when selected goodness-of-fit indices were compared with prespecified cutoff values. We found that the percent of true models accepted when a goodness-of-fit index was compared with a prespecified cutoff value was affected by the interaction of the sample size and the total number of indicators. In addition, for the Tucker-Lewis index (TLI) and the relative noncentrality index (RNI), model acceptance percentages were affected by the interaction of sample size and size of factor loadings. For misspecified models, model acceptance percentages were affected by the interaction of the number of indicators and the degree of model misspecification. This suggests that researchers should use caution in using cutoff values for evaluating model fit. However, the study suggests that researchers who prefer to use prespecified cutoff values should use TLI, RNI, NNCP, and the root-mean-square error of approximation (RMSEA) to assess model fit. The use of GFI should be discouraged. © 2004 Elsevier Inc. All rights reserved.

Keywords: Structural equation modeling; Confirmatory factor analysis; Goodness-of-fit indices; Simulation
1. Introduction

The evaluation of covariance structure models is typically carried out in two stages: (1) an evaluation of overall model fit and (2) evaluations of specific parts/aspects of the model, such as the measurement properties of indicators and/or the strength of structural relationships. The chi-square test statistic was among the first set of indices proposed to evaluate overall model fit to the data in a statistical sense. As is the case with most statistical tests, the power of the chi-square test increases with sample size. Since in covariance structure analysis the nonrejection of the model subsumed under the null hypothesis is typically the desired outcome, the rejection of the model through the chi-square test in large samples, even for trivial differences between the sample and the estimated covariance matrices, soon came to be perceived as problematic (Bentler and Bonett, 1980; Tucker and Lewis, 1973). In response to this sample-size problem of the chi-square test statistic, several alternative goodness-of-fit indices were proposed for evaluating overall model fit. In turn, a number of simulation studies evaluated the sensitivity of these indices to sample-size variations (e.g., Anderson and Gerbing, 1984; Bearden et al., 1982; Bentler, 1990; Marsh et al., 1988). In their comprehensive, integrative review of various goodness-of-fit indices, McDonald and Marsh (1990) concluded that only four indices were relatively insensitive to sample size: the noncentrality parameter (NCP) of McDonald (1989) and a normed version thereof (NNCP), the relative noncentrality index (RNI), and the Tucker-Lewis index (TLI). An index is defined to be insensitive to sample size if the expected value of its sampling distribution is not affected by sample size. However, researchers typically evaluate model fit by comparing the value of some goodness-of-fit index with some prespecified cutoff value.

* Corresponding author. E-mail address: sharma@moore.sc.edu (S. Sharma).
Based on the results of a recent simulation study, Hu and Bentler (1998, 1999) suggest that a cutoff value close to 0.95 for TLI or RNI, a cutoff value close to 0.90 for NNCP, or a
cutoff value of 0.06 for the root-mean-square error of approximation (RMSEA; Steiger and Lind, 1980; Steiger, 1990) is needed before one could claim good fit of the model to the data. However, they caution that one cannot employ a specific cutoff value because the indices may be affected by such factors as sample size, estimation method, and the distribution of the data. Furthermore, finding that the expected value of an index is independent of sample size does not logically imply that the percentage of index values exceeding the cutoff value is also independent of sample size. Therefore, it is quite possible that even if the expected value of an index is unaffected by sample size, the relative frequencies of model acceptance and rejection when a prespecified cutoff value is used could still depend on sample size. Should this occur, the use of a universal cutoff value may be inappropriate, as replication studies of a given model using different sample sizes could lead to different conclusions regarding the acceptance/rejection of models. In addition, for a given sample size, the relative frequencies of model acceptance and rejection may vary with the number of indicators in the model, which is typically a function of the number of constructs or factors in the model. However, for a given number of constructs, the number of indicators could vary due to the use of shorter or longer versions of previously developed scales. The objective of this paper, therefore, is to use simulation to empirically assess the effects of factors, such as sample size and number of indicators, on goodness-of-fit indices and, more importantly, on the use of prespecified cutoff values for assessing model fit. These effects will be assessed both for true and for misspecified models. The paper is organized as follows: First, we briefly discuss the goodness-of-fit indices evaluated in this study and their suggested cutoff values.
Second, we present the simulation design employed. Third, we present the results of our simulations. Finally, we discuss the implications of our results for using prespecified cutoff values for acceptance/rejection decisions.

2. Goodness-of-fit indices and their cutoff values

2.1. Goodness-of-fit indices

While several goodness-of-fit indices have been proposed in the literature, this study will assess the following five: the NNCP, the RNI, the TLI, the RMSEA, and the goodness-of-fit index (GFI) of Jöreskog and Sörbom (1982). We now discuss our rationale for including these five indices. First, in an integrative review of several fit indices, McDonald and Marsh (1990) concluded that among the fit indices typically used by researchers, only NCP, NNCP, RNI, and TLI were insensitive to sample size. We excluded the NCP from our analysis because we did not find it being used frequently in substantive research for evaluating model fit, presumably because advocates of this index did not specify cutoff values for its use. Second, Marsh et al. (1988) did not include RMSEA in their simulation study, and neither did McDonald and Marsh (1990) in their integrative review. More recently, however, Browne and Cudeck (1993) suggested using this index to assess model fit. This index was included by Hu and Bentler (1998) in their simulation study and was found to be quite sensitive to model misspecification. Finally, the goodness-of-fit index, although found to be sensitive to sample size in a number of simulation studies, is still used extensively by researchers to assess model fit.

2.2. Cutoff values for assessing model fit

As mentioned earlier, researchers typically compare the computed value of some fit index to a prespecified cutoff value for evaluating model fit.
For normed fit indices (i.e., goodness-of-fit index, NNCP, RNI, and TLI), whose values typically range between 0 and 1, with 1 indicating perfect fit, the cutoff value of 0.90 recommended by Bentler and Bonett (1980) is the most popular and is widely employed by researchers to evaluate model fit. The model is considered to have an unacceptable fit if the value of the fit index is less than 0.90. We used a cutoff value of 0.90 for the NNCP even though McDonald and Marsh (1990) did not prescribe any cutoffs for this index. For the RMSEA, whose value does not range between 0 and 1, Browne and Cudeck (1993) suggested that values of 0.05 or less indicate a close fit, values of 0.08 or less indicate a reasonable fit, and values greater than 0.10 indicate unacceptable fit.

3. Simulation study

Simulation studies were done to assess the effects of sample size, number of indicators, size of factor loadings, and size of factor correlations on the mean value of the selected fit indices and on the percent of models accepted using prespecified cutoff values. Two specifications of correlated two-factor, four-factor, six-factor, and eight-factor confirmatory factor models, with four indicators per factor, were used. The two-factor model has a total of eight indicators and one correlation between the two factors. The four-factor model has a total of 16 indicators and six correlations among the four factors. The six-factor model has a total of 24 indicators and 15 correlations among the six factors. The eight-factor model has a total of 32 indicators and 28 correlations among the eight factors. In the first specification, the correct or true model was estimated: the specification of the model estimated in the sample was identical to the population model. That is, the model should have a perfect fit to the data, and any lack of fit is attributable to sampling error.
In the second specification, the model was not correctly specified, in that the model estimated in the sample was not the same as the population model. Specifically, the correlations among the factors were not estimated. Misspecified models
were included in the study to assess the extent to which the use of cutoff values might result in Type II errors (i.e., the decision to accept the model specified under the null hypothesis as true when an alternative model is the correct one).

4. Simulation methodology

Four factors were systematically varied to create the simulation experimental design: (1) four sample sizes were used (100, 200, 400, and 800); (2) the number of indicators was varied from 8 to 32 in steps of 8 (i.e., 8, 16, 24, and 32); (3) three sizes of factor loadings (.3, .5, and .7) were used; and (4) three correlations among the factors were employed (.3, .5, and .7). Following prior simulation studies, a confirmatory factor analysis (CFA) model was chosen.

4.1. Data generation

The simulation design resulted in a total of 36 different population covariance matrices. A total of 100,000 observations were generated from each of the 36 population covariance matrices using the GGNSM procedure (IMSL Library, 1980). From each of the 36 sets of 100,000 observations representing a given population covariance matrix, 100 replications of each sample size were randomly drawn. That is, 400 samples were drawn from each of the 36 sets of observations, giving a total of 14,400 samples (3 levels of factor loadings × 3 levels of factor correlations × 4 levels of number of indicators × 4 levels of sample size × 100 replications). A sample covariance matrix was computed from each of the 14,400 samples.

4.2. Model estimated: true models

For each sample, the corresponding true model was estimated. All the parameters, including the correlations among the factors, were estimated.
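The data-generation step above can be sketched in a few lines. The following is a minimal illustration, not the study's original code (which used the IMSL GGNSM routine): it builds the population covariance matrix implied by a CFA model with equal loadings and equal factor correlations, Σ = ΛΦΛ′ + Θ, and draws multivariate-normal samples from it with NumPy. The function name `population_cov` and the seed are illustrative.

```python
import numpy as np

def population_cov(n_factors, loading, corr, n_ind_per_factor=4):
    """Implied covariance Sigma = Lambda Phi Lambda' + Theta for a CFA model
    with equal loadings, equal factor correlations, and unit indicator variances."""
    p = n_factors * n_ind_per_factor
    Lambda = np.zeros((p, n_factors))
    for f in range(n_factors):
        Lambda[f * n_ind_per_factor:(f + 1) * n_ind_per_factor, f] = loading
    Phi = np.full((n_factors, n_factors), corr)
    np.fill_diagonal(Phi, 1.0)
    Theta = np.eye(p) * (1.0 - loading ** 2)   # uniquenesses chosen so variances equal 1
    return Lambda @ Phi @ Lambda.T + Theta

rng = np.random.default_rng(2005)
Sigma = population_cov(n_factors=2, loading=0.5, corr=0.3)   # one of the 36 design cells
sample = rng.multivariate_normal(np.zeros(8), Sigma, size=200)
S = np.cov(sample, rowvar=False)   # sample covariance matrix for one replication
```

Repeating the draw 100 times per sample size, for each population matrix, reproduces the structure of the 14,400-sample design.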
For a given index, the percent of true models rejected when compared with a prespecified cutoff value gives a measure of the Type I error committed by using the respective index for model acceptance/rejection decisions.

4.3. Model estimated: misspecified models

As indicated earlier, another objective of our study was to investigate model acceptance/rejection frequencies when cutoff values are used to evaluate the fit of misspecified models. In general, misspecification could occur in countless ways. However, since our main concern was to assess how the fit indices behaved for misspecified models, and to keep the simulation study at a manageable level, we chose a subset that would span a wide range of misspecifications with respect to lack of overall fit. The subset of models chosen were those that resulted from systematically not estimating the correlations among the factors. Specifically, misspecified models were operationalized by positing orthogonal models for each of the following combinations: (1) λ = .3, φ = .3; (2) λ = .5, φ = .5; and (3) λ = .7, φ = .7, where λ and φ denote the factor loadings and factor correlations, respectively. These combinations represent varying degrees of model misspecification, with the first combination resulting in the smallest amount of misspecification and the third combination the largest. For each estimated model (true and misspecified), the five goodness-of-fit indices discussed earlier were computed. In addition, for each goodness-of-fit index, the percent of times the fitted models were accepted was computed for each cell of the simulation design on the basis of a prespecified cutoff value (values exceeding 0.90 for NNCP, TLI, RNI, and goodness-of-fit index and values below 0.05 for RMSEA).
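For concreteness, the indices above can be computed from the model and null-model chi-square statistics. The sketch below uses the standard formulas for TLI, RNI, and RMSEA; for the NNCP we substitute McDonald's (1989) exponential transform of the rescaled noncentrality parameter, which is one common reading of the normed NCP — treat that equivalence, and the illustrative input values, as assumptions rather than the study's exact definitions.

```python
import math

def fit_indices(chisq, df, chisq_null, df_null, n):
    """Fit indices from model and null-model chi-square statistics (standard formulas)."""
    r_null = chisq_null / df_null
    r_model = chisq / df
    tli = (r_null - r_model) / (r_null - 1.0)                 # Tucker-Lewis index
    rni = 1.0 - (chisq - df) / (chisq_null - df_null)         # relative noncentrality index
    ncp = max(chisq - df, 0.0) / (n - 1)                      # rescaled noncentrality parameter
    nncp = math.exp(-0.5 * ncp)                               # McDonald's normed transform (assumed NNCP)
    rmsea = math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))  # root-mean-square error of approximation
    return {"TLI": tli, "RNI": rni, "NNCP": nncp, "RMSEA": rmsea}

# Hypothetical chi-square values for a model with 19 df fitted to n = 200 cases:
ix = fit_indices(chisq=45.0, df=19, chisq_null=600.0, df_null=28, n=200)
# The acceptance rule used in the study (here applied to RNI and RMSEA):
accept = ix["RNI"] >= 0.90 and ix["RMSEA"] <= 0.05
```

Computing each index per replication and comparing it with its cutoff, cell by cell, yields the acceptance percentages analyzed in the results.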
The percent of misspecified models accepted when compared with a prespecified cutoff value gives a measure of the Type II error committed by using the respective index for model acceptance/rejection decisions.

5. Results

In Monte Carlo simulations of covariance structure models, some of the samples analyzed inevitably yield improper solutions, wherein one or more of the parameter estimates are inadmissible (e.g., zero or negative error variances, or standardized factor loadings or interfactor correlations exceeding one). While such improper solutions would be discarded in substantive research contexts, where typically a single sample covariance matrix is analyzed, it is important to include them in the analysis of the Monte Carlo results because the sampling distribution being evaluated within each treatment of the simulation design includes all the sample covariance matrices that are generated. There were a total of 0.08% improper solutions for true models and 5.69% improper solutions for misspecified models. Consistent with the results of previous simulations, a majority of the improper solutions occurred for small sample sizes (N = 100 and 200). There were no improper solutions for samples of size 800. To assess the effect of the manipulated factors, the data were analyzed using ANOVA and computing the effect size η². The η² associated with each estimated effect represents the percent of variance in the dependent variable that is accounted for by that effect after accounting for the impact of all other effects. Because of the large sample sizes, many effects that are practically insignificant (as measured by η²) will be statistically significant. Consequently, we present the results and discussion only for those factors that are statistically significant and whose η² is greater than 3% (Anderson and Gerbing, 1984; Sharma et al., 1989); these effects will be referred to as significant effects.
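As a small illustration of the η² effect-size computation, the sketch below computes η² for a single factor as the between-group sum of squares divided by the total sum of squares. (The study's η² is additionally adjusted for all other effects in the full four-factor ANOVA, which this one-factor toy does not do; the data values are invented.)

```python
import numpy as np

def eta_squared(values, labels):
    """One-factor eta-squared: between-group SS divided by total SS."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    grand = values.mean()
    ss_total = ((values - grand) ** 2).sum()
    ss_between = 0.0
    for g in np.unique(labels):
        grp = values[labels == g]
        ss_between += grp.size * (grp.mean() - grand) ** 2
    return ss_between / ss_total

# Toy data: one fit-index value per replication, grouped by sample size
vals = [0.90, 0.92, 0.95, 0.97]
sizes = [100, 100, 800, 800]
eta2 = eta_squared(vals, sizes)   # share of variance attributable to sample size
```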
Table 1. Eta-squares (η²) for the mean value of the fit indices and for the percent of times models were accepted, for true models. Columns: NNCP, RMSEA, RNI, TLI, GFI; rows: sample size (N), number of indicators (NI), factor loadings (L), factor correlations (P), the N × NI interaction, and the N × L interaction. For each index, one η² is reported for the mean value of the index and one for the percent of times true models were accepted at a cutoff value of 0.90 (0.05 for RMSEA). [The numeric entries were lost in transcription.]

5.1. True models

5.1.1. Goodness-of-fit indices

As indicated earlier, we performed a Factor Correlations × Factor Loadings × Sample Size × Number of Indicators ANOVA, with each fit index as the dependent variable. Table 1 presents the significant results. The following conclusions can be drawn from the table: (1) the Sample Size × Number of Indicators (N × NI) interaction is the only significant interaction, and it is significant only for NNCP and goodness-of-fit index; (2) the size of the factor loadings and the size of the correlations among the factors do not affect any of the goodness-of-fit indices; (3) sample size affects NNCP, RMSEA, and goodness-of-fit index; and (4) the number of indicators affects only NNCP and goodness-of-fit index. To gain further insight into these effects, we examine the means and standard deviations of the goodness-of-fit indices for various combinations of sample size and number of indicators (the effects corresponding to the N × NI interaction). Table 2 presents the means and standard deviations. It can be seen that RMSEA is not substantially affected by sample size, and irrespective of the number of indicators, the effect appears to be the same for sample sizes of 200 and over. For NNCP and goodness-of-fit index, the effect of sample size becomes more prominent as the number of indicators increases. The mean values for the NNCP reveal the nature of the interaction and also the reason why McDonald and Marsh (1990) and Marsh et al.
(1988) found this index to be insensitive to sample size. If the analysis is restricted to results for models with 8 or 16 indicators, then the NNCP would be insensitive to sample size in our study as well. The inconsistency arises as a consequence of including models with a larger number of indicators (i.e., 24 and 32 indicators) in our simulation. While it appears from the mean values that RNI and TLI are affected by sample size for a large number of indicators, this effect is not significant, and this conclusion is consistent with previous studies. However, the nonsignificance is probably due to the fact that the standard deviations of these two indices are relatively large compared with the other three. McDonald and Marsh (1990) noted that RNI and TLI are normed in the population (that is, they take values between 0 and 1) but not in the sample, especially for small sample sizes.

Table 2. Means and standard deviations of the goodness-of-fit indices for true models, by number of indicators and sample size (N × NI), together with the values of TLI and RNI for models whose factor loadings are .50 or .70. For each index, the top row gives the means and the bottom row the standard deviations. [The numeric entries were lost in transcription.]

Bentler (1990) noted that the range for
TLI is large, especially for small samples. In fact, for a sample size of 100, the ranges of both TLI and RNI were very large, with values falling well outside the 0-1 interval. These outliers obviously affect the significance tests. An examination of the outliers suggests that most of them occur for cases with small factor loadings (i.e., .3) and small sample sizes (i.e., 100). We can only speculate as to why only these two indices (out of the five) exhibit such large fluctuations. A reasonable conjecture is that these two indices, in contrast to the other three, are essentially ratios of two statistics derived from the null and true models. Therefore, these indices are affected by the badness of the null model as well as the goodness of fit of the hypothesized model. This conjecture is further supported by the fact that these two indices are undefined in the population if the null model is true, suggesting that they would be extremely unstable in samples if the null model is approximately true (McDonald and Marsh, 1990). This problem is obviously exacerbated in small samples. To determine whether the behavior of TLI and RNI changes when factor loadings are .5 or greater, we reanalyzed the data after deleting the models whose factor loadings are .30. The results indicated that sample size, the number of indicators, and their interaction were all significant for both RNI and TLI. Table 2 also gives the means and standard deviations for models whose factor loadings are .50 or .70.
The behavior of RNI and TLI is similar to that of GFI and NNCP; however, these two indices do not seem to be substantially affected by sample size and number of indicators. The results for the mean values of the indices in Table 2 can be summarized as follows: The RMSEA is the least affected index and is insensitive to sample size for sample sizes over 200. Goodness-of-fit index and NNCP are insensitive to sample size above some threshold (sample size) value; however, this threshold likely varies monotonically with the number of manifest indicators in the model and, furthermore, may not be the same for all the indices. That is, for a given index, the sample size at which the index becomes insensitive (to sample size) could be a function of the number of indicators. The behavior of TLI and RNI is erratic for models with small factor loadings (i.e., .30). When these models are deleted, the behavior of TLI and RNI is similar to that of goodness-of-fit index and NNCP, in that TLI and RNI are affected by sample size, and the effect depends on the number of indicators. The question then becomes: Are the effects of sample size, number of indicators, factor loadings, and factor correlations the same when one uses these indices to make model acceptance/rejection decisions by comparing an index value to a prespecified cutoff value? That is, what is the impact of the manipulated factors on the Type I error, the error of rejecting the model when it is indeed true?

5.1.2. Percent of models accepted

For each of the 144 cells or conditions defined by sample size (four levels), number of indicators (four levels), factor loadings (three levels), and factor correlations (three levels), the percent of models accepted was computed for each index. The model acceptance/rejection decision was made by comparing the value of the index to a prespecified cutoff value (0.90 for NNCP, RNI, TLI, and GFI, and 0.05 for RMSEA).
The percent of models accepted was the dependent variable in the ANOVA. Since there is a single observation per cell, the fourth-order interaction was used as the error term for significance tests. Table 1 also gives the η² of the effects. The following conclusions can be drawn from the table: (1) The effect of the interaction of sample size with the number of indicators (N × NI) is even more pronounced for the percent of times the true model is accepted than for the mean value of the fit index; this interaction is significant for all the indices. Note that in the case of the mean value of the fit index, this interaction was not significant for RMSEA, RNI, and TLI. (2) The Sample Size × Size of Loadings (N × L) interaction is significant for RNI and TLI. This interaction was not present for the mean values of the indices. (3) The main effect of sample size is significant for all the indices. (4) The main effect of the number of indicators is significant for NNCP, RMSEA, and goodness-of-fit index. (5) The main effect of factor loadings is significant for RNI and TLI. To gain further insight into these effects, we present in Table 3 the percent of times that true models are accepted for the above significant effects. It is clear from Table 3 that the behavior of goodness-of-fit index is the most aberrant, with substantial sample-size effects when the number of indicators is large; this points to the need to reconsider its continued use in model evaluation. TLI and RNI are affected by sample size, and the effects depend on the number of indicators. The behavior of these two indices is extremely good for models with factor loadings of .5 or above and sample sizes of 200 or above; for these models, the effect of sample size and number of indicators is practically nonexistent. For the NNCP, on the other hand, sample-size effects are dependent on the number of indicators.
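The sample-size dependence of acceptance percentages, even for an index whose expected value is nearly sample-size free, can be demonstrated with a quick Monte Carlo sketch. For a correctly specified model, the ML test statistic is approximately central chi-square with the model's degrees of freedom, so RMSEA-based acceptance rates can be simulated directly; the df and cutoff below are illustrative, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(42)

def pct_accepted_rmsea(df, n, cutoff=0.05, reps=100_000):
    """Percent of true-model replications with RMSEA at or below the cutoff,
    using central chi-square draws for the test statistic."""
    t = rng.chisquare(df, size=reps)
    rmsea = np.sqrt(np.maximum(t - df, 0.0) / (df * (n - 1)))
    return 100.0 * np.mean(rmsea <= cutoff)

# Same hypothetical model (df = 100), two sample sizes:
pct_small = pct_accepted_rmsea(df=100, n=100)
pct_large = pct_accepted_rmsea(df=100, n=800)
# Acceptance is more frequent at the larger sample size even though the model is true in both cases.
```

This is the phenomenon the paper emphasizes: a fixed cutoff converts an (approximately) sample-size-free statistic into an acceptance rate that is not sample-size free.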
The effect of sample size and number of indicators appears to be the least for RMSEA. The findings so far suggest that the percent of models accepted (when an index is compared with a cutoff value) is affected by the interaction of sample size with the number of indicators. In addition, the RNI and TLI are affected by the two-way interaction of sample size and size of factor loadings; however, these effects are very small for models whose factor loadings are .5 or above. When used for evaluating model fit relative to some cutoff value, RMSEA emerges as the most promising candidate, and the RNI and
TLI for models with factor loadings of .5 or above and sample sizes of 200 or above.

Table 3. Percent of times true models are accepted for a cutoff value of 0.90 (0.05 for RMSEA), broken down by (a) the number of indicators and sample size interaction and (b) the sample size and size of factor loadings (λ) interaction, for each of NNCP, RMSEA, RNI, TLI, and GFI. [The numeric entries were lost in transcription.]

A related question, though, is whether similar patterns recur when evaluating the fit of misspecified models. That is, to what extent can these indices detect model misspecification? The next section addresses this point.

5.2. Misspecified models

As indicated earlier, the degree of misspecification was operationalized by not estimating the factor correlations, and numerous combinations of degrees of misspecification were examined. Table 4 gives the estimated value of the fitting or discrepancy function, F_k(θ̂_k), for each of the misspecified models when fitted to its corresponding population covariance matrix. Because the estimated value of the fitting function is devoid of sampling error, it equals zero for a correctly specified model and is greater than zero for misspecified models. Consequently, the value of the fitting function measures the degree of misspecification.

Table 4. Discrepancy function F_k(θ̂_k) values for the misspecified models, by factor loadings (λ), factor correlations (φ), and number of indicators. Numbers in parentheses, from (1) to (12), rank the degree of misspecification from very low (1) to very high (12). [The numeric entries were lost in transcription.]

As can be seen from Table 4, the degree of misspecification is confounded with the number of indicators, in that the degree of misspecification increases with an increase in the number of indicators.
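The discrepancy values of the kind tabled above come from the ML fitting function, F = ln|Σ(θ)| + tr(SΣ(θ)⁻¹) − ln|S| − p. The sketch below illustrates the idea for a two-factor model: it evaluates F for the orthogonal (misspecified) covariance structure against the correlated-factor population covariance. Note that it evaluates F at the population loadings with the correlations fixed at zero, rather than at the minimizing estimates, so it is an upper bound on the minimized values the study tabulates; the helper names are illustrative.

```python
import numpy as np

def cfa_cov(loading, corr, n_factors=2, per_factor=4):
    """Implied covariance Lambda Phi Lambda' + Theta (unit indicator variances)."""
    p = n_factors * per_factor
    L = np.zeros((p, n_factors))
    for f in range(n_factors):
        L[f * per_factor:(f + 1) * per_factor, f] = loading
    Phi = np.full((n_factors, n_factors), corr)
    np.fill_diagonal(Phi, 1.0)
    return L @ Phi @ L.T + np.eye(p) * (1.0 - loading ** 2)

def ml_discrepancy(S, Sigma):
    """ML fitting function F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

# Degree of misspecification when factor correlations are ignored:
F_low = ml_discrepancy(cfa_cov(0.3, 0.3), cfa_cov(0.3, 0.0))   # lambda = phi = .3
F_high = ml_discrepancy(cfa_cov(0.7, 0.7), cfa_cov(0.7, 0.0))  # lambda = phi = .7
F_zero = ml_discrepancy(cfa_cov(0.7, 0.7), cfa_cov(0.7, 0.7))  # correct specification
```

Consistent with the paper's ordering, the discrepancy is zero for the correct specification and grows as the ignored factor correlations account for more of the indicator covariances.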
Essentially, there are 12 levels of misspecification, as indicated by the numbers in parentheses. These 12 levels, ranging from very low (1) to very high (12), are used to present and discuss the results. For each of the 48 cells defined by degree of misspecification (12 levels) and sample size (4 levels), the percent of models accepted using a cutoff value of 0.90 (0.05 for RMSEA) was computed for the misspecified model. Since there are only two factors (sample size and degree of misspecification) and only 48 cells, we simply present the percent of models accepted for each cell. Table 5 gives these percentages. As can be seen from the table, all the fit indices fail to reject a substantial number of models when the degree of misspecification is less than five (less-than-moderate to very low levels of misspecification), which essentially corresponds to factor loadings of .3 and factor correlations of .3. For these cells, the performance of NNCP, RMSEA, and goodness-of-fit index is quite erratic, and in some cases these fit indices do not reject any models. That is, these indices are not sensitive enough to detect less-than-moderate to low levels of misspecification. What is also interesting is that these indices tend to accept more models as the sample size increases, which is not surprising. Note that the previous results for the mean values of the indices suggested that all the indices were sensitive to sample size: the mean values of an index for smaller samples were less than the mean values for larger samples. That is, for a given prespecified cutoff value of the index, the number of models accepted
should increase as the sample size increases.

Table 5. Percent of misspecified models accepted, by sample size and degree of misspecification, for each of NNCP, RMSEA, RNI, TLI, and GFI. The degree of misspecification ranges from very low (1) to very high (12). [The numeric entries were lost in transcription.]

The performance of TLI and RNI is better than that of NNCP, RMSEA, and goodness-of-fit index. For all the other degrees of misspecification (i.e., degrees of misspecification greater than five, which correspond to moderate to very high levels of misspecification), the performance of RNI and TLI is excellent, in that the percent of models accepted in all cells except one is less than 5%. The performance of NNCP is also good, but not as good as that of TLI and RNI. The performance of RMSEA is not as good as that of NNCP, TLI, and RNI. The performance of goodness-of-fit index is the worst, once again suggesting that its use to evaluate model fit should be reevaluated.

6. Discussion and conclusions

The findings of this study have important implications for users of covariance structure models. First, all the goodness-of-fit indices included in the study are affected to varying degrees by variations in sample size and the number of indicators. In addition, the magnitudes of the covariances (a function of the size of the factor loadings) affected TLI and RNI. Relative to the other indices, TLI and RNI perform the best, followed by NNCP and RMSEA. The goodness-of-fit index shows the most adverse effects, which led Hu and Bentler (1998, 1999) to recommend against its use. The results of our study and the ensuing recommendations are summarized in Table 6 and briefly discussed below. First, the performance of goodness-of-fit index is the worst, both with respect to how it is affected by sample size and number of indicators and with respect to detecting model misspecification. It is suggested that this index should not be used to evaluate model fit.
Compared with the other indices, RNI and TLI perform the best as long as the size of the factor loadings is .5 or greater and the sample size is not less than 200. Overall, for those preferring to use prespecified cutoff values, it is recommended that RNI and TLI be used to evaluate model fit. The performance of NNCP and RMSEA is not as good as that of TLI and RNI. Since RMSEA is not affected by the size of the factor loadings and since NNCP performs reasonably well, we recommend their use in conjunction with TLI and RNI. However, an alternative course of action is to introduce some flexibility into the model evaluation procedure by allowing cutoff values to vary somewhat with the modeling context. For example, in small samples, a more reasonable cutoff value for RMSEA would be 0.07 to 0.08 (cf. Browne and Cudeck, 1993); for smaller sample sizes and larger models, a cutoff value of less than 0.90 for TLI, RNI, and NNCP should be used. Second, replication studies using different sample sizes may lead to different conclusions if model fit is evaluated by comparing the fit index to a prespecified cutoff value, at least below some threshold sample size. In addition, studies assessing the same model, but with different numbers of indicators, might reach different conclusions. Third, the results suggest that as the number of indicators increases, a larger sample size is needed before the index becomes insensitive (to sample size), implying that researchers need larger samples as the number of indicators in the model increases. Alternatively, for data sets with a large number of indicators (i.e., more than 24) and smaller sample sizes (around 200), it becomes necessary to use more liberal cutoff values for normed indices (e.g.,
0.80) to ensure that frequencies of model acceptance/rejection remain approximately similar. For example, 86.2% of true models were accepted when TLI was used for assessing model fit with a cutoff value of 0.90, a sample size of 200, and eight indicators. To achieve the same 86.2% acceptance rate for a sample size of 200 and 32 indicators would require a lower cutoff value. Once again, this makes a strong case for adjusting the fit indices for the effects of sample size and number of indicators before comparing them with arbitrary cutoff values. We feel that this issue presents further research opportunities for investigating the nature of the adjustments needed for various fit indices to account for the effects of model parameters.

Table 6. Summary and recommendations

Goodness-of-fit index (GFI):
1. The mean value of the index is substantially affected by sample size; that is, its mean value decreases as sample size decreases. However, the effect of sample size is contingent on the number of indicators.
2. The percent of times the true model is rejected (Type I error) increases substantially as the sample size increases, but decreases as the number of indicators increases.
3. The index is not very sensitive in detecting misspecified models.
4. Recommendation: this index should not be used.

RNI and TLI:
1. The indices are sensitive to sample size; that is, their mean values decrease as sample size decreases. However, the effect of sample size depends on the number of indicators (i.e., model size).
2. The sample size by number of indicators interaction is not significant due to large variations in the indices for small sample sizes. This is to be expected, as these indices are not normed (i.e., do not necessarily lie between 0 and 1) in the sample, resulting in outliers (values above 1 and below 0). The outliers occurred mostly for small sample sizes (i.e., 100) and small factor loadings (i.e., .30). The effect became significant when the outliers were deleted, but is not as severe as that for GFI and NNCP.
3. The percent of times true models (whose factor loadings are .50 or greater and sample sizes are 200 or greater) are rejected is low and appears to be independent of sample size and number of indicators.
4. Compared with RMSEA, GFI, and NNCP, RNI and TLI are more sensitive to the degree of misspecification. The percent of times misspecified models are accepted is less than 6% for models whose factor loadings are .50 or greater.
5. Recommendation: the performance of RNI and TLI is the best among the set of indices examined, and they are the recommended indices for evaluating model fit when the factor loadings are reasonably large (.5 or above).

NNCP:
1. The mean value of the index is affected by sample size; that is, its mean value decreases as sample size decreases. However, the effect of sample size is contingent on the number of indicators.
2. The percent of times the true model is rejected (Type I error) is affected by sample size, and this effect increases as the number of indicators increases.
3. Compared with RMSEA and GFI, NNCP is quite sensitive to the degree of misspecification. The percent of times misspecified models are accepted is less than 6% for models whose factor loadings are .50 or greater.
4. Recommendation: use of this index is recommended in conjunction with RNI and TLI.

RMSEA:
1. The index is affected by sample size; that is, its mean value increases for smaller sample sizes. The effect of sample size is independent of the number of indicators.
2. The percent of times the true model is accepted is high for sample sizes of 200 or above.
3. In detecting misspecification, RMSEA is more sensitive than GFI and less sensitive than NNCP, RNI, and TLI. The percent of times misspecified models are accepted is quite low for higher degrees of misspecification. In this respect, the index performs better than GFI but not as well as NNCP, RNI, and TLI.
4. Recommendation: the performance of RMSEA is reasonable, better than GFI but not as good as TLI and RNI. However, since this index is not affected by the size of the factor loadings, it is recommended that RMSEA be used in conjunction with NNCP, TLI, and RNI.

Finally, we would like to acknowledge some of the limitations of this study. The total number of indicators in the model was manipulated by keeping the number of indicators per factor constant at four and increasing the number of factors in the model. Whether the results of this study also hold when the total number of indicators is manipulated by keeping the number of factors constant and varying the number of indicators per factor cannot be inferred from this study. However, we do not expect the findings to be different, as the underlying issue is the number of indicators, not the number of factors or the number of indicators per factor. The study is also limited by the heuristic used to obtain misspecified models. Among the countless misspecified models that could be generated, we systematically selected only 12, ranging from the least misspecified to the most misspecified.

References

Anderson JC, Gerbing DW. The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis. Psychometrika 1984;49(June).
Bearden WO, Sharma S, Teel JR. Sample size effects on chi-square and other statistics used in evaluating causal models. J Market Res 1982;19(November).
Bentler PM. Comparative fit indexes in structural models. Psychol Bull 1990;107(March).
Bentler PM, Bonett DG. Significance tests and goodness of fit in the analysis of covariance structures.
Psychol Bull 1980;88(November).
Browne MW, Cudeck R. Alternate ways of assessing model fit. In: Bollen
KA, Long JS, editors. Testing structural equation models. Newbury Park (CA): Sage Publications; 1993.
Hu L, Bentler PM. Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods 1998;3(December).
Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling 1999;6:1-55.
IMSL Library. IMSL Edition 8.0. Houston (TX): Visual Numerics.
Joreskog KG, Sorbom D. Recent developments in structural equation modeling. J Market Res 1982;19(November).
Marsh HW, Balla JR, McDonald RP. Goodness-of-fit indices in confirmatory factor analysis: the effect of sample size. Psychol Bull 1988;103(May).
McDonald RP. An index of goodness-of-fit based on noncentrality. J Classif 1989;6(1)(March).
McDonald RP, Marsh HW. Choosing a multivariate model: noncentrality and goodness of fit. Psychol Bull 1990;107(March).
Sharma S, Durvasula S, Dillon WR. Some results on the behavior of alternate covariance structure estimation procedures in the presence of nonnormal data. J Market Res 1989;26(May).
Steiger JH. Structural model evaluation and modification: an interval estimation approach. Multivariate Behav Res 1990;25.
Steiger JH, Lind JC. Statistically based tests for the number of common factors. Paper presented at the Annual Meeting of the Psychometric Society, Iowa City, IA.
Tucker LR, Lewis C. A reliability coefficient for maximum likelihood factor analysis. Psychometrika 1973;38(March):1-10.
More informationCausal Inference Using Nonnormality Yutaka Kano and Shohei Shimizu 1
Causal Inference Using Nonnormality Yutaka Kano and Shohei Shimizu 1 Path analysis, often applied to observational data to study causal structures, describes causal relationship between observed variables.
More informationLecture Slides. Elementary Statistics. by Mario F. Triola. and the Triola Statistics Series
Lecture Slides Elementary Statistics Tenth Edition and the Triola Statistics Series by Mario F. Triola Slide 1 Chapter 13 Nonparametric Statistics 13-1 Overview 13-2 Sign Test 13-3 Wilcoxon Signed-Ranks
More informationIMP 2 September &October: Solve It
IMP 2 September &October: Solve It IMP 2 November & December: Is There Really a Difference? Interpreting data: Constructing and drawing inferences from charts, tables, and graphs, including frequency bar
More informationFACTOR ANALYSIS AS MATRIX DECOMPOSITION 1. INTRODUCTION
FACTOR ANALYSIS AS MATRIX DECOMPOSITION JAN DE LEEUW ABSTRACT. Meet the abstract. This is the abstract. 1. INTRODUCTION Suppose we have n measurements on each of taking m variables. Collect these measurements
More information11-2 Multinomial Experiment
Chapter 11 Multinomial Experiments and Contingency Tables 1 Chapter 11 Multinomial Experiments and Contingency Tables 11-11 Overview 11-2 Multinomial Experiments: Goodness-of-fitfit 11-3 Contingency Tables:
More informationSTRUCTURAL EQUATION MODELING. Khaled Bedair Statistics Department Virginia Tech LISA, Summer 2013
STRUCTURAL EQUATION MODELING Khaled Bedair Statistics Department Virginia Tech LISA, Summer 2013 Introduction: Path analysis Path Analysis is used to estimate a system of equations in which all of the
More informationRobustness of factor analysis in analysis of data with discrete variables
Aalto University School of Science Degree programme in Engineering Physics and Mathematics Robustness of factor analysis in analysis of data with discrete variables Student Project 26.3.2012 Juha Törmänen
More informationLecture Slides. Section 13-1 Overview. Elementary Statistics Tenth Edition. Chapter 13 Nonparametric Statistics. by Mario F.
Lecture Slides Elementary Statistics Tenth Edition and the Triola Statistics Series by Mario F. Triola Slide 1 Chapter 13 Nonparametric Statistics 13-1 Overview 13-2 Sign Test 13-3 Wilcoxon Signed-Ranks
More informationRigorous Evaluation R.I.T. Analysis and Reporting. Structure is from A Practical Guide to Usability Testing by J. Dumas, J. Redish
Rigorous Evaluation Analysis and Reporting Structure is from A Practical Guide to Usability Testing by J. Dumas, J. Redish S. Ludi/R. Kuehl p. 1 Summarize and Analyze Test Data Qualitative data - comments,
More informationDistribution Fitting (Censored Data)
Distribution Fitting (Censored Data) Summary... 1 Data Input... 2 Analysis Summary... 3 Analysis Options... 4 Goodness-of-Fit Tests... 6 Frequency Histogram... 8 Comparison of Alternative Distributions...
More informationAn Equivalency Test for Model Fit. Craig S. Wells. University of Massachusetts Amherst. James. A. Wollack. Ronald C. Serlin
Equivalency Test for Model Fit 1 Running head: EQUIVALENCY TEST FOR MODEL FIT An Equivalency Test for Model Fit Craig S. Wells University of Massachusetts Amherst James. A. Wollack Ronald C. Serlin University
More information