Viewing Spearman's hypothesis from the perspective of multigroup PCA: A comment on Schönemann's criticism


Intelligence 29 (2001) 231–245

Viewing Spearman's hypothesis from the perspective of multigroup PCA: A comment on Schönemann's criticism

Conor V. Dolan a,*, Gitta H. Lubke b,1

a Department of Psychology, University of Amsterdam, Roetersstraat 15, 1018 WB Amsterdam, Netherlands
b Faculty of Psychology and Pedagogy, Vrije Universiteit, Van der Boechorstlaan 1, 1981 BT Amsterdam, Netherlands

Received 25 October 1999; received in revised form 15 March 2000; accepted 15 May 2000

Abstract

Jensen's test of Spearman's hypothesis is meant to demonstrate the importance of general intelligence in Black–White (B–W) differences in psychometric intelligence test scores. Schönemann purports to demonstrate, through an analysis of real and simulated data and the presentation of a theorem, that Spearman correlations are artifacts. We discuss the theorem and conclude that it cannot be advanced in support of the contention that Spearman correlations are positive and substantial by mathematical necessity. The theorem encompasses a multigroup principal component analysis (PCA), which is a viable model to investigate Jensen's proposition that Blacks and Whites differ mainly with respect to g. We view Schönemann's simulation study in the light of the multigroup PCA model, and interpret it as a study of the specificity of Spearman correlations given model violations. As such, we find it to be open to several criticisms. Schönemann does raise an important issue, namely whether Spearman correlations can be trusted to prove the importance of g in B–W differences in psychometric IQ scores. Based on recent studies, we consider the Spearman correlation to be a suboptimal test of Spearman's hypothesis, and contend that an explicit model-based approach should be used. © 2001 Elsevier Science Inc. All rights reserved.

Keywords: Spearman's hypothesis; General IQ; Black–White differences; Jensen's test; Specificity

* Corresponding author. E-mail address: conor@psy.uva.nl (C.V. Dolan).
1 E-mail address: gh.lubke@psy.vu.nl (G.H. Lubke).

1. Introduction

Jensen has proposed that the differences between Blacks (B) and Whites (W) in psychometric intelligence test scores are attributable mainly to a difference in general intelligence, or g (Jensen, 1985, 1998). To test this proposition, Jensen formulated Spearman's hypothesis. There are two versions of this hypothesis, a weak version and a strong version. The strong version states that "variation in the size of the mean W–B difference across various tests is solely a positive function of variation in the tests' g loadings" (Jensen, 1998, p. 372). The weak version states that the variation is mainly a positive function of variation in the tests' g loadings. The rationale is that a test that is strongly influenced by g (i.e., characterized by a large factor loading on g) will display a large B–W difference in means. Jensen has investigated Spearman's hypothesis by calculating the correlation between the vector of differences in means and the g loadings. These correlations are referred to as Spearman correlations.

This use of Spearman correlations has been criticized extensively by Schönemann (1989, 1992, 1997a, 1997b). In a recent special issue of Cahiers de Psychologie Cognitive (CPC), Schönemann (1997a, 1997b) purports to demonstrate that Spearman correlations are positive and substantial by mathematical necessity. To this end, he presented various results, including a theorem and a simulation study. These are based on principal component analysis (PCA). Although the special CPC issue included a large number of invited commentaries, none of the contributors actually identified the problematic aspects of Schönemann's criticism. In view of the importance of the issue of B–W differences in IQ test scores in psychology (and beyond), we believe that criticisms of Spearman correlations should be considered carefully. In the present paper, therefore, we discuss Schönemann's theorem.
We show that the theorem cannot be advanced in support of the thesis that Spearman correlations are necessarily positive and substantial. The theorem comprises a scenario in which the Spearman correlation will equal unity, but whether this scenario holds in reality is an empirical issue. In addition, we discuss the simulation study in the light of the theorem. We interpret this study as an investigation of the specificity of the Spearman correlation given certain model violations. As such, this simulation is open to several criticisms. Remaining within the PCA model implied by the theorem, we suggest and illustrate a different way of investigating the specificity of Spearman correlations given model violations. Our own simulations show that Spearman correlations are not positive by mathematical necessity. We note that Schönemann does raise an important issue, viz., whether the Spearman correlation is a good method to demonstrate the centrality of g in B–W differences. Based on our research, we contend that an explicit modeling approach is preferable to Spearman correlations (Dolan, 2000; Lubke, Dolan, & Kelderman, 2000).

In the interest of clarity and to ease presentation, we introduce the central terms that feature in this paper; Schönemann's theorem and Schönemann's simulation are pinpointed below. Jensen's theory pertains to the centrality of the hypothetical construct, g, in B–W differences in psychometric intelligence scores. Jensen proposes that g is the only (strong version), or the main (weak version), source of B–W differences, i.e., B–W differences in psychometric intelligence scores are due (mainly) to differences in g (Jensen, 1998). The consequences of the centrality of g may be considered in different ways. Here we are concerned solely with the implications for mean and covariance structures. Spearman's hypothesis concerns a single aspect of the theory, namely the correlation between

differences in means and factor loadings. Such correlations are referred to as Spearman correlations. Jensen's test of Spearman's hypothesis is based on such correlations. The specificity of Spearman correlations concerns the probability that a Spearman correlation assumes a low value when aspects of Spearman's hypothesis are violated. Jensen's test has low specificity if Spearman correlations assume large values even though aspects of Spearman's hypothesis are false (e.g., g is absent).

Jensen's procedure of calculating such correlations involves the following steps: (1) exploratory factor analyses of psychometric data are carried out in representative samples of Blacks and Whites, separately, to extract factor loadings of the tests on g (Jensen suggests that this can be done in a number of ways; see Jensen & Weng, 1994); (2) factorial invariance over groups is established by calculating measures of factorial congruence of the factor loadings in the White and the Black samples (congruence should be high); (3) variation in factor loadings is required to be appreciable; (4) differences in means are standardized by dividing by the pooled standard deviations; (5) the standardized differences in means are correlated with the g factor loadings. A mean (Spearman's rho) correlation of .59 (S.D. = 0.12), obtained using this procedure, has been reported based on 11 studies (Jensen, 1985).

2. Schönemann's theorem

Schönemann (1997a) offers two interpretations of Spearman's hypothesis, the Level I and Level II interpretations. The Level I interpretation concerns PCA of a pooled (over the two groups) covariance matrix without prior within-group centering (subtraction of means). In this case, the presence of between-group differences will distort the results of the within-group PCA.
Specifically, the dominant eigenvalue will reflect the between-group variance and the associated eigenvector will approximately equal the vector of differences in the groups' means. The Level I interpretation is not relevant given Jensen's procedure of testing Spearman's hypothesis. In recent discussions, Jensen (1992, 1998) does not advocate pooling without prior within-group centering. Rather, one is advised to carry out separate factor analyses or PCAs in the two groups. We therefore shall not dwell on the Level I interpretation. Schönemann's Level II interpretation is more relevant. This interpretation runs as follows: "... the mean difference vector correlates positively with the regression weights of the PC1s of both the within sample covariance matrices" (Schönemann, 1997a, p. 668). Schönemann's theorem is closely related to the Level II interpretation, in that it provides a scenario in which the Level II interpretation holds:

If the range R^p of a p-variate normal random vector y ~ N_p(0, Σ), where Σ is positive with diag(Σ) = I, is partitioned into a High set (H) and a Low set (L) by the plane orthogonal to the PC1, and containing the origin, and both within-covariance matrices Σ_H, Σ_L remain positive, then (a) the mean difference vector d := E[y|H] − E[y|L] will be collinear with the PC1 of the pooled Σ, and (b) d will also be collinear with both PC1s of Σ_L and Σ_H (Schönemann, 1997a, p. 685; this is a slightly revised version of the theorem presented in Schönemann, 1992 and Schönemann, 1989; see Dolan, 1997).
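The selection scenario described by the theorem is easy to reproduce numerically. The following sketch is our own illustration, not taken from the paper; the choice of Σ (Schönemann's construction, see below), the dimension, and the sample size are arbitrary. It splits a multinormal sample at the plane orthogonal to PC1 and checks that the mean difference vector is collinear with PC1 of the pooled matrix and with PC1 of a within-group covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10

# An arbitrary positive correlation matrix with unit diagonal.
T = rng.uniform(0, 1, size=(p, p))
S = T @ T.T + np.eye(p)
d_inv = np.diag(1 / np.sqrt(np.diag(S)))
S = d_inv @ S @ d_inv

# Sample y ~ N_p(0, S) and split on the sign of the PC1 score.
y = rng.multivariate_normal(np.zeros(p), S, size=200_000)
pc1 = np.linalg.eigh(S)[1][:, -1]      # eigenvector of the dominant eigenvalue
H = y[y @ pc1 > 0]                     # High set
L = y[y @ pc1 < 0]                     # Low set

# Mean difference vector d = E[y|H] - E[y|L].
d = H.mean(axis=0) - L.mean(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

pc1_H = np.linalg.eigh(np.cov(H.T))[1][:, -1]    # PC1 of the within-H covariance
print(abs(cosine(d, pc1)))                       # close to 1
print(abs(cosine(d, pc1_H)))                     # close to 1
```

The absolute value is taken only because the sign of an eigenvector is arbitrary; up to sampling error, the cosines are unity, as the theorem states.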

3. The meaning of the theorem

We attempt to convey the theorem in less technical terms, as we want to make it clear that the consequences (a) and (b) follow trivially from the selection described in the theorem. Whether this type of selection is consistent with observed B–W differences, and so whether (a) and (b) hold, is an empirical issue. In discussing the meaning of the theorem we employ elementary matrix algebra. Many statistics books provide accessible introductions to such material (e.g., see Lawley & Maxwell, 1971, Appendix I; Morrison, 1990, pp. 36–44; Stevens, 1992, Chapter 2).

Imagine a population in which realizations of a p-dimensional random vector y, distributed N_p(0, Σ), are observed in a random sample (we do not employ subject subscripts). The notation N_p(0, Σ) implies that y is multivariate normally distributed with zero means and covariance matrix Σ. The random vector y may represent the scores of a given subject on an intelligence test (see Jensen & Reynolds, 1982, for a good example). The theorem specifies that Σ contains positive elements, but otherwise the covariance structure of Σ is not an issue. For instance, the covariance matrix Σ is not required to be consistent with a common factor model. We consider the eigenvalue decomposition of the covariance matrix Σ. This decomposition is usually carried out as a form of factor analysis, namely PCA. The aim is to reduce the p observed variables to a smaller number of uncorrelated linear combinations of the p variables. The eigenvalue decomposition of Σ is expressed as Σ = P D P^t (e.g., Morrison, 1990, Chapter 8), where P (p × p) contains the orthonormal (column) eigenvectors (P = [γ_1, γ_2, ..., γ_p]; P P^t equals the identity matrix, I), D (p × p; diagonal) contains the positive eigenvalues, and superscript t denotes transposition. We assume that the eigenvalues are distinct and ordered in descending order: diag(D) = [d_1 > d_2 > ... > d_{p−1} > d_p].
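In numerical terms, the decomposition can be illustrated as follows (a toy example with hypothetical numbers; P and D as defined above):

```python
import numpy as np

# Toy covariance matrix Sigma (symmetric, positive definite).
S = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])

evals, evecs = np.linalg.eigh(S)        # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]         # reorder: d1 > d2 > d3, as in the text
D = np.diag(evals[order])               # D: diagonal matrix of eigenvalues
P = evecs[:, order]                     # P: orthonormal eigenvectors as columns

assert np.allclose(P @ P.T, np.eye(3))  # P P^t = I
assert np.allclose(P @ D @ P.T, S)      # Sigma = P D P^t
```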
We denote the (p × 1) eigenvector associated with d_i as γ_i, and call the principal component scores, calculated as γ_i^t y, the γ_i scores. In the population, the γ_i scores are distributed N(0, d_i), i.e., as zero-mean normal variables. The eigenvalue d_i can be viewed as the variance of the γ_i scores, and the matrix D may be viewed as a covariance matrix. Note that this matrix is diagonal because the γ_i scores are mutually uncorrelated.

According to the theorem, the population is divided into a high (H) and a low (L) subpopulation. The H (L) subpopulation consists of individuals whose γ_1 scores are greater (less) than zero. The collinearity, which features in the theorem, follows naturally from the manner in which the subpopulations are formed by selection on γ_1 scores. In the H subpopulation we have (approximately) y|H ~ N_p(μ_H, Σ_H), where Σ_H = P D_H P^t and μ_H = P κ_H, and y|H denotes the vector y in the H subpopulation (similar results hold in the L subpopulation). The vector κ_H (κ_L) contains the means of the γ_i scores in the H (L) subpopulation. The fact that Σ_H = P D_H P^t and μ_H = P κ_H is due to the nature of the selection variable (see Dolan, 1997). The consequence of selection on γ_1 scores is that the eigenvectors in the population equal those in the two subpopulations. Furthermore, because selection is on the γ_1 scores, and these are by definition uncorrelated with the other γ_i scores (i = 2, ..., p), we find that diag(D_H) = [d_1H, d_2, d_3, ..., d_p] and diag(D_L) = [d_1L, d_2, d_3, ..., d_p]. Thus the variances of the γ_i scores (i = 2, ..., p) are unchanged, but the variances of the γ_1 scores do change due to the selection. The eigenvalues d_1H and d_1L

are not necessarily the largest eigenvalues in the subpopulations. The means of y|H and y|L are a function of the means of the γ_1 scores. The (p × 1) vectors κ_H and κ_L equal [κ_H1, 0, 0, ..., 0]^t and [κ_L1, 0, 0, ..., 0]^t, respectively, where κ_H1 (κ_L1) is the mean of the γ_1 scores in the H (L) subpopulation. Note that the subpopulation means of the other components are zero, because selection has taken place on the γ_1 scores, which are uncorrelated with the other γ_i scores (i = 2, ..., p). Consequently, given the present selection, we may be more explicit and write μ_H = P κ_H = γ_1 κ_H1 and μ_L = P κ_L = γ_1 κ_L1. For a detailed derivation of the effects on means and covariance matrices in groups selected on γ_1 scores, the reader is referred to Dolan (1997).

We view the theorem as a possible scenario in which the Level II interpretation will hold. Schönemann expresses this as follows: "...under the stated assumptions, the cosines between the mean difference vector and the largest eigenvectors and the two within-sample covariance matrices are not just positive in general, but, except for sampling error, are unity in all multinormal distributions with positive within-covariance matrices" (Schönemann, 1997a, p. 685).1 As it stands, however, the theorem is trivial. Given the selection variable (γ_1 scores), the vector d (= E[y|H] − E[y|L]) is necessarily collinear with γ_1. In addition, the theorem is too restrictive: given selection on the γ_1 scores, the collinearity will hold in the absence of positiveness of Σ, Σ_H, or Σ_L. Furthermore, the standardization of Σ is not required. The consequences (a) and (b) of the theorem follow necessarily from the premises contained in the theorem. With respect to Schönemann's contention that Spearman correlations are artifacts, it is important to note the following: it is an empirical issue whether the premises of the theorem are, by reasonable approximation, tenable in reality.
Hence, the theorem cannot be advanced in support of the thesis that Spearman correlations are positive and substantial by mathematical necessity. As mentioned, Schönemann presents a simulation study in addition to his theorem. To properly judge this simulation study, we require the mean and covariance structure implied by the theorem.

4. The model implied by the theorem

As we have seen above, the covariance structure model, after the selection on the γ_1 scores, is as follows:

Σ_H = P D_H P^t,   (1a)
Σ_L = P D_L P^t,   (1b)

where, as defined above, the (p × p) matrix P contains the eigenvectors (equal across the groups), and the diagonal matrices D_H and D_L contain the eigenvalues.

1 This "and" presumably should read "of": the eigenvectors of the two within-sample covariance matrices.

Eqs. (1a) and (1b) represent a multigroup PCA (see Flury, 1988). The model for the means is:

μ_H = ν + P κ_H,   (2a)
μ_L = ν + P κ_L,   (2b)

where the vector ν contains constant intercepts and the vectors κ_H and κ_L contain the means of the γ_i scores (see Dolan, 1997). It is interesting to note that Flury, Pienaar, and Nel (1995) previously presented just this model to investigate group differences. Note that the model implied by the theorem (Eqs. (1a)–(2b)) comes very close to the idea behind Jensen's theory: i.e., if the groups differ with respect to g (strong version of Spearman's hypothesis), and if γ_1 scores are a good approximation of scores on g, the collinearity should indeed hold. The (p × 1) vector κ_H (κ_L) equals [κ_H1, 0, 0, ..., 0]^t ([κ_L1, 0, 0, ..., 0]^t), where κ_H1 (κ_L1) is the mean of the γ_1 scores in the H (L) subpopulation. Collinearity between the eigenvector and the difference in mean vectors follows, as μ_H − μ_L equals γ_1(κ_H1 − κ_L1) (see Millsap, 1997, for a similar discussion of Spearman's hypothesis in the context of the factor model).

As mentioned, the degree to which Spearman's hypothesis is represented by the theorem hinges on the proposition that γ_1 scores are a good operationalization of g. Within the confines of PCA, this is quite difficult to establish by formal goodness-of-fit testing. Usually the eigenvalue > 1 rule (given that Σ is a correlation matrix) and the screeplot are used to establish that a given Σ is compatible with the common factor model (e.g., Stevens, 1992). We return to this issue below. With the model implied by the theorem in place, we now evaluate Schönemann's simulation study. First, we describe this study briefly.

5. Schönemann's simulation

In addition to the theorem, Schönemann presents an extensive simulation study to investigate the effects of sample size on his Level II interpretation.
Schönemann demonstrates that Spearman correlations assume substantial values (about .6) under circumstances in which the covariance matrix Σ is positive, but is not consistent with a single common factor model. In the simulation, the correlation matrix Σ is calculated by standardizing the matrix TT^t + I, where I is the (p × p) identity matrix and the matrix T contains elements drawn randomly from the [0,1] uniform distribution. The correlation matrix Σ constructed in this fashion does not fit a single factor model (as we demonstrate below). The mean vector is set to equal zero, μ = 0. Next, Schönemann simulated data from N(μ, Σ). Using these data, he carried out selection to form a high group and a low group. In this simulation, the selection is carried out on the basis of the selection variable w^t y, where w is the p-dimensional unit vector, w = 1. Members of the high (low) group have scores w^t y > c (w^t y < c). Note that this selection variable differs from the selection variable in the theorem: in the theorem, selection is on the γ_1 scores. For various choices of c and p, Schönemann carries out Jensen's test of Spearman's hypothesis. For instance, for c = 0 and p = 10, he finds a correlation of about .6, i.e., about the value observed in empirical studies (Jensen, 1985). Schönemann purports to show that the Spearman correlation may assume substantial values,

even though Σ is not consistent with the single factor model, i.e., the simplest case of the factor-analytic g model. This suggests that Spearman correlations cannot be trusted to demonstrate the importance of g in B–W differences. We judge Schönemann's simulation to be open to several criticisms, which render it unconvincing.

6. Evaluation of the simulation

Working within the context of PCA, one could investigate the soundness of Jensen's procedure by violating the premises of the theorem and gauging the effects on the Spearman correlation. In effect, Schönemann introduces two violations: (1) the correlation matrix Σ is created such that it does not fit the single common factor model, and (2) in creating the selection variable, w equals 1, the unit p column vector, rather than γ_1. To ease presentation, we consider the normalized vector 1/√p, which we denote 1_n, instead of 1. As we interpret Schönemann's simulation study, it essentially addresses the specificity of Spearman correlations given these violations. The persuasiveness of Schönemann's simulation depends on the degree to which the violations represent a departure from the theorem. If the violations are severe and the Spearman correlations remain substantial, this would demonstrate the lack of specificity of Spearman correlations.

To investigate the violations mentioned, we constructed five correlation matrices, Σ, in the manner outlined by Schönemann. We created a matrix T consisting of elements drawn randomly from the [0,1] uniform distribution. The matrix Σ is calculated by standardizing the covariance matrix TT^t + I, where I is the identity matrix. The mean vector is set to equal the zero vector, μ = 0. Next we calculated the expected covariance matrices, Σ_L and Σ_H, and mean vectors, μ_L and μ_H, in samples selected on the criterion 1_n^t y > 0 or 1_n^t y < 0.
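For reference, Schönemann's simulated selection, as described in Section 5, can be reconstructed in a few lines. This is our own sketch: the sample size, the number of replications, and the averaging of the two within-group PC1 loading vectors are our choices, not Schönemann's exact design:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def pc1_loadings(C):
    """Eigenvector of the dominant eigenvalue, sign-fixed to be mostly positive."""
    v = np.linalg.eigh(C)[1][:, -1]
    return v if v.sum() > 0 else -v

def one_replication(p=10, n=5000):
    # Schönemann's recipe: standardize T T^t + I, with T uniform on [0, 1].
    T = rng.uniform(0, 1, size=(p, p))
    S = T @ T.T + np.eye(p)
    d = np.diag(1 / np.sqrt(np.diag(S)))
    S = d @ S @ d

    y = rng.multivariate_normal(np.zeros(p), S, size=n)
    v = y.sum(axis=1)                 # selection variable w^t y with w = 1
    H, L = y[v > 0], y[v < 0]         # cut point c = 0

    # Jensen's procedure: correlate standardized mean gaps with PC1 loadings.
    g = (pc1_loadings(np.cov(H.T)) + pc1_loadings(np.cov(L.T))) / 2
    pooled_sd = np.sqrt((H.var(axis=0, ddof=1) + L.var(axis=0, ddof=1)) / 2)
    diff = (H.mean(axis=0) - L.mean(axis=0)) / pooled_sd
    return spearmanr(diff, g)[0]

rhos = [one_replication() for _ in range(50)]
print(np.mean(rhos))   # substantial and positive on average
```

The analytic treatment that follows obtains the corresponding covariance matrices and mean vectors without any simulation.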
We need not simulate data, as both covariance matrices and mean vectors can be derived analytically:2

Σ_L = Σ − Σw σ^−2 (σ^2 − σ_L^2) σ^−2 w^t Σ,   (3a)
Σ_H = Σ − Σw σ^−2 (σ^2 − σ_H^2) σ^−2 w^t Σ,   (3b)
μ_L = μ + Σw σ^−2 m_L,   (4a)
μ_H = μ + Σw σ^−2 m_H.   (4b)

2 Dolan (1997) shows that Eqs. (3a)–(4b) reduce to Eqs. (1a)–(2b) if γ_1 is substituted for w and P D P^t is substituted for Σ. If 1_n is substituted for w, however, this is not the case. As mentioned, the model in Eqs. (1a)–(2b) is a possible model for the strong version of Spearman's hypothesis. Assuming γ_1 scores are a good proxy for g scores, selection on γ_1 scores creates group differences that are consistent with Jensen's theory. Other selection variables, such as 1_n^t y, do not give rise to data that are strictly consistent with the model in Eqs. (1a)–(2b), and so consistent with Jensen's theory. The question remains, however, how well Eqs. (1a)–(2b) can approximate Eqs. (3a)–(4b). This issue of statistical power is addressed in the text.

where w = 1_n, μ = 0, and σ^2 represents the variance of 1_n^t y in the population, i.e., 1_n^t Σ 1_n. As μ = 0, the mean of 1_n^t y, i.e., 1_n^t μ, is zero. Expressions for the variances (σ_L^2 and σ_H^2) and means (m_L and m_H) of 1_n^t y in the subpopulations can be found by considering the moments of the truncated normal distribution (Kendall & Stuart, 1968). Greene (1993, p. 685) provides the relevant expressions. Note that, given the present selection (symmetric about the mean), σ_L^2 = σ_H^2 and m_H = −m_L. Eqs. (3a)–(4b) follow from the application of the Pearson–Lawley selection rules (Lawley, 1943). These rules are often used to derive the means and covariance matrix of one set of variables given conditioning (or selection) on a second set of variables (Greene, 1993, p. 76; Morrison, 1990, p. 90 ff.). Here we investigate the means and covariance matrix of y given selection on the variable 1_n^t y. See Meredith (1964) and Muthén (1989) for important applications of these rules within the common factor model.

We investigated the effects of the violations in the following way: (1) We calculated the scalar 1_n^t γ_1. The more this scalar deviates from 1, the greater the violation. Note that γ_1^t γ_1 = 1. (2) We fitted the model (Eqs. (1a), (1b) and (2a), (2b)) to the summary statistics in the L and the H subpopulations (Eqs. (3a), (3b) and (4a), (4b)) using LISREL (Jöreskog & Sörbom, 1993; for details see Dolan, Bechger, & Molenaar, 1999).3 In order to fit the model, however, we parameterized the model for the means as follows:

μ_H = ν + P κ_0 = ν + γ_1(κ_H1 − κ_L1),   (5a)
μ_L = ν,   (5b)

where κ_0 equals [κ_H1, 0, 0, ..., 0]^t − [κ_L1, 0, 0, ..., 0]^t, or κ_H − κ_L. In this parameterization, the difference in the means of the γ_1 scores is estimated rather than the means themselves. The parameterization is required for reasons of identification, but is otherwise innocuous.
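The selection formulas can be checked numerically against brute-force selection. The sketch below is our own (the covariance matrix and sample size are arbitrary); it computes Σ_L and μ_L from Eqs. (3a) and (4a), using the standard half-normal truncation moments, and compares them with direct selection on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 5

# Any positive-definite covariance matrix and selection weights (here w = 1_n).
A = rng.uniform(0, 1, size=(p, p))
S = A @ A.T + np.eye(p)
w = np.ones(p) / np.sqrt(p)

s2 = w @ S @ w                            # variance of w^t y in the population
# Moments of w^t y in the L group (N(0, s2) truncated from above at 0):
m_L = -np.sqrt(s2) * np.sqrt(2 / np.pi)   # E[v | v < 0]
s2_L = s2 * (1 - 2 / np.pi)               # Var[v | v < 0]

# Pearson-Lawley selection formulas, Eqs. (3a) and (4a), with mu = 0.
Sw = S @ w
mu_L = Sw / s2 * m_L
S_L = S - np.outer(Sw, Sw) * (s2 - s2_L) / s2**2

# Brute-force check: select directly on simulated data.
y = rng.multivariate_normal(np.zeros(p), S, size=400_000)
sel = y[y @ w < 0]
print(np.abs(mu_L - sel.mean(axis=0)).max())   # small (sampling error only)
print(np.abs(S_L - np.cov(sel.T)).max())       # small
```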
Sörbom (1974) discusses this parameterization in the context of the common factor model. If selection based on 1_n^t y, rather than on γ_1^t y, represents a serious violation, this model should not fit the population matrices. In carrying out these analyses, we set N(L) = N(H) = 200, total N = 400. We estimated parameters by minimizing the loglikelihood ratio function. These analyses provide a chi-square goodness-of-fit index that we use to assess the degree of misfit.4

For the five covariance matrices, the scalar 1_n^t γ_1 was in each case very close to unity (0.997, 0.997, 0.998, 0.998, ...). This suggests that selection on 1_n^t y is not very different from selection on γ_1^t y. Fitting the model (df = 64) to the population matrices produced extremely small goodness-of-fit indices, which may be interpreted as noncentrality parameters (NCPs; among them 0.339, 0.182, and 0.108; fitting the true model would result in NCPs of zero). This again suggests that there is very little difference between selection on 1_n^t y and γ_1^t y.

3 A MATLAB m-script to calculate the summary statistics is available on request, as are LISREL input files for all the analyses reported.

4 The goodness-of-fit index is a function of the loglikelihood ratio (Saris & Satorra, 1993). Fitting the true model to population statistics yields an index of zero. Fitting the true model to sample statistics yields an index that is centrally chi-square distributed (given that the sample is large and the data multinormally distributed). The expected value of the chi-square equals the degrees of freedom (df) of the model. Fitting the false model to population statistics yields the noncentrality parameter (NCP) of the noncentral chi-square distribution, and fitting the false model to sample statistics yields a test statistic that has a noncentral chi-square distribution. This distribution is characterized by df and the NCP. The expected value of the index equals df + NCP.
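The sample-size reasoning behind NCP-based power statements can be sketched as follows. This is our own illustration: the target power of .80 is an assumed value (the exact figure used in the paper is not stated in our copy), and we assume the NCP scales linearly with total sample size:

```python
from scipy.stats import chi2, ncx2

def required_n(ncp_at_n, n, df, alpha=0.01, power=0.80):
    """Smallest total sample size (on a coarse grid) at which a misfit with
    noncentrality ncp_at_n, observed at sample size n, would be rejected
    with the requested power, assuming the NCP scales linearly with N."""
    crit = chi2.ppf(1 - alpha, df)          # rejection threshold
    N = n
    while ncx2.sf(crit, df, ncp_at_n * N / n) < power:
        N = int(N * 1.1) + 1                # grow N until power is reached
    return N

# NCP of 0.339 observed at N = 400 with df = 64 (values from the text):
print(required_n(0.339, 400, 64))   # tens of thousands of subjects
```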

Power calculations indicate that, given the NCP of 0.339, one would require over 50,000 subjects to reject the model at α = .01 with adequate power. The fact that the Spearman correlations in Schönemann's simulation assume large positive values is not surprising given these findings.

Schönemann introduces a second violation in that Σ does not fit a single common factor model. Indeed, using LISREL to fit a single common factor model to our five correlation matrices Σ (again using maximum likelihood estimation) produced the following NCPs: 273.4, 264.3, 237.9, 231.7, ... (df = 35; N was set to equal 400). Given the NCP of 231.7, power calculations reveal that one would require fewer than 100 subjects to reject the model at α = .05 with adequate power. Clearly, Schönemann's method of constructing correlation matrices that do not fit a single factor model is effective. Still, a problem that remains is that, within PCA, it is difficult to establish that a single common factor model does not fit. We have mentioned that the eigenvalue > 1 rule and screeplots are usually employed to determine the number of principal components to retain in a PCA. Fig. 1 contains the plot of the eigenvector associated with the dominant eigenvalue of Σ and the plots of the eigenvalues of Σ and Σ_L (we do not consider those of Σ_H as they equal those of Σ_L). This figure is instructive for two reasons. First, it shows why the scalar 1_n^t γ_1 assumes values that are so close to 1: given the manner in which the matrices are constructed, there is very little variation in γ_1. The components of γ_1 equal about 0.32, while those of 1_n equal 1/√10 (approx. 0.316). Second, the screeplots of the eigenvalues of the correlation matrix Σ and the covariance matrix Σ_L indicate that both appear to be consistent with the single factor

Fig. 1. Top left panel: γ_1 (eigenvector associated with the dominant eigenvalue). Top right panel: screeplot of Σ.
Bottom left panel: screeplot of Σ_L (Σ_L is not standardized).

model. Clearly, Schönemann's method of creating the covariance matrix Σ does not represent a severe violation, at least given the current heuristics applied in PCA to determine the number of factors. The first eigenvalue is clearly dominant (see Fig. 1). Again, the fact that the Spearman correlations assume substantial values is unsurprising.

7. Three specific criticisms of the simulation

When viewed in the light of the theorem, Schönemann's simulation, as a study of specificity, is not convincing or informative. It approximates the scenario of the theorem quite closely, and it is limited and idiosyncratic in its violations of the model (Eqs. (1a)–(2b)). Given the manner of constructing Σ, selection based on 1_n^t y does not differ greatly from selection based on γ_1^t y. More generally, as long as w contains positive elements, and Σ_L and Σ_H are required to remain positive, selection on w^t y will resemble selection on the γ_1 scores, γ_1^t y. Satisfying these requirements makes the model associated with selection on w^t y closely resemble the model associated with selection on γ_1^t y.

In addition to this general point, we note that the simulation study is inconsistent with Jensen's theory and his procedure on three counts. (1) Jensen does not actually require that covariance matrices be positive. Positiveness can be relinquished, e.g., removed by simple transformation, with no ill effects on Jensen's test. (2) Jensen does not propose that Blacks and Whites arise through any selection mechanism based on the variable 1^t y. This mechanism is implausible (see Loehlin, 1992). Jensen (1998) proposes that Blacks and Whites differ with respect to the genetic and environmental causes of individual differences in g. (3) Jensen emphasizes that the factor loadings are required to display ample variation (Jensen, 1985, 1992, 1998).
It is not clear just how much variation should be present, but it is reasonable to conclude, from Fig. 1 and our results, that this requirement is not met. In conclusion, we do not believe that Schönemann's simulation provides convincing support for the proposition that Spearman correlations are positive by mathematical necessity.

8. Returning to the model implied by the theorem

As mentioned, working within the confines of PCA, Jensen's theory that Blacks and Whites differ with respect to g may accurately be conveyed by the model implied by the theorem (Eqs. (1a), (1b) and (2a), (2b)). This model is consistent with the idea that g is the source of group differences (insofar as these differences are expressed in mean and covariance structures). One can investigate Spearman correlations without relying on artificial selection by simply introducing violations of this model and carrying out Jensen's procedure. In so doing, one may create covariance matrices and mean vectors that differ from expectation (i.e., Eqs. (1a)–(2b)) in a controllable and informative manner. In addition, by staying with a model-fitting approach, one can formally assess the power to detect violations. Such an investigation was carried out by Lubke et al. (2000). Working with the multigroup confirmatory factor model, Lubke et al. (2000) found that the Spearman correlation lacks specificity. Although a detailed study is beyond the present scope, we do

want to illustrate this method of investigating Spearman correlations within the context of PCA (i.e., Eqs. (1a)–(2b)).

9. Illustrative simulation

Blacks and Whites may differ in many ways that are not compatible with the strong version of Spearman's hypothesis (Lubke, Dolan, & Kelderman, 2000). We distinguish two classes of violation, not mutually exclusive. (1) Blacks and Whites differ with respect to principal components in addition to the dominant one. This violation includes instances of the weak version of Spearman's hypothesis. Because Jensen's formulation of the weak version is ill defined (Dolan, 2000; Lubke, Dolan, & Kelderman, 2000), it is difficult to tell when we may conclude that the weak version holds (rather than reject Spearman's hypothesis). (2) Blacks and Whites differ with respect to variables that have nothing to do with the principal components. We illustrate instances of both classes of violation within the PCA model. The first violation involves the introduction of nonzero components into the vectors of means of the γ_i scores (i = 2, ..., p). According to the strong version of Spearman's hypothesis, these vectors should equal κ_B = [κ_B1, 0, 0, ..., 0]^t and κ_W = [κ_W1, 0, 0, ..., 0]^t (subscripts B and W denote Blacks and Whites, respectively). The second violation involves setting both κ_B1 and κ_W1 to equal zero, and adding a vector of random negative values to the mean vector μ_B (the means of the observed y). These violations differ in an interesting manner. In the first, a central aspect of Jensen's theory is retained: the groups do differ with respect to principal components, but not in the manner predicted by the strong version of Spearman's hypothesis. The mean structure can still be modeled in a manner that is consistent with Eqs. (2a) and (2b). The second violation is more serious in theory, as it implies that the groups differ in a manner that cannot be modeled by Eqs. (2a) and (2b).
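The contrast between the two situations can be previewed numerically. The sketch below is our own illustration (the matrix construction follows Schönemann's recipe, and the perturbation range is an arbitrary choice): group differences generated along γ_1 alone yield a Spearman correlation of 1 at the population level, whereas differences unrelated to the components yield correlations centered on zero:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
p, reps = 10, 200
rhos_g, rhos_random = [], []

for _ in range(reps):
    # Correlation matrix via Schönemann's recipe: standardize T T^t + I.
    T = rng.uniform(0, 1, size=(p, p))
    S = T @ T.T + np.eye(p)
    d = np.diag(1 / np.sqrt(np.diag(S)))
    S = d @ S @ d
    g1 = np.linalg.eigh(S)[1][:, -1]        # dominant eigenvector
    if g1.sum() < 0:
        g1 = -g1

    # No violation: groups differ along the first component only.
    diff_g = 3.25 * g1                      # mean difference = gamma_1 * kappa_1
    rhos_g.append(spearmanr(diff_g, g1)[0])

    # Class (2): group difference unrelated to the principal components.
    diff_random = rng.uniform(-0.05, 0.05, size=p)
    rhos_random.append(spearmanr(diff_random, g1)[0])

print(np.mean(rhos_g))        # 1.0: perfect rank agreement by construction
print(np.mean(rhos_random))   # near 0 on average
```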
Here a crucial aspect of the model, namely, the fact that within- and between-group variance is attributable to the same principal components, is lost. We create 500 (10 × 10) covariance matrices S according to Schönemann's method.³ As above, the eigenvalue decomposition is denoted S = QΛQ^t. We assign identical S to both the Black (S_B) and the White group (S_W). Although this is not realistic (variance in White samples is larger), it is of little consequence here. Trial and error revealed that τ_W1 = 0 and τ_B1 = 3.25 produces observed mean differences of about one standard deviation. We introduce the following violations of the strong version of the hypothesis:

Violation A: τ_B = [3.25, 0.2, 0, ..., 0]^t.
Violation B: τ_B = [3.25, 0.2, 0.2, 0, 0, ..., 0]^t.

Violation C: τ_B = [3.25, 0.2, 0.2, 0.2, 0, ..., 0]^t.
Violation D: τ_B = [3.25, 0.2, 0.2, 0.2, 0.2, 0, ..., 0]^t.

In all cases, we retain τ_W = [0, 0, ..., 0]^t. Using these vectors and ν = [0, ..., 0]^t, we construct mean vectors according to Eqs. (2a) and (2b). In addition, we introduced the following violation:

Violation E: μ_W = ν and μ_B = ν + ξ,

where again ν is the (p × 1) zero vector and the components of ξ are drawn randomly from the uniform [−.05, .05] distribution. Given Violations A–E, the means of the Spearman correlation (based on 500 replications) equal .65 (S.D. = 0.20), .52 (0.24), .44 (0.27), .40 (0.27), and .002 (0.34), respectively. Histograms of the Spearman correlations are shown in Fig. 2. We do not draw definite conclusions about Spearman correlations based on these results, because Violations A–D are quite arbitrary. However, the drop in mean Spearman correlation

Fig. 2. Histograms of Spearman correlations given Violation A (top histogram) to E (bottom histogram) based on 500 replications. The nature of the violation is explained in the text.

reflects the increasing severity of the violation. More importantly, the mean correlation in the case of Violation E is about zero (.002). Here the between-group differences have nothing to do with the within-group differences, in contrast to Violations A–D, which may be viewed as compatible with the weak version of the hypothesis. Although limited in scope, these results show clearly that Spearman correlations are not positive by mathematical necessity.

10. Discussion

The aim of the present paper is to consider Schönemann's criticism of Spearman correlations from the perspective of multigroup PCA. Schönemann's theorem concerns a scenario in which the Spearman correlation will equal unity by mathematical necessity. As explained, the theorem comprises a multigroup PCA-based model that provides a quite accurate representation of Jensen's hypothesis concerning B–W differences. The theorem, however, does not prove the contention that Spearman correlations are substantial by mathematical necessity. The consistency between the model implied by the theorem (Eqs. (1a), (1b) and (2a), (2b)) and a given data set is an empirical issue and should be treated accordingly. Our discussion of Schönemann's simulation study identified several limitations, which render it quite unconvincing. These concern the manner of creating S and the creation of groups based on the selection variable 1^t y. These aspects of the simulation give rise to covariance and mean structures that closely approximate the structures expected given Schönemann's theorem. In addition, the variation in the components of the first eigenvector (as observed in the simulation; see Fig. 1) is necessarily small. Jensen clearly emphasizes the importance of variable factor loadings. We believe that a central problem is the creation of groups based on the variable 1^t y. Schönemann does not provide any rationale or justification for this manner of data simulation.
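The kind of specificity study described in Section 9 can be sketched in a few lines. The following is a minimal illustration, not a reproduction of either simulation: Schönemann's construction of S is replaced here by a generic positive-definite matrix, and the mean structure of Eqs. (2a) and (2b) is assumed to take the form μ_B − μ_W = Qτ_B, so the resulting means will not match the values reported above. It does, however, show the qualitative pattern: a substantial mean Spearman correlation under a Violation A-type difference on the components, and a mean correlation of about zero under a Violation E-type difference unrelated to the components.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_rep = 10, 500

def random_cov():
    # Stand-in for Schönemann's construction of S (not reproduced here):
    # a generic positive-definite (10 x 10) covariance matrix.
    A = rng.normal(size=(p, p))
    return A @ A.T / p + np.eye(p)

def ordered_eig(S):
    # Eigenvalue decomposition S = Q Lambda Q^t, components ordered by
    # decreasing eigenvalue, with the first eigenvector sign-fixed.
    lam, Q = np.linalg.eigh(S)
    idx = np.argsort(lam)[::-1]
    lam, Q = lam[idx], Q[:, idx]
    if Q[:, 0].sum() < 0:
        Q[:, 0] = -Q[:, 0]
    return lam, Q

def spearman_corr(S, mean_diff):
    # Jensen's procedure: correlate the PC1 loadings with the
    # standardized between-group mean differences.
    lam, Q = ordered_eig(S)
    sd = np.sqrt(np.diag(S))
    loadings = Q[:, 0] * np.sqrt(lam[0]) / sd
    return np.corrcoef(loadings, mean_diff / sd)[0, 1]

tau_A = np.zeros(p)
tau_A[:2] = [3.25, 0.2]        # Violation A: difference on PC1 and PC2
rs_A, rs_E = [], []
for _ in range(n_rep):
    S = random_cov()
    _, Q = ordered_eig(S)
    rs_A.append(spearman_corr(S, Q @ tau_A))  # assumed mu_B - mu_W = Q tau_B
    rs_E.append(spearman_corr(S, rng.uniform(-.05, .05, p)))  # Violation E
print(round(float(np.mean(rs_A)), 2), round(float(np.mean(rs_E)), 2))
```

Because the between-group difference under Violation E is drawn independently of S, its mean Spearman correlation hovers around zero, whereas the component-based difference yields a clearly positive mean: the statistic tracks the kind of violation, not a mathematical necessity.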
Ultimately, we view Schönemann's simulation as a study of the specificity of Spearman correlations given model violations. Of course, demonstrating a lack of specificity is not the same as demonstrating that Spearman correlations are positive by mathematical necessity. All tests are characterized by a degree of specificity and sensitivity: this does not render their outcomes trivial. Our own simulation study, although limited in scope, shows that Spearman correlations vary with the degree of model violation, and are not positive by necessity. Regardless of the details of his simulation study, it is clear that Schönemann raises an important issue, viz., are Spearman correlations a good test of the role of g in B–W differences in IQ scores? Many commentators on Spearman's hypothesis appear to accept that Jensen's procedure is sound (for an overview, see Schönemann, 1997a, pp. 666–667), and apply it accordingly (for recent applications, see Lynn & Owen, 1994; Nyborg & Jensen, 2000; Rushton, 1999; Te Nijenhuis, Evers, & Mur, 2000; Te Nijenhuis & van der Flier, 1997). Dolan (2000) and Lubke, Dolan, and Kelderman (2000) have identified several weaknesses of Jensen's procedure. Perhaps the most obvious problem with Spearman correlations is that they address only a single aspect of the model implied by Spearman's hypothesis concerning the centrality of g in B–W differences. In addition, Jensen's procedure does not involve any goodness-of-fit testing or any comparison or investigation of competing models (see Dolan,

2000). Spearman's hypothesis embodies a specific multigroup means and covariance structure model. Several commentators (Dolan, 1997; Gustafsson, 1992; Horn, 1997; Millsap, 1997) have pointed out that the issue of B–W differences in intelligence scores should be investigated using multigroup confirmatory factor analysis (MGCFA; Sörbom, 1974). The conditions for a meaningful comparison of groups within the factor model are known (Meredith, 1993). Such analyses can be carried out readily using widely disseminated software. Dolan (2000) presented an extensive discussion and application to the data published in Jensen and Reynolds (1982). In the present paper, we have considered the multigroup PCA model implied by Schönemann's theorem and previously presented by Flury et al. (1995). However, we do not believe that this model is preferable to MGCFA. MGCFA has several advantages, including scale invariance and flexibility in model specification (Dolan, 2000). An important aspect of the model-based approach is that the relevant models capture a central aspect of Jensen's theory. This aspect concerns the relationship between within- and between-group variance. Jensen essentially argues that the factors that cause within-group (individual) differences also cause between-group (mean) differences. An often repeated response to Jensen's theory is that within- and between-group variance may be independent (e.g., Jensen, 1973, p. 134). However, the present PCA-based model, as well as the MGCFA-based models presented in Dolan (2000), provides a viable way to investigate the proposition that within-group and between-group variance are attributable to common causes. The often-repeated proposition that these two classes of variance may be independent is thus amenable to empirical investigation.
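The constraint that within- and between-group variance share a common source can be made concrete with the model-implied moments of a factor model. The two-factor loading and uniqueness values below are hypothetical, chosen only for illustration (with unit factor variances assumed). The point is general: when the groups differ solely in the mean of the first (g) factor, the model-implied standardized mean differences are exactly proportional to the standardized g loadings, so the Spearman correlation equals 1 by construction, which is the unity scenario addressed by Schönemann's theorem. A nonzero difference on a second factor breaks that proportionality, and the correlation is then an empirical quantity.

```python
import numpy as np

# Hypothetical two-factor measurement model (all values illustrative).
Lam = np.array([[.8, .0], [.7, .0], [.6, .3], [.5, .3],
                [.6, .0], [.7, .2], [.5, .4], [.6, .1]])  # factor loadings
Theta = np.diag(np.full(8, .35))     # unique variances
Sigma = Lam @ Lam.T + Theta          # common within-group covariance (Psi = I)
sd = np.sqrt(np.diag(Sigma))
g_load = Lam[:, 0] / sd              # standardized g loadings

def spearman_corr(alpha):
    # Model-implied mean difference mu_W - mu_B = Lam @ alpha, correlated
    # with the standardized g loadings (Jensen's procedure on implied moments).
    return np.corrcoef(g_load, Lam @ alpha / sd)[0, 1]

r_strong = spearman_corr(np.array([0.8, 0.0]))  # difference on g only: exactly 1
r_two = spearman_corr(np.array([0.8, 0.4]))     # difference on both factors
print(round(float(r_strong), 3), round(float(r_two), 3))
```

With a g-only difference the two correlated vectors are proportional, so r_strong is 1 regardless of the particular loading values; r_two depends on how the second factor's loadings relate to the g loadings across the eight variables.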
Lubke, Dolan, Kelderman, and Mellenbergh (2000) discuss the usefulness of MGCFA in investigations of both B–W differences and the Flynn effect.

Acknowledgments

The research of Conor Dolan has been made possible by a fellowship of the Royal Netherlands Academy of Arts and Sciences.

References

Dolan, C. V. (1997). A note on Schönemann's refutation of Spearman's hypothesis. Multivariate Behavioral Research, 32, 319–325.
Dolan, C. V. (2000). Investigating Spearman's hypothesis by means of multi-group confirmatory factor analysis. Multivariate Behavioral Research, 35, 21–50.
Dolan, C. V., Bechger, T., & Molenaar, P. C. M. (1999). Using structural equation modeling to fit models incorporating principal components. Structural Equation Modeling, 6, 233–261.
Flury, B. (1988). Common principal components and related multivariate models. New York: Wiley.
Flury, B. D., Nel, D. G., & Pienaar, I. (1995). Simultaneous detection of shift in means and variances. Journal of the American Statistical Association, 90, 1474–1481.
Greene, W. H. (1993). Econometric analysis (2nd ed.). New York: Macmillan.
Gustafsson, J. E. (1992). The relevance of factor analysis for the study of group differences. Multivariate Behavioral Research, 27, 319–325.
Horn, J. (1997). On the mathematical relationship between factor or component coefficients and differences between means. Cahiers de Psychologie Cognitive, 16, 750–757.

Jensen, A. R. (1973). Educability and group differences. London: Methuen.
Jensen, A. R. (1985). The nature of the Black–White difference on various psychometric tests: Spearman's hypothesis. Behavioral and Brain Sciences, 8, 193–263.
Jensen, A. R. (1992). Spearman's hypothesis: methodology and evidence. Multivariate Behavioral Research, 27, 225–233.
Jensen, A. R. (1998). The g factor: the science of mental ability. Westport: Praeger.
Jensen, A. R., & Reynolds, C. R. (1982). Race, social class and ability patterns on the WISC-R. Personality and Individual Differences, 3, 423–438.
Jensen, A. R., & Weng, L.-J. (1994). What is a good g? Intelligence, 18, 231–258.
Jöreskog, K. G., & Sörbom, D. (1993). LISREL 8: structural equation modeling with the SIMPLIS command language. Chicago: Scientific Software International.
Kendall, M. G., & Stuart, A. (1968). The advanced theory of statistics, vol. 2 (3rd ed.). London: Griffin.
Lawley, D. N. (1943). A note on Karl Pearson's selection formulae. Proceedings of the Royal Society of Edinburgh, Section A, 62, 28–30.
Lawley, D. N., & Maxwell, A. E. (1971). Factor analysis as a statistical method. London: Butterworth.
Loehlin, J. C. (1992). On Schönemann on Guttman on Jensen, via Lewontin. Multivariate Behavioral Research, 27, 261–263.
Lubke, G. H., Dolan, C. V., & Kelderman, H. (2000). Investigating Black–White differences on cognitive tests using Spearman's hypothesis. Multivariate Behavioral Research (in press).
Lubke, G. H., Dolan, C. V., Kelderman, H., & Mellenbergh, G. J. (2000). Explicit models of within- and between-group variance: a tool to investigate the Flynn effect and group differences in intelligence tests. Submitted for publication.
Lynn, R., & Owen, K. (1994). Spearman's hypothesis and test score differences between Whites, Indians, and Blacks in South Africa. Journal of General Psychology, 121, 27–36.
Meredith, W. (1964). Notes on factorial invariance.
Psychometrika, 29, 177–185.
Meredith, W. (1993). Measurement invariance, factor analysis and factorial invariance. Psychometrika, 58, 525–543.
Millsap, R. E. (1997). The investigation of Spearman's hypothesis and the failure to understand factor analysis. Cahiers de Psychologie Cognitive, 16, 750–757.
Morrison, D. F. (1990). Multivariate statistical methods (3rd ed.). New York: McGraw-Hill.
Muthén, B. O. (1989). Factor structure in groups selected on observed scores. British Journal of Mathematical and Statistical Psychology, 42, 81–90.
Nyborg, H., & Jensen, A. R. (2000). Black–White differences on various psychometric tests: Spearman's hypothesis tested on American armed services veterans. Personality and Individual Differences, 28, 593–599.
Rushton, J. P. (1999). Secular gains in IQ not related to the g factor and inbreeding depression: a reply to Flynn. Personality and Individual Differences, 26, 381–389.
Saris, W. E., & Satorra, A. (1993). Power evaluations in structural equation models. In K. A. Bollen, & J. S. Long (Eds.), Testing structural equation models (pp. 181–204). Newbury Park: Sage Publications.
Schönemann, P. H. (1989). Some new results on the Spearman hypothesis artefact. Bulletin of the Psychonomic Society, 27, 462–464.
Schönemann, P. H. (1992). Extension of Guttman's result from g to PC1. Multivariate Behavioral Research, 27, 219–224.
Schönemann, P. H. (1997a). Famous artifacts: Spearman's hypothesis. Cahiers de Psychologie Cognitive, 16, 665–694.
Schönemann, P. H. (1997b). The rise and fall of Spearman's hypothesis. Cahiers de Psychologie Cognitive, 16, 788–812.
Sörbom, D. (1974). A general method for studying differences in factor means and factor structure between groups. British Journal of Mathematical and Statistical Psychology, 27, 229–239.
Stevens, J. (1992). Applied multivariate statistics for the social sciences (2nd ed.). Hillsdale: Lawrence Erlbaum.
Te Nijenhuis, J., Evers, A., & Mur, J. P. (2000).
Validity of the Differential Aptitude Test for the assessment of immigrant children. Educational Psychology, 20, 99–115.
Te Nijenhuis, J., & Van der Flier, H. (1997). Comparability of GATB scores for immigrants and majority group members: some Dutch findings. Journal of Applied Psychology, 82, 675–687.


More information

Multivariate Statistical Analysis

Multivariate Statistical Analysis Multivariate Statistical Analysis Fall 2011 C. L. Williams, Ph.D. Lecture 4 for Applied Multivariate Analysis Outline 1 Eigen values and eigen vectors Characteristic equation Some properties of eigendecompositions

More information

Statistics Introductory Correlation

Statistics Introductory Correlation Statistics Introductory Correlation Session 10 oscardavid.barrerarodriguez@sciencespo.fr April 9, 2018 Outline 1 Statistics are not used only to describe central tendency and variability for a single variable.

More information

This appendix provides a very basic introduction to linear algebra concepts.

This appendix provides a very basic introduction to linear algebra concepts. APPENDIX Basic Linear Algebra Concepts This appendix provides a very basic introduction to linear algebra concepts. Some of these concepts are intentionally presented here in a somewhat simplified (not

More information

MICHAEL SCHREINER and KARL SCHWEIZER

MICHAEL SCHREINER and KARL SCHWEIZER Review of Psychology, 2011, Vol. 18, No. 1, 3-11 UDC 159.9 The hypothesis-based investigation of patterns of relatedness by means of confirmatory factor models: The treatment levels of the Exchange Test

More information

Three-Level Modeling for Factorial Experiments With Experimentally Induced Clustering

Three-Level Modeling for Factorial Experiments With Experimentally Induced Clustering Three-Level Modeling for Factorial Experiments With Experimentally Induced Clustering John J. Dziak The Pennsylvania State University Inbal Nahum-Shani The University of Michigan Copyright 016, Penn State.

More information

An Approximate Test for Homogeneity of Correlated Correlation Coefficients

An Approximate Test for Homogeneity of Correlated Correlation Coefficients Quality & Quantity 37: 99 110, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 99 Research Note An Approximate Test for Homogeneity of Correlated Correlation Coefficients TRIVELLORE

More information

Electronic Research Archive of Blekinge Institute of Technology

Electronic Research Archive of Blekinge Institute of Technology Electronic Research Archive of Blekinge Institute of Technology http://www.bth.se/fou/ This is an author produced version of a ournal paper. The paper has been peer-reviewed but may not include the final

More information

Chapter 7: Simple linear regression

Chapter 7: Simple linear regression The absolute movement of the ground and buildings during an earthquake is small even in major earthquakes. The damage that a building suffers depends not upon its displacement, but upon the acceleration.

More information

INTRODUCTION TO ANALYSIS OF VARIANCE

INTRODUCTION TO ANALYSIS OF VARIANCE CHAPTER 22 INTRODUCTION TO ANALYSIS OF VARIANCE Chapter 18 on inferences about population means illustrated two hypothesis testing situations: for one population mean and for the difference between two

More information

Principal Components Analysis (PCA)

Principal Components Analysis (PCA) Principal Components Analysis (PCA) Principal Components Analysis (PCA) a technique for finding patterns in data of high dimension Outline:. Eigenvectors and eigenvalues. PCA: a) Getting the data b) Centering

More information

Key Algebraic Results in Linear Regression

Key Algebraic Results in Linear Regression Key Algebraic Results in Linear Regression James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) 1 / 30 Key Algebraic Results in

More information

Principal Component Analysis (PCA) Theory, Practice, and Examples

Principal Component Analysis (PCA) Theory, Practice, and Examples Principal Component Analysis (PCA) Theory, Practice, and Examples Data Reduction summarization of data with many (p) variables by a smaller set of (k) derived (synthetic, composite) variables. p k n A

More information

Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti

Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti Prepared by: Prof Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti Putra Malaysia Serdang M L Regression is an extension to

More information

NAG Toolbox for Matlab. g03aa.1

NAG Toolbox for Matlab. g03aa.1 G03 Multivariate Methods 1 Purpose NAG Toolbox for Matlab performs a principal component analysis on a data matrix; both the principal component loadings and the principal component scores are returned.

More information

Structural Equation Modeling and Confirmatory Factor Analysis. Types of Variables

Structural Equation Modeling and Confirmatory Factor Analysis. Types of Variables /4/04 Structural Equation Modeling and Confirmatory Factor Analysis Advanced Statistics for Researchers Session 3 Dr. Chris Rakes Website: http://csrakes.yolasite.com Email: Rakes@umbc.edu Twitter: @RakesChris

More information

An Investigation of the Accuracy of Parallel Analysis for Determining the Number of Factors in a Factor Analysis

An Investigation of the Accuracy of Parallel Analysis for Determining the Number of Factors in a Factor Analysis Western Kentucky University TopSCHOLAR Honors College Capstone Experience/Thesis Projects Honors College at WKU 6-28-2017 An Investigation of the Accuracy of Parallel Analysis for Determining the Number

More information

LECTURE 4 PRINCIPAL COMPONENTS ANALYSIS / EXPLORATORY FACTOR ANALYSIS

LECTURE 4 PRINCIPAL COMPONENTS ANALYSIS / EXPLORATORY FACTOR ANALYSIS LECTURE 4 PRINCIPAL COMPONENTS ANALYSIS / EXPLORATORY FACTOR ANALYSIS NOTES FROM PRE- LECTURE RECORDING ON PCA PCA and EFA have similar goals. They are substantially different in important ways. The goal

More information

Introduction to Matrix Algebra and the Multivariate Normal Distribution

Introduction to Matrix Algebra and the Multivariate Normal Distribution Introduction to Matrix Algebra and the Multivariate Normal Distribution Introduction to Structural Equation Modeling Lecture #2 January 18, 2012 ERSH 8750: Lecture 2 Motivation for Learning the Multivariate

More information

B. Weaver (18-Oct-2001) Factor analysis Chapter 7: Factor Analysis

B. Weaver (18-Oct-2001) Factor analysis Chapter 7: Factor Analysis B Weaver (18-Oct-2001) Factor analysis 1 Chapter 7: Factor Analysis 71 Introduction Factor analysis (FA) was developed by C Spearman It is a technique for examining the interrelationships in a set of variables

More information

On Selecting Tests for Equality of Two Normal Mean Vectors

On Selecting Tests for Equality of Two Normal Mean Vectors MULTIVARIATE BEHAVIORAL RESEARCH, 41(4), 533 548 Copyright 006, Lawrence Erlbaum Associates, Inc. On Selecting Tests for Equality of Two Normal Mean Vectors K. Krishnamoorthy and Yanping Xia Department

More information

An Alternative Proof of the Greville Formula

An Alternative Proof of the Greville Formula JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 94, No. 1, pp. 23-28, JULY 1997 An Alternative Proof of the Greville Formula F. E. UDWADIA1 AND R. E. KALABA2 Abstract. A simple proof of the Greville

More information

INTRODUCTION TO STRUCTURAL EQUATION MODELS

INTRODUCTION TO STRUCTURAL EQUATION MODELS I. Description of the course. INTRODUCTION TO STRUCTURAL EQUATION MODELS A. Objectives and scope of the course. B. Logistics of enrollment, auditing, requirements, distribution of notes, access to programs.

More information

RANDOM INTERCEPT ITEM FACTOR ANALYSIS. IE Working Paper MK8-102-I 02 / 04 / Alberto Maydeu Olivares

RANDOM INTERCEPT ITEM FACTOR ANALYSIS. IE Working Paper MK8-102-I 02 / 04 / Alberto Maydeu Olivares RANDOM INTERCEPT ITEM FACTOR ANALYSIS IE Working Paper MK8-102-I 02 / 04 / 2003 Alberto Maydeu Olivares Instituto de Empresa Marketing Dept. C / María de Molina 11-15, 28006 Madrid España Alberto.Maydeu@ie.edu

More information

Data analysis strategies for high dimensional social science data M3 Conference May 2013

Data analysis strategies for high dimensional social science data M3 Conference May 2013 Data analysis strategies for high dimensional social science data M3 Conference May 2013 W. Holmes Finch, Maria Hernández Finch, David E. McIntosh, & Lauren E. Moss Ball State University High dimensional

More information

Econometric software as a theoretical research tool

Econometric software as a theoretical research tool Journal of Economic and Social Measurement 29 (2004) 183 187 183 IOS Press Some Milestones in Econometric Computing Econometric software as a theoretical research tool Houston H. Stokes Department of Economics,

More information

Reliability Coefficients

Reliability Coefficients Testing the Equality of Two Related Intraclass Reliability Coefficients Yousef M. Alsawaimeh, Yarmouk University Leonard S. Feldt, University of lowa An approximate statistical test of the equality of

More information

1 A factor can be considered to be an underlying latent variable: (a) on which people differ. (b) that is explained by unknown variables

1 A factor can be considered to be an underlying latent variable: (a) on which people differ. (b) that is explained by unknown variables 1 A factor can be considered to be an underlying latent variable: (a) on which people differ (b) that is explained by unknown variables (c) that cannot be defined (d) that is influenced by observed variables

More information

Incompatibility Paradoxes

Incompatibility Paradoxes Chapter 22 Incompatibility Paradoxes 22.1 Simultaneous Values There is never any difficulty in supposing that a classical mechanical system possesses, at a particular instant of time, precise values of

More information

Package paramap. R topics documented: September 20, 2017

Package paramap. R topics documented: September 20, 2017 Package paramap September 20, 2017 Type Package Title paramap Version 1.4 Date 2017-09-20 Author Brian P. O'Connor Maintainer Brian P. O'Connor Depends R(>= 1.9.0), psych, polycor

More information

Intermediate Social Statistics

Intermediate Social Statistics Intermediate Social Statistics Lecture 5. Factor Analysis Tom A.B. Snijders University of Oxford January, 2008 c Tom A.B. Snijders (University of Oxford) Intermediate Social Statistics January, 2008 1

More information

Abstract Title Page. Title: Degenerate Power in Multilevel Mediation: The Non-monotonic Relationship Between Power & Effect Size

Abstract Title Page. Title: Degenerate Power in Multilevel Mediation: The Non-monotonic Relationship Between Power & Effect Size Abstract Title Page Title: Degenerate Power in Multilevel Mediation: The Non-monotonic Relationship Between Power & Effect Size Authors and Affiliations: Ben Kelcey University of Cincinnati SREE Spring

More information

Wooldridge, Introductory Econometrics, 4th ed. Chapter 2: The simple regression model

Wooldridge, Introductory Econometrics, 4th ed. Chapter 2: The simple regression model Wooldridge, Introductory Econometrics, 4th ed. Chapter 2: The simple regression model Most of this course will be concerned with use of a regression model: a structure in which one or more explanatory

More information

Independent Component Analysis

Independent Component Analysis Independent Component Analysis James V. Stone November 4, 24 Sheffield University, Sheffield, UK Keywords: independent component analysis, independence, blind source separation, projection pursuit, complexity

More information

PIRLS 2016 Achievement Scaling Methodology 1

PIRLS 2016 Achievement Scaling Methodology 1 CHAPTER 11 PIRLS 2016 Achievement Scaling Methodology 1 The PIRLS approach to scaling the achievement data, based on item response theory (IRT) scaling with marginal estimation, was developed originally

More information

Do not copy, post, or distribute

Do not copy, post, or distribute 14 CORRELATION ANALYSIS AND LINEAR REGRESSION Assessing the Covariability of Two Quantitative Properties 14.0 LEARNING OBJECTIVES In this chapter, we discuss two related techniques for assessing a possible

More information

Scaling of Variance Space

Scaling of Variance Space ... it Dominance, Information, and Hierarchical Scaling of Variance Space David J. Krus and Robert W. Ceurvorst Arizona State University A method for computation of dominance relations and for construction

More information

International Journal of Statistics: Advances in Theory and Applications

International Journal of Statistics: Advances in Theory and Applications International Journal of Statistics: Advances in Theory and Applications Vol. 1, Issue 1, 2017, Pages 1-19 Published Online on April 7, 2017 2017 Jyoti Academic Press http://jyotiacademicpress.org COMPARING

More information

Factor Analysis. Qian-Li Xue

Factor Analysis. Qian-Li Xue Factor Analysis Qian-Li Xue Biostatistics Program Harvard Catalyst The Harvard Clinical & Translational Science Center Short course, October 7, 06 Well-used latent variable models Latent variable scale

More information

Repeated Eigenvalues and Symmetric Matrices

Repeated Eigenvalues and Symmetric Matrices Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Model Estimation Example

Model Estimation Example Ronald H. Heck 1 EDEP 606: Multivariate Methods (S2013) April 7, 2013 Model Estimation Example As we have moved through the course this semester, we have encountered the concept of model estimation. Discussions

More information

Problems with parallel analysis in data sets with oblique simple structure

Problems with parallel analysis in data sets with oblique simple structure Methods of Psychological Research Online 2001, Vol.6, No.2 Internet: http://www.mpr-online.de Institute for Science Education 2001 IPN Kiel Problems with parallel analysis in data sets with oblique simple

More information

CHAPTER 7 INTRODUCTION TO EXPLORATORY FACTOR ANALYSIS. From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum

CHAPTER 7 INTRODUCTION TO EXPLORATORY FACTOR ANALYSIS. From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum CHAPTER 7 INTRODUCTION TO EXPLORATORY FACTOR ANALYSIS From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum 1997 144 CHAPTER 7 INTRODUCTION TO EXPLORATORY FACTOR ANALYSIS Factor analytic

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information