Least Squares Analyses of Variance and Covariance


One-Way ANOVA

Read Sections 1 and 2 in Chapter 16 of Howell. Run the program ANOVA1-LS.sas, which can be found on my SAS programs page. The data here are from Table 16.1 of Howell.

Dummy Variable Coding

DATA Dummy;
    INPUT Y X1-X3;
    TITLE1 'Dummy Variable Coded 1-Way ANOVA';
CARDS;
 [data lines omitted]
PROC REG SIMPLE CORR;
    MODEL Y = X1-X3;
run;

The dummy variable coding matrix is:

Group  X1  X2  X3
  1     1   0   0
  2     0   1   0
  3     0   0   1
  4     0   0   0

Look at the values of X1-X3 in the data in the Data Dummy section of the program file. X1 codes whether or not an observation is from Group 1 (0 = no, 1 = yes), X2 whether or not it is from Group 2, and X3 whether or not it is from Group 3. Only k-1 (here 4-1 = 3) dummy variables are needed, since an observation that is not in any of the first k-1 groups must be in the kth group. For each dummy variable the partial coefficient (b) represents a contrast between its group and the reference group (the one coded with all 0's): X1's partial codes Group 1 vs. Group 4, X2 codes Group 2 vs. Group 4, and X3 codes Group 3 vs. Group 4.

Copyright 2018 Karl L. Wuensch - All rights reserved.
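The correspondence between the dummy-coded regression coefficients and the group means can be verified outside SAS. Here is a minimal Python/numpy sketch; the scores are made up for illustration (they are not Howell's Table 16.1 data).

```python
import numpy as np

# Hypothetical scores for four groups, n = 3 each (NOT Howell's data).
y = np.array([7., 8., 9.,  4., 5., 6.,  1., 2., 3.,  5., 6., 7.])
group = np.repeat([1, 2, 3, 4], 3)

# Dummy coding: Xj = 1 if the observation is in group j, else 0; group 4
# (coded 0 on every X) is the reference group.
X = np.column_stack([np.ones_like(y)] +
                    [(group == j).astype(float) for j in (1, 2, 3)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

means = [y[group == j].mean() for j in (1, 2, 3, 4)]
print(b[0])  # intercept = mean of the reference group (Group 4)
print(b[1])  # b1 = mean(Group 1) - mean(Group 4), and so on for b2, b3
```

With a full-rank design matrix the least-squares fit reproduces exactly the relations described above: intercept = reference-group mean, each b = that group's mean minus the reference mean.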

[SAS output: the ANOVA summary table (Model, Error, Corrected Total), R-square, and the parameter estimates for the Intercept and X1-X3 are not reproduced here.]

The intercept is the mean of the reference group. For each X the b is the difference between its group's mean and the mean of the reference group. For example, the b for X1 is the mean for Group 1 minus the mean for Group 4, 8 - 6.33 = 1.67. Look at the correlations among the X's and note that with equal n's the off-diagonal correlations are constant. Now look at the output from the regression analysis. Note that the omnibus F is the same as would be obtained from a traditional ANOVA. Also note, from the partial statistics, that only Group 3 differs significantly from the reference group.

Effects Coding

Look at the Data Effects section of the program and its output. This is the type of coding that Howell uses. The design matrix is exactly like that in dummy variable coding except that the reference group is coded with -1 on each X. The design matrix is:

Group  X1  X2  X3
  1     1   0   0
  2     0   1   0
  3     0   0   1
  4    -1  -1  -1

DATA Effects;
    INPUT Y X1-X3;
    TITLE1 'Effects Coded 1-Way ANOVA';
CARDS;
 [data lines omitted]
PROC REG SIMPLE CORR;
    MODEL Y = X1-X3;
run;

The values of F, p, and R2 will all be the same with one type of coding as with any other type of coding.

[SAS output: ANOVA summary table and R-square not reproduced here.]

The result of this coding scheme is that each X's partial coefficient now represents one group versus the grand mean: X1 represents Group 1 versus the grand mean, X2 represents Group 2 versus the grand mean, and so on. As before, equal n's produce constant off-diagonal r's among the X's. The omnibus F from the regression analysis is unchanged. The intercept is now equal to the grand mean, and each X's b now equals the difference between its group's mean and the grand mean; for example, for X1, b = 8 - 5.5 = 2.5.
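The same kind of check works for effects coding. In this Python sketch (same made-up scores as before, not Howell's data), the reference group is coded -1 on every X, and with balanced groups the intercept recovers the grand mean while each b recovers a deviation from it.

```python
import numpy as np

# Hypothetical balanced data, four groups of n = 3 (NOT Howell's data).
y = np.array([7., 8., 9.,  4., 5., 6.,  1., 2., 3.,  5., 6., 7.])
group = np.repeat([1, 2, 3, 4], 3)

def effect(j):
    """Effects code for group j: 1 in group j, -1 in group 4, 0 elsewhere."""
    x = (group == j).astype(float)
    x[group == 4] = -1.0
    return x

X = np.column_stack([np.ones_like(y), effect(1), effect(2), effect(3)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

print(b[0])  # = grand mean (with equal n's)
print(b[1])  # = mean(Group 1) - grand mean
```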

[SAS output: parameter estimates for the Intercept and X1-X3 not reproduced here.]

Contrast Coding

Look at the Data Contrast section of the program and its output. The design matrix here codes a complete orthogonal set of comparisons:

Group  X1  X2  X3
  1     1   1   0
  2     1  -1   0
  3    -1   0   1
  4    -1   0  -1

DATA Contrast;
    INPUT Y X1-X3;
    TITLE1 'Contrast Coded 1-Way ANOVA';
CARDS;
 [data lines omitted]
PROC REG SIMPLE CORR;
    MODEL Y = X1-X3;
run;

[SAS output: parameter estimates for the Intercept and X1-X3 not reproduced here.]

The intercept equals the grand mean. Each of the b's is one half of the difference between the means of the groups contrasted. For example, the b for X1 is one half of the difference between the mean of Groups 1 and 2 (6.5) and the mean of Groups 3 and 4 (4.5): (6.5 - 4.5)/2 = 1. X1 contrasts Groups 1 & 2 with Groups 3 & 4, X2 contrasts Group 1 with Group 2, and X3 contrasts Group 3 with Group 4. For each contrast variable (X1, X2, and X3), the sum of the coefficients must be 0. Furthermore, the products of the coefficients for any one contrast variable and any other contrast variable must sum to 0; for example, for X1 and X2, (1)(1) + (1)(-1) + (-1)(0) + (-1)(0) = 1 - 1 + 0 + 0 = 0. Again, the off-diagonal r's among the X's are constant, but this time, since the design matrix codes orthogonal contrasts, the r's are all 0. The omnibus F is unchanged.

Standard Contrast Coding

There are some advantages to using a standard set of weights. The coefficients for the one set of means must equal +1 divided by the number of conditions in that set, while those for the other set must equal -1 divided by the number of conditions in that other set. The sum of the absolute values of the coefficients must be 2. For our design, here are standard weights:

Group  X1   X2   X3
  1     ½    ½    0
  2     ½   -½    0
  3    -½    0    ½
  4    -½    0   -½

DATA StandardContrast;
    INPUT Y X1-X3;
    TITLE1 'Standard Contrast Coded 1-Way ANOVA';
CARDS;
 [data lines omitted]
PROC REG SIMPLE CORR;
    MODEL Y = X1-X3;
run;

Now the slopes equal the differences between the means of the groups contrasted.
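The orthogonality conditions and the "b is half the difference" property can both be checked numerically. A Python sketch, again with made-up balanced data rather than Howell's:

```python
import numpy as np

# Hypothetical balanced data, four groups of n = 3 (NOT Howell's data).
y = np.array([7., 8., 9.,  4., 5., 6.,  1., 2., 3.,  5., 6., 7.])
group = np.repeat([1, 2, 3, 4], 3)

# Orthogonal contrast codes: 1&2 vs 3&4, 1 vs 2, 3 vs 4.
codes = {1: (1, 1, 0), 2: (1, -1, 0), 3: (-1, 0, 1), 4: (-1, 0, -1)}
C = np.array([codes[g] for g in group], dtype=float)

# Each contrast column sums to zero, and cross-products of distinct
# columns are zero (the off-diagonal of C'C vanishes).
assert np.allclose(C.sum(axis=0), 0)
assert np.allclose(C.T @ C, np.diag(np.diag(C.T @ C)))

X = np.column_stack([np.ones_like(y), C])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
m = np.array([y[group == j].mean() for j in (1, 2, 3, 4)])

print(b[1])  # half of [mean(groups 1&2) - mean(groups 3&4)]
print(b[2])  # half of [mean(group 1) - mean(group 2)]
```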

[SAS output: parameter estimates for the Intercept and X1-X3 not reproduced here.]

Let GLM Do It

The code in the Data GLM section of the program does the ANOVA with PROC GLM. The CONTRAST statements reproduce the contrasts earlier produced with contrast coding. Each F from the PROC GLM CONTRAST statements is the square of the corresponding t from PROC REG.

DATA GLM;
    INPUT Y A;
    TITLE1 'Let GLM Do It';
CARDS;
 [data lines omitted]
PROC GLM; CLASS A;
    MODEL Y = A / SS1;
    CONTRAST '12 VS 34'   A  1  1 -1 -1;
    CONTRAST '1 VS 2'     A  1 -1  0  0;
    CONTRAST '3 VS 4'     A  0  0  1 -1;
    CONTRAST '12s VS 34s' A .5 .5 -.5 -.5;
    CONTRAST '1s VS 2s'   A .5 -.5 0  0;
    CONTRAST '3s VS 4s'   A  0  0 .5 -.5;
run; quit;

[SAS output: contrast SS, mean squares, F values, and p values not reproduced here.]

Two-Way ANOVA

Read Sections 16.3 and 16.4 in Howell. I also recommend that you reread my handout "Four Types of Sums of Squares for ANOVA Effects." Run the programs ANOVA2-LS-Eq.sas and ANOV2-LS-UnEq.sas.

Orthogonal ANOVA, 2 x 4 = 8 groups

ANOVA2-LS-Eq.sas uses the data from Table 16.2 of Howell and the effects coding matrix presented there:

- one dummy variable to code the main effect of A,
- three dummy variables to code the main effect of B, and
- 1 x 3 = 3 dummy variables to code the interaction, formed as products of the dummy variable for A by those for B.

That is, there are as many X's as degrees of freedom: 1 for factor A, 3 for factor B, and 3 for the interaction between A and B. If we had three levels of A and four of B we would have 11 X's: A1, A2, B1, B2, B3, A1B1, A1B2, A1B3, A2B1, A2B2, A2B3.

DATA SOL; DROP I;
    INPUT A1 B1 B2 B3 A1B1 A1B2 A1B3;
    DO I=1 TO 4; INPUT Y @@; OUTPUT; END;
CARDS;
 [data lines omitted]
PROC REG;
    full:  MODEL Y = A1 B1 B2 B3 A1B1 A1B2 A1B3;
    a_x_b: MODEL Y = A1 B1 B2 B3;
    b:     MODEL Y = A1 A1B1 A1B2 A1B3;
    a:     MODEL Y = B1 B2 B3 A1B1 A1B2 A1B3;
run;

[SAS output: ANOVA summary table, R-square, and parameter estimates for the full model not reproduced here.]

Look at the output from the full model. The error SS there is the SSE for the factorial ANOVA, on 24 df. The model SS (cells SS) there is the sum of the A, B, and AxB sums of squares from a factorial ANOVA. The b's are as indicated in part (c) of Table 16.2 in Howell, and the intercept equals the grand mean.

Now look at the output from Model: A_X_B. We have deleted from this model the three terms that code the interaction. The interaction sum of squares is simply the difference between the full-model and reduced-model regression sums of squares. The eta-squared for the interaction is likewise the difference between the full-model R2 and the reduced-model R2, which here is .0737; this also equals the interaction SS divided by the total SS. We can test the significance of the interaction term by testing the significance of the reduction in the regression SS that accompanied the deletion of the dummy variables that coded the interaction. Using a partial F, we obtain the same value of F we would get using the traditional means (interaction mean square divided by error mean square):

F = (SSfull - SSreduced) / [(dffull - dfreduced)(MSEfull)]

where dffull - dfreduced = 3, the number of interaction terms that were deleted.

The Model: B output is for a reduced model with the three terms coding the main effect of B deleted. You find the SS and eta-squared for B by subtracting the appropriate reduced-model statistics from the full-model statistics. The Model: A output is for a reduced model with the one term coding the main effect of A deleted. The results you get from testing each of the three reduced models are the same you would get using PROC ANOVA or PROC GLM, which do the dummy coding for you.

Nonorthogonal Analysis

ANOV2-LS-UnEq.sas uses the unequal-n's data from Table 16.5 of Howell. The coding scheme is the same as in the previous analysis. Obtain sums of squares for A, B, and AxB in the same way as you did in the previous analysis and you will have done an Overall and Spiegel Method I analysis. Do note that the sums of squares do not sum to the total SS, since we have excluded variance that is ambiguous: each effect is partialled for every other effect. The results from such an analysis are identical to those provided by the Type III SS computed by PROC GLM.

[SAS output: ANOVA summary table and Type III SS for A, B, and A*B not reproduced here.]
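The full-versus-reduced model comparison behind that partial F can be sketched in Python. The data below are simulated balanced 2 x 4 data (not Howell's Table 16.2); the effect sizes and seed are arbitrary, so only the mechanics, not the numbers, correspond to the text.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated balanced 2 x 4 design, n = 4 per cell (NOT Howell's data).
a = np.repeat([0, 1], 16)
b_lev = np.tile(np.repeat([0, 1, 2, 3], 4), 2)
y = 5 + 2 * a + b_lev + rng.normal(0, 1, 32)

def ss_model(X, y):
    """Regression SS and model df for an OLS fit with an intercept column."""
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return ((yhat - y.mean()) ** 2).sum(), X.shape[1] - 1

# Effects coding: one X for A, three for B, three products for A x B.
A1 = (a == 0).astype(float) - (a == 1)
B = np.column_stack([(b_lev == j).astype(float) - (b_lev == 3) for j in range(3)])
inter = A1[:, None] * B
one = np.ones_like(y)

X_full = np.column_stack([one, A1, B, inter])
X_red = np.column_stack([one, A1, B])          # interaction terms deleted
ss_f, df_f = ss_model(X_full, y)
ss_r, df_r = ss_model(X_red, y)
resid = y - X_full @ np.linalg.lstsq(X_full, y, rcond=None)[0]
mse_full = (resid ** 2).sum() / (32 - df_f - 1)

F = (ss_f - ss_r) / ((df_f - df_r) * mse_full)
print(F)  # partial F for the A x B interaction, on (3, 24) df
```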

Analysis of Covariance

Read Section 16.5 and the sections that follow it in Howell, and Chapter 6 in Tabachnick and Fidell. As explained there, the ANCOV is simply a least-squares ANOVA in which the covariate or covariates are entered into the model prior to or simultaneously with the categorical variables. The effect of each categorical variable is adjusted for the covariate(s). Do note the additional assumptions involved in ANCOV (that each covariate has a linear relationship with the outcome variable and that the slope for that relationship does not change across levels of the categorical predictor variable(s)). Carefully read Howell's cautions about interpreting analyses of covariance when subjects have not been randomly assigned to treatment groups. Run the programs ANCOV1.sas and ANCOV2.sas.

One-Way ANCOV

I am not going to burden you with doing ANCOV with PROC REG; I think you already have the basic idea of least-squares analyses mastered. Look at ANCOV1.sas and its output. These data were obtained from Figure 2 in the article "Relationships among models of salary bias" by M. H. Birnbaum (1985, American Psychologist) and are said to be representative of data obtained in various studies of sex bias in faculty salaries. I did double the sample size from that displayed in the plot from which I harvested the data. We can imagine that we have data from three different departments' faculty members: the professor's GENDER (1 = male, 2 = female), an objective measure of the professor's QUALIFICations (a composite of things like number of publications, ratings of instruction, etc.), and SALARY (in thousands of 1985 dollars).

PROC FORMAT; VALUE sx 1='M' 2='F'; run;
DATA LOTUS;
    INPUT GENDER QUALIFIC SALARY;
    format gender sx.;
cards;
 [data lines omitted]
PROC PLOT; PLOT SALARY*QUALIFIC=GENDER;

The data are plotted, using the symbol for gender as the plotting symbol.
The plot suggests three lines, one for each department (salaries being highest in the business department and lowest in the sociology department), but that is not our primary interest. Do note that salaries go up as qualifications go up. Also note that the M's tend to be plotted higher and more to the right than the F's.

PROC ANOVA; CLASS GENDER;
    MODEL QUALIFIC SALARY = GENDER;
    MEANS GENDER;
    TITLE3 'Gender differences in qualifications and in salary ($thousands)';
run; quit;

[SAS output: ANOVA summary tables for QUALIFIC and for SALARY not reproduced here.]

[SAS output: group means and standard deviations for QUALIFIC and SALARY, by gender, not reproduced here.]

The sexes differ significantly on both qualifications and salaries. PROC ANOVA does two simple ANOVAs, one on the qualifications data (later to be used as a covariate) and one on the salary data. Both are significant. This is going to make the interpretation of the ANCOV difficult, since we will be adjusting group means on the salary variable to remove the effect of the qualifications variable (the covariate), but the groups differ on both. The interpretation would be more straightforward if the groups did not differ on the covariate, in which case adjusting for the covariate would simply reduce the error term, providing a more powerful analysis. The error SS (1789.3) from the analysis on the covariate is that which Howell calls SSe(c) when discussing making comparisons between pairs of adjusted means.

PROC GLM; CLASS GENDER;
    MODEL SALARY = QUALIFIC|GENDER / SS3;
    title3 'Test Qualific*Gender for Homogeneity of Regression';
run; quit;

[SAS output: Type III SS for QUALIFIC, GENDER, and QUALIFIC*GENDER not reproduced here; the interaction is not significant.]

The first invocation of PROC GLM is used to test the homogeneity of regression assumption. (PROC ANOVA does not allow any continuous effects, such as a continuous covariate.) The MODEL statement includes, when the bar notation is expanded, the interaction term QUALIFIC*GENDER. Some computing time is saved by asking for only one type of sums of squares. Were QUALIFIC*GENDER significant, we would have a significant violation of the homogeneity of regression assumption (the slopes of the lines for predicting salary from qualifications would differ significantly between genders), which would, I opine, be a very interesting finding in its own right. What should you do if your primary interest is to test the effect of groups after holding constant the effect of the covariate, but the damn interaction is significant? The same thing you would do if you were interested in the main effect of Factor A but factorial ANOVA showed an interaction between A and B: look at the simple main effects of A at levels of B. That is, do a moderation analysis. See ANCOV_HeroRegr.

PROC GLM; CLASS GENDER;
    MODEL SALARY = GENDER QUALIFIC(GENDER) / SS3 SOLUTION;
    title3 'Get Slopes for Salary Predicted From Qualifications for Each Group';
run; quit;

[SAS output: parameter estimates for the intercept, GENDER, and QUALIFIC(GENDER) not reproduced here.]

Note: The X'X matrix has been found to be singular, and a generalized inverse was used to solve the normal equations. Terms whose estimates are followed by the letter 'B' are not uniquely estimable.

The second invocation of PROC GLM is used to obtain the slopes for predicting salary from qualifications within each level of GENDER, QUALIFIC(GENDER). We already know that these two slopes do not differ significantly, but I do find it interesting that the slope for the male faculty is higher than that for the female faculty.
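The same two steps, separate within-group slopes and a test of the covariate-by-group product term, can be sketched in Python. The data here are simulated salary-like data (two groups, one covariate), not Birnbaum's; the generating coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated data: two groups of n = 20 with one covariate (NOT Birnbaum's data).
n = 20
qual = np.concatenate([rng.uniform(2, 8, n), rng.uniform(1, 7, n)])
male = np.repeat([1.0, 0.0], n)
salary = 30 + 2.0 * qual + 5 * male + rng.normal(0, 2, 2 * n)

# Separate within-group slopes (what QUALIFIC(GENDER) reports in SAS).
slope_m = np.polyfit(qual[male == 1], salary[male == 1], 1)[0]
slope_f = np.polyfit(qual[male == 0], salary[male == 0], 1)[0]

# Homogeneity of regression: compare the model with the covariate-by-group
# product term against the model without it.
X_full = np.column_stack([np.ones(2 * n), qual, male, qual * male])
X_red = np.column_stack([np.ones(2 * n), qual, male])

def sse(X):
    b, *_ = np.linalg.lstsq(X, salary, rcond=None)
    return ((salary - X @ b) ** 2).sum()

F = (sse(X_red) - sse(X_full)) / (sse(X_full) / (2 * n - 4))
print(slope_m, slope_f, F)  # F on (1, 36) df tests equality of the two slopes
```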
PROC GLM; CLASS GENDER;
    MODEL SALARY = QUALIFIC GENDER / SS3 EFFECTSIZE alpha=.1;
    LSMEANS GENDER;
    TITLE3 'ANCOV: Gender differences in salary holding qualifications constant.';
run; quit;

[SAS output: ANOVA summary table, R-square, Type III SS, and effect-size estimates (semipartial eta-square and omega-square, with conservative 90% confidence limits) for QUALIFIC and GENDER not reproduced here.]

The third invocation of PROC GLM is used to do the analysis of covariance. The LSMEANS are estimates of what the group means would be if the groups did not differ on qualifications. If you have more than two groups, you will probably want to use the PDIFF option, for example, LSMEANS GROUP / PDIFF. The matrix of p values produced with the PDIFF option is for pairwise comparisons between adjusted means (with no adjustment of per-comparison alpha). You can adjust the alpha criterion downwards (Bonferroni, Sidak) if you are worried about familywise error rates.

We can estimate the magnitude of the effect of gender with an eta-squared statistic, the ratio of the gender sum of squares to the total sum of squares, which here is .076. This is equivalent to the increase in R2 when we add gender to a model for predicting salary from the covariate(s): the R2 for predicting salary from qualifications and gender, minus the r2 for predicting salary from qualifications alone, equals .076. If men and women were equally qualified, 7.6% of the differences in salaries would be explained by gender. Look back at the ANOVA comparing the genders on salary. The eta-squared there was .306: if we ignore qualifications, 30.6% of the differences in salaries is explained by gender (which is confounded with qualifications and other unknown variables).

We could also use d̂ to estimate the magnitude of the difference between groups. The raw difference between adjusted means is 5.58 thousand dollars. The standardizer will be the square root of the MSE from the ANOVA or from the ANCOV; Howell (following the advice of Cortina & Nouri) recommends the former. Accordingly, d̂ = .76. If we were to ignore qualifications (by using the unadjusted means), d̂ would be considerably larger. Notice that after removing the interaction term the within-gender regression lines have identical slopes.

[SAS output: adjusted (least-squares) mean salary for each gender not reproduced here.]

Our results indicate that even when we statistically adjust for differences in qualifications, men receive a salary significantly higher than that of women. This would seem to be pretty good evidence of bias against women, but will the results look the same if we view them from a different perspective? Look at the last invocation of PROC GLM. Here we compare the genders on qualifications after removing the effect of salary.

PROC GLM; CLASS GENDER;
    MODEL QUALIFIC = SALARY GENDER / SS3 EFFECTSIZE alpha=.1;
    LSMEANS GENDER;
    TITLE3 'ANCOV: Gender differences in qualifications holding salary constant.';
run; quit;
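The effect-size arithmetic just described can be laid out in a few lines. The .076 eta-squared and the 5.58 adjusted difference come from the text; the two R2 values and the MSE below are hypothetical stand-ins chosen only to illustrate the computation.

```python
import math

# Eta-squared for gender = R2(covariate + gender) - R2(covariate alone).
r2_cov_only = 0.770   # hypothetical R2, salary from qualifications alone
r2_cov_group = 0.846  # hypothetical R2 after adding gender
eta_sq = r2_cov_group - r2_cov_only
print(round(eta_sq, 3))  # .076, the value reported in the text

# d-hat = adjusted mean difference / sqrt(MSE from the ANOVA), as Howell
# (following Cortina & Nouri) recommends.
adj_diff = 5.58    # thousands of dollars, from the text
mse_anova = 54.0   # hypothetical MSE, chosen to reproduce the text's d-hat
d_hat = adj_diff / math.sqrt(mse_anova)
print(round(d_hat, 2))  # .76
```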

[SAS output: ANOVA summary table, R-square, and adjusted (least-squares) mean qualifications for each gender not reproduced here.]

The results indicate that when we equate the groups on salary, the mean qualifications of the men are significantly greater than those of the women. That looks like bias too, but in the opposite direction. ANCOV is a slippery thing, especially when dealing with data from a confounded design where the covariate is correlated not only with the dependent variable but with the independent variable as well.

Two-Way ANCOV

Look at ANCOV2.sas and its output. The data are from Howell's text. The program is a straightforward extension of ANCOV1.sas to a two-way design. First PROC ANOVA is used to evaluate effects of type of task (pattern recognition, cognitive, or driving simulation) and smoking condition (active smoking, delayed smoking, nonsmokers) on the covariate (distractibility) and on the dependent variable (errors made on the task).

PROC ANOVA; CLASS task smoke;
    MODEL errors distract = task|smoke;
    MEANS task smoke;
    TITLE3 'Two-way ANOVAs using errors and distract as DVs';
    Title4 'Were the design unbalanced (unequal n''s) you would use';
    title5 'PROC GLM here instead of PROC ANOVA.';
run; quit;

Were the design unbalanced (unequal n's) you would need to use PROC GLM with Type III sums of squares here. The model SS from the ANOVA on the covariate is the SScells(c) from Howell's discussion of comparing adjusted means, the error SS from the same analysis, 54,285.8, is Howell's SSe(c), and the SSSmoke, 730, is Howell's SSg(c).

[SAS output: ANOVA summary tables for errors and for distract, with Anova SS for task, smoke, and task*smoke, not reproduced here. For errors, the effect of task is significant at p < .0001.]

PROC GLM; CLASS task smoke;
    MODEL errors = task|smoke|distract / SS3;
    title3 'Test Homogeneity of Regression Within Cells and Treatments';
run; quit;

[SAS output: Type III SS for task, smoke, task*smoke, distract, distract*task, distract*smoke, and distract*task*smoke not reproduced here; distract and distract*task are significant at p < .0001.]

PROC GLM is first used to test homogeneity of regression within cells and treatments. The DISTRACT*TASK F tests the null hypothesis that the slope for predicting ERRORS from DISTRACT is the same for all three types of task. The DISTRACT*SMOKE F tests slopes across smoking groups. The DISTRACT*TASK*SMOKE F tests the null that the slope is the same in every cell of the two-way design. Howell did not extract the DISTRACT*TASK and DISTRACT*SMOKE terms from the error term and he did not test them, although in the third edition of his text (p. 562) he admitted that a good case could be made for testing those effects (he wanted their 3 df in the error term). Our analysis indicates that we have no problem with heterogeneity of regression across cells, but notice that there is heterogeneity of regression across tasks and across smoking groups.

PROC GLM; CLASS task smoke;
    MODEL errors = task smoke distract(task*smoke) / SS1 SOLUTION;
    title2 'Obtain Within-Cell Slopes';
run; quit;

[SAS output: within-cell slope estimates, distract(task*smoke), for each of the nine task-by-smoking cells not reproduced here; the slopes for the three cognitive-task cells are significant at p < .0001.]

PROC GLM is next used to obtain the slopes for each cell. Ignore the biased estimates for the within-treatment slopes. Although these slopes do not differ enough across cells to produce significant heterogeneity of regression, inspection of the slopes shows why the DISTRACT*TASK effect was significant: look at how high the slopes are for the cognitive task compared to the other two tasks. Clearly the number of errors increased more rapidly with participants' level of distractibility on the cognitive task than on the other tasks, especially for those nicotine junkies who had been deprived of their drug. You can also see the (smaller) DISTRACT*SMOKE effect, with the slopes for the delayed smokers (smokers who had not had a smoke in three hours) being larger than those for the other participants.

PROC GLM; CLASS task smoke;
    MODEL errors = distract task|smoke / SOLUTION;
    LSMEANS task smoke / PDIFF;
    title2 'The ANCOV With Means and Adjusted Means';
run; quit;

[SAS output: Type I SS and the solution estimate for distract not reproduced here; distract is significant at p < .0001.]

The next GLM does the ANCOV. Note that DISTRACT is significantly correlated with ERRORS (p < .001, Type I SS). Remember that the Type I SS reported here does not adjust the first term in the model (the covariate) for the later terms in the model. Howell prefers to adjust the covariate for the other effects in the model, so he uses SPSS unique (same as SAS Type III) SS to test the covariate. The common slope used to adjust scores is given in the SOLUTION output.

[SAS output: Type III SS for distract, task, smoke, and task*smoke not reproduced here; distract and task are significant at p < .0001.]

I copied the adjusted means into SPSS and used UNIANOVA to produce an interaction plot.

UNIANOVA Errors BY Task Smoking
    /METHOD=SSTYPE(3)
    /INTERCEPT=INCLUDE
    /PLOT=PROFILE(Smoking*Task)
    /CRITERIA=ALPHA(0.05)
    /DESIGN=Task Smoking Task*Smoking.

As you can see in the plot, the smoking condition had a greater effect on performance on the cognitive task than on the other tasks. The very large main effect of type of task is obvious in the plot too, with errors being much more likely on the cognitive task than on the other two tasks.

[SAS output: adjusted (least-squares) mean errors for the three tasks, and the matrix of p values for pairwise comparisons between the adjusted task means, not reproduced here; the cognitive task differs from the other two at p < .0001.]

If we ignore the interaction and look at the comparisons between marginal means (using the PDIFF output, and not worrying about familywise error), we see that, for the type-of-task variable, there were significantly more errors on the cognitive task than on the other two types of task.

[SAS output: adjusted (least-squares) mean errors for the three smoking groups, and the p values for pairwise comparisons between those adjusted means, not reproduced here.]

On the smoking variable, we see that the nonsmokers made significantly fewer errors than did those in the two groups of smokers. TASK, SMOKE, and TASK*SMOKE all have significant effects after we adjust for the covariate (Type III SS). Since the interaction is significant, we need to do some simple main effects analyses.

PROC SORT; BY task;
PROC GLM; CLASS smoke;
    MODEL errors = smoke distract / solution;
    LSMEANS smoke / PDIFF;
    BY task;
    title2 'Simple Main Effects ANCOV by Level of Task';
run; quit;

task=Pattern

[SAS output: ANOVA summary table for the pattern recognition task not reproduced here.]

[SAS output: R-square, Type I SS for distract and smoke, and the estimated slope for distract not reproduced here.]

The simple main effects analysis done with the data from the pattern recognition task shows that the smoking groups did not differ significantly. The Type I SS for smoke gives us a test of the effect of smoking history ignoring the covariate. The slope used to adjust the scores on the pattern recognition task is 0.085, notably less than the common slope used in the factorial ANCOV.

task=Cognitive

[SAS output: ANOVA summary table, R-square, Type I SS (distract significant at p < .0001), and the estimated slope for distract not reproduced here.]

[SAS output: adjusted (least-squares) mean errors for the three smoking groups on the cognitive task, and the p values for pairwise comparisons between those adjusted means, not reproduced here.]

When we look at the analysis of the data from the cognitive task, we see that the smoking groups differ significantly. The nonsmokers made significantly fewer errors than did the participants in both of the smoking groups. The slope used for adjusting scores (0.537) is notably greater than it was in the factorial ANCOV or with the other two tasks. This is due to the DISTRACT*TASK interaction, which Howell chose to ignore but we detected.

task=Driving

[SAS output: ANOVA summary table, R-square, and Type I SS for distract and smoke not reproduced here.]

[SAS output: the estimated slope for distract, the adjusted (least-squares) mean errors for the three smoking groups on the driving task, and the p values for pairwise comparisons between those adjusted means, not reproduced here.]

Finally, with the driving task, we see that the smoking groups differ significantly, with the active smokers making significantly fewer errors than did the delayed smokers and the nonsmokers. I guess the stimulant properties of nicotine are of some value when driving.

Controlling Familywise Error When Using PDIFF

If the comparisons being made involve only three means, I recommend Fisher's procedure: do not adjust the p values, but require that the main effect be statistically significant; if it is not, none of the pairwise differences are significant. If the comparisons involve more than three means, you can tell SAS to adjust the p values to control familywise error. For example, LSMEANS smoke / PDIFF ADJUST=TUKEY; would apply a Tukey adjustment. Other adjustments available include BONferroni, SIDAK, DUNNETT, and SCHEFFE.

References and Recommended Readings

Birnbaum, M. H. (1985). Relationships among models of salary bias. American Psychologist, 40.

Howell, D. C. (2013). Statistical methods for psychology (8th ed.). Belmont, CA: Cengage Wadsworth.

Huck, S. W., & McLean, R. A. (1975). Using a repeated measures ANOVA to analyze the data from a pretest-posttest design: A potentially confusing task. Psychological Bulletin.

Maxwell, S. E., Delaney, H. D., & Dill (1984). Another look at ANCOVA versus blocking. Psychological Bulletin, 95.

Maxwell, S. E., Delaney, H. D., & Manheimer, J. M. (1985). ANOVA of residuals and ANCOVA: Correcting an illusion by using model comparisons and graphs. Journal of Educational and Behavioral Statistics, 10.

Rausch, J. R., Maxwell, S. E., & Kelley, K. (2003). Analytic methods for questions pertaining to a randomized pretest, posttest, follow-up design. Journal of Clinical Child and Adolescent Psychology, 32.

Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Boston: Pearson.

Example of Presentation of Results from One-Way ANCOV
The Pretest-Posttest x Groups Design: How to Analyze the Data
Matching and ANCOV with Confounded Variables
Effect Size Confidence Intervals in ANCOV

Return to Wuensch's Stats Lessons Page


More information

A Re-Introduction to General Linear Models (GLM)

A Re-Introduction to General Linear Models (GLM) A Re-Introduction to General Linear Models (GLM) Today s Class: You do know the GLM Estimation (where the numbers in the output come from): From least squares to restricted maximum likelihood (REML) Reviewing

More information

PLS205!! Lab 9!! March 6, Topic 13: Covariance Analysis

PLS205!! Lab 9!! March 6, Topic 13: Covariance Analysis PLS205!! Lab 9!! March 6, 2014 Topic 13: Covariance Analysis Covariable as a tool for increasing precision Carrying out a full ANCOVA Testing ANOVA assumptions Happiness! Covariable as a Tool for Increasing

More information

4.8 Alternate Analysis as a Oneway ANOVA

4.8 Alternate Analysis as a Oneway ANOVA 4.8 Alternate Analysis as a Oneway ANOVA Suppose we have data from a two-factor factorial design. The following method can be used to perform a multiple comparison test to compare treatment means as well

More information

Formula for the t-test

Formula for the t-test Formula for the t-test: How the t-test Relates to the Distribution of the Data for the Groups Formula for the t-test: Formula for the Standard Error of the Difference Between the Means Formula for the

More information

ANALYSIS OF VARIANCE OF BALANCED DAIRY SCIENCE DATA USING SAS

ANALYSIS OF VARIANCE OF BALANCED DAIRY SCIENCE DATA USING SAS ANALYSIS OF VARIANCE OF BALANCED DAIRY SCIENCE DATA USING SAS Ravinder Malhotra and Vipul Sharma National Dairy Research Institute, Karnal-132001 The most common use of statistics in dairy science is testing

More information

Three Factor Completely Randomized Design with One Continuous Factor: Using SPSS GLM UNIVARIATE R. C. Gardner Department of Psychology

Three Factor Completely Randomized Design with One Continuous Factor: Using SPSS GLM UNIVARIATE R. C. Gardner Department of Psychology Data_Analysis.calm Three Factor Completely Randomized Design with One Continuous Factor: Using SPSS GLM UNIVARIATE R. C. Gardner Department of Psychology This article considers a three factor completely

More information

Topic 28: Unequal Replication in Two-Way ANOVA

Topic 28: Unequal Replication in Two-Way ANOVA Topic 28: Unequal Replication in Two-Way ANOVA Outline Two-way ANOVA with unequal numbers of observations in the cells Data and model Regression approach Parameter estimates Previous analyses with constant

More information

ST505/S697R: Fall Homework 2 Solution.

ST505/S697R: Fall Homework 2 Solution. ST505/S69R: Fall 2012. Homework 2 Solution. 1. 1a; problem 1.22 Below is the summary information (edited) from the regression (using R output); code at end of solution as is code and output for SAS. a)

More information

Analysis of Covariance

Analysis of Covariance Analysis of Covariance (ANCOVA) Bruce A Craig Department of Statistics Purdue University STAT 514 Topic 10 1 When to Use ANCOVA In experiment, there is a nuisance factor x that is 1 Correlated with y 2

More information

Sociology Research Statistics I Final Exam Answer Key December 15, 1993

Sociology Research Statistics I Final Exam Answer Key December 15, 1993 Sociology 592 - Research Statistics I Final Exam Answer Key December 15, 1993 Where appropriate, show your work - partial credit may be given. (On the other hand, don't waste a lot of time on excess verbiage.)

More information

One-way between-subjects ANOVA. Comparing three or more independent means

One-way between-subjects ANOVA. Comparing three or more independent means One-way between-subjects ANOVA Comparing three or more independent means Data files SpiderBG.sav Attractiveness.sav Homework: sourcesofself-esteem.sav ANOVA: A Framework Understand the basic principles

More information

Module 2. General Linear Model

Module 2. General Linear Model D.G. Bonett (9/018) Module General Linear Model The relation between one response variable (y) and q 1 predictor variables (x 1, x,, x q ) for one randomly selected person can be represented by the following

More information

Linear models Analysis of Covariance

Linear models Analysis of Covariance Esben Budtz-Jørgensen April 22, 2008 Linear models Analysis of Covariance Confounding Interactions Parameterizations Analysis of Covariance group comparisons can become biased if an important predictor

More information

Linear models Analysis of Covariance

Linear models Analysis of Covariance Esben Budtz-Jørgensen November 20, 2007 Linear models Analysis of Covariance Confounding Interactions Parameterizations Analysis of Covariance group comparisons can become biased if an important predictor

More information

Topic 13. Analysis of Covariance (ANCOVA) [ST&D chapter 17] 13.1 Introduction Review of regression concepts

Topic 13. Analysis of Covariance (ANCOVA) [ST&D chapter 17] 13.1 Introduction Review of regression concepts Topic 13. Analysis of Covariance (ANCOVA) [ST&D chapter 17] 13.1 Introduction The analysis of covariance (ANCOVA) is a technique that is occasionally useful for improving the precision of an experiment.

More information

1. (Rao example 11.15) A study measures oxygen demand (y) (on a log scale) and five explanatory variables (see below). Data are available as

1. (Rao example 11.15) A study measures oxygen demand (y) (on a log scale) and five explanatory variables (see below). Data are available as ST 51, Summer, Dr. Jason A. Osborne Homework assignment # - Solutions 1. (Rao example 11.15) A study measures oxygen demand (y) (on a log scale) and five explanatory variables (see below). Data are available

More information

Statistics 5100 Spring 2018 Exam 1

Statistics 5100 Spring 2018 Exam 1 Statistics 5100 Spring 2018 Exam 1 Directions: You have 60 minutes to complete the exam. Be sure to answer every question, and do not spend too much time on any part of any question. Be concise with all

More information

General Linear Models (GLM) for Fixed Factors

General Linear Models (GLM) for Fixed Factors Chapter 224 General Linear Models (GLM) for Fixed Factors Introduction This procedure performs analysis of variance (ANOVA) and analysis of covariance (ANCOVA) for factorial models that include fixed factors

More information

CHAPTER 7 - FACTORIAL ANOVA

CHAPTER 7 - FACTORIAL ANOVA Between-S Designs Factorial 7-1 CHAPTER 7 - FACTORIAL ANOVA Introduction to Factorial Designs................................................. 2 A 2 x 2 Factorial Example.......................................................

More information

BIOL Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES

BIOL Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES BIOL 458 - Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES PART 1: INTRODUCTION TO ANOVA Purpose of ANOVA Analysis of Variance (ANOVA) is an extremely useful statistical method

More information

6. Multiple regression - PROC GLM

6. Multiple regression - PROC GLM Use of SAS - November 2016 6. Multiple regression - PROC GLM Karl Bang Christensen Department of Biostatistics, University of Copenhagen. http://biostat.ku.dk/~kach/sas2016/ kach@biostat.ku.dk, tel: 35327491

More information

4:3 LEC - PLANNED COMPARISONS AND REGRESSION ANALYSES

4:3 LEC - PLANNED COMPARISONS AND REGRESSION ANALYSES 4:3 LEC - PLANNED COMPARISONS AND REGRESSION ANALYSES FOR SINGLE FACTOR BETWEEN-S DESIGNS Planned or A Priori Comparisons We previously showed various ways to test all possible pairwise comparisons for

More information

One-way between-subjects ANOVA. Comparing three or more independent means

One-way between-subjects ANOVA. Comparing three or more independent means One-way between-subjects ANOVA Comparing three or more independent means ANOVA: A Framework Understand the basic principles of ANOVA Why it is done? What it tells us? Theory of one-way between-subjects

More information

MANOVA is an extension of the univariate ANOVA as it involves more than one Dependent Variable (DV). The following are assumptions for using MANOVA:

MANOVA is an extension of the univariate ANOVA as it involves more than one Dependent Variable (DV). The following are assumptions for using MANOVA: MULTIVARIATE ANALYSIS OF VARIANCE MANOVA is an extension of the univariate ANOVA as it involves more than one Dependent Variable (DV). The following are assumptions for using MANOVA: 1. Cell sizes : o

More information

WELCOME! Lecture 13 Thommy Perlinger

WELCOME! Lecture 13 Thommy Perlinger Quantitative Methods II WELCOME! Lecture 13 Thommy Perlinger Parametrical tests (tests for the mean) Nature and number of variables One-way vs. two-way ANOVA One-way ANOVA Y X 1 1 One dependent variable

More information

171:162 Design and Analysis of Biomedical Studies, Summer 2011 Exam #3, July 16th

171:162 Design and Analysis of Biomedical Studies, Summer 2011 Exam #3, July 16th Name 171:162 Design and Analysis of Biomedical Studies, Summer 2011 Exam #3, July 16th Use the selected SAS output to help you answer the questions. The SAS output is all at the back of the exam on pages

More information

STA441: Spring Multiple Regression. This slide show is a free open source document. See the last slide for copyright information.

STA441: Spring Multiple Regression. This slide show is a free open source document. See the last slide for copyright information. STA441: Spring 2018 Multiple Regression This slide show is a free open source document. See the last slide for copyright information. 1 Least Squares Plane 2 Statistical MODEL There are p-1 explanatory

More information

ANCOVA. ANCOVA allows the inclusion of a 3rd source of variation into the F-formula (called the covariate) and changes the F-formula

ANCOVA. ANCOVA allows the inclusion of a 3rd source of variation into the F-formula (called the covariate) and changes the F-formula ANCOVA Workings of ANOVA & ANCOVA ANCOVA, Semi-Partial correlations, statistical control Using model plotting to think about ANCOVA & Statistical control You know how ANOVA works the total variation among

More information

Laboratory Topics 4 & 5

Laboratory Topics 4 & 5 PLS205 Lab 3 January 23, 2014 Orthogonal contrasts Class comparisons in SAS Trend analysis in SAS Multiple mean comparisons Laboratory Topics 4 & 5 Orthogonal contrasts Planned, single degree-of-freedom

More information

Self-Assessment Weeks 8: Multiple Regression with Qualitative Predictors; Multiple Comparisons

Self-Assessment Weeks 8: Multiple Regression with Qualitative Predictors; Multiple Comparisons Self-Assessment Weeks 8: Multiple Regression with Qualitative Predictors; Multiple Comparisons 1. Suppose we wish to assess the impact of five treatments while blocking for study participant race (Black,

More information

MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010

MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010 MIXED MODELS FOR REPEATED (LONGITUDINAL) DATA PART 2 DAVID C. HOWELL 4/1/2010 Part 1 of this document can be found at http://www.uvm.edu/~dhowell/methods/supplements/mixed Models for Repeated Measures1.pdf

More information

5:1LEC - BETWEEN-S FACTORIAL ANOVA

5:1LEC - BETWEEN-S FACTORIAL ANOVA 5:1LEC - BETWEEN-S FACTORIAL ANOVA The single-factor Between-S design described in previous classes is only appropriate when there is just one independent variable or factor in the study. Often, however,

More information

Pairwise multiple comparisons are easy to compute using SAS Proc GLM. The basic statement is:

Pairwise multiple comparisons are easy to compute using SAS Proc GLM. The basic statement is: Pairwise Multiple Comparisons in SAS Pairwise multiple comparisons are easy to compute using SAS Proc GLM. The basic statement is: means effects / options Here, means is the statement initiat, effects

More information

Unit 12: Analysis of Single Factor Experiments

Unit 12: Analysis of Single Factor Experiments Unit 12: Analysis of Single Factor Experiments Statistics 571: Statistical Methods Ramón V. León 7/16/2004 Unit 12 - Stat 571 - Ramón V. León 1 Introduction Chapter 8: How to compare two treatments. Chapter

More information

Research Methodology: Tools

Research Methodology: Tools MSc Business Administration Research Methodology: Tools Applied Data Analysis (with SPSS) Lecture 09: Introduction to Analysis of Variance (ANOVA) April 2014 Prof. Dr. Jürg Schwarz Lic. phil. Heidi Bruderer

More information

Analysis of variance and regression. April 17, Contents Comparison of several groups One-way ANOVA. Two-way ANOVA Interaction Model checking

Analysis of variance and regression. April 17, Contents Comparison of several groups One-way ANOVA. Two-way ANOVA Interaction Model checking Analysis of variance and regression Contents Comparison of several groups One-way ANOVA April 7, 008 Two-way ANOVA Interaction Model checking ANOVA, April 008 Comparison of or more groups Julie Lyng Forman,

More information

Inferences for Regression

Inferences for Regression Inferences for Regression An Example: Body Fat and Waist Size Looking at the relationship between % body fat and waist size (in inches). Here is a scatterplot of our data set: Remembering Regression In

More information

Topic 20: Single Factor Analysis of Variance

Topic 20: Single Factor Analysis of Variance Topic 20: Single Factor Analysis of Variance Outline Single factor Analysis of Variance One set of treatments Cell means model Factor effects model Link to linear regression using indicator explanatory

More information

Keppel, G. & Wickens, T. D. Design and Analysis Chapter 12: Detailed Analyses of Main Effects and Simple Effects

Keppel, G. & Wickens, T. D. Design and Analysis Chapter 12: Detailed Analyses of Main Effects and Simple Effects Keppel, G. & Wickens, T. D. Design and Analysis Chapter 1: Detailed Analyses of Main Effects and Simple Effects If the interaction is significant, then less attention is paid to the two main effects, and

More information

Analysis of Covariance. The following example illustrates a case where the covariate is affected by the treatments.

Analysis of Covariance. The following example illustrates a case where the covariate is affected by the treatments. Analysis of Covariance In some experiments, the experimental units (subjects) are nonhomogeneous or there is variation in the experimental conditions that are not due to the treatments. For example, a

More information

What Does the F-Ratio Tell Us?

What Does the F-Ratio Tell Us? Planned Comparisons What Does the F-Ratio Tell Us? The F-ratio (called an omnibus or overall F) provides a test of whether or not there a treatment effects in an experiment A significant F-ratio suggests

More information

unadjusted model for baseline cholesterol 22:31 Monday, April 19,

unadjusted model for baseline cholesterol 22:31 Monday, April 19, unadjusted model for baseline cholesterol 22:31 Monday, April 19, 2004 1 Class Level Information Class Levels Values TRETGRP 3 3 4 5 SEX 2 0 1 Number of observations 916 unadjusted model for baseline cholesterol

More information

Outline. Topic 19 - Inference. The Cell Means Model. Estimates. Inference for Means Differences in cell means Contrasts. STAT Fall 2013

Outline. Topic 19 - Inference. The Cell Means Model. Estimates. Inference for Means Differences in cell means Contrasts. STAT Fall 2013 Topic 19 - Inference - Fall 2013 Outline Inference for Means Differences in cell means Contrasts Multiplicity Topic 19 2 The Cell Means Model Expressed numerically Y ij = µ i + ε ij where µ i is the theoretical

More information

Correlation & Simple Regression

Correlation & Simple Regression Chapter 11 Correlation & Simple Regression The previous chapter dealt with inference for two categorical variables. In this chapter, we would like to examine the relationship between two quantitative variables.

More information

" M A #M B. Standard deviation of the population (Greek lowercase letter sigma) σ 2

 M A #M B. Standard deviation of the population (Greek lowercase letter sigma) σ 2 Notation and Equations for Final Exam Symbol Definition X The variable we measure in a scientific study n The size of the sample N The size of the population M The mean of the sample µ The mean of the

More information

Analysis of variance. April 16, Contents Comparison of several groups

Analysis of variance. April 16, Contents Comparison of several groups Contents Comparison of several groups Analysis of variance April 16, 2009 One-way ANOVA Two-way ANOVA Interaction Model checking Acknowledgement for use of presentation Julie Lyng Forman, Dept. of Biostatistics

More information

STAT 705 Chapter 19: Two-way ANOVA

STAT 705 Chapter 19: Two-way ANOVA STAT 705 Chapter 19: Two-way ANOVA Timothy Hanson Department of Statistics, University of South Carolina Stat 705: Data Analysis II 1 / 38 Two-way ANOVA Material covered in Sections 19.2 19.4, but a bit

More information

Analysis of variance. April 16, 2009

Analysis of variance. April 16, 2009 Analysis of variance April 16, 2009 Contents Comparison of several groups One-way ANOVA Two-way ANOVA Interaction Model checking Acknowledgement for use of presentation Julie Lyng Forman, Dept. of Biostatistics

More information

Review of Statistics 101

Review of Statistics 101 Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods

More information

GLM Repeated-measures designs: One within-subjects factor

GLM Repeated-measures designs: One within-subjects factor GLM Repeated-measures designs: One within-subjects factor Reading: SPSS dvanced Models 9.0: 2. Repeated Measures Homework: Sums of Squares for Within-Subject Effects Download: glm_withn1.sav (Download

More information

Factorial Analysis of Variance

Factorial Analysis of Variance Chapter 13 Factorial Analysis of Variance Objectives To discuss the analysis of variance for the case of two or more independent variables. The chapter also includes coverage of nested designs. Contents

More information

A Re-Introduction to General Linear Models

A Re-Introduction to General Linear Models A Re-Introduction to General Linear Models Today s Class: Big picture overview Why we are using restricted maximum likelihood within MIXED instead of least squares within GLM Linear model interpretation

More information

Chapter 7 Factorial ANOVA: Two-way ANOVA

Chapter 7 Factorial ANOVA: Two-way ANOVA Chapter 7 Factorial ANOVA: Two-way ANOVA Page Two-way ANOVA: Equal n. Examples 7-. Terminology 7-6 3. Understanding main effects 7- and interactions 4. Structural model 7-5 5. Variance partitioning 7-6.

More information

Factorial ANOVA. STA305 Spring More than one categorical explanatory variable

Factorial ANOVA. STA305 Spring More than one categorical explanatory variable Factorial ANOVA STA305 Spring 2014 More than one categorical explanatory variable Optional Background Reading Chapter 7 of Data analysis with SAS 2 Factorial ANOVA More than one categorical explanatory

More information

10/31/2012. One-Way ANOVA F-test

10/31/2012. One-Way ANOVA F-test PSY 511: Advanced Statistics for Psychological and Behavioral Research 1 1. Situation/hypotheses 2. Test statistic 3.Distribution 4. Assumptions One-Way ANOVA F-test One factor J>2 independent samples

More information

One-Way Analysis of Covariance (ANCOVA)

One-Way Analysis of Covariance (ANCOVA) Chapter 225 One-Way Analysis of Covariance (ANCOVA) Introduction This procedure performs analysis of covariance (ANCOVA) with one group variable and one covariate. This procedure uses multiple regression

More information

Business Statistics. Lecture 10: Course Review

Business Statistics. Lecture 10: Course Review Business Statistics Lecture 10: Course Review 1 Descriptive Statistics for Continuous Data Numerical Summaries Location: mean, median Spread or variability: variance, standard deviation, range, percentiles,

More information

Multiple Comparison Procedures Cohen Chapter 13. For EDUC/PSY 6600

Multiple Comparison Procedures Cohen Chapter 13. For EDUC/PSY 6600 Multiple Comparison Procedures Cohen Chapter 13 For EDUC/PSY 6600 1 We have to go to the deductions and the inferences, said Lestrade, winking at me. I find it hard enough to tackle facts, Holmes, without

More information

PLS205 Lab 6 February 13, Laboratory Topic 9

PLS205 Lab 6 February 13, Laboratory Topic 9 PLS205 Lab 6 February 13, 2014 Laboratory Topic 9 A word about factorials Specifying interactions among factorial effects in SAS The relationship between factors and treatment Interpreting results of an

More information

Keppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means

Keppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means Keppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means 4.1 The Need for Analytical Comparisons...the between-groups sum of squares averages the differences

More information

Multivariate analysis of variance and covariance

Multivariate analysis of variance and covariance Introduction Multivariate analysis of variance and covariance Univariate ANOVA: have observations from several groups, numerical dependent variable. Ask whether dependent variable has same mean for each

More information

Analysis of Covariance

Analysis of Covariance B. Weaver (15-Feb-2002) ANCOVA... 1 Analysis of Covariance 2.1 Conceptual overview of ANCOVA Howell (1997) introduces analysis of covariance (ANCOVA) in the context of a simple 3-group experiment. The

More information

ANCOVA. Psy 420 Andrew Ainsworth

ANCOVA. Psy 420 Andrew Ainsworth ANCOVA Psy 420 Andrew Ainsworth What is ANCOVA? Analysis of covariance an extension of ANOVA in which main effects and interactions are assessed on DV scores after the DV has been adjusted for by the DV

More information

9. Linear Regression and Correlation

9. Linear Regression and Correlation 9. Linear Regression and Correlation Data: y a quantitative response variable x a quantitative explanatory variable (Chap. 8: Recall that both variables were categorical) For example, y = annual income,

More information

A discussion on multiple regression models

A discussion on multiple regression models A discussion on multiple regression models In our previous discussion of simple linear regression, we focused on a model in which one independent or explanatory variable X was used to predict the value

More information

Analyses of Variance. Block 2b

Analyses of Variance. Block 2b Analyses of Variance Block 2b Types of analyses 1 way ANOVA For more than 2 levels of a factor between subjects ANCOVA For continuous co-varying factor, between subjects ANOVA for factorial design Multiple

More information

DETAILED CONTENTS PART I INTRODUCTION AND DESCRIPTIVE STATISTICS. 1. Introduction to Statistics

DETAILED CONTENTS PART I INTRODUCTION AND DESCRIPTIVE STATISTICS. 1. Introduction to Statistics DETAILED CONTENTS About the Author Preface to the Instructor To the Student How to Use SPSS With This Book PART I INTRODUCTION AND DESCRIPTIVE STATISTICS 1. Introduction to Statistics 1.1 Descriptive and

More information

Outline. Analysis of Variance. Acknowledgements. Comparison of 2 or more groups. Comparison of serveral groups

Outline. Analysis of Variance. Acknowledgements. Comparison of 2 or more groups. Comparison of serveral groups Outline Analysis of Variance Analysis of variance and regression course http://staff.pubhealth.ku.dk/~lts/regression10_2/index.html Comparison of serveral groups Model checking Marc Andersen, mja@statgroup.dk

More information

(Where does Ch. 7 on comparing 2 means or 2 proportions fit into this?)

(Where does Ch. 7 on comparing 2 means or 2 proportions fit into this?) 12. Comparing Groups: Analysis of Variance (ANOVA) Methods Response y Explanatory x var s Method Categorical Categorical Contingency tables (Ch. 8) (chi-squared, etc.) Quantitative Quantitative Regression

More information

Comparing Several Means: ANOVA

Comparing Several Means: ANOVA Comparing Several Means: ANOVA Understand the basic principles of ANOVA Why it is done? What it tells us? Theory of one way independent ANOVA Following up an ANOVA: Planned contrasts/comparisons Choosing

More information

T. Mark Beasley One-Way Repeated Measures ANOVA handout

T. Mark Beasley One-Way Repeated Measures ANOVA handout T. Mark Beasley One-Way Repeated Measures ANOVA handout Profile Analysis Example In the One-Way Repeated Measures ANOVA, two factors represent separate sources of variance. Their interaction presents an

More information

Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti

Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti Putra Malaysia Serdang Use in experiment, quasi-experiment

More information

Chapter 14: Repeated-measures designs

Chapter 14: Repeated-measures designs Chapter 14: Repeated-measures designs Oliver Twisted Please, Sir, can I have some more sphericity? The following article is adapted from: Field, A. P. (1998). A bluffer s guide to sphericity. Newsletter

More information

Contrasts (in general)

Contrasts (in general) 10/1/015 6-09/749 Experimental Design for Behavioral and Social Sciences Contrasts (in general) Context: An ANOVA rejects the overall null hypothesis that all k means of some factor are not equal, i.e.,

More information

Simple Linear Regression: One Quantitative IV

Simple Linear Regression: One Quantitative IV Simple Linear Regression: One Quantitative IV Linear regression is frequently used to explain variation observed in a dependent variable (DV) with theoretically linked independent variables (IV). For example,

More information

Correlations. Notes. Output Created Comments 04-OCT :34:52

Correlations. Notes. Output Created Comments 04-OCT :34:52 Correlations Output Created Comments Input Missing Value Handling Syntax Resources Notes Data Active Dataset Filter Weight Split File N of Rows in Working Data File Definition of Missing Cases Used Processor

More information

Neuendorf MANOVA /MANCOVA. Model: X1 (Factor A) X2 (Factor B) X1 x X2 (Interaction) Y4. Like ANOVA/ANCOVA:

Neuendorf MANOVA /MANCOVA. Model: X1 (Factor A) X2 (Factor B) X1 x X2 (Interaction) Y4. Like ANOVA/ANCOVA: 1 Neuendorf MANOVA /MANCOVA Model: X1 (Factor A) X2 (Factor B) X1 x X2 (Interaction) Y1 Y2 Y3 Y4 Like ANOVA/ANCOVA: 1. Assumes equal variance (equal covariance matrices) across cells (groups defined by

More information

Review of Multiple Regression

Review of Multiple Regression Ronald H. Heck 1 Let s begin with a little review of multiple regression this week. Linear models [e.g., correlation, t-tests, analysis of variance (ANOVA), multiple regression, path analysis, multivariate

More information

11 Factors, ANOVA, and Regression: SAS versus Splus

11 Factors, ANOVA, and Regression: SAS versus Splus Adapted from P. Smith, and expanded 11 Factors, ANOVA, and Regression: SAS versus Splus Factors. A factor is a variable with finitely many values or levels which is treated as a predictor within regression-type

More information

Dr. Junchao Xia Center of Biophysics and Computational Biology. Fall /1/2016 1/46

Dr. Junchao Xia Center of Biophysics and Computational Biology. Fall /1/2016 1/46 BIO5312 Biostatistics Lecture 10:Regression and Correlation Methods Dr. Junchao Xia Center of Biophysics and Computational Biology Fall 2016 11/1/2016 1/46 Outline In this lecture, we will discuss topics

More information

STAT 705 Chapter 19: Two-way ANOVA

STAT 705 Chapter 19: Two-way ANOVA STAT 705 Chapter 19: Two-way ANOVA Adapted from Timothy Hanson Department of Statistics, University of South Carolina Stat 705: Data Analysis II 1 / 41 Two-way ANOVA This material is covered in Sections

More information

Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model

Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model Course Introduction and Overview Descriptive Statistics Conceptualizations of Variance Review of the General Linear Model PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 1: August 22, 2012

More information

Outline. Analysis of Variance. Comparison of 2 or more groups. Acknowledgements. Comparison of serveral groups

Outline. Analysis of Variance. Comparison of 2 or more groups. Acknowledgements. Comparison of serveral groups Outline Analysis of Variance Analysis of variance and regression course http://staff.pubhealth.ku.dk/~jufo/varianceregressionf2011.html Comparison of serveral groups Model checking Marc Andersen, mja@statgroup.dk

More information

1 Introduction to Minitab

1 Introduction to Minitab 1 Introduction to Minitab Minitab is a statistical analysis software package. The software is freely available to all students and is downloadable through the Technology Tab at my.calpoly.edu. When you

More information

Warner, R. M. (2008). Applied Statistics: From bivariate through multivariate techniques. Thousand Oaks: Sage.

Warner, R. M. (2008). Applied Statistics: From bivariate through multivariate techniques. Thousand Oaks: Sage. Errata for Warner, R. M. (2008). Applied Statistics: From bivariate through multivariate techniques. Thousand Oaks: Sage. Most recent update: March 4, 2009 Please send information about any errors in the

More information