Chapter 5: Contrasts for one-way ANOVA
1. What is a contrast?
2. Types of contrasts
3. Significance tests of a single contrast
4. Brand name contrasts
5. Relationships between the omnibus F and contrasts
6. Robust tests for a single contrast
7. Effect sizes for a single contrast
8. An example

Advanced topics in contrast analysis
9. Trend analysis
10. Simultaneous significance tests on multiple contrasts
11. Contrasts with unequal cell sizes
12. A final example

Karpinski
1. What is a contrast?

A contrast is a focused test of means: a weighted sum of means. Contrasts allow you to test your research hypothesis (as opposed to the statistical hypothesis).

Example: You want to investigate whether a college education improves SAT scores. You obtain five groups with n participants in each group:
o High School Seniors
o College Seniors: Mathematics Majors, Chemistry Majors, English Majors, History Majors
o All participants take the SAT and scores are recorded
o The omnibus F-test examines the following hypotheses:
H0: μ1 = μ2 = μ3 = μ4 = μ5
H1: Not all μi's are equal
o But you want to know:
Do college seniors score differently than high school seniors?
Do natural science majors score differently than humanities majors?
Do math majors score differently than chemistry majors?
Do English majors score differently than history majors?

HS Students: μ1 | College Students: Math μ2, Chemistry μ3, English μ4, History μ5
Do college seniors score differently than high school seniors?
H0: μ1 = (μ2 + μ3 + μ4 + μ5)/4
H1: μ1 ≠ (μ2 + μ3 + μ4 + μ5)/4

Do natural science majors score differently than humanities majors?
H0: (μ2 + μ3)/2 = (μ4 + μ5)/2
H1: (μ2 + μ3)/2 ≠ (μ4 + μ5)/2

Do math majors score differently than chemistry majors?
H0: μ2 = μ3
H1: μ2 ≠ μ3

Do English majors score differently than history majors?
H0: μ4 = μ5
H1: μ4 ≠ μ5
In general, a contrast is a set of weights that defines a specific comparison over the cell means:

ψ = Σ(i=1 to a) c_i μ_i = c1μ1 + c2μ2 + c3μ3 + ... + caμa
ψ̂ = Σ(i=1 to a) c_i X̄_i = c1X̄1 + c2X̄2 + c3X̄3 + ... + caX̄a

o Where:
(μ1, μ2, μ3, ..., μa) are the population means for each group
(X̄1, X̄2, X̄3, ..., X̄a) are the observed means for each group
(c1, c2, c3, ..., ca) are weights/contrast coefficients with Σ(i=1 to a) c_i = 0

A contrast is a linear combination of cell means.

o Do college seniors score differently than high school seniors?
H0: μ1 = (μ2 + μ3 + μ4 + μ5)/4 or H0: μ1 − (μ2 + μ3 + μ4 + μ5)/4 = 0
ψ1 = μ1 − (μ2 + μ3 + μ4 + μ5)/4
c = (1, −1/4, −1/4, −1/4, −1/4)

o Do natural science majors score differently than humanities majors?
H0: (μ2 + μ3)/2 = (μ4 + μ5)/2 or H0: (μ2 + μ3)/2 − (μ4 + μ5)/2 = 0
ψ2 = (μ2 + μ3)/2 − (μ4 + μ5)/2
c = (0, 1/2, 1/2, −1/2, −1/2)

o Do math majors score differently than chemistry majors?
H0: μ2 = μ3 or H0: μ2 − μ3 = 0
ψ3 = μ2 − μ3
c = (0, 1, −1, 0, 0)
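The definition above is easy to sketch in code. A minimal Python sketch (the group means below are invented illustration values, not data from these notes):

```python
# Estimate a contrast psi-hat = sum(c_i * mean_i) from cell means.
# Means are hypothetical SAT-style values: HS, Math, Chem, English, History.
means = [500, 550, 540, 530, 520]
c = [-1, 0.25, 0.25, 0.25, 0.25]   # college seniors vs. high-school seniors

assert abs(sum(c)) < 1e-12          # contrast weights must sum to zero
psi_hat = sum(ci * m for ci, m in zip(c, means))
print(psi_hat)
```

Here ψ̂ comes out positive, meaning (for these made-up means) the college groups average higher than the high-school group.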
2. Types of Contrasts

Pairwise contrasts
o Comparisons between two cell means
o The contrast has c_i = 1 and c_i' = −1 for some pair of groups i and i' (and 0 elsewhere)
o If you have a groups, then there are a(a − 1)/2 possible pairwise contrasts
o Examples:
Do math majors score differently than chemistry majors?
ψ = μ2 − μ3, c = (0, 1, −1, 0, 0)
Do English majors score differently than history majors?
ψ = μ4 − μ5, c = (0, 0, 0, 1, −1)
o When there are two groups (a = 2), the two-independent-samples t-test is equivalent to the c = (1, −1) contrast on the two means.

Complex contrasts
o A contrast between more than two means
o There are an infinite number of contrasts you can perform for any design
o Do college seniors score differently than high school seniors?
ψ = μ1 − (μ2 + μ3 + μ4 + μ5)/4, c = (1, −1/4, −1/4, −1/4, −1/4)
o Do natural science majors score differently than humanities majors?
ψ = (μ2 + μ3)/2 − (μ4 + μ5)/2, c = (0, 1/2, 1/2, −1/2, −1/2)
o So long as the coefficients sum to zero, you can make any comparison, even one with arbitrary decimal weights. But remember you have to be able to interpret the result!
Orthogonal contrasts
o Sometimes called non-redundant contrasts
o Orthogonality may be best understood through a counter-example
o Suppose you want to test three contrasts:
Do math majors score differently than high school seniors?
ψ1 = μ1 − μ2, c = (1, −1, 0, 0, 0)
Do chemistry majors score differently than high school seniors?
ψ2 = μ1 − μ3, c = (1, 0, −1, 0, 0)
Do math majors score differently than chemistry majors?
ψ3 = μ2 − μ3, c = (0, 1, −1, 0, 0)
o But we notice that
ψ3 = μ2 − μ3 = (μ1 − μ3) − (μ1 − μ2) = ψ2 − ψ1
If I know ψ1 and ψ2, then I can determine the value of ψ3.
ψ1, ψ2, and ψ3 are redundant, or non-orthogonal, contrasts.
o Orthogonality defined: A set of contrasts is orthogonal if they are independent of each other (that is, if knowing the value of one contrast in no way provides any information about another contrast).
If a set of contrasts is orthogonal, then the contrast coefficients are uncorrelated with each other.
Two contrasts are orthogonal if the angle between them in a-dimensional space is a right angle.
Two contrasts are orthogonal if, for equal n:
ψ1 = (a1, a2, a3, ..., aa), ψ2 = (b1, b2, b3, ..., ba)
Σ(i=1 to a) a_i b_i = 0, i.e., a1b1 + a2b2 + ... + aaba = 0
Two contrasts are orthogonal if, for unequal n:
ψ1 = (a1, a2, a3, ..., aa), ψ2 = (b1, b2, b3, ..., ba)
Σ(i=1 to a) a_i b_i / n_i = 0, i.e., a1b1/n1 + a2b2/n2 + ... + aaba/na = 0

o Examples of orthogonality (assuming equal n):

Set #1: c1 = (1, 0, −1) and c2 = (−1/2, 1, −1/2)
Σ c1i c2i = (1)(−1/2) + (0)(1) + (−1)(−1/2) = 0
c1 and c2 are orthogonal

Set #2: c3 = (0, 1, −1) and c4 = (1, −1/2, −1/2)
Σ c3i c4i = (0)(1) + (1)(−1/2) + (−1)(−1/2) = 0
c3 and c4 are orthogonal

Set #3: c5 = (1, −1, 0) and c6 = (1, 0, −1)
Σ c5i c6i = (1)(1) + (−1)(0) + (0)(−1) = 1 ≠ 0
c5 and c6 are NOT orthogonal
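These checks are easy to automate. A small Python sketch of the orthogonality criterion (the function name and example contrasts are my own):

```python
def orthogonal(a, b, n=None):
    """True if contrasts a and b are orthogonal.
    With unequal cell sizes n, the criterion is sum(a_i*b_i/n_i) = 0;
    with equal n this reduces to sum(a_i*b_i) = 0."""
    n = n or [1] * len(a)   # equal-n case
    return abs(sum(ai * bi / ni for ai, bi, ni in zip(a, b, n))) < 1e-12

print(orthogonal([1, 0, -1], [-0.5, 1, -0.5]))  # an orthogonal pair
print(orthogonal([1, -1, 0], [1, 0, -1]))       # pairwise contrasts sharing a group
```

The second pair fails because both contrasts involve group 1, so knowing one tells you something about the other.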
o A set of contrasts is orthogonal if each contrast is orthogonal to all other contrasts in the set. You can check that the following set is mutually orthogonal:
c1 = (1, −1, 0, 0)
c2 = (1, 1, −2, 0)
c3 = (1, 1, 1, −3)

o If you have a groups, then there are at most a − 1 mutually orthogonal contrasts.
We lose one contrast to the grand mean (the unit contrast).
Having the contrast weights sum to zero assures that each contrast is orthogonal to the unit contrast.
If you have more than a − 1 contrasts, then the contrasts are redundant, and you can write at least one contrast as a linear combination of the other contrasts.

Example: For a = 3, we can find only 2 orthogonal contrasts; any other contrast is redundant. For instance, take
ψ1 = μ1 − μ2
ψ2 = (μ1 + μ2)/2 − μ3
ψ3 = μ1 − μ3
ψ3 is not orthogonal to ψ1, and ψ3 is not orthogonal to ψ2. We can write ψ3 in terms of ψ1 and ψ2:
ψ3 = (1/2)ψ1 + ψ2 = (1/2)(μ1 − μ2) + (μ1 + μ2)/2 − μ3 = μ1 − μ3

In general, you will not need to show how a contrast may be calculated from a set of orthogonal contrasts. It is sufficient to know that if you have more than a − 1 contrasts, there must be at least one contrast you can write as a linear combination of the others.
o There is nothing wrong with testing non-orthogonal contrasts, as long as you are aware that they are redundant.
o For example, you may want to examine all possible pairwise contrasts. These contrasts are not orthogonal, but they may be relevant to your research hypothesis.

Nice properties of orthogonal contrasts:
o We will learn to compute the Sum of Squares associated with each contrast (SSC_i)
o For a set of a − 1 orthogonal contrasts:
Each contrast has one degree of freedom
SSB = SSC1 + SSC2 + SSC3 + ... + SSC(a−1)
In other words, a set of a − 1 orthogonal contrasts partitions the SSB.
Recall that for the omnibus ANOVA, df_bet = a − 1. The omnibus test combines the results of these a − 1 contrasts and reports them in one lump test.
Any set of a − 1 orthogonal contrasts will yield the identical result as the omnibus test.
3. Significance tests of a single contrast

Recall the general form of a t-test:
t ~ (estimate of population parameter) / (estimated standard error)
t ~ ψ̂ / standard error(ψ̂)

To compute a significance test for a single contrast, we need:
o An estimate of the value of the contrast
o An estimate of the standard error of the contrast (the standard deviation of the sampling distribution of the contrast)

o Value of a contrast:
ψ = Σ(i=1 to a) c_i μ_i = c1μ1 + c2μ2 + c3μ3 + ... + caμa
ψ̂ = Σ(i=1 to a) c_i X̄_i = c1X̄1 + c2X̄2 + c3X̄3 + ... + caX̄a
ψ̂ is an unbiased estimate of the true population value of ψ.

o Standard error of a contrast:
Recall that the standard error is the standard deviation of the sampling distribution.
The standard error for the two-independent-samples t-test:
Std Error = s_pooled * sqrt(1/n1 + 1/n2)
The standard error of a contrast has a similar form:
Std Error(ψ̂) = sqrt( MSW * Σ(i=1 to a) c_i²/n_i )
Where:
c_i² is the squared weight for each group
n_i is the sample size for each group
MSW is the MSW from the omnibus ANOVA
Constructing a significance test:
t ~ (estimate of population parameter) / (estimated standard error)
t ~ ψ̂ / standard error(ψ̂)

o Now we can insert the parts of the t-test into the equation:
t_observed = Σ c_i X̄_i / sqrt( MSW * Σ c_i²/n_i )
with degrees of freedom df_w = N − a

o To determine the level of significance, you can:
Look up t_crit for df = N − a and the appropriate α
Or, preferably, compare p_obs with p_crit

o Note that because the test for a contrast is calculated using the t-distribution, you can use either a one-tailed or two-tailed test of significance. As previously mentioned, you typically want to report the two-tailed test of the contrast.
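The full computation can be sketched end to end. A minimal Python version of the contrast t-test (the raw data below are invented for illustration, not from the notes):

```python
import math

# t = psi_hat / sqrt(MSW * sum(c_i^2 / n_i)), with df = N - a.
groups = [[10, 12, 11, 13], [8, 9, 7, 8], [6, 5, 7, 6]]   # a = 3 cells
c = [1, -0.5, -0.5]                                        # group 1 vs. the rest

means = [sum(g) / len(g) for g in groups]
ns = [len(g) for g in groups]
N, a = sum(ns), len(groups)

# MSW: pooled within-cell variance from the omnibus ANOVA
ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
msw = ssw / (N - a)
psi_hat = sum(ci * m for ci, m in zip(c, means))
se = math.sqrt(msw * sum(ci ** 2 / ni for ci, ni in zip(c, ns)))
t = psi_hat / se                            # compare to t(N - a)
print(round(t, 3))
```

The observed t would then be compared with t_crit for df = N − a, or converted to a p-value.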
12 Example #: a and c (, ) ψ ˆ () X ( ) X X X Std error (ψˆ ) MSW ( ) MSW n n n n t ψˆ StdError( ψˆ ) X X MSW n n o But we know that for two groups, MSW s pooled o Thus, the two independent samples t-test is identical to a c (, ) contrast on the two means Example #: study of the effects of reward on learning in children DV Number of trials to learn a puzzle Level of Reward Frequent Infrequent (66%) (%) Constant (00%) Never (0%) H: Constant reward will produce faster learning than the average of the other conditions H: Frequent reward will produce faster learning than the average of infrequent or no reward H: Infrequent reward will produce faster learning than no reward Karpinski
13 trials reward TRILS 0 8 N 6 Constant REWRD 6 Frequent 6 Infrequent 6 Never o Step : Convert the research hypothesis into a contrast of means H: Constant reward will produce faster learning than the average of the other conditions µ µ µ µ µ µ H 0 : µ H : µ ψ µ c,,, H: Frequent reward will produce faster learning than the average of infrequent or no reward µ µ µ µ H 0 : µ H : µ ψ µ c 0,,, H: Infrequent reward will produce faster learning than no reward 0 : µ µ ψ : µ µ H H µ c 0,0,, ( ) Karpinski
14 o Step : Determine if the contrasts of interest are orthogonal c & c & c & c : ( * 0) * * * 0 j c c i i c : ( * 0) * 0 * * 0 j c c i i c : ( 0 * 0) ( * 0) * * 0 j c i ic o Step : Compute values for each contrast c c c c c c ψˆ X X X X () (6) (7).6667 (). X (6) (7) ˆ X X ψ ˆ X X ψ (6) (7) o Step : Conduct omnibus NOV to obtain MSW NOV Source of Variation SS df MS F P-value F crit Between Groups Within Groups Total 9 9 Note: we do not care about the results of this test. We only want to calculate MSW Karpinski
15 o Step : Compute standard error for each contrast Std Error( ˆ ψ ) MSW c i a i n i Std err ( ψ ˆ ).87.87* Std err ( ψ ˆ ).87 0 ().87* Std err ( ψ ˆ ) () ( ).87*.0.0 o Step 6: Compute observed t or F statistic for each contrast t observed i MSW c X i c i n i.6667 ψ ˆ : observed t. ψ ˆ : t observed ψ ˆ : t observed Karpinski
16 o Step 7: Determine statistical significance.0 Method (The table method): Find t crit for df N-a and α. 0 t crit (6) α.0. Compare t crit to t obs if if t < t then retain H 0 observed observed critical t t then reject H 0 critical We reject the null hypothesis for ψ ˆ and ψ ˆ Constant reward produced faster learning than the average of the other conditions Frequent reward produced faster learning than the average of infrequent or no reward.0 Method : (The exact method): Find p obs for df N-a and α. 0 for each t obs and Then compare p obs to p crit.0 ψ : t observed (6).6, p.0 ψ : t observed (6).0, p <.0 ψ : t observed (6) 0.80, p. o lternatively, you can perform an F-test to evaluate contrasts. We know that t F t observed i MSW c X i c i n i ψˆ F observed c i MSW n i df N-a df (, N-a) You will obtain the exact same results with t-tests or F-tests Karpinski
17 Confidence intervals for contrasts o In general, the formula for a confidence interval is estimate ± ( t * standard error) critical o For a contrast, the formula for a confidence interval is estimate ± ( t * standard error) critical ci ψ ˆ ± tcritical ( dfw) * MSW ni In the learning example, t crit (6) α.0. ψ :.667 ± (. *.06) ψ :. ± (. *.078) ψ :.0 ± (. *.0) (-.8, -0.) (-7.79, -.) (-.6,.6) Using SPSS to evaluate contrasts o If you use ONEWY, you can directly enter the contrast coefficients to obtain the desired contrast. ONEWY trials BY reward /STT desc /CONTRST, -., -., -. /CONTRST 0,, -., -. /CONTRST 0, 0,, Karpinski
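The CI arithmetic above is simple to sketch. In this Python illustration all numbers are invented, including the critical value t_crit = 2.120 (the two-tailed .05 value for df = 16, which would normally come from a t-table or software):

```python
import math

# 95% CI for a contrast: estimate +/- t_crit * standard error.
psi_hat, msw = -3.5, 2.9                    # invented contrast value and MSW
c, ns = [1, -1/3, -1/3, -1/3], [5, 5, 5, 5] # a = 4 groups, n = 5 each, df_w = 16
se = math.sqrt(msw * sum(ci ** 2 / ni for ci, ni in zip(c, ns)))
t_crit = 2.120                              # t-table value for df = 16, alpha = .05
lo, hi = psi_hat - t_crit * se, psi_hat + t_crit * se
print(round(lo, 2), round(hi, 2))
```

Because the interval excludes zero, this (made-up) contrast would be significant at the .05 level.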
[SPSS output: Descriptives, Contrast Coefficients, and Contrast Tests tables for TRIALS by REWARD]

o Multiplying the coefficients by a constant will not change the results of the significance test on that contrast. If you multiply the values of a contrast by any constant (positive or negative), you will obtain the identical test statistic and p-value in your analysis. The value of the contrast, the standard error, and the size of the CIs will shrink or expand by a factor of the constant used, but the key features (i.e., p-values and whether or not the CIs overlap) remain the same.
o Let's examine Hypothesis 1 using three different sets of contrast coefficients:

ONEWAY trials BY reward
  /STAT desc
  /CONTRAST 1, -.33, -.33, -.33
  /CONTRAST 3, -1, -1, -1
  /CONTRAST -6, 2, 2, 2.

[SPSS output: Contrast Coefficients and Contrast Tests tables]

You get more precise values if you enter the exact contrast coefficients into SPSS, so try to avoid rounding to decimal places. Instead, multiply the coefficients by a constant so that all coefficients are whole numbers. In this case, the tests for contrasts 2 and 3 are exact. The test for contrast 1 is slightly off due to rounding.
20 Sums of Squares of a contrast o s previously mentioned, a set of a- orthogonal contrasts will perfectly partition the SSB: SSB SSψ ˆ SSψ ˆ SSψ ˆ o To compute the Sums of Squares of a contrast: ψˆ SSψˆ c i n i SSψ ˆ ( ) ( 8) ( ) ( ) SSψ ˆ 0 () (.) SSψ ˆ ( ) () 0 0 ( ).0.. SSψ ˆ SSψ ˆ SSψ ˆ SSB 0 NOV Source of Variation SS df MS F P-value F crit Between Groups Within Groups Total Karpinski
o Once we have calculated the SSC, we can compute an F-test directly:
F(1, df_w) = (SSC/df_C) / (SSW/df_w) = SSC / MSW
(Each contrast has df_C = 1, so MSC = SSC.)

o ANOVA table for contrasts:
[ANOVA table: Between Groups partitioned into ψ̂1, ψ̂2, ψ̂3; Within Groups; Total]
In this ANOVA table, we show that the SSCs partition SSB. But this relationship only holds for sets of orthogonal contrasts.
In general, you should only construct an ANOVA table for a set of a − 1 orthogonal contrasts.
Note: We will shortly see that you can either perform the omnibus test OR tests of orthogonal contrasts, but not both. Nevertheless, this ANOVA table nicely displays the SS partition.
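The SSC and F computation can be sketched directly (numbers are the same invented three-group illustration used above in spirit, not data from the notes; note that F = t² for a one-df contrast):

```python
# Sum of squares for a contrast: SSC = psi_hat^2 / sum(c_i^2 / n_i),
# then F(1, df_w) = SSC / MSW because df_C = 1.
psi_hat = 4.5                         # invented contrast estimate
c, ns = [1, -0.5, -0.5], [4, 4, 4]    # contrast weights and cell sizes
msw = 1.0                             # invented MSW from the omnibus ANOVA

ss_psi = psi_hat ** 2 / sum(ci ** 2 / ni for ci, ni in zip(c, ns))
F = ss_psi / msw                      # equals t^2 for the same contrast
print(ss_psi, F)
```

For these values F = 54, which is the square of the t one would get from the equivalent t-test.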
4. Brand name contrasts easily obtained from SPSS
- Difference contrasts
- Helmert contrasts
- Simple contrasts
- Repeated contrasts
- Polynomial contrasts (to be covered later)

Difference contrasts: Each level of a factor is compared to the mean of the previous levels. (These are orthogonal with equal n.)
c1 = (−1, 1, 0, 0) Level 2 vs. Level 1
c2 = (−1/2, −1/2, 1, 0) Level 3 vs. previous
c3 = (−1/3, −1/3, −1/3, 1) Level 4 vs. previous

[SPSS output: Contrast Coefficients (L' Matrix) for the REWARD difference contrasts]

UNIANOVA trials BY reward
  /CONTRAST (reward)=Difference
  /PRINT = test(lmatrix).
Helmert contrasts: Each level of a factor is compared to the mean of the subsequent levels. (These are orthogonal with equal n.)
o The researcher's original hypotheses for the learning data are Helmert contrasts.
c1 = (1, −1/3, −1/3, −1/3) Level 1 vs. later
c2 = (0, 1, −1/2, −1/2) Level 2 vs. later
c3 = (0, 0, 1, −1) Level 3 vs. Level 4

[SPSS output: Contrast Coefficients (L' Matrix) for the REWARD Helmert contrasts]

UNIANOVA trials BY reward
  /CONTRAST (reward)=Helmert
  /PRINT = test(lmatrix).

Simple contrasts: Each level of a factor is compared to the last level. (These contrasts are not orthogonal.)
c1 = (1, 0, 0, −1) Level 1 vs. Level 4
c2 = (0, 1, 0, −1) Level 2 vs. Level 4
c3 = (0, 0, 1, −1) Level 3 vs. Level 4

[SPSS output: Contrast Coefficients (L' Matrix) for the REWARD simple contrasts; reference category = last level]

UNIANOVA trials BY reward
  /CONTRAST (reward)=Simple
  /PRINT = test(lmatrix).
Repeated contrasts: Each level of a factor is compared to the previous level. (These contrasts are not orthogonal.)
c1 = (1, −1, 0, 0) Level 1 vs. Level 2
c2 = (0, 1, −1, 0) Level 2 vs. Level 3
c3 = (0, 0, 1, −1) Level 3 vs. Level 4

[SPSS output: Contrast Coefficients (L' Matrix) for the REWARD repeated contrasts]

UNIANOVA trials BY reward
  /CONTRAST (reward)=Repeated
  /PRINT = test(lmatrix).

5. Relationships between the omnibus F and contrasts (for equal n designs)

Relationship #1: The omnibus F test is equal to the average of the t²s from all possible pairwise contrasts.
o Consequence: If the omnibus F test is significant, then at least one pairwise contrast is significant.
Relationship #2: If you take the average F (or t²) from a set of a − 1 orthogonal contrasts, the result will equal the omnibus F!

o A mini-proof. For each contrast in the set:
F1 = (SSψ̂1/dfψ̂1) / (SSW/df_w) = SSψ̂1 / MSW
...
F(a−1) = (SSψ̂(a−1)/dfψ̂(a−1)) / (SSW/df_w) = SSψ̂(a−1) / MSW

Averaging the a − 1 F statistics:
(F1 + F2 + ... + F(a−1)) / (a − 1)
= (SSψ̂1/MSW + SSψ̂2/MSW + ... + SSψ̂(a−1)/MSW) / (a − 1)
= (SSψ̂1 + SSψ̂2 + ... + SSψ̂(a−1)) / ((a − 1) MSW)
= SSB / ((a − 1) MSW)
= MSB / MSW
= F_omnibus

o Consequence: If the omnibus F test is significant, then at least one contrast is significant.
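Relationship #2 can be verified numerically. A Python sketch on an invented equal-n data set with a hand-picked orthogonal set of a − 1 = 3 contrasts:

```python
# Demonstrate: average F over a-1 orthogonal contrasts == omnibus F (equal n).
groups = [[3, 5, 4], [7, 9, 8], [2, 4, 3], [6, 8, 7]]        # invented data
contrasts = [[1, -1, 0, 0], [0, 0, 1, -1], [1, 1, -1, -1]]   # orthogonal set

means = [sum(g) / len(g) for g in groups]
ns = [len(g) for g in groups]
N, a = sum(ns), len(groups)
grand = sum(sum(g) for g in groups) / N

ssb = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
msw = ssw / (N - a)
F_omni = (ssb / (a - 1)) / msw

def ss_contrast(c):
    psi = sum(ci * m for ci, m in zip(c, means))
    return psi ** 2 / sum(ci ** 2 / ni for ci, ni in zip(c, ns))

F_avg = sum(ss_contrast(c) / msw for c in contrasts) / (a - 1)
print(round(F_omni, 6) == round(F_avg, 6))
```

For this data set both quantities equal 17, and the three contrast SSs (24, 24, 3) sum to SSB = 51, illustrating the partition as well.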
26 o In the learning example: NOV Source of Variation SS df MS F P-value Between Groups ψ ˆ ψ ˆ ψ ˆ Within Groups Total 9 9 verage F from the set of orthogonal contrasts: (Difference is due to rounding error) Is it possible to have a significant contrast, but have a non-significant omnibus F? YES! o Let s consider an example: IV Level Level Level Level Karpinski
27 ONEWY dv BY iv /CONTRST NOV DV Between Groups Within Groups Total Sum of Squares df Mean Square F Sig o Omnibus F-test is not significant Contrast Tests DV ssume equal variances Contrast Value of Contrast Std. Error t df Sig. (-tailed) o The contrast comparing Group to the average of the other groups is significant Suppose none of the pairwise contrasts are significant. Is it possible to have a significant contrast? YES! o If none of the pairwise contrasts are significant, then the omnibus F test will not be significant. But you may still find a contrast that is significant! IV Level Level Level Level Karpinski
28 ONEWY dv BY iv /CONTRST /CONTRST 0-0 /CONTRST /CONTRST 0-0 /CONTRST /CONTRST o None of the pairwise contrasts are significant: Contrast Tests DV Contrast 6 Value of Contrast Std. Error t df Sig. (-tailed) o So we know that the omnibus F-test is not significant: NOV DV Between Groups Within Groups Total Sum of Squares df Mean Square F Sig o But it is still possible to find a significant contrast: ONEWY dv BY iv /CONTRST - -. Contrast Tests DV Contrast Value of Contrast Std. Error t df Sig. (-tailed) Karpinski
To reiterate:
o A significant omnibus F-test implies there will be at least one significant contrast.
o A significant contrast DOES NOT IMPLY a significant omnibus F-test.
o A non-significant omnibus F-test DOES NOT IMPLY that all contrasts will be non-significant.

6. Robust tests for a single contrast

The assumptions for contrasts are the same as those for ANOVA:
o Independent samples
o Within each group, participants are independent and randomly selected
o Equal population variances in each group
o Each group is drawn from a normal population

Tests of contrasts are not robust to heterogeneity of variances, even with equal n.

We can use our standard ANOVA techniques to test these assumptions. Presumably, by the time you are testing contrasts, you have already identified troublesome aspects of your data. But once you have identified the problems, what can you do?
o In general, the same fixes for ANOVA work for contrasts
o Transformations can be used for non-normality or heterogeneous variances
o A sensitivity analysis can be used to investigate the impact of outliers

There are two additional tools we did not use for ANOVA:
o Use a contrast-specific variance so that we do not assume equality of variances in all groups
o Try a pairwise rank-based alternative
Use a contrast-specific variance
o In the standard hypothesis test of a contrast, the denominator uses MSW, a pooled variance estimate:
t_observed = Σ c_i X̄_i / sqrt( MSW * Σ c_i²/n_i )
o What we would like to do is compute a new standard error of ψ̂ that does not rely on MSW.
o The details are messy, but fortunately you do not have to do the dirty work; SPSS automatically prints out tests of contrasts with unequal variances. When a = 2, this test reduces exactly to Welch's separate-variance two-sample t-test.
o Let's return to the learning example and pretend that we found heterogeneous variances. To test our original hypotheses in the data, we need to use the modified test for each contrast:

ONEWAY trials BY reward
  /CONTRAST 3, -1, -1, -1
  /CONTRAST 0, 2, -1, -1
  /CONTRAST 0, 0, 1, -1.

[SPSS output: Contrast Tests table; use the "Does not assume equal variances" rows]

o Remember, this Welch correction only corrects for unequal variances; it does not correct or adjust for non-normality.
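Although SPSS does this automatically, the separate-variance logic is simple enough to sketch. A Python illustration of a Welch/Satterthwaite-style contrast test (formulas follow the usual separate-variance approach; the data are invented):

```python
import math

# Contrast test without pooling variances:
# SE = sqrt(sum(c_i^2 * s_i^2 / n_i)), df by the Satterthwaite approximation.
groups = [[10, 12, 11, 13], [8, 9, 7, 8], [6, 5, 7, 6]]   # invented data
c = [1, -0.5, -0.5]

means = [sum(g) / len(g) for g in groups]
vars_ = [sum((x - m) ** 2 for x in g) / (len(g) - 1)
         for g, m in zip(groups, means)]
ns = [len(g) for g in groups]

psi_hat = sum(ci * m for ci, m in zip(c, means))
terms = [ci ** 2 * vi / ni for ci, vi, ni in zip(c, vars_, ns)]
se = math.sqrt(sum(terms))
# Satterthwaite approximation to the degrees of freedom
df = sum(terms) ** 2 / sum(t ** 2 / (ni - 1) for t, ni in zip(terms, ns))
print(round(psi_hat / se, 2), round(df, 1))
```

Note the approximate df (about 4.2 here) is smaller than the pooled df_w = N − a = 9 for the same data, which is typical of separate-variance tests.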
Try a non-parametric, rank-based alternative
o Pairwise tests can be conducted using the Mann-Whitney U test (or an ANOVA on the ranked data).
o However, complex comparisons should be avoided! Because ranked data are ordinal data, we should not average (or take any linear combination) across groups.
o A comparison of the Mann-Whitney U pairwise approach with the ANOVA-by-ranks approach, for two pairwise contrasts:
ψ1: μ1 − μ2, c1 = (1, −1, 0, 0)
ψ2: μ2 − μ3, c2 = (0, 1, −1, 0)

Mann-Whitney U pairwise contrasts:
NPAR TESTS /M-W = trials BY reward(1 2).
NPAR TESTS /M-W = trials BY reward(2 3).

[SPSS output: Test Statistics tables (Mann-Whitney U, Wilcoxon W, Z, Asymp. Sig., Exact Sig.; not corrected for ties)]

ANOVA-by-ranks approach:
RANK VARIABLES = trials.
ONEWAY rtrials BY reward
  /CONT 1 -1 0 0
  /CONT 0 1 -1 0.

[SPSS output: Contrast Tests table on RANK of TRIALS]
[Table comparing the two approaches: for each pairwise contrast, the Mann-Whitney U z and p versus the ANOVA-by-ranks t(df) and p]

o A rank modification of ANOVA is easy to use, but:
Not much theoretical work has been done on this type of test.
This approach is probably not valid for multi-factor ANOVA.
This approach is likely to be trouble for complex comparisons.
Remember that the conclusions you draw are about the ranks, not the observed values!

7. Effect sizes for a single contrast

For pairwise contrasts, you can use Hedges's g:
g = (X̄1 − X̄2) / σ̂ = (X̄1 − X̄2) / sqrt(MSW)

In the general case there are several options.
o Use omega squared (ω²):
ω̂² = (SSψ̂ − MSW) / (SST + MSW)
ω² = .01 small effect size
ω² = .06 medium effect size
ω² = .14 large effect size
ω² has an easy interpretation: it is the percentage of the variance in the dependent variable (in the population) that is accounted for by the contrast.
o Treat the complex comparison as a comparison between two groups and use Hedges's g, but we need the sum of the absolute values of the contrast coefficients to equal 2:
g = ψ̂ / sqrt(MSW), where Σ(i=1 to a) |c_i| = 2

Any contrast can be considered a comparison between two groups; we can use the means of those two (pooled) groups to compute a d.

ψ1 = μ1 − (μ2 + μ3 + μ4 + μ5)/4 is a comparison between group 1 and the average of the remaining groups:
H0: μ1 = (μ2 + μ3 + μ4 + μ5)/4
g = (X̄1 − (X̄2 + X̄3 + X̄4 + X̄5)/4) / sqrt(MSW) = ψ̂1 / sqrt(MSW)

ψ2 = (μ1 + μ2)/2 − (μ3 + μ4)/2 is a comparison between the average of groups 1 and 2 and the average of groups 3 and 4:
H0: (μ1 + μ2)/2 = (μ3 + μ4)/2
g = ((X̄1 + X̄2)/2 − (X̄3 + X̄4)/2) / sqrt(MSW) = ψ̂2 / sqrt(MSW)

The interpretation of this g is the same as the g for two groups, but you must be able to interpret the contrast as a comparison between two groups. For example, polynomial contrasts cannot be considered comparisons between two groups, so g is not appropriate for polynomial contrasts.
o Compute an r measure of effect size:
r = sqrt( F_contrast / (F_contrast + df_within) ) = sqrt( t²_contrast / (t²_contrast + df_within) )
r is interpretable as the (partial) correlation between the group means and the contrast values, controlling for non-contrast variability.

8. An example

Rehabilitation example. We have a sample of male participants between the ages of 18 and 30 who have all undergone corrective knee surgery in the past year. We would like to investigate the relationship between prior physical fitness status (below average, average, above average) and the number of days required for successful completion of physical therapy.

Prior physical fitness status: Below Average | Average | Above Average

o We would like to test whether:
Above average participants complete therapy faster than the other groups
Average participants complete therapy faster than below average participants
Average participants complete therapy slower than above average participants
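The three effect-size measures above can be computed side by side. A Python sketch with invented summary statistics (SSψ̂ is set to t²·MSW so the numbers are internally consistent):

```python
import math

# Three effect sizes for a single contrast: r, omega-squared, and Hedges's g.
t, df_within = 3.20, 16        # invented contrast t and its df
msw, sst = 1.0, 40.0           # invented MSW and total SS
ss_psi = t ** 2 * msw          # SSC = t^2 * MSW for a one-df contrast
psi_hat = 3.2                  # invented estimate, scaled so sum(|c_i|) = 2

r = math.sqrt(t ** 2 / (t ** 2 + df_within))
omega_sq = (ss_psi - msw) / (sst + msw)
g = psi_hat / math.sqrt(msw)   # two-"group" style g for the contrast
print(round(r, 3), round(omega_sq, 3), round(g, 2))
```

All three describe the same (made-up) contrast on different scales: a correlation-type measure, a proportion of variance, and a standardized mean difference.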
35 days DYS N 8 Below verage FITNESS 0 verage 6 bove verage fitness o We need to convert the hypotheses to contrast coefficients bove average participants complete therapy faster than other groups µ µ H 0 : µ ψ µ c,, verage participants complete therapy faster than below average participants H 0 : µ µ ψ µ c (,,0 ) µ verage participants complete therapy slower than above average participants H 0 : µ µ ψ µ c ( 0,, ) µ o re these three contrasts an orthogonal set? With groups, we can only have orthogonal contrasts If we had equal sample sizes, then ψ ψ With unequal n we do not have an orthogonal set of contrasts Karpinski
36 o Conduct significance tests for these contrasts ONEWY days BY fitness /STT desc /CONTRST -.,-., /CONTRST -,, 0 /CONTRST 0, -,. Descriptives DYS Below verage verage bove verage Total 9% Confidence Interval for Mean N Mean Std. Deviation Std. Error Lower Bound Upper Bound NOV DYS Between Groups Within Groups Total Sum of Squares df Mean Square F Sig Contrast Coefficients Contrast FITNESS Below bove verage verage verage Contrast Tests DYS ssume equal variances Does not assume equal variances Contrast Value of Contrast Std. Error t df Sig. (-tailed) Karpinski
37 o Compute a measure of effect size for each contrast ˆ ω SSψˆ MSW SST MSW We need to compute the SS for each contrast SSψˆ ψˆ c i n i SSψ ˆ 8 ( ) 0 () SS ˆ ψ ( ) () ( 0 ) 8 ( 6) SSψ ˆ 0 ( 8) ( ) () Now compute omega squared ψ ˆ : ˆ ω ψ ˆ : ˆ ω ψ ˆ : ˆ ω Karpinski
38 OR Compute an r measure of effect size for each contrast: r F contrast F contrast df within t contrast t contrast df within ψ ˆ : r...7 ψ ˆ : r.8.8. ψ ˆ : r o Report the results ψ : t ()., p <.0, ω. 7 ˆ ˆ ψ : t ().8, p.0, ω. ˆ ψ : t ().8, p <.0, ω. 0 In your results section, you need to say in English (not in statistics or symbols) what each contrast is testing In general, it is not necessary to report the value of the contrast or the contrast coefficients used... contrast revealed that above average individuals recovered faster than all other individuals, t ()., p <.0, ω. 7. Pairwise tests also revealed that average individuals completed therapy faster than below average individuals, t ().8, p.0, ω., and that above average individuals completed therapy faster than average participants, t ().8, p <.0, ω Karpinski
9. Polynomial Trend Contrasts

Trend contrasts are a specific kind of orthogonal contrasts that may be of interest for certain designs. Tests for trends are used only with quantitative (ordered) independent variables.
o Example: the IV is four equally spaced doses of a drug, starting from 0 mg

Trend contrasts are used to explore polynomial trends in the data:

Trend     | Order of Polynomial | # of Bends | Shape
Linear    | 1st                 | 0          | Straight line
Quadratic | 2nd                 | 1          | U-shaped
Cubic     | 3rd                 | 2          | Wave
Quartic   | 4th                 | 3          | Wave
Etc.

[Figure: prototypical linear, quadratic, cubic, and quartic trend shapes]
40 Memory Example # Study Time Minute Minutes Minutes Minutes o It looks like there might be a linear trend in the data o To test for trends, tables of orthogonal trend contrasts have been computed. For a, we can have orthogonal contrasts c c c c Linear - - Quadratic - - Cubic - - o To use these values, the levels of the IV need to be equally spaced and the cell sizes must be equal Karpinski
o To compute the trend contrasts, we use the orthogonal trend coefficients and the usual procedure for computing and testing contrasts:

For the linear contrast, use c = (−3, −1, 1, 3):
ψ_linear = −3μ1 − μ2 + μ3 + 3μ4
ψ̂_linear = −3X̄1 − X̄2 + X̄3 + 3X̄4
SS(ψ̂_linear) = ψ̂²_linear / Σ(c_i²/n_i)

For the quadratic contrast, use c = (1, −1, −1, 1):
ψ_quadratic = μ1 − μ2 − μ3 + μ4
ψ̂_quadratic = X̄1 − X̄2 − X̄3 + X̄4
SS(ψ̂_quadratic) = ψ̂²_quadratic / Σ(c_i²/n_i)

For the cubic contrast, use c = (−1, 3, −3, 1):
ψ_cubic = −μ1 + 3μ2 − 3μ3 + μ4
ψ̂_cubic = −X̄1 + 3X̄2 − 3X̄3 + X̄4
SS(ψ̂_cubic) = ψ̂²_cubic / Σ(c_i²/n_i)

Comments about trend contrasts:
o These contrasts are orthogonal (when the ns are equal), so it is possible to have any combination of effects (or lack of effects).
o Because the sets of weights are not equally scaled, you cannot compare the strength of the effects simply by inspecting the values of the contrasts.
o Some people place an additional constraint on the contrast weights: Σ(i=1 to a) |c_i| = 2. When the sum of the absolute values of the contrast weights is not constant across contrasts (as with the trend contrasts), then you CANNOT compare contrast values. You can only compare sums of squares and measures of effect size.
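The trend computations above follow the same mechanics as any other contrast. A Python sketch using the standard a = 4 orthogonal polynomial coefficients (the cell means and n are invented):

```python
# Linear, quadratic, and cubic trend contrasts for 4 equally spaced levels.
trend = {
    "linear":    [-3, -1, 1, 3],
    "quadratic": [ 1, -1, -1, 1],
    "cubic":     [-1,  3, -3, 1],
}
means, n = [5, 6, 8, 9], 6      # invented cell means, equal n per cell

results = {}
for name, c in trend.items():
    psi = sum(ci * m for ci, m in zip(c, means))     # contrast estimate
    ss = psi ** 2 / sum(ci ** 2 / n for ci in c)     # SS(psi-hat), df = 1
    results[name] = (psi, round(ss, 2))

print(results)
```

For these made-up means the linear SS dominates, the quadratic contrast is exactly zero, and a small cubic component remains; as the notes caution, compare the sums of squares rather than the raw contrast values, since the weight sets are not equally scaled.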
42 Memory Example # Study Time Minute Minutes Minutes Minutes o Looks like we have a linear effect with some quadratic ψ linear µ µ µ ˆ ψ linear X X X X (6) (6) (7) () 9 SS( ˆ (9) ψ linear ) ( ) 6 ( ) () 6 6 () ψ quadratic µ µ µ µ ˆ ψ quadratic X X X X SS( ˆ ψ quadratic ) () () 6 ( ) 6 6 ( ) () ψ cubic µ µ µ ˆ ψ cubic X X X X 6 (6) (7) SS( ˆ ψ cubic ) ( ) () ( ) 6 () 6 6 () Karpinski
43 Memory Example # Study Time Minute Minutes Minutes Minutes Looks like we have a quadratic effect ψ linear µ µ µ ˆ ψ linear X X X X () (6) (7) () SS( ˆ () ψ linear ) ( ) 6 ( ) () 6 6 ().8 6 ψ quadratic µ µ µ µ ˆ ψ quadratic X X X X SS( ˆ ψ quadratic ) (0) () 6 ( ) 6 6 ( ) () 6 0 ψ cubic µ µ µ ˆ ψ cubic X X X X (6) (7) SS( ˆ ψ cubic ) ( ) ( ) ( ) 6 () 6 6 () Karpinski
44 trials Statistical tests for trend analysis: Reanalyzing the learning example o Our learning example is perfectly suited for a trend analysis (Why?) o When we initially analyzed these data, we selected one set of orthogonal contrasts, but there are many possible sets of orthogonal contrasts, including the trend contrasts reward o For a, we can test for a linear, a quadratic, and a cubic trend c c c c Linear - - Quadratic - - Cubic - - ψˆ linear X X X X () () (6) (7) 0 SS( ˆ (0) ψ lin ) ( ) ( ) () () ψˆ quadratic X X X X () () (6) (7) 00 SS( ˆ () ψ quad ) () ( ) ( ) () ψˆ cubic X X X X () () (6) (7) 0 ( 0) ( ) () ( ) SS( ˆ ψ cubic ) () Karpinski
45 o Rather than determine significance by hand, we can use ONEWY: ONEWY trials BY reward /CONT -, -,, /CONT, -, -, /CONT -,, -,. Contrast Coefficients Contrast REWRD Contrast Tests TRILS ssume equal variances Contrast Value of Contrast Std. Error t df Sig. (-tailed) We find evidence for significant linear and cubic trends ψ ˆ : t(6).08, p <.0, r.79 linear ψ ˆ : t(6)., p.7, r.7 quadratic ψ ˆ : t(6)., p.0, r. cubic Karpinski
46 To complete the NOV table, we need the sums of squares for each contrast SSψˆ ψˆ c i n i SS ˆ ψ linear 00 SS ˆ ψ quad SS ˆ ψ cubic SSψˆ linear SSψˆ quadratic SSψˆ cubic 00 0 SSB NOV Source of Variation SS df MS F P-value Between Groups ψˆ lin ψˆ quad ψˆ cubic Within Groups Total Karpinski
47 You can also directly ask for polynomial contrasts in SPSS o Method : ONEWY ONEWY trials BY reward /POLYNOMIL. fter polynomial, enter the highest degree polynomial you wish to test. NOV TRILS Between Groups (Combined) Linear Term Contrast Deviation Sum of Squares df Mean Square F Sig Within Groups Total Quadratic Term Cubic Term Contrast Deviation Contrast o dvantages of the ONEWY method for polynomial contrasts: It utilizes the easiest oneway NOV command It gives you the sums of squares of the contrast It uses the spacing of the IV in the data (Be careful!) It gives you the deviation test (to be explained later) o Disadvantages of the ONEWY method for polynomial contrasts: You can not see the value of the contrast or the contrast coefficients Karpinski
o Method #2: UNIANOVA

UNIANOVA trials BY reward
  /CONTRAST (reward) = POLYNOMIAL
  /PRINT = TEST(LMATRIX).

[SPSS output: Contrast Results (K Matrix) for TRIALS — contrast estimate, hypothesized value, difference (estimate - hypothesized), standard error, significance, and 95% confidence interval for the linear, quadratic, and cubic contrasts; Metric = 1.000, 2.000, 3.000, 4.000. A Test Results table reports the combined contrast and error sums of squares.]
[SPSS output: Contrast Coefficients (L' Matrix) for REWARD; the default display of this matrix is the transpose of the corresponding L matrix; Metric = 1.000, 2.000, 3.000, 4.000]

This is the matrix of trend coefficients used by SPSS to calculate the contrasts. SPSS uses normalized coefficients that are proportional to the tabled values:

  SPSS coefficients                         Tabled coefficients
  c_linear = (-.671, -.224, .224, .671)     (-3, -1, 1, 3)
  c_quadratic = (.5, -.5, -.5, .5)          (1, -1, -1, 1)
  c_cubic = (-.224, .671, -.671, .224)      (-1, 3, -3, 1)

You can check that these coefficients are orthogonal

o Suppose the reward intervals were not equally spaced:

  Level of Reward
  Constant (100%)   Frequent (75%)   Infrequent (25%)   Never (0%)

Now we cannot use the tabled contrast values, because they require equal spacing between intervals
[Figure: mean trials by reward under equal spacing vs. unequal spacing]

UNIANOVA trials BY reward
  /CONTRAST (reward) = POLYNOMIAL(1, .75, .25, 0)
  /PRINT = TEST(LMATRIX).

[SPSS output: Contrast Results (K Matrix) for TRIALS — contrast estimates, standard errors, significance values, and 95% confidence intervals for the linear, quadratic, and cubic contrasts; Metric = .000, .250, .750, 1.000]
Now only the linear trend is significant

SPSS calculates a set of orthogonal trend contrasts, based on the spacing you provide. Here they are:

[SPSS output: Contrast Coefficients (L' Matrix) for the metric .000, .250, .750, 1.000; the default display of this matrix is the transpose of the corresponding L matrix]

o Advantages of the UNIANOVA method:
    It is the only way SPSS gives you confidence intervals for a contrast (But remember, the width of CIs depends on the contrast values)
    It allows you to deal with unequally spaced intervals (It assumes equal spacing unless you tell it otherwise, no matter how you have the data coded!)
    It will print the contrast values SPSS uses

o Disadvantages of the UNIANOVA method:
    It does not print out a test statistic or the degrees of freedom of the test!?!

Remember, for a one-way design, you can obtain a test of any contrast by using the ONEWAY command and entering the values for each contrast. With the ONEWAY method, you know exactly how the contrast is being computed and analyzed
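The metric-based coefficients SPSS generates can be reproduced outside SPSS. The sketch below builds orthonormal polynomial contrasts for an arbitrary spacing via QR decomposition of a Vandermonde matrix; the function name and the example spacings are illustrative, not SPSS's actual algorithm:

```python
import numpy as np

def poly_contrasts(metric):
    """Orthonormal polynomial contrast coefficients for the given level spacing."""
    x = np.asarray(metric, dtype=float)
    k = len(x)
    # Vandermonde matrix: columns 1, x, x^2, ..., x^(k-1)
    V = np.vander(x, k, increasing=True)
    Q, _ = np.linalg.qr(V)      # Gram-Schmidt orthonormalization of the columns
    # Drop the constant column; the remaining columns are the trend contrasts
    return Q[:, 1:].T

# Equal spacing: linear row is proportional to the tabled (-3, -1, 1, 3)
print(np.round(poly_contrasts([1, 2, 3, 4]), 3))

# Unequal spacing: a different orthogonal set emerges
print(np.round(poly_contrasts([1, 0.75, 0.25, 0]), 3))
```

This is why the tabled values cannot be used with unequal spacing: the orthogonal set depends on the metric.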
10. Simultaneous significance tests on multiple orthogonal contrasts

Sometimes you may wish to test the significance of several contrasts in a single omnibus test

Example #1

o We would like to compare the effect of four drugs on body temperature. To test these drugs, we randomly assign people to receive one of the four drugs. After a period of time, we record each participant's body temperature.

  Drug A   Drug B   Drug C   Drug D

o Our hypothesis was that Drug B would result in a higher body temperature than the other drugs:

  ψ1 = μB - (μA + μC + μD)/3

o We also wanted to know if the other drugs (A, C, D) differed in their effect on body temperature:

  H0: μA = μC = μD
  H1: The three means are not equal

  Note that this hypothesis is an omnibus hypothesis!

[SPSS output: Report table — mean, N, and standard deviation of TEMP by drug]
o Step 1: Obtain SSB, SST, and MSW

[SPSS output: omnibus ANOVA table for TEMP]

o Step 2: Test ψ1 = μB - (μA + μC + μD)/3, using the contrast (-1, 3, -1, -1):

  ψ̂1 = 3X̄B - X̄A - X̄C - X̄D        SS(ψ̂1) = ψ̂1² / Σ(c_i²/n_i)

[ANOVA table with SS(ψ̂1) partitioned from the between-groups SS]

o Step 3: Test H0: μA = μC = μD

  The trick is to remember that an omnibus ANOVA test on m means is equal to the simultaneous test of any set of (m - 1) orthogonal contrasts. We can then combine these orthogonal contrasts in a single omnibus F-test:

  F_comb(m, dfw) = (SSC1 + ... + SSCm) / (m · MSW)
We need to choose contrasts such that we have an orthogonal set of three contrasts (including the contrast associated with the first hypothesis). One such set is:

  c1: (-1, 3, -1, -1)
  c2: (1, 0, 1, -2)
  c3: (1, 0, -1, 0)

This set will work because all three contrasts are orthogonal. Now, let's compute the simultaneous F-test of the last two contrasts:

  ψ̂2 = X̄A + X̄C - 2X̄D        SS(ψ̂2) = ψ̂2² / Σ(c_i²/n_i)
  ψ̂3 = X̄A - X̄C              SS(ψ̂3) = ψ̂3² / Σ(c_i²/n_i)

  F_comb(2, dfw) = (SS(ψ̂2) + SS(ψ̂3)) / (2 · MSW)

We reject H0 and conclude that μA, μC, and μD are not all equal

[ANOVA table: Between Groups partitioned into ψ1 and the combined (ψ2, ψ3) test of μA = μC = μD; Within Groups; Total]
o Note: We also could have computed the test of μA = μC = μD without directly computing the two orthogonal contrasts:

  If we knew the combined sums of squares of these two contrasts, then we could fill in the remainder of the ANOVA table. But we do know the combined sums of squares for the remaining two contrasts (so long as all the contrasts are orthogonal)!

  SSB = SSψ1 + SSψ2 + SSψ3
  SSψ2 + SSψ3 = SSB - SSψ1

  We can substitute SSψ2 + SSψ3 into the table and compute the F-test as we did previously (except in this case, we never identified or computed the two additional contrasts to complete the orthogonal set).

[ANOVA table: Between Groups partitioned into ψ1 and the combined test of μA = μC = μD, with SSψ2 + SSψ3 obtained by subtraction]
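The subtraction shortcut above is easy to sketch. All numbers here are hypothetical summary statistics, not the body-temperature data:

```python
# Hypothetical cell means, cell size, and ANOVA quantities for a 4-group design.
means = {"A": 95.4, "B": 96.5, "C": 95.9, "D": 97.2}
n, a = 5, 4
SSB, MSW = 12.0, 0.8

# psi_1 contrasts Drug B against the average of A, C, and D
c1 = {"A": -1, "B": 3, "C": -1, "D": -1}
psi1 = sum(c1[g] * means[g] for g in means)
ss_psi1 = psi1 ** 2 / sum(c1[g] ** 2 / n for g in means)

# The remaining (a - 1) - 1 = 2 orthogonal contrasts jointly carry SSB - SS(psi_1),
# so the omnibus test of mu_A = mu_C = mu_D needs no explicit coefficients
m = a - 2
ss_rest = SSB - ss_psi1
F_comb = (ss_rest / m) / MSW
print(f"SS(psi_1) = {ss_psi1:.3f}, F({m}, dfw) = {F_comb:.3f}")
```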
Example #2

o We want to examine the effect of caffeine on cognitive performance and attention. Participants are randomly assigned to one of 5 dosages of caffeine. In a subsequent proofreading task, we count the number of errors.

  Dose of Caffeine: 0 mg   50 mg   100 mg   150 mg   200 mg

[Figure: boxplot of proofreading errors by dose]
[SPSS output: Descriptives table — N, mean, standard deviation, standard error, and 95% confidence interval for ERRORS at each dose]

o We would like to test if there is a linear or a quadratic trend. We are not really interested in any higher-order trends

o With equally spaced intervals, we can use the coefficients from the orthogonal polynomial table. With five groups, we can test up to four orthogonal polynomials

             c1   c2   c3   c4   c5
  Linear     -2   -1    0    1    2
  Quadratic   2   -1   -2   -1    2
  Cubic      -1    2    0   -2    1
  Quartic     1   -4    6   -4    1
o Method #1: Let SPSS do all the work

ONEWAY errors BY dose
  /POLYNOMIAL = 2.

[SPSS output: ANOVA table for ERRORS with linear and quadratic terms, each with a Contrast and a Deviation test]

Under Linear Term:
  CONTRAST is the test for the linear contrast
  DEVIATION is the combined test for the quadratic, cubic, and quartic contrasts

Under Quadratic Term:
  CONTRAST is the test for the quadratic contrast
  DEVIATION is the combined test for the cubic and quartic contrasts: p = .089

Is it safe to report that there are no higher-order trends?
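The DEVIATION rows in this output can be understood as subtraction tests: the deviation SS at a given order is SSB minus the SS already claimed by that order and the ones below it. A sketch with hypothetical sums of squares (not the caffeine data):

```python
# Hypothetical quantities for a 5-group design (a - 1 = 4 trend contrasts).
# With equal n and a full orthogonal set, the four trend SS sum to SSB.
SSB, MSW, dfw = 60.0, 1.5, 45
ss_trend = {"linear": 10.0, "quadratic": 30.0, "cubic": 12.0, "quartic": 8.0}

dev_F = {}
claimed = 0.0
remaining_df = 4
for name in ["linear", "quadratic", "cubic", "quartic"]:
    claimed += ss_trend[name]
    remaining_df -= 1
    if remaining_df > 0:
        ss_dev = SSB - claimed                # SS of all higher-order trends combined
        dev_F[name] = (ss_dev / remaining_df) / MSW
        print(f"deviation beyond {name}: F({remaining_df}, {dfw}) = {dev_F[name]:.2f}")
```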
o Method #2a: Let SPSS do some of the work

ONEWAY errors BY dose
  /CONT = -2 -1 0 1 2
  /CONT = 2 -1 -2 -1 2
  /CONT = -1 2 0 -2 1
  /CONT = 1 -4 6 -4 1.

[SPSS output: Contrast Tests table for ERRORS, with and without the equal-variance assumption]

o Method #2b: Let SPSS do some of the work

UNIANOVA errors BY dose
  /CONTRAST (dose) = POLYNOMIAL
  /PRINT = TEST(LMATRIX).

[SPSS output: Contrast Coefficients (L' Matrix) for the linear, quadratic, cubic, and 4th-order contrasts; Metric = 1.000, 2.000, 3.000, 4.000, 5.000. The default display of this matrix is the transpose of the corresponding L matrix.]
[SPSS output: Contrast Results (K Matrix) for ERRORS — contrast estimate, hypothesized value, difference (estimate - hypothesized), standard error, significance, and 95% confidence interval for the linear, quadratic, cubic, and 4th-order contrasts; Metric = 1.000, 2.000, 3.000, 4.000, 5.000]
o Here are the results:

  ψ̂_linear: significant, p < .05
  ψ̂_quadratic: significant, p < .01
  ψ̂_cubic: significant, p < .05
  ψ̂_quartic: not significant, p = .70

  We conclude there are significant linear, quadratic, and cubic trends.

o Wait a minute... Didn't we just conclude there were no significant trends higher than quadratic!?!

o When the omnibus test is not significant, you still may be able to find significant contrasts. (Remember, we demonstrated that a significant contrast does not imply a significant omnibus F-test.) Use combined contrast tests with caution!

[ANOVA table: Between Groups partitioned into linear, quadratic, cubic, and 4th-order terms; Within Groups; Total]

In general, to test m orthogonal contrasts simultaneously:

  F(m, dfw) = (SS(ψ̂1) + ... + SS(ψ̂m)) / (m · MSW)        where m ≤ a - 1
11. Polynomial trends with unequal cell size

Our formulas for computing the value of a contrast, the sums of squares of a contrast, and the significance of a contrast can all handle designs with unequal n in each cell:

  ψ̂ = Σ c_i X̄_i = c1 X̄1 + c2 X̄2 + c3 X̄3 + ... + ca X̄a

  SS(ψ̂) = ψ̂² / Σ(c_i²/n_i)

  t_observed(N - a) = Σ c_i X̄_i / sqrt(MSW · Σ(c_i²/n_i))    or    F_observed(1, N - a) = ψ̂² / (MSW · Σ(c_i²/n_i))

The problem is in the orthogonality of contrasts with unequal n. For two contrasts

  ψ1 = (a1, a2, a3, ..., aa)
  ψ2 = (b1, b2, b3, ..., ba)

o Two contrasts are orthogonal for unequal n if

  Σ (a_i b_i / n_i) = 0    or    a1 b1/n1 + a2 b2/n2 + ... + aa ba/na = 0

o All of our standard orthogonal contrasts will no longer be orthogonal
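The unequal-n orthogonality condition can be checked numerically; here is a minimal sketch using the tabled 5-group coefficients and hypothetical cell sizes:

```python
# Two contrasts are orthogonal under unequal n only if sum(a_i * b_i / n_i) = 0.
def orthogonal(a, b, n, tol=1e-9):
    return abs(sum(ai * bi / ni for ai, bi, ni in zip(a, b, n))) < tol

lin  = [-2, -1, 0, 1, 2]    # tabled linear coefficients, 5 groups
quad = [ 2, -1, -2, -1, 2]  # tabled quadratic coefficients

print(orthogonal(lin, quad, [10, 10, 10, 10, 10]))  # equal n: True
print(orthogonal(lin, quad, [12, 8, 10, 9, 11]))    # unequal n: False
```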
Example #1 with unequal n

It was of interest to determine the effects of the ingestion of alcohol on anxiety level. Five groups of adults were administered between 0 and 4 ounces of pure alcohol per day over a one-month period. At the end of the experiment, their anxiety scores were measured with a well-known Anxiety scale.

         0 oz.   1 oz.   2 oz.   3 oz.   4 oz.
  X̄_j   [cell means]
  n_j   [unequal cell sizes]

[Figure: boxplot of ANXIETY by alcohol]
o We would like to test for linear, quadratic, cubic, and higher-order trends:

UNIANOVA anxiety BY alcohol
  /CONTRAST (alcohol) = POLYNOMIAL
  /PRINT = TEST(LMATRIX).

[SPSS output: Contrast Coefficients (L' Matrix) for the linear, quadratic, cubic, and 4th-order contrasts; Metric = 1.000, 2.000, 3.000, 4.000, 5.000]

We could also use ONEWAY, but then we could not see the contrast coefficients

SPSS generates these contrasts; let's check to see if they are orthogonal under unequal n:

  Σ (a_i b_i / n_i) = 0?

  Linear vs. quadratic: (-.632)(.535)/n1 + (-.316)(-.267)/n2 + (0)(-.535)/n3 + (.316)(-.267)/n4 + (.632)(.535)/n5 ≠ 0

The SPSS-generated contrasts are not orthogonal when ns are unequal! Let's see what happens when we proceed:

  ψ̂ = Σ c_i X̄_i        SS(ψ̂) = ψ̂² / Σ(c_i²/n_i)
  ψ̂_linear = -.632 X̄1 - .316 X̄2 + 0 X̄3 + .316 X̄4 + .632 X̄5

[Numeric computations of ψ̂_linear, ψ̂_quadratic, ψ̂_cubic, and ψ̂_quartic and their sums of squares, each using SS(ψ̂) = ψ̂² / Σ(c_i²/n_i)]

[SPSS output: omnibus ANOVA table for ANXIETY]
SS(ψ̂_linear) + SS(ψ̂_quadratic) + SS(ψ̂_cubic) + SS(ψ̂_quartic) ≠ SSB

For non-orthogonal contrasts, we can no longer decompose the sums of squares additively

One fix is to weight the cell means by their cell size. Using this weighted approach is equivalent to adjusting the contrast coefficients by the cell size.

Example #2 with unequal n

  Group:   [group means X̄_j and unequal cell sizes n_j]

[Figure: boxplot of DV by group]
o Is there a linear trend in the DV?

ONEWAY dv BY group
  /POLY = 3.

[SPSS output: ANOVA table for DV with unweighted and weighted linear, quadratic, and cubic terms]

o According to the unweighted analysis, there is no linear trend
    This analysis treats all group means equally
    The contrast SS do not partition the SSB

o According to the weighted analysis, there is a linear trend
    This analysis gives most of the weight to the groups with the largest cell sizes
    The contrast SS do partition the SSB exactly

o Which method is better?
    If your goal is to compare group means, then you should conduct the unweighted analysis.
      This case holds most of the time!
      Remember, there is nothing wrong with testing non-orthogonal contrasts
      But you cannot construct combined contrast tests
    If the inequality in the cell sizes reflects a meaningful difference in group sizes and you want to reflect those differences in your analysis, then a weighted means approach may be appropriate.
      You must have a representative sample
      Your main goal would NOT be to compare groups
    If you think a weighted analysis may be appropriate, then you should read more about the proper interpretation of this analysis (see Maxwell & Delaney, 1990).
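The unweighted/weighted distinction above can be sketched directly. The means and cell sizes below are hypothetical; the weighted linear contrast centers the levels at their n-weighted mean and then scales each coefficient by its cell size, so the coefficients still sum to zero but the large groups dominate:

```python
# Hypothetical data: four equally spaced levels, very unequal cell sizes.
x     = [1, 2, 3, 4]
means = [10.0, 12.0, 13.0, 18.0]
n     = [3, 3, 3, 21]
N = sum(n)

def contrast_ss(c, means, n):
    # SS(psi-hat) = psi^2 / sum(c_i^2 / n_i), as in the text
    psi = sum(ci * m for ci, m in zip(c, means))
    return psi ** 2 / sum(ci ** 2 / ni for ci, ni in zip(c, n))

# Unweighted: tabled linear coefficients treat every group mean equally
c_unw = [-3, -1, 1, 3]

# Weighted: center levels at the n-weighted mean level, then scale by n_i
xbar_w = sum(ni * xi for ni, xi in zip(n, x)) / N
c_w = [ni * (xi - xbar_w) for ni, xi in zip(n, x)]   # coefficients sum to 0

ss_u = contrast_ss(c_unw, means, n)
ss_w = contrast_ss(c_w, means, n)
print(f"unweighted linear SS = {ss_u:.1f}, weighted linear SS = {ss_w:.1f}")
```

With these hypothetical numbers the two analyses disagree sharply, because the one large group sits at the extreme of the trend; this is exactly the situation in which the choice between unweighted and weighted means matters.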
More informationStudent s t-distribution. The t-distribution, t-tests, & Measures of Effect Size
Student s t-distribution The t-distribution, t-tests, & Measures of Effect Size Sampling Distributions Redux Chapter 7 opens with a return to the concept of sampling distributions from chapter 4 Sampling
More informationHypothesis testing I. - In particular, we are talking about statistical hypotheses. [get everyone s finger length!] n =
Hypothesis testing I I. What is hypothesis testing? [Note we re temporarily bouncing around in the book a lot! Things will settle down again in a week or so] - Exactly what it says. We develop a hypothesis,
More informationChapter 7: Correlation
Chapter 7: Correlation Oliver Twisted Please, Sir, can I have some more confidence intervals? To use this syntax open the data file CIr.sav. The data editor looks like this: The values in the table are
More informationMultiple Comparison Procedures Cohen Chapter 13. For EDUC/PSY 6600
Multiple Comparison Procedures Cohen Chapter 13 For EDUC/PSY 6600 1 We have to go to the deductions and the inferences, said Lestrade, winking at me. I find it hard enough to tackle facts, Holmes, without
More informationStatistical Inference: Estimation and Confidence Intervals Hypothesis Testing
Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire
More informationUnit 27 One-Way Analysis of Variance
Unit 27 One-Way Analysis of Variance Objectives: To perform the hypothesis test in a one-way analysis of variance for comparing more than two population means Recall that a two sample t test is applied
More informationOne-Way Analysis of Variance (ANOVA)
1 One-Way Analysis of Variance (ANOVA) One-Way Analysis of Variance (ANOVA) is a method for comparing the means of a populations. This kind of problem arises in two different settings 1. When a independent
More informationThis gives us an upper and lower bound that capture our population mean.
Confidence Intervals Critical Values Practice Problems 1 Estimation 1.1 Confidence Intervals Definition 1.1 Margin of error. The margin of error of a distribution is the amount of error we predict when
More informationPsych 230. Psychological Measurement and Statistics
Psych 230 Psychological Measurement and Statistics Pedro Wolf December 9, 2009 This Time. Non-Parametric statistics Chi-Square test One-way Two-way Statistical Testing 1. Decide which test to use 2. State
More informationGLM Repeated-measures designs: One within-subjects factor
GLM Repeated-measures designs: One within-subjects factor Reading: SPSS dvanced Models 9.0: 2. Repeated Measures Homework: Sums of Squares for Within-Subject Effects Download: glm_withn1.sav (Download
More informationREVIEW 8/2/2017 陈芳华东师大英语系
REVIEW Hypothesis testing starts with a null hypothesis and a null distribution. We compare what we have to the null distribution, if the result is too extreme to belong to the null distribution (p
More informationBasics on t-tests Independent Sample t-tests Single-Sample t-tests Summary of t-tests Multiple Tests, Effect Size Proportions. Statistiek I.
Statistiek I t-tests John Nerbonne CLCG, Rijksuniversiteit Groningen http://www.let.rug.nl/nerbonne/teach/statistiek-i/ John Nerbonne 1/46 Overview 1 Basics on t-tests 2 Independent Sample t-tests 3 Single-Sample
More informationPOLI 443 Applied Political Research
POLI 443 Applied Political Research Session 4 Tests of Hypotheses The Normal Curve Lecturer: Prof. A. Essuman-Johnson, Dept. of Political Science Contact Information: aessuman-johnson@ug.edu.gh College
More informationMATH Notebook 3 Spring 2018
MATH448001 Notebook 3 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2010 2018 by Jenny A. Baglivo. All Rights Reserved. 3 MATH448001 Notebook 3 3 3.1 One Way Layout........................................
More informationChapter 24. Comparing Means
Chapter 4 Comparing Means!1 /34 Homework p579, 5, 7, 8, 10, 11, 17, 31, 3! /34 !3 /34 Objective Students test null and alternate hypothesis about two!4 /34 Plot the Data The intuitive display for comparing
More informationDo not copy, post, or distribute. Independent-Samples t Test and Mann- C h a p t e r 13
C h a p t e r 13 Independent-Samples t Test and Mann- Whitney U Test 13.1 Introduction and Objectives This chapter continues the theme of hypothesis testing as an inferential statistical procedure. In
More informationIntroduction to the Analysis of Variance (ANOVA) Computing One-Way Independent Measures (Between Subjects) ANOVAs
Introduction to the Analysis of Variance (ANOVA) Computing One-Way Independent Measures (Between Subjects) ANOVAs The Analysis of Variance (ANOVA) The analysis of variance (ANOVA) is a statistical technique
More informationRegression: Main Ideas Setting: Quantitative outcome with a quantitative explanatory variable. Example, cont.
TCELL 9/4/205 36-309/749 Experimental Design for Behavioral and Social Sciences Simple Regression Example Male black wheatear birds carry stones to the nest as a form of sexual display. Soler et al. wanted
More informationUsing SPSS for One Way Analysis of Variance
Using SPSS for One Way Analysis of Variance This tutorial will show you how to use SPSS version 12 to perform a one-way, between- subjects analysis of variance and related post-hoc tests. This tutorial
More informationCh. 16: Correlation and Regression
Ch. 1: Correlation and Regression With the shift to correlational analyses, we change the very nature of the question we are asking of our data. Heretofore, we were asking if a difference was likely to
More informationTopic 4: Orthogonal Contrasts
Topic 4: Orthogonal Contrasts ANOVA is a useful and powerful tool to compare several treatment means. In comparing t treatments, the null hypothesis tested is that the t true means are all equal (H 0 :
More informationData Analysis and Statistical Methods Statistics 651
Data Analysis and Statistical Methods Statistics 65 http://www.stat.tamu.edu/~suhasini/teaching.html Suhasini Subba Rao Review In the previous lecture we considered the following tests: The independent
More informationQuestions 3.83, 6.11, 6.12, 6.17, 6.25, 6.29, 6.33, 6.35, 6.50, 6.51, 6.53, 6.55, 6.59, 6.60, 6.65, 6.69, 6.70, 6.77, 6.79, 6.89, 6.
Chapter 7 Reading 7.1, 7.2 Questions 3.83, 6.11, 6.12, 6.17, 6.25, 6.29, 6.33, 6.35, 6.50, 6.51, 6.53, 6.55, 6.59, 6.60, 6.65, 6.69, 6.70, 6.77, 6.79, 6.89, 6.112 Introduction In Chapter 5 and 6, we emphasized
More informationCalculating Fobt for all possible combinations of variances for each sample Calculating the probability of (F) for each different value of Fobt
PSY 305 Module 5-A AVP Transcript During the past two modules, you have been introduced to inferential statistics. We have spent time on z-tests and the three types of t-tests. We are now ready to move
More informationAnalysis of variance
Analysis of variance Tron Anders Moger 3.0.007 Comparing more than two groups Up to now we have studied situations with One observation per subject One group Two groups Two or more observations per subject
More informationAnalysis of variance
Analysis of variance 1 Method If the null hypothesis is true, then the populations are the same: they are normal, and they have the same mean and the same variance. We will estimate the numerical value
More informationThe One-Way Independent-Samples ANOVA. (For Between-Subjects Designs)
The One-Way Independent-Samples ANOVA (For Between-Subjects Designs) Computations for the ANOVA In computing the terms required for the F-statistic, we won t explicitly compute any sample variances or
More informationOrdinary Least Squares Regression Explained: Vartanian
Ordinary Least Squares Regression Explained: Vartanian When to Use Ordinary Least Squares Regression Analysis A. Variable types. When you have an interval/ratio scale dependent variable.. When your independent
More information