Z TEST AND T-TESTS

Z Test: comparing a group (sample) mean to a hypothesized population mean.
- Population follows a normal distribution
- Population σ and µ must be known
- Sample must be chosen randomly
The z-score follows the normal distribution. The critical value at α = 0.05 is |1.96|; if the absolute z-score is greater than 1.96, reject H0.
LIM: knowing σ and µ is unlikely in large populations.

T test (about 1 mean): comparing a sample mean to a hypothesized mean taken from H0.
- Population follows a normal distribution
- Hypothesized mean taken from H0
- Random sample
The t-distribution changes with sample size (df = n - 1), so the critical value is not always 1.96. Compare the t-statistic to the CV; if t > CV, reject H0. Alternatively, use the p-value.

T test (about 2 means): testing whether two unknown population means are different based on their samples. A binary variable demonstrates group membership. Similar means = groups will have the same response to treatment.
- Both populations follow a normal distribution
- Homogeneity of variance (equal σ)
- Random samples
Because the assumptions are fulfilled, the only parameter that can differ between groups is the mean. df = n1 + n2 - 2.
LIM: only 2 means can be compared.

Pearson's correlation coefficient (r) is scale invariant (unit independent).
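The pooled two-sample t-statistic described above can be sketched in plain Python. This is a minimal illustration with made-up data; the pooled variance reflects the homogeneity-of-variance assumption in the notes:

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled (equal-variance) independent-samples t statistic."""
    na, nb = len(a), len(b)
    # Pooled variance: the two sample variances weighted by their df
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and its degrees of freedom

# Hypothetical samples from two treatment groups
t, df = two_sample_t([5.1, 4.9, 5.6, 5.2], [4.2, 4.0, 4.4, 4.6])
# df = n1 + n2 - 2 = 6; compare |t| against the critical value for 6 df
```

In practice a stats package would be used, but the hand computation shows where the df = n1 + n2 - 2 comes from.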
ONE-WAY ANOVA

One-Way ANOVA (Independent): analysis of variance. IV = factor, with many levels or categories (treatment levels). Analyzes variance in the DV across treatment levels, testing for significant differences between groups.
- Independence of observations in each group
- All groups have equal variance in the population
- Groups follow a normal distribution
V_B: between-groups variance. V_W: within-groups variance. F statistic = V_B / V_W, reported as F(df_B, df_W) with df_B = k - 1 and df_W = N - k. If F > CV, reject H0; also look at the p-value (< 0.05, reject H0). Effect size: ω².
LIM: gives only the overall (global) effect of the IV on the DV; doesn't tell which pairs of means are different.
SPSS: look at the p-value of each observed F-value.

One-Way ANOVA (Repeated): N subjects are measured on a single DV under K conditions or levels of a factor. Testing for significant differences across conditions, within subjects.
- Distribution of observations in each level follows the normal distribution
- Homogeneity of variance at each level of the factor
- Homogeneous covariances
If these two assumptions (homogeneous variances and covariances) are met = COMPOUND SYMMETRY. Sphericity is a more general condition than compound symmetry. Test for violation of compound symmetry: Mauchly's W. Here, if p < 0.05, compound symmetry is violated; when this is the case, the F-test tends to be inflated.
Convenient approach: the CONSERVATIVE F-TEST creates a more conservative value against which to compare F. This is done by reducing the degrees of freedom: df(b) = ε(k - 1) and df(error) = ε(k - 1)(n - 1). Here ε reflects the degree to which CS is violated: ε = 1 when CS holds, down to ε = 1/(k - 1). How do we decide on the value of ε? SPSS's Mauchly's W output gives us two estimates: Greenhouse-Geisser and Huynh-Feldt.
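The F statistic for the independent one-way ANOVA can be computed by hand; this sketch (hypothetical data, stdlib only) makes the between/within partition concrete:

```python
import statistics

def one_way_anova(groups):
    """F = (SS_between / df_between) / (SS_within / df_within)
    for k independent groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups variation: group means around the grand mean
    ss_b = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-groups variation: observations around their own group mean
    ss_w = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n_total - k
    return (ss_b / df_b) / (ss_w / df_w), df_b, df_w

F, df_b, df_w = one_way_anova([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
# F(2, 6): compare against the critical value, or read the p-value in SPSS
```

Note the ratio only flags an overall effect; which pairs of means differ requires the post-hoc tests covered later.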
TWO-WAY ANOVA

Two factors of interest (IVs) and their effect on a DV. Two main effects (row and column) and one interaction effect.
- Normality
- Homogeneity of variance
- Independence of observations
V_B is partitioned into SS(R), SS(C), and SS(RC), comparing different parts of the variation. Three F statistics: two for the main effects and one for the interaction effect. Post-hoc analyses for each of the factors; simple-effects analysis for the interaction effect.
POST-HOC COMPARISON TESTING
Used when 3 or more group means are compared and the omnibus H0 has been rejected. Test each pair of two means at a time. Ex: 3 group means, testing 3 pairs of two group means.

Scheffé's Test: the most conservative post-hoc comparison test; limits Type I error. Use when the omnibus F-test has been rejected (significant result) and when there are 3 or more group means. Uses the F-test again to look for significant differences between groups in pairwise comparisons, against a larger critical value: the F critical value with df(b) = k - 1 and df(w) = N - k, multiplied by (k - 1).

Tukey's HSD test: a more liberal post-hoc test of pairwise comparisons; a pair of means that is non-significant under Scheffé may turn out to be significant in HSD. Uses the studentized range statistic Q. Use when a One-Way ANOVA has been calculated, the F-test was significant, 3 or more group means are being compared, and sample sizes are equal. Compare the observed Q value against the critical value of Q (C_Q) at the 0.05 alpha level.

Tukey-Kramer Test: for unequal sample sizes. If sample sizes are equal, this is the same as Tukey's HSD test.

SPSS: look at the Mean Differences (I-J) table. A star will be beside each pairwise comparison value that is significant at the 0.05 significance level (same output for all three tests).
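The Q statistic used by Tukey's HSD is simple to compute once the ANOVA's MS_within is in hand. A minimal sketch with hypothetical values (two group means, an assumed MS_within, and equal group sizes):

```python
import math

def studentized_q(mean_i, mean_j, ms_within, n_per_group):
    """Tukey HSD pairwise statistic for equal group sizes:
    Q = |mean_i - mean_j| / sqrt(MS_within / n)."""
    return abs(mean_i - mean_j) / math.sqrt(ms_within / n_per_group)

# Hypothetical values: group means 7.0 and 4.5, MS_within = 1.5, n = 6 per group
q = studentized_q(7.0, 4.5, 1.5, 6)
# Compare q against the critical value of Q for k groups and df_within at alpha = 0.05
```

The critical value of Q itself comes from a studentized-range table (or software), not from this formula.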
ASSESSING THE ASSUMPTIONS OF ONE-WAY ANOVA
Assumptions: homogeneity of variance; each group follows a normal distribution in the population (not necessarily in the sample); all observations are independent (no correlations among observations).

Skewness (assessing normality of the population): use a t-test, as only one parameter is being tested. If you cannot reject the null hypothesis, skewness = 0 and the distribution is normal. Also construct charts and histograms: roughly symmetrical = good.

Kolmogorov-Smirnov & Shapiro-Wilk tests (statistical tests of normality): look at the p-value. If it is greater than 0.05, then the sample is drawn from a normal distribution. Can also use Q-Q plots, calculating the z-scores of the sorted observations. LIM: with a large N it is easy to get significant results.

F_max test of Hartley (assessing the homogeneity-of-variance assumption): calculate F_max = Max(V_j) / Min(V_j). Requires equal sample sizes. Compared against a critical value from a table; if F_max exceeds it, we reject H0 that the samples have the same variance. LIM: large heterogeneity leads to an inflated F-ratio and a higher chance of a Type I error.
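Hartley's F_max is just the ratio of the largest to the smallest sample variance; a short sketch with hypothetical equal-sized groups:

```python
import statistics

def hartley_fmax(groups):
    """F_max = largest sample variance / smallest sample variance
    (Hartley's test requires equal group sizes)."""
    variances = [statistics.variance(g) for g in groups]
    return max(variances) / min(variances)

fmax = hartley_fmax([[1, 2, 3], [1, 3, 5], [2, 3, 4]])
# Compare fmax against the tabled critical value for k groups and n - 1 df
```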
Levene's Test: tests the assumption that the samples have equal variance. If p < 0.05, we reject H0 and the assumption that all samples have equal variance. The F-test is robust against this violation if sample sizes are equal.

There is no way to test for the independence of observations, and this is possibly the most crucial assumption. There is also no way to fix/adjust if this assumption is violated.

NON-PARAMETRIC TESTING
Do not require the normality assumption or assumptions about population parameters.

Chi-Square Test (χ²): for testing the independence of 2 nominal variables. Use chi-square to see if there is an association between two nominal variables. Data are arranged in a contingency table which lists frequencies or counts of the input data.
- Nominal variables
- Independent observations
- N ≥ 20
- Random samples
- Average expected cell frequency ≥ 5
Expected cell frequency: (T_R × T_C) / T_sample, i.e. the frequency expected if H0 is true. χ² compares the observed frequencies to these expected frequencies; if the observed value of χ² ≥ CV, reject H0: there is an association. df = (r - 1)(c - 1).
SPSS: χ² test table. Look at the Pearson Chi-Square value = χ² value; Asymp Sig = p-value.

Wilcoxon Rank-Sum test (Mann-Whitney U statistic): non-parametric version of the independent-samples t-test. The dependent variable can be ordinal, interval, or ratio; the data are transformed into rank data. Does not require the normal distribution. If p < 0.05, reject H0: the two samples do not come from the same continuous distribution.
SPSS: look at Asymp Sig for the p-value. Report the Mann-Whitney value as U.
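The chi-square computation above (expected frequencies from row and column totals, then Σ(O − E)²/E) can be sketched directly; the 2×2 counts here are hypothetical:

```python
def chi_square(table):
    """Pearson chi-square for a contingency table of observed counts.
    Expected cell frequency = (row total * column total) / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)  # df = (r - 1)(c - 1)
    return chi2, df

chi2, df = chi_square([[10, 20], [20, 10]])  # hypothetical 2x2 counts
# Compare chi2 against the chi-square critical value with df = 1
```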
Kruskal-Wallis H test: non-parametric version of One-Way ANOVA, for k > 2 groups. Data are transformed into ranked data. Tests the significance of the differences among groups. Does not require a normal distribution of the data. H is compared against the chi-square distribution with df = k - 1.
SPSS: look at Asymp Sig for the p-value. Report H(df) = the Chi-Square value.
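The H statistic can be computed from the pooled ranks. A minimal sketch with hypothetical data; it assumes no tied values (real data with ties needs averaged ranks and a tie correction, which software handles):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H = 12/(N(N+1)) * sum(R_j^2 / n_j) - 3(N+1),
    compared to chi-square with k - 1 df. Assumes no tied values."""
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank each pooled value
    n = len(pooled)
    h = (12 / (n * (n + 1))
         * sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
         - 3 * (n + 1))
    return h, len(groups) - 1

# Hypothetical ordinal scores from three groups
h, df = kruskal_h([[1.2, 2.1, 3.0], [4.4, 5.5, 6.1], [7.3, 8.2, 9.9]])
# Compare h against the chi-square critical value with df = k - 1 = 2
```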