
Z TEST AND t-TESTS

Z test (comparing a group mean to a hypothesized population mean)
- Assumptions: the population is normally distributed; σ and µ of the population are known; the sample is chosen randomly.
- The z score follows the standard normal distribution. The critical value at α = 0.05 is an absolute value of 1.96: if the absolute z score is greater than 1.96, reject H0.
- LIM: knowing σ and µ is unlikely for large populations.

t test (about 1 mean: comparing a sample mean to a hypothesized mean)
- Assumptions: the population is normally distributed; the hypothesized mean is taken from H0; random sample.
- The t distribution changes with sample size (df = n - 1), so the critical value is not always 1.96. Compare the t statistic to the critical value: if it exceeds the CV, reject H0. Equivalently, reject H0 if the p-value is below 0.05.

t test (about 2 means: deciding whether two unknown means differ, based on their samples)
- A binary variable indicates group membership. Similar means imply the groups would respond the same way to treatment.
- Assumptions: both populations follow a normal distribution; homogeneity of variance (equal σ); random samples.
- Because the assumptions are fulfilled, the only parameter that can differ between the groups is the mean.
- LIM: limited to comparing 2 means.

Pearson's correlation coefficient (r) is unit-independent (scale invariant).
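The notes above describe the z and t decision rules in SPSS terms; as an illustration only, the same three tests can be sketched in Python with SciPy (SciPy and all of the data below are assumptions of this sketch, not part of the notes):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical example data: scores for two groups (entirely made up).
group_a = rng.normal(loc=100, scale=15, size=40)
group_b = rng.normal(loc=110, scale=15, size=40)

# Z test (one mean): only valid here because we pretend sigma = 15 and
# mu0 = 100 are KNOWN population parameters, as the assumptions require.
mu0, sigma = 100, 15
z = (group_a.mean() - mu0) / (sigma / np.sqrt(len(group_a)))
reject_z = abs(z) > 1.96          # critical value at alpha = 0.05, two-tailed

# One-sample t test: sigma unknown, estimated from the sample (df = n - 1).
t1, p1 = stats.ttest_1samp(group_a, popmean=mu0)

# Independent-samples t test: assumes normality and equal variances.
t2, p2 = stats.ttest_ind(group_a, group_b)
```

The decision rule in each case is the one from the notes: reject H0 when the statistic exceeds its critical value, or equivalently when p < 0.05.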

ONE-WAY ANOVA (INDEPENDENT AND REPEATED MEASURES)

One-Way ANOVA (independent): analysis of variance in the DV across treatment levels. The IV is the factor, which has many levels or categories (the treatment levels).
- Assumptions: independence of observations in each group; all groups have equal variance in the population; all groups follow a normal distribution.
- V_B: between-groups variance; V_W: within-groups variance. F statistic = V_B / V_W. If F > CV, reject H0; also look at the p-value (if < 0.05, reject H0). Report as F(df(b), df(w)).
- LIM: gives only the overall (global) effect of the IV on the DV; it does not tell which pairs of means differ.
- SPSS: look at the p-value of each observed F value.

One-Way ANOVA (repeated measures): N subjects are measured on a single DV under K conditions (levels of a factor); tests for significant differences across the conditions. df for the factor = k - 1; subject df = n - 1.
- Assumptions: the distribution of observations at each level follows the normal distribution; homogeneity of variance at each level of the factor; homogeneous covariances. When these last two assumptions are met, we have COMPOUND SYMMETRY.
- Sphericity is a more general condition than compound symmetry. Test for violation of compound symmetry: Mauchly's W. If p < 0.05, compound symmetry is violated; when this is the case, the F test tends to be inflated.
- A convenient approach is the CONSERVATIVE F TEST: create a more conservative value against which to compare F by reducing the degrees of freedom: df(b) = ε(k - 1) and df(bs) = ε(k - 1)(n - 1). Here ε reflects the degree to which compound symmetry is violated: 1 (when CS holds) ≥ ε ≥ 1/(k - 1). How do we decide on the value of ε? SPSS reports two estimates alongside Mauchly's W: Greenhouse-Geisser and Huynh-Feldt.
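The F = V_B / V_W partition described above can be verified by hand. The sketch below (data and library choice are assumptions, not from the notes) computes the between- and within-groups sums of squares directly and checks them against SciPy's `f_oneway`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical scores for k = 3 independent groups (data are made up).
groups = [rng.normal(loc=m, scale=5, size=10) for m in (50, 55, 60)]
k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# Between-groups sum of squares: group sizes times squared mean deviations.
ss_b = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-groups sum of squares: squared deviations from each group's mean.
ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)

# F = V_B / V_W with df(b) = k - 1 and df(w) = N - k.
f_manual = (ss_b / (k - 1)) / (ss_w / (N - k))

f_scipy, p = stats.f_oneway(*groups)
```

The manually computed F matches SciPy's, which is a useful sanity check that the variance partition in the notes is exactly what the library implements.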

TWO-WAY ANOVA

Tests the effect of two factors of interest (IVs) on a DV. There are two main effects (row and column) and one interaction effect.
- Assumptions: normality, homogeneity of variance, independence of observations.
- V_B is partitioned into SS(R), SS(C), and SS(RC). Comparing the different parts of the variation gives 3 F statistics: one for each main effect and one for the interaction effect.
- Follow up with post-hoc analyses for each of the factors, and simple-effects analysis for the interaction effect.
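The partition of V_B into SS(R), SS(C), and SS(RC) can be written out directly for a balanced design. This is a minimal sketch with made-up data (the design sizes and values are assumptions, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical balanced design: a = 2 row levels, b = 3 column levels,
# n = 5 observations per cell. DV values are made up.
a, b, n = 2, 3, 5
data = rng.normal(loc=10, scale=2, size=(a, b, n))

gm = data.mean()
row_means = data.mean(axis=(1, 2))       # shape (a,)
col_means = data.mean(axis=(0, 2))       # shape (b,)
cell_means = data.mean(axis=2)           # shape (a, b)

# Partition of the between-groups variation.
ss_r = b * n * ((row_means - gm) ** 2).sum()                     # SS(R)
ss_c = a * n * ((col_means - gm) ** 2).sum()                     # SS(C)
ss_rc = n * ((cell_means - row_means[:, None]                    # SS(RC)
              - col_means[None, :] + gm) ** 2).sum()
ss_w = ((data - cell_means[:, :, None]) ** 2).sum()              # SS(W)

# Three F statistics: two main effects and one interaction.
df_w = a * b * (n - 1)
f_row = (ss_r / (a - 1)) / (ss_w / df_w)
f_col = (ss_c / (b - 1)) / (ss_w / df_w)
f_int = (ss_rc / ((a - 1) * (b - 1))) / (ss_w / df_w)
```

For a balanced design these four sums of squares add up exactly to the total sum of squares, which is the partition the notes describe.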

POST-HOC COMPARISON TESTING

When 3 or more group means are compared and the omnibus H0 can be rejected, test each pair of two means at a time. Ex: with 3 group means, there are 3 pairs of two group means to test.

Scheffé's test
- The most conservative post-hoc comparison test; limits Type I error. Use when the omnibus F test has been rejected (significant result) and when there are 3 or more group means.
- Uses the F test again to look for significant differences between groups in pairwise comparisons, against a larger critical value: the critical F with df(b) = k - 1 and df(w) = N - k, multiplied by (k - 1).
- SPSS: look at the Mean Difference (I-J) table. A star appears beside each pairwise comparison value that is significant at the 0.05 significance level.

Tukey's HSD test
- A more liberal post-hoc comparison test of pairwise comparisons: a pair of means that is non-significant under Scheffé may turn out to be significant under HSD.
- Uses the studentized range statistic Q. Requires that a one-way ANOVA has been calculated with a significant F test, 3 or more group means, and equal sample sizes.
- Compare the observed Q value against the critical value of Q (C_Q) at the 0.05 alpha level.

Tukey-Kramer test
- For unequal sample sizes; if sample sizes are equal, this is the same as Tukey's HSD test.
- SPSS: same as above.
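As a runnable illustration of the "omnibus test first, then pairwise comparisons" workflow above, the sketch below uses a Bonferroni adjustment rather than Scheffé's or Tukey's procedure (exact Tukey HSD needs the studentized range distribution; recent SciPy versions offer `scipy.stats.tukey_hsd`). The data and group names are made up:

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical data for 3 groups (made up). Omnibus one-way ANOVA first.
groups = {name: rng.normal(loc=m, scale=4, size=12)
          for name, m in [("A", 20), ("B", 20), ("C", 26)]}
f, p_omnibus = stats.f_oneway(*groups.values())

# With k = 3 groups there are 3 pairwise comparisons. Bonferroni multiplies
# each raw p-value by the number of comparisons to cap family-wise Type I
# error -- conservative, in the same spirit as Scheffe's test.
pairs = list(combinations(groups, 2))
adjusted = {}
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    adjusted[(g1, g2)] = min(1.0, p * len(pairs))
```

A pair is declared significant when its adjusted p-value is below 0.05, and the pairwise tests are only interpreted if `p_omnibus` was itself significant.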

ASSESSING THE ASSUMPTIONS OF ONE-WAY ANOVA

Assumptions: homogeneity of variance; each group follows a normal distribution in the population (not necessarily in the sample); all observations are independent (no correlations among observations).

Skewness (assessing normality of the distribution)
- Use a t test, as only one parameter is being tested. If you cannot reject the null hypothesis, skewness = 0 and the distribution is normal.
- Also construct charts and histograms: roughly symmetrical = good.
- LIM: with large N it is easy to get significant results.

Kolmogorov-Smirnov & Shapiro-Wilk tests (statistical tests of normality)
- Look at the p-value of the test: if it is greater than 0.05, the sample is drawn from a normal distribution.
- Can also use Q-Q plots: calculate the z scores of the sorted observations and inspect the plot.

Hartley's F-max test (assessing the homogeneity-of-variance assumption)
- Calculate F-max = MaxVj / MinVj (largest group variance over smallest). Requires equal sample sizes.
- Compare against a critical value from a table; if it exceeds the critical value, reject H0 that the samples have the same variance.
- LIM: large heterogeneity of variance leads to an inflated F ratio and a higher chance of Type I error.
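The normality and F-max checks above can be sketched in Python (data and library choice are assumptions of this sketch; the notes themselves use SPSS):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Three made-up groups of equal size, as Hartley's F-max requires.
groups = [rng.normal(loc=0, scale=1, size=30) for _ in range(3)]

# Shapiro-Wilk per group: p > 0.05 means no evidence against normality.
shapiro_ps = []
for g in groups:
    w, p = stats.shapiro(g)
    shapiro_ps.append(p)

# Kolmogorov-Smirnov against a normal fitted to the sample. Note: strictly,
# estimating the parameters from the same data makes the standard KS p-value
# approximate (the Lilliefors correction addresses this).
g = groups[0]
ks_stat, ks_p = stats.kstest(g, "norm", args=(g.mean(), g.std(ddof=1)))

# Hartley's F-max: largest group variance over smallest.
variances = [grp.var(ddof=1) for grp in groups]
f_max = max(variances) / min(variances)
```

`f_max` is then compared against a tabled critical value for the given number of groups and per-group df, exactly as the notes describe.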

Levene's test
- Tests the assumption that the samples have equal variances. If p < 0.05, we reject H0 and the assumption that all samples have equal variance.
- The F test is robust against this violation if sample sizes are equal.

There is no way to test for the independence of observations, even though it is possibly the most crucial assumption; there is also no way to fix or adjust if this assumption is violated.

NON-PARAMETRIC TESTING

Non-parametric tests do not require the normality assumption or assumptions about population parameters.

Chi-Square test (χ²)
- For testing the independence of 2 nominal variables: use chi-square to see whether there is an association between two nominal variables.
- Data are arranged in a contingency table which lists the frequencies or counts of the input data.
- Assumptions: independent observations; N ≥ 20; random samples; average cell frequency ≥ 5.
- Expected cell frequency if H0 is true: (T_row × T_col) / T_sample. If the observed value of χ² ≥ the critical value, reject H0: there is an association. df = (r - 1)(c - 1).
- SPSS: in the χ² test table, the Pearson Chi-Square value is the χ² value and Asymp. Sig. is the p-value.

Wilcoxon Rank-Sum test (Mann-Whitney U statistic)
- The non-parametric version of the independent-samples t test. The dependent variable can be ordinal, interval, or ratio; the data are transformed into rank data, so the normal distribution is not required.
- If p < 0.05, reject H0: the two samples do not come from the same continuous distribution.
- SPSS: look at Asymp. Sig. for the p-value; report the Mann-Whitney value as U.
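The three tests in this section each have a direct SciPy counterpart; the sketch below runs all three on made-up data (the data, and SciPy itself, are assumptions of this illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Levene's test for equal variances on two made-up samples.
a = rng.normal(0, 1, 25)
b = rng.normal(0, 3, 25)
lev_stat, lev_p = stats.levene(a, b)   # p < 0.05 -> reject equal variances

# Chi-square test of independence on a 2x2 contingency table of counts.
table = np.array([[30, 10],
                  [20, 40]])
chi2, chi_p, df, expected = stats.chi2_contingency(table)
# expected[r, c] = row_total * col_total / grand_total, df = (r-1)(c-1)

# Mann-Whitney U (Wilcoxon rank-sum): non-parametric two-sample comparison.
u_stat, u_p = stats.mannwhitneyu(a, b)
```

Note that `chi2_contingency` returns the expected frequencies computed by the same (T_row × T_col) / T_sample formula given in the notes.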

Kruskal-Wallis H test
- The non-parametric version of one-way ANOVA, for k > 2 groups. The data are transformed into ranked data; the normal distribution is not required.
- Tests the significance of the differences among the groups. df = k - 1.
- SPSS: look at Asymp. Sig. for the p-value; report H(df) = the Chi-Square value.
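As a final illustration (made-up data, SciPy assumed), the Kruskal-Wallis test for k = 3 groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical scores for k = 3 groups (entirely made up).
g1 = rng.normal(10, 2, 15)
g2 = rng.normal(12, 2, 15)
g3 = rng.normal(15, 2, 15)

# Kruskal-Wallis H: ranks all observations together, then compares the
# mean ranks of the groups; H is referred to a chi-square distribution.
h, p = stats.kruskal(g1, g2, g3)
df = 3 - 1   # df = k - 1
```

The result would be reported as H(2) = the computed statistic, with the p-value read off as in the SPSS note above.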