PSYC 331 STATISTICS FOR PSYCHOLOGISTS


PSYC 331 STATISTICS FOR PSYCHOLOGISTS Session 4 A PARAMETRIC STATISTICAL TEST FOR MORE THAN TWO POPULATIONS Lecturer: Dr. Paul Narh Doku, Dept of Psychology, UG Contact Information: pndoku@ug.edu.gh College of Education School of Continuing and Distance Education 2014/2015 2016/2017 godsonug.wordpress.com/blog

Session Overview This session builds upon previous sessions and provides further insight into some parametric statistical concepts that will help in the testing of hypotheses. The goal of this session is to equip students with the ability to explain the terminology of analysis of variance (ANOVA). Dr. P. N. Doku, Slide 2

Session Outline The key topics to be covered in the session are as follows: The analysis of variance (ANOVA) procedure The general logic of ANOVA Computational procedures Post-hoc analysis: Multiple comparisons following the ANOVA test Worked example and exercises based on the One-Way ANOVA test Introduction to Two-Way analysis of variance (Two-Way ANOVA) Slide 3

Reading List Opoku, J. Y. (2007). Tutorials in Inferential Social Statistics. (2nd Ed.). Accra: Ghana Universities Press. Pages 85-109 Slide 4

Analysis of Variance The analysis of variance is the parametric procedure for determining whether significant differences occur in an experiment with three or more sample means. However, in a research study or experiment involving only two conditions of the independent variable (two sample means), you may use either a t-test or the ANOVA, and the outcome of the analysis will be the same. Slide 5

Experiment-Wise Error The probability of making a Type I error over a series of individual statistical tests or comparisons in an experiment is called the experiment-wise error rate When we use a t-test to compare only two means in an experiment, the experiment-wise error rate equals the alpha (α) we have selected. Slide 6

Experiment-Wise Error When there are more than two means in an experiment, the multiple t-tests result in an experiment-wise error rate much larger than the α we have selected Using the ANOVA allows us to make all our decisions and keep the experiment-wise error rate equal to α. Slide 7
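The inflation described above can be sketched numerically. Assuming the individual comparisons are independent, the probability of at least one Type I error across c tests is 1 − (1 − α)^c; the function name below is my own, not from the lecture.

```python
# Sketch: why multiple t-tests inflate the Type I error rate.
# Assuming independent comparisons, the chance of at least one
# Type I error across c tests at per-test alpha is 1 - (1 - alpha)**c.
def experiment_wise_alpha(alpha, c):
    return 1 - (1 - alpha) ** c

# One t-test keeps the rate at alpha; the three pairwise t-tests
# needed for k = 3 groups push it well above the chosen 0.05.
print(experiment_wise_alpha(0.05, 1))   # about 0.05
print(experiment_wise_alpha(0.05, 3))   # about 0.14
```

With three groups, three pairwise t-tests already raise the experiment-wise rate to roughly 0.14, which is why a single ANOVA is preferred.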

An Overview of One-Way ANOVA ANalysis Of VAriance is abbreviated as ANOVA. The ANOVA test statistic is called the F ratio. There is a single independent variable, hence the name One-Way. An independent variable is also called a factor. Each condition of the independent variable is called a level or treatment. Differences produced by the independent variable are called treatment effects. Slide 8

Requirements for using the F ratio 1) Must be a comparison between three or more means. 2) Must be working with interval data. 3) Our sample must have been collected randomly from the research population. 4) We can/must assume that the sample characteristics are normally distributed. 5) We must assume that the variances of the populations from which the samples are drawn are equal (homogeneity of variance).

Between-Subjects A one-way ANOVA is performed when one independent variable is tested in the experiment When an independent variable is studied using independent samples in all conditions, it is called a between-subjects factor A between-subjects factor involves using the formulas for a between-subjects ANOVA. Slide 10

Within-Subjects Factor When a factor is studied using related (dependent) samples in all levels, it is called a within-subjects factor This involves a set of formulas called a within-subjects ANOVA. Slide 11

Diagram of a Study Having Three Levels of One Factor. Slide 12

Null and Alternate Hypotheses Null hypothesis H0: μ1 = μ2 = ... = μk. Alternate hypothesis: states that the means of at least two of the populations differ. Ha: not all μk are equal. Slide 13

The ANOVA (F) Test The statistic for the ANOVA is F When F obs is significant, it indicates only that somewhere among the means at least two of them differ significantly It does NOT indicate which specific means differ significantly When the F-test is significant, we perform post hoc comparisons to determine which specific means differ significantly

Computation of the ANOVA (F) Test The Analysis of Variance is a multi-step process. 1. Sum of Squares 2. Mean Square 3. F Ratio Slide 15

Sum of Squares The computations for the ANOVA require the use of several sums of squared deviations The sum of squares is simply the sum of the squared deviations of a set of scores around the mean of those scores, added up. It is symbolized as SS

Sum of Squares Comparing Groups: When groups are compared, there is more than one type of sum of squares: Total Sum of Squares (SS total), Between-Groups Sum of Squares (SS between), and Within-Groups Sum of Squares (SS within). Each type represents a sum of squared deviations

Computational Formulae for SS
SS T = ΣX² − (ΣX)²/N
SS B = (ΣX1)²/n1 + (ΣX2)²/n2 + ... + (ΣXk)²/nk − (ΣX)²/N
SS W = ΣX² − [(ΣX1)²/n1 + (ΣX2)²/n2 + ... + (ΣXk)²/nk]
Slide 18
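The computational formulas can be sketched directly in code. The three-group data set below is hypothetical (not the lecture's Table 8.2), chosen so the arithmetic is easy to check; note that SS T = SS B + SS W.

```python
# Sketch of the computational SS formulas on hypothetical data.
def sums_of_squares(groups):
    all_scores = [x for g in groups for x in g]
    N = len(all_scores)
    correction = sum(all_scores) ** 2 / N                 # (sum X)^2 / N
    ss_total = sum(x ** 2 for x in all_scores) - correction
    ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - correction
    ss_within = ss_total - ss_between                     # = sum X^2 - group terms
    return ss_total, ss_between, ss_within

groups = [[2, 3, 4], [4, 5, 6], [6, 7, 8]]   # hypothetical scores, k = 3
ss_t, ss_b, ss_w = sums_of_squares(groups)
```

For these scores, SS T = 30, SS B = 24, and SS W = 6, and the partition SS T = SS B + SS W holds exactly.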

The Computational Formulas for Sum of Squares: worked example


The Computational Formulas for Sum of Squares: Summary

Mean Squares NOTE: The value of the sum of squares becomes larger as variation increases, but the sum of squares also increases with sample size. Because of this, the SS cannot be viewed as a true measure of variation; another measure of variation that we can use is the Mean Square. The mean square between groups describes the differences between the means of the conditions in a factor. It is symbolized by MS B. The mean square within groups describes the variability in scores within the conditions of an experiment. It is symbolized by MS W.

Computation of Mean Squares Between: MS between = SS between / df between. Within: MS within = SS within / df within. Here MS between = between-group mean square, SS between = between-group sum of squares, df between = between-group degrees of freedom, MS within = within-group mean square, SS within = within-group sum of squares, and df within = within-group degrees of freedom.

Degrees of Freedom Use the following equations to obtain the correct degrees of freedom: df between = k − 1 and df within = N total − k, where k = number of groups.
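The two degrees-of-freedom rules can be sketched as a small helper; the function name and the equal group sizes are my own illustration.

```python
# Sketch of the df rules: df_between = k - 1, df_within = N - k.
def anova_df(groups):
    k = len(groups)                          # number of groups
    n_total = sum(len(g) for g in groups)    # total number of scores
    return k - 1, n_total - k

# Three groups of 5 subjects each, as in the family-size example:
df_between, df_within = anova_df([[0] * 5, [0] * 5, [0] * 5])
```

Three groups of five subjects give df = (2, 12), the values used in the decision step of the worked example.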

Critical Value of F (F critical) The critical value of F (F crit) depends on: the degrees of freedom (both the df bn = k − 1 and the df wn = N − k) and the selected α. To obtain the F crit from the F statistical table: use the df B (the numerator) across the top of the table, and use the df W (the denominator) along the side of the table.

Worked example of Mean Squares: calculating the Mean Square computation using the Table 8.2 data in the previous example

Computing F obs The analysis of variance yields an F ratio, which compares the variance between groups with the variance within groups: F obs = MS between (bn) / MS within (wn). The larger our calculated F ratio, the more likely it is that we will have a statistically significant result.
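Putting the SS, df, and MS steps together gives F obs end to end. The data below are the same hypothetical three groups used earlier, not the lecture's example.

```python
# End-to-end sketch of the one-way ANOVA F ratio on hypothetical data.
def one_way_f(groups):
    k = len(groups)
    scores = [x for g in groups for x in g]
    n_total = len(scores)
    correction = sum(scores) ** 2 / n_total
    ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - correction
    ss_within = sum(x ** 2 for x in scores) - correction - ss_between
    ms_between = ss_between / (k - 1)        # SS_between / df_between
    ms_within = ss_within / (n_total - k)    # SS_within / df_within
    return ms_between / ms_within            # F_obs

f_obs = one_way_f([[2, 3, 4], [4, 5, 6], [6, 7, 8]])
```

Here MS between = 24/2 = 12 and MS within = 6/6 = 1, so F obs = 12: the between-group variance is twelve times the within-group variance.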

Illustration of another way of computing the Sum of Squares and Mean Squares using the mean method Dr. Richard Boateng, UGBS Slide 28

Example: does family size vary by religious affiliation?

Step 1: Find the mean for each sample

Step 2: Calculate (1) the sum of scores, (2) the sum of squared scores, (3) the number of subjects, and (4) the mean

Computations

DECISION To reject the null hypothesis at the.05 significance level with 2 and 12 degrees of freedom, our calculated F ratio must exceed 3.88. From the computation, our obtained F ratio of 8.24 is clearly greater than the F critical, hence we must reject the null hypothesis. Interpretation: at the 0.05 significance level, family size does indeed vary by religion.
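The decision rule above reduces to a single comparison; the values plugged in below are the ones stated in the slide (F obs = 8.24, F crit = 3.88 at α = .05 with df 2 and 12).

```python
# Sketch of the ANOVA decision rule using the slide's values.
def decide(f_obs, f_crit):
    # Reject H0 only when the obtained F exceeds the critical F.
    return "reject H0" if f_obs > f_crit else "fail to reject H0"

decision = decide(8.24, 3.88)   # the worked example's values
```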

Post Hoc Comparisons When the F-test is significant, we perform post hoc comparisons. Post hoc comparisons are like t-tests: we compare all possible pairs of level means from a factor, one pair at a time, to determine which means differ significantly from each other. Examples: the protected t test method and Fisher's Least Significant Difference (LSD) method

The Protected t Test method The null hypothesis for comparing any pair of means is tested with the formula: t = (X̄1 − X̄2) / √[MS error (1/n1 + 1/n2)], where MS error = MS W is simply taken from the ANOVA results and n1 and n2 are the sizes of the two samples whose means we are comparing. The computed value of t is referred to the t tables at α = 0.05 for a two-tailed test with the degrees of freedom (df) associated with the MS W (= N − k), and a decision is taken as to whether or not H0 should be rejected.

Fisher LSD (Least Significant Difference) method Used when all the groups have equal sample sizes, i.e. n1 = n2 = n3. Then the denominator of the protected t test becomes a constant for all pairwise comparisons. In such a situation, it becomes possible to determine what least significant difference (LSD) between means is needed to reject H0 at any given level of significance: LSD = t √(2 MS error / n), where t refers to the critical value of t with N − k df in a two-tailed test.
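Both post hoc formulas are short enough to sketch directly. The means, group sizes, MS W, and critical t plugged in below are hypothetical stand-ins, not values from the lecture's data.

```python
import math

# Sketch of the two post hoc formulas (hypothetical inputs).
def protected_t(mean1, mean2, n1, n2, ms_within):
    # t = (mean1 - mean2) / sqrt(MS_W * (1/n1 + 1/n2))
    return (mean1 - mean2) / math.sqrt(ms_within * (1 / n1 + 1 / n2))

def fisher_lsd(t_crit, ms_within, n):
    # Smallest mean difference that reaches significance when all
    # groups share the same size n: LSD = t * sqrt(2 * MS_W / n).
    return t_crit * math.sqrt(2 * ms_within / n)

t_pair = protected_t(7.0, 5.0, 3, 3, 1.0)   # hypothetical pair of means
lsd = fisher_lsd(2.179, 1.0, 3)             # hypothetical critical t
```

With equal ns, any pair of means whose absolute difference exceeds the LSD is declared significantly different, with no further t computation needed.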

Two-way ANOVA: overview We have learned how to test for the effects of independent variables considered one at a time. However, much of human behavior is determined by the influence of several variables operating at the same time. Sometimes these variables combine to influence performance.

Two-way ANOVA We need to test for the independent and combined effects of multiple variables on performance. We do this with a two-way ANOVA that asks: (i) how different from each other are the means for levels of Variable A? (ii) how different from each other are the means for levels of Variable B? (iii) how different from each other are the means for the treatment combinations produced by A and B together?

Two-way ANOVA The first two of those questions are questions about main effects of the respective independent variables. The third question is about the interaction effect, the effect of the two variables considered simultaneously.

MAIN vs INTERACTION EFFECTS Main effect: a main effect is the effect on performance of one treatment variable considered in isolation (ignoring other variables in the study). Interaction effect: an interaction effect occurs when the effect of one variable is different across levels of one or more other variables. Slide 42

Illustration In order to detect interaction effects, we must use factorial designs. In a factorial design each variable is tested at every level of all of the other variables. Below, two variables A and B each have two levels (A1, A2 and B1, B2 respectively), giving four cells labelled i to iv:

      A1    A2
B1    i     ii
B2    iii   iv

Illustration Cell i vs cell iii: the effect of B at level A1 of variable A. Cell ii vs cell iv: the effect of B at A2. If these are different, then we say that A and B interact. ALTERNATIVELY, cell i vs cell ii: the effect of A at B1; cell iii vs cell iv: the effect of A at B2. If these are different, then we say that A and B interact.

Illustration [Two line graphs: mean scores for B1 and B2 plotted across levels A1 and A2.] In the graphs above, the effect of A varies at levels of B, and the effect of B varies at levels of A. How you say it is a matter of preference (and your theory). In each case, the interaction is the whole pattern. No part of the graph shows the interaction. It can only be seen in the entire pattern (here, all 4 data points).

Computation of F ratios in Two-Way ANOVA In a Two-Way ANOVA, three F ratios are computed: one F ratio is computed for the factor represented along the rows; a second F ratio is computed for the factor represented along the columns; and a third F ratio is computed for the interaction between the factors represented along the rows and columns. The various F ratios are each referred to the F tables with the appropriate degrees of freedom associated with each F ratio under a specified decision rule, and a decision is taken as to whether or not H0 should be rejected in each case. Slide 46
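The three F ratios can be sketched for a balanced design (equal cell sizes). The 2×2 data set below is hypothetical, and the partitioning shown is the standard one for balanced designs, not a formula taken from the lecture.

```python
# Sketch of the three two-way ANOVA F ratios for a balanced design.
# cells[i][j] holds the scores for level i of factor A (rows) crossed
# with level j of factor B (columns); the data below are hypothetical.
def two_way_f(cells):
    a, b, n = len(cells), len(cells[0]), len(cells[0][0])
    scores = [x for row in cells for cell in row for x in cell]
    c = sum(scores) ** 2 / (a * b * n)                  # correction term
    ss_total = sum(x ** 2 for x in scores) - c
    ss_cells = sum(sum(cell) ** 2 / n for row in cells for cell in row) - c
    ss_a = sum(sum(x for cell in row for x in cell) ** 2 / (b * n)
               for row in cells) - c                    # rows main effect
    ss_b = sum(sum(x for i in range(a) for x in cells[i][j]) ** 2 / (a * n)
               for j in range(b)) - c                   # columns main effect
    ss_ab = ss_cells - ss_a - ss_b                      # interaction
    ms_w = (ss_total - ss_cells) / (a * b * (n - 1))    # within-cells MS
    return (ss_a / (a - 1) / ms_w,                      # F for factor A
            ss_b / (b - 1) / ms_w,                      # F for factor B
            ss_ab / ((a - 1) * (b - 1)) / ms_w)         # F for A x B

f_a, f_b, f_ab = two_way_f([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
```

Each of the three F values is then compared with its own F critical, exactly as in the one-way case; in this toy data set the two main effects are large and the interaction is zero.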