Introduction to Analysis of Variance: Multiple t Tests and Experiments with More than 2 Conditions
Introduction to Analysis of Variance

Experiments with More than 2 Conditions
Often the research that psychologists perform has more conditions than just the control and experimental conditions. You might want to compare the effectiveness of a new treatment to an old treatment and to a placebo control group.
Factor: the variable (independent or quasi-independent) that designates the groups being compared.
Level: the values that the factor can take on.

Multiple t Tests
How would you analyze data from an experiment with more than two levels? Given what we know so far, our only real option would be to perform multiple t-tests. The problem with this approach to the data analysis is that you might have to perform many, many t-tests to test all possible combinations of the levels.
Multiple t-tests
As the number of levels (or conditions) increases, the number of comparisons needed increases more rapidly:
number of comparisons = (n² - n) / 2, where n = number of levels

Why Not Perform Multiple t Tests?
With a computer, it is easy, quick, and error-free to perform multiple t-tests, so why shouldn't we do it? The answer lies in α. α is the probability of making a Type-I error, or incorrectly rejecting H0 when H0 is true. α applies to a single comparison; it is correctly called α comparison-wise.

What happens to the probability of making at least one Type-I error as the number of comparisons increases? It increases too. The probability of making at least one Type-I error across all of your inferential statistical tests is called α familywise, or αfw.
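The growth in the number of pairwise comparisons can be checked with a short sketch (the helper name `num_comparisons` is just an illustration, not from the slides):

```python
def num_comparisons(n):
    """Number of distinct pairwise comparisons among n levels: (n^2 - n) / 2."""
    return (n ** 2 - n) // 2

# 3 levels already need 3 t-tests; 10 levels would need 45
for n in range(2, 11):
    print(n, num_comparisons(n))
```

Note that with only three levels the count equals the number of levels, but it then grows roughly with the square of n.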
αfw
If the probability of making a Type-I error on a given comparison is given by α, what is the probability of making at least one Type-I error when you make n comparisons?
αfw = 1 - (1 - α)^n
This assumes that the Type-I errors are independent. As the number of comparisons increases, the probability of making at least one Type-I error increases rapidly.

Why Not Multiple t-tests?
Performing multiple t-tests is bad because it increases the probability that you will make at least one Type-I error, and you can never determine which statistically significant results, if any, are probably due to chance.
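The familywise error formula above can be evaluated directly; this is a minimal sketch (the function name `alpha_familywise` is an illustrative choice):

```python
def alpha_familywise(alpha, n):
    """P(at least one Type-I error) across n independent comparisons:
    1 - (1 - alpha)^n."""
    return 1 - (1 - alpha) ** n

# With alpha = .05 per comparison, the familywise rate climbs quickly
for n in (1, 3, 10, 45):
    print(n, round(alpha_familywise(0.05, n), 3))
```

With 45 comparisons (ten levels), the chance of at least one Type-I error is already above .90, which is why the omnibus ANOVA is preferred.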
ANOVA
Part of the solution to this problem rests in a statistical procedure known as the analysis of variance, or ANOVA for short. ANOVA replaces the multiple comparisons with a single omnibus null hypothesis (omnibus: covers all aspects). In ANOVA, H0 is of the form:
H0: µ1 = µ2 = µ3 = ... = µn
That is, H0 is that all the means are equal.

ANOVA Alternative Hypothesis
Given that H0 and H1 must be both mutually exclusive and exhaustive, what is H1 for ANOVA? Why isn't this H1?
H1: µ1 ≠ µ2 ≠ µ3 ≠ ... ≠ µn
In ANOVA, the alternative hypothesis is of the form:
H1: not H0, or H1: at least one population mean is different from another.

Two Estimates of Variance
ANOVA compares two estimates of the variability in the data in order to determine if H0 can be rejected:
Between-treatments variance (also called between-groups variance)
Within-treatment variance (also called within-groups variance)
Within-Treatment Variance
Within-treatment variance is the weighted mean variability within each group or condition. Which of the two sets of distributions to the right has a larger within-treatment variance? Why?

What causes variability within a treatment? Within-treatment variation is caused by factors that we cannot or did not control in the experiment, such as individual differences between the participants. Do you want within-treatment variability to be large or small?

Between-Treatments Variance
Between-treatments variance is a measure of how different the groups are from each other. Which set of distributions has a greater between-treatments variance?
Sources of Between-Treatments Variance
What causes, or is the source of, between-treatments variance? That is, why are the groups not all identical to each other? Between-treatments variance is partially caused by the effect that the treatment has on the dependent variable. The larger the effect is, the more different the groups become, and the larger between-treatments variance will be.

Between-treatments variance also measures sampling error. Even if the treatment had no effect on the dependent variable, you still would not expect the distributions to be equal, because samples rarely are identical to each other. That is, some of the between-treatments variance is due to the fact that the groups are initially different due to random fluctuations (or errors) in our sampling.

Variance Summary
Within-treatments variance measures error. Between-treatments variance measures the effect of the treatment on the dependent variable plus error.
F Ratio
Fisher's F ratio is defined as:
F = between-treatments variance / within-treatments variance = MS between / MS within

What should F equal if H0 is true? When H0 is true, there is no effect of the treatment on the DV. Thus, when H0 is true, we have (random, unsystematic differences) / (random, unsystematic differences), which should equal approximately 1.

What should F be if H0 is not true? When H0 is not true, there is an effect of the treatment on the DV. Thus, when H0 is not true, F should be larger than 1.
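The claim that F hovers near 1 under H0 can be illustrated with a small pure-Python simulation. This is only a sketch: the helper `one_way_f` and the population parameters (mean 50, SD 10, three groups of 10) are illustrative assumptions, not from the slides.

```python
import random

def one_way_f(groups):
    """One-way ANOVA F ratio: MS_between / MS_within."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-treatments: weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-treatments: squared deviations of scores about their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

random.seed(1)
# Simulate many experiments in which H0 is true (all groups share one mean)
fs = []
for _ in range(2000):
    groups = [[random.gauss(50, 10) for _ in range(10)] for _ in range(3)]
    fs.append(one_way_f(groups))
print(round(sum(fs) / len(fs), 2))  # average F is close to 1
```

Because the treatment has no effect here, both the numerator and denominator estimate only error variance, so the long-run average F sits near 1 (slightly above, since the mean of an F distribution is df_within / (df_within - 2)).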
F Ratio
The denominator of the F ratio is called the error term or error variance. It provides a measure of the variance due to random, unsystematic differences.

How Big Does F Have to Be?
Several factors influence how big F has to be in order to reject H0. To reject H0, the calculated F must be larger than the critical F, which can be found in a table. Anything that makes the critical F value large will make it more difficult to reject H0.

Factors That Influence the Size of the Critical F
α level: as α decreases, the size of the critical F increases.
Sample size: as the sample size increases, the degrees of freedom for the within-treatments variance increases, and the size of the critical F decreases. As the sample becomes larger, it becomes more representative of the population.
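Instead of a printed table, the critical F can be looked up numerically; this sketch assumes SciPy is available (`scipy.stats.f.ppf` gives the quantile of the F distribution):

```python
from scipy.stats import f

# Smaller alpha -> larger critical F (harder to reject H0)
for alpha in (0.05, 0.01):
    print("alpha =", alpha, "critical F =", round(f.ppf(1 - alpha, 2, 27), 2))

# More within-treatments df (bigger samples) -> smaller critical F
for df_within in (10, 27, 100):
    print("df within =", df_within, "critical F =", round(f.ppf(0.95, 2, df_within), 2))
```

This reproduces the tabled value used later in the example: for df = (2, 27) and α = .05, the critical F is 3.35.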
Factors That Influence the Size of the Critical F
Number of conditions: as the number of conditions increases, so does the degrees of freedom for the between-treatments variance term. For any reasonable sample size, the size of the critical F will decrease as the number of conditions increases.

Factors That Influence the Size of the Observed F
Experimental control: as experimental control increases, within-treatments variance decreases, and the observed F increases.

ANOVA Example
Students used one of three types of mnemonics to learn a list of words: rehearsal, visual imagery, and peg word. The number of words recalled in the correct order was measured. There were 10 students in each level of the factor.
ANOVA Example
Rehearsal: n = 10, M = 6.5, ΣX = 65, ΣX² = 627
SS Rehearsal = 627 - 65²/10 = 204.5
Visual Imagery: n = 10, M = 4.9, ΣX = 49, ΣX² = 445
SS Visual Imagery = 445 - 49²/10 = 204.9
Peg Word: n = 10, M = 16.8, ΣX = 168, ΣX² = 2936
SS Peg Word = 2936 - 168²/10 = 113.6
Totals: N = 30, M = 9.4, ΣX = 282, ΣX² = 4008
SS Total = 4008 - 282²/30 = 1357.2

ANOVA Summary Table
SS = sum of squares, the numerator of the variance formula: SS = Σ(X - M)² = ΣX² - (ΣX)²/n
If n1 = n2 = n3 = n, then SS Between = n × SS M, where SS M is the sum of squares of the treatment means:
ΣM = 6.5 + 4.9 + 16.8 = 28.2; ΣM² = 348.5
SS Between = 10 × (348.5 - 28.2²/3) = 10 × 83.42 = 834.2
SS Within = SS Rehearsal + SS Visual Imagery + SS Peg Word = 204.5 + 204.9 + 113.6 = 523.0
SS Total = SS Between + SS Within = 834.2 + 523.0 = 1357.2
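These sums of squares can be reproduced from the summary statistics alone. This is a verification sketch (the dictionary layout and variable names are illustrative):

```python
# Summary statistics from the mnemonic example: per-group n, ΣX, ΣX²
groups = {
    "rehearsal":      {"n": 10, "sum_x": 65,  "sum_x2": 627},
    "visual imagery": {"n": 10, "sum_x": 49,  "sum_x2": 445},
    "peg word":       {"n": 10, "sum_x": 168, "sum_x2": 2936},
}

# SS for each group: ΣX² - (ΣX)² / n
ss = {name: g["sum_x2"] - g["sum_x"] ** 2 / g["n"] for name, g in groups.items()}
ss_within = sum(ss.values())

N = sum(g["n"] for g in groups.values())
grand_sum = sum(g["sum_x"] for g in groups.values())
grand_sum_x2 = sum(g["sum_x2"] for g in groups.values())
ss_total = grand_sum_x2 - grand_sum ** 2 / N
ss_between = ss_total - ss_within          # SS partitioning: total = between + within

f_ratio = (ss_between / 2) / (ss_within / 27)   # df_between = 2, df_within = 27
print({k: round(v, 1) for k, v in ss.items()})  # {'rehearsal': 204.5, 'visual imagery': 204.9, 'peg word': 113.6}
print(round(ss_between, 1), round(ss_within, 1), round(ss_total, 1))  # 834.2 523.0 1357.2
print(round(f_ratio, 2))  # 21.53
```

Here SS between is obtained by subtraction (SS total minus SS within), which is algebraically equivalent to the n × SS M computation shown on the slide.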
ANOVA Summary Table
df Between = number of conditions - 1 = 3 - 1 = 2
df Within = Σ(ni - 1) = (10 - 1) + (10 - 1) + (10 - 1) = 27
df Total = N - 1 = 30 - 1 = 29
df Total = df Between + df Within = 2 + 27 = 29

MS Between = SS Between / df Between = 834.2 / 2 = 417.1
MS Within = SS Within / df Within = 523.0 / 27 = 19.37
F = MS Between / MS Within = 417.1 / 19.37 = 21.53

Critical Value of F
Reject H0 when the calculated F ratio is at least as large as the critical F ratio. Find the critical F ratio in a table of critical F ratios; you need to know df between treatments, df within treatment, and α. For this summary table, df between treatments = 2, df within treatment = 27, and α = .05, giving a critical F of 3.35. Since the calculated F = 21.53 is larger than the critical F = 3.35, we should reject H0.
Effect Sizes with ANOVA
r² is the proportion of the total variance that is explainable by the treatment. For ANOVA, r² is reported as η² (eta squared) in the literature.

ANOVA Assumptions
ANOVA makes certain assumptions:
The populations from which the samples are selected are normally distributed.
Homogeneity of variance: the variability within each group is approximately equal.
Independence of observations.

ANOVA is fairly robust (it will give good results even if the assumptions are violated) to the normality assumption and the homogeneity of variance assumption, as long as the number of participants in each group is equal and the number of participants in each group is fairly large.
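For the mnemonic example, η² follows directly from the SS values already computed; a one-line sketch (η² = SS between / SS total, the standard definition):

```python
# Eta squared for the mnemonic example: proportion of total variability
# explained by the treatment
ss_between = 834.2
ss_total = 1357.2
eta_squared = ss_between / ss_total
print(round(eta_squared, 2))  # 0.61
```

So the mnemonic factor explains roughly 61% of the total variance in recall, a large effect.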
Post Hoc Tests / Multiple Comparisons
ANOVA only tells us if there is an effect; that is, that the means of the groups are not all equal. ANOVA does not tell us which means are different from which other means. Multiple comparisons, or post hoc tests, are used after you reject ANOVA's H0 to determine which means are probably different from which other means.

Post Hoc Tests
Post hoc tests are not fundamentally different from performing a t-test; both answer the same question: are two means different from each other? The difference is that the post hoc tests protect us from making a Type-I error even though we perform many such comparisons.

There are many types of post hoc tests, e.g., Tukey, Newman-Keuls, Dunnett, Scheffé, and Bonferroni. There is little agreement about which particular one is best; the different tests trade off statistical power for protection from Type-I errors.
Tukey Tests
We will consider only the Tukey test; other people may feel that other tests are more appropriate. The Tukey test offers reasonable protection from Type-I errors while maintaining reasonable statistical power.

You should perform Tukey tests only when two criteria have been met:
There are more than two levels to the factor / IV.
The factor / IV is statistically significant.

Steps in Performing Tukey Tests
Write the hypotheses:
H0: µ1 = µ2
H1: µ1 ≠ µ2
Specify the α level: α = .05
Steps in Performing Tukey Tests
Calculate the honestly significant difference (HSD):
HSD = q(α, k, df within-treatment) × √(MS within-treatment / n)
where q(α, k, df within-treatment) = tabled q value; α = α level; k = number of levels of the IV; df within-treatment = degrees of freedom for MS within-treatment; MS within-treatment = within-treatment variance estimate; n = number of scores in each treatment.

Take the difference of the means of the conditions you are comparing. If the difference of the means is at least as large as the HSD, you can reject H0. Repeat for whatever other comparisons need to be made.

Tukey Example
Mnemonic means: Rehearsal = 6.5, Visual Imagery = 4.9, Peg Word = 16.8
Summary table: SS Between = 834.2, df = 2, MS = 417.1; SS Within = 523.0, df = 27, MS = 19.37; F = 21.53
Tukey Example
HSD = q(.05, 3, 27) × √(19.37 / 10) ≈ 3.51 × 1.39 = 4.88
If a pair of means is at least the HSD, 4.88, apart, then they are reliably different.

H0: µ Rehearsal = µ Visual Imagery
H1: µ Rehearsal ≠ µ Visual Imagery
If |M Rehearsal - M Visual Imagery| ≥ HSD, then reject H0.
|6.5 - 4.9| = 1.6 < 4.88, so fail to reject H0.
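All three pairwise Tukey comparisons can be run in one loop. This is a sketch: the q value is hardcoded from a studentized-range table (interpolated for α = .05, k = 3, df = 27), and the variable names are illustrative:

```python
import math

# Tabled studentized-range value, interpolated for alpha = .05, k = 3, df = 27
q = 3.506
ms_within = 523.0 / 27   # from the ANOVA summary table
n = 10                   # scores per treatment

hsd = q * math.sqrt(ms_within / n)
print(round(hsd, 2))  # 4.88

means = {"rehearsal": 6.5, "visual imagery": 4.9, "peg word": 16.8}
names = list(means)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = names[i], names[j]
        diff = abs(means[a] - means[b])
        verdict = "reject H0" if diff >= hsd else "fail to reject H0"
        print(f"{a} vs {b}: |diff| = {diff:.1f} -> {verdict}")
```

The loop reproduces the three decisions on these slides: only the rehearsal vs. visual-imagery pair falls short of the HSD.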
Tukey Example
H0: µ Rehearsal = µ Peg Word
H1: µ Rehearsal ≠ µ Peg Word
If |M Rehearsal - M Peg Word| ≥ HSD, then reject H0.
|6.5 - 16.8| = 10.3 ≥ 4.88, so reject H0.

H0: µ Visual Imagery = µ Peg Word
H1: µ Visual Imagery ≠ µ Peg Word
If |M Visual Imagery - M Peg Word| ≥ HSD, then reject H0.
|4.9 - 16.8| = 11.9 ≥ 4.88, so reject H0.

F = t²
When there are only two levels to the factor and you are performing a two-tailed test, you can use either the t-test or ANOVA. The calculated value of F will equal the square of the calculated value of t, and the critical value of F will equal the square of the critical value of t. Thus, you will always reach the same conclusion: either reject H0 or fail to reject H0.
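The F = t² relationship can be demonstrated numerically on any two-group data set; this sketch uses made-up scores and illustrative helper names (`t_independent`, `f_two_groups`):

```python
import math

def t_independent(a, b):
    """Independent-samples t with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)        # pooled variance estimate
    return (ma - mb) / math.sqrt(sp2 / na + sp2 / nb)

def f_two_groups(a, b):
    """One-way ANOVA F ratio for exactly two groups (df_between = 1)."""
    n = len(a) + len(b)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    gm = (sum(a) + sum(b)) / n
    ss_between = len(a) * (ma - gm) ** 2 + len(b) * (mb - gm) ** 2
    ss_within = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    return (ss_between / 1) / (ss_within / (n - 2))

a = [3, 5, 7, 4, 6]   # hypothetical scores, group 1
b = [8, 9, 7, 10, 6]  # hypothetical scores, group 2
t = t_independent(a, b)
F = f_two_groups(a, b)
print(round(t, 2), round(F, 2), round(t ** 2, 6) == round(F, 6))  # t squared equals F
```

Because both statistics are built from the same pooled error variance, the equality holds exactly, not just approximately.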
BIOL 458 - Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES PART 1: INTRODUCTION TO ANOVA Purpose of ANOVA Analysis of Variance (ANOVA) is an extremely useful statistical method
More informationChapter 24. Comparing Means. Copyright 2010 Pearson Education, Inc.
Chapter 24 Comparing Means Copyright 2010 Pearson Education, Inc. Plot the Data The natural display for comparing two groups is boxplots of the data for the two groups, placed side-by-side. For example:
More informationSampling distribution of t. 2. Sampling distribution of t. 3. Example: Gas mileage investigation. II. Inferential Statistics (8) t =
2. The distribution of t values that would be obtained if a value of t were calculated for each sample mean for all possible random of a given size from a population _ t ratio: (X - µ hyp ) t s x The result
More informationSummary of Chapter 7 (Sections ) and Chapter 8 (Section 8.1)
Summary of Chapter 7 (Sections 7.2-7.5) and Chapter 8 (Section 8.1) Chapter 7. Tests of Statistical Hypotheses 7.2. Tests about One Mean (1) Test about One Mean Case 1: σ is known. Assume that X N(µ, σ
More informationAn inferential procedure to use sample data to understand a population Procedures
Hypothesis Test An inferential procedure to use sample data to understand a population Procedures Hypotheses, the alpha value, the critical region (z-scores), statistics, conclusion Two types of errors
More informationStatistical Inference for Means
Statistical Inference for Means Jamie Monogan University of Georgia February 18, 2011 Jamie Monogan (UGA) Statistical Inference for Means February 18, 2011 1 / 19 Objectives By the end of this meeting,
More informationIntroduction to Analysis of Variance (ANOVA) Part 2
Introduction to Analysis of Variance (ANOVA) Part 2 Single factor Serpulid recruitment and biofilms Effect of biofilm type on number of recruiting serpulid worms in Port Phillip Bay Response variable:
More informationpsyc3010 lecture 2 factorial between-ps ANOVA I: omnibus tests
psyc3010 lecture 2 factorial between-ps ANOVA I: omnibus tests last lecture: introduction to factorial designs next lecture: factorial between-ps ANOVA II: (effect sizes and follow-up tests) 1 general
More informationNote: k = the # of conditions n = # of data points in a condition N = total # of data points
The ANOVA for2 Dependent Groups -- Analysis of 2-Within (or Matched)-Group Data with a Quantitative Response Variable Application: This statistic has two applications that can appear very different, but
More informationLaboratory Topics 4 & 5
PLS205 Lab 3 January 23, 2014 Orthogonal contrasts Class comparisons in SAS Trend analysis in SAS Multiple mean comparisons Laboratory Topics 4 & 5 Orthogonal contrasts Planned, single degree-of-freedom
More informationHypothesis Testing and Confidence Intervals (Part 2): Cohen s d, Logic of Testing, and Confidence Intervals
Hypothesis Testing and Confidence Intervals (Part 2): Cohen s d, Logic of Testing, and Confidence Intervals Lecture 9 Justin Kern April 9, 2018 Measuring Effect Size: Cohen s d Simply finding whether a
More informationAnalysis of Variance ANOVA. What We Will Cover in This Section. Situation
Analysis of Variance ANOVA 8//007 P7 Analysis of Variance What We Will Cover in This Section Introduction. Overview. Simple ANOVA. Repeated Measures ANOVA. Factorial ANOVA 8//007 P7 Analysis of Variance
More information1 DV is normally distributed in the population for each level of the within-subjects factor 2 The population variances of the difference scores
One-way Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti Putra Malaysia Serdang The purpose is to test the
More informationKeppel, G. & Wickens, T.D. Design and Analysis Chapter 2: Sources of Variability and Sums of Squares
Keppel, G. & Wickens, T.D. Design and Analysis Chapter 2: Sources of Variability and Sums of Squares K&W introduce the notion of a simple experiment with two conditions. Note that the raw data (p. 16)
More informationIn a one-way ANOVA, the total sums of squares among observations is partitioned into two components: Sums of squares represent:
Activity #10: AxS ANOVA (Repeated subjects design) Resources: optimism.sav So far in MATH 300 and 301, we have studied the following hypothesis testing procedures: 1) Binomial test, sign-test, Fisher s
More informationCOGS 14B: INTRODUCTION TO STATISTICAL ANALYSIS
COGS 14B: INTRODUCTION TO STATISTICAL ANALYSIS TA: Sai Chowdary Gullapally scgullap@eng.ucsd.edu Office Hours: Thursday (Mandeville) 3:30PM - 4:30PM (or by appointment) Slides: I am using the amazing slides
More informationStatistics Primer. ORC Staff: Jayme Palka Peter Boedeker Marcus Fagan Trey Dejong
Statistics Primer ORC Staff: Jayme Palka Peter Boedeker Marcus Fagan Trey Dejong 1 Quick Overview of Statistics 2 Descriptive vs. Inferential Statistics Descriptive Statistics: summarize and describe data
More informationANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4)
ANALYTICAL COMPARISONS AMONG TREATMENT MEANS (CHAPTER 4) ERSH 8310 Fall 2007 September 11, 2007 Today s Class The need for analytic comparisons. Planned comparisons. Comparisons among treatment means.
More informationSpecific Differences. Lukas Meier, Seminar für Statistik
Specific Differences Lukas Meier, Seminar für Statistik Problem with Global F-test Problem: Global F-test (aka omnibus F-test) is very unspecific. Typically: Want a more precise answer (or have a more
More informationComparisons among means (or, the analysis of factor effects)
Comparisons among means (or, the analysis of factor effects) In carrying out our usual test that μ 1 = = μ r, we might be content to just reject this omnibus hypothesis but typically more is required:
More informationAMS7: WEEK 7. CLASS 1. More on Hypothesis Testing Monday May 11th, 2015
AMS7: WEEK 7. CLASS 1 More on Hypothesis Testing Monday May 11th, 2015 Testing a Claim about a Standard Deviation or a Variance We want to test claims about or 2 Example: Newborn babies from mothers taking
More informationN J SS W /df W N - 1
One-Way ANOVA Source Table ANOVA MODEL: ij = µ* + α j + ε ij H 0 : µ = µ =... = µ j or H 0 : Σα j = 0 Source Sum of Squares df Mean Squares F J Between Groups nj( j * ) J - SS B /(J ) MS B /MS W = ( N
More information1 Descriptive statistics. 2 Scores and probability distributions. 3 Hypothesis testing and one-sample t-test. 4 More on t-tests
Overall Overview INFOWO Statistics lecture S3: Hypothesis testing Peter de Waal Department of Information and Computing Sciences Faculty of Science, Universiteit Utrecht 1 Descriptive statistics 2 Scores
More information