Week 14 Comparing k(> 2) Populations


2 Week 14 Objectives
Methods associated with testing for the equality of k(> 2) means or proportions are presented. Post-testing concepts and analysis are introduced. In particular:
1. The principle of analysis of variance is used to derive the F test for comparing k(> 2) means.
2. The χ² test for comparing k(> 2) proportions is presented in its contingency table form, and its relation to the F test is explained.
3. The rank-based Kruskal-Wallis test is presented.
4. The concepts of multiple comparisons and simultaneous CIs are introduced, and methods for their implementation are presented.

3 Bonferroni, Tukey's

4 The Statistical Model and Hypothesis
X_{i1}, X_{i2}, ..., X_{in_i}, i = 1, ..., k, are independent samples, with X_{ij} = µ_i + ε_{ij} and Var(ε_{ij}) = σ_i². Write µ_i = µ + α_i, where µ = (1/k) Σ_{i=1}^k µ_i and α_i = µ_i − µ is the effect of population i (of factor level i). Of interest is the hypothesis of no effects (all α_i = 0), that is,
H_0: µ_1 = µ_2 = ... = µ_k vs H_a: H_0 is false.
The sample mean and sample variance from population i are denoted X̄_i and S_i², i = 1, ..., k.

5 The basic idea As in regression, variability is represented by the so-called sums of squares, or SS, and the ANOVA approach decomposes the total variability into components. For the comparison of k means, one of the components is called between groups variability and the other is called within groups variability. If the between groups variability is large compared to the within groups variability, the hypothesis of equal means is rejected.

6 Assume homoscedasticity, i.e., Var(ε_{ij}) = σ² for all i = 1, ..., k, set N = n_1 + ... + n_k, and define
X̄ = (1/N) Σ_{i=1}^k Σ_{j=1}^{n_i} X_{ij} = (1/N) Σ_{i=1}^k n_i X̄_i.
The overall mean X̄ is the sample mean of all the data.
SST = Σ_{i=1}^k Σ_{j=1}^{n_i} (X_{ij} − X̄)²   (Total Sum of Squares)
SST is the numerator of the sample variance of all the data.

7 SSE = Σ_{i=1}^k Σ_{j=1}^{n_i} (X_{ij} − X̄_i)²   (Error Sum of Squares)
SSE represents the within-group variability.
SSTr = Σ_{i=1}^k n_i (X̄_i − X̄)²   (Treatment Sum of Squares)
SSTr represents the between-group variability.
It can be shown that SST = SSTr + SSE.
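The identity SST = SSTr + SSE is easy to verify numerically. Here is a minimal Python sketch (the function name and the small data set are hypothetical, for illustration only):

```python
def sums_of_squares(samples):
    """Return (SST, SSTr, SSE) for a list of k independent samples."""
    all_obs = [x for s in samples for x in s]
    N = len(all_obs)
    grand_mean = sum(all_obs) / N
    group_means = [sum(s) / len(s) for s in samples]

    sst = sum((x - grand_mean) ** 2 for x in all_obs)        # total variability
    sstr = sum(len(s) * (m - grand_mean) ** 2                # between-group variability
               for s, m in zip(samples, group_means))
    sse = sum((x - m) ** 2                                   # within-group variability
              for s, m in zip(samples, group_means) for x in s)
    return sst, sstr, sse

# Hypothetical data: three samples of size 3
sst, sstr, sse = sums_of_squares([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(sst, sstr, sse)  # 12.0 6.0 6.0, and indeed SST = SSTr + SSE
```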

8 Each SS has its own DF (recall N = n_1 + ... + n_k):
DF_SST = N − 1, DF_SSE = N − k, DF_SSTr = k − 1.
Dividing each SS by its DF we obtain the mean squares:
MSE = SSE/(N − k), MSTr = SSTr/(k − 1).
MSE is the k-sample version of the 2-sample pooled variance:
MSE = S²_p = [(n_1 − 1)S_1² + ... + (n_k − 1)S_k²]/(n_1 + ... + n_k − k).
Thus, MSE is an unbiased estimator of the common error variance σ².
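The claim that MSE equals the pooled variance S²_p can be checked directly; a small Python sketch (hypothetical data, using the standard library's sample variance):

```python
from statistics import variance

samples = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]   # hypothetical data, k = 3
k = len(samples)
N = sum(len(s) for s in samples)
means = [sum(s) / len(s) for s in samples]

# MSE = SSE/(N - k)
sse = sum((x - m) ** 2 for s, m in zip(samples, means) for x in s)
mse = sse / (N - k)

# Pooled variance: (n_i - 1)-weighted average of the k sample variances
sp2 = sum((len(s) - 1) * variance(s) for s in samples) / (N - k)
print(mse, sp2)  # the two agree
```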

9 These calculations are summarized in the ANOVA table:
Source      df      SS     MS                    F
Treatment   k − 1   SSTr   MSTr = SSTr/(k − 1)   F = MSTr/MSE
Error       N − k   SSE    MSE = SSE/(N − k)
Total       N − 1   SST
Under H_0, and the assumption of normality,
F_{H_0} = MSTr/MSE ~ F_{k−1,N−k}.
Without normality, the above holds approximately if all sample sizes are ≥ 30.

10 If H_0 does not hold, the statistic F_{H_0} tends to take larger values. Thus, H_0 is rejected at level α if F_{H_0} > F_{k−1,N−k,α}, where F_{k−1,N−k,α} denotes the (1 − α)100th percentile of the F distribution with k − 1 and N − k degrees of freedom.
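The whole procedure can be sketched in a few lines of Python (the data are hypothetical, and the critical value quoted in the comment is a standard F-table entry, not computed by the code):

```python
def anova_f(samples):
    """One-way ANOVA F statistic, F = MSTr/MSE, for a list of k samples."""
    k = len(samples)
    all_obs = [x for s in samples for x in s]
    N = len(all_obs)
    grand_mean = sum(all_obs) / N
    means = [sum(s) / len(s) for s in samples]
    sstr = sum(len(s) * (m - grand_mean) ** 2 for s, m in zip(samples, means))
    sse = sum((x - m) ** 2 for s, m in zip(samples, means) for x in s)
    mstr = sstr / (k - 1)   # treatment mean square, df = k - 1
    mse = sse / (N - k)     # error mean square,     df = N - k
    return mstr / mse

# Hypothetical data: k = 3 samples of size 3, so df = (2, 6)
f = anova_f([[3, 5, 4], [6, 8, 7], [5, 7, 9]])
print(f)  # 4.5
# Reject H0 at level 0.10 iff f > F_{2,6,0.10} ≈ 3.46 (F table); here H0 is rejected.
```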

11 Example
The following data resulted from comparing the degree of soiling in fabric treated with three different mixtures of methacrylic acid.
Mix 1:
Mix 2:
Mix 3:
Test H_0: µ_1 = µ_2 = µ_3 vs H_a: H_0 is false, at α = 0.1.

12 Example (Continued)
Solution. Because of the small sample sizes, we need to assume that the three populations are normal and homoscedastic. With these data, X̄_1 = 0.918, X̄_2 = 0.794, X̄_3 = and X̄ = . Also, the three sample variances are S_1² = , S_2² = and S_3² = . Thus,
SSTr = 5(X̄_1 − X̄)² + 5(X̄_2 − X̄)² + 5(X̄_3 − X̄)² = ,
SSE = 4S_1² + 4S_2² + 4S_3² = .
It follows that SST = SSTr + SSE = , MSTr = SSTr/(3 − 1) = ,

13 Example (Continued)
MSE = SSE/(15 − 3) = and F_{H_0} = MSTr/MSE = . The above calculations are summarized in the following ANOVA table:
Source      df           SS   MS   F
Treatment   k − 1 = 2
Error       N − k = 12
Total       N − 1 = 14
The rejection rule specifies that H_0 be rejected if F_{H_0} > F_{2,12,0.1}. Since F_{2,12,0.1} = 2.81 (found in R by qf(0.9, 2, 12)), H_0 is not rejected.

14 F and χ² Tests
The F and χ² distributions are connected by the following result: if Y ~ F_{ν_1,ν_2} with ν_2 large, then ν_1 Y ≈ χ²_{ν_1}. For example, under H_0,
Q_{H_0} = (k − 1)F_{H_0} = Σ_{i=1}^k n_i (X̄_i − X̄)² / S²_p ≈ χ²_{k−1}.
The χ² statistic for testing the equality of k proportions is essentially the Q_{H_0} statistic adjusted to Bernoulli data.

15 Computational Form of Q_{H_0}
To adjust Q_{H_0} to Bernoulli data, replace X̄_i by p̂_i, X̄ by the pooled proportion of successes p̂, and S²_p by p̂(1 − p̂). Thus the test statistic for H_0: p_1 = ... = p_k is
Q_{H_0} = Σ_{i=1}^k n_i (p̂_i − p̂)² / [p̂(1 − p̂)],
and H_0 is rejected at level α if Q_{H_0} > χ²_{k−1,α}. χ²_{k−1,α} is found in R by qchisq(1-α, k-1). The p-value corresponding to Q_{H_0} is found in R by 1-pchisq(Q_{H_0}, k-1).

16 The Contingency Table Form of Q_{H_0}
An alternative form of the test statistic Q_{H_0}, called the contingency table form, is more common:
Q_{H_0} = Σ_{i=1}^k Σ_{l=1}^2 (O_{li} − E_{li})² / E_{li},
where O_{1i} = n_i p̂_i, O_{2i} = n_i (1 − p̂_i), E_{1i} = n_i p̂, E_{2i} = n_i (1 − p̂).
The algebraic equivalence of the two forms of Q_{H_0} is shown on page 357 of the book. The contingency table form is not recommended for hand calculations.

17 Example (Pilot response time for different panel designs)
The sample sizes, n_i, and the number of times, O_{1i}, that the response times were below 3 seconds for the four designs are as follows: n_1 = 45, O_{11} = 29; n_2 = 50, O_{12} = 42; n_3 = 55, O_{13} = 28; n_4 = 50, O_{14} = 24. Perform the test at α = 0.05.
Solution. We will use the computational form of Q_{H_0}. Here, each p̂_i = O_{1i}/n_i, so p̂_1 = 0.644, p̂_2 = 0.84, p̂_3 = 0.509 and p̂_4 = 0.48. Also,
p̂ = (O_{11} + O_{12} + O_{13} + O_{14})/(n_1 + n_2 + n_3 + n_4) = 0.615,
so that S²_p = p̂(1 − p̂) = 0.2368. Thus,

18 Example (Example continued)
Q_{H_0} = [45(0.644 − 0.615)² + 50(0.84 − 0.615)² + 55(0.509 − 0.615)² + 50(0.48 − 0.615)²]/0.2368 = 17.31.
Since 17.31 > χ²_{3,0.05} = 7.815, H_0 is rejected.
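The computational form of Q_{H_0} is a one-liner to script. The Python sketch below (the helper name is my own) reproduces this example from the counts given above:

```python
def q_stat(counts, sizes):
    """Computational form of the chi-square statistic for H0: p1 = ... = pk."""
    phats = [o / n for o, n in zip(counts, sizes)]
    pbar = sum(counts) / sum(sizes)   # pooled proportion of successes
    return sum(n * (p - pbar) ** 2 for p, n in zip(phats, sizes)) / (pbar * (1 - pbar))

# Pilot response-time data: O_1i successes out of n_i trials for the four designs
q = q_stat([29, 42, 28, 24], [45, 50, 55, 50])
print(round(q, 2))  # 17.31; since this exceeds chi2_{3,0.05} = 7.815, H0 is rejected
```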

19 The extension of the two-sample rank-sum test procedure to k samples, i.e., for testing
H_0^F: F_1 = ... = F_k, (4.1)
is called the Kruskal-Wallis test. Like the rank-sum test, the Kruskal-Wallis test uses the (mid-)ranks of the observations: combine the observations X_{i1}, ..., X_{in_i}, i = 1, ..., k, from the k samples into an overall set of N = n_1 + ... + n_k observations, and arrange them from smallest to largest.

20 Let R_{ij} denote the (mid-)rank of observation X_{ij}, and set
R̄_i = (1/n_i) Σ_{j=1}^{n_i} R_{ij},   S²_KW = (1/(N − 1)) Σ_{i=1}^k Σ_{j=1}^{n_i} (R_{ij} − (N + 1)/2)².
Note that S²_KW is the sample variance of the collection of all ranks. The Kruskal-Wallis test statistic is
KW_k = (1/S²_KW) Σ_{i=1}^k n_i (R̄_i − (N + 1)/2)².

21 If there are no ties a simpler expression holds:
KW_k = [12/(N(N + 1))] Σ_{i=1}^k n_i (R̄_i − (N + 1)/2)².
If the k populations have continuous distributions, the exact null distribution of KW_k is known. For hand implementation, we require n_i > 8 for all i, and use the approximate null distribution KW_k ≈ χ²_{k−1}. Thus, the rejection region at level α is KW_k > χ²_{k−1,α}, and the p-value is computed in R by 1-pchisq(KW_k, k-1).
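These formulas can be sketched directly in Python (hypothetical data; the mid-rank helper averages the sorted positions of tied values, so the S²_KW form remains valid with ties):

```python
def midranks(values):
    """Mid-rank of each value: average of the 1-based sorted positions it occupies."""
    positions = {}
    for pos, v in enumerate(sorted(values), start=1):
        positions.setdefault(v, []).append(pos)
    return [sum(positions[v]) / len(positions[v]) for v in values]

def kruskal_wallis(samples):
    """KW statistic in the S^2_KW form, valid with or without ties."""
    sizes = [len(s) for s in samples]
    N = sum(sizes)
    ranks = midranks([x for s in samples for x in s])
    c = (N + 1) / 2                                   # mean of the ranks 1, ..., N
    s2 = sum((r - c) ** 2 for r in ranks) / (N - 1)   # sample variance of all ranks
    kw, start = 0.0, 0
    for n in sizes:
        rbar = sum(ranks[start:start + n]) / n        # group rank average
        kw += n * (rbar - c) ** 2
        start += n
    return kw / s2

# Hypothetical tie-free data: agrees with the 12/(N(N+1)) form, which also gives 7.2
kw = kruskal_wallis([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(kw)  # 7.2
```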

22 Example (Rank procedures for the flammability data)
A flammability test was performed on six pieces from each of three types of fabric used in children's clothing. The response variable is the length of the burn mark. The data are given in Flammability.txt. Test the hypothesis that there is no difference in flammability among the three materials, at α = 0.05.
Solution. The ranks and the three rank averages R̄_i are:
             Ranks                R̄_i
Material 1
Material 2
Material 3

23 Example (Example continued)
The sample variance of the combined ranks is S²_KW = . Thus,
KW_k = (1/S²_KW)[6(R̄_1 − 9.5)² + 6(R̄_2 − 9.5)² + 6(R̄_3 − 9.5)²] = 4.823.
The R command 1-pchisq(4.823, 2) yields a p-value of 0.0897, so H_0 cannot be rejected at level 0.05.

24 Bonferroni, Tukey's

25 When H_0: µ_1 = µ_2 = ... = µ_k is rejected, it is not clear which of the µ_i are significantly different. It would seem that this question can be addressed quite simply by making all pairwise comparisons. For example, if k = 5 and H_0 is rejected, we can make CIs for all 10 pairwise differences, µ_1 − µ_2, ..., µ_1 − µ_5, µ_2 − µ_3, ..., µ_2 − µ_5, ..., µ_4 − µ_5, and if a CI does not include 0, the corresponding difference is declared significantly different from 0. Alternatively, we can test each of the 10 null hypotheses H_0^{ij}: µ_i = µ_j, for 1 ≤ i < j ≤ 5, against the two-sided alternative. With some fine-tuning, this idea works.

26 The fine-tuning is needed to assure that the overall, or experiment-wise, error rate does not exceed α. The word "experiment" in the term experiment-wise error rate refers to all pairwise comparisons. The experiment-wise error rate is defined as the probability of at least one pair of means being declared different when all means are equal. The experiment-wise error rate is nothing but the type I error rate for the procedure that rejects H_0: µ_1 = µ_2 = ... = µ_k if at least one of the H_0^{ij} is rejected, or if at least one of the (k choose 2) CIs for the pairwise differences does not include 0. Fine-tuning is needed because the experiment-wise error rate is different from the level of significance used for testing each H_0^{ij}.

27 To appreciate how much the experiment-wise error rate can differ from the level of significance used for each H_0^{ij}, consider k = 5 and suppose the 10 tests are performed independently. (They are not really!) Then the probability that no H_0^{ij} is rejected when in fact H_0 is true is (1 − α)^10, and thus the experiment-wise error rate is 1 − (1 − α)^10. If α = 0.05, this is 1 − (1 − 0.05)^10 = 0.40. In spite of the unrealistic independence assumption, the above calculation gives a fairly close approximation to the true experiment-wise error rate.
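The arithmetic above is worth checking once; a tiny Python sketch under the same (unrealistic) independence assumption:

```python
alpha = 0.05   # level of each individual pairwise test
m = 10         # number of pairwise tests when k = 5
# Experiment-wise error rate if the m tests were independent
ewer = 1 - (1 - alpha) ** m
print(round(ewer, 3))  # 0.401
```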

28 Confidence intervals which control the experiment-wise error rate at a desired level α will be called (1 − α)100% simultaneous confidence intervals. Thus, 10 CIs are called 90% simultaneous confidence intervals if 90% of the time they all contain the true value of the respective difference or, equivalently, 10% of the time at least one of them does not contain the true value of the respective difference. The term multiple comparisons refers to a test procedure that controls the experiment-wise error rate. Multiple comparisons can be done through simultaneous CIs, or via hypothesis testing.

29 We will see two methods of constructing simultaneous CIs.
Bonferroni CIs: These use Bonferroni's inequality to give a lower bound on the level of the simultaneous CIs (or an upper bound on the experiment-wise error rate).
Tukey's CIs: The level of these simultaneous CIs is exact when sampling from normal homoscedastic populations. They can also be used as a good approximation with large samples from any distribution. Tukey's method can also be applied when sampling from skewed homoscedastic populations with large sample sizes, as well as to the ranks of the observations with smaller sample sizes.

30 Bonferroni, Tukey's

31 Bonferroni's CIs achieve the desired level by adjusting the level of the traditional CIs. Exact adjustment is not possible due to the dependence of the CIs. But Bonferroni's inequality guarantees that the intervals are conservative (i.e., the exact level of 90% Bonferroni CIs is higher than 90%). Bonferroni's inequality asserts that the simultaneous coverage of m CIs, each of which has confidence level 1 − α, is at least 1 − mα. Equivalently, if each of m pairwise tests is performed at level α, the experiment-wise error rate is no greater than mα.

32 For example, if we are comparing k means (or proportions) there are (k choose 2) pairwise comparisons to be made. For k = 3, 4, 5 we have 3, 6, and 10 pairwise comparisons, respectively. Assuming the pairwise 95% CIs are independently constructed, their simultaneous coverage and the corresponding Bonferroni lower bound are (0.857, 0.85), (0.735, 0.7), and (0.599, 0.5), respectively. The above was computed by the R commands:
a=0.05; (1-a)**3; 1-3*a; (1-a)**6; 1-6*a; (1-a)**10; 1-10*a

33 The above discussion leads to the following procedure for constructing (1 − α)100% Bonferroni simultaneous CIs and multiple comparisons: For each of the m contrasts, construct a (1 − α/m)100% CI. This set of m CIs constitutes the (1 − α)100% Bonferroni simultaneous CIs for the m contrasts.
Bonferroni multiple comparisons at level α: If any of the m (1 − α)100% Bonferroni simultaneous CIs does not contain zero, the corresponding contrast is declared significantly different from zero at experiment-wise level α.
Bonferroni multiple comparisons can also be conducted with pairwise testing, without CIs. Each of the m pairwise tests is conducted at level α/m. Those rejected are declared significantly different at experiment-wise level α.

34 Example
Test, at α = 0.05, the null hypothesis that the four panel designs have no effect on whether or not the pilot reaction time is below 3 seconds (H_0: p_1 = p_2 = p_3 = p_4 vs H_a: H_0 is false) using Bonferroni multiple comparisons.
Solution: We will construct 95% Bonferroni simultaneous CIs for the contrasts p_1 − p_2, p_1 − p_3, p_1 − p_4, p_2 − p_3, p_2 − p_4, p_3 − p_4. Because there are m = 6 contrasts, we construct (1 − 0.05/6)100% = 99.17% CIs for each of the above contrasts. The data are: n_1 = 45, O_{11} = 29; n_2 = 50, O_{12} = 42; n_3 = 55, O_{13} = 28; n_4 = 50, O_{14} = 24.

35 Example (Continued)
The CIs for each contrast were constructed according to
p̂_i − p̂_j ± z_{α_B/2} sqrt( p̂_i(1 − p̂_i)/n_i + p̂_j(1 − p̂_j)/n_j ),
where α_B = 0.05/6. For example, the following R commands compute the CI for p_1 − p_2:
n1=45; n2=50; p1=29/n1; p2=42/n2
p1-p2 - qnorm(1-(0.05/6)/2)*sqrt(p1*(1-p1)/n1+p2*(1-p2)/n2)
p1-p2 + qnorm(1-(0.05/6)/2)*sqrt(p1*(1-p1)/n1+p2*(1-p2)/n2)
The second and third commands give −0.428 and 0.037, respectively. The command prop.test(c(29,42), c(45,50), conf.level=1-0.05/6, correct=FALSE) gives the same result.

36 Example (Continued)
The resulting CIs can be presented in a table:
Contrast    99.17% CI          Contains zero?
p_1 − p_2   (−0.428, 0.037)    Yes
p_1 − p_3   (−0.124, 0.394)    Yes
p_1 − p_4   (−0.101, 0.429)    Yes
p_2 − p_3   (0.106, 0.555)     No
p_2 − p_4   (0.129, 0.591)     No
p_3 − p_4   (−0.229, 0.287)    Yes
Only p_2 − p_3 and p_2 − p_4 are significantly different from zero at experiment-wise level α = 0.05. Thus, H_0: p_1 = ... = p_4 was rejected due to significant differences between panel design 2 and panel designs 3 and 4.
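The whole table can be reproduced in a few lines; the Python sketch below uses the standard library's statistics.NormalDist in place of R's qnorm (the loop and variable names are my own):

```python
from math import sqrt
from statistics import NormalDist
from itertools import combinations

counts = [29, 42, 28, 24]   # O_1i: successes for the four panel designs
sizes = [45, 50, 55, 50]    # n_i
m = 6                       # number of pairwise contrasts
z = NormalDist().inv_cdf(1 - (0.05 / m) / 2)   # z_{alpha_B/2} with alpha_B = 0.05/6

results = {}
for i, j in combinations(range(4), 2):
    pi, pj = counts[i] / sizes[i], counts[j] / sizes[j]
    se = sqrt(pi * (1 - pi) / sizes[i] + pj * (1 - pj) / sizes[j])
    lo, hi = pi - pj - z * se, pi - pj + z * se
    results[(i + 1, j + 1)] = (lo, hi)
    verdict = "contains 0" if lo < 0 < hi else "excludes 0"
    print(f"p{i+1} - p{j+1}: ({lo:.3f}, {hi:.3f})  {verdict}")
# Only the CIs for p2 - p3 and p2 - p4 exclude zero.
```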

37 An organized way of presenting the multiple comparisons outcome consists of (a) listing the sample proportions (or sample means) in increasing order, and (b) underlining pairs that are not significantly different. Then pairs that are not underlined are significantly different. In the previous example, p̂_4 = 0.48, p̂_3 = 0.51, p̂_1 = 0.64 and p̂_2 = 0.84. Thus, the multiple comparisons outcome can be presented as
p̂_4  p̂_3  p̂_1  p̂_2
with one underline joining p̂_4, p̂_3, p̂_1 and another joining p̂_1, p̂_2.

38 Example
The data in GradesTeachMeth.txt contain exam scores for students exposed to three different teaching methods (Exercises 5 and 6, Section 10.3). After reading the data into gr, the commands
attach(gr); kruskal.test(score ~ method)
return a p-value of 0.021; thus H_0: F_1 = F_2 = F_3 is rejected at α = 0.05. Use the Bonferroni multiple comparisons procedure, based on the rank-sum test, to identify which methods differ significantly at α = 0.05.
Solution: We will perform multiple comparisons through testing. Because the desired experiment-wise error rate is 0.05, we will conduct each of the m = 3 pairwise comparisons (A vs B, A vs C, and B vs C) at level 0.05/3 = 0.0167. If the p-value of one of these comparisons is smaller than 0.0167, the corresponding methods are declared significantly different.

39 Example (Continued)
The command wilcox.test(score[1:16] ~ method[1:16]) yields a p-value of for the A vs B comparison. The p-values for the other comparisons are obtained similarly. The results from all three rank-sum tests are summarized in the following table:
Comparison   p-value   Less than 0.0167?
A vs B                 No
A vs C                 Yes
B vs C                 No
Thus, methods A and C are significantly different at experiment-wise error rate α = 0.05, but methods A and B, as well as methods B and C, are not significantly different.

40 Bonferroni, Tukey's

41 Tukey's procedure requires normality (or all n_i ≥ 30) and homoscedasticity. It is based on the studentized range distribution, which is characterized by two degrees of freedom. The numerator degrees of freedom equals k (the number of populations). The denominator degrees of freedom equals the degrees of freedom of SSE, i.e., N − k, with N = n_1 + ... + n_k. Tables for the studentized range distribution are available, but we will only use R output. The R commands for constructing Tukey's simultaneous CIs are given next.

42 R Commands for Tukey's Simultaneous CIs
Import the iron concentration data from http://personal.psu.edu/acq/401/data/fedata.txt into the data frame fe. Then do:
out = aov(fe$conc ~ fe$ind)  # lm instead of aov will NOT work here
TukeyHSD(out)
plot(TukeyHSD(out))
or, for 99% simultaneous CIs,
TukeyHSD(out, conf.level=0.99)
plot(TukeyHSD(out, conf.level=0.99))


More information

2 Hand-out 2. Dr. M. P. M. M. M c Loughlin Revised 2018

2 Hand-out 2. Dr. M. P. M. M. M c Loughlin Revised 2018 Math 403 - P. & S. III - Dr. McLoughlin - 1 2018 2 Hand-out 2 Dr. M. P. M. M. M c Loughlin Revised 2018 3. Fundamentals 3.1. Preliminaries. Suppose we can produce a random sample of weights of 10 year-olds

More information

Statistics for Managers Using Microsoft Excel Chapter 10 ANOVA and Other C-Sample Tests With Numerical Data

Statistics for Managers Using Microsoft Excel Chapter 10 ANOVA and Other C-Sample Tests With Numerical Data Statistics for Managers Using Microsoft Excel Chapter 10 ANOVA and Other C-Sample Tests With Numerical Data 1999 Prentice-Hall, Inc. Chap. 10-1 Chapter Topics The Completely Randomized Model: One-Factor

More information

What is a Hypothesis?

What is a Hypothesis? What is a Hypothesis? A hypothesis is a claim (assumption) about a population parameter: population mean Example: The mean monthly cell phone bill in this city is μ = $42 population proportion Example:

More information

STAT22200 Spring 2014 Chapter 5

STAT22200 Spring 2014 Chapter 5 STAT22200 Spring 2014 Chapter 5 Yibi Huang April 29, 2014 Chapter 5 Multiple Comparisons Chapter 5-1 Chapter 5 Multiple Comparisons Note the t-tests and C.I. s are constructed assuming we only do one test,

More information

Introduction to Statistical Inference Lecture 10: ANOVA, Kruskal-Wallis Test

Introduction to Statistical Inference Lecture 10: ANOVA, Kruskal-Wallis Test Introduction to Statistical Inference Lecture 10: ANOVA, Kruskal-Wallis Test la Contents The two sample t-test generalizes into Analysis of Variance. In analysis of variance ANOVA the population consists

More information

4.1. Introduction: Comparing Means

4.1. Introduction: Comparing Means 4. Analysis of Variance (ANOVA) 4.1. Introduction: Comparing Means Consider the problem of testing H 0 : µ 1 = µ 2 against H 1 : µ 1 µ 2 in two independent samples of two different populations of possibly

More information

9 One-Way Analysis of Variance

9 One-Way Analysis of Variance 9 One-Way Analysis of Variance SW Chapter 11 - all sections except 6. The one-way analysis of variance (ANOVA) is a generalization of the two sample t test to k 2 groups. Assume that the populations of

More information

Analysis of Variance

Analysis of Variance Analysis of Variance Math 36b May 7, 2009 Contents 2 ANOVA: Analysis of Variance 16 2.1 Basic ANOVA........................... 16 2.1.1 the model......................... 17 2.1.2 treatment sum of squares.................

More information

Exercise I.1 I.2 I.3 I.4 II.1 II.2 III.1 III.2 III.3 IV.1 Question (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) Answer

Exercise I.1 I.2 I.3 I.4 II.1 II.2 III.1 III.2 III.3 IV.1 Question (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) Answer Solutions to Exam in 02402 December 2012 Exercise I.1 I.2 I.3 I.4 II.1 II.2 III.1 III.2 III.3 IV.1 Question (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) Answer 3 1 5 2 5 2 3 5 1 3 Exercise IV.2 IV.3 IV.4 V.1

More information

One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups

One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups One-Way Analysis of Variance: A Guide to Testing Differences Between Multiple Groups In analysis of variance, the main research question is whether the sample means are from different populations. The

More information

Analysis of variance (ANOVA) Comparing the means of more than two groups

Analysis of variance (ANOVA) Comparing the means of more than two groups Analysis of variance (ANOVA) Comparing the means of more than two groups Example: Cost of mating in male fruit flies Drosophila Treatments: place males with and without unmated (virgin) females Five treatments

More information

Lecture 7: Hypothesis Testing and ANOVA

Lecture 7: Hypothesis Testing and ANOVA Lecture 7: Hypothesis Testing and ANOVA Goals Overview of key elements of hypothesis testing Review of common one and two sample tests Introduction to ANOVA Hypothesis Testing The intent of hypothesis

More information

Statistics for EES Factorial analysis of variance

Statistics for EES Factorial analysis of variance Statistics for EES Factorial analysis of variance Dirk Metzler June 12, 2015 Contents 1 ANOVA and F -Test 1 2 Pairwise comparisons and multiple testing 6 3 Non-parametric: The Kruskal-Wallis Test 9 1 ANOVA

More information

Topic 22 Analysis of Variance

Topic 22 Analysis of Variance Topic 22 Analysis of Variance Comparing Multiple Populations 1 / 14 Outline Overview One Way Analysis of Variance Sample Means Sums of Squares The F Statistic Confidence Intervals 2 / 14 Overview Two-sample

More information

Introduction to the Analysis of Variance (ANOVA) Computing One-Way Independent Measures (Between Subjects) ANOVAs

Introduction to the Analysis of Variance (ANOVA) Computing One-Way Independent Measures (Between Subjects) ANOVAs Introduction to the Analysis of Variance (ANOVA) Computing One-Way Independent Measures (Between Subjects) ANOVAs The Analysis of Variance (ANOVA) The analysis of variance (ANOVA) is a statistical technique

More information

Statistical Hypothesis Testing

Statistical Hypothesis Testing Statistical Hypothesis Testing Dr. Phillip YAM 2012/2013 Spring Semester Reference: Chapter 7 of Tests of Statistical Hypotheses by Hogg and Tanis. Section 7.1 Tests about Proportions A statistical hypothesis

More information

STAT 5200 Handout #7a Contrasts & Post hoc Means Comparisons (Ch. 4-5)

STAT 5200 Handout #7a Contrasts & Post hoc Means Comparisons (Ch. 4-5) STAT 5200 Handout #7a Contrasts & Post hoc Means Comparisons Ch. 4-5) Recall CRD means and effects models: Y ij = µ i + ϵ ij = µ + α i + ϵ ij i = 1,..., g ; j = 1,..., n ; ϵ ij s iid N0, σ 2 ) If we reject

More information

Ch 2: Simple Linear Regression

Ch 2: Simple Linear Regression Ch 2: Simple Linear Regression 1. Simple Linear Regression Model A simple regression model with a single regressor x is y = β 0 + β 1 x + ɛ, where we assume that the error ɛ is independent random component

More information

(Where does Ch. 7 on comparing 2 means or 2 proportions fit into this?)

(Where does Ch. 7 on comparing 2 means or 2 proportions fit into this?) 12. Comparing Groups: Analysis of Variance (ANOVA) Methods Response y Explanatory x var s Method Categorical Categorical Contingency tables (Ch. 8) (chi-squared, etc.) Quantitative Quantitative Regression

More information

Tentative solutions TMA4255 Applied Statistics 16 May, 2015

Tentative solutions TMA4255 Applied Statistics 16 May, 2015 Norwegian University of Science and Technology Department of Mathematical Sciences Page of 9 Tentative solutions TMA455 Applied Statistics 6 May, 05 Problem Manufacturer of fertilizers a) Are these independent

More information

Specific Differences. Lukas Meier, Seminar für Statistik

Specific Differences. Lukas Meier, Seminar für Statistik Specific Differences Lukas Meier, Seminar für Statistik Problem with Global F-test Problem: Global F-test (aka omnibus F-test) is very unspecific. Typically: Want a more precise answer (or have a more

More information

Analysis of variance

Analysis of variance Analysis of variance 1 Method If the null hypothesis is true, then the populations are the same: they are normal, and they have the same mean and the same variance. We will estimate the numerical value

More information

Chapter 12. Analysis of variance

Chapter 12. Analysis of variance Serik Sagitov, Chalmers and GU, January 9, 016 Chapter 1. Analysis of variance Chapter 11: I = samples independent samples paired samples Chapter 1: I 3 samples of equal size J one-way layout two-way layout

More information

Example: Four levels of herbicide strength in an experiment on dry weight of treated plants.

Example: Four levels of herbicide strength in an experiment on dry weight of treated plants. The idea of ANOVA Reminders: A factor is a variable that can take one of several levels used to differentiate one group from another. An experiment has a one-way, or completely randomized, design if several

More information

Chap The McGraw-Hill Companies, Inc. All rights reserved.

Chap The McGraw-Hill Companies, Inc. All rights reserved. 11 pter11 Chap Analysis of Variance Overview of ANOVA Multiple Comparisons Tests for Homogeneity of Variances Two-Factor ANOVA Without Replication General Linear Model Experimental Design: An Overview

More information

Sampling distribution of t. 2. Sampling distribution of t. 3. Example: Gas mileage investigation. II. Inferential Statistics (8) t =

Sampling distribution of t. 2. Sampling distribution of t. 3. Example: Gas mileage investigation. II. Inferential Statistics (8) t = 2. The distribution of t values that would be obtained if a value of t were calculated for each sample mean for all possible random of a given size from a population _ t ratio: (X - µ hyp ) t s x The result

More information

Chapter 14 Student Lecture Notes Department of Quantitative Methods & Information Systems. Business Statistics. Chapter 14 Multiple Regression

Chapter 14 Student Lecture Notes Department of Quantitative Methods & Information Systems. Business Statistics. Chapter 14 Multiple Regression Chapter 14 Student Lecture Notes 14-1 Department of Quantitative Methods & Information Systems Business Statistics Chapter 14 Multiple Regression QMIS 0 Dr. Mohammad Zainal Chapter Goals After completing

More information

Review of Statistics 101

Review of Statistics 101 Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods

More information

The legacy of Sir Ronald A. Fisher. Fisher s three fundamental principles: local control, replication, and randomization.

The legacy of Sir Ronald A. Fisher. Fisher s three fundamental principles: local control, replication, and randomization. 1 Chapter 1: Research Design Principles The legacy of Sir Ronald A. Fisher. Fisher s three fundamental principles: local control, replication, and randomization. 2 Chapter 2: Completely Randomized Design

More information

Lecture 5: Comparing Treatment Means Montgomery: Section 3-5

Lecture 5: Comparing Treatment Means Montgomery: Section 3-5 Lecture 5: Comparing Treatment Means Montgomery: Section 3-5 Page 1 Linear Combination of Means ANOVA: y ij = µ + τ i + ɛ ij = µ i + ɛ ij Linear combination: L = c 1 µ 1 + c 1 µ 2 +...+ c a µ a = a i=1

More information

ANOVA Situation The F Statistic Multiple Comparisons. 1-Way ANOVA MATH 143. Department of Mathematics and Statistics Calvin College

ANOVA Situation The F Statistic Multiple Comparisons. 1-Way ANOVA MATH 143. Department of Mathematics and Statistics Calvin College 1-Way ANOVA MATH 143 Department of Mathematics and Statistics Calvin College An example ANOVA situation Example (Treating Blisters) Subjects: 25 patients with blisters Treatments: Treatment A, Treatment

More information

Regression models. Categorical covariate, Quantitative outcome. Examples of categorical covariates. Group characteristics. Faculty of Health Sciences

Regression models. Categorical covariate, Quantitative outcome. Examples of categorical covariates. Group characteristics. Faculty of Health Sciences Faculty of Health Sciences Categorical covariate, Quantitative outcome Regression models Categorical covariate, Quantitative outcome Lene Theil Skovgaard April 29, 2013 PKA & LTS, Sect. 3.2, 3.2.1 ANOVA

More information

1. The (dependent variable) is the variable of interest to be measured in the experiment.

1. The (dependent variable) is the variable of interest to be measured in the experiment. Chapter 10 Analysis of variance (ANOVA) 10.1 Elements of a designed experiment 1. The (dependent variable) is the variable of interest to be measured in the experiment. 2. are those variables whose effect

More information

Biostatistics 270 Kruskal-Wallis Test 1. Kruskal-Wallis Test

Biostatistics 270 Kruskal-Wallis Test 1. Kruskal-Wallis Test Biostatistics 270 Kruskal-Wallis Test 1 ORIGIN 1 Kruskal-Wallis Test The Kruskal-Wallis is a non-parametric analog to the One-Way ANOVA F-Test of means. It is useful when the k samples appear not to come

More information

The entire data set consists of n = 32 widgets, 8 of which were made from each of q = 4 different materials.

The entire data set consists of n = 32 widgets, 8 of which were made from each of q = 4 different materials. One-Way ANOVA Summary The One-Way ANOVA procedure is designed to construct a statistical model describing the impact of a single categorical factor X on a dependent variable Y. Tests are run to determine

More information

EX1. One way ANOVA: miles versus Plug. a) What are the hypotheses to be tested? b) What are df 1 and df 2? Verify by hand. , y 3

EX1. One way ANOVA: miles versus Plug. a) What are the hypotheses to be tested? b) What are df 1 and df 2? Verify by hand. , y 3 EX. Chapter 8 Examples In an experiment to investigate the performance of four different brands of spark plugs intended for the use on a motorcycle, plugs of each brand were tested and the number of miles

More information

Concordia University (5+5)Q 1.

Concordia University (5+5)Q 1. (5+5)Q 1. Concordia University Department of Mathematics and Statistics Course Number Section Statistics 360/1 40 Examination Date Time Pages Mid Term Test May 26, 2004 Two Hours 3 Instructor Course Examiner

More information

Assignment #7. Chapter 12: 18, 24 Chapter 13: 28. Due next Friday Nov. 20 th by 2pm in your TA s homework box

Assignment #7. Chapter 12: 18, 24 Chapter 13: 28. Due next Friday Nov. 20 th by 2pm in your TA s homework box Assignment #7 Chapter 12: 18, 24 Chapter 13: 28 Due next Friday Nov. 20 th by 2pm in your TA s homework box Lab Report Posted on web-site Dates Rough draft due to TAs homework box on Monday Nov. 16 th

More information

Lec 1: An Introduction to ANOVA

Lec 1: An Introduction to ANOVA Ying Li Stockholm University October 31, 2011 Three end-aisle displays Which is the best? Design of the Experiment Identify the stores of the similar size and type. The displays are randomly assigned to

More information

Non-parametric (Distribution-free) approaches p188 CN

Non-parametric (Distribution-free) approaches p188 CN Week 1: Introduction to some nonparametric and computer intensive (re-sampling) approaches: the sign test, Wilcoxon tests and multi-sample extensions, Spearman s rank correlation; the Bootstrap. (ch14

More information

AMS7: WEEK 7. CLASS 1. More on Hypothesis Testing Monday May 11th, 2015

AMS7: WEEK 7. CLASS 1. More on Hypothesis Testing Monday May 11th, 2015 AMS7: WEEK 7. CLASS 1 More on Hypothesis Testing Monday May 11th, 2015 Testing a Claim about a Standard Deviation or a Variance We want to test claims about or 2 Example: Newborn babies from mothers taking

More information

The Random Effects Model Introduction

The Random Effects Model Introduction The Random Effects Model Introduction Sometimes, treatments included in experiment are randomly chosen from set of all possible treatments. Conclusions from such experiment can then be generalized to other

More information

13: Additional ANOVA Topics

13: Additional ANOVA Topics 13: Additional ANOVA Topics Post hoc comparisons Least squared difference The multiple comparisons problem Bonferroni ANOVA assumptions Assessing equal variance When assumptions are severely violated Kruskal-Wallis

More information

ANALYSIS OF VARIANCE OF BALANCED DAIRY SCIENCE DATA USING SAS

ANALYSIS OF VARIANCE OF BALANCED DAIRY SCIENCE DATA USING SAS ANALYSIS OF VARIANCE OF BALANCED DAIRY SCIENCE DATA USING SAS Ravinder Malhotra and Vipul Sharma National Dairy Research Institute, Karnal-132001 The most common use of statistics in dairy science is testing

More information

Multiple Regression. Inference for Multiple Regression and A Case Study. IPS Chapters 11.1 and W.H. Freeman and Company

Multiple Regression. Inference for Multiple Regression and A Case Study. IPS Chapters 11.1 and W.H. Freeman and Company Multiple Regression Inference for Multiple Regression and A Case Study IPS Chapters 11.1 and 11.2 2009 W.H. Freeman and Company Objectives (IPS Chapters 11.1 and 11.2) Multiple regression Data for multiple

More information

What Is ANOVA? Comparing Groups. One-way ANOVA. One way ANOVA (the F ratio test)

What Is ANOVA? Comparing Groups. One-way ANOVA. One way ANOVA (the F ratio test) What Is ANOVA? One-way ANOVA ANOVA ANalysis Of VAriance ANOVA compares the means of several groups. The groups are sometimes called "treatments" First textbook presentation in 95. Group Group σ µ µ σ µ

More information

Lecture 6 Multiple Linear Regression, cont.

Lecture 6 Multiple Linear Regression, cont. Lecture 6 Multiple Linear Regression, cont. BIOST 515 January 22, 2004 BIOST 515, Lecture 6 Testing general linear hypotheses Suppose we are interested in testing linear combinations of the regression

More information

Dr. Junchao Xia Center of Biophysics and Computational Biology. Fall /8/2016 1/38

Dr. Junchao Xia Center of Biophysics and Computational Biology. Fall /8/2016 1/38 BIO5312 Biostatistics Lecture 11: Multisample Hypothesis Testing II Dr. Junchao Xia Center of Biophysics and Computational Biology Fall 2016 11/8/2016 1/38 Outline In this lecture, we will continue to

More information

Comparisons among means (or, the analysis of factor effects)

Comparisons among means (or, the analysis of factor effects) Comparisons among means (or, the analysis of factor effects) In carrying out our usual test that μ 1 = = μ r, we might be content to just reject this omnibus hypothesis but typically more is required:

More information

4/6/16. Non-parametric Test. Overview. Stephen Opiyo. Distinguish Parametric and Nonparametric Test Procedures

4/6/16. Non-parametric Test. Overview. Stephen Opiyo. Distinguish Parametric and Nonparametric Test Procedures Non-parametric Test Stephen Opiyo Overview Distinguish Parametric and Nonparametric Test Procedures Explain commonly used Nonparametric Test Procedures Perform Hypothesis Tests Using Nonparametric Procedures

More information