Chapter 24. Comparing Means. Copyright 2010 Pearson Education, Inc.

Plot the Data
The natural display for comparing two groups is boxplots of the data for the two groups, placed side-by-side.
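A minimal sketch of such a display in Python with matplotlib; the two samples and group names below are invented purely for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical samples (values invented for illustration).
group_a = [24.1, 26.3, 22.8, 25.5, 27.0, 23.9, 26.8, 24.7]
group_b = [21.2, 23.4, 20.9, 22.6, 24.1, 21.8, 23.0, 22.2]

# Side-by-side boxplots: the natural display for comparing two groups.
plt.boxplot([group_a, group_b])
plt.xticks([1, 2], ["Group A", "Group B"])
plt.ylabel("Response")
plt.title("Side-by-side boxplots of the two groups")
plt.show()
```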

Comparing Two Means
Once we have examined the side-by-side boxplots, we can turn to the comparison of two means. Comparing two means is not very different from comparing two proportions. This time the parameter of interest is the difference between the two means, $\mu_1 - \mu_2$.

Comparing Two Means (cont.)
Remember that, for independent random quantities, variances add. So, the standard deviation of the difference between two sample means is
$$SD(\bar{y}_1 - \bar{y}_2) = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$$
We still don't know the true standard deviations of the two groups, so we estimate them and use the standard error
$$SE(\bar{y}_1 - \bar{y}_2) = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$$
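A minimal sketch of that standard error computation in Python (numpy), using the same kind of hypothetical samples as above; the values are illustrative assumptions, not real data:

```python
import numpy as np

# Hypothetical samples (values invented for illustration).
y1 = np.array([24.1, 26.3, 22.8, 25.5, 27.0, 23.9, 26.8, 24.7])
y2 = np.array([21.2, 23.4, 20.9, 22.6, 24.1, 21.8, 23.0, 22.2])

n1, n2 = len(y1), len(y2)
s1, s2 = y1.std(ddof=1), y2.std(ddof=1)   # sample standard deviations

# Standard error of the difference between the two sample means:
# SE = sqrt(s1^2/n1 + s2^2/n2)
se_diff = np.sqrt(s1**2 / n1 + s2**2 / n2)
print(se_diff)
```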

Comparing Two Means (cont.)
Because we are working with means and estimating the standard error of their difference from the data, we shouldn't be surprised that the sampling model is a Student's t. The confidence interval we build is called a two-sample t-interval (for the difference in means). The corresponding hypothesis test is called a two-sample t-test.

Sampling Distribution for the Difference Between Two Means
When the conditions are met, the standardized sample difference between the means of two independent groups,
$$t = \frac{(\bar{y}_1 - \bar{y}_2) - (\mu_1 - \mu_2)}{SE(\bar{y}_1 - \bar{y}_2)}$$
can be modeled by a Student's t-model with a number of degrees of freedom found with a special formula. We estimate the standard error with
$$SE(\bar{y}_1 - \bar{y}_2) = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$$

Assumptions and Conditions
Independence Assumption (each condition needs to be checked for both groups):
Randomization Condition: Were the data collected with suitable randomization (representative random samples or a randomized experiment)?
10% Condition: We don't usually check this condition for differences of means. We will check it only if we have a very small population or an extremely large sample.

Assumptions and Conditions (cont.)
Normal Population Assumption:
Nearly Normal Condition: This must be checked for both groups. A violation by either one violates the condition.
Independent Groups Assumption: The two groups we are comparing must be independent of each other. (See Chapter 25 if the groups are not independent of one another.)

Two-Sample t-interval
When the conditions are met, we are ready to find the confidence interval for the difference between the means of two independent groups, $\mu_1 - \mu_2$. The confidence interval is
$$(\bar{y}_1 - \bar{y}_2) \pm t^*_{df} \times SE(\bar{y}_1 - \bar{y}_2)$$
where the standard error of the difference of the means is
$$SE(\bar{y}_1 - \bar{y}_2) = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$$
The critical value $t^*_{df}$ depends on the particular confidence level, C, that you specify and on the number of degrees of freedom, which we get from the sample sizes and a special formula.

Degrees of Freedom
The special formula for the degrees of freedom for our t critical value is a bear:
$$df = \frac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^2}{\dfrac{1}{n_1 - 1}\left(\dfrac{s_1^2}{n_1}\right)^2 + \dfrac{1}{n_2 - 1}\left(\dfrac{s_2^2}{n_2}\right)^2}$$
Because of this, we will let technology calculate the degrees of freedom for us!
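A minimal sketch of that degrees-of-freedom formula and the resulting two-sample t-interval, in Python with scipy; the samples and the 95% confidence level are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Hypothetical samples (values invented for illustration).
y1 = np.array([24.1, 26.3, 22.8, 25.5, 27.0, 23.9, 26.8, 24.7])
y2 = np.array([21.2, 23.4, 20.9, 22.6, 24.1, 21.8, 23.0, 22.2])

n1, n2 = len(y1), len(y2)
v1, v2 = y1.var(ddof=1) / n1, y2.var(ddof=1) / n2   # s_i^2 / n_i

# The "special formula" for the degrees of freedom.
df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Two-sample t-interval: (ybar1 - ybar2) +/- t*_df * SE
se = np.sqrt(v1 + v2)
t_star = stats.t.ppf(0.975, df)          # 95% confidence level
diff = y1.mean() - y2.mean()
print(df, (diff - t_star * se, diff + t_star * se))
```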

Testing the Difference Between Two Means
The hypothesis test we use is the two-sample t-test for means. The conditions for the two-sample t-test for the difference between the means of two independent groups are the same as for the two-sample t-interval.

A Test for the Difference Between Two Means
We test the hypothesis $H_0: \mu_1 - \mu_2 = \Delta_0$, where the hypothesized difference, $\Delta_0$, is almost always 0, using the statistic
$$t = \frac{(\bar{y}_1 - \bar{y}_2) - \Delta_0}{SE(\bar{y}_1 - \bar{y}_2)}$$
The standard error is
$$SE(\bar{y}_1 - \bar{y}_2) = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$$
When the conditions are met and the null hypothesis is true, this statistic can be closely modeled by a Student's t-model with a number of degrees of freedom given by the special formula. We use that model to obtain a P-value.
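In practice the whole test can be done in one call; a minimal sketch using scipy's unpooled (Welch) two-sample t-test, with the same hypothetical samples as above:

```python
import numpy as np
from scipy import stats

# Hypothetical samples (values invented for illustration).
y1 = np.array([24.1, 26.3, 22.8, 25.5, 27.0, 23.9, 26.8, 24.7])
y2 = np.array([21.2, 23.4, 20.9, 22.6, 24.1, 21.8, 23.0, 22.2])

# equal_var=False gives the two-sample t-test with the special
# degrees-of-freedom formula; the P-value returned is two-sided.
t_stat, p_value = stats.ttest_ind(y1, y2, equal_var=False)
print(t_stat, p_value)
```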

Back Into the Pool
Remember that when we know a proportion, we know its standard deviation. Thus, when testing the null hypothesis that two proportions were equal, we could assume their variances were equal as well. This led us to pool our data for the hypothesis test.

Back Into the Pool (cont.)
For means, there is also a pooled t-test. Like the two-proportions z-test, this test assumes that the variances in the two groups are equal. But be careful: there is no link between a mean and its standard deviation.

Back Into the Pool (cont.)
If we are willing to assume that the variances of the two groups are equal, we can pool the data from the two groups to estimate the common variance and make the degrees-of-freedom formula much simpler. We are still estimating the pooled standard deviation from the data, so we use a Student's t-model, and the test is called a pooled t-test (for the difference between means).

*The Pooled t-test
If we assume that the variances are equal, we can estimate the common variance from the numbers we already have:
$$s^2_{pooled} = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{(n_1 - 1) + (n_2 - 1)}$$
Substituting into our standard error formula, we get:
$$SE_{pooled}(\bar{y}_1 - \bar{y}_2) = s_{pooled}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$
Our degrees of freedom are now $df = n_1 + n_2 - 2$.
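A minimal sketch of the pooled estimate, its standard error, and the simpler degrees of freedom in Python (numpy), again with hypothetical samples:

```python
import numpy as np

# Hypothetical samples (values invented for illustration).
y1 = np.array([24.1, 26.3, 22.8, 25.5, 27.0, 23.9, 26.8, 24.7])
y2 = np.array([21.2, 23.4, 20.9, 22.6, 24.1, 21.8, 23.0, 22.2])

n1, n2 = len(y1), len(y2)
s1_sq, s2_sq = y1.var(ddof=1), y2.var(ddof=1)

# Pooled variance: a weighted average of the two sample variances.
s_pooled_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / ((n1 - 1) + (n2 - 1))

# Pooled standard error of the difference, and the simpler df.
se_pooled = np.sqrt(s_pooled_sq) * np.sqrt(1 / n1 + 1 / n2)
df = n1 + n2 - 2
print(se_pooled, df)
```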

*The Pooled t-test and Confidence Interval for Means
The conditions for the pooled t-test and corresponding confidence interval are the same as for our earlier two-sample t procedures, with the additional assumption that the variances of the two groups are the same. For the hypothesis test, our test statistic is
$$t = \frac{(\bar{y}_1 - \bar{y}_2) - \Delta_0}{SE_{pooled}(\bar{y}_1 - \bar{y}_2)}$$
which has $df = n_1 + n_2 - 2$. Our confidence interval is
$$(\bar{y}_1 - \bar{y}_2) \pm t^*_{df} \times SE_{pooled}(\bar{y}_1 - \bar{y}_2)$$
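For the test itself, scipy's pooled two-sample t-test can be used directly; a minimal sketch with the same hypothetical samples:

```python
import numpy as np
from scipy import stats

# Hypothetical samples (values invented for illustration).
y1 = np.array([24.1, 26.3, 22.8, 25.5, 27.0, 23.9, 26.8, 24.7])
y2 = np.array([21.2, 23.4, 20.9, 22.6, 24.1, 21.8, 23.0, 22.2])

# equal_var=True (scipy's default) pools the variances, so the
# t-model has n1 + n2 - 2 degrees of freedom.
t_stat, p_value = stats.ttest_ind(y1, y2, equal_var=True)
print(t_stat, p_value)
```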

Is the Pool All Wet?
So, when should you use pooled-t methods rather than two-sample t methods? Never. (Well, hardly ever.) Because the advantages of pooling are small, and you are allowed to pool only rarely (when the equal variance assumption is met), don't. It's never wrong not to pool.

Why Not Test the Assumption That the Variances Are Equal?
There is a hypothesis test that would do this. But it is very sensitive to failures of the assumptions and works poorly for small sample sizes, which is just the situation in which we might care about a difference in the methods. So the test does not work when we would need it to.

Is There Ever a Time When Assuming Equal Variances Makes Sense?
Yes. In a randomized comparative experiment, we start by assigning our experimental units to treatments at random. Each treatment group therefore begins with the same population variance. In this case, assuming the variances are equal is still an assumption, and there are conditions that need to be checked, but at least it's a plausible assumption.

What Can Go Wrong?
Watch out for paired data. The Independent Groups Assumption deserves special attention. If the samples are not independent, you can't use two-sample methods.
Look at the plots. Check for outliers and non-normal distributions by making and examining boxplots.

What have we learned?
We've learned to use statistical inference to compare the means of two independent groups. We use t-models for the methods in this chapter. It is still important to check conditions to see if our assumptions are reasonable. The standard error for the difference in sample means depends on believing that our data come from independent groups, but pooling is not the best choice here. Once again, we've seen that we can add variances of independent quantities. The reasoning of statistical inference remains the same; only the mechanics change.