Statistical Inference, Populations and Samples


Chapter 3
Statistical Inference, Populations and Samples

Contents
3.1 Introduction
3.2 What is statistical inference?
  3.2.1 Examples of statistical inference problems
3.3 Populations
3.4 Finite populations
  3.4.1 The distribution of a finite population
  3.4.2 The mean and variance of a finite population
  3.4.3 Drawing a random sample from a finite population
  3.4.4 The expectation of a randomly sampled observation from a finite population
  3.4.5 The variance of a randomly sampled observation from a finite population
  3.4.6 A random sample from a finite population
3.5 Infinite populations
  3.5.1 Limitations of the finite population model: an example
  3.5.2 Modelling infinite populations with probability distributions
  3.5.3 The mean and variance of an infinite population
  3.5.4 Population proportions for infinite populations
  3.5.5 Interpreting the mean, variance and population proportions for an infinite population
3.6 Summary

3.1 Introduction

In this chapter we:

- consider the problem of statistical inference: drawing conclusions about populations from what we observe in a random sample from the population;
- consider two types of population, finite and infinite, and how we describe the population and define its characteristics in each case;
- show that by modelling a sampled observation as a random variable, we can begin to use the probability theory from Part 1 of this module to help us perform statistical inference.

3.2 What is statistical inference?

Statistical inference is the process of estimating characteristics of a population when, as is often the case, it is only possible to observe a subset or sample of members from the population.

3.2.1 Examples of statistical inference problems

Which country has the best education system for teaching schoolchildren?

Every few years, the Organisation for Economic Co-operation and Development (OECD) conducts a survey, known as the Programme for International Student Assessment (PISA), to compare school systems across different countries. In the 2015 survey, 72 countries were compared, and about half a million 15-year-old children took a test. But this is only a sample out of the roughly 28 million 15-year-old children who could have taken the test. What conclusions can be drawn when only about 2% of children have been tested?

Which mobile phone brand/model is the most reliable?

You are considering buying a particular phone, and ask your friends who own the same model whether they've had any problems with theirs. One friend tells you her phone is great and has been trouble-free; another complains that the touchscreen on his phone doesn't work properly some of the time. Was he just unlucky, or is the problem common? It would be helpful to know the proportion of all phones of that brand/model that have reliability problems such as touchscreen issues.
A consumer magazine has conducted a survey of its readers, comparing 9 brands using responses from 2950 readers. This is still only a very small proportion of the number of phones produced by each manufacturer: will the findings be reliable?

Does having a degree improve your career (and earning) prospects?

It may be of interest to know what proportions of graduates and non-graduates are currently employed (perhaps in highly skilled jobs), and what the average earnings are in the two groups. No-one actually knows the correct values of any of these figures! Any figures you hear reported will only ever be estimates. The UK Government reports graduate labour market statistics every year, based on the Labour Force Survey (LFS), a sample of about 40,000 UK households and 100,000 individuals

per quarter (which includes graduates and non-graduates). How can estimates of employment and earnings for all graduates and non-graduates be obtained from this sample?

Can playing brain-training games make you more intelligent?

A company has developed a game which, it is claimed, will boost your intelligence if you play it regularly. How can we tell whether the claim is correct or not? As with any other claim (or theory/hypothesis), we need suitable data, either from a designed experiment or from a suitable observational study. Redick et al. (2013) conducted an experiment to test the effects of a particular type of memory game ("dual n-back training") on other aspects of intelligence. In their study, a sample of 24 people were given training with the memory game, another 29 people received a different type of training, and 20 people had no training at all. How do we draw conclusions about the population of everyone who might play the memory game, based on this (apparently small) sample?

3.3 Populations

Given the investigation of interest, the next step is to define what it is we want to know: how we represent the population, and what characteristics of the population we want to estimate. There are two ways that we might represent and describe a population: finite and infinite.

3.4 Finite populations

We think of a finite population as a list of values. We have N members of the population, and we denote the ith member's value of interest by y_i. Two examples:

1. In the PISA testing of 15-year-old schoolchildren, suppose we are just interested in the UK, so that the population of interest is all 15-year-olds in the UK. Suppose there are currently 800,000 15-year-olds, so we have N = 800,000. We define y_i to be the score that the ith 15-year-old would get, if he/she were to take the test, so that the full population of interest is y_1, y_2, ..., y_800000.

2.
Suppose there are currently 12,000,000 graduates (below retirement age) in the UK, and we are interested in how many of them are currently employed. We can define

$$y_i = \begin{cases} 1 & \text{if graduate } i \text{ is currently employed} \\ 0 & \text{if graduate } i \text{ is currently unemployed} \end{cases} \qquad (3.1)$$

so that y_1, ..., y_12,000,000 describes the employment status of all graduates in the population.

3.4.1 The distribution of a finite population

By the distribution of a population, we mean the percentage of population members taking particular values (or values in particular ranges, if the population values are nearly all distinct). A (fictitious) example in the PISA testing would be:

- 11% of the values y_1, y_2, ..., y_800000 are in the interval [0, 200),
- 38% of the values y_1, y_2, ..., y_800000 are in the interval [200, 400),
- 42% of the values y_1, y_2, ..., y_800000 are in the interval [400, 600),
- 9% of the values y_1, y_2, ..., y_800000 are in the interval [600, 800),

and similarly for any other set of intervals we might choose. The distribution will usually be unknown, because we won't know what all the values y_1, y_2, ..., y_800000 are.

3.4.2 The mean and variance of a finite population

Finite population mean

We define the finite population mean to be

$$\mu_{\mathrm{fin}} := \frac{1}{N}\sum_{i=1}^{N} y_i. \qquad (3.2)$$

Note that in the example above of graduate employment, with the definition of y_i in equation (3.1), μ_fin gives the population proportion of graduates who are employed. Hence finite population proportions can be expressed as finite population means.

Finite population variance

We define the finite population variance¹ to be

$$\sigma^2_{\mathrm{fin}} := \frac{1}{N}\sum_{i=1}^{N} (y_i - \mu_{\mathrm{fin}})^2. \qquad (3.3)$$

(The symbols μ and σ² are often used to represent means and variances in different contexts, so we have included the subscript "fin" to make it clear that we are using μ_fin and σ²_fin to represent the mean and variance of a finite population. Later on, we will drop these subscripts.)

We will often be interested in estimating the mean of a population. In the PISA example, countries are ranked based on their mean test scores. The variance will often be of interest too: for example, it may be interesting to know if test scores vary more in some countries than in others. Hence one example of a statistical inference problem is to estimate μ_fin and σ²_fin based on knowing only a subset of the values y_1, ..., y_N.

3.4.3 Drawing a random sample from a finite population

The key idea that will enable us to perform statistical inference is to choose the subset randomly, to obtain what we call a random sample from the population. In particular, each randomly sampled value can be thought of as an observation of a random variable.
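The definitions (3.2) and (3.3), and the idea of selecting one member uniformly at random, can be sketched concretely. This is a minimal sketch in Python (the chapter itself uses R); the five population values are invented for illustration and are not from the chapter.

```python
import random

# A made-up finite population of N = 5 test scores (purely illustrative).
y = [520, 480, 610, 450, 540]
N = len(y)

# Finite population mean, as in equation (3.2).
mu_fin = sum(y) / N            # (520 + 480 + 610 + 450 + 540) / 5 = 520.0

# Finite population variance, as in equation (3.3) (denominator N, not N - 1).
sigma2_fin = sum((yi - mu_fin) ** 2 for yi in y) / N   # 3000.0

# Drawing one random observation: every member has the same probability
# 1/N of being selected, and the observed value of X is that member's y value.
x = random.choice(y)
print(mu_fin, sigma2_fin, x)
```

In a real inference problem we would not know the full list y, of course; the point of the sketch is only to make the definitions concrete.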
Treating each sampled value as a random variable will allow us to use probability theory both to justify our choice of estimates, and to understand how accurate they are likely to be. Define a random variable X to be the outcome of picking one member of the population and observing its value. In particular:

¹ Some textbooks use the denominator N − 1 in the definition of a finite population variance, for reasons that we don't need to worry about here. For large N, we have 1/(N − 1) ≈ 1/N in any case.

1. we suppose each population member has the same probability 1/N of being selected;
2. if member j of the population is selected, the observed value of X will be y_j.

In practice, it's not always possible to achieve (1): some members of the population may be difficult to reach. More complicated cases where the probabilities are unequal are considered in MAS370, but will not be covered here.

3.4.4 The expectation of a randomly sampled observation from a finite population

Theorem 3.1. For a random variable X as defined in Section 3.4.3, we have

$$E(X) = \mu_{\mathrm{fin}}. \qquad (3.4)$$

But before we prove this result...

Confusion alert number 1! Means, means and means...

A major source of confusion when studying probability and statistics is the word "mean": it is used in different contexts to mean different (but often related) things. Try to avoid using the word "mean" on its own, and make sure you are always clear precisely what is meant by the word "mean" in any situation.

1. The arithmetic mean. If we have a list or sequence of n numbers, the arithmetic mean is their sum, divided by n: the arithmetic mean of 64, 14, 21, 32 is

$$\frac{64 + 14 + 21 + 32}{4} = 32.75. \qquad (3.5)$$

2. Finite population mean. We have defined the finite population mean to be the arithmetic mean of all the population values:

$$\mu_{\mathrm{fin}} := \frac{1}{N}\sum_{i=1}^{N} y_i. \qquad (3.6)$$

3. Expectation or mean of a random variable. For any random variable X, its expectation E(X) is also known as its mean, but this is not the same thing as an arithmetic mean. For a discrete random variable X, its mean is defined as

$$E(X) := \sum_{x \in R_X} x\, p_X(x), \qquad (3.7)$$

and for a continuous random variable X, its mean is defined as

$$E(X) := \int_{-\infty}^{\infty} x f_X(x)\,dx. \qquad (3.8)$$

Try to use the word "expectation" rather than "mean" in this context. You can, however, interpret the expectation of a random variable in terms of an arithmetic mean: if a very large number of random variables X_1, X_2, ... were to be observed, each with the same expectation, so that E(X_1) = E(X_2) = ..., then, informally, the value of their arithmetic mean would be approximately equal to this expectation.

Note that Theorem 3.1 is not saying that E(X) is defined to be μ_fin; its definition is given in (3.7). The theorem tells us that if we apply the definition (3.7), the result we get is equal to μ_fin.
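The distinction between points 1 and 3 can be seen in a small sketch (in Python; the fair six-sided die is our own example, not one from the chapter): the expectation of a discrete random variable computed from definition (3.7), next to the arithmetic mean of a large number of simulated observations.

```python
import random

# Expectation of a fair six-sided die, from definition (3.7):
# E(X) = sum over x in R_X of x * p_X(x), with p_X(x) = 1/6 for x = 1, ..., 6.
expectation = sum(x * (1 / 6) for x in range(1, 7))   # 3.5 (up to rounding)

# Arithmetic mean of many simulated rolls: informally, it should be close
# to, but not exactly equal to, the expectation.
random.seed(1)
rolls = [random.randint(1, 6) for _ in range(100_000)]
arithmetic_mean = sum(rolls) / len(rolls)
print(expectation, arithmetic_mean)
```

The expectation is a fixed number determined by the distribution; the arithmetic mean is computed from observed values and changes from sample to sample.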

Now back to the proof:

Proof. Starting with the definition of expectation, we have

$$E(X) = \sum_{x \in R_X} x\, P(X = x), \qquad (3.9)$$

where R_X, the range of X, is the set of possible values that X can take. Writing out the range is a little awkward, as some values in the population could be duplicated, but it is not difficult to see, for example, that if three members of the population all had the value 528, then P(X = 528) = 3/N. The corresponding term in the summation above would be

$$528 \times \frac{3}{N} = 528 \times \frac{1}{N} + 528 \times \frac{1}{N} + 528 \times \frac{1}{N}, \qquad (3.10)$$

and so we can write the expectation as

$$E(X) = \sum_{i=1}^{N} (\text{value of population member } i) \times P(\text{population member } i \text{ selected}) \qquad (3.11)$$
$$= \sum_{i=1}^{N} y_i \frac{1}{N} \qquad (3.12)$$
$$= \mu_{\mathrm{fin}}. \qquad (3.13)$$

3.4.5 The variance of a randomly sampled observation from a finite population

We find the variance of X via Var(X) = E(X²) − E(X)². Again, instead of working with

$$E(X^2) = \sum_{x \in R_X} x^2 P(X = x), \qquad (3.14)$$

we write

$$E(X^2) = \sum_{i=1}^{N} (\text{value of population member } i)^2 \times P(\text{population member } i \text{ selected}) = \sum_{i=1}^{N} y_i^2 \frac{1}{N}. \qquad (3.15)$$

Using this result, we can prove the following.

Theorem 3.2.

$$Var(X) = \sigma^2_{\mathrm{fin}}. \qquad (3.16)$$
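Theorems 3.1 and 3.2 can also be checked empirically with a short simulation. This is a sketch in Python (the population values are invented): we repeatedly pick one member uniformly at random, then compare the empirical mean and variance of the draws with μ_fin and σ²_fin.

```python
import random

# A made-up finite population (illustrative values only).
y = [520, 480, 610, 450, 540]
N = len(y)
mu_fin = sum(y) / N                                    # 520.0
sigma2_fin = sum((yi - mu_fin) ** 2 for yi in y) / N   # 3000.0

# Simulate many independent draws of X: each member selected with
# probability 1/N, as in Section 3.4.3.
random.seed(42)
draws = [random.choice(y) for _ in range(200_000)]

# The empirical mean and variance of the draws should be close to mu_fin
# and sigma2_fin, as Theorems 3.1 and 3.2 predict.
emp_mean = sum(draws) / len(draws)
emp_var = sum((d - emp_mean) ** 2 for d in draws) / len(draws)
print(emp_mean, emp_var)
```

The agreement is only approximate for any finite number of draws, but it improves as the number of draws grows.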

3.4.6 A random sample from a finite population

We now think of drawing a random sample of n observations as observing n random variables X_1, ..., X_n as described above: X_j represents the jth instance of picking one member of the population at random. In practice, we wouldn't want to choose the same population member twice, but in this module we will ignore this possibility, and suppose that the n members of the population are chosen independently of each other. (Picking the same member twice will be very unlikely if N is large and n is relatively small.) We have X_1, ..., X_n independent and identically distributed, with

$$E(X_j) = \mu_{\mathrm{fin}}, \qquad (3.17)$$
$$Var(X_j) = \sigma^2_{\mathrm{fin}} \qquad (3.18)$$

for j = 1, ..., n. Shortly, we will see how we can use these results both to estimate population means and variances from samples, and to quantify the uncertainty in these estimates.

3.5 Infinite populations

In many cases, the idea of a finite population does not satisfactorily describe the situation of interest. Typically, this will be the case when the population we are interested in includes items that may come into existence in the future, in addition to items that currently exist or existed in the past; we can't conceive of a finite population size N.

3.5.1 Limitations of the finite population model: an example

A casual runner runs a 5K race most weeks. He judges his general fitness to have been constant over these races. He has recorded all his times, and has an average running time of 23 minutes. His fastest time was 21:49 minutes, and his slowest was 24:22 minutes. A friend suggests that drinking an energy drink two hours before each race will improve his times (a little). For the next three races, he does this, and his running times are 22:36, 22:48 and 23:06. Did the drink have an effect? Given that his times vary anyway, we would not be convinced of anything after only three races.
What we would like to know is: what would his mean running time be, over a population of all the races that he could run, if he drinks the energy drink each time (assuming no other changes to his fitness)? Is this mean value less than 23 minutes? We might think of the three new running times as a sample from this population. But how exactly do we represent this population of running times, and in what sense are those three observed times a sample from it? The idea of sampling from a finite population doesn't make sense here:

- the three observed times were not randomly selected from some list of running times; he has only run three races using the energy drink so far;
- no other races have happened yet, and so the other population values have not yet been determined;
- we can't specify a population size N now; it has not been determined how many races he will run in the future.

3.5.2 Modelling infinite populations with probability distributions

In the infinite population case, we again think of our observed data as observations of random variables X_1, X_2, ..., but now we simply suppose that these are random draws from a probability distribution. This probability distribution will represent the population.

An example: normally distributed running times

We might suppose that each running time we observe is a random draw from a normal distribution N(μ, σ²), and we say that the runner's times are "normally distributed": the population distribution of running times is given by the N(μ, σ²) distribution. We visualise this in Figure 3.1.

[Figure 3.1: a probability density curve over running times from 21 to 25 minutes, with the three observed times marked as crosses. Caption: We suppose the population distribution of running times (using the energy drink) is described by some probability distribution (the solid line). The three running times we observed (the red crosses) are treated as random draws from this distribution.]

More specifically, we suppose that the population distribution of running times can be represented by a density function f_X, where we have supposed that this density function is

$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}(x - \mu)^2\right).$$

You have, of course, met the normal distribution and this density function before. But two things are a little different in this context:

1. we are using a density function to describe a population of running times, not just one single random running time;
2. the parameters μ and σ² will be unknown. In particular, we may wish to estimate them given the observed data of the three running times.
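To make the estimation problem concrete, here is a small sketch in Python: the three observed times from Section 3.5.1, converted to minutes, and their arithmetic mean. Treating this mean as a guess at the unknown μ is natural but not yet formally justified; how to estimate μ and σ² properly is taken up later in the module.

```python
# The three observed running times, 22:36, 22:48 and 23:06, in minutes.
times = [22 + 36 / 60, 22 + 48 / 60, 23 + 6 / 60]   # 22.6, 22.8, 23.1
n = len(times)

# A natural (not yet formally justified) guess at the unknown mu:
# the arithmetic mean of the observed times.
mu_hat = sum(times) / n
print(round(mu_hat, 3))   # 22.833
```

Whether 22.833 minutes being below 23 minutes is convincing evidence of an improvement, given only three observations, is exactly the kind of question statistical inference addresses.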

3.5.3 The mean and variance of an infinite population

The (infinite) population mean and variance are defined to be the mean and variance of the corresponding probability distribution. If this population has probability density function f_X, then the population mean and variance are defined as

$$\mu_{\mathrm{inf}} := \int_{-\infty}^{\infty} x f_X(x)\,dx, \qquad (3.19)$$
$$\sigma^2_{\mathrm{inf}} := \int_{-\infty}^{\infty} (x - \mu_{\mathrm{inf}})^2 f_X(x)\,dx. \qquad (3.20)$$

(If the population values are discrete, the population mean and variance would be defined in terms of a probability mass function, with summation rather than integration.) Note that, by definition, for any random draw X_i from the population, we have E(X_i) = μ_inf and Var(X_i) = σ²_inf.

3.5.4 Population proportions for infinite populations

For a finite population, the proportion of population members taking values between some limits a and b is obtained by counting members of the population. For an infinite population, we suppose it is defined by the integral

$$p_{[a,b]} := \int_a^b f_X(x)\,dx. \qquad (3.21)$$

3.5.5 Interpreting the mean, variance and population proportions for an infinite population

The definitions we have given for the population mean, variance and proportion all match the definitions of E(X), Var(X) and P(a ≤ X ≤ b) for a single random variable X that you have already met in Part 1: Probability of this module. So why, in this context, have we called the above integrals the population mean, variance and proportion? To understand this, think about what happens when we have a large number of identically distributed (and independent) random variables X_1, X_2, .... Suppose we could obtain a very large number of observations from an infinite population. We think of this as observing a large number of independent and identically distributed random variables X_1, ..., X_N. Denote the values we actually get by x_1, ..., x_N.
If we now treat these values x_1, ..., x_N as a finite population, we would find that:

- the finite population mean of x_1, ..., x_N, calculated using (3.2), would be approximately equal to μ_inf, defined in (3.19);
- the finite population variance of x_1, ..., x_N, calculated using (3.3), would be approximately equal to σ²_inf, defined in (3.20);
- the proportion of members in the finite population taking values between the limits a and b would be approximately p_[a,b], as defined in (3.21);
- a histogram of x_1, ..., x_N, scaled to have total area 1, would look very similar to the infinite population density function f_X.

An illustration using a simulation experiment

Suppose the population of interest concerns the length of time (in days) each patient will stay in a particular type of hospital ward. Suppose this population is represented by an infinite population, described by the Exp(rate = 0.2) distribution: we say that the length of stay is "exponentially distributed". Note that for X ~ Exp(rate = 0.2), we have E(X) = 5 and Var(X) = 25, so we have population mean μ_inf = 5 and population variance σ²_inf = 25. We also have

$$p_{[5,10]} = \int_5^{10} 0.2 \exp(-0.2x)\,dx = 0.233.$$

How do we interpret these values? In R, we do the following:

1. We generate N random draws from the Exp(rate = 0.2) distribution, for some large value of N. Denote these generated values by x_1, ..., x_N.
2. We now treat x_1, ..., x_N as a separate finite population, and calculate the finite population mean and variance using (3.2) and (3.3).
3. We compare the finite population mean and variance calculated in step 2 with μ_inf = 5 and σ²_inf = 25.
4. We compare a histogram of the finite population x_1, ..., x_N, scaled to have total area 1, with the Exp(rate = 0.2) probability density function f_X(x) = 0.2 exp(−0.2x) (for x > 0).
5. We compare the proportion of x_1, ..., x_N taking values in [5, 10] with p_[5,10] = 0.233.

We conduct this experiment in R as follows.

```r
par(mar = c(5, 4, 1, 2) + 0.1)
x <- rexp(1000000, rate = 0.2)
mean(x)
## [1] 4.995208
var(x)
## [1] 25.06156
sum(x >= 5 & x <= 10) / 1000000
## [1] 0.231775
hist(x, prob = TRUE, xlim = c(0, 40), breaks = 0:ceiling(max(x)),
     xlab = "length of stay (days)", main = "")
curve(dexp(x, rate = 0.2), from = 0, to = 40, col = "red", add = TRUE)
```

[Figure 3.2: A histogram of a finite population with N = 1000000, generated from a probability distribution with density function given by the red curve; the horizontal axis shows length of stay (days) from 0 to 40.]

Observe how the histogram lines up with the density function, how the finite population mean and variance are very close to μ_inf and σ²_inf, and how the observed proportion was close to 0.233. Hence, informally, μ_inf, σ²_inf and p_[a,b] tell us what we would observe in a very large sample.

Comment: why "infinite" population?

Informally, f_X, μ_inf and σ²_inf can be thought of as the limiting case of the finite population distribution, mean and variance as the population size N → ∞. Another way to think about it is that in an infinite population model, we can define a mean, variance and proportion without any reference to a population size N, unlike in the finite population case.

3.6 Summary

We have two ways of thinking of a population: finite or infinite. In both cases, we will think of obtaining a sample from the population as observing a set of independent and identically distributed random variables X_1, ..., X_n. In the finite population case, we have

$$E(X_j) = \mu_{\mathrm{fin}}, \qquad (3.22)$$
$$Var(X_j) = \sigma^2_{\mathrm{fin}}, \qquad (3.23)$$

where μ_fin and σ²_fin are the finite population mean and variance respectively, defined in equations (3.2) and (3.3). In the infinite population case, we have

$$E(X_j) = \mu_{\mathrm{inf}}, \qquad (3.24)$$
$$Var(X_j) = \sigma^2_{\mathrm{inf}}, \qquad (3.25)$$

where μ_inf and σ²_inf are the infinite population mean and variance respectively, defined in equations (3.19) and (3.20). In this regard, the distinction between finite and infinite populations is unimportant, and from now on we will simply write

$$E(X_j) = \mu, \qquad (3.26)$$
$$Var(X_j) = \sigma^2, \qquad (3.27)$$

where μ is the population mean and σ² is the population variance. Regarding the interpretation of μ and σ²:

- in a finite population, μ will be the arithmetic mean of each population member's value (e.g. exam score, income, weight, etc.);
- in an infinite population, μ will, approximately, be the arithmetic mean of a very large number of population members' values;

and we can interpret σ² in a similar way.