F79SM STATISTICAL METHODS


SUMMARY NOTES

9 Hypothesis testing

9.1 Introduction

As before, we have a random sample x of size n from a population r.v. X with pdf/pf f(x; θ). The distribution we assign to X is our model for the process which has generated our data, e.g. X ~ N(µ, 1), X ~ Poisson(λ).

A hypothesis H is a statement about the distribution of X; in particular, in this chapter, it is a statement about the unknown value of a parameter θ (or a vector of parameters).

A simple hypothesis is a statement which completely specifies the distribution: e.g. if X ~ N(µ, 1) then H: µ = 5 is a simple hypothesis. If H is not simple, it is composite, e.g. H: µ > 5.

A test of H is a rule which partitions the sample space into two subsets:
critical region: data in this subset are not consistent with H and we reject H
acceptance region: data in this subset are consistent with H and we accept H.

The null hypothesis H0 represents the current theory (the "status quo"), e.g.
H0: θ = 0, H0: θ = θ0, H0: P < 0.4, H0: P > P0, H0: µ = 5, H0: µ > 5, H0: µ < µ0
H0: µ1 − µ2 = 0 (this is the "no difference" or "no treatment effect" hypothesis)
H0: σ1² = σ2² (this is the "equal variances" or "homoscedasticity" hypothesis)

The null hypothesis H0 is contrasted with an alternative hypothesis H1, and our test is written, for example, as follows:
H0: θ = θ0 v H1: θ = θ1   a test with simple null and alternative hypotheses
H0: θ = θ0 v H1: θ > θ0   a one-sided test with simple null and composite alternative hypotheses
H0: θ ≤ θ0 v H1: θ > θ0   a one-sided test with composite null and alternative hypotheses
H0: θ = θ0 v H1: θ ≠ θ0   a two-sided test with simple null and composite alternative hypotheses

The fundamental questions we are asking are:
do our data provide strong enough evidence to justify our rejecting the null hypothesis?
how strong is our evidence against the null hypothesis?
and, on a more general plane of enquiry,
how good is our procedure: given the data we have, are we using the best available test?

RJG
is there perhaps a better experimental procedure we could have used to make a better testing approach available to us?

The decision is based on the value of an appropriate function of the data called the test statistic (e.g. the sample mean x̄, sample proportion p, sample variance s², maximum value in the sample), whose distribution is completely known under H0, that is, when H0 is true.

9.2 Classical (Neyman-Pearson) methodology

(a) Simple H0 v simple H1

There are two types of testing errors we are exposed to when making our decision:
type I error: reject H0 when it is true
type II error: accept H0 when it is false

The probabilities of making these errors are conventionally denoted α and β:
α = P(commit a type I error) = P(reject H0 | H0 true)
β = P(commit a type II error) = P(accept H0 | H0 false)
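As a numerical illustration of these two error probabilities (a sketch, not part of the original notes; the setting and cut-off are hypothetical, chosen to match the worked example later in the chapter):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal distribution function Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical setting: X ~ N(mu, 1), sample size n = 25,
# testing H0: mu = 10 against H1: mu = 10.5.
# Suppose we reject H0 when the sample mean exceeds K = 10.329.
n, mu0, mu1 = 25, 10.0, 10.5
se = 1.0 / sqrt(n)                          # sd of the sample mean = 0.2

alpha = 1 - norm_cdf((10.329 - mu0) / se)   # P(reject H0 | H0 true), about 0.05
beta = norm_cdf((10.329 - mu1) / se)        # P(accept H0 | H1 true), about 0.196
power = 1 - beta                            # about 0.804
print(alpha, beta, power)
```

Note that lowering the cut-off K raises α while lowering β, which is the trade-off described above.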

1 − β = P(reject H0 | H0 false) is called the power of the test: it is the probability of making a correct decision to reject the null hypothesis. It measures the effectiveness of the test at detecting departures from the null hypothesis.

We want both α and β to be small but, for a fixed sample size, it is not possible to lower both probabilities of error simultaneously; we can of course lower both probabilities by increasing the sample size.

The classical, Neyman-Pearson, approach to testing is as follows: when testing H0: θ = θ0 v H1: θ = θ1,
(i) fix/choose the value of α. Once chosen, α is called the level of significance of the test (popular choices are α = 0.05, giving a "5% test", and α = 0.01, giving a "1% test"), and then
(ii) use the test for which β is smallest, that is to say the test with the highest power, i.e. choose the most powerful available test of level α.

The method for finding this best test is based on the likelihood function. The result is the Neyman-Pearson Lemma, which is expressed in terms of the likelihood ratio L(θ0)/L(θ1) = L0/L1 for short. The lemma states that the form of the best test is given by finding the form of the critical region C such that C = {x : L0 ≤ k L1} for some constant k. The exact specification of C comes from the chosen level of the test, α, and depends on θ0. The power of the resulting test depends on θ1.

The criterion comes down in practice to defining C in terms of a range of values of the test statistic. For a formal statement and proof of the Lemma see Miller & Miller 1.4.

A test with pre-assigned level is often called a significance test. If the level is α and our decision is "reject H0", we say that our result is statistically significant at 100α%.

(b) Composite hypotheses

In some cases we can use the N-P Lemma when we have a composite alternative hypothesis. We may be able to find a test which is best for every value of the parameter specified by H1. Such a test, if it exists, is said to be uniformly most powerful (UMP).
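The Neyman-Pearson criterion {L0 ≤ k L1} typically reduces to a condition on a simple statistic. A small numerical sketch (illustrative, not from the notes) of why, for normal data with µ1 > µ0, the likelihood ratio depends on the data only through the sample mean and decreases as the sample mean grows, so the critical region is of the form {x̄ ≥ K}:

```python
from math import exp

# For a N(mu, 1) sample of size n, algebra gives
# L(mu0)/L(mu1) = exp{-n(mu1 - mu0)(xbar - (mu0 + mu1)/2)},
# which is strictly decreasing in xbar when mu1 > mu0.
def likelihood_ratio(xbar, n, mu0, mu1):
    return exp(-n * (mu1 - mu0) * (xbar - (mu0 + mu1) / 2.0))

n, mu0, mu1 = 25, 10.0, 10.5
ratios = [likelihood_ratio(xb, n, mu0, mu1) for xb in (10.0, 10.2, 10.4)]
print(ratios)  # strictly decreasing as xbar increases
```

So {L0 ≤ k L1} is the same event as {x̄ ≥ K} for a suitable K, exactly as derived analytically in Ex9.1.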
Ex9.1 Random sample, size n, of X ~ N(µ, 1). Test of H0: µ = µ0 v H1: µ > µ0.

Consider first a test of H0: µ = µ0 v H1: µ = µ1 (where µ1 > µ0).

f(x; µ) = (2π)^(−1/2) exp{−(x − µ)²/2}, so

L0/L1 = exp{−½ Σ(xi − µ0)²} / exp{−½ Σ(xi − µ1)²} = exp{−n(µ1 − µ0)(x̄ − ½(µ0 + µ1))}

The best test has critical region defined by those data values x such that L0/L1 ≤ k, which is true for −n(µ1 − µ0)(x̄ − ½(µ0 + µ1)) ≤ log k, that is, since µ1 − µ0 > 0, for x̄ ≥ K.

So the best test is such that we reject H0 if x̄ exceeds some value, i.e. we reject H0 for large values of X̄.

Suppose we want to perform a test at the 100α% level. Under H0, X̄ ~ N(µ0, 1/n).

α = P(type I error) = P(X̄ ≥ K | µ = µ0) = P(Z ≥ √n(K − µ0)), so √n(K − µ0) = z_α, where Φ(z_α) = 1 − α.

[For a 5% test, z_0.05 = 1.645 and we reject H0 for X̄ > µ0 + 1.645/√n.
For the case µ0 = 10, µ1 = 10.5, and n = 25, we reject H0 for X̄ > 10.329.
The power of the test is then given by P(X̄ > 10.329 | µ = µ1), which is P(Z > (10.329 − 10.5)/(1/√25)) = P(Z > −0.855) = 0.804.]

The test is best whatever the particular value of µ1 specified in H1 and so is UMP for testing H0: µ = µ0 v H1: µ > µ0. No UMP test exists for testing H0: µ = µ0 v H1: µ ≠ µ0.

The power function of a test (which generalises the concept of power we met earlier) is a function of the parameter, given by π(θ) = P(reject H0 | θ).

Ex9.1 continued Consider testing H0: µ ≤ µ0 v H1: µ > µ0 at the 5% level of significance. The best 5% test is as above and is UMP: it rejects H0 for X̄ > µ0 + 1.645/√n. For general µ, X̄ ~ N(µ, 1/n), so

π(µ) = P(reject H0 | µ) = P(X̄ > µ0 + 1.645/√n | µ) = P(Z > 1.645 − √n(µ − µ0)) = 1 − Φ(1.645 − √n(µ − µ0))

[Graph: the power function π(µ) for the case n = 9, µ0 = 10; at µ = µ0 = 10, power = 0.05 = level of the test of H0: µ = 10 v H1: µ = µ1 (> 10).]

When working with composite hypotheses, the largest value of the power function π(θ) under H0 is called the size of the test (this generalises the concept of the level of the test).
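The power function above is easy to evaluate numerically. A quick check of the figures quoted in Ex9.1 (a sketch, not part of the original notes):

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Power function of the UMP 5% test of H0: mu <= mu0 v H1: mu > mu0
# for N(mu, 1) data: pi(mu) = 1 - Phi(1.645 - sqrt(n)*(mu - mu0)).
def power(mu, n, mu0, z_alpha=1.645):
    return 1 - norm_cdf(z_alpha - sqrt(n) * (mu - mu0))

# Reproducing the worked numbers: mu0 = 10, n = 25, alternative mu1 = 10.5.
print(round(power(10.0, 25, 10.0), 3))  # 0.05 at mu = mu0: the level of the test
print(round(power(10.5, 25, 10.0), 3))  # about 0.804, as in Ex9.1
```

Evaluating power() over a grid of µ values reproduces the curve plotted for n = 9 (see the Appendix for the R version).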

9.3 Some standard cases

9.3.1 Testing a population mean

Suppose X ~ N(µ, σ²), random sample, size n, testing H0: µ = µ0.

(a) σ² known
Test statistic is (X̄ − µ0)/(σ/√n), which is ~ N(0, 1) under H0.

(b) σ² unknown
Test statistic is (X̄ − µ0)/(S/√n), which is ~ t with n − 1 df under H0. This gives the famous "t test".

Large samples from any distribution:
Test statistic is (X̄ − µ0)/(S/√n), which is ~ N(0, 1) (approximately) under H0.

9.3.2 Testing a population variance

Suppose X ~ N(µ, σ²), random sample, size n, testing H0: σ² = σ0².
Test statistic is (n − 1)S²/σ0², which is ~ χ² with n − 1 df under H0.

9.3.3 Testing a population proportion

Let X be the number of successes in n Bernoulli trials with P(success) = θ, testing H0: θ = θ0.
Test statistic is X, which is ~ b(n, θ0) under H0,
and, for large n, (X − nθ0)/√(nθ0(1 − θ0)) is ~ N(0, 1) (approximately) under H0.

9.3.4 Testing a Poisson mean

Suppose X ~ Poisson(λ), random sample, size n, testing H0: λ = λ0.
Test statistic is ΣXi, which is ~ Poisson(nλ0) under H0,
and, for large n, ΣXi ~ N(nλ0, nλ0), or X̄ ~ N(λ0, λ0/n), approximately under H0.

Ex9.2 Random sample of X ~ N(µ, σ²). We want to test H0: µ = 10.5 v H1: µ < 10.5 at the 5% level. We have data from a random sample of size 10, with sample mean x̄ = 9.1.

We reject H0 for small values of X̄. The test statistic is (X̄ − 10.5)/(S/√10), which is ~ t with 9 df under H0.

Lower 5% point for t with 9 df is −1.833, so we reject H0 for (X̄ − 10.5)/(S/√10) < −1.833. This defines the critical region.

For our sample, the test statistic has value −2.64, so we do reject H0 and accept H1.
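The one-sample t calculation in Ex9.2 follows a fixed recipe. A sketch of it in Python (the data below are made up for illustration, not the Ex9.2 sample, whose raw values are not reproduced here):

```python
from statistics import mean, stdev
from math import sqrt

# One-sample t test: H0: mu = 10.5 v H1: mu < 10.5 at the 5% level, n = 10.
# Reject H0 when t = (xbar - 10.5)/(s/sqrt(10)) falls below the lower 5%
# point of t with 9 df, which is -1.833 (from tables).
x = [9.8, 8.6, 9.4, 10.1, 8.9, 9.7, 8.2, 9.0, 9.9, 8.4]  # illustrative data
n, mu0 = len(x), 10.5
t = (mean(x) - mu0) / (stdev(x) / sqrt(n))
print(round(t, 2), t < -1.833)  # reject H0 if the second value is True
```

The same statistic serves for the large-sample case of 9.3.1, with the N(0, 1) reference distribution in place of t.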

An alternative, and simpler, approach is to calculate the observed value of the test statistic for the sample in hand and compare it with the tabulated percentage point (or go further: see P-values later). Here, our observed t = −2.64, which is lower than the relevant percentage point (−1.833): our observed value is low enough to be in the tail of the reference distribution and we reject H0.

Ex9.3 A coin is tossed 20 times and lands heads 5 times and tails 15 times. Investigate whether the coin is fair or biased in favour of tails (i.e. do we have strong enough evidence to conclude that the coin is biased in favour of tails?).

Let X be the number of heads. Then X ~ b(20, θ) where P(head) = θ. We will test H0: θ = 0.5 v H1: θ < 0.5 at 5%. We reject H0 for small values of X.

From NCST, P(X ≤ 5 | θ = 0.5) = 0.0207, which is less than 0.05. Our observation x = 5 is in the lower tail of the reference binomial distribution, so we reject H0. We conclude that the coin is biased in favour of tails.

Suppose the coin was tossed 20 times and landed heads 8 times. P(X ≤ 8 | θ = 0.5) = 0.2517. This is far too high to provide evidence against H0, which can stand.

But suppose now that the coin was tossed 100 times and landed heads 40 times (same proportion of heads, but on many more tosses). Now X ~ b(100, θ) and we can use the test statistic (X − nθ0)/√(nθ0(1 − θ0)), which is ~ N(0, 1) (approximately) under H0.

Our observed statistic is (40 − 50)/5 = −2, which is less than the lower 5% point of the N(0, 1) distribution (−1.645): our observed value is in the tail of the reference distribution, and this time we have sufficiently strong evidence against H0 to justify our rejecting it. We reject H0 and conclude that the coin is biased in favour of tails (but see next section for improved methodology which allows naturally for the use of a continuity correction).

9.4 Significance and P-values

A typical conclusion of a significance test is simply "reject H0 at the 5% level of significance" or just "reject H0 at 5%".
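The exact binomial tail probabilities quoted in Ex9.3 above can be checked directly rather than read from tables. A sketch (not part of the original notes):

```python
from math import comb

# P(X <= x) for X ~ b(n, theta), computed exactly from the binomial pf.
def binom_cdf(x, n, theta=0.5):
    return sum(comb(n, k) * theta**k * (1 - theta)**(n - k) for k in range(x + 1))

print(round(binom_cdf(5, 20), 4))  # 0.0207: strong evidence against H0
print(round(binom_cdf(8, 20), 4))  # 0.2517: no real evidence against H0
```

The contrast between these two tails is exactly the point of Ex9.3: the same sample proportion can carry very different weights of evidence.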
This is not as informative as we can be. It is more informative to quantify the strength of the evidence the data provide against H0. We do this by calculating the probability value (P-value) of our observed test statistic.

The P-value is the observed significance level of the test statistic: it is the probability, assuming H0 is true, of observing a value of the test statistic as extreme (that is, as inconsistent with H0) as the value we have actually observed.

The P-value is the probability of the smallest critical region which includes the observed test statistic. Given the data we have, the P-value is the lowest level at which we can reject H0.

The smaller the P-value, the stronger our evidence against H0. The use of P-values is very widespread in published statistical work and is strongly recommended.

In Ex9.1, consider again the case µ0 = 10, µ1 = 10.5, and n = 25.

Suppose we observe x̄ = 10.41. This value is in the critical region (which is x̄ > 10.329) and has P-value given by P(X̄ ≥ 10.41 | µ0) = P(Z > 2.05) = 0.02 (or 2%). So we have strong enough evidence to justify rejecting H0, at levels of testing down to 2%.

Suppose however we observe x̄ = 10.27. This value is not in the critical region and has P-value given by P(X̄ ≥ 10.27 | µ0) = P(Z > 1.35) = 0.089 (or 8.9%). The P-value is higher and the evidence is not strong enough to justify rejecting H0.

In Ex9.2, the observed test statistic is −2.64 and the P-value of this statistic is P(t < −2.64), with 9 df, which is 0.015 (approx., from NCST). So we have strong enough evidence to justify rejecting H0, at levels of testing down to about 1.5%.

In Ex9.3 with 100 tosses, under H0, X ~ N(50, 25) approximately, and the P-value of our observation "40 heads" is calculated as P(X ≤ 40 | H0) = P(Z < (40.5 − 50)/5) = P(Z < −1.9) = 0.029. We have strong enough evidence to justify rejecting H0, at levels of testing down to about 3%.

[Note the use of the continuity correction when using the normal distribution (which is continuous) to calculate an approximation to a probability for the binomial distribution (which is discrete).]

P-value    Suitable language for your conclusions (in most applications)
> 0.05     insufficient evidence against H0 to justify rejecting it; evidence not strong enough to justify rejecting H0; H0 can stand
< 0.05     we have some evidence against H0; we can reject H0 at the 5% level of testing
< 0.01     we have strong evidence against H0; we can reject H0 at the 1% level of testing; we can reject H0 at levels of testing down to 1%
< 0.001    we have overwhelming evidence against H0; we can reject H0 at the 0.1% level of testing; we can reject H0 at levels of testing down to 0.1%
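The continuity-corrected P-value for the coin example above is easily checked numerically (a sketch, not part of the original notes):

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# P-value for "40 heads in 100 tosses" under H0: theta = 0.5, using the
# normal approximation X ~ N(50, 25) with a continuity correction:
# P(X <= 40) is approximated by P(Z < (40.5 - 50)/5).
p_value = norm_cdf((40.5 - 50) / 5)
print(round(p_value, 3))  # about 0.029, as in the notes

# Without the correction the approximation is noticeably cruder:
print(round(norm_cdf((40 - 50) / 5), 3))  # about 0.023
```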

9.5 Two sample situations

9.5.1 Difference between two population means

Random sample, size n1, from X1 ~ N(µ1, σ1²); random sample, size n2, from X2 ~ N(µ2, σ2²). All variables are independent. Sample means X̄1 and X̄2 and variances S1² and S2². We want to test hypotheses about µ1 − µ2.

H0: µ1 − µ2 = δ (δ = 0 is the "no difference" or "no treatment effect" hypothesis).

(a) Population variances known
Test statistic is (X̄1 − X̄2 − δ)/√(σ1²/n1 + σ2²/n2), which is ~ N(0, 1) under H0.

(b) Common population variance σ1² = σ2² = σ²
Test statistic is (X̄1 − X̄2 − δ)/(Sp √(1/n1 + 1/n2)), which is ~ t with n1 + n2 − 2 df under H0
(recall the pooled estimator of σ² is Sp² = [(n1 − 1)S1² + (n2 − 1)S2²]/(n1 + n2 − 2)).
This gives the famous "two sample t test".

Large samples from any distribution:
Test statistic is (X̄1 − X̄2 − δ)/(Sp √(1/n1 + 1/n2)), or (X̄1 − X̄2 − δ)/√(S1²/n1 + S2²/n2), both of which are ~ N(0, 1) (approximately) under H0.

9.5.2 Ratio of two population variances

Random sample, size n1, from X1 ~ N(µ1, σ1²); random sample, size n2, from X2 ~ N(µ2, σ2²). All variables are independent. Sample variances S1² and S2². We want to test hypotheses about σ1²/σ2².

H0: σ1²/σ2² = 1 (i.e. σ1² = σ2²: this is the homoscedasticity hypothesis)
Test statistic is S1²/S2², which is ~ F with (n1 − 1, n2 − 1) df under H0.

9.5.3 Difference between two population proportions

X1 ~ b(n1, θ1), X2 ~ b(n2, θ2); large samples; sample proportions P1 and P2 respectively.

H0: θ1 − θ2 = δ (δ = 0 is the "no difference" hypothesis in regard to the population proportions)
Test statistic is (P1 − P2 − δ)/√(P1(1 − P1)/n1 + P2(1 − P2)/n2), which is ~ N(0, 1) (approximately) under H0.

In the case δ = 0, H0 specifies a common population proportion θ = θ1 = θ2, and, under H0, X1 + X2 ~ b(n1 + n2, θ). The MLE of the common proportion is then θ̂ = (X1 + X2)/(n1 + n2).
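The pooled two-sample t statistic of (b) above can be sketched as follows (the summary figures are made up for illustration, not from the notes):

```python
from math import sqrt

# Pooled two-sample t statistic for H0: mu1 - mu2 = delta.
def pooled_t(xbar1, xbar2, s1, s2, n1, n2, delta=0.0):
    # Pooled variance estimate: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return (xbar1 - xbar2 - delta) / se  # ~ t with n1 + n2 - 2 df under H0

# Illustrative summary figures (hypothetical):
t = pooled_t(xbar1=5.2, xbar2=4.4, s1=1.1, s2=0.9, n1=12, n2=11)
print(round(t, 2))
```

Compare the result with the t percentage point on n1 + n2 − 2 df, exactly as in the one-sample case.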

In this case the estimated standard error of P1 − P2 under H0 is ese(P1 − P2) = √(θ̂(1 − θ̂)(1/n1 + 1/n2)) and the test statistic is (P1 − P2)/ese(P1 − P2), which is ~ N(0, 1) (approximately) under H0.

9.5.4 Difference between two Poisson means

Random sample, size n1, from X1 ~ Poisson(λ1); random sample, size n2, from X2 ~ Poisson(λ2); large samples; all variables independent. Sample means X̄1 and X̄2.

H0: λ1 = λ2.
The test statistic normally used is (X̄1 − X̄2)/√(X̄1/n1 + X̄2/n2), which is ~ N(0, 1) (approximately) under H0.

Noting that under H0 the MLE of λ = λ1 = λ2 is λ̂ = (ΣX1i + ΣX2i)/(n1 + n2), one can also argue for the test statistic (X̄1 − X̄2)/ese(X̄1 − X̄2), where ese(X̄1 − X̄2) = √(λ̂(1/n1 + 1/n2)).

9.5.5 Paired data (non-independent samples)

Data arise as physical pairs (xi, yi), i = 1, 2, ..., n, with differences di = xi − yi.
H0: µD = µX − µY = 0
The problem reverts to the one-sample problem of 9.3.1.

Ex9.4 See Ex8.1. Test H0: µ1 = µ2 v H1: µ1 ≠ µ2.
Test statistic is t = (X̄1 − X̄2)/(Sp √(1/n1 + 1/n2)) and, for a 5% test, we reject H0 for |t| > 2.080 (t has 21 df).
For our data, t = 2.56, and we reject H0. The P-value of our statistic is P(|t| > 2.56) = 2 × 0.009 = 0.018 (approx., from NCST).

Ex9.5 See Ex8.11. Test H0: θ1 = θ2 v H1: θ1 ≠ θ2.
P1 = 0.18, P2 = 0.115, P1 − P2 = 0.065.
Under H0, θ̂ = 77/500 = 0.154 and ese(P1 − P2) = √(0.154 × 0.846 × (1/n1 + 1/n2)) = 0.03295.
Test statistic = 0.065/0.03295 = 1.973.
P-value of result = P(|Z| > 1.973) = 2 × 0.024 = 0.048.
We reject H0 at levels of testing down to 4.8%.
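The pooled two-proportion test of Ex9.5 can be sketched numerically. The individual sample sizes are not given in this extract, so the counts below (54/300 and 23/200) are assumed values chosen only so that the quoted summaries P1 = 0.18, P2 = 0.115 and θ̂ = 77/500 are reproduced:

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical counts consistent with the Ex9.5 summary figures.
x1, n1, x2, n2 = 54, 300, 23, 200
p1, p2 = x1 / n1, x2 / n2
theta = (x1 + x2) / (n1 + n2)                        # pooled MLE, 77/500 = 0.154
ese = sqrt(theta * (1 - theta) * (1 / n1 + 1 / n2))  # estimated standard error
z = (p1 - p2) / ese
p_value = 2 * (1 - norm_cdf(abs(z)))                 # two-sided
print(round(z, 3), round(p_value, 4))  # about 1.973 and 0.0485
```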

9.6 Tests and confidence intervals

A CI for a parameter θ is a set of values which, given the data we have, are plausible for the parameter. So any value θ0 contained in the CI should be such that the hypothesis H0: θ = θ0 will be accepted in a corresponding hypothesis test. This is in fact generally the case.

For example, sampling from N(µ, 1). A 95% two-sided CI for µ is given by (X̄ − 1.96/√n, X̄ + 1.96/√n), and this interval contains µ0 if and only if −1.96 < (X̄ − µ0)/(1/√n) < 1.96, which is the condition under which H0: µ = µ0 is accepted in a 5% significance test when testing H0: µ = µ0 v H1: µ ≠ µ0.

In general there is this direct link between the two-sided 100(1 − α)% CI and the 100α% two-sided test.

Similarly, one-sided CIs correspond to one-sided tests. For example, consider again sampling from N(µ, 1). A 95% lower CI for µ is given by (X̄ − 1.645/√n, ∞), and this interval contains precisely those values of µ0 which, when specified under H0 in the 5% test of H0: µ = µ0 v H1: µ > µ0, result in H0 being accepted.

If a CI has already been calculated for a parameter, then many questions which arise in a hypothesis-testing framework are answerable immediately, at least in so far as giving us a basic "reject" or "accept" decision.

Ex9.6 Ex9.1 revisited: N(µ, 1), n = 25, H0: µ = 10 v H1: µ > 10.

Suppose we observe x̄ = 10.4. Then a lower 95% CI for µ is given by (10.4 − 1.645/√25, ∞), i.e. (10.071, ∞). This interval does not contain the value µ0 = 10, which we therefore reject as being implausible (inconsistent with the value of the sample mean): it is too low. This is the same conclusion we come to in the test, for which the critical region is X̄ > 10.329.

Suppose we observe x̄ = 10.3. Then a lower 95% CI for µ is given by (10.3 − 1.645/√25, ∞), i.e. (9.971, ∞). This interval does contain the value µ0 = 10, which we therefore accept as being plausible (consistent with the value of the sample mean). This again is the same conclusion we come to in the test.

For a general x̄, the lower limit of the CI is x̄ − 0.329, and so any hypothesised value for µ such that µ0 > x̄ − 0.329 is contained in the CI, i.e.
we accept a hypothesised µ0 provided x̄ < µ0 + 0.329, and hence reject it for x̄ > µ0 + 0.329, as in Ex9.1.

Ex9.7 Ex9.2 revisited: an upper 95% CI for µ is given by (−∞, X̄ + 1.833 S/√10), which with x̄ = 9.1 and the observed s gives an interval whose upper limit lies below 10.5. The interval does not contain the value µ0 = 10.5, which we therefore reject as being inconsistent with the value of the sample mean: it is too high. This is the same conclusion we come to in the test.
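The CI/test duality in Ex9.6 can be checked mechanically: the one-sided test and the one-sided CI always deliver the same decision. A sketch (not part of the original notes):

```python
from math import sqrt

# Ex9.6 setting: N(mu, 1) data, n = 25. The 5% test of H0: mu = mu0 v
# H1: mu > mu0 rejects when xbar > mu0 + 1.645/sqrt(n); the 95% lower CI
# is (xbar - 1.645/sqrt(n), infinity). The two decisions agree.
n, mu0 = 25, 10.0
margin = 1.645 / sqrt(n)  # 0.329

for xbar in (10.4, 10.3):
    reject_by_test = xbar > mu0 + margin
    ci_lower = xbar - margin
    reject_by_ci = mu0 <= ci_lower   # mu0 falls outside (ci_lower, infinity)
    print(xbar, reject_by_test, reject_by_ci)  # the two decisions agree
```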

Ex9.8 See Ex8.2. Testing a population proportion: H0: θ = 0.38 v H1: θ ≠ 0.38, based on the result that a random sample of 1200 includes 420 with the property.

We reject H0 for extreme values of X, where X ~ b(1200, 0.38) ≈ N(456, 282.7).
P(X ≤ 420) = P[Z < (420.5 − 456)/√282.7] = P(Z < −2.11) = 0.0174, so the P-value of this (two-sided) test is 0.0348. So at 5% we reject H0.
The 95% CI for θ is 0.35 ± 0.027, i.e. (0.323, 0.377): the value 0.38 is not contained in this interval.

9.7 Other matters

(a) When a single best test (in the Neyman-Pearson sense) is not available, another, more general approach is used. The test statistic and critical region are found by setting an upper bound on the ratio max L0 / max L, where max L0 is the maximum value of the likelihood L under the restrictions imposed by H0, and max L is the unrestricted maximum value of L. This method produces tests called likelihood ratio tests. For example, in sampling from N(µ, σ²) and testing H0: µ = µ0, the method leads to the t test of 9.3.1.

(b) We may be able to reject H0 at a specified level simply by using so much data that our test statistic has a small enough standard error to enable us to detect a departure from H0. This departure may, however, be of little or no physical significance.

(c) A failure to reject H0 does not imply that H0 is true. It indicates that we have failed to reject it: our data do not provide sufficiently strong evidence against it. H0 represents a theory which lives on to fight another day.

(d) Good practice in testing. State:
your hypotheses
the test statistic
the distribution of the test statistic under H0
the observed value of the test statistic
the P-value (at least approximately) of the test statistic
your conclusion as regards the hypotheses
your conclusion in words which relate to the physical situation concerned
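The calculations in Ex9.8 above can be verified numerically (a sketch, not part of the original notes):

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Ex9.8: X ~ b(1200, 0.38), approximated by N(456, 282.7); observation 420.
n, theta0, x = 1200, 0.38, 420
mean, var = n * theta0, n * theta0 * (1 - theta0)
p_one_sided = norm_cdf((x + 0.5 - mean) / sqrt(var))  # continuity correction
p_value = 2 * p_one_sided                             # two-sided test
print(round(p_one_sided, 4), round(p_value, 4))  # about 0.017 and 0.035

# The matching 95% CI for theta, centred on the sample proportion 0.35:
p_hat = x / n
half_width = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
print(round(p_hat - half_width, 3), round(p_hat + half_width, 3))  # (0.323, 0.377)
```

As section 9.6 predicts, the CI excludes 0.38 and the 5% two-sided test rejects H0: the two answers agree.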

Appendix

R code to produce the display in Ex9.1 continued (power function for n = 9, µ0 = 10; the plotting grid and dashed guide lines are reconstructed from the description of the graph):

x = 10 + c(-2:6)*0.5
y = 1 - pnorm(1.6449 - 3*(x - 10))   # power function, sqrt(n) = 3
c = c(9, 10)
d = c(0.05, 0.05)
e = c(10, 10)
f = c(0, 0.05)
plot(x, y, type="l", xlab="mu", ylab="power", main="Power function")
lines(c, d, lty=2)   # horizontal dashed line at power = 0.05
lines(e, f, lty=2)   # vertical dashed line at mu = mu0 = 10


More information

Quantitative Introduction ro Risk and Uncertainty in Business Module 5: Hypothesis Testing

Quantitative Introduction ro Risk and Uncertainty in Business Module 5: Hypothesis Testing Quantitative Introduction ro Risk and Uncertainty in Business Module 5: Hypothesis Testing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu October

More information

Statistical Inference

Statistical Inference Statistical Inference Classical and Bayesian Methods Revision Class for Midterm Exam AMS-UCSC Th Feb 9, 2012 Winter 2012. Session 1 (Revision Class) AMS-132/206 Th Feb 9, 2012 1 / 23 Topics Topics We will

More information

STAT Chapter 8: Hypothesis Tests

STAT Chapter 8: Hypothesis Tests STAT 515 -- Chapter 8: Hypothesis Tests CIs are possibly the most useful forms of inference because they give a range of reasonable values for a parameter. But sometimes we want to know whether one particular

More information

Hypothesis Testing. A rule for making the required choice can be described in two ways: called the rejection or critical region of the test.

Hypothesis Testing. A rule for making the required choice can be described in two ways: called the rejection or critical region of the test. Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two hypotheses:

More information

Econ 325: Introduction to Empirical Economics

Econ 325: Introduction to Empirical Economics Econ 325: Introduction to Empirical Economics Chapter 9 Hypothesis Testing: Single Population Ch. 9-1 9.1 What is a Hypothesis? A hypothesis is a claim (assumption) about a population parameter: population

More information

4 Hypothesis testing. 4.1 Types of hypothesis and types of error 4 HYPOTHESIS TESTING 49

4 Hypothesis testing. 4.1 Types of hypothesis and types of error 4 HYPOTHESIS TESTING 49 4 HYPOTHESIS TESTING 49 4 Hypothesis testing In sections 2 and 3 we considered the problem of estimating a single parameter of interest, θ. In this section we consider the related problem of testing whether

More information

Chapter 5: HYPOTHESIS TESTING

Chapter 5: HYPOTHESIS TESTING MATH411: Applied Statistics Dr. YU, Chi Wai Chapter 5: HYPOTHESIS TESTING 1 WHAT IS HYPOTHESIS TESTING? As its name indicates, it is about a test of hypothesis. To be more precise, we would first translate

More information

Chapter Three. Hypothesis Testing

Chapter Three. Hypothesis Testing 3.1 Introduction The final phase of analyzing data is to make a decision concerning a set of choices or options. Should I invest in stocks or bonds? Should a new product be marketed? Are my products being

More information

Composite Hypotheses. Topic Partitioning the Parameter Space The Power Function

Composite Hypotheses. Topic Partitioning the Parameter Space The Power Function Toc 18 Simple hypotheses limit us to a decision between one of two possible states of nature. This limitation does not allow us, under the procedures of hypothesis testing to address the basic question:

More information

Preliminary Statistics Lecture 5: Hypothesis Testing (Outline)

Preliminary Statistics Lecture 5: Hypothesis Testing (Outline) 1 School of Oriental and African Studies September 2015 Department of Economics Preliminary Statistics Lecture 5: Hypothesis Testing (Outline) Gujarati D. Basic Econometrics, Appendix A.8 Barrow M. Statistics

More information

Quantitative Methods for Economics, Finance and Management (A86050 F86050)

Quantitative Methods for Economics, Finance and Management (A86050 F86050) Quantitative Methods for Economics, Finance and Management (A86050 F86050) Matteo Manera matteo.manera@unimib.it Marzio Galeotti marzio.galeotti@unimi.it 1 This material is taken and adapted from Guy Judge

More information

Lecture 21. Hypothesis Testing II

Lecture 21. Hypothesis Testing II Lecture 21. Hypothesis Testing II December 7, 2011 In the previous lecture, we dened a few key concepts of hypothesis testing and introduced the framework for parametric hypothesis testing. In the parametric

More information

Topic 17: Simple Hypotheses

Topic 17: Simple Hypotheses Topic 17: November, 2011 1 Overview and Terminology Statistical hypothesis testing is designed to address the question: Do the data provide sufficient evidence to conclude that we must depart from our

More information

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology Singapore University of Design and Technology Lecture 9: Hypothesis testing, uniformly most powerful tests. The Neyman-Pearson framework Let P be the family of distributions of concern. The Neyman-Pearson

More information

SUFFICIENT STATISTICS

SUFFICIENT STATISTICS SUFFICIENT STATISTICS. Introduction Let X (X,..., X n ) be a random sample from f θ, where θ Θ is unknown. We are interested using X to estimate θ. In the simple case where X i Bern(p), we found that the

More information

Hypothesis tests

Hypothesis tests 6.1 6.4 Hypothesis tests Prof. Tesler Math 186 February 26, 2014 Prof. Tesler 6.1 6.4 Hypothesis tests Math 186 / February 26, 2014 1 / 41 6.1 6.2 Intro to hypothesis tests and decision rules Hypothesis

More information

Tests and Their Power

Tests and Their Power Tests and Their Power Ling Kiong Doong Department of Mathematics National University of Singapore 1. Introduction In Statistical Inference, the two main areas of study are estimation and testing of hypotheses.

More information

Unobservable Parameter. Observed Random Sample. Calculate Posterior. Choosing Prior. Conjugate prior. population proportion, p prior:

Unobservable Parameter. Observed Random Sample. Calculate Posterior. Choosing Prior. Conjugate prior. population proportion, p prior: Pi Priors Unobservable Parameter population proportion, p prior: π ( p) Conjugate prior π ( p) ~ Beta( a, b) same PDF family exponential family only Posterior π ( p y) ~ Beta( a + y, b + n y) Observed

More information

Chapters 10. Hypothesis Testing

Chapters 10. Hypothesis Testing Chapters 10. Hypothesis Testing Some examples of hypothesis testing 1. Toss a coin 100 times and get 62 heads. Is this coin a fair coin? 2. Is the new treatment on blood pressure more effective than the

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

14.30 Introduction to Statistical Methods in Economics Spring 2009

14.30 Introduction to Statistical Methods in Economics Spring 2009 MIT OpenCourseWare http://ocw.mit.edu.30 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. .30

More information

Chapters 10. Hypothesis Testing

Chapters 10. Hypothesis Testing Chapters 10. Hypothesis Testing Some examples of hypothesis testing 1. Toss a coin 100 times and get 62 heads. Is this coin a fair coin? 2. Is the new treatment more effective than the old one? 3. Quality

More information

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45 Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS 21 June 2010 9:45 11:45 Answer any FOUR of the questions. University-approved

More information

ECO220Y Review and Introduction to Hypothesis Testing Readings: Chapter 12

ECO220Y Review and Introduction to Hypothesis Testing Readings: Chapter 12 ECO220Y Review and Introduction to Hypothesis Testing Readings: Chapter 12 Winter 2012 Lecture 13 (Winter 2011) Estimation Lecture 13 1 / 33 Review of Main Concepts Sampling Distribution of Sample Mean

More information

Introductory Econometrics

Introductory Econometrics Session 4 - Testing hypotheses Roland Sciences Po July 2011 Motivation After estimation, delivering information involves testing hypotheses Did this drug had any effect on the survival rate? Is this drug

More information

Frequentist Statistics and Hypothesis Testing Spring

Frequentist Statistics and Hypothesis Testing Spring Frequentist Statistics and Hypothesis Testing 18.05 Spring 2018 http://xkcd.com/539/ Agenda Introduction to the frequentist way of life. What is a statistic? NHST ingredients; rejection regions Simple

More information

Hypothesis Testing. Part I. James J. Heckman University of Chicago. Econ 312 This draft, April 20, 2006

Hypothesis Testing. Part I. James J. Heckman University of Chicago. Econ 312 This draft, April 20, 2006 Hypothesis Testing Part I James J. Heckman University of Chicago Econ 312 This draft, April 20, 2006 1 1 A Brief Review of Hypothesis Testing and Its Uses values and pure significance tests (R.A. Fisher)

More information

VTU Edusat Programme 16

VTU Edusat Programme 16 VTU Edusat Programme 16 Subject : Engineering Mathematics Sub Code: 10MAT41 UNIT 8: Sampling Theory Dr. K.S.Basavarajappa Professor & Head Department of Mathematics Bapuji Institute of Engineering and

More information

1; (f) H 0 : = 55 db, H 1 : < 55.

1; (f) H 0 : = 55 db, H 1 : < 55. Reference: Chapter 8 of J. L. Devore s 8 th Edition By S. Maghsoodloo TESTING a STATISTICAL HYPOTHESIS A statistical hypothesis is an assumption about the frequency function(s) (i.e., pmf or pdf) of one

More information

Statistical Preliminaries. Stony Brook University CSE545, Fall 2016

Statistical Preliminaries. Stony Brook University CSE545, Fall 2016 Statistical Preliminaries Stony Brook University CSE545, Fall 2016 Random Variables X: A mapping from Ω to R that describes the question we care about in practice. 2 Random Variables X: A mapping from

More information

MTMS Mathematical Statistics

MTMS Mathematical Statistics MTMS.01.099 Mathematical Statistics Lecture 12. Hypothesis testing. Power function. Approximation of Normal distribution and application to Binomial distribution Tõnu Kollo Fall 2016 Hypothesis Testing

More information

Performance Evaluation and Comparison

Performance Evaluation and Comparison Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Cross Validation and Resampling 3 Interval Estimation

More information

Topic 19 Extensions on the Likelihood Ratio

Topic 19 Extensions on the Likelihood Ratio Topic 19 Extensions on the Likelihood Ratio Two-Sided Tests 1 / 12 Outline Overview Normal Observations Power Analysis 2 / 12 Overview The likelihood ratio test is a popular choice for composite hypothesis

More information

Quality Control Using Inferential Statistics In Weibull Based Reliability Analyses S. F. Duffy 1 and A. Parikh 2

Quality Control Using Inferential Statistics In Weibull Based Reliability Analyses S. F. Duffy 1 and A. Parikh 2 Quality Control Using Inferential Statistics In Weibull Based Reliability Analyses S. F. Duffy 1 and A. Parikh 2 1 Cleveland State University 2 N & R Engineering www.inl.gov ASTM Symposium on Graphite

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants 18.650 Statistics for Applications Chapter 5: Parametric hypothesis testing 1/37 Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009

More information

Problems ( ) 1 exp. 2. n! e λ and

Problems ( ) 1 exp. 2. n! e λ and Problems The expressions for the probability mass function of the Poisson(λ) distribution, and the density function of the Normal distribution with mean µ and variance σ 2, may be useful: ( ) 1 exp. 2πσ

More information

hypothesis testing 1

hypothesis testing 1 hypothesis testing 1 Does smoking cause cancer? competing hypotheses (a) No; we don t know what causes cancer, but smokers are no more likely to get it than nonsmokers (b) Yes; a much greater % of smokers

More information

18.05 Practice Final Exam

18.05 Practice Final Exam No calculators. 18.05 Practice Final Exam Number of problems 16 concept questions, 16 problems. Simplifying expressions Unless asked to explicitly, you don t need to simplify complicated expressions. For

More information

Null Hypothesis Significance Testing p-values, significance level, power, t-tests Spring 2017

Null Hypothesis Significance Testing p-values, significance level, power, t-tests Spring 2017 Null Hypothesis Significance Testing p-values, significance level, power, t-tests 18.05 Spring 2017 Understand this figure f(x H 0 ) x reject H 0 don t reject H 0 reject H 0 x = test statistic f (x H 0

More information

Lecture 4: Parameter Es/ma/on and Confidence Intervals. GENOME 560, Spring 2015 Doug Fowler, GS

Lecture 4: Parameter Es/ma/on and Confidence Intervals. GENOME 560, Spring 2015 Doug Fowler, GS Lecture 4: Parameter Es/ma/on and Confidence Intervals GENOME 560, Spring 2015 Doug Fowler, GS (dfowler@uw.edu) 1 Review: Probability DistribuIons Discrete: Binomial distribuion Hypergeometric distribuion

More information

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper McGill University Faculty of Science Department of Mathematics and Statistics Part A Examination Statistics: Theory Paper Date: 10th May 2015 Instructions Time: 1pm-5pm Answer only two questions from Section

More information

Statistical Inference

Statistical Inference Statistical Inference Classical and Bayesian Methods Class 6 AMS-UCSC Thu 26, 2012 Winter 2012. Session 1 (Class 6) AMS-132/206 Thu 26, 2012 1 / 15 Topics Topics We will talk about... 1 Hypothesis testing

More information

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,

More information

Introduction to Statistical Inference

Introduction to Statistical Inference Introduction to Statistical Inference Dr. Fatima Sanchez-Cabo f.sanchezcabo@tugraz.at http://www.genome.tugraz.at Institute for Genomics and Bioinformatics, Graz University of Technology, Austria Introduction

More information

Master s Written Examination - Solution

Master s Written Examination - Solution Master s Written Examination - Solution Spring 204 Problem Stat 40 Suppose X and X 2 have the joint pdf f X,X 2 (x, x 2 ) = 2e (x +x 2 ), 0 < x < x 2

More information

Lecture 7: Hypothesis Testing and ANOVA

Lecture 7: Hypothesis Testing and ANOVA Lecture 7: Hypothesis Testing and ANOVA Goals Overview of key elements of hypothesis testing Review of common one and two sample tests Introduction to ANOVA Hypothesis Testing The intent of hypothesis

More information

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire

More information

Hypothesis Testing. ECE 3530 Spring Antonio Paiva

Hypothesis Testing. ECE 3530 Spring Antonio Paiva Hypothesis Testing ECE 3530 Spring 2010 Antonio Paiva What is hypothesis testing? A statistical hypothesis is an assertion or conjecture concerning one or more populations. To prove that a hypothesis is

More information

6 The normal distribution, the central limit theorem and random samples

6 The normal distribution, the central limit theorem and random samples 6 The normal distribution, the central limit theorem and random samples 6.1 The normal distribution We mentioned the normal (or Gaussian) distribution in Chapter 4. It has density f X (x) = 1 σ 1 2π e

More information

Central Limit Theorem ( 5.3)

Central Limit Theorem ( 5.3) Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately

More information

Probability and Statistics Notes

Probability and Statistics Notes Probability and Statistics Notes Chapter Seven Jesse Crawford Department of Mathematics Tarleton State University Spring 2011 (Tarleton State University) Chapter Seven Notes Spring 2011 1 / 42 Outline

More information

Linear Models: Comparing Variables. Stony Brook University CSE545, Fall 2017

Linear Models: Comparing Variables. Stony Brook University CSE545, Fall 2017 Linear Models: Comparing Variables Stony Brook University CSE545, Fall 2017 Statistical Preliminaries Random Variables Random Variables X: A mapping from Ω to ℝ that describes the question we care about

More information

INTERVAL ESTIMATION AND HYPOTHESES TESTING

INTERVAL ESTIMATION AND HYPOTHESES TESTING INTERVAL ESTIMATION AND HYPOTHESES TESTING 1. IDEA An interval rather than a point estimate is often of interest. Confidence intervals are thus important in empirical work. To construct interval estimates,

More information

HYPOTHESIS TESTING: FREQUENTIST APPROACH.

HYPOTHESIS TESTING: FREQUENTIST APPROACH. HYPOTHESIS TESTING: FREQUENTIST APPROACH. These notes summarize the lectures on (the frequentist approach to) hypothesis testing. You should be familiar with the standard hypothesis testing from previous

More information

Stat 135, Fall 2006 A. Adhikari HOMEWORK 6 SOLUTIONS

Stat 135, Fall 2006 A. Adhikari HOMEWORK 6 SOLUTIONS Stat 135, Fall 2006 A. Adhikari HOMEWORK 6 SOLUTIONS 1a. Under the null hypothesis X has the binomial (100,.5) distribution with E(X) = 50 and SE(X) = 5. So P ( X 50 > 10) is (approximately) two tails

More information

STAT 801: Mathematical Statistics. Hypothesis Testing

STAT 801: Mathematical Statistics. Hypothesis Testing STAT 801: Mathematical Statistics Hypothesis Testing Hypothesis testing: a statistical problem where you must choose, on the basis o data X, between two alternatives. We ormalize this as the problem o

More information

Homework for 1/13 Due 1/22

Homework for 1/13 Due 1/22 Name: ID: Homework for 1/13 Due 1/22 1. [ 5-23] An irregularly shaped object of unknown area A is located in the unit square 0 x 1, 0 y 1. Consider a random point distributed uniformly over the square;

More information