8: Hypothesis Testing
- Pearl Sharp
1 Some definitions

1.1 Simple, compound, null and alternative hypotheses

In test theory one distinguishes between simple hypotheses and compound hypotheses. A simple hypothesis is a hypothesis that completely specifies the probability distribution. Examples:

- The parameter of this binomial distribution is p = 0.6.
- This distribution is a normal one of mean µ = 4.5 and standard deviation σ = 0.23.
- The new treatment gives identical results to the previous one.

A compound hypothesis does not completely specify the distribution. Examples:

- The parameter p of this binomial distribution is greater than 0.6.
- These two distributions of common variance have the same mean.
- The new treatment gives better results than the previous one.

Often one has to consider the alternative to the proposed hypothesis. For example, if the parameter of this binomial law is not 0.6, does this just mean that the data are not distributed according to p = 0.6, or more specifically that p = 0.7, that p < 0.6, or that the distribution is not binomial at all? There is an asymmetry between the hypotheses one aims at checking:

- The default hypothesis, traditionally noted H_0, is called the null hypothesis.
- The alternative hypothesis is noted H_1.

1.2 Type I and Type II Errors

Two types of errors can be made: a Type I error happens when the null hypothesis H_0 is rejected though it should have been accepted, and a Type II error occurs when the alternative hypothesis H_1 is rejected though it should have been accepted, or, equivalently, when the null hypothesis H_0 is accepted though it should have been rejected. An example can be seen in a criminal trial, in a democratic state where the rule is "innocent until proven guilty". The null hypothesis is then "he is innocent", the alternative one is "he is guilty"; a Type I error consists of condemning an innocent person, and a Type II error consists of letting a guilty one go free.
A test is a procedure which divides the space of observations into two regions, R (rejection) and A (acceptance). The two important characteristics of a test are called significance and power, which refer to errors of Type I and Type II respectively:

    Significance = α = Prob(x ∈ R | H_0) = ∫_R Prob(x|H_0) dx = 1 − ∫_A Prob(x|H_0) dx

    Power = 1 − β,   where   β = Prob(x ∈ A | H_1) = ∫_A Prob(x|H_1) dx = 1 − ∫_R Prob(x|H_1) dx
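As a concrete illustration of these two quantities, consider a discrete observable and a given rejection region R. All numbers below are made up for the sketch; they are not taken from the text.

```python
# Illustrative sketch: a discrete observable x in {0,...,4}, with assumed
# (made-up) distributions under two simple hypotheses.
p0 = {0: 0.35, 1: 0.30, 2: 0.20, 3: 0.10, 4: 0.05}   # Prob(x | H0)
p1 = {0: 0.05, 1: 0.10, 2: 0.20, 3: 0.30, 4: 0.35}   # Prob(x | H1)

# Choose a rejection region R; its complement is the acceptance region A.
R = {3, 4}

alpha = sum(p0[x] for x in R)                  # significance: Prob(x in R | H0)
beta = sum(p1[x] for x in p1 if x not in R)    # Type II error: Prob(x in A | H1)
power = 1 - beta                               # Prob(x in R | H1)

print(alpha, beta, power)
```

Enlarging R increases the power but also the significance, which is the trade-off discussed next.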
The determination of a test is usually a trade-off between α and β. One commonly encountered procedure is to set a priori the significance to a fixed value (α = 0.01, 0.05, ...) and find the most powerful test. To make β as small as possible for a given α, the integral over the chosen rejection region R,

    ∫_R Prob(x|H_1) dx = 1 − β,

must be as large as possible, for a given

    ∫_R Prob(x|H_0) dx = α.

In the case where data consist of one measurement, say x, the choice of α sets β through the cut x_c defining the test. In other cases, different tests correspond to the same given α.

It should be noted that in the following nothing is known about the a priori probability, if such a thing exists, of the hypothesis H_0 with respect to that of H_1. For example, if we are dealing with the one-by-one identification of two types of cell in a test-tube, the formalism makes no use of their relative concentration.

Back to our example of a trial, the procedure is: given a priori a (low) risk of condemning an innocent person, what is the most powerful method to convict guilty people? However, in a non-democratic state, where the null hypothesis is "he is guilty", the procedure would be: given a priori a low risk of releasing a real culprit, what is the most powerful method to prove one's innocence? This shows how asymmetric H_0 and H_1 are.

2 The Neyman-Pearson Test

The Neyman-Pearson test applies to the case of a simple null hypothesis against a simple alternative hypothesis. The rejection region is determined by the following theorem: for a given α, the most powerful test rejects H_0 in a region such that

    Prob(x|H_1) / Prob(x|H_0) > k

Let us first give a rigorous proof, then a more intuitive one.

Rigorous proof. Let R be the rejection region, defined by Prob(x|H_1) / Prob(x|H_0) > k. By definition of the significance α, we have Prob(x ∈ R | H_0) = α. Let there be another test of significance α′, with rejection region S, such that α′ ≤ α. We want to show that this new test is less powerful, i.e.
that

    Prob(x ∈ S | H_1) < Prob(x ∈ R | H_1)

One has (S̄ denoting the complement of S, and R̄ that of R):

    α  = Prob(x ∈ R∩S | H_0) + Prob(x ∈ R∩S̄ | H_0)
    α′ = Prob(x ∈ R∩S | H_0) + Prob(x ∈ S∩R̄ | H_0)

    α′ ≤ α   ⟹   Prob(x ∈ S∩R̄ | H_0) ≤ Prob(x ∈ R∩S̄ | H_0)

R is defined by

    Prob(x|H_1) / Prob(x|H_0) > k

For any region I inside R,

    Prob(x ∈ I | H_1) > k Prob(x ∈ I | H_0)

and for any region O outside R,

    Prob(x ∈ O | H_1) ≤ k Prob(x ∈ O | H_0)

Thus
    k Prob(x ∈ R∩S̄ | H_0) < Prob(x ∈ R∩S̄ | H_1)      (R∩S̄ lies inside R)
    Prob(x ∈ S∩R̄ | H_1) ≤ k Prob(x ∈ S∩R̄ | H_0)      (S∩R̄ lies outside R)

Combining these with the inequality Prob(x ∈ S∩R̄ | H_0) ≤ Prob(x ∈ R∩S̄ | H_0) obtained above:

    Prob(x ∈ S∩R̄ | H_1) ≤ k Prob(x ∈ S∩R̄ | H_0) ≤ k Prob(x ∈ R∩S̄ | H_0) < Prob(x ∈ R∩S̄ | H_1)

hence

    Prob(x ∈ S∩R̄ | H_1) < Prob(x ∈ R∩S̄ | H_1)

Adding to both sides the quantity Prob(x ∈ R∩S | H_1), one gets the final result:

    Prob(x ∈ S | H_1) < Prob(x ∈ R | H_1)

The requirement that H_0 and H_1 be simple is essential, in order to be able to write expressions involving probabilities like Prob(x ∈ S | H_1), etc.

A more intuitive proof can be given as follows: assume we have defined an acceptance region A according to the theorem. Let us further assume we want to slightly modify the test, keeping the same significance α; this is achieved by adding to the rejection region a small region Δ_2 taken from A, and removing from it a region Δ_1, of equal weight in terms of H_0: Prob(x ∈ Δ_1 | H_0) = Prob(x ∈ Δ_2 | H_0). If we want to increase the power of the test, we must fulfil the condition

    Prob(x ∈ Δ_2 | H_1) > Prob(x ∈ Δ_1 | H_1)

which is, by construction of A, impossible.

3 The Neyman-Pearson Theorem at work

3.1 First Example: Gaussian distributions

Let there be n observations from a Gaussian distribution of unknown mean µ but known variance σ². µ can be either µ_0 (null hypothesis) or µ_1 (alternative hypothesis); note that both hypotheses are simple ones. Let us assume that µ_1 > µ_0. The likelihood of the observations is

    Prob(x|µ) = Π_i (1/(√(2π) σ)) e^{−(x_i − µ)²/(2σ²)}
              = (√(2π))^{−n} σ^{−n} e^{−Σ_i (x_i − µ)²/(2σ²)}
              = (√(2π))^{−n} σ^{−n} e^{−nµ²/(2σ²)} e^{nµx̄/σ²} e^{−Σ_i x_i²/(2σ²)}

where x̄ = (1/n) Σ_i x_i. In the framework of the Neyman-Pearson test, the ratio of the likelihoods reads

    Prob(x|µ_1) / Prob(x|µ_0) = K e^{n(µ_1 − µ_0) x̄ / σ²}

where K is a constant which does not depend on the observations. The Neyman-Pearson theorem tells us to reject H_0 if

    e^{n(µ_1 − µ_0) x̄ / σ²} > k

where k is here the generic name for a constant. Since µ_1 − µ_0 > 0, the test will reject the hypothesis µ = µ_0 if x̄ > µ_c. The value of µ_c is determined by the equation

    Prob(x̄ > µ_c | H_0) = α
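The determination of µ_c, and the power that results from it, can be sketched numerically with the standard library only. The values µ_0 = 0, µ_1 = 1, σ = 1, n = 9, α = 0.05 below are illustrative assumptions, not taken from the text.

```python
from statistics import NormalDist

# Illustrative values (assumptions, not from the text).
mu0, mu1, sigma, n, alpha = 0.0, 1.0, 1.0, 9, 0.05
se = sigma / n ** 0.5                   # standard error of the sample mean

# Under H0 the sample mean xbar is N(mu0, se); reject H0 when xbar > mu_c,
# with mu_c fixed by Prob(xbar > mu_c | H0) = alpha.
mu_c = NormalDist(mu0, se).inv_cdf(1 - alpha)   # = mu0 + 1.645 * se for alpha = 0.05

# Resulting Type II error and power under H1.
beta = NormalDist(mu1, se).cdf(mu_c)
power = 1 - beta

print(round(mu_c, 3), round(power, 3))
```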
As an example, if we set α = 0.05, this corresponds to

    µ_c = µ_0 + 1.645 σ/√n

If we now test H_1 against H_0, we get the result that one has to reject H_1 if x̄ < µ_d, with

    Prob(x̄ < µ_d | H_1) = α,   i.e.   µ_d = µ_1 − 1.645 σ/√n   for α = 0.05

These striking results show how asymmetric are the roles played by the null and the alternative hypotheses.

3.2 Second Example: Binomial distributions

We have performed n observations from a binomial law and got r successes. The null hypothesis is "the parameter of this distribution is p = p_0", and the alternative one is p = p_1, with p_1 > p_0. In the Neyman-Pearson formalism, the ratio of likelihoods is

    Prob(r|H_1) / Prob(r|H_0) = (p_1/p_0)^r ((1−p_1)/(1−p_0))^{n−r}
                              = ((1−p_1)/(1−p_0))^n [ (p_1/(1−p_1)) / (p_0/(1−p_0)) ]^r

so that

    Prob(r|H_1) / Prob(r|H_0) > k   ⟺   r log[ (p_1/(1−p_1)) / (p_0/(1−p_0)) ] > k′   ⟺   r > r_c

since the logarithm is positive for p_1 > p_0.

A numerical example will introduce the concept of randomized tests: assume N = 10, p_0 = 0.5, p_1 = 0.6. Let us set the significance to α = 0.05. We are looking for r_c such that Prob(r > r_c | p = 0.5) = 0.05. Looking at the tables shows that

    Prob(r > 7 | p = 0.5) = 0.0547
    Prob(r > 8 | p = 0.5) = 0.0107

Two options are open:

- change the significance of the test to α = 0.0547 by rejecting if r > 7;
- decide that in the case r = 8, H_0 is rejected with a probability γ such that

    γ Prob(r = 8 | p = 0.5) + Prob(r > 8 | p = 0.5) = 0.05

One finds γ = 0.89. In other words, chance will help in deciding.

Let us now consider a slightly modified test: we set a priori r to a given value, and perform experiments until we get r successes, n being the (random) number of experiments needed. The formulae read:

    Prob(n) = C(n−1, r−1) p^r (1−p)^{n−r}

    Prob(n | p = p_1) / Prob(n | p = p_0) = (p_1/p_0)^r ((1−p_0)/(1−p_1))^r ((1−p_1)/(1−p_0))^n

    Prob(n | p = p_1) / Prob(n | p = p_0) > k   ⟺   ((1−p_1)/(1−p_0))^n > k′
where the last equivalence comes from the fact that this is the only term which depends on n. This leads to rejecting the hypothesis p = p_0 if n < n_c, since (1−p_1)/(1−p_0) < 1. One can see, comparing with the first method described at the beginning of this section, that the same pair of results (n, r) can lead to a different conclusion, depending on the way it was obtained. This is the classical criticism addressed by Bayesian statisticians to the Neyman-Pearson theory.

4 Uniformly Most Powerful Tests

4.1 Introduction

In the previous two sections we have discussed test theory in the case of a simple null hypothesis against a simple alternative hypothesis. We have seen that the Neyman-Pearson theorem gives the framework for finding the most powerful test, given a priori the significance. In this section we will extend these results to compound hypotheses, restricting ourselves to a specific class of distributions: the exponential family.

4.2 Distributions of the exponential family

The exponential family comprises all distributions which can be generically written:

    Prob(x, θ) = C(θ) h(x) e^{θ T(x)}

Let us start with the particular case of the exponential distribution

    Prob(x, µ) = (1/µ) e^{−x/µ}

and let us again test the null hypothesis µ = µ_0 against the alternative µ = µ_1, with µ_1 > µ_0. The ratio of likelihoods can be written

    Prob(x | µ = µ_1) / Prob(x | µ = µ_0) = (µ_0/µ_1)^n e^{Σx_i/µ_0 − Σx_i/µ_1}

so that

    Prob(x | µ = µ_1) / Prob(x | µ = µ_0) > k   ⟺   Σx_i/µ_0 − Σx_i/µ_1 > k′   ⟺   (1/n) Σx_i > x_c

where x_c is defined by Prob((1/n) Σx_i > x_c | H_0) = α.

In the general case, the result is basically the same:

    Prob(x | θ = θ_1) / Prob(x | θ = θ_0) = (C(θ_1)/C(θ_0))^n e^{θ_1 ΣT(x_i)} / e^{θ_0 ΣT(x_i)}

    Prob(x | θ = θ_1) / Prob(x | θ = θ_0) > k   ⟺   (θ_1 − θ_0) ΣT(x_i) > k′

As a consequence, for θ_1 > θ_0 the rejection region is

    (1/n) ΣT(x_i) > T_c
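For the exponential example, the critical value x_c can be computed without numerical libraries: under H_0 the sum S = Σx_i of n exponential variables follows an Erlang distribution, whose survival function has a closed form for integer n. The sketch below assumes the illustrative values n = 20, µ_0 = 1, α = 0.05, which are not taken from the text.

```python
from math import exp, factorial

# Illustrative values (assumptions, not from the text).
n, mu0, alpha = 20, 1.0, 0.05

def surv(s):
    """Prob(S > s) for S = sum of n exponentials of mean mu0 (Erlang)."""
    t = s / mu0
    return exp(-t) * sum(t**k / factorial(k) for k in range(n))

# The test rejects H0 when the sample mean exceeds x_c; find x_c by bisection
# on Prob(S > n * x_c | H0) = alpha (surv is decreasing in its argument).
lo, hi = 0.0, 10.0 * mu0
for _ in range(100):
    mid = (lo + hi) / 2
    if surv(n * mid) > alpha:
        lo = mid
    else:
        hi = mid
x_c = (lo + hi) / 2

print(round(x_c, 3))
```

Equivalently, 2S/µ_0 is a χ² variable with 2n degrees of freedom, so x_c can also be read off a χ² table.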
4.3 Simple null hypothesis against compound unilateral alternative hypothesis

Let us now consider the test of the simple hypothesis H_0: θ = θ_0 against the compound unilateral alternative H_1: θ > θ_0. We just saw that the test of H_0 against any simple alternative H_1: θ = θ_1 > θ_0 does not depend at all on the value of θ_1. This test is therefore the most powerful one for each of the simple hypotheses composing H_1. It is said to be uniformly most powerful.

4.4 Simple null hypothesis against compound bilateral alternative hypothesis

We now want to test the simple hypothesis H_0: θ = θ_0 against the alternative H_1: θ ≠ θ_0. The situation becomes more complex. One possibility is to adopt a reasonable solution: reject H_0 if ΣT(x_i) < k_1 or if ΣT(x_i) > k_2, sharing the risk of Type I errors:

    Prob(ΣT(x_i) < k_1 | H_0) = α/2
    Prob(ΣT(x_i) > k_2 | H_0) = α/2

5 Compound null hypothesis against simple alternative

In this section we will show a generalization of the Neyman-Pearson theorem to the case of a compound null hypothesis against a simple alternative. H_0 is the union of a (possibly infinite) number of simple hypotheses H_0^(1), H_0^(2), ... Any test of H_0 against H_1 will always result in splitting the set of observations into two regions, the acceptance region A and the rejection region R. In such a situation one can generally not set a priori the risk of a Type I error to a given value α. At best, one can set the conditions

    Prob(x ∈ R | H_0^(i)) ≤ α for every H_0^(i),   with   Prob(x ∈ R | H_1) maximum

The generalization of the Neyman-Pearson theorem reads: given H_0 compound of H_0^(1), H_0^(2), ..., H_0^(n), the most powerful test of H_0 against the simple hypothesis H_1 rejects if

    Prob(x|H_1) > k^(1) Prob(x|H_0^(1)) + k^(2) Prob(x|H_0^(2)) + ... + k^(n) Prob(x|H_0^(n))

the k^(i) being chosen to fulfil the conditions

    Prob(x ∈ R | H_0^(i)) ≤ α,   i = 1, ..., n

Example: let x_1, ..., x_n be a sample from a Gaussian distribution N(µ, 1); one wishes to test H_0: µ = µ_0 or µ = µ_0′ against H_1: µ = µ_1. Let us assume µ_0 < µ_0′ < µ_1.
The extended Neyman-Pearson theorem yields (writing k_1, k_2 for the two constants):

    e^{nµ_1 x̄} > k_1 e^{nµ_0 x̄} + k_2 e^{nµ_0′ x̄}
    ⟺   e^{n(µ_1 − µ_0) x̄} > k_1 + k_2 e^{n(µ_0′ − µ_0) x̄}

The problem is now to determine k_1 and k_2. They cannot both be negative, otherwise we would always reject H_0. Let us assume that k_1 is positive and k_2 either positive or negative. In such a case, the test rejects if x̄ > x_c, which was a predictable result. Because of the properties of the exponential distribution,

    Prob(x̄ > x_c | µ_0′) > Prob(x̄ > x_c | µ_0)
In order to limit the risk of a Type I error to α, one has to choose x_c so that Prob(x̄ > x_c | µ_0′) = α, since µ_0′ gives the larger of the two rejection probabilities.

There remains the case of k_1 negative and k_2 positive. Depending on the value of k_2, this would lead either to always rejecting H_0, or to accepting it only inside a given interval, which is impossible. Conclusion: k_1, k_2 > 0; the test is: reject H_0 if x̄ > x_c, with x_c such that Prob(x̄ > x_c | µ_0′) = α.

6 A last word

"Absence of evidence is not evidence of absence."
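As a final numerical check, the tail probabilities and the randomization fraction γ of the binomial example of section 3.2 (N = 10, p_0 = 0.5, α = 0.05) can be recomputed exactly with the standard library:

```python
from math import comb

n, p, alpha = 10, 0.5, 0.05

def tail(r_min):
    """Prob(r >= r_min) for a binomial(n, p)."""
    return sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(r_min, n + 1))

p_gt7 = tail(8)              # Prob(r > 7 | p = 0.5) = 56/1024, about 0.0547
p_gt8 = tail(9)              # Prob(r > 8 | p = 0.5) = 11/1024, about 0.0107
p_eq8 = comb(n, 8) * p**8 * (1 - p) ** 2

# Randomize at r = 8 so that the overall significance is exactly alpha:
# gamma * Prob(r = 8) + Prob(r > 8) = alpha.
gamma = (alpha - p_gt8) / p_eq8

print(round(p_gt7, 4), round(p_gt8, 4), round(gamma, 2))
```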
More informationPrimer on statistics:
Primer on statistics: MLE, Confidence Intervals, and Hypothesis Testing ryan.reece@gmail.com http://rreece.github.io/ Insight Data Science - AI Fellows Workshop Feb 16, 018 Outline 1. Maximum likelihood
More informationLectures 5 & 6: Hypothesis Testing
Lectures 5 & 6: Hypothesis Testing in which you learn to apply the concept of statistical significance to OLS estimates, learn the concept of t values, how to use them in regression work and come across
More informationParameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn
Parameter estimation and forecasting Cristiano Porciani AIfA, Uni-Bonn Questions? C. Porciani Estimation & forecasting 2 Temperature fluctuations Variance at multipole l (angle ~180o/l) C. Porciani Estimation
More informationCh. 5 Hypothesis Testing
Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,
More informationLecture notes on statistical decision theory Econ 2110, fall 2013
Lecture notes on statistical decision theory Econ 2110, fall 2013 Maximilian Kasy March 10, 2014 These lecture notes are roughly based on Robert, C. (2007). The Bayesian choice: from decision-theoretic
More informationECE531 Lecture 4b: Composite Hypothesis Testing
ECE531 Lecture 4b: Composite Hypothesis Testing D. Richard Brown III Worcester Polytechnic Institute 16-February-2011 Worcester Polytechnic Institute D. Richard Brown III 16-February-2011 1 / 44 Introduction
More informationStatistical Inference: Uses, Abuses, and Misconceptions
Statistical Inference: Uses, Abuses, and Misconceptions Michael W. Trosset Indiana Statistical Consulting Center Department of Statistics ISCC is part of IU s Department of Statistics, chaired by Stanley
More informationSummary: the confidence interval for the mean (σ 2 known) with gaussian assumption
Summary: the confidence interval for the mean (σ known) with gaussian assumption on X Let X be a Gaussian r.v. with mean µ and variance σ. If X 1, X,..., X n is a random sample drawn from X then the confidence
More informationUnbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.
Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it
More informationUnit 19 Formulating Hypotheses and Making Decisions
Unit 19 Formulating Hypotheses and Making Decisions Objectives: To formulate a null hypothesis and an alternative hypothesis, and to choose a significance level To identify the Type I error and the Type
More informationHypothesis Testing The basic ingredients of a hypothesis test are
Hypothesis Testing The basic ingredients of a hypothesis test are 1 the null hypothesis, denoted as H o 2 the alternative hypothesis, denoted as H a 3 the test statistic 4 the data 5 the conclusion. The
More informationStatistical Preliminaries. Stony Brook University CSE545, Fall 2016
Statistical Preliminaries Stony Brook University CSE545, Fall 2016 Random Variables X: A mapping from Ω to R that describes the question we care about in practice. 2 Random Variables X: A mapping from
More informationCONTINUOUS RANDOM VARIABLES
the Further Mathematics network www.fmnetwork.org.uk V 07 REVISION SHEET STATISTICS (AQA) CONTINUOUS RANDOM VARIABLES The main ideas are: Properties of Continuous Random Variables Mean, Median and Mode
More informationSection 5.4: Hypothesis testing for μ
Section 5.4: Hypothesis testing for μ Possible claims or hypotheses: Ball bearings have μ = 1 cm Medicine decreases blood pressure For testing hypotheses, we set up a null (H 0 ) and alternative (H a )
More informationStatistical Inference
Statistical Inference Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Week 12. Testing and Kullback-Leibler Divergence 1. Likelihood Ratios Let 1, 2, 2,...
More informationLecture 21. Hypothesis Testing II
Lecture 21. Hypothesis Testing II December 7, 2011 In the previous lecture, we dened a few key concepts of hypothesis testing and introduced the framework for parametric hypothesis testing. In the parametric
More informationParameter Estimation, Sampling Distributions & Hypothesis Testing
Parameter Estimation, Sampling Distributions & Hypothesis Testing Parameter Estimation & Hypothesis Testing In doing research, we are usually interested in some feature of a population distribution (which
More informationBayesian Learning (II)
Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen Bayesian Learning (II) Niels Landwehr Overview Probabilities, expected values, variance Basic concepts of Bayesian learning MAP
More informationStatistical Inference. Hypothesis Testing
Statistical Inference Hypothesis Testing Previously, we introduced the point and interval estimation of an unknown parameter(s), say µ and σ 2. However, in practice, the problem confronting the scientist
More informationSTA 732: Inference. Notes 2. Neyman-Pearsonian Classical Hypothesis Testing B&D 4
STA 73: Inference Notes. Neyman-Pearsonian Classical Hypothesis Testing B&D 4 1 Testing as a rule Fisher s quantification of extremeness of observed evidence clearly lacked rigorous mathematical interpretation.
More informationEstimating the accuracy of a hypothesis Setting. Assume a binary classification setting
Estimating the accuracy of a hypothesis Setting Assume a binary classification setting Assume input/output pairs (x, y) are sampled from an unknown probability distribution D = p(x, y) Train a binary classifier
More informationDerivation of Monotone Likelihood Ratio Using Two Sided Uniformly Normal Distribution Techniques
Vol:7, No:0, 203 Derivation of Monotone Likelihood Ratio Using Two Sided Uniformly Normal Distribution Techniques D. A. Farinde International Science Index, Mathematical and Computational Sciences Vol:7,
More informationWooldridge, Introductory Econometrics, 4th ed. Appendix C: Fundamentals of mathematical statistics
Wooldridge, Introductory Econometrics, 4th ed. Appendix C: Fundamentals of mathematical statistics A short review of the principles of mathematical statistics (or, what you should have learned in EC 151).
More informationPermutation Tests. Noa Haas Statistics M.Sc. Seminar, Spring 2017 Bootstrap and Resampling Methods
Permutation Tests Noa Haas Statistics M.Sc. Seminar, Spring 2017 Bootstrap and Resampling Methods The Two-Sample Problem We observe two independent random samples: F z = z 1, z 2,, z n independently of
More informationStatistical Methods for Astronomy
Statistical Methods for Astronomy Probability (Lecture 1) Statistics (Lecture 2) Why do we need statistics? Useful Statistics Definitions Error Analysis Probability distributions Error Propagation Binomial
More information