
BEST TESTS

Abstract. We will discuss the Neyman-Pearson theorem and certain best tests, where the power function is optimized.

1. Most powerful tests

Let $\{f_\theta\}_{\theta \in \Theta}$ be a family of pdfs. We will consider the simple case where $\Theta = \{\theta_0, \theta_1\}$, so that the family contains only two pdfs. Let $\theta \in \Theta$, where $\theta$ is unknown. Consider the following simple hypothesis test with null hypothesis $H_0 : \theta = \theta_0$ and critical region $C$, so that we reject $H_0$ if $X \in C$. Suppose $\alpha = P_{\theta_0}(X \in C)$. We say that the test is a best test, at size $\alpha$, if for any other possible region $A$ with $P_{\theta_0}(X \in A) \le \alpha$, we have $P_{\theta_1}(X \in C) \ge P_{\theta_1}(X \in A)$. Best tests are also called most powerful tests. In these notes we will sometimes write $P_0 = P_{\theta_0}$ and $P_1 = P_{\theta_1}$.

Exercise 1. Suppose that $f_0$ is the pdf of a $N(0, 1)$ random variable and $f_1$ is the pdf of a $N(1, 1)$ random variable. We wish to test the hypothesis $H_0 : \mu = 0$ versus $H_1 : \mu = 1$, and we have only a single sample $X$. Consider the set $C = \{x \in \mathbb{R} : 1 \le x \le 2\}$. Show that, as a critical region, the set $C$ does not correspond to a best test.

Solution. Let $b$ be such that $P_0(X \ge b) = P_0(X \in C)$, and set $A := \{x \in \mathbb{R} : x \ge b\}$; in fact, $P_0(X \in C) \approx 0.136$ and $b \approx 1.10$. We claim that $A$ gives a rejection region with more power. We have that $P_1(X \in C) = P_0(0 \le X \le 1) \approx 0.341$, whereas $P_1(X \in A) = P_0(X \ge b - 1) \approx 0.460$, so $A$ has more power.
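For a quick numerical check of this solution, here is a short Python sketch (assuming scipy is available; the variable names are ours) that computes the common size of $C$ and $A$ under $H_0$ and compares their powers under $H_1$:

```python
# Exercise 1: the one-sided region A = [b, inf) has the same size as
# C = [1, 2] under N(0, 1), but more power under N(1, 1).
from scipy.stats import norm

alpha = norm.cdf(2) - norm.cdf(1)   # size of C under H0: P0(1 <= X <= 2)
b = norm.ppf(1 - alpha)             # chosen so that P0(X >= b) = alpha

power_C = norm.cdf(2, loc=1) - norm.cdf(1, loc=1)   # P1(1 <= X <= 2)
power_A = 1 - norm.cdf(b, loc=1)                    # P1(X >= b)

print(f"size = {alpha:.3f}, b = {b:.3f}")
print(f"power of C = {power_C:.3f}, power of A = {power_A:.3f}")
```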

2. Randomization

Consider the following simple test. We have one sample $X \sim N(\mu, 1)$ and we want to test $H_0 : \mu = 0$ against $H_1 : \mu = 1$. It is easy to define a test with exact size $\alpha$, for any $\alpha \in (0, 1)$; in particular, we even have notation for this: $P_0(X > z_\alpha) = \alpha$, so that we can consider the test $\varphi(x) = \mathbf{1}[x > z_\alpha]$, where we reject $H_0$ if $\varphi(x) = 1$. This is possible since $X$ is a continuous random variable. When $X$ is a discrete random variable, this is no longer possible without additional randomization.

Consider the following randomized test. We have one sample $X \sim \mathrm{Bern}(p)$, and we want to test $H_0 : p = 1/2$ against $H_1 : p = 3/4$. Any non-randomized test based on $X$ alone will reject $H_0$, when $H_0$ is true, with probability $0$, $1/2$, or $1$. However, we can obtain other values of $\alpha$ in the following way. Suppose $\alpha < 1/2$. Let $\varphi(x) = 2\alpha \, \mathbf{1}[x = 1]$, and let $U$ be uniform on $[0, 1]$, independent of $X$. We reject $H_0$ if $U \le \varphi(X)$; in other words, if $X = 1$, we reject $H_0$ with probability $2\alpha$, so that $E_0 \varphi(X) = \alpha$ is the probability that we reject $H_0$ when $H_0$ is true.

Let $X = (X_1, \ldots, X_n)$ be a random sample from $f_\theta$, where $\theta \in \{\theta_0, \theta_1\}$. Consider the hypothesis test of $H_0 : \theta = \theta_0$ with critical function $\varphi$. In the randomized setting, the power function is given by $\beta_\varphi(\theta) = E_\theta \varphi(X)$, and the power of a test is $\beta_\varphi(\theta_1)$. We say that a critical function $\varphi$ defines a best test at level $\alpha$ if $E_0 \varphi(X) \le \alpha$ and, for all critical functions $\varphi'$ with $E_0 \varphi'(X) \le \alpha$, we have $\beta_\varphi(\theta_1) \ge \beta_{\varphi'}(\theta_1)$.

Theorem 2 (Neyman-Pearson). Let $X = (X_1, \ldots, X_n)$ be a random sample from $f_\theta$, where $\theta \in \{\theta_0, \theta_1\}$. Consider the null hypothesis $\theta = \theta_0$, and let $\alpha \in (0, 1)$. Set
\[
R(X) := \frac{L(X; \theta_0)}{L(X; \theta_1)}.
\]
There exist a critical function $\varphi$ and a constant $k > 0$ such that
(a) $E_0 \varphi(X) = \alpha$, and
(b) $\varphi(x) = 1$ when $R(x) < k$, and $\varphi(x) = 0$ when $R(x) > k$.
Moreover, if a critical function satisfies both conditions, then it is a most powerful (randomized) test at level $\alpha$. In addition, if $\varphi'$ is (another) most powerful (randomized) test at level $\alpha$, then it satisfies the second condition, and it also satisfies the first condition, except in the case where there is a test of size $\alpha' < \alpha$ with power $1$.

Note that if $x$ is such that $L(x; \theta_1) = 0$ and $L(x; \theta_0) > 0$, then it does not make any practical sense to reject $H_0$ if $x$ is observed. Similarly, if $L(x; \theta_0) = 0$ and $L(x; \theta_1) > 0$, then we should reject $H_0$ if $x$ is observed.
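Returning to the randomized Bernoulli test above, the rejection rule $U \le \varphi(X)$ is easy to simulate. Here is a minimal Monte Carlo sketch (ours, assuming numpy; the choice $\alpha = 0.1$ and the seed are arbitrary) checking that the test has size $\alpha$ under $H_0 : p = 1/2$:

```python
# Randomized test phi(x) = 2*alpha*1[x == 1]: reject H0 when U <= phi(X),
# with U ~ Uniform[0, 1] independent of X.  Under H0 the size should be alpha.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1                              # any alpha < 1/2
n_sim = 200_000

X = rng.binomial(1, 0.5, size=n_sim)     # samples of X under H0: p = 1/2
U = rng.uniform(size=n_sim)
phi = 2 * alpha * (X == 1)               # critical function evaluated at X
reject = U <= phi                        # randomized rejection decision

print(f"empirical size = {reject.mean():.4f}  (target alpha = {alpha})")
```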

Let us remark that in Theorem 2 we do not specify what happens to $\varphi$ when $R(X) = k$; it is on this event that randomization is necessary, and there $\varphi$ may take values in $(0, 1)$. Often, when the random variables involved are continuous, $R(X) = k$ happens with probability zero, whereas when the random variables involved are discrete we may require additional randomization.

The idea of the proof of Theorem 2 is nice. Consider the case where the $X_i$ take values in a set $A$; we want to find $C \subseteq A^n$ that maximizes
\[
\sum_{x \in C} P_1(X = x) = \sum_{x \in C} L(x; \theta_1)
\quad \text{subject to} \quad
\sum_{x \in C} P_0(X = x) = \sum_{x \in C} L(x; \theta_0) \le \alpha.
\]
Which elements of $A^n$ should be allowed to be in the set $C$? Think of each element $x \in C$ as having a cost $L(x; \theta_0)$ and a value $L(x; \theta_1)$. One guess would be the elements $x \in A^n$ that have high relative value; that is, $x \in A^n$ where $R(x)$ is small (how small depends on $\alpha$). So, one way to build the set $C$ is to order the elements of $A^n$ in terms of $R(x)$, and add elements to $C$ starting from high relative value and going to low. However, as the accumulated cost approaches $\alpha$, we may be forced to break the order, that is, choose an element of lower relative value, and/or stop before reaching the spending limit $\alpha$. Randomization solves this problem, as it allows us to spend the full limit $\alpha$.

Exercise 3. Let $X$ be an integer-valued random variable with pdf $f \in \{f_0, f_1\}$, where $f_0$ is the discrete uniform distribution on the 13 numbers $\{0, 1, 2, \ldots, 12\}$ and $f_1$ is the tent function given by $f_1(x) = x/36$ for all $x \in \{0, 1, \ldots, 6\}$ and $f_1(x) = 1/3 - x/36$ for all $x \in \{7, 8, \ldots, 12\}$. Consider the null hypothesis $H_0 : f = f_0$. On the basis of one single observation $X$, find the best test at significance level $\alpha = 3/13$. Define a randomized best test at level $\alpha = 0.25$. Find the power of your tests; that is, compute the power function at the alternative hypothesis.

Solution. Consider the set $C = \{5, 6, 7\}$ and the critical function $\mathbf{1}[X \in C]$, so that we reject $H_0$ if $X \in C$. We have that $P_0(X \in C) = 3/13 = \alpha$. The power of this test is given by $P_1(X \in C) = 5/36 + 6/36 + 5/36 = 4/9$. In order to show that it is a best test, we will appeal to Theorem 2: we need to find a $k$ such that $R(X) < k$ if and only if $X \in C$. Note that $R(6) = (1/13)/(6/36) \approx 0.461$, $R(5) = R(7) = (1/13)/(5/36) \approx 0.553$, and $R(4) = R(8) = (1/13)/(4/36) \approx 0.692$. Take $k = 0.6$; then $R(X) < k$ if and only if $X \in \{5, 6, 7\}$.
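This is exactly the greedy construction described above: order the sample points by $R(x)$ and add them to the critical region while the budget $\alpha$ allows. A short Python sketch of this bookkeeping for the pdfs of Exercise 3 (our code; it uses exact arithmetic via fractions.Fraction):

```python
# Exercise 3: order the points by the likelihood ratio R(x) = f0(x)/f1(x)
# and check the size and power of the critical region C = {5, 6, 7}.
from fractions import Fraction

f0 = {x: Fraction(1, 13) for x in range(13)}                  # discrete uniform
f1 = {x: Fraction(x, 36) if x <= 6 else Fraction(12 - x, 36)  # tent: 1/3 - x/36 = (12 - x)/36
      for x in range(13)}

R = {x: f0[x] / f1[x] for x in range(13) if f1[x] > 0}        # R is undefined where f1 = 0
print(sorted(R, key=R.get)[:5])    # smallest R first: [6, 5, 7, 4, 8]

C = {5, 6, 7}
size = sum(f0[x] for x in C)       # 3/13
power = sum(f1[x] for x in C)      # 16/36 = 4/9
print(size, power)
```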

If $\alpha = 0.25$, then we can apply randomization in the following way. Note that $P_0(X = 4) = 1/13$, so that if we expanded our critical set to contain $4$, we would get $P_0(X \in \{4, 5, 6, 7\}) = 4/13 \approx 0.308 > 0.25$. Moreover, if we wanted to apply Theorem 2, we would be forced to also include $8$, since $R(4) = R(8)$. Consider the test that is exactly the same as before, except that when $X = 4$ we reject $H_0$ with probability $1/4$; that is, set $\varphi(x) = 1$ if $x \in \{5, 6, 7\}$, $\varphi(x) = 0$ if $x \in \{0, 1, 2, 3, 8, 9, 10, 11, 12\}$, and $\varphi(4) = 1/4$. Clearly, $E_0 \varphi(X) = 3/13 + (1/4)(1/13) = 1/4 = \alpha$. The power is given by $4/9 + (1/4)(1/9) = 17/36$. Notice that we can take $k = R(4)$; then Theorem 2 applies.

Let us remark that, referring to Exercise 3, in practice one would prefer the non-randomized best test at level $\alpha = 3/13$ over the randomized best test at level $\alpha = 0.25$.

Exercise 4. Let $X$ be a continuous random variable with pdf $f \in \{f_0, f_1\}$, where $f_0$ is the pdf of a uniform distribution on $[0, 1]$ and $f_1$ is the pdf of a uniform distribution on $[0, 2]$. Consider the null hypothesis $H_0 : f = f_0$. On the basis of one single observation $X$, find the best test at significance level $\alpha$.

Exercise 5. Let $X$ be a continuous random variable with pdf given by $f(x; \theta) = \theta x^{\theta - 1} \mathbf{1}[x \in (0, 1)]$, where $\theta \in \{1, 2\}$. Consider the null hypothesis $H_0 : \theta = 1$. On the basis of one single observation $X$, find the best test at significance level $\alpha$.

Solution. By Theorem 2, we want to find $k > 0$ so that $P_0(R < k) = \alpha$, and we reject $H_0$ if $R < k$. Notice that under $H_0$, the observation $X$ is uniformly distributed on $[0, 1]$. We have that $R(X) = \frac{1}{2X}$, so that
\[
P_0(R < k) = P_0\!\left(\tfrac{1}{2k} < X\right) = 1 - \tfrac{1}{2k} = \alpha;
\]
thus $k = \frac{1}{2(1 - \alpha)}$.
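In other words, for Exercise 5 we reject when $X > \tfrac{1}{2k} = 1 - \alpha$, and under $\theta = 2$ (with cdf $x^2$) the power is $1 - (1 - \alpha)^2$. Here is a quick Monte Carlo sanity check (our sketch, assuming numpy; under $\theta = 2$ we sample via the inverse cdf $X = \sqrt{U}$):

```python
# Exercise 5: reject H0 when R(X) = 1/(2X) < k with k = 1/(2*(1 - alpha)),
# i.e. when X > 1 - alpha.  Check the size and power by simulation.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
k = 1 / (2 * (1 - alpha))
cutoff = 1 / (2 * k)                       # equals 1 - alpha

x0 = rng.uniform(size=200_000)             # theta = 1: f(x) = 1 on (0, 1)
x1 = np.sqrt(rng.uniform(size=200_000))    # theta = 2: f(x) = 2x, via inverse cdf

print("size  ~", (x0 > cutoff).mean(), " target:", alpha)
print("power ~", (x1 > cutoff).mean(), " target:", 1 - (1 - alpha) ** 2)
```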

Exercise 6. Let $X = (X_1, \ldots, X_n)$ be a random sample where $X_1 \sim N(\mu, 1)$, with $\mu \in \{0, 1\}$. Consider the null hypothesis $H_0 : \mu = 0$. On the basis of the random sample $X$, find the best test at significance level $\alpha$.

Solution. By Theorem 2, we want to find $k > 0$ so that $P_0(R < k) = \alpha$. Let $T = X_1 + \cdots + X_n$. We know that $T \sim N(n\mu, n)$ is a sufficient statistic for $\mu$; in particular, we know that
\[
L(x; \mu) = g(T(x); \mu)\, h(x),
\]
where $h$ does not depend on $\mu$ and $g(t; \mu)$ is the pdf of $T$. So
\[
R(X) = g(T; 0)/g(T; 1) = \exp\!\left[-T + \tfrac{n}{2}\right].
\]
Thus $R(X) < k$ if and only if $-T + \tfrac{n}{2} < \log k$, if and only if
\[
Z := \frac{T}{\sqrt{n}} > \frac{\tfrac{n}{2} - \log k}{\sqrt{n}} =: c(k).
\]
Notice that under $H_0$ we have $Z \sim N(0, 1)$. Choose $k$ so that $c(k) = z_\alpha$.

Exercise 7. Let $X = (X_1, \ldots, X_n)$ be a random sample where $X_1$ is an exponential random variable with mean $\mu \in \{2, 3\}$. Consider the null hypothesis $H_0 : \mu = 2$. On the basis of the random sample $X$, find the best test at significance level $\alpha$.

Proof of Theorem 2. Let $F(t) = P_0(R(X) \le t)$ be the cdf of $R(X)$ under $H_0$. We have that $\lim_{t \to -\infty} F(t) = 0$ and $\lim_{t \to \infty} F(t) = 1$ (assuming that $P_0(R(X) = \infty) = 0$). Recall that $F$ is right-continuous, so that $\lim_{t \to a^+} F(t) = F(a)$; however, it may not be left-continuous. We set
\[
F(a^-) := \lim_{t \to a^-} F(t) = P_0(R(X) < a).
\]
We have that $F(a^-) \le F(a)$ and $P_0(R(X) = a) = F(a) - F(a^-)$. Given $\alpha \in (0, 1)$, let $k > 0$ be a point such that $F(k^-) \le \alpha \le F(k)$. (We may be forced to take $k = \infty$ if $P_0(R(X) = \infty) > 0$.) Set
\[
\varphi(x) := \mathbf{1}[R(x) < k] + \frac{\alpha - F(k^-)}{P_0(R(X) = k)}\, \mathbf{1}[R(x) = k]
\]
if $P_0(R(X) = k) > 0$; otherwise, set $\varphi(x) := \mathbf{1}[R(x) < k]$. Clearly, $E_0 \varphi(X) = \alpha$, so that we have constructed a critical function with the required two properties. Moreover, our $\varphi$ has the property that it is constant when $R(x) = k$.
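Before continuing with the optimality part of the proof, note that this construction is mechanical when $R(X)$ is discrete: scan the possible values of $R$ in increasing order until $F(k^-) \le \alpha \le F(k)$, then randomize on $\{R = k\}$ with probability $\gamma = \frac{\alpha - F(k^-)}{P_0(R = k)}$. Below is a small Python sketch of this scan (our code; the function name np_critical_function is ours), applied to the Exercise 3 pdfs with $\alpha = 1/4$: it returns $\gamma = 1/8$ on both $4$ and $8$, in line with the earlier remark that Theorem 2 forces us to treat $4$ and $8$ alike, and the resulting test has the same size and power as the randomized test in the solution of Exercise 3.

```python
# Construct the Neyman-Pearson critical function for a discrete observation:
# phi = 1 on {R < k}, gamma on {R = k}, 0 on {R > k}, where F(k-) <= alpha <= F(k).
from fractions import Fraction

def np_critical_function(f0, f1, alpha):
    # Points with f1 = 0 are left out; there we would never reject (phi = 0),
    # as in the remark following Theorem 2.
    R = {x: f0[x] / f1[x] for x in f0 if f1[x] > 0}
    F_minus = Fraction(0)                                # running value of F(k-) = P0(R < k)
    for k in sorted(set(R.values())):
        mass_at_k = sum(f0[x] for x in R if R[x] == k)   # P0(R = k)
        if F_minus + mass_at_k >= alpha:                 # now F(k-) <= alpha <= F(k)
            gamma = (alpha - F_minus) / mass_at_k
            return {x: Fraction(1) if R[x] < k else gamma if R[x] == k else Fraction(0)
                    for x in R}
        F_minus += mass_at_k

f0 = {x: Fraction(1, 13) for x in range(13)}
f1 = {x: Fraction(x, 36) if x <= 6 else Fraction(12 - x, 36) for x in range(13)}
phi = np_critical_function(f0, f1, Fraction(1, 4))
print(phi[6], phi[5], phi[4], phi[8])    # 1 1 1/8 1/8
```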

Suppose now that $\varphi$ is a critical function that satisfies the two properties; we will show that $\varphi$ is a best test at level $\alpha$. Let $\varphi'$ be another critical function with $E_0 \varphi'(X) \le \alpha$. Note that
\[
[\varphi(x) - \varphi'(x)]\,[L(x; \theta_0) - k L(x; \theta_1)] \le 0,
\]
since the two factors never have the same strict sign: if $R(x) < k$, then $\varphi(x) = 1$ and $L(x; \theta_0) - k L(x; \theta_1) < 0$; if $R(x) > k$, then $\varphi(x) = 0$ and $L(x; \theta_0) - k L(x; \theta_1) > 0$; and if $R(x) = k$, the second factor vanishes. Write $dx = dx_1 \cdots dx_n$. In the case that the $X_i$ are continuous random variables, we have that
\[
\int [\varphi(x) - \varphi'(x)]\,[L(x; \theta_0) - k L(x; \theta_1)]\, dx \le 0.
\]
This gives us a bound on the difference of the powers, since it implies that
\[
\beta_\varphi(\theta_1) - \beta_{\varphi'}(\theta_1)
= \int [\varphi(x) - \varphi'(x)]\, L(x; \theta_1)\, dx
\ge \frac{1}{k} \int [\varphi(x) - \varphi'(x)]\, L(x; \theta_0)\, dx
= \frac{1}{k}\,[\alpha - E_0 \varphi'(X)] \ge 0.
\]
In the discrete case, one replaces the integrals by sums.

Finally, suppose $\varphi'$ is a most powerful test. Let $\varphi$ be a most powerful test satisfying the two conditions. We will show that $\varphi$ and $\varphi'$ are equal on the set $\{x : R(x) \ne k\}$. Towards a contradiction, let
\[
D = \{x : \varphi(x) - \varphi'(x) \ne 0\} \cap \{x : R(x) \ne k\}.
\]
For the continuous case, assume that $D$ has positive Lebesgue measure; that is, $\int \mathbf{1}[x \in D]\, dx > 0$. Now we have that, for $x \in D$,
\[
[\varphi(x) - \varphi'(x)]\,[L(x; \theta_0) - k L(x; \theta_1)] < 0,
\]
from which we deduce that $\varphi$ is strictly more powerful than $\varphi'$, a contradiction. In the discrete case, we need only assume that $D$ is non-empty for a similar contradiction. Thus $\varphi'$ satisfies the second condition. In order to argue that $E_0 \varphi'(X) = \alpha$, we note that if $E_0 \varphi'(X) < \alpha$, then we could include more points to be (randomly) rejected, thereby increasing the power. Thus we must have that the power is $1$ or the size is $\alpha$.

Exercise 8. Find an example where the power is $1$ and the size is not $1$.

Exercise 9. Let $X = (X_1, \ldots, X_n)$ be a random sample from $f_\theta$, where $\theta \in \{\theta_0, \theta_1\}$. Consider the null hypothesis $\theta = \theta_0$, and let $\alpha \in (0, 1)$. Suppose $\varphi$ is a critical function with $E_0 \varphi(X) < \alpha$ and $E_1 \varphi(X) < 1$. Show that there exists a critical function $\varphi'$ with $\varphi' \ge \varphi$, $E_0 \varphi'(X) \le \alpha$, and $E_1 \varphi'(X) > E_1 \varphi(X)$.

Corollary 10. In the context of Theorem 2, if $b$ is the power of a most powerful test at level $\alpha \in (0, 1)$, then $\alpha < b$, unless we are in the trivial case that $f_{\theta_0} = f_{\theta_1}$.

Proof of Corollary 10. Consider the test which ignores the data, where $D(x) = \alpha$ for all $x$. Clearly, $E_0 D(X) = \alpha$ and $E_1 D(X) = \alpha$, so the critical function $D$ gives a test of size $\alpha$ with power $\alpha$. Hence we must have that $\alpha \le b$. Moreover, if $\alpha = b$, then $D$ is also a most powerful test, and we have that $D$ satisfies the second condition of Theorem 2; since $\alpha \in (0, 1)$, so that $D$ never takes the values $0$ or $1$, this forces the condition that $L(X; \theta_0) = k L(X; \theta_1)$ for some $k$, which also forces the condition that $k = 1$, from which we deduce that $f_{\theta_0} = f_{\theta_1}$.

Sometimes, a test with the property that the significance level is no greater than the power is called unbiased. Corollary 10 gives that a best test is unbiased.
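As a tiny illustration of Corollary 10 (our check), in Exercise 3 the test $D(x) = \alpha$ that ignores the data has power equal to its size $3/13$, while the best test at that level has power $4/9$:

```python
# Corollary 10 on Exercise 3: the ignore-the-data test D(x) = alpha has power
# alpha, so a best test at level alpha must have power at least alpha.
from fractions import Fraction

alpha = Fraction(3, 13)                  # size of the Exercise 3 best test
power_ignore_data = alpha                # E1 D(X) = alpha for D(x) = alpha
power_best_test = Fraction(4, 9)         # power of the Exercise 3 best test
assert power_ignore_data < power_best_test
print(float(power_ignore_data), "<", float(power_best_test))
```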
