Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses.


Stat 300A Theory of Statistics
Homework 7: Solutions
Nikos Ignatiadis
Due on November 28, 2018

Solutions should be complete and concisely written. Please use a separate sheet (or set of sheets) for each problem. We will be using Gradescope (https://www.gradescope.com) for homework submission (you should have received an invitation); no paper homework will be accepted. Handwritten solutions are still fine, though: just make a good-quality scan and upload it to Gradescope. You are welcome to discuss problems with your colleagues, but you should write and submit your own solution.

P3.1 from Lehmann, Romano, Testing Statistical Hypotheses. Let $X_1, \dots, X_n$ be an i.i.d. sample from $N(\xi, \sigma^2)$.

(i) If $\sigma = \sigma_0$ is known, there exists a UMP test for testing $H: \xi \leq \xi_0$ against $\xi > \xi_0$, which rejects when $\sum (X_i - \xi_0)$ is large.

(ii) If $\xi = \xi_0$ is known, there exists a UMP test for testing $H: \sigma \leq \sigma_0$ against $\sigma > \sigma_0$, which rejects when $\sum (X_i - \xi_0)^2$ is too large.

(i) Let us first derive the optimal test for $H: \xi = \xi_0$ against $K: \xi = \xi_1$, where $\xi_1 > \xi_0$. The optimal test is of course the Neyman-Pearson (NP) test. Writing $f_\xi(X) = \prod_{i=1}^n f_\xi(X_i)$ for the likelihood under $\xi$, we know that the NP test rejects for large values of

$$\frac{f_{\xi_1}}{f_{\xi_0}}(X) = \exp\left( -\frac{1}{2\sigma_0^2} \sum_{i=1}^n \left[ (X_i - \xi_1)^2 - (X_i - \xi_0)^2 \right] \right) = \exp\left( \frac{\xi_1 - \xi_0}{\sigma_0^2} \sum_{i=1}^n X_i - \frac{n(\xi_1^2 - \xi_0^2)}{2\sigma_0^2} \right).$$

Observe that since $\xi_1 > \xi_0$, rejecting for large values of $\frac{f_{\xi_1}}{f_{\xi_0}}(X)$ must be equivalent to rejecting for large values of $\sum_{i=1}^n X_i$, hence also for large values of $T(X) = \sum_{i=1}^n (X_i - \xi_0)$; i.e., the NP test rejects when $T(X) \geq c$ for some constant $c$ (note that all distributions here have densities w.r.t. Lebesgue measure, hence we do not need to be careful about $>$ versus $\geq$). The constant $c$ is of course determined so as to control the type-I error: since $T(X) \sim N(0, n\sigma_0^2)$ under $\xi_0$, we require

$$\alpha = P_{\xi_0}[T(X) \geq c] = 1 - \Phi\left( \frac{c}{\sqrt{n}\,\sigma_0} \right),$$

with $\Phi$ the standard Normal CDF. In other words, $c = \sqrt{n}\,\sigma_0\, z_{1-\alpha}$, with $z_{1-\alpha}$ the $1-\alpha$ quantile of the standard Normal distribution.
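As a numerical sanity check (not part of the assigned solution), the following sketch simulates the one-sided test above with critical value $c = \sqrt{n}\,\sigma_0 z_{1-\alpha}$; the values of $\xi_0$, $\sigma_0$, $n$, and $\alpha$ are arbitrary illustrative choices.

```python
# Monte Carlo check of the one-sided Normal-mean test:
# reject when T(X) = sum_i (X_i - xi0) >= sqrt(n) * sigma0 * z_{1-alpha}.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
xi0, sigma0, n, alpha = 1.0, 2.0, 25, 0.05
c = np.sqrt(n) * sigma0 * norm.ppf(1 - alpha)  # critical value

def rejection_rate(xi, reps=200_000):
    # Empirical probability that the test rejects under mean xi.
    X = rng.normal(xi, sigma0, size=(reps, n))
    T = (X - xi0).sum(axis=1)
    return float((T >= c).mean())

size = rejection_rate(xi0)        # should be close to alpha
power = rejection_rate(xi0 + 1.0) # exceeds alpha for xi > xi0
print(size, power)
```

The empirical size matches $\alpha$ up to Monte Carlo error, and the rejection rate at $\xi = \xi_0 + 1$ illustrates the power gain on the alternative side.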

To summarize: the NP test for $H: \xi = \xi_0$ versus $K: \xi = \xi_1$ rejects when $T(X) \geq \sqrt{n}\,\sigma_0\, z_{1-\alpha}$.

We now make two observations. First, this is still a valid level-$\alpha$ test for $H: \xi \leq \xi_0$, since for any $\xi \leq \xi_0$ we have

$$P_\xi[T(X) \geq c] = 1 - \Phi\left( \frac{c - n(\xi - \xi_0)}{\sqrt{n}\,\sigma_0} \right) \leq 1 - \Phi\left( \frac{c}{\sqrt{n}\,\sigma_0} \right) = \alpha.$$

Thus it must also be UMP for $H: \xi \leq \xi_0$ vs. $K: \xi = \xi_1$. Second, its form does not depend on the specific choice of $\xi_1 > \xi_0$, hence it must be UMP against any $\xi_1 > \xi_0$; i.e., it is UMP for $H: \xi \leq \xi_0$ against $K: \xi > \xi_0$. (Note that this argument could have been shortened by sufficiency and monotone likelihood ratio considerations.)

(ii) We proceed as in part (i). Fix $\sigma_1 > \sigma_0$ and let us derive the NP test for $H: \sigma = \sigma_0$ vs. $K: \sigma = \sigma_1$. The likelihood ratio is

$$\frac{f_{\sigma_1}}{f_{\sigma_0}}(X) = \frac{\sigma_0^n}{\sigma_1^n} \exp\left( \left( \frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2} \right) \sum_{i=1}^n (X_i - \xi_0)^2 \right).$$

Since $\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2} > 0$, we see that the NP test rejects for large values of $T(X) = \sum_{i=1}^n (X_i - \xi_0)^2$. Since under $H: \sigma = \sigma_0$ it holds that $T(X)/\sigma_0^2 \sim \chi^2_n$, we see that the NP test rejects the null when $T(X) \geq c$, where $c = \sigma_0^2\, \chi^2_{n, 1-\alpha}$, with $\chi^2_{n, 1-\alpha}$ the $1-\alpha$ quantile of the Chi-squared distribution with $n$ degrees of freedom. Now note that for $\sigma \leq \sigma_0$:

$$P_\sigma[T(X) \geq c] = P_\sigma\left[ \frac{T(X)}{\sigma^2} \geq \frac{\sigma_0^2}{\sigma^2}\, \chi^2_{n, 1-\alpha} \right] \leq P_\sigma\left[ \frac{T(X)}{\sigma^2} \geq \chi^2_{n, 1-\alpha} \right] = \alpha.$$

So the proposed test is even most powerful for $H: \sigma \leq \sigma_0$ against $K: \sigma = \sigma_1$. Its form does not depend on the specific choice of $\sigma_1 > \sigma_0$, hence it is UMP for $H: \sigma \leq \sigma_0$ versus $K: \sigma > \sigma_0$.
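The size bound in part (ii) can be checked in closed form: since $T(X)/\sigma^2 \sim \chi^2_n$ under $N(\xi_0, \sigma^2)$, the rejection probability is $1 - F_{\chi^2_n}(c/\sigma^2)$. The sketch below evaluates it with the $\chi^2_n$ CDF ($n$, $\alpha$, $\sigma_0$ are arbitrary illustrative choices).

```python
# Closed-form rejection probability of the variance test with
# critical value c = sigma0^2 * chi2_{n, 1-alpha}.
from scipy.stats import chi2

n, alpha, sigma0 = 10, 0.05, 1.5
c = sigma0**2 * chi2.ppf(1 - alpha, df=n)

def rejection_prob(sigma):
    # T(X) / sigma^2 ~ chi2_n under N(xi0, sigma^2), so
    # P_sigma[T >= c] = P[chi2_n >= c / sigma^2].
    return chi2.sf(c / sigma**2, df=n)

print(rejection_prob(sigma0))  # exactly alpha
print(rejection_prob(1.0))     # below alpha: sigma < sigma0 (null)
print(rejection_prob(2.5))     # above alpha: sigma > sigma0 (alternative)
```

This confirms numerically that the test has size exactly $\alpha$ at $\sigma_0$, rejection probability at most $\alpha$ throughout the null, and power exceeding $\alpha$ on the alternative.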

P3.2 from Lehmann, Romano, Testing Statistical Hypotheses. Let $X = (X_1, \dots, X_n)$ be an i.i.d. sample from the uniform distribution $U[0, \theta]$.

(i) For testing $H: \theta \leq \theta_0$ against $K: \theta > \theta_0$, any test is UMP at level $\alpha$ for which $E_{\theta_0} \varphi(X) = \alpha$, $E_\theta \varphi(X) \leq \alpha$ for $\theta \leq \theta_0$, and $\varphi(x) = 1$ when $\max\{x_1, \dots, x_n\} > \theta_0$.

(ii) For testing $H: \theta = \theta_0$ against $K: \theta \neq \theta_0$, a unique UMP test exists and is given by $\varphi(x) = 1$ when $\max\{x_1, \dots, x_n\} > \theta_0$ or $\max\{x_1, \dots, x_n\} \leq \theta_0\, \alpha^{1/n}$, and $\varphi(x) = 0$ otherwise.

(i) Let us fix $\theta_1 > \theta_0$ and show that the specified test is a Neyman-Pearson test for $H: \theta = \theta_0$ versus $K: \theta = \theta_1$. For convenience we write $T(X) = \max\{X_1, \dots, X_n\}$ and $f_\theta(X)$ for the likelihood; the likelihood ratio is

$$\frac{f_{\theta_1}}{f_{\theta_0}}(X) = \frac{\theta_0^n}{\theta_1^n}\, \delta_{\theta_0, \theta_1}(T(X)),$$

where we define

$$\delta_{\theta_0, \theta_1}(u) = \begin{cases} \infty, & \text{for } u \in (\theta_0, \theta_1], \\ 1, & \text{for } u \in [0, \theta_0]. \end{cases}$$

This defines $\delta_{\theta_0, \theta_1}(T(X))$ on sets with both $P_{\theta_0}$- and $P_{\theta_1}$-probability equal to $1$. Any test $\varphi$ of the form specified in the question is a test with size $\alpha$, i.e. $E_{\theta_0} \varphi(X) = \alpha$; furthermore, it rejects when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) > \frac{\theta_0^n}{\theta_1^n}$ and accepts when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) < \frac{\theta_0^n}{\theta_1^n}$ (this last statement is vacuous, since $P_{\theta_0}$- and $P_{\theta_1}$-almost surely this never happens). By the sufficient condition of the NP theorem (Theorem 3.2.1 in TSH), it must be most powerful, and since $\theta_1 > \theta_0$ was arbitrary, it is UMP for $H: \theta = \theta_0$ against $K: \theta > \theta_0$. Finally, this test is also valid for $H: \theta \leq \theta_0$, since by assumption it also satisfies $E_\theta \varphi(X) \leq \alpha$ for $\theta \leq \theta_0$. Thus it is UMP for $H: \theta \leq \theta_0$ against $K: \theta > \theta_0$.

(ii) We start by showing that the specified test is indeed UMP. Let us first check that under $\theta_0$:

$$P_{\theta_0}[T(X) \leq \theta_0\, \alpha^{1/n}] = \prod_{i=1}^n P_{\theta_0}[X_i \leq \theta_0\, \alpha^{1/n}] = \left( \alpha^{1/n} \right)^n = \alpha.$$

Hence, by our argument in part (i), the specified test is a UMP test for testing $H: \theta = \theta_0$ against $K: \theta > \theta_0$. Let us now check that it is UMP at the other tail too: thus let us test $\theta_0$ vs. $\theta_1$ with $0 < \theta_1 < \theta_0$ and show it is most powerful. Here the likelihood ratio takes the form

$$\frac{f_{\theta_1}}{f_{\theta_0}}(X) = \frac{\theta_0^n}{\theta_1^n}\, \delta_{\theta_1, \theta_0}(T(X)),$$

where we define $\delta_{\theta_1, \theta_0}$ on the $(P_{\theta_0}, P_{\theta_1})$-support of $T(X)$ as:

$$\delta_{\theta_1, \theta_0}(u) = \begin{cases} 0, & \text{for } u \in (\theta_1, \theta_0], \\ 1, & \text{for } u \in [0, \theta_1]. \end{cases}$$

We see that the test given in the question rejects when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) > \frac{\theta_0^n}{2\theta_1^n}$ and accepts when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) < \frac{\theta_0^n}{2\theta_1^n}$. It furthermore has size $\alpha$; thus, by the sufficient condition of the NP theorem (Theorem 3.2.1 in TSH), it must be most powerful.

It remains to check uniqueness. Let us first find the unique most powerful test of $\theta_0$ against $\theta_1 = \alpha^{1/n}\, \theta_0$. The necessary condition of Theorem 3.2.1 now implies that there exists a $k$ such that the most powerful test rejects for $\delta_{\theta_1, \theta_0}(T(X)) > k$ and accepts for $\delta_{\theta_1, \theta_0}(T(X)) < k$. This $k$ clearly cannot be $< 0$, for otherwise we would not have size $\alpha$. It also clearly cannot be $> 1$, for otherwise the test would have power $0$. Let us consider the following cases for $k$ (when we write "almost surely" below, we require the statement to hold $P_{\theta_0}$- and $P_{\theta_1}$-almost surely).

If $k \in (0, 1)$, then the test takes exactly the form of accepting when $\delta_{\theta_1, \theta_0}(T(X)) = 0$ and rejecting when it is $= 1$. But this is equivalent to rejecting when $T(X) \leq \theta_1 = \alpha^{1/n}\, \theta_0$ and accepting otherwise.

If $k = 0$, then the test still rejects at $T(X) \leq \theta_1 = \alpha^{1/n}\, \theta_0$, and it remains to consider what happens when $\delta_{\theta_1, \theta_0}(T(X)) = 0$. However, almost surely the test needs to accept on this event, since otherwise the test would have size $> \alpha$.

If $k = 1$, the test accepts when $\delta_{\theta_1, \theta_0}(T(X)) = 0$, and we need to check what happens when it is $= 1$. Clearly, power is strictly maximized by always rejecting on this event (vs. potentially not rejecting with some positive probability), and the resulting test still has the correct size $\alpha$.

To recap: the necessary condition of the NP theorem for $\theta_0$ vs. $\theta_0\, \alpha^{1/n}$ forces the test to take the form (at least almost everywhere w.r.t. Lebesgue measure) of rejecting when $T(X) \leq \theta_0\, \alpha^{1/n}$ and accepting when $T(X) \in (\theta_0\, \alpha^{1/n}, \theta_0]$. The same condition hence must also apply to the UMP test of $H: \theta = \theta_0$ vs. $K: \theta \neq \theta_0$. We still need to check what the test does for $T(X) > \theta_0$.
It is clear that the test should always reject then, since this happens with probability $0$ under $P_{\theta_0}$, hence does not influence the size of the test, yet increases power under any $\theta > \theta_0$. To be more precise: fixing any $\theta_1 > \theta_0$, we see that power under this $\theta_1$ is strictly maximized if we always reject for $T(X) \in (\theta_0, \theta_1]$; since this holds for any $\theta_1 > \theta_0$, the UMP test must reject whenever $T(X) > \theta_0$. This establishes uniqueness.
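Since $T(X) = \max_i X_i$ has CDF $(t/\theta)^n$ on $[0, \theta]$ under $U[0, \theta]$, the rejection probability of the two-sided test in part (ii) is available in closed form, and we can verify directly that it equals $\alpha$ at $\theta_0$ and exceeds $\alpha$ on both sides. A small check (with arbitrary illustrative values of $\theta_0$, $n$, $\alpha$):

```python
# Rejection probability of the test: reject when max X_i > theta0
# or max X_i <= theta0 * alpha^(1/n), under U[0, theta].
theta0, n, alpha = 1.0, 5, 0.1

def rejection_prob(theta):
    # CDF of T = max X_i under U[0, theta], clipped to [0, 1].
    cdf = lambda t: min(max(t / theta, 0.0), 1.0) ** n
    return cdf(theta0 * alpha ** (1 / n)) + (1.0 - cdf(theta0))

print(rejection_prob(theta0))  # exactly alpha
print(rejection_prob(1.5))     # power 1 - (theta0/theta)^n (1 - alpha)
print(rejection_prob(0.9))     # power also exceeds alpha for theta < theta0
```

For $\theta > \theta_0$ the formula reduces to $1 - (\theta_0/\theta)^n (1-\alpha)$, matching the lower-rejection probability $\alpha (\theta_0/\theta)^n$ plus the upper-tail mass $1 - (\theta_0/\theta)^n$.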

P3.9 from Lehmann, Romano, Testing Statistical Hypotheses. Let $X$ be distributed according to $P_\theta$, $\theta \in \Omega$, and let $T$ be sufficient for $\theta$. If $\varphi(X)$ is any test of a hypothesis concerning $\theta$, then $\psi(T)$ given by $\psi(t) = E[\varphi(X) \mid T = t]$ is a test depending on $T$ only, and its power is identical with that of $\varphi(X)$.

First, let us observe that since $\varphi$ is a (potentially randomized) test, we have $\varphi \in [0, 1]$. Hence also $\psi = E[\varphi(X) \mid T = \cdot\,] \in [0, 1]$ (to be more precise: there exists a version of the conditional expectation with this property). Furthermore, $\psi(T)$ is a well-defined statistic, since $T$ is sufficient for $\theta$ and hence the conditional expectation $E[\varphi(X) \mid T = t] = E_\theta[\varphi(X) \mid T = t]$ does not depend on $\theta$. Now take any $\theta \in \Omega$ and note that by the tower property (iterated expectation) we get:

$$E_\theta[\psi(T)] = E_\theta\left[ E_\theta[\varphi(X) \mid T] \right] = E_\theta[\varphi(X)].$$

But the right-hand side is just the power function of the test $\varphi$. So in particular, if $\varphi(X)$ is a level-$\alpha$ test, then so is $\psi(T)$; i.e., if we let $H \subset \Omega$ denote the parameters corresponding to the null hypothesis being tested, then:

$$\sup_{\theta \in H} E_\theta[\psi(T)] = \sup_{\theta \in H} E_\theta[\varphi(X)] \leq \alpha.$$

If we also write $K \subset \Omega$ for the alternative, then for any $\theta \in K$ the two tests have the same power.
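A small concrete illustration (my own choice of example, not part of the problem): take $X_1, \dots, X_n$ i.i.d. Bernoulli$(p)$, for which $T = \sum_i X_i$ is sufficient. For the wasteful test $\varphi(x) = x_1$, symmetry gives $\psi(t) = E[\varphi(X) \mid T = t] = t/n$, and both tests have power $E_p[\varphi(X)] = E_p[\psi(T)] = p$ for every $p$. The sketch below verifies this exactly by enumerating all $2^n$ outcomes.

```python
# Exact power comparison of phi(x) = x_1 and its Rao-Blackwellized
# version psi(T) = T / n in the Bernoulli(p)^n model, T = sum x_i.
from itertools import product

n = 4  # small n so that enumeration over {0,1}^n is cheap

def powers(p):
    pow_phi = pow_psi = 0.0
    for x in product([0, 1], repeat=n):
        s = sum(x)
        prob = p**s * (1 - p) ** (n - s)  # P_p[X = x]
        pow_phi += prob * x[0]            # E_p[phi(X)]
        pow_psi += prob * (s / n)         # E_p[psi(T)]
    return pow_phi, pow_psi

print(powers(0.3))  # both equal p = 0.3
```

The agreement holds for every $p$, exactly as the tower-property argument predicts: conditioning a test on a sufficient statistic changes nothing about its power function.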