Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses.
Stat 300A Theory of Statistics
Homework 7: Solutions
Nikos Ignatiadis
Due on November 28, 2018

Solutions should be complete and concisely written. Please use a separate sheet or set of sheets for each problem. We will be using Gradescope for homework submission (you should have received an invitation); no paper homework will be accepted. Handwritten solutions are still fine, though: just make a good-quality scan and upload it to Gradescope. You are welcome to discuss problems with your colleagues, but you should write and submit your own solution.

P3.1 from Lehmann, Romano, Testing Statistical Hypotheses. Let $X_1, \ldots, X_n$ be an i.i.d. sample from $N(\xi, \sigma^2)$.

(i) If $\sigma = \sigma_0$ is known, there exists a UMP test for testing $H: \xi \leq \xi_0$ against $\xi > \xi_0$ which rejects when $\sum_{i=1}^n (X_i - \xi_0)$ is large.

(ii) If $\xi = \xi_0$ is known, there exists a UMP test for testing $H: \sigma \leq \sigma_0$ against $\sigma > \sigma_0$ which rejects when $\sum_{i=1}^n (X_i - \xi_0)^2$ is too large.

(i) Let us first derive the optimal test for $H: \xi = \xi_0$ against $K: \xi = \xi_1$, where $\xi_1 > \xi_0$. The optimal test is of course the Neyman-Pearson (NP) test. Writing $f_\xi(X) = \prod_{i=1}^n f_\xi(X_i)$ for the likelihood under $\xi$, we know that the NP test rejects for large values of

$$\frac{f_{\xi_1}}{f_{\xi_0}}(X) = \exp\left( -\frac{1}{2\sigma_0^2} \sum_{i=1}^n \left[ (X_i - \xi_1)^2 - (X_i - \xi_0)^2 \right] \right) = \exp\left( \frac{1}{2\sigma_0^2} \left[ 2(\xi_1 - \xi_0) \sum_{i=1}^n X_i - n(\xi_1^2 - \xi_0^2) \right] \right).$$

Observe that since $\xi_1 > \xi_0$, rejecting for large values of $\frac{f_{\xi_1}}{f_{\xi_0}}(X)$ must be equivalent to rejecting for large values of $\sum_{i=1}^n X_i$, hence also for large values of $T(X) = \sum_{i=1}^n (X_i - \xi_0)$; i.e., the NP test rejects when $T(X) \geq c$ for some constant $c$ (note that all distributions here have density w.r.t. Lebesgue measure, hence we do not need to be careful about $>$ versus $\geq$). The constant $c$ is of course determined so as to control the type-I error, i.e. we require

$$\alpha = P_{\xi_0}[T(X) \geq c] = 1 - \Phi\left( \frac{c}{\sqrt{n}\,\sigma_0} \right),$$

with $\Phi$ the standard normal CDF (note $T(X) \sim N(0, n\sigma_0^2)$ under $\xi_0$). In other words, $c = \sqrt{n}\,\sigma_0\, z_{1-\alpha}$, with $z_{1-\alpha}$ the $1-\alpha$ quantile of the standard normal distribution.
To summarize: the NP test for $H: \xi = \xi_0$ versus $K: \xi = \xi_1$ rejects when

$$T(X) \geq \sqrt{n}\,\sigma_0\, z_{1-\alpha}.$$

We now make two observations. First, this is still a valid test for $H: \xi \leq \xi_0$, since for any $\xi \leq \xi_0$ we have that

$$P_\xi[T(X) \geq c] = 1 - \Phi\left( \frac{c - n(\xi - \xi_0)}{\sqrt{n}\,\sigma_0} \right) \leq 1 - \Phi\left( \frac{c}{\sqrt{n}\,\sigma_0} \right) = \alpha.$$

Thus it must also be UMP for $H: \xi \leq \xi_0$ vs. $K: \xi = \xi_1$. Second, its form does not depend on the specific choice of $\xi_1 > \xi_0$, hence it must be UMP against any $\xi_1 > \xi_0$; i.e., it is UMP for $H: \xi \leq \xi_0$ against $K: \xi > \xi_0$. Note that this argument could have been shortened by sufficiency and monotone likelihood ratio considerations.

(ii) We proceed as in part (i). Fix $\sigma_1 > \sigma_0$ and let us derive the NP test for $H: \sigma = \sigma_0$ vs. $K: \sigma = \sigma_1$. The likelihood ratio is

$$\frac{f_{\sigma_1}}{f_{\sigma_0}}(X) = \frac{\sigma_0^n}{\sigma_1^n} \exp\left( \left( \frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2} \right) \sum_{i=1}^n (X_i - \xi_0)^2 \right).$$

Since $\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2} > 0$, we see that the NP test rejects for large values of $T(X) = \sum_{i=1}^n (X_i - \xi_0)^2$. Since under $H: \sigma = \sigma_0$ it holds that $\frac{T(X)}{\sigma_0^2} \sim \chi^2_n$, we see that the NP test rejects the null when $T(X) \geq c$, where $c = \sigma_0^2\, \chi^2_{n, 1-\alpha}$, with $\chi^2_{n, 1-\alpha}$ the $1-\alpha$ quantile of the chi-squared distribution with $n$ degrees of freedom. Now note that for $\sigma \leq \sigma_0$:

$$P_\sigma[T(X) \geq c] = P_\sigma\left[ \frac{T(X)}{\sigma^2} \geq \frac{\sigma_0^2}{\sigma^2}\, \chi^2_{n, 1-\alpha} \right] \leq P_\sigma\left[ \frac{T(X)}{\sigma^2} \geq \chi^2_{n, 1-\alpha} \right] = \alpha.$$

So the proposed test is a valid level-$\alpha$ test and even most powerful for $H: \sigma \leq \sigma_0$ against $K: \sigma = \sigma_1$. Its form does not depend on the specific choice of $\sigma_1 > \sigma_0$, hence it is UMP for $H: \sigma \leq \sigma_0$ versus $K: \sigma > \sigma_0$.
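The two rejection rules just derived are easy to sanity-check by simulation. Below is a minimal Monte Carlo sketch, not part of the formal argument: all parameter values ($n = 10$, $\xi_0 = 1$, $\sigma_0 = 2$, the shifts, and the replication counts) are arbitrary illustrative choices, and the chi-squared quantile in part (ii) is approximated by an empirical quantile of simulated null draws rather than computed exactly.

```python
import random
from statistics import NormalDist

random.seed(0)
alpha = 0.05
n, xi0, sigma0 = 10, 1.0, 2.0  # illustrative values only

# Part (i): reject when T(X) = sum(X_i - xi0) >= sqrt(n) * sigma0 * z_{1-alpha}.
c1 = (n ** 0.5) * sigma0 * NormalDist().inv_cdf(1 - alpha)

def reject_rate_mean(xi, reps=20000):
    """Monte Carlo rejection probability of the part (i) test under N(xi, sigma0^2)."""
    return sum(
        sum(random.gauss(xi, sigma0) for _ in range(n)) - n * xi0 >= c1
        for _ in range(reps)
    ) / reps

size_i = reject_rate_mean(xi0)        # should be close to alpha at the boundary of H
power_i = reject_rate_mean(xi0 + 2.0) # should be well above alpha for xi > xi0

# Part (ii): reject when sum((X_i - xi0)^2) >= sigma0^2 * chi2_{n,1-alpha}; the
# chi-squared quantile is replaced by an empirical quantile of simulated null draws.
null_draws = sorted(sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(20000))
c2 = sigma0 ** 2 * null_draws[int((1 - alpha) * 20000)]

def reject_rate_var(sigma, reps=20000):
    """Monte Carlo rejection probability of the part (ii) test under N(xi0, sigma^2)."""
    return sum(
        sum((random.gauss(xi0, sigma) - xi0) ** 2 for _ in range(n)) >= c2
        for _ in range(reps)
    ) / reps

size_ii = reject_rate_var(sigma0)        # close to alpha
power_ii = reject_rate_var(2.0 * sigma0) # well above alpha for sigma > sigma0
print(size_i, power_i, size_ii, power_ii)
```

Both estimated sizes should land near $\alpha = 0.05$ up to Monte Carlo error, while both powers exceed $\alpha$, consistent with the monotonicity arguments above.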
P3.2 from Lehmann, Romano, Testing Statistical Hypotheses. Let $X = (X_1, \ldots, X_n)$ be an i.i.d. sample from the uniform distribution $U[0, \theta]$.

(i) For testing $H: \theta \leq \theta_0$ against $K: \theta > \theta_0$, any test $\varphi$ is UMP at level $\alpha$ for which $E_{\theta_0} \varphi(X) = \alpha$, $E_\theta \varphi(X) \leq \alpha$ for $\theta \leq \theta_0$, and $\varphi(x) = 1$ when $\max\{x_1, \ldots, x_n\} > \theta_0$.

(ii) For testing $H: \theta = \theta_0$ against $K: \theta \neq \theta_0$, a unique UMP test exists and is given by $\varphi(x) = 1$ when $\max\{x_1, \ldots, x_n\} > \theta_0$ or $\max\{x_1, \ldots, x_n\} \leq \theta_0\, \alpha^{1/n}$, and $\varphi(x) = 0$ otherwise.

(i) Let us fix $\theta_1 > \theta_0$ and show that the specified test is a Neyman-Pearson test for $H: \theta = \theta_0$ versus $K: \theta = \theta_1$. For convenience we write $T(X) = \max\{X_1, \ldots, X_n\}$, and for the likelihood ratio:

$$\frac{f_{\theta_1}}{f_{\theta_0}}(X) = \frac{\theta_0^n}{\theta_1^n}\, \delta_{\theta_0, \theta_1}(T(X)), \quad \text{where we define } \delta_{\theta_0, \theta_1}(u) = \begin{cases} \infty, & \text{for } u \in (\theta_0, \theta_1], \\ 1, & \text{for } u \in [0, \theta_0]. \end{cases}$$

This defines $\delta_{\theta_0, \theta_1}(T(X))$ on sets with both $P_{\theta_0}$ and $P_{\theta_1}$ probability equal to $1$. Any test $\varphi$ of the form specified in the question is a test with size $\alpha$, i.e. $E_{\theta_0} \varphi(X) = \alpha$, and furthermore it rejects when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) > \frac{\theta_0^n}{\theta_1^n}$ and accepts when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) < \frac{\theta_0^n}{\theta_1^n}$ (this last statement is vacuous, since $P_{\theta_0}$- and $P_{\theta_1}$-almost surely this never happens). By the sufficient condition of the NP theorem in TSH, it must be most powerful, and since $\theta_1 > \theta_0$ was arbitrary, it is UMP for $H: \theta = \theta_0$ against $K: \theta > \theta_0$. Finally, this test is also valid for $H: \theta \leq \theta_0$, since by assumption it also satisfies $E_\theta \varphi(X) \leq \alpha$ for $\theta \leq \theta_0$. Thus it is UMP for $H: \theta \leq \theta_0$ against $K: \theta > \theta_0$.

(ii) We start by showing that the specified test is indeed UMP. Let us first check that under $\theta_0$:

$$P_{\theta_0}\left[ T(X) \leq \theta_0\, \alpha^{1/n} \right] = \prod_{i=1}^n P_{\theta_0}\left[ X_i \leq \theta_0\, \alpha^{1/n} \right] = \left( \alpha^{1/n} \right)^n = \alpha.$$

Hence, by our argument in part (i), the specified test is a UMP test for testing $H: \theta = \theta_0$ against $K: \theta > \theta_0$. Let us now check that it is UMP at the other tail too. Thus let us test $\theta_0$ vs. $\theta_1$ with $0 < \theta_1 < \theta_0$ and show it is most powerful. Here the likelihood ratio takes the form

$$\frac{f_{\theta_1}}{f_{\theta_0}}(X) = \frac{\theta_0^n}{\theta_1^n}\, \delta_{\theta_1, \theta_0}(T(X)),$$

where we define $\delta_{\theta_1, \theta_0}$ on the $(P_{\theta_0}, P_{\theta_1})$-support of $T(X)$ as:
$$\delta_{\theta_1, \theta_0}(u) = \begin{cases} 0, & \text{for } u \in (\theta_1, \theta_0], \\ 1, & \text{for } u \in [0, \theta_1]. \end{cases}$$

We see that the test given in the question rejects when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) > \frac{\theta_0^n}{2\theta_1^n}$ and accepts when $\frac{f_{\theta_1}}{f_{\theta_0}}(X) < \frac{\theta_0^n}{2\theta_1^n}$. It furthermore has size $\alpha$; thus, by the sufficient condition of the NP theorem in TSH, it must be most powerful. (Strictly, this matching of the rejection region to the likelihood-ratio cutoff is exact for $\theta_1 = \theta_0\, \alpha^{1/n}$. For $\theta_1 < \theta_0\, \alpha^{1/n}$ the test has power one, and for $\theta_1 > \theta_0\, \alpha^{1/n}$ its power $\alpha\, \theta_0^n / \theta_1^n$ attains the bound $E_{\theta_1} \varphi \leq (\theta_0^n / \theta_1^n)\, E_{\theta_0} \varphi$ valid for any size-$\alpha$ test, so it is most powerful against every $0 < \theta_1 < \theta_0$.)

It remains to check uniqueness. Let us first find the unique most powerful test of $\theta_0$ against $\theta_1 = \alpha^{1/n} \theta_0$. The necessary condition of the NP theorem now implies that there exists a $k$ such that the most powerful test rejects for $\delta_{\theta_1, \theta_0}(T(X)) > k$ and accepts for $\delta_{\theta_1, \theta_0}(T(X)) < k$. This $k$ clearly cannot be $< 0$, for otherwise the test would always reject and we would not have size $\alpha$. It also clearly cannot be $> 1$, for otherwise the test would always accept and have power $0$. Let us consider the following cases for $k$ (when we write "almost surely" below, we require the statement to hold $P_{\theta_0}$- and $P_{\theta_1}$-almost surely).

If $k \in (0, 1)$, then the test takes exactly the form of accepting when $\delta_{\theta_1, \theta_0}(T(X)) = 0$ and rejecting when it is $= 1$. But this is equivalent to rejecting when $T(X) \leq \theta_1 = \alpha^{1/n} \theta_0$ and accepting otherwise.

If $k = 0$, then the test still rejects at $T(X) \leq \theta_1 = \alpha^{1/n} \theta_0$, and it remains to consider what happens when $\delta_{\theta_1, \theta_0}(T(X)) = 0$. However, almost surely the test needs to accept on this event, since otherwise the test would have size $> \alpha$.

If $k = 1$, the test accepts when $\delta_{\theta_1, \theta_0}(T(X)) = 0$ and we need to check what happens when it is $= 1$. Clearly, power is strictly maximized by always rejecting on this event (vs. potentially not rejecting with some positive probability), and this choice still has the correct size $\alpha$.

To recap: the necessary condition of the NP theorem for $\theta_0$ vs. $\theta_0\, \alpha^{1/n}$ forces the test to take the form (at least almost everywhere w.r.t. Lebesgue measure) of rejecting when $T(X) \leq \theta_0\, \alpha^{1/n}$ and accepting when $T(X) \in (\theta_0\, \alpha^{1/n}, \theta_0]$. The same condition hence must also apply to the UMP test of $H: \theta = \theta_0$ vs. $K: \theta \neq \theta_0$. We still need to check what the test does for $T(X) > \theta_0$.
It is clear that the test should always reject then, since this event has probability $0$ under $P_{\theta_0}$ (hence it does not influence the size of the test), yet rejecting there increases power under any $\theta > \theta_0$. To be more precise: fixing any $\theta > \theta_0$, we see that power under this $\theta$ is strictly maximized if we always reject for $T(X) \in (\theta_0, \theta]$; and since this holds for any $\theta > \theta_0$, the unique UMP test must reject whenever $T(X) > \theta_0$. This establishes the uniqueness of the test given in the question.
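As a quick numerical companion to part (ii), the sketch below implements the claimed UMP test for $U[0, \theta]$ and checks by Monte Carlo that its size at $\theta_0$ is close to $\alpha$ and that it has nontrivial power on both sides. All parameter values ($n = 5$, $\theta_0 = 1$, $\alpha = 0.05$, the alternatives) are arbitrary illustrative choices.

```python
import random

random.seed(0)
n, theta0, alpha = 5, 1.0, 0.05
cutoff = theta0 * alpha ** (1 / n)  # theta0 * alpha^(1/n) ~ 0.549 here

def phi(x):
    """The claimed UMP test: reject iff max(x) > theta0 or max(x) <= theta0*alpha^(1/n)."""
    t = max(x)
    return 1 if (t > theta0 or t <= cutoff) else 0

def rejection_rate(theta, reps=100000):
    """Monte Carlo estimate of E_theta[phi(X)] for an i.i.d. U[0, theta] sample."""
    return sum(phi([random.uniform(0, theta) for _ in range(n)])
               for _ in range(reps)) / reps

size = rejection_rate(theta0)    # P[T <= theta0 * alpha^(1/n)] = alpha exactly in theory
power_low = rejection_rate(0.5)  # theta below the cutoff: T <= cutoff always, power 1
power_high = rejection_rate(1.5) # theta > theta0: rejects whenever T > theta0, and more
print(size, power_low, power_high)
```

Note that `power_low` is exactly $1$ here because $0.5 < \theta_0\, \alpha^{1/n} \approx 0.549$, matching the power-one regime discussed above.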
P3.9 from Lehmann, Romano, Testing Statistical Hypotheses. Let $X$ be distributed according to $P_\theta$, $\theta \in \Omega$, and let $T$ be sufficient for $\theta$. If $\varphi(X)$ is any test of a hypothesis concerning $\theta$, then $\psi(T)$ given by $\psi(t) = E[\varphi(X) \mid T = t]$ is a test depending on $T$ only, and its power is identical with that of $\varphi(X)$.

First let us observe that since $\varphi$ is a (potentially randomized) test, it satisfies $\varphi \in [0, 1]$. Hence also $\psi(\cdot) = E[\varphi(X) \mid T = \cdot] \in [0, 1]$ (to be more precise: there exists a version of the conditional expectation with this property). Furthermore, $\psi(T)$ is a well-defined statistic, since $T$ is sufficient for $\theta$ and hence the conditional expectation $E[\varphi(X) \mid T = t] = E_\theta[\varphi(X) \mid T = t]$ does not depend on $\theta$. Now take any $\theta \in \Omega$ and note that by the tower property (iterated expectation) we get:

$$E_\theta[\psi(T)] = E_\theta\left[ E_\theta[\varphi(X) \mid T] \right] = E_\theta[\varphi(X)].$$

But the right-hand side is just the power function of the test $\varphi$. So in particular, if $\varphi(X)$ is a level-$\alpha$ test, then so is $\psi(T)$; i.e., if we let $H \subseteq \Omega$ denote the parameters corresponding to the null hypothesis being tested, then

$$\sup_{\theta \in H} E_\theta[\psi(T)] = \sup_{\theta \in H} E_\theta[\varphi(X)] \leq \alpha.$$

If we also write $K \subseteq \Omega$ for the alternative, then for any $\theta \in K$ the two tests have the same power.
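The identity $E_\theta[\psi(T)] = E_\theta[\varphi(X)]$ can be made concrete with a toy discrete example (entirely hypothetical, not part of the problem): take $X_1, X_2$ i.i.d. Bernoulli$(p)$ with sufficient statistic $T = X_1 + X_2$, and the test $\varphi(x) = 1\{x_1 = 1\}$, which depends on more than $T$. Its conditional expectation $\psi(t) = E[\varphi(X) \mid T = t]$ depends on $T$ alone (and, by sufficiency, not on $p$), yet has the same power $p$ at every $p$. Exact rational arithmetic makes the equality exact rather than approximate.

```python
from fractions import Fraction
from itertools import product

def joint(x, p):
    """P(X = x) for x in {0,1}^2 with X1, X2 i.i.d. Bernoulli(p)."""
    pr = Fraction(1)
    for xi in x:
        pr *= p if xi == 1 else 1 - p
    return pr

def phi(x):
    """A test using more than T = X1 + X2: reject iff X1 = 1."""
    return 1 if x[0] == 1 else 0

def psi(t, p=Fraction(1, 3)):
    """psi(t) = E[phi(X) | T = t]; by sufficiency, the value of p used is irrelevant."""
    states = [x for x in product([0, 1], repeat=2) if sum(x) == t]
    num = sum(joint(x, p) * phi(x) for x in states)
    den = sum(joint(x, p) for x in states)
    return num / den

# Identical power functions: E_p[phi(X)] = E_p[psi(T)] = p for every p.
for p in [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]:
    power_phi = sum(joint(x, p) * phi(x) for x in product([0, 1], repeat=2))
    power_psi = sum(joint(x, p) * psi(sum(x)) for x in product([0, 1], repeat=2))
    assert power_phi == power_psi == p
print([psi(t) for t in (0, 1, 2)])  # psi takes the values 0, 1/2, 1 at t = 0, 1, 2
```

Here $\psi$ is a genuinely randomized test (it rejects with probability $\tfrac{1}{2}$ when $T = 1$), which is exactly why the statement allows $\psi \in [0, 1]$ rather than $\psi \in \{0, 1\}$.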
Chapter 9 Interval Estimation 9.1 Introduction Definition 9.1.1 An interval estimate of a real-values parameter θ is any pair of functions, L(x 1,..., x n ) and U(x 1,..., x n ), of a sample that satisfy
More informationCh. 5 Hypothesis Testing
Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,
More informationMathematics Qualifying Examination January 2015 STAT Mathematical Statistics
Mathematics Qualifying Examination January 2015 STAT 52800 - Mathematical Statistics NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal,
More informationTesting Statistical Hypotheses
E.L. Lehmann Joseph P. Romano, 02LEu1 ttd ~Lt~S Testing Statistical Hypotheses Third Edition With 6 Illustrations ~Springer 2 The Probability Background 28 2.1 Probability and Measure 28 2.2 Integration.........
More informationDetection Theory. Composite tests
Composite tests Chapter 5: Correction Thu I claimed that the above, which is the most general case, was captured by the below Thu Chapter 5: Correction Thu I claimed that the above, which is the most general
More informationStat 5102 Final Exam May 14, 2015
Stat 5102 Final Exam May 14, 2015 Name Student ID The exam is closed book and closed notes. You may use three 8 1 11 2 sheets of paper with formulas, etc. You may also use the handouts on brand name distributions
More informationAn Introduction to Signal Detection and Estimation - Second Edition Chapter III: Selected Solutions
An Introduction to Signal Detection and Estimation - Second Edition Chapter III: Selected Solutions H. V. Poor Princeton University March 17, 5 Exercise 1: Let {h k,l } denote the impulse response of a
More informationECE531 Lecture 6: Detection of Discrete-Time Signals with Random Parameters
ECE531 Lecture 6: Detection of Discrete-Time Signals with Random Parameters D. Richard Brown III Worcester Polytechnic Institute 26-February-2009 Worcester Polytechnic Institute D. Richard Brown III 26-February-2009
More informationHypothesis Testing One Sample Tests
STATISTICS Lecture no. 13 Department of Econometrics FEM UO Brno office 69a, tel. 973 442029 email:jiri.neubauer@unob.cz 12. 1. 2010 Tests on Mean of a Normal distribution Tests on Variance of a Normal
More information