Hypothesis Test
Neyman-Pearson paradigm. Suppose that a researcher is interested in whether a new drug works. The process of determining whether the outcome of the experiment points to yes or no is called a hypothesis test. A widely used formalization of this process is due to Neyman and Pearson. Here we begin with the null hypothesis that the new drug has no effect, denoted by H_0. The null hypothesis is often the reverse of what we actually believe. Why? Because the researcher hopes to reject the hypothesis and announce that the new drug leads to significant improvements. If the hypothesis is not rejected, the researcher can announce nothing and move on to a new trial.

Hypothesis test of a population mean. Hospital workers are subject to radiation exposure emanating from the skin of the patient. A researcher is interested in the plausibility of the statement that the population mean µ of the radiation level is µ_0 (the researcher's hypothesis). Then the null hypothesis is H_0 : µ = µ_0. The opposite of the null hypothesis, called an alternative hypothesis, becomes H_A : µ ≠ µ_0. Thus, the hypothesis test problem "H_0 versus H_A" is formed. The problem here is whether or not to reject H_0 in favor of H_A.

Assessment of the null hypothesis. To assess the null hypothesis, the radiation levels X_1, ..., X_n are measured from n patients who had been injected with a radioactive tracer, and are assumed to be independent and normally distributed with mean µ. Under the null hypothesis, the random variable

    T = (X̄ − µ_0) / (S / √n)

has the t-distribution with (n − 1) degrees of freedom, and it is called a test statistic. Thus, we obtain the exact probability P(|T| ≥ t_{α/2, n−1}) = α. When α is chosen to be a small value (0.05 or 0.01, for example), it is unlikely that the absolute value |T| is larger than the critical point t_{α/2, n−1}.

Assessment of the null hypothesis, continued. We say that the null hypothesis H_0 is rejected with significance level α (or size α) when the observed value t of T satisfies |t| > t_{α/2, n−1}.
Example 1. We have µ_0 = 5.4 for the hypothesis, and we decide to run the test with significance level α = 0.05. Suppose that we have obtained the sample mean X̄ and the sample standard deviation S from the actual data with n = 28.

Solution. We compute T = (X̄ − 5.4)/(S/√28) = −1.79. Since |T| = 1.79 ≤ t_{0.025, 27} = 2.052, the null hypothesis cannot be rejected. Thus, the evidence against the null hypothesis is not persuasive.

Page 1 Mathematical Statistics/October 30, 2018
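The decision in Example 1 can be checked numerically. The sketch below uses Python with SciPy (a library choice of ours, not prescribed by the notes); since the individual values of X̄ and S are not given, it starts from the observed statistic t = −1.79.

```python
from scipy import stats

# Example 1: n = 28, alpha = 0.05, observed value of the test statistic.
n = 28
alpha = 0.05
t_obs = -1.79

crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # two-sided critical point t_{alpha/2, n-1}
reject = abs(t_obs) > crit                   # reject H0 iff |t| exceeds the critical point
print(round(crit, 3), reject)                # 2.052 False -> H0 is not rejected
```

The same comparison |t| vs. t_{α/2, n−1} underlies every two-sided t-test; only the degrees of freedom and α change.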
One-sided hypothesis test. In the same setting of hospital workers subject to radiation exposure, this time the researcher is interested in the plausibility of the statement that the population mean µ is less than µ_0. Then the hypothesis test problem is H_0 : µ ≥ µ_0 versus H_A : µ < µ_0. Here we use the same test statistic T = (X̄ − µ_0)/(S/√n), and we reject H_0 with significance level α when we find that t < −t_{α, n−1} for the observed value t of T.

Example 2. We use the same µ_0 = 5.4 for the hypotheses and the same significance level α = 0.05, but use the one-sided test.

Solution. Recall that X̄ and S were obtained from the data with n = 28, giving T = −1.79. Since T = −1.79 < −t_{0.05, 27} = −1.703, the null hypothesis H_0 is rejected. Thus, the outcome is statistically significant, so that the population mean µ is smaller than 5.4.

Simple and composite hypotheses. Let θ be a parameter of an underlying probability density function f(x; θ) for a certain population. The hypothesis H_0 : θ = θ_0 is called a simple hypothesis, since it completely specifies the underlying distribution, whereas the hypothesis H_0 : θ ∈ Θ_0 with a set Θ_0 of parameters is called a composite hypothesis if the set Θ_0 contains more than one element. The opposite of the null hypothesis is called an alternative hypothesis, and is similarly expressed as H_A : θ ∈ Θ_1, where Θ_1 is another set of parameters satisfying Θ_0 ∩ Θ_1 = ∅. The set Θ_1 is typically (but not necessarily) chosen to be the complement of Θ_0. Thus, the hypothesis test problem can be formed as

    H_0 : θ ∈ Θ_0 versus H_A : θ ∈ Θ_1    (4.1)

in order to determine whether or not to reject H_0 in favor of H_A.

Power function. Given a random sample X = (X_1, ..., X_n), the function

    δ(X) = 1 if H_0 is rejected; 0 otherwise    (4.2)

is called a test function. Given the test (4.2), we can define the power function by K(θ_0) = P(reject H_0 | θ = θ_0) = E(δ(X) | θ = θ_0).

Test statistic. A typical test, however, is presented in the form: H_0 is rejected if T(X) ≥ c.
Here T(X) is called a test statistic, and c is called a critical value. Then the test function can be expressed as

    δ(X) = 1 if T(X) ≥ c; 0 otherwise.    (4.3)

Thus, we obtain K(θ_0) = P(T(X) ≥ c | θ = θ_0).
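Since K(θ) = E(δ(X) | θ), the power function can be estimated by simulating samples under a given θ and averaging the test function. The sketch below (Python with NumPy/SciPy, our choice; the population standard deviation σ = 0.9 is a hypothetical value, not from the notes) does this for the one-sided t-test of H_0 : µ ≥ 5.4 with n = 28.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def delta(x, mu0=5.4, alpha=0.05):
    """One-sided test function: 1 if H0: mu >= mu0 is rejected, else 0."""
    n = len(x)
    t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    return int(t < -stats.t.ppf(1 - alpha, df=n - 1))

def power(theta, n=28, sigma=0.9, reps=2000):
    """Monte Carlo estimate of K(theta) = E[delta(X) | theta]."""
    return np.mean([delta(rng.normal(theta, sigma, n)) for _ in range(reps)])

print(power(5.4))  # roughly alpha = 0.05 at the boundary of H0
print(power(5.1))  # noticeably larger: the test detects mu < 5.4 more often
```

At the boundary θ = µ_0 the estimated power approximates the size α; moving θ into the alternative increases it, as the power-function discussion below describes.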
Type I error and significance level. The probability of a type I error (i.e., H_0 is incorrectly rejected when H_0 is true) is defined by α = sup_{θ_0 ∈ Θ_0} K(θ_0), which is also known as the size of the test. Having calculated the size α of the test, (4.2) or (4.3) is said to be a level α test, or a test with significance level α.

Type II error and power of a test. What is the probability that we incorrectly accept H_0 when it is actually false? Such a probability β is called the probability of a type II error. Then the value (1 − β) is known as the power of the test, indicating how reliably we can reject H_0 when it is actually false. Suppose that H_0 is in fact false, say θ = θ_1 for some θ_1 ∈ Θ_1. Then the power of the test is calculated by K(θ_1).

Example 3. Suppose that the true population mean is µ = 5.1 (versus the value µ_0 = 5.4 in our hypotheses). Then calculate the power of the test with significance level α = 0.05.

Solution. (a) In the two-sided hypothesis test, we reject H_0 when |T| > t_{0.025, 27} = 2.052. Therefore, the power of the test is K(5.1) = P(|T| > 2.052 | µ = 5.1) ≈ 0.52. (b) In the one-sided hypothesis test, we reject H_0 when T < −t_{0.05, 27} = −1.703. Therefore, the power of the test is K(5.1) = P(T < −1.703 | µ = 5.1) ≈ 0.66. This explains why we could not reject H_0 in the two-sided hypothesis test: our chance to detect the falsehood of H_0 is only 52%, while we have a 66% chance in the one-sided hypothesis test.

Uniformly most powerful test. Suppose that the test (4.2) has size α. This test is said to be uniformly most powerful (UMP) if it satisfies K(θ_1) ≥ K′(θ_1) for all θ_1 ∈ Θ_1, where K′ is the power function of any other level α test. Furthermore, if this test is given in the form (4.3) with test statistic T(X), then the test statistic T(X) is said to be optimal.

Neyman-Pearson lemma. Consider the testing problem with simple (null and alternative) hypotheses: H_0 : θ = θ_0 versus H_A : θ = θ_1. Then we can construct the likelihood ratio
    L(θ_0, θ_1; x) = L(θ_1; x) / L(θ_0; x),

where

    L(θ; x) = ∏_{i=1}^n f(x_i; θ)

is the likelihood function.
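For concreteness, consider a N(θ, 1) sample: the likelihood ratio then depends on the data only through Σ x_i and, for θ_0 < θ_1, increases with that sum. A minimal numerical check (Python with NumPy/SciPy, our choice of tooling; the particular parameter values are illustrative only):

```python
import numpy as np
from scipy import stats

def likelihood(theta, x):
    """L(theta; x) = product of f(x_i; theta) for N(theta, 1) data."""
    return np.prod(stats.norm.pdf(x, loc=theta, scale=1.0))

def lik_ratio(theta0, theta1, x):
    """L(theta0, theta1; x) = L(theta1; x) / L(theta0; x)."""
    return likelihood(theta1, x) / likelihood(theta0, x)

rng = np.random.default_rng(1)
samples = [rng.normal(0.0, 1.0, 10) for _ in range(5)]
samples.sort(key=lambda x: x.sum())          # order the samples by T(x) = sum x_i

ratios = [lik_ratio(0.0, 0.5, x) for x in samples]
# For theta0 < theta1 the ratio increases together with T(x):
print(all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:])))  # True
```

This monotonicity in a single statistic is exactly the structure the monotone likelihood ratio family formalizes later in the notes.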
Lemma 4. The test

    δ(X) = 1 if L(θ_0, θ_1; X) ≥ c; 0 otherwise    (4.4)

is uniformly most powerful, and is called the Neyman-Pearson test.

Proof. For any test function ψ(X) satisfying 0 ≤ ψ(X) ≤ 1, we obtain

    E(ψ(X) − δ(X) | θ = θ_1) = E[(ψ(X) − δ(X)) L(θ_0, θ_1; X) | θ = θ_0] ≤ c E(ψ(X) − δ(X) | θ = θ_0),

since (ψ(x) − δ(x))(L(θ_0, θ_1; x) − c) ≤ 0 for every x. Observe that the right-hand side vanishes when the two test functions ψ(X) and δ(X) share the same size α, so the power of δ is at least that of ψ.

Monotone likelihood ratio family. Let f(x; θ) be a joint density function with parameter θ, and let L(θ_0, θ_1; x) be the likelihood ratio. Suppose that T(X) is a statistic and does not depend on the parameter θ. Then f(x; θ) is called a monotone likelihood ratio family in T(X) if (a) f(x; θ_0) and f(x; θ_1) are distinct for θ_0 ≠ θ_1; and (b) L(θ_0, θ_1; x) is a strictly increasing function of T(x) whenever θ_0 < θ_1.

Monotone likelihood ratio family, continued. Let θ_0 < θ_1, and consider the following test problem: H_0 : θ = θ_0 versus H_A : θ = θ_1. Suppose that f(x; θ) is a monotone likelihood ratio family in T(X). Then we can express the UMP test (4.4) by

    δ(X) = 1 if T(X) ≥ c; 0 otherwise.

By setting ψ(X) ≡ K(θ_0) in the proof of the Neyman-Pearson lemma, we can observe that K(θ_0) = E(ψ(X) | θ = θ_1) ≤ K(θ_1).

Optimal tests. Consider the following test problem:

    H_0 : θ ≤ θ_0 (or H_0 : θ = θ_0) versus H_A : θ > θ_0.    (4.5)

If f(x; θ) is a monotone likelihood ratio family in T(X), then the test functions (4.3) and (4.4) are equivalent whenever θ_0 < θ_1, and the power function K(θ) for these tests becomes an increasing function. Furthermore, T(X) is an optimal test statistic, and the size of the test is simply given by α = K(θ_0).

Optimal tests, continued.
(a) Essentially, uniformly most powerful tests exist only for the test problem (4.5).

(b) Suppose that f(x; θ) is of the exponential family

    f(x; θ) = exp[c(θ) u(x) + h(x) + d(θ)], x ∈ A,

and that c(θ) is a strictly increasing function. Then f(x; θ) is a monotone likelihood ratio family in u(X), and the natural sufficient statistic u(X) becomes an optimal test statistic.

Likelihood ratio test procedure. The Neyman-Pearson test (4.4) can be generalized for the composite hypotheses in (4.1): (i) obtain the maximum likelihood estimate (MLE) θ̂ of θ; (ii) calculate also the MLE θ̂_0 restricted to θ ∈ Θ_0; and (iii) construct the likelihood ratio

    λ(X) = L(θ̂; X) / L(θ̂_0; X) = sup_θ L(θ; X) / sup_{θ ∈ Θ_0} L(θ; X) = max{ sup_{θ ∈ Θ_0} L(θ; X), sup_{θ ∈ Θ_1} L(θ; X) } / sup_{θ ∈ Θ_0} L(θ; X) ≥ 1.

The test statistic λ(X) yields an excellent test procedure in many practical applications, though it is not an optimal test in general.
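The steps (i)-(iii) can be sketched for a familiar special case: testing H_0 : µ = µ_0 for N(µ, σ²) data with both parameters unknown, where the profile likelihoods reduce λ to a ratio of variance MLEs. This is a sketch under that assumed model (Python/NumPy, our choice), not the notes' own example.

```python
import numpy as np

def lam(x, mu0):
    """Likelihood ratio lambda = L(theta_hat) / L(theta_hat_0) for N(mu, sigma^2),
    testing H0: mu = mu0; both maximized likelihoods equal
    (2*pi*s2)^(-n/2) * exp(-n/2), so lambda = (s2_0 / s2_hat)^(n/2)."""
    n = len(x)
    s2_hat = np.mean((x - x.mean()) ** 2)   # unrestricted MLE of the variance
    s2_0 = np.mean((x - mu0) ** 2)          # restricted MLE under H0: mu = mu0
    return (s2_0 / s2_hat) ** (n / 2)

rng = np.random.default_rng(2)
x = rng.normal(5.1, 1.0, 28)
print(lam(x, 5.4) >= 1.0)   # True: lambda >= 1 by construction
```

Large values of λ(X) indicate that the restriction θ ∈ Θ_0 fits the data poorly, which is why the procedure rejects for λ(X) above a critical value.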
Problem 1. Suppose that (X_1, ..., X_n) and (Y_1, ..., Y_n) are two independent random samples, respectively from N(µ_1, 400) and N(µ_2, 225). Let θ = µ_1 − µ_2, and let K(θ) be the power function for the test

    δ(X̄, Ȳ) = 1 if X̄ − Ȳ ≥ c; 0 otherwise,

where X̄ = (1/n) Σ_{i=1}^n X_i and Ȳ = (1/n) Σ_{i=1}^n Y_i. Calculate n and c so that K(0) = 0.05 and K(10) = 0.9.

Solution. Observe that X̄ ~ N(µ_1, 400/n) and Ȳ ~ N(µ_2, 225/n), and therefore X̄ − Ȳ ~ N(θ, 625/n). In order for K(θ) to achieve K(0) = P(X̄ − Ȳ ≥ c | θ = 0) = 0.05 and K(10) = 0.9, we must find c and n satisfying c/√(625/n) = z_{0.05} and (c − 10)/√(625/n) = −z_{0.1}. Therefore, we obtain c ≈ 5.62 and n ≈ 53.5 (rounded up to 54).

Problem 2. Suppose that (X_1, ..., X_n) is a random sample from N(0, θ) with parameter 0 < θ < ∞. Then show that the joint density f(x; θ) is a monotone likelihood ratio family in T(X) = Σ_{i=1}^n X_i². Find a UMP test for H_0 : θ ≤ θ_0 versus H_A : θ > θ_0.

Solution. The joint density f(x; θ) is of the exponential family f(x; θ) = exp[c(θ) T(x) + h(x) + d(θ)] with c(θ) = −1/(2θ), h(x) ≡ 0, and d(θ) = −(n/2) ln(2πθ). Thus, c(θ) is an increasing function of θ, and therefore

    δ(X) = 1 if T(X) ≥ c; 0 otherwise

is the UMP test.

Problem 3. Suppose that (X_1, ..., X_25) is a random sample of size n = 25 from N(θ, 100). Find the UMP test of size α = 0.1 for testing H_0 : θ ≤ 75 versus H_A : θ > 75.

Solution. When X_1, ..., X_n are iid N(θ, σ²) with known σ² = 100, the joint density

    f(x; θ) = exp[ (θ/σ²) Σ_{i=1}^n x_i − (1/(2σ²)) Σ_{i=1}^n x_i² − nθ²/(2σ²) − (n/2) ln(2πσ²) ]

becomes a monotone likelihood ratio family in T(X) = Σ_{i=1}^n X_i, and

    δ(X) = 1 if T(X) ≥ c; 0 otherwise

is the UMP test for H_0 : θ ≤ θ_0 versus H_A : θ > θ_0. Since P(T(X) ≥ c | θ = 75) = 0.1 with n = 25, we can find c = 50 z_{0.1} + (75)(25) ≈ 1939.1, where z_{0.1} ≈ 1.282 is the critical point with level α = 0.1.
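The two equations of Problem 1 can be solved numerically; the check below (Python with SciPy, our choice of tooling) reproduces c ≈ 5.62 and the fractional sample size that is rounded up to n = 54.

```python
import math
from scipy import stats

# Problem 1: choose n and c so that K(0) = 0.05 and K(10) = 0.9,
# where Xbar - Ybar ~ N(theta, 625/n).
z05 = stats.norm.ppf(0.95)   # z_{0.05}, about 1.645
z10 = stats.norm.ppf(0.90)   # z_{0.1}, about 1.282

# Solve c / sqrt(625/n) = z05 and (c - 10) / sqrt(625/n) = -z10:
sd = 10.0 / (z05 + z10)      # sd = sqrt(625/n) = 25/sqrt(n)
n = (25.0 / sd) ** 2
c = z05 * sd
print(round(c, 2), math.ceil(n))   # 5.62 54
```

Subtracting the two equations eliminates c and yields sd directly, which is why the solution needs no iterative root-finding.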
Problem 4. Suppose that (X_1, ..., X_n) is a random sample from N(θ, 16). Find the UMP test of H_0 : θ ≥ 25 versus H_A : θ < 25 so that the power function K(θ) achieves K(25) = 0.1 and K(23) = 0.9.

Solution. We can re-parametrize the density function by setting λ = −θ. Then the hypotheses are restated as H_0 : λ ≤ −25 versus H_A : λ > −25, and

    δ(X) = 1 if X̄ ≤ c; 0 otherwise

becomes a UMP test, with X̄ = (1/n) Σ_{i=1}^n X_i. Here we want to achieve P(X̄ ≤ c | θ = 25) = 0.1 and P(X̄ ≤ c | θ = 23) = 0.9 by choosing appropriate n and c. Since c = 25 − 4 z_{0.1}/√n and c = 23 + 4 z_{0.1}/√n, we obtain c = 24 and n ≈ 26.3 (rounded up to 27).

Problem 5. Suppose that (X_1, ..., X_n) is a random sample from the pdf f(x; θ) = θ x^{θ−1}, 0 < x < 1, with parameter 0 < θ < ∞. Then show that the joint density f(x; θ) is a monotone likelihood ratio family in T(X) = Σ_{i=1}^n ln X_i. Find a UMP test for H_0 : θ ≤ θ_0 versus H_A : θ > θ_0.

Solution. The joint density f(x; θ) is of the exponential family f(x; θ) = exp[c(θ) T(x) + h(x) + d(θ)] with c(θ) = θ − 1, h(x) ≡ 0, and d(θ) = n ln θ. Since c(θ) is increasing, f(x; θ) is a monotone likelihood ratio family in T(X), and

    δ(X) = 1 if T(X) ≥ c; 0 otherwise

is the UMP test.

Problem 6. Suppose that (X_1, ..., X_5) is a random sample of five Bernoulli trials having the frequency function f(x; θ) = θ^x (1 − θ)^{1−x}, x = 0, 1, with parameter 0 < θ < 1. (a) Show that the joint frequency f(x; θ) is a monotone likelihood ratio family in T(X) = Σ_{i=1}^5 X_i, and that

    δ(X) = 1 if T(X) ≥ c; 0 otherwise

is a UMP test for H_0 : θ ≤ 1/2 versus H_A : θ > 1/2. (b) Find the size of the test (i.e., significance level) when c = 4. (c) Find the size of the test (i.e., significance level) when c = 5.

Solution. The joint frequency f(x; θ) is of the exponential family f(x; θ) = exp[c(θ) T(x) + h(x) + d(θ)] with c(θ) = ln(θ/(1 − θ)), h(x) ≡ 0, and d(θ) = 5 ln(1 − θ).
(a) Since c(θ) is an increasing function, f(x; θ) is a monotone likelihood ratio family in T(X), and therefore δ(X) is the UMP test.

(b) P(T(X) ≥ 4 | θ = 1/2) = [C(5,4) + C(5,5)] (1/2)^5 = 6/32 ≈ 0.19.

(c) P(T(X) ≥ 5 | θ = 1/2) = C(5,5) (1/2)^5 = 1/32 ≈ 0.03.

Problem 7. Suppose that X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_m) are two random samples, from group 1 and group 2, respectively distributed as N(θ_1, θ_3) and N(θ_2, θ_3). (a) Under H_0 : θ_1 = θ_2 = θ_0, calculate the MLEs θ̂_0 and θ̂_3, and simplify L(θ̂_0, θ̂_3). (b) Under H_A : θ_1 ≠ θ_2, calculate the MLEs θ̂_1, θ̂_2, and θ̂_3. Then obtain L(θ̂_1, θ̂_2, θ̂_3). (c) Now suppose that n = m = 8, X̄ = 75.2, Ȳ = 78.6, Σ_{i=1}^n (X_i − X̄)² = 71.2, and Σ_{j=1}^m (Y_j − Ȳ)² = 54.8. Then construct a likelihood ratio test procedure for H_0 : θ_1 = θ_2. Test it at the significance level 0.05. Obtain the p-value and write a conclusion of the test.

Solution. (a) Under H_0 we obtain the log-likelihood function of θ_0 and θ_3:

    ln L(θ_0, θ_3) = −((n + m)/2) ln(2πθ_3) − (1/(2θ_3)) [ Σ_{i=1}^n (X_i − θ_0)² + Σ_{j=1}^m (Y_j − θ_0)² ].

Then we can find the MLEs by solving

    ∂/∂θ_0 ln L(θ_0, θ_3) = (1/θ_3) [ Σ_{i=1}^n (X_i − θ_0) + Σ_{j=1}^m (Y_j − θ_0) ] = 0;
    ∂/∂θ_3 ln L(θ_0, θ_3) = −(n + m)/(2θ_3) + (1/(2θ_3²)) [ Σ_{i=1}^n (X_i − θ_0)² + Σ_{j=1}^m (Y_j − θ_0)² ] = 0.

Thus, we can simplify

    L(θ̂_0, θ̂_3) = (1/(2πθ̂_3))^{(n+m)/2} exp(−(n + m)/2)

by applying the solution

    θ̂_0 = (1/(n + m)) [ Σ_{i=1}^n X_i + Σ_{j=1}^m Y_j ],
    θ̂_3 = (1/(n + m)) [ Σ_{i=1}^n (X_i − θ̂_0)² + Σ_{j=1}^m (Y_j − θ̂_0)² ].
(b) Under H_A we obtain the log-likelihood function

    ln L(θ_1, θ_2, θ_3) = −((n + m)/2) ln(2πθ_3) − (1/(2θ_3)) [ Σ_{i=1}^n (X_i − θ_1)² + Σ_{j=1}^m (Y_j − θ_2)² ].

Then we can find the MLEs by solving

    ∂/∂θ_1 ln L(θ_1, θ_2, θ_3) = (1/θ_3) Σ_{i=1}^n (X_i − θ_1) = 0;
    ∂/∂θ_2 ln L(θ_1, θ_2, θ_3) = (1/θ_3) Σ_{j=1}^m (Y_j − θ_2) = 0;
    ∂/∂θ_3 ln L(θ_1, θ_2, θ_3) = −(n + m)/(2θ_3) + (1/(2θ_3²)) [ Σ_{i=1}^n (X_i − θ_1)² + Σ_{j=1}^m (Y_j − θ_2)² ] = 0.

We can simplify

    L(θ̂_1, θ̂_2, θ̂_3) = (1/(2πθ̂_3))^{(n+m)/2} exp(−(n + m)/2)

by applying the solution

    θ̂_1 = X̄ := (1/n) Σ_{i=1}^n X_i,
    θ̂_2 = Ȳ := (1/m) Σ_{j=1}^m Y_j,
    θ̂_3 = (1/(n + m)) [ Σ_{i=1}^n (X_i − θ̂_1)² + Σ_{j=1}^m (Y_j − θ̂_2)² ].

(c) We can construct the likelihood ratio test statistic λ(X, Y) from the random samples X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_m) of groups 1 and 2 by

    λ(X, Y) = L(θ̂_1, θ̂_2, θ̂_3) / L(θ̂_0, θ̂_3′) = ( θ̂_3′ / θ̂_3 )^{(n+m)/2},

where θ̂_3′ denotes the restricted MLE of θ_3 from part (a). Here the test statistic can be expressed as

    λ(X, Y) = ( (n + m − 2 + T²) / (n + m − 2) )^{(n+m)/2},
where

    T = (X̄ − Ȳ) / ( S √(1/n + 1/m) )

with the pooled variance

    S² = (1/(n + m − 2)) [ Σ_{i=1}^n (X_i − X̄)² + Σ_{j=1}^m (Y_j − Ȳ)² ].

The test function is equivalently constructed by

    δ(X, Y) = 1 if |T| ≥ c; 0 otherwise.

Note that T has a t-distribution with (n + m − 2) degrees of freedom under H_0. Thus, by choosing the critical value c = t_{α/2, n+m−2} we can achieve the significance level

    P(|T| ≥ t_{α/2, n+m−2} | θ_1 = θ_2) = α

for the type I error. Finally, suppose that n = m = 8, X̄ = 75.2, Ȳ = 78.6, Σ_{i=1}^n (X_i − X̄)² = 71.2, and Σ_{j=1}^m (Y_j − Ȳ)² = 54.8. Then we obtain S = 3 and T ≈ −2.27. By comparing |T| = 2.27 with the critical value t_{0.025, 14} = 2.145, we can reject H_0. The same conclusion is obtained by calculating the p-value 0.04, which is less than α = 0.05.
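The arithmetic of Problem 7(c) can be verified with a short script (Python with SciPy, our choice of tooling; the value Σ(Y_j − Ȳ)² = 54.8 is inferred from the pooled S = 3 reported in the notes).

```python
import math
from scipy import stats

# Problem 7(c): two-sample pooled t-test of H0: theta_1 = theta_2.
n = m = 8
xbar, ybar = 75.2, 78.6
ssx, ssy = 71.2, 54.8                 # sums of squared deviations in each group

s = math.sqrt((ssx + ssy) / (n + m - 2))            # pooled S = 3
t_stat = (xbar - ybar) / (s * math.sqrt(1/n + 1/m)) # T = -2.27
crit = stats.t.ppf(0.975, df=n + m - 2)             # t_{0.025,14} = 2.145
p = 2 * stats.t.cdf(-abs(t_stat), df=n + m - 2)     # two-sided p-value

print(round(s, 1), round(t_stat, 2), round(crit, 3))  # 3.0 -2.27 2.145
print(abs(t_stat) > crit)                             # True: reject H0
```

Both the critical-value comparison and the p-value (about 0.04) lead to the same rejection of H_0, matching the conclusion above.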
Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two hypotheses:
More informationMathematics Ph.D. Qualifying Examination Stat Probability, January 2018
Mathematics Ph.D. Qualifying Examination Stat 52800 Probability, January 2018 NOTE: Answers all questions completely. Justify every step. Time allowed: 3 hours. 1. Let X 1,..., X n be a random sample from
More information7. Estimation and hypothesis testing. Objective. Recommended reading
7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing
More informationInterval Estimation. Chapter 9
Chapter 9 Interval Estimation 9.1 Introduction Definition 9.1.1 An interval estimate of a real-values parameter θ is any pair of functions, L(x 1,..., x n ) and U(x 1,..., x n ), of a sample that satisfy
More informationLecture 26: Likelihood ratio tests
Lecture 26: Likelihood ratio tests Likelihood ratio When both H 0 and H 1 are simple (i.e., Θ 0 = {θ 0 } and Θ 1 = {θ 1 }), Theorem 6.1 applies and a UMP test rejects H 0 when f θ1 (X) f θ0 (X) > c 0 for
More informationPolitical Science 236 Hypothesis Testing: Review and Bootstrapping
Political Science 236 Hypothesis Testing: Review and Bootstrapping Rocío Titiunik Fall 2007 1 Hypothesis Testing Definition 1.1 Hypothesis. A hypothesis is a statement about a population parameter The
More informationINTERVAL ESTIMATION AND HYPOTHESES TESTING
INTERVAL ESTIMATION AND HYPOTHESES TESTING 1. IDEA An interval rather than a point estimate is often of interest. Confidence intervals are thus important in empirical work. To construct interval estimates,
More informationMethods for Statistical Prediction Financial Time Series I. Topic 1: Review on Hypothesis Testing
Methods for Statistical Prediction Financial Time Series I Topic 1: Review on Hypothesis Testing Hung Chen Department of Mathematics National Taiwan University 9/26/2002 OUTLINE 1. Fundamental Concepts
More information2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)?
ECE 830 / CS 76 Spring 06 Instructors: R. Willett & R. Nowak Lecture 3: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we
More informationf(x θ)dx with respect to θ. Assuming certain smoothness conditions concern differentiating under the integral the integral sign, we first obtain
0.1. INTRODUCTION 1 0.1 Introduction R. A. Fisher, a pioneer in the development of mathematical statistics, introduced a measure of the amount of information contained in an observaton from f(x θ). Fisher
More information14.30 Introduction to Statistical Methods in Economics Spring 2009
MIT OpenCourseWare http://ocw.mit.edu 4.0 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
More information14.30 Introduction to Statistical Methods in Economics Spring 2009
MIT OpenCourseWare http://ocw.mit.edu.30 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. .30
More informationChapter 8.8.1: A factorization theorem
LECTURE 14 Chapter 8.8.1: A factorization theorem The characterization of a sufficient statistic in terms of the conditional distribution of the data given the statistic can be difficult to work with.
More informationStatistical Inference
Statistical Inference Classical and Bayesian Methods Revision Class for Midterm Exam AMS-UCSC Th Feb 9, 2012 Winter 2012. Session 1 (Revision Class) AMS-132/206 Th Feb 9, 2012 1 / 23 Topics Topics We will
More informationHypothesis Testing. Testing Hypotheses MIT Dr. Kempthorne. Spring MIT Testing Hypotheses
Testing Hypotheses MIT 18.443 Dr. Kempthorne Spring 2015 1 Outline Hypothesis Testing 1 Hypothesis Testing 2 Hypothesis Testing: Statistical Decision Problem Two coins: Coin 0 and Coin 1 P(Head Coin 0)
More informationDepartment of Mathematics
Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2017 Lecture 20: Significance Tests, I Relevant textbook passages: Larsen Marx [8]: Sections 7.2, 7.4, 7.5;
More informationHypothesis Testing. BS2 Statistical Inference, Lecture 11 Michaelmas Term Steffen Lauritzen, University of Oxford; November 15, 2004
Hypothesis Testing BS2 Statistical Inference, Lecture 11 Michaelmas Term 2004 Steffen Lauritzen, University of Oxford; November 15, 2004 Hypothesis testing We consider a family of densities F = {f(x; θ),
More informationLecture 12 November 3
STATS 300A: Theory of Statistics Fall 2015 Lecture 12 November 3 Lecturer: Lester Mackey Scribe: Jae Hyuck Park, Christian Fong Warning: These notes may contain factual and/or typographic errors. 12.1
More informationHomework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses.
Stat 300A Theory of Statistics Homework 7: Solutions Nikos Ignatiadis Due on November 28, 208 Solutions should be complete and concisely written. Please, use a separate sheet or set of sheets for each
More information10. Composite Hypothesis Testing. ECE 830, Spring 2014
10. Composite Hypothesis Testing ECE 830, Spring 2014 1 / 25 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve unknown parameters
More informationMaster s Written Examination
Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth
More informationLecture 21: October 19
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 21: October 19 21.1 Likelihood Ratio Test (LRT) To test composite versus composite hypotheses the general method is to use
More informationLECTURE 10: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING. The last equality is provided so this can look like a more familiar parametric test.
Economics 52 Econometrics Professor N.M. Kiefer LECTURE 1: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING NEYMAN-PEARSON LEMMA: Lesson: Good tests are based on the likelihood ratio. The proof is easy in the
More informationSTAT 514 Solutions to Assignment #6
STAT 514 Solutions to Assignment #6 Question 1: Suppose that X 1,..., X n are a simple random sample from a Weibull distribution with density function f θ x) = θcx c 1 exp{ θx c }I{x > 0} for some fixed
More informationMathematics Qualifying Examination January 2015 STAT Mathematical Statistics
Mathematics Qualifying Examination January 2015 STAT 52800 - Mathematical Statistics NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal,
More informationStat 135, Fall 2006 A. Adhikari HOMEWORK 6 SOLUTIONS
Stat 135, Fall 2006 A. Adhikari HOMEWORK 6 SOLUTIONS 1a. Under the null hypothesis X has the binomial (100,.5) distribution with E(X) = 50 and SE(X) = 5. So P ( X 50 > 10) is (approximately) two tails
More informationQualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf
Part : Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section
More informationPart III. A Decision-Theoretic Approach and Bayesian testing
Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to
More informationStatistics Masters Comprehensive Exam March 21, 2003
Statistics Masters Comprehensive Exam March 21, 2003 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your answer
More informationProbability Theory and Statistics. Peter Jochumzen
Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................
More informationSpring 2012 Math 541B Exam 1
Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote
More informationBEST TESTS. Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized.
BEST TESTS Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized. 1. Most powerful test Let {f θ } θ Θ be a family of pdfs. We will consider
More informationTwo-stage Adaptive Randomization for Delayed Response in Clinical Trials
Two-stage Adaptive Randomization for Delayed Response in Clinical Trials Guosheng Yin Department of Statistics and Actuarial Science The University of Hong Kong Joint work with J. Xu PSI and RSS Journal
More informationIntroductory Econometrics
Session 4 - Testing hypotheses Roland Sciences Po July 2011 Motivation After estimation, delivering information involves testing hypotheses Did this drug had any effect on the survival rate? Is this drug
More information