Hypothesis Testing

CB: chapter 8; section 10.3

Hypothesis: a statement about an unknown population parameter. Examples:

- The average age of males in Sweden is 27. (statement about a population mean)
- The lowest time it takes to run 30 miles is 2 hours. (statement about a population minimum)
- Stocks are more volatile than bonds. (statement about the variances of stock and bond returns)

In hypothesis testing, you are interested in testing between two mutually exclusive hypotheses, called the null hypothesis (denoted H_0) and the alternative hypothesis (denoted H_1). H_0 and H_1 are complementary hypotheses, in the following sense: if the parameter being hypothesized about is θ, and the parameter space (i.e., the set of possible values for θ) is Θ, then the null and alternative hypotheses form a partition of Θ:

    H_0: θ ∈ Θ_0 ⊆ Θ
    H_1: θ ∈ Θ_0^c (the complement of Θ_0 in Θ).

Examples:

1. H_0: θ = 0 vs. H_1: θ ≠ 0
2. H_0: θ ≤ 0 vs. H_1: θ > 0

1 Definitions of test statistics

A test statistic, similarly to an estimator, is just some real-valued function T_n ≡ T(X_1, ..., X_n) of your data sample X_1, ..., X_n. Clearly, a test statistic is a random variable. A test is a function mapping values of the test statistic into {0, 1}, where

    0 means: accept the null hypothesis H_0 (reject the alternative hypothesis H_1);
    1 means: reject the null hypothesis H_0 (accept the alternative hypothesis H_1).
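To fix ideas, here is a minimal Python sketch (my own illustration, not part of the original notes) of the two ingredients just defined: a test statistic T and a test mapping its realizations into {0, 1}. The N(27, 5²) sample and the acceptance region [25, 29] are placeholder choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def T(x):
    """Test statistic: a real-valued function of the sample (here, the mean)."""
    return np.mean(x)

def test(t):
    """Test: maps a realization of the statistic into {0, 1}.
    1 = reject H0, 0 = accept H0; [25, 29] is an arbitrary acceptance region."""
    return 0 if 25.0 <= t <= 29.0 else 1

x = rng.normal(loc=27.0, scale=5.0, size=100)  # hypothetical sample of 100 ages
print(T(x), test(T(x)))
```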

The subset of the real line R for which the test equals 1 is called the rejection (or critical) region. The complement of the critical region (in the support of the test statistic) is the acceptance region.

Example: let µ denote the (unknown) mean male age in Sweden. You want to test:

    H_0: µ = 27 vs. H_1: µ ≠ 27.

Let your test statistic be X̄_100, the average age of 100 randomly drawn Swedish males. Just as with estimators, there are many different possible tests for a given pair of hypotheses H_0, H_1. Consider the following four tests:

1. Test 1: 1(X̄_100 ∉ [25, 29])
2. Test 2: 1(X̄_100 ≥ 29)
3. Test 3: 1(X̄_100 ≥ 35)
4. Test 4: 1(X̄_100 ≠ 27)

Which ones make the most sense?

Also, there are many possible test statistics, such as: (i) med_100 (the sample median); (ii) max(X_1, ..., X_100) (the sample maximum); (iii) mode_100 (the sample mode); (iv) sin(X_1) (the sine of the first observation). In what follows, we refer to a test as a combination of both (i) a test statistic and (ii) the mapping from realizations of the test statistic to {0, 1}. Next we consider some common types of tests.

1.1 Likelihood Ratio Test

Let X = (X_1, ..., X_n) be i.i.d. with density f(x|θ), and let L(θ|X) = ∏_{i=1}^n f(X_i|θ) denote the likelihood function. The likelihood ratio (LR) test statistic for testing H_0: θ ∈ Θ_0 vs. H_1: θ ∈ Θ_0^c is

    λ(X) ≡ sup_{θ∈Θ_0} L(θ|X) / sup_{θ∈Θ} L(θ|X).

The numerator of λ(X) is the restricted likelihood function, and the denominator is the unrestricted likelihood function.
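A small numerical sketch of this definition (mine, under assumed specifics): for an i.i.d. N(θ, 1) sample and H_0: θ = 1, the restricted supremum is L(1|X) and the unrestricted supremum is attained at the MLE X̄_n, anticipating the first example below.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, scale=1.0, size=50)  # data drawn under H0: theta = 1

def log_lik(theta, x):
    """Log-likelihood of an i.i.d. N(theta, 1) sample."""
    return np.sum(norm.logpdf(x, loc=theta, scale=1.0))

theta0 = 1.0                     # the single point in Theta_0
theta_mle = np.mean(x)           # unrestricted maximizer of L(theta | X)
lam = np.exp(log_lik(theta0, x) - log_lik(theta_mle, x))
print(f"lambda(X) = {lam:.4f}")  # lies in [0, 1]; near 1 when H0 is true
```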

The support of the LR test statistic is [0, 1]. Intuitively speaking, if H_0 is true (i.e., θ ∈ Θ_0), then λ(X) ≈ 1 (since the restriction θ ∈ Θ_0 will not bind). However, if H_0 is false, then λ(X) can be small (close to zero). So an LR test should be one which rejects H_0 when λ(X) is small; for example, 1(λ(X) < 0.75).

Example: X_1, ..., X_n i.i.d. N(θ, 1). Test H_0: θ = 1 vs. H_1: θ ≠ 1. Then

    λ(X) = exp(−(1/2) Σ_i (X_i − 1)²) / exp(−(1/2) Σ_i (X_i − X̄_n)²)

(the denominator arises because X̄_n is the unrestricted MLE of θ).

Example: X_1, ..., X_n i.i.d. U[0, θ].

(i) Test H_0: θ = 2 vs. H_1: θ ≠ 2. The restricted likelihood function is

    L(X|2) = (1/2)^n if max(X_1, ..., X_n) ≤ 2;  = 0 if max(X_1, ..., X_n) > 2.

The unrestricted likelihood function is

    L(X|θ) = (1/θ)^n if max(X_1, ..., X_n) ≤ θ;  = 0 if max(X_1, ..., X_n) > θ,

which is maximized at θ̂_n^MLE = max(X_1, ..., X_n). Hence the denominator of the LR statistic is (1/max(X_1, ..., X_n))^n, so that

    λ(X) = 0 if max(X_1, ..., X_n) > 2;  = (max(X_1, ..., X_n)/2)^n otherwise.

The LR test would say: 1(λ(X) < c). The critical region consists of two disconnected parts (graph).
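A quick numerical check of part (i) (my own sketch): λ(X) drops to zero as soon as the sample maximum exceeds 2, and equals (max_i X_i / 2)^n otherwise, so very small sample maxima and maxima above 2 both land in the critical region.

```python
import numpy as np

rng = np.random.default_rng(2)

def lam_uniform(x):
    """LR statistic for H0: theta = 2 in the U[0, theta] model."""
    m, n = np.max(x), len(x)
    return 0.0 if m > 2.0 else (m / 2.0) ** n

for theta in (1.0, 2.0, 3.0):        # truth below, at, and above the null
    x = rng.uniform(0.0, theta, size=20)
    print(f"theta = {theta}: lambda(X) = {lam_uniform(x):.4g}")
```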

(ii) Test H_0: θ ∈ [0, 2] vs. H_1: θ > 2. In this case the restricted likelihood is

    sup_{θ∈[0,2]} L(X|θ) = (1/max(X_1, ..., X_n))^n if max(X_1, ..., X_n) ≤ 2;  = 0 otherwise,

so

    λ(X) = 1 if max(X_1, ..., X_n) ≤ 2;  = 0 otherwise.    (1)

(graph) So now the LR test is 1(λ(X) < c) = 1(max(X_1, ..., X_n) > 2), for any c ∈ (0, 1).

1.2 Wald Tests

Another common way to generate test statistics is to focus on statistics which are asymptotically normally distributed under H_0 (i.e., if H_0 were true). A common situation is when the estimator for θ, call it θ̂_n, is asymptotically normal, with some asymptotic variance V (e.g., the MLE). Let the null be H_0: θ = θ_0. Then, if the null were true:

    √n (θ̂_n − θ_0) / √V →d N(0, 1).    (2)

The quantity on the LHS is the t-test statistic.

Note: in most cases, the asymptotic variance V will not be known, and will also need to be estimated.
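A simulation sketch of (2) (my own illustration, with an assumed N(θ_0, V) data-generating process and V treated as known): drawing many samples under H_0 and forming the t-statistic should produce draws that look standard normal.

```python
import numpy as np

rng = np.random.default_rng(3)
theta0, V, n, reps = 1.0, 4.0, 200, 10_000

z = np.empty(reps)
for r in range(reps):
    x = rng.normal(loc=theta0, scale=np.sqrt(V), size=n)       # sample under H0
    theta_hat = np.mean(x)              # estimator with asymptotic variance V
    z[r] = np.sqrt(n) * (theta_hat - theta0) / np.sqrt(V)      # t-stat in (2)

print(np.mean(z), np.var(z))            # approximately 0 and 1
```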

However, if we have an estimator V̂_n such that V̂_n →p V, then the statement

    Z_n ≡ √n (θ̂_n − θ_0) / √V̂_n →d N(0, 1)

still holds (using the plim operator and Slutsky theorems). In what follows, therefore, we assume for simplicity that we know V. We consider two cases:

(i) Two-sided test: H_0: θ = θ_0 vs. H_1: θ ≠ θ_0.

Under H_0: the CLT holds, and the t-stat →d N(0, 1).

Under H_1: assume that the true value is some θ_1 ≠ θ_0. Then the t-stat can be written as

    √n (θ̂_n − θ_0)/√V = √n (θ̂_n − θ_1)/√V + √n (θ_1 − θ_0)/√V.

The first term →d N(0, 1), but the second (non-stochastic) term diverges to +∞ or −∞, depending on whether the true θ_1 exceeds or is less than θ_0. Hence the t-stat diverges to +∞ or −∞ with probability 1. Hence, in this case, your test should be 1(|Z_n| > c), where c should be some number in the tails of the N(0, 1) distribution.

Multivariate version: θ is K-dimensional and asymptotically normal, so that under H_0 we have √n (θ_n − θ_0) →d N(0, Σ). Then we can test H_0: θ = θ_0 vs. H_1: θ ≠ θ_0 using the quadratic form

    Z_n ≡ n (θ_n − θ_0)' Σ^{-1} (θ_n − θ_0) →d χ²_K.

The test takes the form 1(Z_n > c).

(ii) One-sided test: H_0: θ ≤ θ_0 vs. H_1: θ > θ_0.

Here the null hypothesis specifies a whole range of true θ (Θ_0 = (−∞, θ_0]), whereas the t-test statistic is evaluated at just one value of θ. Just as for the two-sided test, the one-sided t-stat is evaluated at θ_0, so that

    Z_n = √n (θ̂_n − θ_0) / √V.

Under H_0 and θ < θ_0: Z_n diverges to −∞ with probability 1.

Under H_0 and θ = θ_0: the CLT holds, and the t-stat →d N(0, 1).

Under H_1: Z_n diverges to +∞ with probability 1.

Hence, in this case, you will reject the null only for very large values of Z_n. Correspondingly, your test should be 1(Z_n > c), where c should be some number in the right tail of the N(0, 1) distribution. Later, we will discuss how to choose c.

1.3 Score test

Consider a model with log-likelihood function log L(θ|X) = Σ_{i=1}^n log f(X_i|θ). Let H_0: θ = θ_0. The sample score function evaluated at θ_0 is

    S(θ_0) ≡ (1/n) ∂ log L(θ|X)/∂θ |_{θ=θ_0} = (1/n) Σ_i ∂ log f(X_i|θ)/∂θ |_{θ=θ_0}.

Define W_i ≡ ∂ log f(X_i|θ)/∂θ |_{θ=θ_0}. Under the null hypothesis, S(θ_0) converges in probability to

    E_{θ_0} W_i = ∫ [∂f(x|θ)/∂θ |_{θ=θ_0} / f(x|θ_0)] f(x|θ_0) dx = (d/dθ) ∫ f(x|θ) dx |_{θ=θ_0} = (d/dθ) 1 = 0

(the information inequality). Hence,

    V_{θ_0}(W_i) = E_{θ_0} W_i² = E_{θ_0} ( ∂ log f(X|θ)/∂θ |_{θ=θ_0} )² ≡ V_0.

(Note that V_0^{-1} is the usual asymptotic variance of the MLE, which is the CRLB.) Therefore, you can apply the CLT to get that, under H_0,

    √n S(θ_0) / √V_0 →d N(0, 1).

(If we don't know V_0, we can use some consistent estimator V̂_0 of it.)
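A simulation sketch of this result (my own, for the Bernoulli(θ) model as an assumed example, where ∂ log f(x|θ)/∂θ = x/θ − (1 − x)/(1 − θ) and V_0 = 1/(θ_0(1 − θ_0))):

```python
import numpy as np

rng = np.random.default_rng(4)
theta0, n, reps = 0.4, 300, 10_000
V0 = 1.0 / (theta0 * (1.0 - theta0))          # information at theta0

stats = np.empty(reps)
for r in range(reps):
    x = rng.binomial(1, theta0, size=n)       # data generated under H0
    w = x / theta0 - (1 - x) / (1 - theta0)   # per-observation scores W_i
    stats[r] = np.sqrt(n) * np.mean(w) / np.sqrt(V0)

print(np.mean(stats), np.var(stats))          # approximately 0 and 1
```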

So a test of H_0: θ = θ_0 could be formulated as

    1( √n S(θ_0)/√V_0 > c ),

where c is in the right tail of the N(0, 1) distribution.

Multivariate version: if θ is K-dimensional,

    S_n ≡ n S(θ_0)' V_0^{-1} S(θ_0) →d χ²_K.

Recall that, in the previous lecture notes, we derived the following asymptotic equality for the MLE:

    √n (θ̂_n − θ_0) a= −[ (1/n) Σ_i ∂² log f(X_i|θ)/∂θ² |_{θ=θ_0} ]^{-1} (1/√n) Σ_i ∂ log f(X_i|θ)/∂θ |_{θ=θ_0}
                     = √n S(θ_0) / V_0.    (3)

(The notation a= means that the LHS and RHS differ by some quantity which is o_p(1).) Hence, the above implies (applying the information inequality, as we did before) that

    √n (θ̂_n − θ_0) a= √n S(θ_0)/V_0 →d N(0, V_0^{-1}).

The Wald statistic is based on the LHS of this display, while the Score statistic is based on the middle expression. In this sense, these two tests are asymptotically equivalent. Note: the asymptotic variance of the Wald statistic equals the reciprocal of V_0. (Later, you will also see that the Likelihood Ratio Test statistic is asymptotically equivalent to these two.)

The LR, Wald, and Score tests (the "trinity" of test statistics) require different models to be estimated:

- The LR test requires both the restricted and the unrestricted models to be estimated.
- The Wald test requires only the unrestricted model to be estimated.

- The Score test requires only the restricted model to be estimated.

The applicability of each test then depends on the nature of the hypotheses. For H_0: θ = θ_0, the restricted model is trivial to estimate, and so the LR or Score test might be preferred. For H_0: θ ≥ θ_0 (say), the restricted model is a constrained maximization problem, so the Wald test might be preferred.

2 Methods of evaluating tests

Consider X_1, ..., X_n i.i.d. (µ, σ²), with test statistic X̄_n. Test H_0: µ = 2 vs. H_1: µ ≠ 2. Why are the following good or bad tests?

1. 1(X̄_n ≠ 2)
2. 1(X̄_n ≤ 2.1)
3. 1(X̄_n ∉ [1.8, 2.2])
4. 1(X̄_n ∉ [−10, 30])

Test 1 rejects too often (in fact, for every n, you reject with probability 1). Test 2 is even worse, since it rejects even when X̄_n is close to 2. Test 3 is not so bad. Test 4 accepts too often.

Basically, we are worried when a test is wrong. Since the test itself is a random variable, we cannot guarantee that a test is never wrong, but we can characterize how often it would be wrong. There are two types of mistakes that we are worried about:

Type I error: rejecting H_0 when it is true. (This is the problem with tests 1 and 2.)

Type II error: accepting H_0 when it is false. (This is the problem with test 4.)
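Both error probabilities are easy to approximate by Monte Carlo. A sketch (mine, under the assumption X̄_n ~ N(µ, σ²/n), which holds exactly for normal data): simulate the statistic under the null µ = 2 to estimate the Type I error, and under a chosen alternative to estimate the Type II error, here for Test 3.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, sigma = 100, 20_000, 1.0

def test3(xbar):
    """Test 3: reject H0 (mu = 2) when the sample mean leaves [1.8, 2.2]."""
    return float(not (1.8 <= xbar <= 2.2))

# Type I error: rejection frequency when H0 is true (mu = 2).
xbar0 = rng.normal(2.0, sigma / np.sqrt(n), size=reps)
print("P(type I)  ~", np.mean([test3(x) for x in xbar0]))

# Type II error: acceptance frequency under an alternative (mu = 2.5).
xbar1 = rng.normal(2.5, sigma / np.sqrt(n), size=reps)
print("P(type II) ~", 1 - np.mean([test3(x) for x in xbar1]))
```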

Let T_n ≡ T(X_1, ..., X_n) denote the sample test statistic. Consider a test with rejection region R (i.e., the test is 1(T_n ∈ R)). Then:

    P(type I error) = P(T_n ∈ R | θ ∈ Θ_0)
    P(type II error) = P(T_n ∉ R | θ ∈ Θ_0^c)

Example: X_1, X_2 i.i.d. Bernoulli, with success probability p. Test H_0: p = 1/2 vs. H_1: p ≠ 1/2. Consider the test 1((X_1 + X_2)/2 ≠ 1/2).

Type I error: rejecting H_0 when p = 1/2.

    P(Type I error) = P((X_1 + X_2)/2 ≠ 1/2 | p = 1/2)
                    = P((X_1 + X_2)/2 = 0 | p = 1/2) + P((X_1 + X_2)/2 = 1 | p = 1/2)
                    = 1/4 + 1/4 = 1/2.

Type II error: accepting H_0 when p ≠ 1/2. For p ≠ 1/2:

    (X_1 + X_2)/2 = 0 with prob (1 − p)²
                  = 1/2 with prob 2(1 − p)p
                  = 1 with prob p².

So P((X_1 + X_2)/2 = 1/2 | p) = 2p(1 − p). Graph.

2.1 Power function

More generally, type I and type II errors are summarized in the power function.

Definition: the power function of a hypothesis test with a rejection region R is the function of θ defined by β(θ) = P(T_n ∈ R | θ).

Example: for the above example, β(p) = P((X_1 + X_2)/2 ≠ 1/2 | p) = 1 − 2p(1 − p). Graph.

The power function gives the Type I error probabilities, for any singleton null hypothesis H_0: p = p_0. From β(p), we see that if you are worried about Type I error, then you should only use this test when your null is that p is close to 1/2 (because only for p_0 close to 1/2 is the power function low).

Likewise, 1 − β(p) gives you the Type II error probabilities, for any point alternative hypothesis. So if you are worried about Type II error, then β(p) tells you that you should use this test when your alternative hypothesis postulates that p is low (close to zero). We say that this test has good power against alternative values of p close to zero.

Important: the power function is specific to a given test 1(T_n ∈ R), regardless of the specific hypotheses that the test may be used for.

Example: X_1, ..., X_n i.i.d. U[0, θ]. Test H_0: θ ≤ 2 vs. H_1: θ > 2. Derive β(θ) for the LR test 1(λ(X) < c).

Recall our earlier derivation of the LR test in Eq. (1). Hence,

    β(θ) = P(λ(X) < c | θ) = P(max(X_1, ..., X_n) > 2 | θ) = 1 − P(max(X_1, ..., X_n) ≤ 2 | θ)
         = 0 for θ ≤ 2;  = 1 − (2/θ)^n for θ > 2.

Graph.

In practice, researchers are often concerned about Type I error (i.e., they don't want to reject H_0 unless the evidence against it is overwhelming): a conservative bias? But if this is so, then you want a test with a power function β(θ) which is low for θ ∈ Θ_0 but high elsewhere. This motivates the definitions of the size and the level of a test.

For 0 ≤ α ≤ 1, a test with power function β(θ) is a size α test if sup_{θ∈Θ_0} β(θ) = α.

For 0 ≤ α ≤ 1, a test with power function β(θ) is a level α test if sup_{θ∈Θ_0} β(θ) ≤ α.

The θ ∈ Θ_0 at which the sup is achieved is called the least favorable value of θ under the null, for this test. It is the value of θ ∈ Θ_0 for which the null holds, but which is most difficult to distinguish (in the sense of having the highest rejection probability) from any alternative parameter θ ∉ Θ_0.
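The closed form just derived is easy to verify by simulation. A sketch (mine; the sample size n and the grid of θ values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 10, 50_000

def power(theta):
    """Monte Carlo rejection frequency of 1(max X_i > 2) for U[0, theta] data."""
    x = rng.uniform(0.0, theta, size=(reps, n))
    return np.mean(np.max(x, axis=1) > 2.0)

for theta in (1.5, 2.0, 2.5, 3.0):
    analytic = 0.0 if theta <= 2.0 else 1.0 - (2.0 / theta) ** n
    print(f"theta = {theta}: simulated {power(theta):.4f}, analytic {analytic:.4f}")
```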

Reflecting perhaps this conservative bias, researchers often use tests of size α = 0.05 or 0.01.

Example: X_1, ..., X_n i.i.d. N(µ, 1). Then X̄_n ~ N(µ, 1/n), and Z_n(µ) ≡ √n(X̄_n − µ) ~ N(0, 1). Consider the test 1(Z_n(1) > c), for the hypotheses H_0: µ ≤ 1 vs. H_1: µ > 1. The power function is

    β(µ) = P(√n(X̄_n − 1) > c | µ) = P(√n(X̄_n − µ) > c + √n(1 − µ) | µ) = 1 − Φ(c + √n(1 − µ)),

where Φ(·) is the standard normal CDF. Note that β(µ) is increasing in µ.

Size of test = sup_{µ≤1} [1 − Φ(c + √n(1 − µ))]. Since β(µ) is increasing in µ, the supremum occurs at µ = 1, so that the size is β(1) = 1 − Φ(c).

Assume you want a test with size α. Then you want to set c such that 1 − Φ(c) = α, i.e., c = Φ^{-1}(1 − α). Graph: c is the (1 − α)-th quantile of the standard normal distribution. You can get these from the usual tables. For α = 0.025, c = 1.96. For α = 0.05, c = 1.645.
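These quantiles, and the whole power function, are one-liners in Python. A sketch (mine; n and the grid of alternatives are arbitrary):

```python
import numpy as np
from scipy.stats import norm

n = 100
for alpha in (0.025, 0.05, 0.01):
    c = norm.ppf(1 - alpha)    # the (1 - alpha) quantile of N(0, 1)
    # power beta(mu) = 1 - Phi(c + sqrt(n) (1 - mu)) at a few parameter values
    betas = {mu: 1 - norm.cdf(c + np.sqrt(n) * (1 - mu)) for mu in (0.9, 1.0, 1.2)}
    print(f"alpha = {alpha}: c = {c:.3f}, beta = {betas}")
```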

Now consider the above test, with c = 1.96, but change the hypotheses to H_0: µ = 1 vs. H_1: µ ≠ 1. The test still has size α = 0.025. But there is something intuitively wrong about this test: you are less likely to reject when µ < 1, so the Type II error is very high for the alternatives µ < 1. We wish to rule out such tests.

Definition: a test with power function β(θ) is unbiased if β(θ') ≥ β(θ'') for every pair (θ', θ'') where θ' ∈ Θ_0^c and θ'' ∈ Θ_0.

Clearly, the test above is biased for the stated hypotheses. What would be an unbiased test with the same size α = 0.025?

Definition: Let C be a class of tests for testing H_0: θ ∈ Θ_0 vs. H_1: θ ∈ Θ_0^c. A test in class C, with power function β(θ), is uniformly most powerful (UMP) in class C if, for every other test in class C with power function β̃(θ),

    β(θ) ≥ β̃(θ) for every θ ∈ Θ_0^c.

Often, the classes you consider are tests of a given size α. Graphically, the power function of a UMP test lies above the upper envelope of the power functions of all other tests in the class, for θ ∈ Θ_0^c. (Graph.)

2.2 UMP tests in a special case: two simple hypotheses

It can be difficult in general to see what form a UMP test of a given size α takes. But in the simple case where both the null and the alternative hypotheses are simple, we can appeal to the following result.

Theorem 8.3.12 (Neyman-Pearson Lemma): Consider testing H_0: θ = θ_0 vs. H_1: θ = θ_1 (so both the null and the alternative are singletons). The pdf or pmf corresponding to θ_i is f(X|θ_i), for i = 0, 1. The test has a rejection region R which satisfies

    X ∈ R if f(X|θ_1) > k f(X|θ_0)
    X ∈ R^c if f(X|θ_1) < k f(X|θ_0)    (*)

and

    α = P(X ∈ R | θ_0).    (4)

Then:
- Any test satisfying (*) and (4) is a UMP level α test.
- If there exists a test satisfying (*) and (4) with k > 0, then every UMP level α test is a size α test, and every UMP level α test satisfies (*).

In other words, for simple hypotheses, a likelihood ratio test of the sort 1(f(X|θ_1)/f(X|θ_0) > k) is UMP-size α (where k is chosen so that P(f(X|θ_1)/f(X|θ_0) > k | θ_0) = α). Note that this LR statistic is different than λ(X), which for these hypotheses is f(X|θ_0)/max[f(X|θ_0), f(X|θ_1)].

Example: return to the 2-coin-toss example again. Test H_0: p = 1/2 vs. H_1: p = 3/4. The likelihood ratios for the three possible outcomes Σ_i X_i = 0, 1, 2 are:

    f(0|p = 3/4)/f(0|p = 1/2) = 1/4
    f(1|p = 3/4)/f(1|p = 1/2) = 3/4
    f(2|p = 3/4)/f(2|p = 1/2) = 9/4.

Let l(X) = f(Σ_i X_i | p = 3/4) / f(Σ_i X_i | p = 1/2). Hence, there are 4 possible rejection regions, for values of l(X):

(i) l(X) ∈ (9/4, +∞) (size α = 0)
(ii) l(X) ∈ (3/4, +∞) (size α = 1/4)
(iii) l(X) ∈ (1/4, +∞) (size α = 3/4)
(iv) l(X) ∈ (−∞, +∞) (size α = 1).

At all other values of α, no value of k satisfies (*) exactly, and so the tests above are level-α tests. This is due to the discreteness of this example.

Example A: X_1, ..., X_n i.i.d. N(µ, 1). Test H_0: µ = µ_0 vs. H_1: µ = µ_1. (Assume µ_1 > µ_0.) By the NP Lemma, the UMP test has rejection region characterized by

    exp(−(1/2) Σ_i (X_i − µ_1)²) / exp(−(1/2) Σ_i (X_i − µ_0)²) > k.

Taking logs of both sides:

    (µ_1 − µ_0) Σ_i X_i > log k + (n/2)(µ_1² − µ_0²)
    ⟺ X̄_n ≡ (1/n) Σ_i X_i > log k / (n(µ_1 − µ_0)) + (µ_1 + µ_0)/2 ≡ d,

where d is determined such that P(X̄_n > d) = α, where α is the desired size. This makes intuitive sense: you reject the null when the sample mean is too large (because µ_1 > µ_0). Under the null, √n(X̄_n − µ_0) ~ N(0, 1), so for α = 0.05 you want to set d = µ_0 + 1.645/√n.
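A simulation sketch of Example A (mine; the values of µ_0, µ_1, and n are arbitrary choices): the rejection rule X̄_n > d should reject at rate about α under µ_0 and, being UMP, with as much power as any size-α test under µ_1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
mu0, mu1, n, alpha, reps = 0.0, 0.5, 50, 0.05, 20_000

d = mu0 + norm.ppf(1 - alpha) / np.sqrt(n)   # cutoff d = mu0 + 1.645/sqrt(n)

def reject_rate(mu):
    xbar = rng.normal(mu, 1.0 / np.sqrt(n), size=reps)  # Xbar ~ N(mu, 1/n)
    return np.mean(xbar > d)

print("size  ~", reject_rate(mu0))   # approximately alpha
print("power ~", reject_rate(mu1))   # rejection rate under the alternative
```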

2.2.1 Discussion of NP Lemma

(From Amemiya, pp. 190-192.) Rather than present a proof of the NP Lemma (you can find one in CB), let's consider some intuition for the NP test. In doing so, we will derive an interesting duality between Bayesian and classical approaches to hypothesis testing (the NP Lemma being a seminal result for the latter).

Start by considering a Bayesian approach to this testing problem. Assume that the decision-maker incurs loss γ_1 if he mistakenly chooses H_1 when H_0 is true, and γ_2 if he mistakenly chooses H_0 when H_1 is true. Then, given data observations x, he will

    Reject H_0 (= accept H_1) ⟺ γ_1 P(θ_0|x) < γ_2 P(θ_1|x)    (**)

where P(θ_0|x) denotes the posterior probability of the null hypothesis given data x. In other words, this Bayesian's rejection region R_0 is given by

    R_0 = { x : P(θ_1|x)/P(θ_0|x) > γ_1/γ_2 }.

If we multiply and divide the ratio of posterior probabilities by f(x), the (marginal) joint density of x, and use the laws of probability, we get:

    R_0 = { x : P(θ_1|x) f(x) / [P(θ_0|x) f(x)] > γ_1/γ_2 }
        = { x : L(x|θ_1) P(θ_1) / [L(x|θ_0) P(θ_0)] > γ_1/γ_2 }
        = { x : L(x|θ_1)/L(x|θ_0) > (γ_1/γ_2) · P(θ_0)/P(θ_1) ≡ c }.

In the above, P(θ_0) and P(θ_1) denote the prior probabilities for, respectively, the null and the alternative hypotheses. This provides some intuition for the likelihood-ratio form of the NP rejection region.

Next we show a duality between the Bayesian and classical approaches to finding the best test. In the case of two simple hypotheses, a best test should be such that, for a given size α, it has the smallest Type II error β (such tests are called admissible). For any testing scenario, the frontier (in (β, α) space) of tests is convex. (Why?) Both the Bayesian and the classical statistician want to choose a test that is on the frontier, but they might differ in the one they choose.

First consider the Bayesian. She wishes to employ a test, equivalently choose a rejection region R, to minimize expected loss:

    min_R φ(R) ≡ γ_1 P(θ_0 | X ∈ R) P(X ∈ R) + γ_2 P(θ_1 | X ∉ R) P(X ∉ R).

We will show that the region R_0 defined above optimizes this problem. Consider any other region R_1. Note that R_0 = (R_0 ∩ R_1) ∪ (R_0 ∩ R_1^c), and similarly for R_0^c. Then we can rewrite

    φ(R_0) = γ_1 P(θ_0 | R_0 ∩ R_1) P(R_0 ∩ R_1) + γ_1 P(θ_0 | R_0 ∩ R_1^c) P(R_0 ∩ R_1^c)
           + γ_2 P(θ_1 | R_0^c ∩ R_1) P(R_0^c ∩ R_1) + γ_2 P(θ_1 | R_0^c ∩ R_1^c) P(R_0^c ∩ R_1^c),

    φ(R_1) = γ_1 P(θ_0 | R_1 ∩ R_0) P(R_1 ∩ R_0) + γ_1 P(θ_0 | R_1 ∩ R_0^c) P(R_1 ∩ R_0^c)
           + γ_2 P(θ_1 | R_1^c ∩ R_0) P(R_1^c ∩ R_0) + γ_2 P(θ_1 | R_1^c ∩ R_0^c) P(R_1^c ∩ R_0^c).

The first and fourth terms of the two equations above are identical. From the definition of R_0 above, we know that φ(R_0)_ii < φ(R_1)_iii: that is, for all x ∈ R_0 ∩ R_1^c ⊆ R_0, we know from (**) that γ_1 P(θ_0|x) < γ_2 P(θ_1|x). Similarly, we know that φ(R_0)_iii < φ(R_1)_ii, implying that φ(R_0) < φ(R_1).

Moreover, using the laws of probability, we can re-express

    φ(R) = γ_1 P(θ_0) P(R|θ_0) + γ_2 P(θ_1) P(R^c|θ_1) ≡ η_0 α_R + η_1 β_R

with η_0 ≡ γ_1 P(θ_0) and η_1 ≡ γ_2 P(θ_1). In (α, β) space, the iso-loss curves are parallel lines with slope −η_0/η_1, decreasing towards the origin. Hence, the optimal test (with region R_0) lies at the tangency of the (α, β) frontier with a line of slope −η_0/η_1. Hence, the optimal Bayesian test is R_0.

Now consider the classical statistician. He doesn't want to specify priors P(θ_0), P(θ_1), so η_0, η_1 are not fixed. But he is willing to specify a desired size α. Hence his optimal test is the one with a rejection region characterized by the slope of the line tangent to the frontier at α in (α, β) space. This is in fact the NP test. From the above calculations, we know that this slope is pinned down by c. That is, the NP test corresponds to an optimal Bayesian test with (γ_1/γ_2) · P(θ_0)/P(θ_1) = c.

2.3 Extension of NP Lemma: models with monotone likelihood ratio

The case covered by the NP Lemma, in which both the null and the alternative hypotheses are simple, is somewhat artificial. For instance, we may be interested in the one-sided hypotheses H_0: θ ≤ θ_0 vs. H_1: θ > θ_0, where θ is scalar. It turns out a UMP test for one-sided hypotheses exists under an additional assumption on the family of density functions { f(X|θ) : θ ∈ Θ }.

Definition: the family of densities f(X|θ) has monotone likelihood ratio (MLR) in T(X) if there exists a function T(X) such that, for any pair θ' > θ, the densities f(X|θ) and f(X|θ') are distinct and the ratio f(X|θ')/f(X|θ) is a nondecreasing function of T(X). That is, there exist a nondecreasing function g(·) and a function T(X) such that f(X|θ')/f(X|θ) = g(T(X)).

For instance: let X = x (a scalar observation) and T(x) = x; then MLR in x means that f(x|θ')/f(x|θ) is nondecreasing in x. Roughly speaking: larger x's are more likely under larger θ's.

Theorem: Let θ be scalar, and let f(X|θ) have MLR in T(X). Then:

(i) For testing H_0: θ ≤ θ_0 vs. H_1: θ > θ_0, there exists a UMP level α test φ(X), which is given by

    φ(X) = 1 when T(X) > C;  = γ when T(X) = C;  = 0 when T(X) < C,

where (γ, C) are chosen to satisfy E_{θ_0} φ(X) = α.

(ii) The power function β(θ) ≡ E_θ φ(X) is strictly increasing at all points θ for which 0 < β(θ) < 1.

(iii) For all θ', the test described in part (i) is UMP for testing H_0: θ ≤ θ' vs. H_1: θ > θ' at level α' = β(θ').

(See Lehmann and Romano, Testing Statistical Hypotheses, p. 65.)

Sketch of proof: Consider testing θ_0 vs. θ_1 > θ_0. By the NP Lemma, the UMP test depends on the ratio f(X|θ_1)/f(X|θ_0). Given the MLR condition, this ratio can be written g(T(X)), where g(·) is nondecreasing. Then the UMP test rejects when f(X|θ_1)/f(X|θ_0) is large, which is when T(X) is large; this is the test φ(X). Since this test does not depend on θ_1, φ(X) is also UMP-size α for testing H_0: θ = θ_0 vs. H_1: θ > θ_0 (composite alternative). Now since MLR holds for all (θ', θ'') with θ'' > θ', the test φ(X) is also UMP-size E_{θ'} φ(X) for testing θ' vs. θ''. Hence β(θ'') ≥ β(θ'), otherwise φ(X) could not be UMP. (Why? Consider the purely random test which rejects with probability α regardless of the data.) Furthermore, the distinctness of f(X|θ') and f(X|θ'') rules out β(θ') = β(θ''). Hence we get (ii). Then, since the power function is monotonically increasing in θ, the UMP-size α feature of φ(X) for testing H_0: θ = θ_0 vs. H_1: θ > θ_0 extends to the composite null hypothesis H_0: θ ≤ θ_0 vs. H_1: θ > θ_0, which is (i). (iii) follows immediately.

Example A cont'd: From above, we see that for µ_1 > µ_0, the likelihood ratio simplifies to

    exp( (µ_1 − µ_0) Σ_i X_i − (n/2)(µ_1² − µ_0²) ),

which is increasing in X̄_n.

Hence, this satisfies MLR with T(X) = X̄_n. Using the theorem above, the one-sided t-test which rejects when X̄_n > µ_0 + 1.645/√n is therefore UMP at size α = 0.05 for the one-sided hypotheses H_0: µ ≤ µ_0 vs. H_1: µ > µ_0. Call this Test 1.

Taking this example further, for the one-sided hypotheses H_0: µ ≥ µ_0 vs. H_1: µ < µ_0, the one-sided t-test which rejects when X̄_n < µ_0 − 1.645/√n is UMP at size α = 0.05. Call this Test 2.

Now consider testing

    H_0: µ = µ_0 vs. H_1: µ ≠ µ_0.    (⋆)

The alternative hypothesis is equivalent to µ < µ_0 or µ > µ_0. Can we find a UMP size-α test? Note that both Test 1 and Test 2 are size-α tests for the hypotheses (⋆). So we consider each in turn. For any alternative point µ' > µ_0, Test 1 is UMP, implying that β_1(µ') is maximal among all size-α tests. For µ' < µ_0, however, Test 2 is UMP, implying that β_2(µ') is maximal. Furthermore, for µ' > µ_0 we have β_2(µ') < α < β_1(µ') by part (ii) of the theorem above, so neither test can be uniformly most powerful for all µ ≠ µ_0. And indeed there is no UMP size-α test for problem (⋆).

Note, moreover, that both Test 1 and Test 2 are biased for the hypotheses (⋆). It turns out that the two-sided t-test which rejects when X̄_n > µ_0 + 1.96/√n or X̄_n < µ_0 − 1.96/√n is UMP among size-0.05 unbiased tests. See the discussion in CB.
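The bias of the one-sided tests, and the compromise made by the two-sided test, are visible in their exact power functions, which for N(µ, 1) data have closed forms. A sketch (mine; µ_0 = 0 and n = 100 are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm

mu0, n = 0.0, 100
c1 = norm.ppf(0.95)    # 1.645: one-sided size-0.05 cutoff
c2 = norm.ppf(0.975)   # 1.960: two-sided size-0.05 cutoff

def beta1(mu):         # Test 1: reject when Xbar > mu0 + c1/sqrt(n)
    return 1 - norm.cdf(c1 - np.sqrt(n) * (mu - mu0))

def beta2(mu):         # Test 2: reject when Xbar < mu0 - c1/sqrt(n)
    return norm.cdf(-c1 - np.sqrt(n) * (mu - mu0))

def beta_2sided(mu):   # two-sided test with cutoff c2 on each side
    z = np.sqrt(n) * (mu - mu0)
    return (1 - norm.cdf(c2 - z)) + norm.cdf(-c2 - z)

for mu in (-0.3, -0.1, 0.0, 0.1, 0.3):
    print(mu, round(beta1(mu), 3), round(beta2(mu), 3), round(beta_2sided(mu), 3))
```

The output shows Test 1's power falling below α = 0.05 for µ < µ_0 (the bias), while the two-sided test gives up a little power on each side in exchange for unbiasedness.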

3 Large-sample properties of tests

In practice, we use large-sample theory (that is, LLNs and CLTs) in order to determine the approximate critical regions for the most common test statistics. Why? Because finite-sample properties can be difficult to determine.

Example: X_1, ..., X_n i.i.d. Bernoulli with success probability p. We want to test H_0: p ≤ 1/2 vs. H_1: p > 1/2, using the test statistic X̄_n = (1/n) Σ_i X_i; n is finite. The exact finite-sample distribution of X̄_n is the distribution of 1/n times a B(n, p) random variable, which is:

    X̄_n = 0 with prob (n choose 0) (1 − p)^n
        = 1/n with prob (n choose 1) p (1 − p)^{n−1}
        = 2/n with prob (n choose 2) p² (1 − p)^{n−2}
        ...
        = 1 with prob p^n.

Assume your test is of the following form: 1(X̄_n > c), where the critical value c is to be determined such that the size sup_{p≤1/2} P(X̄_n > c | p) = α, for some specified α. This equation is difficult to solve for c! (Indeed, the Clopper-Pearson (1934) confidence intervals for p are based on inverting this exact finite-sample test.)

On the other hand, by the CLT, we know that √n(X̄_n − p)/√(p(1 − p)) →d N(0, 1). Hence, consider the t-test statistic

    Z_n ≡ √n (X̄_n − 1/2) / √(1/4) = 2√n (X̄_n − 1/2).

Under any p in the null hypothesis, P(Z_n > Φ^{-1}(1 − α)) ≤ α for n large enough. (In fact, this holds with equality for p = 1/2, and with strict inequality for p < 1/2.)

Corresponding to this test, you can derive the asymptotic power function β_a(p) ≡ lim_{n→∞} P(Z_n > c), for c = Φ^{-1}(1 − α). (Graph)
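The asymptotic critical value is only approximately size-correct in finite samples, and here the exact size is easy to compute from the binomial distribution. A sketch (mine; n = 100 is an arbitrary choice):

```python
import numpy as np
from scipy.stats import binom, norm

n, alpha, p0 = 100, 0.05, 0.5
c = norm.ppf(1 - alpha)

# Z_n > c is equivalent to the count S = n * Xbar exceeding n/2 + c*sqrt(n)/2.
cutoff = n / 2 + c * np.sqrt(n) / 2
exact_size = 1 - binom.cdf(np.floor(cutoff), n, p0)  # exact P(S > cutoff | p = 1/2)
print("nominal size:", alpha, "  exact size:", float(exact_size))
```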

Note that the asymptotic power function is equal to 1 at all values under the alternative. This is the notion of consistency for a test: that it has asymptotic power 1 under every alternative. Note also that the asymptotic power (rejection probability) is zero under every p of the null, except p = 1/2.

(skip) Accordingly, we see that asymptotic power against fixed alternatives is not a sufficiently discerning criterion for distinguishing between tests. We can deal with this by considering local alternatives of the sort p = 1/2 + h/√n. Under additional smoothness assumptions on the distributional convergence of √n(X̄_n − p)/√(p(1 − p)) around p = 1/2, we can obtain asymptotic power functions under these local alternatives.

3.1 Likelihood Ratio Test Statistic: asymptotic distribution

Theorem 10.3.1 (Wilks' Theorem): For testing H_0: θ = θ_0 vs. H_1: θ ≠ θ_0, if X_1, ..., X_n are i.i.d. f(x|θ), and f(x|θ) satisfies the regularity conditions in CB section 10.6.2, then under H_0, as n → ∞:

    −2 log λ(X) →d χ²_1.

Note: χ²_1 denotes a random variable from the chi-squared distribution with 1 degree of freedom. By Lemma 5.3.2 in CB, if Z ~ N(0, 1), then Z² ~ χ²_1. Clearly, χ²_1 random variables only have positive support.

Proof: Assume the null holds. Use a Taylor-series expansion of the log-likelihood function around the MLE θ̂_n:

    Σ_i log f(X_i|θ_0) = Σ_i log f(X_i|θ̂_n) + Σ_i ∂ log f(X_i|θ)/∂θ |_{θ=θ̂_n} (θ_0 − θ̂_n)
                        + (1/2) Σ_i ∂² log f(X_i|θ)/∂θ² |_{θ=θ̂_n} (θ_0 − θ̂_n)² + ...
                       = Σ_i log f(X_i|θ̂_n) + (1/2) Σ_i ∂² log f(X_i|θ)/∂θ² |_{θ=θ̂_n} (θ_0 − θ̂_n)² + ...    (5)

where (i) the second term disappeared because the MLE θ̂_n sets the first-order condition Σ_i ∂ log f(X_i|θ)/∂θ |_{θ=θ̂_n} = 0; and (ii) the remainder term is o_p(1). This is a second-order Taylor expansion.

Rewrite the above as:

    −2 log λ(X) = −2 Σ_i [ log f(X_i|θ_0) − log f(X_i|θ̂_n) ]
                = −[ (1/n) Σ_i ∂² log f(X_i|θ)/∂θ² |_{θ=θ̂_n} ] [ √n (θ_0 − θ̂_n) ]² + o_p(1).

Now (1/n) Σ_i ∂² log f(X_i|θ)/∂θ² |_{θ=θ̂_n} →p E_{θ_0} ∂² log f(X|θ)/∂θ² |_{θ=θ_0} = −1/V_0(θ_0), where V_0(θ_0) denotes the asymptotic variance of the MLE estimator (and is equal to the familiar information bound 1/I(θ_0)). Finally, we note that √n(θ̂_n − θ_0)/√(V_0(θ_0)) →d N(0, 1). Hence, by the definition of the χ²_1 random variable,

    −2 log λ(X) →d χ²_1.

In the multivariate case (θ being k-dimensional), the above says that

    −2 log λ(X) a= n (θ_0 − θ_n)' I(θ_0) (θ_0 − θ_n) →d χ²_k.

Hence, the LR statistic is asymptotically equivalent to the Wald and Score tests.

Example: X_1, ..., X_n i.i.d. Bernoulli with success probability p. Test H_0: p = 1/2 vs. H_1: p ≠ 1/2. Let y_n denote the number of 1's. Then

    λ(X) = [ (n choose y_n) (1/2)^{y_n} (1/2)^{n−y_n} ] / [ (n choose y_n) (y_n/n)^{y_n} (1 − y_n/n)^{n−y_n} ].

For a test with asymptotic size α:

    α = P(λ(X) ≤ c)   (for c < 1)
      = P(−2 log λ(X) ≥ −2 log c)
      = P(χ²_1 ≥ −2 log c) = 1 − F_{χ²_1}(−2 log c)
    ⟺ c = exp( −(1/2) F_{χ²_1}^{-1}(1 − α) ).

For instance, for α = 0.05, F_{χ²_1}^{-1}(1 − α) = 3.84; for α = 0.10, F_{χ²_1}^{-1}(1 − α) = 2.706.
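As a closing check (my own sketch), this example can be simulated: computing −2 log λ(X) for many Bernoulli(1/2) samples and rejecting beyond the χ²_1 quantile should give a rejection rate near the nominal α.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)
n, p0, alpha, reps = 200, 0.5, 0.05, 20_000
crit = chi2.ppf(1 - alpha, df=1)              # 3.84 for alpha = 0.05

def neg2loglam(y, n):
    """-2 log lambda for H0: p = 1/2 in the Bernoulli model (y = number of 1's)."""
    if y in (0, n):                           # unrestricted log-likelihood is 0 here
        return -2 * n * np.log(p0)
    phat = y / n
    ll0 = y * np.log(p0) + (n - y) * np.log(1 - p0)       # restricted
    ll1 = y * np.log(phat) + (n - y) * np.log(1 - phat)   # unrestricted
    return -2 * (ll0 - ll1)

y = rng.binomial(n, p0, size=reps)            # counts generated under H0
rej = np.mean([neg2loglam(yi, n) > crit for yi in y])
print("rejection rate under H0 ~", rej)       # close to alpha
```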

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata Maura Department of Economics and Finance Università Tor Vergata Hypothesis Testing Outline It is a mistake to confound strangeness with mystery Sherlock Holmes A Study in Scarlet Outline 1 The Power Function

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random

More information

Summary of Chapters 7-9

Summary of Chapters 7-9 Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two

More information

LECTURE 10: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING. The last equality is provided so this can look like a more familiar parametric test.

LECTURE 10: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING. The last equality is provided so this can look like a more familiar parametric test. Economics 52 Econometrics Professor N.M. Kiefer LECTURE 1: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING NEYMAN-PEARSON LEMMA: Lesson: Good tests are based on the likelihood ratio. The proof is easy in the

More information

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1,

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1, Economics 520 Lecture Note 9: Hypothesis Testing via the Neyman-Pearson Lemma CB 8., 8.3.-8.3.3 Uniformly Most Powerful Tests and the Neyman-Pearson Lemma Let s return to the hypothesis testing problem

More information

STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots. March 8, 2015

STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots. March 8, 2015 STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots March 8, 2015 The duality between CI and hypothesis testing The duality between CI and hypothesis

More information

Definition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution.

Definition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution. Hypothesis Testing Definition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution. Suppose the family of population distributions is indexed

More information

Hypothesis Testing: The Generalized Likelihood Ratio Test

Hypothesis Testing: The Generalized Likelihood Ratio Test Hypothesis Testing: The Generalized Likelihood Ratio Test Consider testing the hypotheses H 0 : θ Θ 0 H 1 : θ Θ \ Θ 0 Definition: The Generalized Likelihood Ratio (GLR Let L(θ be a likelihood for a random

More information

Ch. 5 Hypothesis Testing

Ch. 5 Hypothesis Testing Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,

More information

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology Singapore University of Design and Technology Lecture 9: Hypothesis testing, uniformly most powerful tests. The Neyman-Pearson framework Let P be the family of distributions of concern. The Neyman-Pearson

More information

simple if it completely specifies the density of x

simple if it completely specifies the density of x 3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely

More information

Chapter 4. Theory of Tests. 4.1 Introduction

Chapter 4. Theory of Tests. 4.1 Introduction Chapter 4 Theory of Tests 4.1 Introduction Parametric model: (X, B X, P θ ), P θ P = {P θ θ Θ} where Θ = H 0 +H 1 X = K +A : K: critical region = rejection region / A: acceptance region A decision rule

More information

Chapter 7. Hypothesis Testing

Chapter 7. Hypothesis Testing Chapter 7. Hypothesis Testing Joonpyo Kim June 24, 2017 Joonpyo Kim Ch7 June 24, 2017 1 / 63 Basic Concepts of Testing Suppose that our interest centers on a random variable X which has density function

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics

More information

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided Let us first identify some classes of hypotheses. simple versus simple H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided H 0 : θ θ 0 versus H 1 : θ > θ 0. (2) two-sided; null on extremes H 0 : θ θ 1 or

More information

Hypothesis Testing. A rule for making the required choice can be described in two ways: called the rejection or critical region of the test.

Hypothesis Testing. A rule for making the required choice can be described in two ways: called the rejection or critical region of the test. Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two hypotheses:

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

DA Freedman Notes on the MLE Fall 2003

DA Freedman Notes on the MLE Fall 2003 DA Freedman Notes on the MLE Fall 2003 The object here is to provide a sketch of the theory of the MLE. Rigorous presentations can be found in the references cited below. Calculus. Let f be a smooth, scalar

More information

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes Neyman-Pearson paradigm. Suppose that a researcher is interested in whether the new drug works. The process of determining whether the outcome of the experiment points to yes or no is called hypothesis

More information

Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses.

Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses. Stat 300A Theory of Statistics Homework 7: Solutions Nikos Ignatiadis Due on November 28, 208 Solutions should be complete and concisely written. Please, use a separate sheet or set of sheets for each

More information

TUTORIAL 8 SOLUTIONS #

TUTORIAL 8 SOLUTIONS # TUTORIAL 8 SOLUTIONS #9.11.21 Suppose that a single observation X is taken from a uniform density on [0,θ], and consider testing H 0 : θ = 1 versus H 1 : θ =2. (a) Find a test that has significance level

More information

BTRY 4090: Spring 2009 Theory of Statistics

BTRY 4090: Spring 2009 Theory of Statistics BTRY 4090: Spring 2009 Theory of Statistics Guozhang Wang September 25, 2010 1 Review of Probability We begin with a real example of using probability to solve computationally intensive (or infeasible)

More information

Composite Hypotheses and Generalized Likelihood Ratio Tests

Composite Hypotheses and Generalized Likelihood Ratio Tests Composite Hypotheses and Generalized Likelihood Ratio Tests Rebecca Willett, 06 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve

More information

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout

More information

Introduction Large Sample Testing Composite Hypotheses. Hypothesis Testing. Daniel Schmierer Econ 312. March 30, 2007

Introduction Large Sample Testing Composite Hypotheses. Hypothesis Testing. Daniel Schmierer Econ 312. March 30, 2007 Hypothesis Testing Daniel Schmierer Econ 312 March 30, 2007 Basics Parameter of interest: θ Θ Structure of the test: H 0 : θ Θ 0 H 1 : θ Θ 1 for some sets Θ 0, Θ 1 Θ where Θ 0 Θ 1 = (often Θ 1 = Θ Θ 0

More information

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear

More information

Methods for Statistical Prediction Financial Time Series I. Topic 1: Review on Hypothesis Testing

Methods for Statistical Prediction Financial Time Series I. Topic 1: Review on Hypothesis Testing Methods for Statistical Prediction Financial Time Series I Topic 1: Review on Hypothesis Testing Hung Chen Department of Mathematics National Taiwan University 9/26/2002 OUTLINE 1. Fundamental Concepts

More information

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)

EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) 1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For

More information

Final Exam. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given.

Final Exam. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given. (a) If X and Y are independent, Corr(X, Y ) = 0. (b) (c) (d) (e) A consistent estimator must be asymptotically

More information

ECE531 Lecture 10b: Maximum Likelihood Estimation

ECE531 Lecture 10b: Maximum Likelihood Estimation ECE531 Lecture 10b: Maximum Likelihood Estimation D. Richard Brown III Worcester Polytechnic Institute 05-Apr-2011 Worcester Polytechnic Institute D. Richard Brown III 05-Apr-2011 1 / 23 Introduction So

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

Chapter 3. Point Estimation. 3.1 Introduction

Chapter 3. Point Estimation. 3.1 Introduction Chapter 3 Point Estimation Let (Ω, A, P θ ), P θ P = {P θ θ Θ}be probability space, X 1, X 2,..., X n : (Ω, A) (IR k, B k ) random variables (X, B X ) sample space γ : Θ IR k measurable function, i.e.

More information

F79SM STATISTICAL METHODS

F79SM STATISTICAL METHODS F79SM STATISTICAL METHODS SUMMARY NOTES 9 Hypothesis testing 9.1 Introduction As before we have a random sample x of size n of a population r.v. X with pdf/pf f(x;θ). The distribution we assign to X is

More information

Maximum-Likelihood Estimation: Basic Ideas

Maximum-Likelihood Estimation: Basic Ideas Sociology 740 John Fox Lecture Notes Maximum-Likelihood Estimation: Basic Ideas Copyright 2014 by John Fox Maximum-Likelihood Estimation: Basic Ideas 1 I The method of maximum likelihood provides estimators

More information

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n Chapter 9 Hypothesis Testing 9.1 Wald, Rao, and Likelihood Ratio Tests Suppose we wish to test H 0 : θ = θ 0 against H 1 : θ θ 0. The likelihood-based results of Chapter 8 give rise to several possible

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)

More information

Some General Types of Tests

Some General Types of Tests Some General Types of Tests We may not be able to find a UMP or UMPU test in a given situation. In that case, we may use test of some general class of tests that often have good asymptotic properties.

More information

Lecture 21. Hypothesis Testing II

Lecture 21. Hypothesis Testing II Lecture 21. Hypothesis Testing II December 7, 2011 In the previous lecture, we dened a few key concepts of hypothesis testing and introduced the framework for parametric hypothesis testing. In the parametric

More information

Chapters 10. Hypothesis Testing

Chapters 10. Hypothesis Testing Chapters 10. Hypothesis Testing Some examples of hypothesis testing 1. Toss a coin 100 times and get 62 heads. Is this coin a fair coin? 2. Is the new treatment on blood pressure more effective than the

More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two

More information

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants 18.650 Statistics for Applications Chapter 5: Parametric hypothesis testing 1/37 Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009

More information

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it

More information

P n. This is called the law of large numbers but it comes in two forms: Strong and Weak.

P n. This is called the law of large numbers but it comes in two forms: Strong and Weak. Large Sample Theory Large Sample Theory is a name given to the search for approximations to the behaviour of statistical procedures which are derived by computing limits as the sample size, n, tends to

More information

Hypothesis testing: theory and methods

Hypothesis testing: theory and methods Statistical Methods Warsaw School of Economics November 3, 2017 Statistical hypothesis is the name of any conjecture about unknown parameters of a population distribution. The hypothesis should be verifiable

More information

STAT 512 sp 2018 Summary Sheet

STAT 512 sp 2018 Summary Sheet STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}

More information

Topic 10: Hypothesis Testing

Topic 10: Hypothesis Testing Topic 10: Hypothesis Testing Course 003, 2016 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Hypothesis Testing - Frequentist

Hypothesis Testing - Frequentist Frequentist Hypothesis Testing - Frequentist Compare two hypotheses to see which one better explains the data. Or, alternatively, what is the best way to separate events into two classes, those originating

More information

Master s Written Examination - Solution

Master s Written Examination - Solution Master s Written Examination - Solution Spring 204 Problem Stat 40 Suppose X and X 2 have the joint pdf f X,X 2 (x, x 2 ) = 2e (x +x 2 ), 0 < x < x 2

More information

STA 732: Inference. Notes 2. Neyman-Pearsonian Classical Hypothesis Testing B&D 4

STA 732: Inference. Notes 2. Neyman-Pearsonian Classical Hypothesis Testing B&D 4 STA 73: Inference Notes. Neyman-Pearsonian Classical Hypothesis Testing B&D 4 1 Testing as a rule Fisher s quantification of extremeness of observed evidence clearly lacked rigorous mathematical interpretation.

More information

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score

More information

The University of Hong Kong Department of Statistics and Actuarial Science STAT2802 Statistical Models Tutorial Solutions Solutions to Problems 71-80

The University of Hong Kong Department of Statistics and Actuarial Science STAT2802 Statistical Models Tutorial Solutions Solutions to Problems 71-80 The University of Hong Kong Department of Statistics and Actuarial Science STAT2802 Statistical Models Tutorial Solutions Solutions to Problems 71-80 71. Decide in each case whether the hypothesis is simple

More information

Lecture 17: Likelihood ratio and asymptotic tests

Lecture 17: Likelihood ratio and asymptotic tests Lecture 17: Likelihood ratio and asymptotic tests Likelihood ratio When both H 0 and H 1 are simple (i.e., Θ 0 = {θ 0 } and Θ 1 = {θ 1 }), Theorem 6.1 applies and a UMP test rejects H 0 when f θ1 (X) f

More information

Chapters 10. Hypothesis Testing

Chapters 10. Hypothesis Testing Chapters 10. Hypothesis Testing Some examples of hypothesis testing 1. Toss a coin 100 times and get 62 heads. Is this coin a fair coin? 2. Is the new treatment more effective than the old one? 3. Quality

More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Richard Lockhart Simon Fraser University STAT 830 Fall 2018 Richard Lockhart (Simon Fraser University) STAT 830 Hypothesis Testing STAT 830 Fall 2018 1 / 30 Purposes of These

More information

Math 152. Rumbos Fall Solutions to Assignment #12

Math 152. Rumbos Fall Solutions to Assignment #12 Math 52. umbos Fall 2009 Solutions to Assignment #2. Suppose that you observe n iid Bernoulli(p) random variables, denoted by X, X 2,..., X n. Find the LT rejection region for the test of H o : p p o versus

More information

14.30 Introduction to Statistical Methods in Economics Spring 2009

14.30 Introduction to Statistical Methods in Economics Spring 2009 MIT OpenCourseWare http://ocw.mit.edu 4.0 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

ECE531 Lecture 4b: Composite Hypothesis Testing

ECE531 Lecture 4b: Composite Hypothesis Testing ECE531 Lecture 4b: Composite Hypothesis Testing D. Richard Brown III Worcester Polytechnic Institute 16-February-2011 Worcester Polytechnic Institute D. Richard Brown III 16-February-2011 1 / 44 Introduction

More information

Topic 15: Simple Hypotheses

Topic 15: Simple Hypotheses Topic 15: November 10, 2009 In the simplest set-up for a statistical hypothesis, we consider two values θ 0, θ 1 in the parameter space. We write the test as H 0 : θ = θ 0 versus H 1 : θ = θ 1. H 0 is

More information

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

LECTURE NOTES 57. Lecture 9

LECTURE NOTES 57. Lecture 9 LECTURE NOTES 57 Lecture 9 17. Hypothesis testing A special type of decision problem is hypothesis testing. We partition the parameter space into H [ A with H \ A = ;. Wewrite H 2 H A 2 A. A decision problem

More information

Lecture Testing Hypotheses: The Neyman-Pearson Paradigm

Lecture Testing Hypotheses: The Neyman-Pearson Paradigm Math 408 - Mathematical Statistics Lecture 29-30. Testing Hypotheses: The Neyman-Pearson Paradigm April 12-15, 2013 Konstantin Zuev (USC) Math 408, Lecture 29-30 April 12-15, 2013 1 / 12 Agenda Example:

More information

STA 732: Inference. Notes 10. Parameter Estimation from a Decision Theoretic Angle. Other resources

STA 732: Inference. Notes 10. Parameter Estimation from a Decision Theoretic Angle. Other resources STA 732: Inference Notes 10. Parameter Estimation from a Decision Theoretic Angle Other resources 1 Statistical rules, loss and risk We saw that a major focus of classical statistics is comparing various

More information

Lecture 32: Asymptotic confidence sets and likelihoods

Lecture 32: Asymptotic confidence sets and likelihoods Lecture 32: Asymptotic confidence sets and likelihoods Asymptotic criterion In some problems, especially in nonparametric problems, it is difficult to find a reasonable confidence set with a given confidence

More information

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper McGill University Faculty of Science Department of Mathematics and Statistics Part A Examination Statistics: Theory Paper Date: 10th May 2015 Instructions Time: 1pm-5pm Answer only two questions from Section

More information

Theory of Maximum Likelihood Estimation. Konstantin Kashin

Theory of Maximum Likelihood Estimation. Konstantin Kashin Gov 2001 Section 5: Theory of Maximum Likelihood Estimation Konstantin Kashin February 28, 2013 Outline Introduction Likelihood Examples of MLE Variance of MLE Asymptotic Properties What is Statistical

More information

Asymptotic Tests and Likelihood Ratio Tests

Asymptotic Tests and Likelihood Ratio Tests Asymptotic Tests and Likelihood Ratio Tests Dennis D. Cox Department of Statistics Rice University P. O. Box 1892 Houston, Texas 77251 Email: dcox@stat.rice.edu November 21, 2004 0 1 Chapter 6, Section

More information

LECTURE 7 NOTES. x n. d x if. E [g(x n )] E [g(x)]

LECTURE 7 NOTES. x n. d x if. E [g(x n )] E [g(x)] LECTURE 7 NOTES 1. Convergence of random variables. Before delving into the large samle roerties of the MLE, we review some concets from large samle theory. 1. Convergence in robability: x n x if, for

More information

Testing Statistical Hypotheses

Testing Statistical Hypotheses E.L. Lehmann Joseph P. Romano Testing Statistical Hypotheses Third Edition 4y Springer Preface vii I Small-Sample Theory 1 1 The General Decision Problem 3 1.1 Statistical Inference and Statistical Decisions

More information

Probability Theory and Statistics. Peter Jochumzen

Probability Theory and Statistics. Peter Jochumzen Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................

More information

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire

More information

Economics 583: Econometric Theory I A Primer on Asymptotics

Economics 583: Econometric Theory I A Primer on Asymptotics Economics 583: Econometric Theory I A Primer on Asymptotics Eric Zivot January 14, 2013 The two main concepts in asymptotic theory that we will use are Consistency Asymptotic Normality Intuition consistency:

More information

14.30 Introduction to Statistical Methods in Economics Spring 2009

14.30 Introduction to Statistical Methods in Economics Spring 2009 MIT OpenCourseWare http://ocw.mit.edu.30 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. .30

More information

Statistical Data Analysis Stat 3: p-values, parameter estimation

Statistical Data Analysis Stat 3: p-values, parameter estimation Statistical Data Analysis Stat 3: p-values, parameter estimation London Postgraduate Lectures on Particle Physics; University of London MSci course PH4515 Glen Cowan Physics Department Royal Holloway,

More information

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it

More information

2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)?

2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)? ECE 830 / CS 76 Spring 06 Instructors: R. Willett & R. Nowak Lecture 3: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Lecture 10: Generalized likelihood ratio test

Lecture 10: Generalized likelihood ratio test Stat 200: Introduction to Statistical Inference Autumn 2018/19 Lecture 10: Generalized likelihood ratio test Lecturer: Art B. Owen October 25 Disclaimer: These notes have not been subjected to the usual

More information

Primer on statistics:

Primer on statistics: Primer on statistics: MLE, Confidence Intervals, and Hypothesis Testing ryan.reece@gmail.com http://rreece.github.io/ Insight Data Science - AI Fellows Workshop Feb 16, 018 Outline 1. Maximum likelihood

More information

Statistics and econometrics

Statistics and econometrics 1 / 36 Slides for the course Statistics and econometrics Part 10: Asymptotic hypothesis testing European University Institute Andrea Ichino September 8, 2014 2 / 36 Outline Why do we need large sample

More information

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018 Mathematics Ph.D. Qualifying Examination Stat 52800 Probability, January 2018 NOTE: Answers all questions completely. Justify every step. Time allowed: 3 hours. 1. Let X 1,..., X n be a random sample from

More information

Wooldridge, Introductory Econometrics, 4th ed. Appendix C: Fundamentals of mathematical statistics

Wooldridge, Introductory Econometrics, 4th ed. Appendix C: Fundamentals of mathematical statistics Wooldridge, Introductory Econometrics, 4th ed. Appendix C: Fundamentals of mathematical statistics A short review of the principles of mathematical statistics (or, what you should have learned in EC 151).

More information

Lecture 21: October 19

Lecture 21: October 19 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 21: October 19 21.1 Likelihood Ratio Test (LRT) To test composite versus composite hypotheses the general method is to use

More information

4 Hypothesis testing. 4.1 Types of hypothesis and types of error 4 HYPOTHESIS TESTING 49

4 Hypothesis testing. 4.1 Types of hypothesis and types of error 4 HYPOTHESIS TESTING 49 4 HYPOTHESIS TESTING 49 4 Hypothesis testing In sections 2 and 3 we considered the problem of estimating a single parameter of interest, θ. In this section we consider the related problem of testing whether

More information

Chapter 9: Hypothesis Testing Sections

Chapter 9: Hypothesis Testing Sections Chapter 9: Hypothesis Testing Sections 9.1 Problems of Testing Hypotheses 9.2 Testing Simple Hypotheses 9.3 Uniformly Most Powerful Tests Skip: 9.4 Two-Sided Alternatives 9.6 Comparing the Means of Two

More information

Stochastic Convergence, Delta Method & Moment Estimators

Stochastic Convergence, Delta Method & Moment Estimators Stochastic Convergence, Delta Method & Moment Estimators Seminar on Asymptotic Statistics Daniel Hoffmann University of Kaiserslautern Department of Mathematics February 13, 2015 Daniel Hoffmann (TU KL)

More information

BEST TESTS. Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized.

BEST TESTS. Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized. BEST TESTS Abstract. We will discuss the Neymann-Pearson theorem and certain best test where the power function is optimized. 1. Most powerful test Let {f θ } θ Θ be a family of pdfs. We will consider

More information

Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics. 1 Executive summary

Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics. 1 Executive summary ECE 830 Spring 207 Instructor: R. Willett Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we saw that the likelihood

More information

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Review Quiz 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Cramér Rao lower bound (CRLB). That is, if where { } and are scalars, then

More information

Lecture 12 November 3

Lecture 12 November 3 STATS 300A: Theory of Statistics Fall 2015 Lecture 12 November 3 Lecturer: Lester Mackey Scribe: Jae Hyuck Park, Christian Fong Warning: These notes may contain factual and/or typographic errors. 12.1

More information

Greene, Econometric Analysis (6th ed, 2008)

Greene, Econometric Analysis (6th ed, 2008) EC771: Econometrics, Spring 2010 Greene, Econometric Analysis (6th ed, 2008) Chapter 17: Maximum Likelihood Estimation The preferred estimator in a wide variety of econometric settings is that derived

More information

Lecture notes on statistical decision theory Econ 2110, fall 2013

Lecture notes on statistical decision theory Econ 2110, fall 2013 Lecture notes on statistical decision theory Econ 2110, fall 2013 Maximilian Kasy March 10, 2014 These lecture notes are roughly based on Robert, C. (2007). The Bayesian choice: from decision-theoretic

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Hypothesis Testing (May 30, 2016)

Hypothesis Testing (May 30, 2016) Ch. 5 Hypothesis Testing (May 30, 2016) 1 Introduction Inference, so far as we have seen, often take the form of numerical estimates, either as single points as confidence intervals. But not always. In

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

10-704: Information Processing and Learning Fall Lecture 24: Dec 7

10-704: Information Processing and Learning Fall Lecture 24: Dec 7 0-704: Information Processing and Learning Fall 206 Lecturer: Aarti Singh Lecture 24: Dec 7 Note: These notes are based on scribed notes from Spring5 offering of this course. LaTeX template courtesy of

More information

Introduction to Estimation Methods for Time Series models Lecture 2

Introduction to Estimation Methods for Time Series models Lecture 2 Introduction to Estimation Methods for Time Series models Lecture 2 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 2 SNS Pisa 1 / 21 Estimators:

More information

Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1)

Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Detection problems can usually be casted as binary or M-ary hypothesis testing problems. Applications: This chapter: Simple hypothesis

More information

Political Science 236 Hypothesis Testing: Review and Bootstrapping

Political Science 236 Hypothesis Testing: Review and Bootstrapping Political Science 236 Hypothesis Testing: Review and Bootstrapping Rocío Titiunik Fall 2007 1 Hypothesis Testing Definition 1.1 Hypothesis. A hypothesis is a statement about a population parameter The

More information