Chapter 7. Hypothesis Testing


1 Chapter 7. Hypothesis Testing. Joonpyo Kim, June 24, 2017.

2 Basic Concepts of Testing. Suppose that our interest centers on a random variable $X$ which has density function $f(x; \theta)$, where $\theta \in \Omega$. Suppose we think that $\theta \in \Omega_0$ or $\theta \in \Omega_1$, where $\Omega_0$ and $\Omega_1$ are disjoint subsets of $\Omega$ with $\Omega_0 \cup \Omega_1 = \Omega$. We label the hypotheses as
$$H_0 : \theta \in \Omega_0 \quad \text{vs} \quad H_1 : \theta \in \Omega_1.$$

3 Basic Concepts of Testing. The decision rule to take $H_0$ or $H_1$ is based on a sample $X_1, \dots, X_n$ from the distribution of $X$; hence the decision could be wrong. The table below displays the possible situations (columns give the true state of nature).

Decision       | $H_0$ is True    | $H_1$ is True
Reject $H_0$   | Type I error     | Correct decision
Accept $H_0$   | Correct decision | Type II error

Table: Hypothesis testing and decision errors.

4 Basic Concepts of Testing. A test of $H_0$ versus $H_1$ is based on a subset $C$ of the sample space. The set $C$ is called the rejection region, and its corresponding decision rule is:

Reject $H_0$ if $(X_1, \dots, X_n) \in C$;
Retain $H_0$ if $(X_1, \dots, X_n) \notin C$.

A type I error occurs if $H_0$ is rejected when it is true, while a type II error occurs if $H_0$ is accepted when $H_1$ is true. We would like a rejection region that makes both error probabilities small, but they have a trade-off relationship: shrinking one typically inflates the other. The standard resolution is to treat the type I error as the worse of the two, bound its probability, and then seek small type II error probability.

5 Basic Concepts of Testing. Definition. We say a rejection region $C$ is of size $\alpha$ if
$$\alpha = \max_{\theta \in \Omega_0} P_\theta(\mathbf{X} \in C),$$
where $\mathbf{X} = (X_1, \dots, X_n)^t$. Over all rejection regions of size $\alpha$, we want to consider those with lower probabilities of type II error.

6 Basic Concepts of Testing. Definition. The probability
$$1 - P(\text{type II error}) = P_\theta(\mathbf{X} \in C)$$
is called the power of the test at $\theta$. It is the probability that the test detects the alternative when $\theta \in \Omega_1$, so minimizing the probability of type II error is equivalent to maximizing power. Further, we define the power function of a rejection region to be
$$\gamma_C(\theta) = P_\theta(\mathbf{X} \in C), \quad \theta \in \Omega_1.$$

7 Basic Concepts of Testing. Example. Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma^2)$, with $\sigma^2$ known. Suppose we want to test the hypotheses
$$H_0 : \mu = 0 \quad \text{vs} \quad H_1 : \mu > 0$$
with size $\alpha$ ($0 < \alpha < 1$). Here we may use $\bar{X}$. A large $\bar{X}$ supports $H_1$, so if $\bar{X}$ is sufficiently large we reject $H_0$ and accept $H_1$. Hence $\{\bar{X} \geq c\}$ is a reasonable rejection region. Setting $P_{\mu=0}(\bar{X} \geq c) = \alpha$ gives $c = z_\alpha \sigma / \sqrt{n}$.
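A quick numerical sketch of this cutoff (hypothetical values $\sigma = 2$, $n = 25$, $\alpha = 0.05$; only scipy is assumed):

```python
# Sketch of the size-alpha cutoff c = z_alpha * sigma / sqrt(n)
# for H0: mu = 0 vs H1: mu > 0 with sigma known.
from scipy.stats import norm

sigma, n, alpha = 2.0, 25, 0.05
z_alpha = norm.ppf(1 - alpha)          # upper-alpha quantile of N(0, 1)
c = z_alpha * sigma / n ** 0.5         # reject H0 when the sample mean exceeds c

# Size check: P_{mu=0}(Xbar >= c) should equal alpha exactly.
size = 1 - norm.cdf(c / (sigma / n ** 0.5))
```

The size comes out to exactly $\alpha$ by construction, since $\bar{X} \sim N(0, \sigma^2/n)$ under $H_0$.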

8 Basic Concepts of Testing. Example. Now assume the same situation as in the previous example, except that here we consider
$$H_0 : \mu \leq 0 \quad \text{vs} \quad H_1 : \mu > 0.$$
Also in this case the rejection region is $\{\bar{X} \geq c\}$, and now we require
$$\max_{\mu \leq 0} P_\mu(\bar{X} \geq c) = \alpha.$$
Note that
$$\max_{\mu \leq 0} P_\mu(\bar{X} \geq c) = \max_{\mu \leq 0} P_\mu\!\left( \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \geq \frac{c - \mu}{\sigma/\sqrt{n}} \right) = P_{\mu=0}\!\left( Z \geq \frac{c}{\sigma/\sqrt{n}} \right) = \alpha,$$
where $Z \sim N(0, 1)$.

9 Basic Concepts of Testing. Example. (Cont'd) So we again get $c = z_\alpha \sigma / \sqrt{n}$.

10 Randomized Test. Example. Now consider $X_1, \dots, X_8 \sim \text{Poisson}(\lambda)$ and $H_0 : \lambda = 5$ vs $H_1 : \lambda < 5$. We again consider a rejection region of the form $\{\bar{X} \leq c'\}$, which is equivalent to $\{X_1 + \cdots + X_8 \leq c\}$. Our goal is to find $c$ which satisfies
$$P_{\lambda=5}(X_1 + \cdots + X_8 \leq c) \leq \alpha,$$
where $\alpha = 0.05$ is the size of the test.

11 Randomized Test. Example. (Cont'd) Under $H_0$, $S = X_1 + \cdots + X_8 \sim \text{Poisson}(40)$, so we examine the cumulative probabilities $P(Z \leq n)$ for $Z \sim \text{Poisson}(40)$. If we set $c = 29$, the size falls just below $0.05$, while $c = 30$ pushes the size above $0.05$. Thus we can take the rejection region $\{X_1 + \cdots + X_8 \leq 29\}$, but then the size of the test is strictly smaller than $\alpha$, which may yield smaller power.

12 Randomized Test. Example. (Cont'd) Let us consider the following test rule.

If $X_1 + \cdots + X_8 \leq 29$, we reject $H_0$.
If $X_1 + \cdots + X_8 > 30$, we do not reject $H_0$.
If $X_1 + \cdots + X_8 = 30$, we reject $H_0$ with a fixed probability $p \in (0, 1)$.

With this procedure we can make $P_{\lambda=5}(\text{reject } H_0) = 0.05$ exactly: $p$ must satisfy
$$P_{\lambda=5}(S \leq 29) + p\, P_{\lambda=5}(S = 30) = 0.05,$$
and we can solve for $p$. A test containing such a random step is called a randomized test. This example suggests that we need a more general definition of the size and power of a test.
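The randomization probability $p$ can be computed directly from the Poisson(40) null distribution of $S$; a sketch (scipy assumed):

```python
# Randomized test for the Poisson example: S = X1 + ... + X8 ~ Poisson(40)
# under H0 (lambda = 5), target size 0.05.
from scipy.stats import poisson

alpha, mu = 0.05, 40          # mu = 8 * 5 under H0
c = 29                        # largest c with P(S <= c) <= alpha
below = poisson.cdf(c, mu)    # P(S <= 29)
atom = poisson.pmf(c + 1, mu) # P(S = 30)
p = (alpha - below) / atom    # randomize on the boundary event {S = 30}

size = below + p * atom       # exact size of the randomized test
```

By construction `size` equals 0.05 exactly, while the non-randomized test with $c = 29$ would be conservative.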

13 Randomized Test. Definition. Let $\phi(x)$ be the probability of rejecting $H_0$ when $X = x$ is observed, i.e.,

Reject $H_0$ if $\phi(x) = 1$;
Reject $H_0$ randomly with probability $\phi(x)$ if $\phi(x) \in (0, 1)$;
Accept $H_0$ if $\phi(x) = 0$.

The function $\phi$ is called the test function. Then
$$\max_{\theta \in \Omega_0} E_\theta \phi(X)$$
is called the size of the test, and
$$\gamma_\phi(\theta) = E_\theta \phi(X), \quad \theta \in \Omega,$$
is called the power function of the test. A test with significance level $\alpha$ means that its size is less than or equal to $\alpha$.

14 Likelihood Ratio Test. Definition. Let $X = (X_1, \dots, X_n)$ be a sample from $f(\cdot\,; \theta)$ and let $L(\theta)$ be the likelihood function. A test with rejection region
$$C_\alpha = \left\{ \frac{\sup_{\theta \in \Omega} L(\theta)}{\sup_{\theta \in \Omega_0} L(\theta)} \geq \lambda \right\} = \left\{ \frac{L(\hat\theta_\Omega)}{L(\hat\theta_{\Omega_0})} \geq \lambda \right\} = \left\{ 2\big(l(\hat\theta_\Omega) - l(\hat\theta_{\Omega_0})\big) \geq \lambda' \right\}$$
and satisfying $\max_{\theta \in \Omega_0} P_\theta(X \in C_\alpha) = \alpha$ is called a likelihood ratio test. Here $\hat\theta_\Omega$ is the MLE on $\Omega$, $\hat\theta_{\Omega_0}$ is the MLE on $\Omega_0$, and $l(\cdot)$ is the log-likelihood.

15 Likelihood Ratio Test. Example. Let $X_1, \dots, X_n \sim N(\mu, \sigma^2)$, $-\infty < \mu < \infty$, $0 < \sigma^2 < \infty$. Consider the hypotheses $H_0 : \mu = \mu_0$ vs $H_1 : \mu \neq \mu_0$. Our goal is to describe the likelihood ratio test with level $\alpha$ ($0 < \alpha < 1$). For $\theta = (\mu, \sigma^2)^t$, we can easily show that (exercise)
$$\hat\theta_\Omega = \left( \bar{x},\ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \right)^t, \qquad \hat\theta_{\Omega_0} = \left( \mu_0,\ \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu_0)^2 \right)^t.$$

16 Likelihood Ratio Test. Example. (Cont'd) It follows that
$$2\big(l(\hat\theta_\Omega) - l(\hat\theta_{\Omega_0})\big) = n \log\left( 1 + \frac{(\bar{x} - \mu_0)^2}{\hat\sigma^2_\Omega} \right).$$
Therefore the rejection region of the test is
$$\left| \frac{\bar{x} - \mu_0}{s/\sqrt{n}} \right| \geq c,$$
where $s$ is the sample standard deviation, and from sampling distribution theory we can easily obtain $c = t_{\alpha/2}(n-1)$.
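A sketch of this two-sided $t$ rejection rule on hypothetical simulated data ($\mu_0 = 0$, $\alpha = 0.05$), checked against scipy's built-in one-sample $t$-test:

```python
# Two-sided LRT for the normal mean reduces to the one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=20)   # hypothetical data
mu0, alpha, n = 0.0, 0.05, len(x)

t_stat = (x.mean() - mu0) / (x.std(ddof=1) / n ** 0.5)
c = stats.t.ppf(1 - alpha / 2, df=n - 1)      # c = t_{alpha/2}(n-1)
reject = abs(t_stat) >= c

# scipy's one-sample t-test gives the same statistic and decision.
t_scipy, p_value = stats.ttest_1samp(x, mu0)
```

Rejecting when $|t| \geq t_{\alpha/2}(n-1)$ is equivalent to rejecting when the two-sided p-value is below $\alpha$.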

17 Likelihood Ratio Test. Example. Let $X_1, \dots, X_n \sim N(\mu, \sigma^2)$, $-\infty < \mu < \infty$, $0 < \sigma^2 < \infty$. Consider the hypotheses $H_0 : \mu \leq \mu_0$ vs $H_1 : \mu > \mu_0$. Our goal is to describe the likelihood ratio test with level $\alpha$ ($0 < \alpha < 1$). For $\theta = (\mu, \sigma^2)^t$, we can easily show that (exercise)
$$\hat\theta_\Omega = \left( \bar{x},\ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \right)^t, \qquad \hat\theta_{\Omega_0} = \left( \bar{x} \wedge \mu_0,\ \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat\mu_{\Omega_0})^2 \right)^t,$$
where $\bar{x} \wedge \mu_0 = \min(\bar{x}, \mu_0)$.

18 Likelihood Ratio Test. Example. (Cont'd) It follows that
$$2\big(l(\hat\theta_\Omega) - l(\hat\theta_{\Omega_0})\big) = n \log\left( 1 + \frac{(\bar{x} - \bar{x} \wedge \mu_0)^2}{\hat\sigma^2_\Omega} \right).$$
Therefore the rejection region of the test is
$$\frac{\bar{x} - \bar{x} \wedge \mu_0}{s/\sqrt{n}} \geq c,$$
where $s$ is the sample standard deviation. Note that $\bar{x} - \bar{x} \wedge \mu_0 \geq 0$, so if $c \leq 0$ then $\alpha = 1$, which is unreasonable. Thus we can assume $c > 0$, and then the rejection region is
$$\frac{\bar{x} - \mu_0}{s/\sqrt{n}} \geq c.$$

19 Likelihood Ratio Test. Example. (Cont'd) Meanwhile, with $Z \sim N(0,1)$ and $V \sim \chi^2(n-1)$ independent, we know that for $\theta \in \Omega_0$,
$$P_\theta\!\left( \frac{\bar{X} - \mu_0}{S/\sqrt{n}} \geq c \right) = P_\theta\!\left( \frac{Z + (\mu - \mu_0)/(\sigma/\sqrt{n})}{\sqrt{V/(n-1)}} \geq c \right) = P_\theta\!\left( Z \geq c\sqrt{V/(n-1)} - \frac{\mu - \mu_0}{\sigma/\sqrt{n}} \right) \leq P_{(\mu_0, \sigma^2)}\!\left( \frac{\bar{X} - \mu_0}{S/\sqrt{n}} \geq c \right),$$
so we obtain a test with level $\alpha$ if $c = t_\alpha(n-1)$.

20 Likelihood Ratio Test. Example. Consider $X_1, \dots, X_n \sim \text{Exp}(\theta)$ (mean $\theta$) and the hypotheses $H_0 : \theta = \theta_0$ vs $H_1 : \theta \neq \theta_0$. We want to find the test with level $\alpha$ ($0 < \alpha < 1$). The log-likelihood is
$$l(\theta) = -n \log \theta - \frac{n\bar{x}}{\theta},$$
and hence the rejection region is
$$2\big(l(\hat\theta_\Omega) - l(\hat\theta_{\Omega_0})\big) = 2n\left( \frac{\bar{x}}{\theta_0} - 1 - \log \frac{\bar{x}}{\theta_0} \right) \geq c.$$

21 Likelihood Ratio Test. Example. (Cont'd) This is equivalent to
$$\frac{\bar{x}}{\theta_0} \leq c_1 \ \text{ or } \ \frac{\bar{x}}{\theta_0} \geq c_2, \qquad c_1 - \log c_1 = c_2 - \log c_2.$$

Figure: Graph of $y = x - 1 - \log x$.

22 Likelihood Ratio Test. Example. (Cont'd) So the rejection region is
$$\frac{\bar{X}}{\theta_0} \leq c_1 \ \text{ or } \ \frac{\bar{X}}{\theta_0} \geq c_2, \qquad c_1 - \log c_1 = c_2 - \log c_2, \qquad \int_{2nc_1}^{2nc_2} \mathrm{pdf}_{\chi^2(2n)}(t)\, dt = 1 - \alpha,$$
because $2n\bar{X}/\theta_0 \sim \chi^2(2n)$.
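The exact constants $c_1, c_2$ must be solved numerically from the two displayed conditions. A common simplification, sketched here with hypothetical $n$ and $\alpha$ (this is the equal-tailed version, not the exact LRT constants), takes the chi-square quantiles of $2n\bar{X}/\theta_0$ directly:

```python
# Equal-tailed approximation to the exponential LRT, using
# 2 n Xbar / theta0 ~ chi^2(2n) under H0.
from scipy.stats import chi2

n, alpha = 10, 0.05
lo = chi2.ppf(alpha / 2, df=2 * n)       # lower alpha/2 point of chi^2(2n)
hi = chi2.ppf(1 - alpha / 2, df=2 * n)   # upper alpha/2 point
c1, c2 = lo / (2 * n), hi / (2 * n)      # reject when Xbar/theta0 <= c1 or >= c2

coverage = chi2.cdf(hi, 2 * n) - chi2.cdf(lo, 2 * n)   # should be 1 - alpha
```

The exact LRT would instead balance $c_1 - \log c_1 = c_2 - \log c_2$, which moves the two cut points slightly away from the equal-tailed ones.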

23 Likelihood Ratio Test. Example. Let $X_{11}, \dots, X_{1n_1} \sim N(\mu_1, \sigma^2)$ and $X_{21}, \dots, X_{2n_2} \sim N(\mu_2, \sigma^2)$ be independent random samples. Consider the hypotheses $H_0 : \mu_1 = \mu_2$ vs $H_1 : \mu_1 \neq \mu_2$. We want to find a likelihood ratio test with level $\alpha$. Let $\theta = (\mu_1, \mu_2, \sigma^2)^t$.

24 Likelihood Ratio Test. Example. (Cont'd) The log-likelihood is
$$l(\theta) = -\frac{1}{2\sigma^2} \sum_{i=1}^{2} \sum_{j=1}^{n_i} (x_{ij} - \mu_i)^2 - \frac{n_1 + n_2}{2} \log(2\pi\sigma^2),$$
and
$$\hat\theta_\Omega = \left( \bar{X}_1,\ \bar{X}_2,\ \frac{1}{n_1 + n_2} \sum_{i=1}^{2} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)^2 \right)^t, \qquad \hat\theta_{\Omega_0} = \left( \hat\mu_0,\ \hat\mu_0,\ \frac{1}{n_1 + n_2} \sum_{i=1}^{2} \sum_{j=1}^{n_i} (x_{ij} - \hat\mu_0)^2 \right)^t,$$
where
$$\hat\mu_0 = \frac{n_1 \bar{X}_1 + n_2 \bar{X}_2}{n_1 + n_2}.$$

25 Likelihood Ratio Test. Example. (Cont'd) From this and sampling distribution theory, the rejection region is
$$\frac{|\bar{X}_1 - \bar{X}_2|}{S_p \sqrt{1/n_1 + 1/n_2}} \geq t_{\alpha/2}(n_1 + n_2 - 2),$$
where $S_p$ is the pooled sample standard deviation.
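A sketch of the pooled two-sample $t$ statistic on hypothetical data, verified against scipy's `ttest_ind` with `equal_var=True`:

```python
# Pooled two-sample t-test (equal variances), as in the LRT example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 1.0, size=15)   # hypothetical sample 1
x2 = rng.normal(0.5, 1.0, size=20)   # hypothetical sample 2
n1, n2, alpha = len(x1), len(x2), 0.05

# Pooled variance S_p^2 and the t statistic from the slide.
sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
t_stat = (x1.mean() - x2.mean()) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
c = stats.t.ppf(1 - alpha / 2, df=n1 + n2 - 2)

# equal_var=True reproduces the pooled statistic.
t_scipy, p_value = stats.ttest_ind(x1, x2, equal_var=True)
```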

26 Likelihood Ratio Test. Example. Similarly, with $X_{11}, \dots, X_{1n_1} \sim N(\mu_1, \sigma_1^2)$ and $X_{21}, \dots, X_{2n_2} \sim N(\mu_2, \sigma_2^2)$, we can test $H_0 : \sigma_1^2 = \sigma_2^2$ vs $H_1 : \sigma_1^2 \neq \sigma_2^2$ at level $\alpha$. It is left as an exercise.

27 Optimality of Testing. In this section, optimal testing is introduced. In testing, the type I error probability is controlled via the significance level, so the type II error probability determines the performance of the test. Definition. Consider simple hypotheses $H_0 : \theta = \theta_0$ vs $H_1 : \theta = \theta_1$. Then a test $\phi^*$ satisfying

(i) $E_{\theta_0} \phi^*(X) \leq \alpha$;
(ii) $E_{\theta_1} \phi^*(X) \geq E_{\theta_1} \phi(X)$ for any test $\phi$ such that $E_{\theta_0} \phi(X) \leq \alpha$

is called a Most Powerful (MP) test at level $\alpha$.

28 Optimality of Testing. Definition. Consider hypotheses $H_0 : \theta \in \Omega_0$ vs $H_1 : \theta \in \Omega_1$, where $\Omega = \Omega_0 \cup \Omega_1$ and $\Omega_0 \cap \Omega_1 = \emptyset$. Then a test $\phi^*$ satisfying

(i) $\sup_{\theta \in \Omega_0} E_\theta \phi^*(X) \leq \alpha$;
(ii) $E_\theta \phi^*(X) \geq E_\theta \phi(X)$ for all $\theta \in \Omega_1$, for any test $\phi$ such that $\sup_{\theta \in \Omega_0} E_\theta \phi(X) \leq \alpha$

is called a Uniformly Most Powerful (UMP) test at level $\alpha$.

29 Optimality of Testing. Remark. Obviously, our interest is in how to find MP and UMP tests: which test has maximum power among level-$\alpha$ tests? The Neyman-Pearson lemma gives the answer in the simple-vs-simple case.

30 Neyman-Pearson Lemma. Theorem (Neyman-Pearson Lemma). Suppose that $X \sim p_\theta(x)$, where $\theta = \theta_0$ or $\theta_1$, and consider testing $H_0 : \theta = \theta_0$ vs $H_1 : \theta = \theta_1$. Suppose $\varphi^*$ satisfies

(i)
$$\varphi^*(x) = \begin{cases} 1 & p_{\theta_1}(x)/p_{\theta_0}(x) > k \\ \gamma & p_{\theta_1}(x)/p_{\theta_0}(x) = k \\ 0 & p_{\theta_1}(x)/p_{\theta_0}(x) < k \end{cases}$$
(ii) $E_{\theta_0} \varphi^*(X) = \alpha$.

Then $\varphi^*$ is an MP test at level $\alpha$. Furthermore, for $0 < \alpha < 1$ such a $\varphi^*$ (i.e., such $k > 0$ and $0 < \gamma < 1$) always exists.

31 Neyman-Pearson Lemma. Proof. The existence part will not be proved here. For the rest, consider
$$E_{\theta_1}\big(\varphi^*(X) - \varphi(X)\big) - k\, E_{\theta_0}\big(\varphi^*(X) - \varphi(X)\big)$$
for a level-$\alpha$ test $\varphi$. Define $\mathcal{X}_0 = \{x : p_{\theta_0}(x) > 0\}$ and $\mathcal{X}_1 = \{x : p_{\theta_1}(x) > 0\}$. Note that
$$E_{\theta_1}\big(\varphi^*(X) - \varphi(X)\big) - k\, E_{\theta_0}\big(\varphi^*(X) - \varphi(X)\big) = \int_{\mathcal{X}_1} (\varphi^*(x) - \varphi(x))\big(p_{\theta_1}(x) - k\, p_{\theta_0}(x)\big)\, dx - \int_{\mathcal{X}_0 \setminus \mathcal{X}_1} k\, (\varphi^*(x) - \varphi(x))\, p_{\theta_0}(x)\, dx.$$
On $\mathcal{X}_1$ the integrand is nonnegative by the definition of $\varphi^*$, and on $\mathcal{X}_0 \setminus \mathcal{X}_1$ we have $p_{\theta_1}(x) = 0$, so $\varphi^*(x) = 0$ and the second integral equals $-k \int \varphi(x)\, p_{\theta_0}(x)\, dx \leq 0$.

32 Neyman-Pearson Lemma. Proof. (Cont'd) Hence
$$E_{\theta_1}\big(\varphi^*(X) - \varphi(X)\big) - k\, E_{\theta_0}\big(\varphi^*(X) - \varphi(X)\big) \geq 0.$$
Also note that
$$E_{\theta_0}\big(\varphi^*(X) - \varphi(X)\big) = \alpha - E_{\theta_0} \varphi(X) \geq 0.$$
Therefore $E_{\theta_1}\big(\varphi^*(X) - \varphi(X)\big) \geq 0$.

33 Example: MP test. Example. Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma^2)$, where $\sigma^2 > 0$ is known. Consider testing $H_0 : \mu = \mu_0$ vs $H_1 : \mu = \mu_1$, where $\mu_0$ and $\mu_1$ are known and $\mu_0 < \mu_1$. The likelihood ratio is
$$\frac{p_{\mu_1}(x)}{p_{\mu_0}(x)} = \exp\left( -\frac{n}{2\sigma^2} (\mu_1 - \mu_0)(\mu_1 + \mu_0 - 2\bar{x}) \right),$$
which is increasing in $\bar{x}$, so the test
$$\varphi^*(x) = \begin{cases} 1 & \bar{x} > k \\ 0 & \bar{x} < k \end{cases}$$
with $E_{\mu_0} \varphi^*(X) = \alpha$ is MP at level $\alpha$ ($0 < \alpha < 1$). It can easily be shown that $k = \mu_0 + z_\alpha\, \sigma/\sqrt{n}$.
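A sketch of the MP cutoff and its power at the alternative, for hypothetical values $\mu_0 = 0$, $\mu_1 = 0.5$, $\sigma = 1$, $n = 25$:

```python
# MP test from the Neyman-Pearson example: reject when Xbar > k with
# k = mu0 + z_alpha * sigma / sqrt(n).
from scipy.stats import norm

mu0, mu1, sigma, n, alpha = 0.0, 0.5, 1.0, 25, 0.05
k = mu0 + norm.ppf(1 - alpha) * sigma / n ** 0.5

se = sigma / n ** 0.5
size = 1 - norm.cdf((k - mu0) / se)    # = alpha by construction
power = 1 - norm.cdf((k - mu1) / se)   # P_{mu1}(Xbar > k)
```

Since $\mu_1 > \mu_0$, the power exceeds the size, as any reasonable test should guarantee.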

34 Example: UMP test. Example. (Continued from the previous example.) (i) First, consider
$$H_{0S} : \mu = \mu_0 \quad \text{vs} \quad H_1 : \mu > \mu_0.$$
For any alternative $H_1(\mu_1) : \mu = \mu_1$ with $\mu_1 > \mu_0$, $\varphi^*$ is MP at level $\alpha$ for testing $H_{0S}$ vs $H_1(\mu_1)$. In addition, $\varphi^*$ does not depend on $\mu_1$, so for any level-$\alpha$ test $\varphi$,
$$E_\mu \varphi^*(X) \geq E_\mu \varphi(X) \quad \text{for all } \mu > \mu_0$$
holds. Therefore $\varphi^*$ is UMP at level $\alpha$ for testing $H_{0S}$ vs $H_1$.

35 Example: UMP test. Example. (Cont'd) (ii) Next, consider
$$H_{0C} : \mu \leq \mu_0 \quad \text{vs} \quad H_1 : \mu > \mu_0.$$
Note that
$$\sup_{\mu \leq \mu_0} E_\mu \varphi^*(X) = E_{\mu_0} \varphi^*(X) = \alpha,$$
so $\varphi^*$ is also of level $\alpha$ for testing $H_{0C}$ vs $H_1$. Moreover, every level-$\alpha$ test for $H_{0C}$ is also of level $\alpha$ for $H_{0S}$, and $\varphi^*$ was the best among level-$\alpha$ tests for $H_{0S}$; hence it is also the best among level-$\alpha$ tests for $H_{0C}$. Therefore $\varphi^*$ is UMP at level $\alpha$ for testing $H_{0C}$ vs $H_1$.

36 Example: UMP test. Remark. It is well known that a UMP test for $H_0 : \mu = \mu_0$ vs $H_1 : \mu \neq \mu_0$ does not exist. For another notion of optimality, we look for the best test within a special class of tests, the unbiased tests; such an optimal test is called a Uniformly Most Powerful Unbiased (UMPU) test.

37 Asymptotics of LRT. In this section we see an asymptotic test. First recall that
$$\sqrt{n}(\hat\theta - \theta) = [I(\theta)]^{-1} \frac{1}{\sqrt{n}} \nabla l(\theta) + o_p(1).$$
Then by Taylor's theorem there exists $\theta^*$ between $\theta_0$ and $\hat\theta$ such that
$$l(\theta_0) = l(\hat\theta) + \nabla l(\hat\theta)^t (\theta_0 - \hat\theta) + \frac{1}{2} (\theta_0 - \hat\theta)^t \nabla^2 l(\theta^*) (\theta_0 - \hat\theta),$$
and by the LLN,
$$-\frac{1}{n} \nabla^2 l(\theta^*) \xrightarrow{p} I(\theta_0)$$
holds under $H_0$.

38 Asymptotics of LRT. Further, $\nabla l(\hat\theta) = 0$ since $\hat\theta$ maximizes $l$. Thus the likelihood ratio test statistic for
$$H_0 : \theta = \theta_0 \quad \text{vs} \quad H_1 : \theta \neq \theta_0$$
satisfies, under $H_0$,
$$2\big(l(\hat\theta) - l(\theta_0)\big) = \sqrt{n}(\hat\theta - \theta_0)^t\, I(\theta_0)\, \sqrt{n}(\hat\theta - \theta_0) + o_p(1) \xrightarrow{d} \chi^2(k).$$

39 Asymptotics of LRT. Theorem. Let $\theta \in \Omega \subset \mathbb{R}^k$ and consider the hypotheses
$$H_0 : \theta = \theta_0 \quad \text{vs} \quad H_1 : \theta \neq \theta_0.$$
Under regularity conditions and under $H_0$,
$$2\big(l(\hat\theta) - l(\theta_0)\big) \xrightarrow{d} \chi^2(k).$$

40 Asymptotics of LRT. Theorem. (Cont'd) Under $H_0$,
$$2\big(l(\hat\theta) - l(\theta_0)\big) = n(\hat\theta - \theta_0)^t\, I(\theta_0)\, (\hat\theta - \theta_0) + o_p(1) =: W_n(\theta_0) + o_p(1);$$
$W_n(\theta_0)$ is called Wald's statistic. Also under $H_0$,
$$2\big(l(\hat\theta) - l(\theta_0)\big) = \frac{1}{n} \nabla l(\theta_0)^t\, [I(\theta_0)]^{-1}\, \nabla l(\theta_0) + o_p(1) =: R_n(\theta_0) + o_p(1);$$
$R_n(\theta_0)$ is called Rao's (score) statistic.

41 Asymptotics of LRT. Theorem. Let $\Omega \subset \mathbb{R}^k$ be an open set and write $\theta = (\xi^t, \eta^t)^t$. Consider $\Omega_0 = \{(\xi_0^t, \eta^t)^t : \eta \in \mathbb{R}^{k_0}\}$ and the hypotheses
$$H_0 : \xi = \xi_0 \quad \text{vs} \quad H_1 : \xi \neq \xi_0.$$
Then under $H_0$,
$$2\big(l(\hat\theta_\Omega) - l(\hat\theta_{\Omega_0})\big) \xrightarrow{d} \chi^2(k - k_0),$$
$$2\big(l(\hat\theta_\Omega) - l(\hat\theta_{\Omega_0})\big) = n(\hat\theta_\Omega - \hat\theta_{\Omega_0})^t\, I(\theta_0)\, (\hat\theta_\Omega - \hat\theta_{\Omega_0}) + o_p(1),$$
$$2\big(l(\hat\theta_\Omega) - l(\hat\theta_{\Omega_0})\big) = \frac{1}{n} \nabla l(\hat\theta_{\Omega_0})^t\, [I(\theta_0)]^{-1}\, \nabla l(\hat\theta_{\Omega_0}) + o_p(1).$$

42 Asymptotics of LRT. Example. Consider $X_1, \dots, X_n \sim \text{Exp}(\theta)$ and $H_0 : \theta = \theta_0$ vs $H_1 : \theta \neq \theta_0$. Then
$$W_n(\theta_0) = R_n(\theta_0) = \frac{n(\bar{X} - \theta_0)^2}{\theta_0^2}.$$
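A numerical sketch for the exponential example on hypothetical simulated data, comparing Wald's statistic with the exact LRT statistic (under $H_0$ both are approximately $\chi^2(1)$):

```python
# Exp(theta) with mean theta: Wald's and Rao's statistics coincide,
# W_n = R_n = n (Xbar - theta0)^2 / theta0^2, and agree asymptotically
# with the LRT statistic 2n( Xbar/theta0 - 1 - log(Xbar/theta0) ).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
theta0 = 2.0
x = rng.exponential(scale=theta0, size=200)   # hypothetical sample under H0
n, xbar = len(x), x.mean()

W = n * (xbar - theta0) ** 2 / theta0 ** 2                  # Wald = Rao here
lrt = 2 * n * (xbar / theta0 - 1 - np.log(xbar / theta0))   # exact LRT statistic
c = chi2.ppf(0.95, df=1)                                    # common critical value
```

Both statistics are nonnegative by construction, and their difference vanishes in probability as $n$ grows.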

43 Asymptotics of LRT. Example. Consider $X_1, \dots, X_n \sim \text{Multi}(1, p)$ and $H_0 : p = p_0$ vs $H_1 : p \neq p_0$ for $p_0 = (p_{01}, \dots, p_{0k})^t$, $p_{01} + \cdots + p_{0k} = 1$. Let $X_i = (X_{i1}, \dots, X_{ik})^t$ and $X_{\cdot j} = \sum_{i=1}^{n} X_{ij}$. Define $\theta = (p_1, \dots, p_{k-1})^t$. The log-likelihood is
$$l(\theta) = \sum_{j=1}^{k} x_{\cdot j} \log p_j = \sum_{j=1}^{k-1} x_{\cdot j} \log p_j + x_{\cdot k} \log(1 - p_1 - \cdots - p_{k-1}),$$
and we can easily show that $\hat{p}_j^\Omega = x_{\cdot j}/n$.

44 Asymptotics of LRT. Example. (Cont'd) We can also show that
$$I(\theta) = \operatorname{diag}\!\left( \frac{1}{p_1}, \dots, \frac{1}{p_{k-1}} \right) + \frac{1}{p_k}\, \mathbf{1}\mathbf{1}^t,$$
so
$$W_n(\theta_0) = \sum_{j=1}^{k} \frac{(X_{\cdot j} - n p_{0j})^2}{n p_{0j}}.$$

45 Asymptotics of LRT. Example. (Cont'd) Similarly we can show that
$$R_n(\theta_0) = \sum_{j=1}^{k} \frac{(X_{\cdot j} - n p_{0j})^2}{n p_{0j}}.$$
Here the fact that, for vectors $v, w$,
$$(I + v w^t)^{-1} = I - \frac{v w^t}{1 + w^t v}$$
may be used.
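The statistic $W_n = R_n$ here is exactly Pearson's chi-square statistic; a sketch with hypothetical counts, checked against `scipy.stats.chisquare`:

```python
# Pearson's chi-square statistic sum_j (X_j - n p_{0j})^2 / (n p_{0j})
# for a multinomial goodness-of-fit test, df = k - 1.
import numpy as np
from scipy.stats import chisquare

counts = np.array([18, 24, 58])   # hypothetical observed X_{.j}, n = 100
p0 = np.array([0.2, 0.2, 0.6])    # H0 cell probabilities
n = counts.sum()

W = ((counts - n * p0) ** 2 / (n * p0)).sum()
stat, p_value = chisquare(counts, f_exp=n * p0)   # same statistic
```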

46 Asymptotics of LRT. Example. Consider $X \sim \text{Bin}(n, p)$ and $H_0 : p = p_0$ vs $H_1 : p \neq p_0$. We already know that the asymptotic test with size $\alpha$ has rejection region
$$\frac{|X - n p_0|}{\sqrt{n p_0 q_0}} \geq z_{\alpha/2}.$$

47 Asymptotics of LRT. Example. (Cont'd) This is equivalent to
$$\frac{(S - n p_0)^2}{n p_0} + \frac{(F - n q_0)^2}{n q_0} = \frac{(X - n p_0)^2}{n p_0} + \frac{(X - n p_0)^2}{n q_0} = \frac{(X - n p_0)^2}{n p_0 q_0} \geq \chi^2_\alpha(1),$$
where $S = X$ and $F = n - X$ are the numbers of successes and failures, respectively. So the test statistic can be written as
$$\sum_{c=1}^{2} \frac{(O_c - \hat{E}^0_c)^2}{\hat{E}^0_c}.$$

48 Asymptotics of LRT. Example. Similarly, we can test the hypotheses $H_0 : p_{ij} = p_{i\cdot} p_{\cdot j}$ vs $H_1 :$ otherwise on an $r \times c$ contingency table with model $(X_{ij}) \sim \text{Multi}(n, (p_{ij}))$, $\sum_{i=1}^{r} \sum_{j=1}^{c} p_{ij} = 1$, and its test statistic satisfies
$$\sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{ij} - \hat{E}^0_{ij})^2}{\hat{E}^0_{ij}} \xrightarrow{d} \chi^2\big((r-1)(c-1)\big) \quad \text{under } H_0.$$
For details, see the textbook.
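A sketch of the independence test on a hypothetical 2×3 table; scipy's `chi2_contingency` computes the same Pearson statistic with $\mathrm{df} = (r-1)(c-1)$ (the Yates correction is switched off, since it applies only to 2×2 tables):

```python
# Chi-square test of independence on an r x c contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 30, 50],    # hypothetical observed counts O_ij
                  [30, 20, 50]])
stat, p_value, df, expected = chi2_contingency(table, correction=False)
# expected holds the fitted cells E^0_ij = (row total)(col total)/n.
```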

49 Asymptotics of LRT. Example. Now we consider another case. Suppose that for $i = 1, 2, \dots, k$, $X_{i1}, \dots, X_{in_i} \sim \text{Ber}(p_i)$. Our goal is to test $H_0 : p_1 = \cdots = p_k$ vs $H_1 :$ not $H_0$. First, it is easy to show that
$$2[l(\hat{p}) - l(\hat{p}^0)] = 2 \sum_{i=1}^{k} \left( n_i \hat{p}_i \log \frac{\hat{p}_i}{\hat{p}^0} + n_i \hat{q}_i \log \frac{\hat{q}_i}{\hat{q}^0} \right),$$
where the unrestricted MLE $\hat{p}_i$ and the MLE $\hat{p}^0$ under $H_0$ are
$$\hat{p}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} X_{ij}, \qquad \hat{p}^0 = \frac{1}{n} \sum_{i=1}^{k} \sum_{j=1}^{n_i} X_{ij}, \qquad n = \sum_{i=1}^{k} n_i.$$

50 Asymptotics of LRT. Example. (Cont'd) Now let $X_i = \sum_{j=1}^{n_i} X_{ij}$ and consider the asymptotic regime
$$n_i \to \infty, \qquad \frac{n_i}{n} \to \gamma_i, \quad 0 < \gamma_i < 1, \quad i = 1, 2, \dots, k. \tag{1}$$
Under $H_0$, write $p_1 = p_2 = \cdots = p_k =: p^0$. Then $\hat{p}_i \to p^0$ and $\hat{p}^0 \to p^0$ under assumption (1), so $\hat{p}_i / \hat{p}^0 \to 1$, which justifies the following approximation.

51 Asymptotics of LRT. Applying the expansion $(1+x)\log(1+x) \approx x + x^2/2$ (for small $x$) to each term, with $x = \hat{p}_i/\hat{p}^0 - 1$ and $x = \hat{q}_i/\hat{q}^0 - 1$, the linear parts cancel and
$$2[l(\hat{p}) - l(\hat{p}^0)] = 2 \sum_{i=1}^{k} \left( n_i \hat{p}_i \log \frac{\hat{p}_i}{\hat{p}^0} + n_i \hat{q}_i \log \frac{\hat{q}_i}{\hat{q}^0} \right) \approx \sum_{i=1}^{k} n_i \left( \frac{\hat{p}_i^2}{\hat{p}^0} + \frac{\hat{q}_i^2}{\hat{q}^0} - 1 \right) = \sum_{i=1}^{k} \frac{n_i (\hat{p}_i - \hat{p}^0)^2}{\hat{p}^0 \hat{q}^0} = \sum_{i=1}^{k} \left( \frac{X_i - n_i \hat{p}^0}{\sqrt{n_i \hat{p}^0 \hat{q}^0}} \right)^2,$$
where the middle equality uses $\hat{p}_i^2 \hat{q}^0 + \hat{q}_i^2 \hat{p}^0 - \hat{p}^0 \hat{q}^0 = (\hat{p}_i - \hat{p}^0)^2$.

52 Asymptotics of LRT. Example. (Cont'd) Here the fact that $(1+x)\log(1+x) \approx x + x^2/2$ as $x \to 0$ was used. Now define $Z = (Z_1, \dots, Z_k)^t$ and $w = (w_1, \dots, w_k)^t$, where
$$Z_i := \frac{X_i - n_i p^0}{\sqrt{n_i p^0 q^0}}, \qquad w_i := \sqrt{n_i}.$$

53 Asymptotics of LRT. Example. (Cont'd) Then
$$\sum_{i=1}^{k} \left( \frac{X_i - n_i \hat{p}^0}{\sqrt{n_i \hat{p}^0 \hat{q}^0}} \right)^2 \approx \sum_{i=1}^{k} \left( \frac{X_i - n_i p^0}{\sqrt{n_i p^0 q^0}} - \sqrt{n_i}\, \frac{\hat{p}^0 - p^0}{\sqrt{p^0 q^0}} \right)^2 = \sum_{i=1}^{k} (Z_i - Y_i)^2,$$
where $Y = w (w^t w)^{-1} w^t Z$.

54 Asymptotics of LRT. Example. (Cont'd) Therefore
$$\sum_{i=1}^{k} \left( \frac{X_i - n_i \hat{p}^0}{\sqrt{n_i \hat{p}^0 \hat{q}^0}} \right)^2 \approx Z^t \big(I - w(w^t w)^{-1} w^t\big) Z \xrightarrow{d} \chi^2(k-1),$$
and using this fact we can find an asymptotic test of the given hypotheses.
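A sketch of the resulting $k$-sample test for equal proportions with hypothetical counts (the statistic is referred to a $\chi^2(k-1)$ critical value):

```python
# Approximate LRT for H0: p_1 = ... = p_k with X_i successes out of n_i.
import numpy as np
from scipy.stats import chi2

X = np.array([30, 45, 40])          # hypothetical successes per group
n_i = np.array([100, 100, 100])     # hypothetical group sizes
p0 = X.sum() / n_i.sum()            # pooled MLE under H0
q0 = 1 - p0

stat = (((X - n_i * p0) ** 2) / (n_i * p0 * q0)).sum()
c = chi2.ppf(0.95, df=len(X) - 1)   # chi^2(k-1) critical value at alpha = 0.05
reject = stat >= c
```

This is the same Pearson statistic one would get from the 2×k table of successes and failures.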

55 Confidence Region. Definition. Suppose that a statistic $S(X)$ based on $X$ satisfies
$$\inf_{\theta \in \Theta} P_\theta\big(q(\theta) \in S(X)\big) \geq 1 - \alpha.$$
Then $S(X)$ is called a confidence region for $q(\theta)$ with confidence level $1 - \alpha$.

56 Duality. Theorem. Let $A(q(\theta_0))$ be an acceptance region of a level-$\alpha$ test of $H_0(\theta_0) : q(\theta) = q(\theta_0)$, for fixed but arbitrary $\theta_0 \in \Theta$. Then $S(X)$ defined by
$$q(\theta) \in S(x) \iff x \in A(q(\theta))$$
is a level-$(1-\alpha)$ confidence region for $q(\theta)$. Proof. For any $\theta_0 \in \Theta$,
$$P_{\theta_0}\big(q(\theta_0) \in S(X)\big) = P_{\theta_0}\big(X \in A(q(\theta_0))\big) \geq 1 - \alpha.$$

57 Confidence Region. Example. Consider a random sample $X_1, \dots, X_n \sim N(\mu, \sigma^2)$ and the hypotheses
$$H_0 : \mu = \mu_0 \quad \text{vs} \quad H_1 : \mu \neq \mu_0.$$
The acceptance region of the LRT is
$$A(\mu_0) = \left\{ x : \left| \frac{\bar{x} - \mu_0}{s/\sqrt{n}} \right| \leq t_{\alpha/2}(n-1) \right\},$$
so
$$\left[ \bar{x} - t_{\alpha/2}(n-1)\, \frac{s}{\sqrt{n}},\ \bar{x} + t_{\alpha/2}(n-1)\, \frac{s}{\sqrt{n}} \right]$$
is a level-$(1-\alpha)$ confidence region for $\mu$.

58 Confidence Region. Example. (Cont'd) Now consider
$$H_0 : \mu = \mu_0 \quad \text{vs} \quad H_1 : \mu > \mu_0.$$
The acceptance region of the LRT is
$$A(\mu_0) = \left\{ x : \frac{\bar{x} - \mu_0}{s/\sqrt{n}} \leq t_\alpha(n-1) \right\},$$
and from
$$x \in A(\mu_0) \iff \mu_0 \in \left[ \bar{x} - t_\alpha(n-1)\, \frac{s}{\sqrt{n}},\ \infty \right)$$
we get the one-sided confidence interval for $\mu$:
$$\left[ \bar{x} - t_\alpha(n-1)\, \frac{s}{\sqrt{n}},\ \infty \right).$$
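Both intervals can be computed directly; a sketch on hypothetical data:

```python
# Two-sided and one-sided t intervals obtained by inverting the LRT.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(1.0, 2.0, size=30)   # hypothetical data
n, alpha = len(x), 0.05
xbar, s = x.mean(), x.std(ddof=1)
se = s / n ** 0.5

t2 = stats.t.ppf(1 - alpha / 2, df=n - 1)
two_sided = (xbar - t2 * se, xbar + t2 * se)     # from the two-sided test

t1 = stats.t.ppf(1 - alpha, df=n - 1)
one_sided = (xbar - t1 * se, np.inf)             # [xbar - t_alpha s/sqrt(n), inf)
```

Since $t_\alpha(n-1) < t_{\alpha/2}(n-1)$, the one-sided lower bound sits above the two-sided one.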

59 Simultaneous Confidence Region. Example (Bonferroni approach). Consider a linear regression model
$$y_i = \beta_0 + x_{i1}\beta_1 + \cdots + x_{ik}\beta_k + \epsilon_i, \quad i = 1, 2, \dots, n.$$
Suppose we carry out the $k$ tests of $H_{0j} : \beta_j = 0$, $j = 1, 2, \dots, k$. Let $E_j$ be the event that the $j$th test rejects $H_{0j}$ even though it is true, with $P(E_j) = \alpha_j$. The overall size $\alpha_f$ can be defined as
$$\alpha_f = P(\text{reject at least one } H_{0j} \text{ when all } H_{0j} \text{ are true}) = P\left( \bigcup_{j=1}^{k} E_j \right).$$
Note that if $\alpha_j = 0.05$ for all $j$ and the tests are independent, the actual size of this simultaneous testing is $1 - (1 - 0.05)^k$, which is not small. For example, if $k = 10$, $\alpha_f \approx 0.40$, which makes the testing procedure unreliable.

60 Simultaneous Confidence Region. Example. (Cont'd) To overcome this problem, if $\alpha$ is the desired overall error rate, then using
$$P\left( \bigcup_{j=1}^{k} E_j \right) \leq \sum_{j=1}^{k} P(E_j) = \sum_{j=1}^{k} \alpha_j$$
and making $\sum_{j=1}^{k} \alpha_j = \alpha$, we obtain $\alpha_f \leq \alpha$. Our choice is $\alpha_j = \alpha/k$. This method is called the Bonferroni correction. Using it, we get the simultaneous confidence intervals
$$\beta_j \in \left( \hat\beta_j \pm t_{\alpha/2k}(n - k - 1)\, s \sqrt{g_{jj}} \right) \quad \text{for all } j,$$
where $g_{jj} = \operatorname{Var}(\hat\beta_j)/\sigma^2$.
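A sketch of the Bonferroni-corrected intervals, where `beta_hat`, `g`, `s`, `n`, and `k` are made-up illustration values standing in for regression output:

```python
# Bonferroni simultaneous t intervals: alpha/k per coefficient.
from scipy import stats

beta_hat = [1.2, -0.5, 0.8]   # hypothetical coefficient estimates
g = [0.04, 0.09, 0.01]        # hypothetical g_jj = Var(beta_j)/sigma^2
s, n, k = 1.5, 50, 3          # hypothetical residual SD and dimensions
alpha = 0.05

t_bonf = stats.t.ppf(1 - alpha / (2 * k), df=n - k - 1)   # t_{alpha/2k}
intervals = [(b - t_bonf * s * gj ** 0.5, b + t_bonf * s * gj ** 0.5)
             for b, gj in zip(beta_hat, g)]

t_plain = stats.t.ppf(1 - alpha / 2, df=n - k - 1)   # uncorrected, narrower
```

The corrected critical value exceeds the per-test one, so the intervals widen; that is the price of simultaneous coverage.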

61 Simultaneous Confidence Region. Example (Scheffe's method). Now consider the null hypotheses
$$H_0(a) : a^t \beta = 0, \quad a \in \mathbb{R}^p.$$
In the same way, denoting $E_a = \{\text{reject } H_0(a) \text{ when } H_0(a) \text{ is true}\}$, we should make
$$P\left( \bigcup_{a \in \mathbb{R}^p} E_a \right) = \alpha.$$

62 Simultaneous Confidence Region. Example. (Cont'd) Note that, if we use
$$F_a = \frac{(a^t \hat\beta)^2}{\hat\sigma^2\, a^t (X^t X)^{-1} a} = \frac{(a^t \hat\beta)^2}{s^2\, a^t (X^t X)^{-1} a}$$
as a test statistic for $H_0(a)$, we get
$$P\left( \bigcup_{a \in \mathbb{R}^p} E_a \right) = P\left( \max_a \frac{(a^t \hat\beta - a^t \beta)^2}{a^t S a} \geq c \right), \tag{2}$$
where $S = s^2 (X^t X)^{-1}$. It is known that
$$\max_a \frac{(a^t \hat\beta - a^t \beta)^2}{a^t S a} = (\hat\beta - \beta)^t S^{-1} (\hat\beta - \beta),$$
and hence we can find a suitable $c$ that makes (2) equal to $\alpha$.

63 Simultaneous Confidence Region. Example. (Cont'd) Therefore, we get simultaneous confidence intervals for all possible linear functions $a^t \beta$:
$$a^t \beta \in \left( a^t \hat\beta \pm s \sqrt{(k+1)\, F_\alpha(k+1,\ n-k-1)\, a^t (X^t X)^{-1} a} \right) \quad \text{for all } a \in \mathbb{R}^{k+1}.$$
This method is called Scheffe's method.
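Scheffe's constant can be compared with the Bonferroni constant for a fixed number $m$ of preplanned contrasts; a sketch with hypothetical $k$, $n$, and $m$:

```python
# Scheffe constant sqrt((k+1) F_alpha(k+1, n-k-1)) vs the Bonferroni
# constant t_{alpha/2m}(n-k-1) for m preplanned contrasts.
from scipy import stats

k, n, alpha = 3, 50, 0.05   # hypothetical model dimensions
df = n - k - 1
scheffe = ((k + 1) * stats.f.ppf(1 - alpha, k + 1, df)) ** 0.5

m = 4                       # hypothetical number of planned contrasts
bonf = stats.t.ppf(1 - alpha / (2 * m), df)
```

For a small number of preplanned contrasts the Bonferroni constant is smaller (shorter intervals); Scheffe's method pays for covering every linear combination at once.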


More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two

More information

Topic 15: Simple Hypotheses

Topic 15: Simple Hypotheses Topic 15: November 10, 2009 In the simplest set-up for a statistical hypothesis, we consider two values θ 0, θ 1 in the parameter space. We write the test as H 0 : θ = θ 0 versus H 1 : θ = θ 1. H 0 is

More information

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1,

Economics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1, Economics 520 Lecture Note 9: Hypothesis Testing via the Neyman-Pearson Lemma CB 8., 8.3.-8.3.3 Uniformly Most Powerful Tests and the Neyman-Pearson Lemma Let s return to the hypothesis testing problem

More information

Hypothesis Testing. Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA

Hypothesis Testing. Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA Hypothesis Testing Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA An Example Mardia et al. (979, p. ) reprint data from Frets (9) giving the length and breadth (in

More information

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology Singapore University of Design and Technology Lecture 9: Hypothesis testing, uniformly most powerful tests. The Neyman-Pearson framework Let P be the family of distributions of concern. The Neyman-Pearson

More information

Some General Types of Tests

Some General Types of Tests Some General Types of Tests We may not be able to find a UMP or UMPU test in a given situation. In that case, we may use test of some general class of tests that often have good asymptotic properties.

More information

Statistics. Statistics

Statistics. Statistics The main aims of statistics 1 1 Choosing a model 2 Estimating its parameter(s) 1 point estimates 2 interval estimates 3 Testing hypotheses Distributions used in statistics: χ 2 n-distribution 2 Let X 1,

More information

Statistical Inference

Statistical Inference Statistical Inference Classical and Bayesian Methods Revision Class for Midterm Exam AMS-UCSC Th Feb 9, 2012 Winter 2012. Session 1 (Revision Class) AMS-132/206 Th Feb 9, 2012 1 / 23 Topics Topics We will

More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Richard Lockhart Simon Fraser University STAT 830 Fall 2018 Richard Lockhart (Simon Fraser University) STAT 830 Hypothesis Testing STAT 830 Fall 2018 1 / 30 Purposes of These

More information

8. Hypothesis Testing

8. Hypothesis Testing FE661 - Statistical Methods for Financial Engineering 8. Hypothesis Testing Jitkomut Songsiri introduction Wald test likelihood-based tests significance test for linear regression 8-1 Introduction elements

More information

LECTURE 10: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING. The last equality is provided so this can look like a more familiar parametric test.

LECTURE 10: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING. The last equality is provided so this can look like a more familiar parametric test. Economics 52 Econometrics Professor N.M. Kiefer LECTURE 1: NEYMAN-PEARSON LEMMA AND ASYMPTOTIC TESTING NEYMAN-PEARSON LEMMA: Lesson: Good tests are based on the likelihood ratio. The proof is easy in the

More information

Model comparison and selection

Model comparison and selection BS2 Statistical Inference, Lectures 9 and 10, Hilary Term 2008 March 2, 2008 Hypothesis testing Consider two alternative models M 1 = {f (x; θ), θ Θ 1 } and M 2 = {f (x; θ), θ Θ 2 } for a sample (X = x)

More information

STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots. March 8, 2015

STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots. March 8, 2015 STAT 135 Lab 6 Duality of Hypothesis Testing and Confidence Intervals, GLRT, Pearson χ 2 Tests and Q-Q plots March 8, 2015 The duality between CI and hypothesis testing The duality between CI and hypothesis

More information

Lecture 3. Inference about multivariate normal distribution

Lecture 3. Inference about multivariate normal distribution Lecture 3. Inference about multivariate normal distribution 3.1 Point and Interval Estimation Let X 1,..., X n be i.i.d. N p (µ, Σ). We are interested in evaluation of the maximum likelihood estimates

More information

Loglikelihood and Confidence Intervals

Loglikelihood and Confidence Intervals Stat 504, Lecture 2 1 Loglikelihood and Confidence Intervals The loglikelihood function is defined to be the natural logarithm of the likelihood function, l(θ ; x) = log L(θ ; x). For a variety of reasons,

More information

Statistics Ph.D. Qualifying Exam: Part II November 9, 2002

Statistics Ph.D. Qualifying Exam: Part II November 9, 2002 Statistics Ph.D. Qualifying Exam: Part II November 9, 2002 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your

More information

Composite Hypotheses and Generalized Likelihood Ratio Tests

Composite Hypotheses and Generalized Likelihood Ratio Tests Composite Hypotheses and Generalized Likelihood Ratio Tests Rebecca Willett, 06 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve

More information

Notes on the Multivariate Normal and Related Topics

Notes on the Multivariate Normal and Related Topics Version: July 10, 2013 Notes on the Multivariate Normal and Related Topics Let me refresh your memory about the distinctions between population and sample; parameters and statistics; population distributions

More information

Statistics 135 Fall 2008 Final Exam

Statistics 135 Fall 2008 Final Exam Name: SID: Statistics 135 Fall 2008 Final Exam Show your work. The number of points each question is worth is shown at the beginning of the question. There are 10 problems. 1. [2] The normal equations

More information

4.5.1 The use of 2 log Λ when θ is scalar

4.5.1 The use of 2 log Λ when θ is scalar 4.5. ASYMPTOTIC FORM OF THE G.L.R.T. 97 4.5.1 The use of 2 log Λ when θ is scalar Suppose we wish to test the hypothesis NH : θ = θ where θ is a given value against the alternative AH : θ θ on the basis

More information

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part : Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section

More information

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n Chapter 9 Hypothesis Testing 9.1 Wald, Rao, and Likelihood Ratio Tests Suppose we wish to test H 0 : θ = θ 0 against H 1 : θ θ 0. The likelihood-based results of Chapter 8 give rise to several possible

More information

2.6.3 Generalized likelihood ratio tests

2.6.3 Generalized likelihood ratio tests 26 HYPOTHESIS TESTING 113 263 Generalized likelihood ratio tests When a UMP test does not exist, we usually use a generalized likelihood ratio test to verify H 0 : θ Θ against H 1 : θ Θ\Θ It can be used

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

Hypothesis Testing - Frequentist

Hypothesis Testing - Frequentist Frequentist Hypothesis Testing - Frequentist Compare two hypotheses to see which one better explains the data. Or, alternatively, what is the best way to separate events into two classes, those originating

More information

2017 Financial Mathematics Orientation - Statistics

2017 Financial Mathematics Orientation - Statistics 2017 Financial Mathematics Orientation - Statistics Written by Long Wang Edited by Joshua Agterberg August 21, 2018 Contents 1 Preliminaries 5 1.1 Samples and Population............................. 5

More information

Introduction to Estimation Methods for Time Series models Lecture 2

Introduction to Estimation Methods for Time Series models Lecture 2 Introduction to Estimation Methods for Time Series models Lecture 2 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 2 SNS Pisa 1 / 21 Estimators:

More information

Introduction Large Sample Testing Composite Hypotheses. Hypothesis Testing. Daniel Schmierer Econ 312. March 30, 2007

Introduction Large Sample Testing Composite Hypotheses. Hypothesis Testing. Daniel Schmierer Econ 312. March 30, 2007 Hypothesis Testing Daniel Schmierer Econ 312 March 30, 2007 Basics Parameter of interest: θ Θ Structure of the test: H 0 : θ Θ 0 H 1 : θ Θ 1 for some sets Θ 0, Θ 1 Θ where Θ 0 Θ 1 = (often Θ 1 = Θ Θ 0

More information

Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014

Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014 Ph.D. Qualifying Exam Friday Saturday, January 3 4, 2014 Put your solution to each problem on a separate sheet of paper. Problem 1. (5166) Assume that two random samples {x i } and {y i } are independently

More information

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018 Mathematics Ph.D. Qualifying Examination Stat 52800 Probability, January 2018 NOTE: Answers all questions completely. Justify every step. Time allowed: 3 hours. 1. Let X 1,..., X n be a random sample from

More information

Ch 2: Simple Linear Regression

Ch 2: Simple Linear Regression Ch 2: Simple Linear Regression 1. Simple Linear Regression Model A simple regression model with a single regressor x is y = β 0 + β 1 x + ɛ, where we assume that the error ɛ is independent random component

More information

f(y θ) = g(t (y) θ)h(y)

f(y θ) = g(t (y) θ)h(y) EXAM3, FINAL REVIEW (and a review for some of the QUAL problems): No notes will be allowed, but you may bring a calculator. Memorize the pmf or pdf f, E(Y ) and V(Y ) for the following RVs: 1) beta(δ,

More information

Topic 10: Hypothesis Testing

Topic 10: Hypothesis Testing Topic 10: Hypothesis Testing Course 003, 2016 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.

More information

MISCELLANEOUS TOPICS RELATED TO LIKELIHOOD. Copyright c 2012 (Iowa State University) Statistics / 30

MISCELLANEOUS TOPICS RELATED TO LIKELIHOOD. Copyright c 2012 (Iowa State University) Statistics / 30 MISCELLANEOUS TOPICS RELATED TO LIKELIHOOD Copyright c 2012 (Iowa State University) Statistics 511 1 / 30 INFORMATION CRITERIA Akaike s Information criterion is given by AIC = 2l(ˆθ) + 2k, where l(ˆθ)

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

4 Hypothesis testing. 4.1 Types of hypothesis and types of error 4 HYPOTHESIS TESTING 49

4 Hypothesis testing. 4.1 Types of hypothesis and types of error 4 HYPOTHESIS TESTING 49 4 HYPOTHESIS TESTING 49 4 Hypothesis testing In sections 2 and 3 we considered the problem of estimating a single parameter of interest, θ. In this section we consider the related problem of testing whether

More information

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:.

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:. MATHEMATICAL STATISTICS Homework assignment Instructions Please turn in the homework with this cover page. You do not need to edit the solutions. Just make sure the handwriting is legible. You may discuss

More information

Nonconcave Penalized Likelihood with A Diverging Number of Parameters

Nonconcave Penalized Likelihood with A Diverging Number of Parameters Nonconcave Penalized Likelihood with A Diverging Number of Parameters Jianqing Fan and Heng Peng Presenter: Jiale Xu March 12, 2010 Jianqing Fan and Heng Peng Presenter: JialeNonconcave Xu () Penalized

More information

Answer Key for STAT 200B HW No. 7

Answer Key for STAT 200B HW No. 7 Answer Key for STAT 200B HW No. 7 May 5, 2007 Problem 2.2 p. 649 Assuming binomial 2-sample model ˆπ =.75, ˆπ 2 =.6. a ˆτ = ˆπ 2 ˆπ =.5. From Ex. 2.5a on page 644: ˆπ ˆπ + ˆπ 2 ˆπ 2.75.25.6.4 = + =.087;

More information

Principles of Statistics

Principles of Statistics Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 81 Paper 4, Section II 28K Let g : R R be an unknown function, twice continuously differentiable with g (x) M for

More information

Asymptotic Tests and Likelihood Ratio Tests

Asymptotic Tests and Likelihood Ratio Tests Asymptotic Tests and Likelihood Ratio Tests Dennis D. Cox Department of Statistics Rice University P. O. Box 1892 Houston, Texas 77251 Email: dcox@stat.rice.edu November 21, 2004 0 1 Chapter 6, Section

More information

Statistics and econometrics

Statistics and econometrics 1 / 36 Slides for the course Statistics and econometrics Part 10: Asymptotic hypothesis testing European University Institute Andrea Ichino September 8, 2014 2 / 36 Outline Why do we need large sample

More information

The outline for Unit 3

The outline for Unit 3 The outline for Unit 3 Unit 1. Introduction: The regression model. Unit 2. Estimation principles. Unit 3: Hypothesis testing principles. 3.1 Wald test. 3.2 Lagrange Multiplier. 3.3 Likelihood Ratio Test.

More information

Mathematics Qualifying Examination January 2015 STAT Mathematical Statistics

Mathematics Qualifying Examination January 2015 STAT Mathematical Statistics Mathematics Qualifying Examination January 2015 STAT 52800 - Mathematical Statistics NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal,

More information

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants

Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009 there were participants 18.650 Statistics for Applications Chapter 5: Parametric hypothesis testing 1/37 Cherry Blossom run (1) The credit union Cherry Blossom Run is a 10 mile race that takes place every year in D.C. In 2009

More information

Topic 10: Hypothesis Testing

Topic 10: Hypothesis Testing Topic 10: Hypothesis Testing Course 003, 2017 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.

More information

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper McGill University Faculty of Science Department of Mathematics and Statistics Part A Examination Statistics: Theory Paper Date: 10th May 2015 Instructions Time: 1pm-5pm Answer only two questions from Section

More information

STAT 801: Mathematical Statistics. Hypothesis Testing

STAT 801: Mathematical Statistics. Hypothesis Testing STAT 801: Mathematical Statistics Hypothesis Testing Hypothesis testing: a statistical problem where you must choose, on the basis o data X, between two alternatives. We ormalize this as the problem o

More information

STAT 730 Chapter 4: Estimation

STAT 730 Chapter 4: Estimation STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum

More information

Likelihood Ratio tests

Likelihood Ratio tests Likelihood Ratio tests For general composite hypotheses optimality theory is not usually successful in producing an optimal test. instead we look for heuristics to guide our choices. The simplest approach

More information

MATH5745 Multivariate Methods Lecture 07

MATH5745 Multivariate Methods Lecture 07 MATH5745 Multivariate Methods Lecture 07 Tests of hypothesis on covariance matrix March 16, 2018 MATH5745 Multivariate Methods Lecture 07 March 16, 2018 1 / 39 Test on covariance matrices: Introduction

More information

Practical Econometrics. for. Finance and Economics. (Econometrics 2)

Practical Econometrics. for. Finance and Economics. (Econometrics 2) Practical Econometrics for Finance and Economics (Econometrics 2) Seppo Pynnönen and Bernd Pape Department of Mathematics and Statistics, University of Vaasa 1. Introduction 1.1 Econometrics Econometrics

More information

TUTORIAL 8 SOLUTIONS #

TUTORIAL 8 SOLUTIONS # TUTORIAL 8 SOLUTIONS #9.11.21 Suppose that a single observation X is taken from a uniform density on [0,θ], and consider testing H 0 : θ = 1 versus H 1 : θ =2. (a) Find a test that has significance level

More information

Chapter 10. Hypothesis Testing (I)

Chapter 10. Hypothesis Testing (I) Chapter 10. Hypothesis Testing (I) Hypothesis Testing, together with statistical estimation, are the two most frequently used statistical inference methods. It addresses a different type of practical problems

More information

LECTURE 5 HYPOTHESIS TESTING

LECTURE 5 HYPOTHESIS TESTING October 25, 2016 LECTURE 5 HYPOTHESIS TESTING Basic concepts In this lecture we continue to discuss the normal classical linear regression defined by Assumptions A1-A5. Let θ Θ R d be a parameter of interest.

More information

Institute of Actuaries of India

Institute of Actuaries of India Institute of Actuaries of India Subject CT3 Probability & Mathematical Statistics May 2011 Examinations INDICATIVE SOLUTION Introduction The indicative solution has been written by the Examiners with the

More information

Generalized Linear Models Introduction

Generalized Linear Models Introduction Generalized Linear Models Introduction Statistics 135 Autumn 2005 Copyright c 2005 by Mark E. Irwin Generalized Linear Models For many problems, standard linear regression approaches don t work. Sometimes,

More information

Problems. Suppose both models are fitted to the same data. Show that SS Res, A SS Res, B

Problems. Suppose both models are fitted to the same data. Show that SS Res, A SS Res, B Simple Linear Regression 35 Problems 1 Consider a set of data (x i, y i ), i =1, 2,,n, and the following two regression models: y i = β 0 + β 1 x i + ε, (i =1, 2,,n), Model A y i = γ 0 + γ 1 x i + γ 2

More information

http://www.math.uah.edu/stat/hypothesis/.xhtml 1 of 5 7/29/2009 3:14 PM Virtual Laboratories > 9. Hy pothesis Testing > 1 2 3 4 5 6 7 1. The Basic Statistical Model As usual, our starting point is a random

More information

Simple and Multiple Linear Regression

Simple and Multiple Linear Regression Sta. 113 Chapter 12 and 13 of Devore March 12, 2010 Table of contents 1 Simple Linear Regression 2 Model Simple Linear Regression A simple linear regression model is given by Y = β 0 + β 1 x + ɛ where

More information