40.530: Statistics. Professor Chen Zehua. Singapore University of Technology and Design



Lecture 9: Hypothesis testing, uniformly most powerful tests

The Neyman-Pearson framework

Let P be the family of distributions of concern. The Neyman-Pearson framework of hypothesis testing on P is as follows. Two hypotheses, a null hypothesis H_0 and an alternative hypothesis H_1, are formulated as

    H_0: P ∈ P_0 versus H_1: P ∈ P_1,

where P_0 ∩ P_1 = ∅ and P_0 ∪ P_1 = P. Let S(X) be a test statistic based on the sample X and let c be a constant. The decision rule is: if S(X) ≥ c, reject H_0; otherwise, do not reject H_0. The constant c is chosen such that, for a specified small α,

    sup_{P ∈ P_0} P(S(X) ≥ c) ≤ α.

Type I and Type II errors

When H_0 is rejected but H_0 is in fact true, the error committed is called a Type I error. When H_0 is accepted but H_0 is in fact wrong, the error committed is called a Type II error.

Probabilities of errors and the power function

Type I error rate: β_S(P) = P(S(X) ≥ c), P ∈ P_0.
Type II error rate: 1 − β_S(P) = 1 − P(S(X) ≥ c), P ∈ P_1.

β_S(P), as a function of P, is called the power function of S.

Remarks

The Type I and Type II error rates cannot be minimized simultaneously. Under the Neyman-Pearson framework, the Type I error rate is controlled by α, but the Type II error rate is not controlled. The two hypotheses should therefore be formulated so that, from a practical point of view, the Type I error is the more serious of the two.

Significance level and size of the test

If sup_{P ∈ P_0} P(S(X) ≥ c) ≤ α, the number α, which controls the Type I error rate, is called the significance level of the test. The supremum sup_{P ∈ P_0} P(S(X) ≥ c) is called the size of the test.

Choice of significance level

The choice of α is somewhat subjective. The standard values 0.10, 0.05, and 0.01 are often used for convenience.

p-value

The smallest possible level of significance at which H_0 would be rejected for the computed S(x) is

    α̂(x) = sup_{P ∈ P_0} P(S(X) > S(x)) = sup_{P ∈ P_0} {1 − Ψ(S(x) | P)},

where Ψ(· | P) is the c.d.f. of S(X) under the distribution P. Such an α̂(x) is called the p-value of the test S.

Remark

The p-value provides additional information beyond a reject/accept decision; using p-values is often more appropriate than using fixed-level tests in a scientific problem. Note that α̂(X) is a random variable; the p-value of the test is a realization of this random variable.
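For a concrete case, the p-value formula can be computed directly. A minimal sketch in Python, assuming the one-sided normal-mean setting treated later in Example 9.6 (known σ, H_0: μ ≤ μ_0), where the supremum over the null is attained at the boundary μ = μ_0; the function names are this note's choices:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal c.d.f. Phi, via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_value_one_sided(xbar, mu0, sigma, n):
    """p-value for H0: mu <= mu0 vs H1: mu > mu0 with known sigma.

    Test statistic S(X) = sqrt(n)(Xbar - mu0)/sigma; the supremum of
    P(S(X) > S(x)) over the null is attained at mu = mu0, where
    S(X) ~ N(0, 1), so alpha-hat(x) = 1 - Phi(S(x)).
    """
    s = sqrt(n) * (xbar - mu0) / sigma
    return 1.0 - norm_cdf(s)
```

Reporting α̂(x) lets the reader apply any fixed level afterwards: H_0 is rejected at level α exactly when α̂(x) ≤ α.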

Critical (acceptance) region and randomized tests

The region C_S = {x : S(x) ≥ c} is called the critical region of the test. The complement of the critical region, A_S = {x : S(x) < c}, is called the acceptance region of the test.

A test can also be defined by the indicator function of the critical region, i.e., T(X) = I_{C_S}(X). This indicator function is simply referred to as a test. The test T(X) is a statistic taking values 0 or 1.

In general, we can define a test as a statistic T(X) taking values in [0, 1], which gives the probability T(x) of rejecting H_0 when X = x is observed. Such a test is called a randomized test. The power function of a randomized test T is given by β_T(P) = E[T(X) | P].

Comparison of tests

If there are many tests, we wish to find one that is optimal in a certain sense. The ideal would be a test that minimizes both the Type I and Type II error rates simultaneously; however, this is impossible. The strategy is to confine ourselves to the class of tests satisfying

    sup_{P ∈ P_0} β_T(P) ≤ α,

where α ∈ [0, 1] is a given level of significance, and then to select the test in this class that maximizes the power β_T(P) over all P ∈ P_1.

Definition (Uniformly most powerful test)

A test T* of size α is a uniformly most powerful (UMP) test if and only if β_{T*}(P) ≥ β_T(P) for all P ∈ P_1 and all tests T of level α.

A remark

If U(X) is a sufficient statistic for P ∈ P, then for any test T(X), E(T | U) has the same power function as T; therefore, to find a UMP test we may consider only tests that are functions of U.

The Neyman-Pearson lemma

Let P_0 = {P_0} and P_1 = {P_1}, and let f_j be the p.d.f. of P_j, j = 0, 1.

(i) (Existence) For every α, there exists a UMP test of size α, given by

    T*(X) = 1 if f_1(X) > c f_0(X),
            γ if f_1(X) = c f_0(X),
            0 if f_1(X) < c f_0(X),

where γ ∈ [0, 1] and c ≥ 0 are constants chosen so that E[T*(X)] = α when P = P_0 (c = ∞ is allowed).

(ii) (Uniqueness) If T is a UMP test of size α, then

    T(X) = 1 if f_1(X) > c f_0(X),
           0 if f_1(X) < c f_0(X),   a.s. P.

Remarks

The Neyman-Pearson lemma shows that if both H_0 and H_1 are simple (a hypothesis is simple iff the corresponding set of populations contains exactly one element), there exists a UMP test that can be determined uniquely (a.s. P) except on the set B = {x : f_1(x) = c f_0(x)}. If ν(B) = 0, there is a unique nonrandomized UMP test; otherwise, UMP tests are randomized on the set B, and the randomization is necessary for UMP tests to have the given size α. We can always choose a UMP test that is constant on B.

Example 9.1

Suppose that X is a sample of size 1, P_0 = {P_0}, and P_1 = {P_1}, where P_0 is N(0, 1) and P_1 is the double exponential distribution DE(0, 2) with the p.d.f. (1/4) e^{−|x|/2}. Since P(f_1(X) = c f_0(X)) = 0, there is a unique nonrandomized UMP test.

Example 9.1 (cont.)

By the Neyman-Pearson lemma, T*(x) = 1 if and only if

    √(π/8) e^{(x² − |x|)/2} > c

for some c > 0, which is equivalent to |x| > t or |x| < 1 − t for some t > 1/2. It is not necessary to find c itself. Suppose that α < 1/3. To determine t, we use

    α = E_0[T*(X)] = P_0(|X| > t) + P_0(|X| < 1 − t).

If t ≤ 1, then P_0(|X| > t) + P_0(|X| < 1 − t) ≥ P_0(|X| > 1) ≈ 0.317 > α. Hence t should be larger than 1, and

    α = P_0(|X| > t) = Φ(−t) + 1 − Φ(t).

Thus, t = Φ^{−1}(1 − α/2) and T*(X) = I_{(t,∞)}(|X|). Intuitively, the reason why the UMP test in this example rejects H_0 when |X| is large is that the probability of getting a large |X| is much higher under H_1 (i.e., when P is the double exponential distribution DE(0, 2)). The power of T* when P ∈ P_1 is

    E_1[T*(X)] = P_1(|X| > t) = 1 − ∫_{−t}^{t} (1/4) e^{−|x|/2} dx = e^{−t/2}.
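The numbers in this example are easy to check numerically. A small Python sketch, assuming α = 0.05 and a bisection-based inverse normal c.d.f. (an illustrative helper, not part of the example):

```python
import math

def norm_cdf(x):
    # Standard normal c.d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # Inverse standard normal c.d.f. by bisection; accurate enough here.
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = 0.05
t = norm_ppf(1 - alpha / 2)      # rejection threshold: reject H0 when |X| > t
size = 2 * (1 - norm_cdf(t))     # P0(|X| > t) under N(0, 1); equals alpha
power = math.exp(-t / 2)         # P1(|X| > t) under DE(0, 2)
```

Here t ≈ 1.96 > 1, consistent with the argument that t must exceed 1 when α < 1/3, and the power e^{−t/2} ≈ 0.375 is well above the size 0.05.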

Example 9.2

Let X_1, ..., X_n be i.i.d. binary variables with p = P(X_1 = 1). Suppose that H_0: p = p_0 and H_1: p = p_1, where 0 < p_0 < p_1 < 1. By the Neyman-Pearson lemma, a UMP test of size α is

    T*(Y) = 1 if λ(Y) > c,
            γ if λ(Y) = c,
            0 if λ(Y) < c,

where Y = ∑_{i=1}^n X_i and λ(Y) = (p_1/p_0)^Y ((1 − p_1)/(1 − p_0))^{n−Y}. Since λ(Y) is increasing in Y, there is an integer m > 0 such that

    T*(Y) = 1 if Y > m,
            γ if Y = m,
            0 if Y < m,

where m and γ satisfy α = E_0[T*(Y)] = P_0(Y > m) + γ P_0(Y = m).

Example 9.2 (cont.)

Since Y has the binomial distribution Bi(p, n), we can determine m and γ from

    α = ∑_{j=m+1}^{n} C(n, j) p_0^j (1 − p_0)^{n−j} + γ C(n, m) p_0^m (1 − p_0)^{n−m}.

Unless

    α = ∑_{j=m+1}^{n} C(n, j) p_0^j (1 − p_0)^{n−j}

for some integer m, in which case we can choose γ = 0, the UMP test T* is a randomized test.

Remark

An interesting phenomenon in Example 9.2 is that the UMP test T* does not depend on p_1. In such a case, T* is in fact a UMP test for testing H_0: p = p_0 versus H_1: p > p_0.

Lemma 9.1

Suppose that there is a test T* of size α such that for every P_1 ∈ P_1, T* is UMP for testing H_0 versus the hypothesis P = P_1. Then T* is UMP for testing H_0 versus H_1.
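Determining m and γ amounts to accumulating binomial tail probabilities until α is bracketed. A sketch in Python; the function name and the illustrative values n = 10, p_0 = 0.5, α = 0.05 are this note's choices, not the source's:

```python
from math import comb

def binom_pmf(j, n, p):
    # P(Y = j) for Y ~ Bi(p, n).
    return comb(n, j) * p**j * (1 - p)**(n - j)

def ump_binomial(n, p0, alpha):
    """m and gamma for the randomized UMP test: reject if Y > m,
    reject with probability gamma if Y = m, accept if Y < m."""
    tail = 0.0  # accumulates P0(Y > m) as m decreases from n
    for m in range(n, -1, -1):
        pm = binom_pmf(m, n, p0)
        if tail + pm > alpha:
            # Now P0(Y > m) <= alpha < P0(Y >= m): randomize at Y = m.
            gamma = (alpha - tail) / pm
            return m, gamma
        tail += pm
    return -1, 0.0  # alpha >= 1: always reject

m, gamma = ump_binomial(10, 0.5, 0.05)
```

With these values, m = 8 and γ ≈ 0.893, and the size P_0(Y > 8) + γ P_0(Y = 8) equals 0.05 exactly, which no nonrandomized test can achieve here because the binomial tail takes only finitely many values.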

Monotone likelihood ratio family

Suppose that the distribution of X is in P = {P_θ : θ ∈ Θ}, a parametric family indexed by a real-valued θ, and that P is dominated by a σ-finite measure ν. Let f_θ = dP_θ/dν be the p.d.f. of P_θ. The family P is said to have monotone likelihood ratio in Y(X) (a real-valued statistic) if and only if, for any θ_1 < θ_2, f_{θ_2}(x)/f_{θ_1}(x) is a nondecreasing function of Y(x) for values x at which at least one of f_{θ_1}(x) and f_{θ_2}(x) is positive.

Lemma 9.2

Suppose that the distribution of X is in a parametric family P indexed by a real-valued θ and that P has monotone likelihood ratio in Y(X). If ψ is a nondecreasing function of Y, then g(θ) = E_θ[ψ(Y)] is a nondecreasing function of θ.

Example 9.3

Let θ be real-valued and η(θ) be a nondecreasing function of θ. Then the one-parameter exponential family with

    f_θ(x) = exp{η(θ)Y(x) − ξ(θ)} h(x)

has monotone likelihood ratio in Y(X).

Example 9.4

Let X_1, ..., X_n be i.i.d. from the uniform distribution on (0, θ), where θ > 0. The Lebesgue p.d.f. of X = (X_1, ..., X_n) is f_θ(x) = θ^{−n} I_{(0,θ)}(x_{(n)}), where x_{(n)} is the value of the largest order statistic X_{(n)}. For θ_1 < θ_2,

    f_{θ_2}(x)/f_{θ_1}(x) = (θ_1/θ_2)^n I_{(0,θ_2)}(x_{(n)}) / I_{(0,θ_1)}(x_{(n)}),

which is a nondecreasing function of x_{(n)} for x's at which at least one of f_{θ_1}(x) and f_{θ_2}(x) is positive, i.e., x_{(n)} < θ_2. Hence the family of distributions of X has monotone likelihood ratio in X_{(n)}.

Example 9.5

The following families have monotone likelihood ratio: the double exponential distribution family {DE(θ, c)} with a known c; the exponential distribution family {E(θ, c)} with a known c; the logistic distribution family {LG(θ, c)} with a known c; and the hypergeometric distribution family {HG(r, θ, N − θ)} with known r and N. An example of a family that does not have monotone likelihood ratio is the Cauchy distribution family {C(θ, c)} with a known c.

Testing one-sided hypotheses

Hypotheses of the form H_0: θ ≤ θ_0 (or H_0: θ ≥ θ_0) versus H_1: θ > θ_0 (or H_1: θ < θ_0) are called one-sided hypotheses for any fixed constant θ_0.

Theorem 9.1

Suppose that X has a distribution in P = {P_θ : θ ∈ Θ} (Θ ⊆ R) that has monotone likelihood ratio in Y(X). Consider the problem of testing H_0: θ ≤ θ_0 versus H_1: θ > θ_0, where θ_0 is a given constant.

(i) There exists a UMP test of size α, which is given by

    T*(X) = 1 if Y(X) > c,
            γ if Y(X) = c,
            0 if Y(X) < c,

where c and γ are determined by β_{T*}(θ_0) = α, and β_T(θ) = E_θ[T(X)] is the power function of a test T.

(ii) β_{T*}(θ) is strictly increasing for all θ's for which 0 < β_{T*}(θ) < 1.

(iii) For any θ < θ_0, T* minimizes β_T(θ) (the Type I error probability of T at θ) among all tests T satisfying β_T(θ_0) = α.

(iv) Assume that P_θ(f_θ(X) = c f_{θ_0}(X)) = 0 for any θ > θ_0 and c ≥ 0, where f_θ is the p.d.f. of P_θ. If T is a test with β_T(θ_0) = β_{T*}(θ_0), then for any θ > θ_0, either β_T(θ) < β_{T*}(θ) or T = T* a.s. P_θ.

(v) For any fixed θ_1, T* is UMP for testing H_0: θ ≤ θ_1 versus H_1: θ > θ_1, with size β_{T*}(θ_1).

Corollary 9.1 (one-parameter exponential families)

Suppose that X has a p.d.f. in a one-parameter exponential family with η being a strictly monotone function of θ. If η is increasing, then T* given by Theorem 9.1 is UMP for testing H_0: θ ≤ θ_0 versus H_1: θ > θ_0, where γ and c are determined by β_{T*}(θ_0) = α. If η is decreasing or H_0: θ ≥ θ_0 (H_1: θ < θ_0), the result is still valid with the inequalities in the definition of T* reversed.

Example 9.6

Let X_1, ..., X_n be i.i.d. from the N(μ, σ²) distribution with an unknown μ ∈ R and a known σ². Consider H_0: μ ≤ μ_0 versus H_1: μ > μ_0, where μ_0 is a fixed constant. The p.d.f. of X = (X_1, ..., X_n) is from a one-parameter exponential family with Y(X) = X̄ and η(μ) = nμ/σ². By Corollary 9.1 and the fact that X̄ is N(μ, σ²/n), the UMP test is T*(X) = I_{(c_α,∞)}(X̄), where

    c_α = σ z_{1−α}/√n + μ_0   and   z_a = Φ^{−1}(a).

Discussion

To derive a UMP test for testing H_0: θ ≤ θ_0 versus H_1: θ > θ_0 when X has a p.d.f. in a one-parameter exponential family, it is essential to know the distribution of Y(X). Typically, a nonrandomized test can be obtained if the distribution of Y is continuous; otherwise UMP tests are randomized.
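As a sketch, the UMP test of this example and its power function can be coded directly in Python; `norm_ppf` is an illustrative bisection helper, not part of the example:

```python
import math

def norm_cdf(x):
    # Standard normal c.d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # Inverse standard normal c.d.f. by bisection.
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ump_power(mu, mu0, sigma, n, alpha):
    """Power beta(mu) of the UMP test T* = I(Xbar > c_alpha),
    with c_alpha = sigma * z_{1-alpha}/sqrt(n) + mu0."""
    c_alpha = sigma * norm_ppf(1 - alpha) / math.sqrt(n) + mu0
    # Xbar ~ N(mu, sigma^2/n), so P(Xbar > c_alpha) = 1 - Phi(sqrt(n)(c_alpha - mu)/sigma).
    return 1.0 - norm_cdf(math.sqrt(n) * (c_alpha - mu) / sigma)
```

At the boundary μ = μ_0 the power equals α, and it is strictly increasing in μ, as Theorem 9.1(ii) asserts.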

Example 9.7

Let X_1, ..., X_n be i.i.d. random variables from the Poisson distribution P(θ) with an unknown θ > 0. The p.d.f. of X = (X_1, ..., X_n) is from a one-parameter exponential family with Y(X) = ∑_{i=1}^n X_i and η(θ) = log θ. Note that Y has the Poisson distribution P(nθ). By Corollary 9.1, a UMP test for H_0: θ ≤ θ_0 versus H_1: θ > θ_0 is given by Theorem 9.1 with c and γ satisfying

    α = ∑_{j=c+1}^{∞} e^{−nθ_0} (nθ_0)^j / j! + γ e^{−nθ_0} (nθ_0)^c / c!.

Example 9.8

Let X_1, ..., X_n be i.i.d. random variables from the uniform distribution U(0, θ), θ > 0. Consider the hypotheses H_0: θ ≤ θ_0 and H_1: θ > θ_0. The p.d.f. of X = (X_1, ..., X_n) is in a family with monotone likelihood ratio in Y(X) = X_{(n)} (Example 9.4).
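In the Poisson case the constants c and γ can be found by accumulating the p.m.f. at the boundary. A Python sketch; the illustrative values n = 10, θ_0 = 0.1, α = 0.05 (so nθ_0 = 1) are this note's choices:

```python
import math

def pois_pmf(j, lam):
    # P(Y = j) for Y ~ Poisson(lam).
    return math.exp(-lam) * lam**j / math.factorial(j)

def ump_poisson(n, theta0, alpha):
    """c and gamma for the randomized UMP test of H0: theta <= theta0
    versus H1: theta > theta0; at the boundary, Y = sum(X_i) ~ Poisson(n*theta0).
    Reject if Y > c; reject with probability gamma if Y = c."""
    lam = n * theta0
    cdf, c = 0.0, 0
    # Find the smallest c with P(Y <= c) > 1 - alpha.
    while cdf + pois_pmf(c, lam) <= 1 - alpha:
        cdf += pois_pmf(c, lam)
        c += 1
    tail = 1.0 - cdf - pois_pmf(c, lam)   # P(Y > c) < alpha
    gamma = (alpha - tail) / pois_pmf(c, lam)
    return c, gamma

c, gamma = ump_poisson(10, 0.1, 0.05)
```

The randomization at Y = c tops the tail probability up to exactly α, just as in the binomial case of Example 9.2.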

Example 9.8 (continued)

By Theorem 9.1, a UMP test is T*. Since X_{(n)} has the Lebesgue p.d.f. n θ^{−n} x^{n−1} I_{(0,θ)}(x), the UMP test T* is nonrandomized and

    α = β_{T*}(θ_0) = (n/θ_0^n) ∫_c^{θ_0} x^{n−1} dx = 1 − c^n/θ_0^n.

Hence c = θ_0 (1 − α)^{1/n}. The power function of T* when θ > θ_0 is

    β_{T*}(θ) = (n/θ^n) ∫_c^{θ} x^{n−1} dx = 1 − θ_0^n (1 − α)/θ^n.

In this problem, however, UMP tests are not unique. (Note that the condition P_θ(f_θ(X) = c f_{θ_0}(X)) = 0 in Theorem 9.1(iv) is not satisfied.) It can be shown (exercise) that the following test is also UMP with size α:

    T(X) = 1 if X_{(n)} > θ_0,
           α if X_{(n)} ≤ θ_0.
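The closed-form cutoff and power function above translate directly into code. A Python sketch; the illustrative values θ_0 = 1, n = 5, α = 0.05 are this note's choices:

```python
def ump_uniform(theta0, n, alpha):
    """Cutoff c and power function of the UMP test T* = I(X_(n) > c)
    for H0: theta <= theta0 vs H1: theta > theta0, X_i i.i.d. U(0, theta)."""
    c = theta0 * (1 - alpha) ** (1 / n)
    def power(theta):
        # P_theta(X_(n) > c) = 1 - (c/theta)^n for theta > c, else 0.
        return 1 - (c / theta) ** n if theta > c else 0.0
    return c, power

c, power = ump_uniform(1.0, 5, 0.05)
```

By construction power(θ_0) = α, and for θ > θ_0 the power 1 − θ_0^n(1 − α)/θ^n increases quickly toward 1.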

Generalized Neyman-Pearson lemma

Let f_1, ..., f_{m+1} be functions on R^p integrable w.r.t. a σ-finite measure ν. For given constants t_1, ..., t_m, let T be the class of functions φ (from R^p to [0, 1]) satisfying

    ∫ φ f_i dν ≤ t_i,  i = 1, ..., m,   (1)

and let T_0 be the set of φ's in T satisfying (1) with all inequalities replaced by equalities.

(i) If there are constants c_1, ..., c_m such that

    φ*(x) = 1 if f_{m+1}(x) > c_1 f_1(x) + ... + c_m f_m(x),
            0 if f_{m+1}(x) < c_1 f_1(x) + ... + c_m f_m(x)

is a member of T_0, then φ* maximizes ∫ φ f_{m+1} dν over φ ∈ T_0.

(ii) If c_i ≥ 0 for all i, then φ* maximizes ∫ φ f_{m+1} dν over φ ∈ T.

(iii) The set M = {(∫ φ f_1 dν, ..., ∫ φ f_m dν) : φ is from R^p to [0, 1]} is convex and closed. If (t_1, ..., t_m) is an interior point of M, then the constants c_1, ..., c_m in (i) exist.

Two-sided hypotheses

The following hypotheses are called two-sided hypotheses, where θ_0, θ_1, and θ_2 are given constants with θ_1 < θ_2:

    H_0: θ ≤ θ_1 or θ ≥ θ_2 versus H_1: θ_1 < θ < θ_2,   (2)
    H_0: θ_1 ≤ θ ≤ θ_2 versus H_1: θ < θ_1 or θ > θ_2,   (3)
    H_0: θ = θ_0 versus H_1: θ ≠ θ_0.                    (4)

Remark

UMP tests for hypotheses (2) exist for one-parameter exponential families and some non-exponential families. A UMP test does not exist in general for testing hypotheses (3) or (4). The reason is that UMP tests for testing one-sided hypotheses do not have level α for testing (2), but they are of level α for testing (3) or (4), and there does not exist a single test more powerful than all tests that are UMP for testing one-sided hypotheses.

Theorem 9.2 (UMP tests for two-sided hypotheses)

Suppose that X has a p.d.f. w.r.t. a σ-finite measure in a one-parameter exponential family: f_θ(x) = exp{η(θ)Y(x) − ξ(θ)} h(x).

(i) For testing hypotheses (2), a UMP test of size α is

    T*(X) = 1 if c_1 < Y(X) < c_2,
            γ_i if Y(X) = c_i, i = 1, 2,
            0 if Y(X) < c_1 or Y(X) > c_2,   (5)

where the c_i's and γ_i's are determined by

    β_{T*}(θ_1) = β_{T*}(θ_2) = α.   (6)

(ii) T* minimizes β_T(θ) over θ < θ_1 and θ > θ_2 among all tests T satisfying (6).

(iii) If T and T* are two tests satisfying (5) with β_T(θ_1) = β_{T*}(θ_1), and the region {T = 1} lies to the right of {T* = 1}, then β_T(θ) > β_{T*}(θ) for θ > θ_1 and β_T(θ) < β_{T*}(θ) for θ < θ_1. If both T and T* satisfy (5) and (6), then T = T* a.s. P.

Example 9.9

Let X_1, ..., X_n be i.i.d. from N(θ, 1). By Theorem 9.2, a UMP test for testing (2) is T*(X) = I_{(c_1,c_2)}(X̄), where the c_i's are determined by

    Φ(√n (c_2 − θ_1)) − Φ(√n (c_1 − θ_1)) = α

and

    Φ(√n (c_2 − θ_2)) − Φ(√n (c_1 − θ_2)) = α.
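These two equations can be solved numerically. A Python sketch, using the symmetry of the normal location family: if c_1 + c_2 = θ_1 + θ_2, the two conditions coincide, so one bisection in c_2 suffices. The helper name and the illustrative values θ_1 = −0.5, θ_2 = 0.5, n = 25, α = 0.05 are this note's choices:

```python
import math

def norm_cdf(x):
    # Standard normal c.d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sided_cutoffs(theta1, theta2, n, alpha):
    """c1, c2 for the UMP test of H0: theta <= theta1 or theta >= theta2
    versus H1: theta1 < theta < theta2, with X_i i.i.d. N(theta, 1).

    By symmetry, c1 + c2 = theta1 + theta2 makes the power at theta1 equal
    the power at theta2, so we only solve
    Phi(sqrt(n)(c2 - theta1)) - Phi(sqrt(n)(c1 - theta1)) = alpha for c2.
    """
    s = theta1 + theta2
    def g(c2):
        c1 = s - c2
        return (norm_cdf(math.sqrt(n) * (c2 - theta1))
                - norm_cdf(math.sqrt(n) * (c1 - theta1)) - alpha)
    lo, hi = s / 2, theta2 + 10 / math.sqrt(n)   # g(lo) = -alpha < 0 < g(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    c2 = 0.5 * (lo + hi)
    return s - c2, c2

c1, c2 = two_sided_cutoffs(-0.5, 0.5, 25, 0.05)
```

Bisection applies because widening the interval (c_1, c_2) symmetrically about (θ_1 + θ_2)/2 strictly increases the rejection probability, so g is monotone in c_2.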


More information

P Values and Nuisance Parameters

P Values and Nuisance Parameters P Values and Nuisance Parameters Luc Demortier The Rockefeller University PHYSTAT-LHC Workshop on Statistical Issues for LHC Physics CERN, Geneva, June 27 29, 2007 Definition and interpretation of p values;

More information

Tests and Their Power

Tests and Their Power Tests and Their Power Ling Kiong Doong Department of Mathematics National University of Singapore 1. Introduction In Statistical Inference, the two main areas of study are estimation and testing of hypotheses.

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics

More information

10. Composite Hypothesis Testing. ECE 830, Spring 2014

10. Composite Hypothesis Testing. ECE 830, Spring 2014 10. Composite Hypothesis Testing ECE 830, Spring 2014 1 / 25 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve unknown parameters

More information

Statistical hypothesis testing The parametric and nonparametric cases. Madalina Olteanu, Université Paris 1

Statistical hypothesis testing The parametric and nonparametric cases. Madalina Olteanu, Université Paris 1 Statistical hypothesis testing The parametric and nonparametric cases Madalina Olteanu, Université Paris 1 2016-2017 Contents 1 Parametric hypothesis testing 3 1.1 An introduction on statistical hypothesis

More information

Chapter 1. Statistical Spaces

Chapter 1. Statistical Spaces Chapter 1 Statistical Spaces Mathematical statistics is a science that studies the statistical regularity of random phenomena, essentially by some observation values of random variable (r.v.) X. Sometimes

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

the convolution of f and g) given by

the convolution of f and g) given by 09:53 /5/2000 TOPIC Characteristic functions, cont d This lecture develops an inversion formula for recovering the density of a smooth random variable X from its characteristic function, and uses that

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability Chapter 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary family

More information

STATISTICS SYLLABUS UNIT I

STATISTICS SYLLABUS UNIT I STATISTICS SYLLABUS UNIT I (Probability Theory) Definition Classical and axiomatic approaches.laws of total and compound probability, conditional probability, Bayes Theorem. Random variable and its distribution

More information

Solution: First note that the power function of the test is given as follows,

Solution: First note that the power function of the test is given as follows, Problem 4.5.8: Assume the life of a tire given by X is distributed N(θ, 5000 ) Past experience indicates that θ = 30000. The manufacturere claims the tires made by a new process have mean θ > 30000. Is

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random

More information

Hypothesis Testing. BS2 Statistical Inference, Lecture 11 Michaelmas Term Steffen Lauritzen, University of Oxford; November 15, 2004

Hypothesis Testing. BS2 Statistical Inference, Lecture 11 Michaelmas Term Steffen Lauritzen, University of Oxford; November 15, 2004 Hypothesis Testing BS2 Statistical Inference, Lecture 11 Michaelmas Term 2004 Steffen Lauritzen, University of Oxford; November 15, 2004 Hypothesis testing We consider a family of densities F = {f(x; θ),

More information

Topic 10: Hypothesis Testing

Topic 10: Hypothesis Testing Topic 10: Hypothesis Testing Course 003, 2016 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

Chapter 8.8.1: A factorization theorem

Chapter 8.8.1: A factorization theorem LECTURE 14 Chapter 8.8.1: A factorization theorem The characterization of a sufficient statistic in terms of the conditional distribution of the data given the statistic can be difficult to work with.

More information

Summary of Chapters 7-9

Summary of Chapters 7-9 Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

Statistical Inference

Statistical Inference Statistical Inference Classical and Bayesian Methods Revision Class for Midterm Exam AMS-UCSC Th Feb 9, 2012 Winter 2012. Session 1 (Revision Class) AMS-132/206 Th Feb 9, 2012 1 / 23 Topics Topics We will

More information

Methods of evaluating tests

Methods of evaluating tests Methods of evaluating tests Let X,, 1 Xn be i.i.d. Bernoulli( p ). Then 5 j= 1 j ( 5, ) T = X Binomial p. We test 1 H : p vs. 1 1 H : p>. We saw that a LRT is 1 if t k* φ ( x ) =. otherwise (t is the observed

More information

DA Freedman Notes on the MLE Fall 2003

DA Freedman Notes on the MLE Fall 2003 DA Freedman Notes on the MLE Fall 2003 The object here is to provide a sketch of the theory of the MLE. Rigorous presentations can be found in the references cited below. Calculus. Let f be a smooth, scalar

More information

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Review Quiz 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Cramér Rao lower bound (CRLB). That is, if where { } and are scalars, then

More information

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45 Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS 21 June 2010 9:45 11:45 Answer any FOUR of the questions. University-approved

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

2014/2015 Smester II ST5224 Final Exam Solution

2014/2015 Smester II ST5224 Final Exam Solution 014/015 Smester II ST54 Final Exam Solution 1 Suppose that (X 1,, X n ) is a random sample from a distribution with probability density function f(x; θ) = e (x θ) I [θ, ) (x) (i) Show that the family of

More information

Importance Sampling and. Radon-Nikodym Derivatives. Steven R. Dunbar. Sampling with respect to 2 distributions. Rare Event Simulation

Importance Sampling and. Radon-Nikodym Derivatives. Steven R. Dunbar. Sampling with respect to 2 distributions. Rare Event Simulation 1 / 33 Outline 1 2 3 4 5 2 / 33 More than one way to evaluate a statistic A statistic for X with pdf u(x) is A = E u [F (X)] = F (x)u(x) dx 3 / 33 Suppose v(x) is another probability density such that

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Master s Written Examination - Solution

Master s Written Examination - Solution Master s Written Examination - Solution Spring 204 Problem Stat 40 Suppose X and X 2 have the joint pdf f X,X 2 (x, x 2 ) = 2e (x +x 2 ), 0 < x < x 2

More information

http://www.math.uah.edu/stat/hypothesis/.xhtml 1 of 5 7/29/2009 3:14 PM Virtual Laboratories > 9. Hy pothesis Testing > 1 2 3 4 5 6 7 1. The Basic Statistical Model As usual, our starting point is a random

More information

Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics. 1 Executive summary

Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics. 1 Executive summary ECE 830 Spring 207 Instructor: R. Willett Lecture 5: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we saw that the likelihood

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

Chapter 8 Hypothesis Testing

Chapter 8 Hypothesis Testing Leture 5 for BST 63: Statistial Theory II Kui Zhang, Spring Chapter 8 Hypothesis Testing Setion 8 Introdution Definition 8 A hypothesis is a statement about a population parameter Definition 8 The two

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by L p Functions Given a measure space (, µ) and a real number p [, ), recall that the L p -norm of a measurable function f : R is defined by f p = ( ) /p f p dµ Note that the L p -norm of a function f may

More information

Testing Statistical Hypotheses

Testing Statistical Hypotheses E.L. Lehmann Joseph P. Romano Testing Statistical Hypotheses Third Edition 4y Springer Preface vii I Small-Sample Theory 1 1 The General Decision Problem 3 1.1 Statistical Inference and Statistical Decisions

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)

More information

On the GLR and UMP tests in the family with support dependent on the parameter

On the GLR and UMP tests in the family with support dependent on the parameter STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 3, September 2015, pp 221 228. Published online in International Academic Press (www.iapress.org On the GLR and UMP tests

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics Chapter Three. Point Estimation 3.4 Uniformly Minimum Variance Unbiased Estimator(UMVUE) Criteria for Best Estimators MSE Criterion Let F = {p(x; θ) : θ Θ} be a parametric distribution

More information

Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed

Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 18.466 Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 1. MLEs in exponential families Let f(x,θ) for x X and θ Θ be a likelihood function, that is, for present purposes,

More information

Hypothesis Testing: The Generalized Likelihood Ratio Test

Hypothesis Testing: The Generalized Likelihood Ratio Test Hypothesis Testing: The Generalized Likelihood Ratio Test Consider testing the hypotheses H 0 : θ Θ 0 H 1 : θ Θ \ Θ 0 Definition: The Generalized Likelihood Ratio (GLR Let L(θ be a likelihood for a random

More information

On likelihood ratio tests

On likelihood ratio tests IMS Lecture Notes-Monograph Series 2nd Lehmann Symposium - Optimality Vol. 49 (2006) 1-8 Institute of Mathematical Statistics, 2006 DOl: 10.1214/074921706000000356 On likelihood ratio tests Erich L. Lehmann

More information

ECE531 Lecture 4b: Composite Hypothesis Testing

ECE531 Lecture 4b: Composite Hypothesis Testing ECE531 Lecture 4b: Composite Hypothesis Testing D. Richard Brown III Worcester Polytechnic Institute 16-February-2011 Worcester Polytechnic Institute D. Richard Brown III 16-February-2011 1 / 44 Introduction

More information

ECE 275B Homework # 1 Solutions Version Winter 2015

ECE 275B Homework # 1 Solutions Version Winter 2015 ECE 275B Homework # 1 Solutions Version Winter 2015 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2

More information

Chapters 10. Hypothesis Testing

Chapters 10. Hypothesis Testing Chapters 10. Hypothesis Testing Some examples of hypothesis testing 1. Toss a coin 100 times and get 62 heads. Is this coin a fair coin? 2. Is the new treatment more effective than the old one? 3. Quality

More information