Math 494: Mathematical Statistics


Instructor: Jimin Ding, Department of Mathematics, Washington University in St. Louis. Class materials are available on the course website (www.math.wustl.edu/~jmding/math494/). Spring 2018.

Optimal Estimations and Tests

Motivation

We have learned how to find an estimator, how to construct a confidence interval (CI), and how to construct a test. But when there are many possible estimators or different ways of testing, how do we evaluate or compare different estimators, different CIs, and different tests? For example:

- Both the sample mean and the sample median can be used to estimate the population mean; which one is better?
- In the first exam, for the confidence interval for the intensity of Poisson random variables, someone suggested using X̄ ± 1.96 S/√n, while the posted solution says X̄(1 ± 1.96/√(nX̄)); which one is better?
- When the data are normally distributed with known variance σ², both the t-test and the z-test can be used to test the mean; which one is better?
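To make the first question concrete, here is a minimal Monte Carlo sketch (not from the slides; the normal model, sample size, and seed are arbitrary choices) comparing the sample mean and the sample median as estimators of a normal population mean. Both are essentially unbiased, but the mean has the smaller variance.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 3.0, 2.0, 50, 20000
samples = rng.normal(mu, sigma, size=(reps, n))

means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

# Both are (nearly) unbiased for mu, but the mean has smaller variance;
# for normal data and large n, Var(median)/Var(mean) approaches pi/2.
print("mean  : bias=%.4f  var=%.4f" % (means.mean() - mu, means.var()))
print("median: bias=%.4f  var=%.4f" % (medians.mean() - mu, medians.var()))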

Outline

Optimal Estimations and Tests
  UMVUE
  UMP Tests

UMVUE

Minimum Variance Unbiased Estimator (MVUE)

If θ̂₁ and θ̂₂ are both unbiased estimators for θ, then we would prefer to use the one with smaller variance, since it is more precise and leads to a narrower confidence interval.

Definition. An unbiased estimator θ̂ of θ is called a minimum variance unbiased estimator (MVUE) if it has the smallest variance among all unbiased estimators. That is, Var(θ̂) ≤ Var(θ̂′), where θ̂′ is any estimator of θ with E(θ̂′) = θ.

Uniformly Minimum Variance Unbiased Estimator (UMVUE)

Note that the variance of an estimator may depend on the true parameter θ. For example, let X₁, …, Xₙ be iid Ber(θ), θ ∈ (0, 1), and consider θ̂ = X̄ as an estimator of θ. Here Var(θ̂) = θ(1 − θ)/n, which depends on θ. Of course, we often do not know θ, so we want an unbiased estimator that has the smallest variance no matter what the value of θ is.

Definition. An unbiased estimator θ̂ of θ is called a uniformly minimum variance unbiased estimator (UMVUE) if it has the smallest variance among all unbiased estimators for all θ ∈ Θ. That is, Var(θ̂) ≤ Var(θ̂′) for all θ ∈ Θ, where θ̂′ is any estimator of θ with E(θ̂′) = θ.
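A quick numerical check of the Bernoulli example above (an illustrative sketch; the sample size and the grid of θ values are arbitrary): the variance of the unbiased estimator θ̂ = X̄ really is θ(1 − θ)/n, so how precise the estimator is changes with the unknown θ, which is why the definition asks for the minimum variance to hold uniformly in θ.

import numpy as np

rng = np.random.default_rng(1)
n, reps = 30, 50000
for theta in (0.1, 0.5, 0.9):
    xbar = rng.binomial(n, theta, size=reps) / n   # sample means of Ber(theta) samples
    print("theta=%.1f  simulated Var(X-bar)=%.5f  theta(1-theta)/n=%.5f"
          % (theta, xbar.var(), theta * (1 - theta) / n))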

How to Find an MVUE or UMVUE

Theorem (Rao-Blackwell, Theorem 7.3.1). Let θ̂₁ be an unbiased estimator of θ and T be a sufficient statistic for θ. Define θ̂₂ := E(θ̂₁ | T), which is a function of T. Then θ̂₂ is also unbiased for θ and Var(θ̂₂) ≤ Var(θ̂₁) for all θ ∈ Θ.

Remark: So θ̂₂ is an improved estimator of θ.

Theorem (Lehmann & Scheffé, Theorem 7.4.1). Let T be a complete and sufficient statistic for θ. If E(g(T)) = θ, then g(T) is the unique UMVUE of θ.

Remark: If an unbiased estimator θ̂ is a function of a complete and sufficient statistic, then it is the unique UMVUE.
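The following sketch illustrates the Rao-Blackwell step numerically with an assumed Poisson model (not one of the slide examples): starting from the crude unbiased estimator θ̂₁ = X₁ of the Poisson mean and conditioning on the sufficient statistic T = Σᵢ Xᵢ gives θ̂₂ = E(X₁ | T) = T/n = X̄, which is still unbiased but has much smaller variance.

import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 4.0, 25, 40000
x = rng.poisson(lam, size=(reps, n))

theta1 = x[:, 0]           # unbiased but wasteful: uses only X_1
theta2 = x.mean(axis=1)    # the Rao-Blackwellized estimator E(theta1 | T) = T/n

print("theta1: mean=%.3f  var=%.3f" % (theta1.mean(), theta1.var()))  # roughly lam, lam
print("theta2: mean=%.3f  var=%.3f" % (theta2.mean(), theta2.var()))  # roughly lam, lam/n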

Examples

1. Ex.: Let X₁, …, Xₙ be iid Ber(θ), θ ∈ (0, 1).
- T = Σᵢ Xᵢ is complete and sufficient (exponential family).
- Using the unbiased estimator θ̂₁ = (X₁ + X₂)/2, construct θ̂₂ = E(θ̂₁ | T) = E(X₁ | T) = T/n.
- E(θ̂₂) = θ and Var(θ̂₂) = θ(1 − θ)/n < θ(1 − θ)/2 = Var(θ̂₁).
- θ̂₂ is the unique UMVUE by the Lehmann-Scheffé theorem.

2. Ex. & Example 7.4.2: Let X₁, …, Xₙ be iid U(0, θ), θ > 0.
- T = X(n), the sample maximum, is sufficient and is the MLE of θ. It is also complete.
- Let θ̂₁ = T. Since E(θ̂₁) = nθ/(n + 1) < θ, θ̂₁ is biased.
- Consider θ̂₂ = (n + 1)T/n. It is unbiased, so by the Lehmann-Scheffé theorem θ̂₂ is the unique UMVUE.
- Remark: this is a non-regular case and does not belong to an exponential family.
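A simulation check of Example 2 (a sketch; the competing unbiased estimator 2X̄, based on the method of moments, is added here for comparison and is not on the slide): both (n + 1)X(n)/n and 2X̄ are unbiased for θ under U(0, θ), but the UMVUE built from the complete sufficient statistic X(n) has far smaller variance.

import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 5.0, 20, 40000
x = rng.uniform(0, theta, size=(reps, n))

umvue = (n + 1) / n * x.max(axis=1)   # (n+1)/n * X_(n)
mom = 2 * x.mean(axis=1)              # 2 * X-bar, also unbiased for theta

print("UMVUE : mean=%.4f  var=%.5f" % (umvue.mean(), umvue.var()))
print("2*Xbar: mean=%.4f  var=%.5f" % (mom.mean(), mom.var()))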

More Examples: Not Efficient (1)

A UMVUE may not always reach the Cramér-Rao (CR) lower bound, and hence may not be efficient.

3. Let X₁, …, Xₙ be iid with density f(x; θ) = θ e^(−θx), x > 0, θ > 0.
- T = Σᵢ Xᵢ is complete and sufficient for θ (exponential family).
- T ∼ Gamma(n, 1/θ) since Xᵢ ∼ Gamma(1, 1/θ).
- The MLE is θ̂₁ = 1/X̄, but E(θ̂₁) = nθ/(n − 1), so it is biased.
- Consider θ̂₂ = (n − 1)θ̂₁/n = (n − 1)/(nX̄) = (n − 1)/T, which is a function of T and unbiased. So θ̂₂ is the unique UMVUE.
- But the variance of this UMVUE does not reach the CR lower bound: I(θ) = Var(∂ log f(X; θ)/∂θ) = Var(1/θ − X) = 1/θ², and Var(θ̂₂) = θ²/(n − 2) > θ²/n = (nI(θ))⁻¹. So here the UMVUE is not efficient.
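A numerical check of Example 3 (an illustrative sketch; θ, n, and the number of replications are arbitrary): the UMVUE θ̂₂ = (n − 1)/Σᵢ Xᵢ is unbiased, its Monte Carlo variance matches θ²/(n − 2), and it stays above the Cramér-Rao bound θ²/n.

import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 2.0, 10, 200000
x = rng.exponential(scale=1 / theta, size=(reps, n))   # density theta * exp(-theta * x)

theta2 = (n - 1) / x.sum(axis=1)                       # the UMVUE (n - 1)/T
print("mean = %.4f  (target theta = %.4f)" % (theta2.mean(), theta))
print("var  = %.5f  theory theta^2/(n-2) = %.5f  CR bound theta^2/n = %.5f"
      % (theta2.var(), theta**2 / (n - 2), theta**2 / n))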

More Examples: Not Efficient (2)

A UMVUE may not always reach the CR lower bound, and hence may not be efficient.

4. Let X₁, …, Xₙ be iid N(µ, σ²), µ ∈ R, σ² > 0.
- T = (Σᵢ Xᵢ, Σᵢ Xᵢ²) is complete and sufficient for (µ, σ²).
- The MLE is (µ̂, σ̂²) with µ̂ = X̄ and σ̂² = (1/n) Σᵢ (Xᵢ − X̄)², a function of T. But it is biased: E(µ̂) = µ, while E(σ̂²) = (n − 1)σ²/n.
- Consider S² = nσ̂²/(n − 1). It is still a function of T and is now unbiased. So θ̂₂ = (X̄, S²) is the unique UMVUE.
- But the UMVUE is not efficient, since its variance does not reach the CR lower bound:
  Var(θ̂₂) = diag(σ²/n, 2σ⁴/(n − 1)) > diag(σ²/n, 2σ⁴/n) = (nI(θ))⁻¹.
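A similar check of Example 4 (a sketch with arbitrary µ, σ, and n): S² is unbiased for σ², but Var(S²) = 2σ⁴/(n − 1) exceeds the Cramér-Rao bound 2σ⁴/n.

import numpy as np

rng = np.random.default_rng(5)
mu, sigma, n, reps = 1.0, 2.0, 15, 100000
x = rng.normal(mu, sigma, size=(reps, n))

s2 = x.var(axis=1, ddof=1)   # sample variance S^2
print("E[S^2]   = %.4f  (sigma^2 = %.4f)" % (s2.mean(), sigma**2))
print("Var(S^2) = %.4f  theory 2*sigma^4/(n-1) = %.4f  CR bound 2*sigma^4/n = %.4f"
      % (s2.var(), 2 * sigma**4 / (n - 1), 2 * sigma**4 / n))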

UMP Tests

Review

Let X₁, …, Xₙ be iid f(x; θ), θ ∈ Θ. Test H₀: θ ∈ Θ₀ vs. H₁: θ ∈ Θ₁ = Θ \ Θ₀ (simple or composite).

- Rejection rule: reject H₀ if (X₁, …, Xₙ) ∈ C (rejection region); fail to reject H₀ if (X₁, …, Xₙ) ∈ Cᶜ (acceptance region).
- Probability of Type I error (false rejection): α = max_{θ ∈ Θ₀} P((X₁, …, Xₙ) ∈ C) = max P(reject H₀ | H₀ is true).
- Power function: γ_C(θ) = P((X₁, …, Xₙ) ∈ C), θ ∈ Θ.
- Power: 1 − β = P((X₁, …, Xₙ) ∈ C), θ ∈ Θ₁.
- Probability of Type II error (false acceptance): β.
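As a concrete illustration of the power function (a sketch with an assumed setup, not a slide example): for X₁, …, Xₙ iid N(µ, σ²) with σ known, the one-sided z-test of H₀: µ ≤ 0 vs. H₁: µ > 0 rejects when X̄ exceeds z_(1−α) σ/√n. Its power function γ_C(µ) has a closed form, equals α at µ = 0, and increases with µ.

import numpy as np
from scipy.stats import norm

alpha, sigma, n = 0.05, 1.0, 25
crit = norm.ppf(1 - alpha) * sigma / np.sqrt(n)   # reject H0 when X-bar > crit

def power(mu):
    # gamma_C(mu) = P(X-bar > crit) when the true mean is mu
    return 1 - norm.cdf((crit - mu) / (sigma / np.sqrt(n)))

for mu in (0.0, 0.1, 0.3, 0.5):
    print("mu=%.1f  power=%.3f" % (mu, power(mu)))  # equals alpha at mu = 0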

Visualizing α, β, and Power in a Test

[Figure: two distributions centered at the null and alternative values of µ, with the rejection region ("Reject Null"), the acceptance region ("Fail to Reject Null"), and the areas corresponding to α, β, and power.]

Comparison of Tests

Definition. Suppose two rejection regions C₁ and C₂ have the same size (type I error probability), i.e.,
α = max_{θ ∈ Θ₀} P((X₁, …, Xₙ) ∈ C₁) = max_{θ ∈ Θ₀} P((X₁, …, Xₙ) ∈ C₂),
and γ_{C₁}(θ) > γ_{C₂}(θ) for all θ ∈ Θ₁. Then we say C₁ is better than C₂ (that is, the rejection rule based on C₁ is better than the rejection rule based on C₂).

Q: Is there a/the best rejection region? If so, how do we find it?
Idea: first control α, then maximize the power.
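The following sketch compares two size-α rejection regions in an assumed normal-mean example (not from the slides): C₁ rejects when X̄ is large and C₂ rejects when X₁ alone is large. Both have type I error α, but C₁ has higher power at every µ > 0, so C₁ is the better rejection region in the sense of the definition above.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
alpha, sigma, n, reps, mu1 = 0.05, 1.0, 25, 40000, 0.4
x = rng.normal(mu1, sigma, size=(reps, n))   # data generated under the alternative

c1 = x.mean(axis=1) > norm.ppf(1 - alpha) * sigma / np.sqrt(n)  # C1: reject when X-bar is large
c2 = x[:, 0] > norm.ppf(1 - alpha) * sigma                      # C2: reject when X_1 is large
print("power of C1 at mu=%.1f: %.3f" % (mu1, c1.mean()))
print("power of C2 at mu=%.1f: %.3f" % (mu1, c2.mean()))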

Most Powerful Test & Uniformly Most Powerful Test

Consider testing the simple hypotheses H₀: θ = θ₀ vs. H₁: θ = θ₁. We say C is a best rejection region of size α for the test if
P((X₁, …, Xₙ) ∈ C | θ = θ₀) = α
and
P((X₁, …, Xₙ) ∈ C | θ = θ₁) ≥ P((X₁, …, Xₙ) ∈ C′ | θ = θ₁)
for any C′ such that P((X₁, …, Xₙ) ∈ C′ | θ = θ₀) = α. The test based on C is called a most powerful test.

Next consider testing the composite hypothesis H₀: θ = θ₀ vs. H₁: θ ∈ Θ₁. We say C is a uniformly most powerful (UMP) rejection region of size α for the test if C is a best rejection region of size α for H₀: θ = θ₀ vs. H₁: θ = θ₁ for every θ₁ ∈ Θ₁. The test based on C is called a UMP test.

Neyman-Pearson Theorem

Recall that the likelihood ratio test is based on the ratio of the likelihood functions evaluated under the null and alternative hypotheses:
Λ = L(θ₀; X₁, …, Xₙ) / L(θ₁; X₁, …, Xₙ).
We naturally reject H₀: θ = θ₀ in favor of H₁: θ = θ₁ if Λ ≤ k for some small k. The critical value k should be chosen so that the test has size α (the bound on the probability of a type I error).

Theorem (Neyman-Pearson, Theorem 8.1.1). The likelihood ratio test (LRT) is a most powerful test and provides a best rejection region of size α for testing H₀: θ = θ₀ vs. H₁: θ = θ₁.
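Here is a Monte Carlo sketch of the Neyman-Pearson construction for an assumed simple-vs-simple normal problem (H₀: θ = 0 vs. H₁: θ = 1 with Xᵢ ∼ N(θ, 1); not a slide example). The LRT statistic Λ is a decreasing function of X̄, the cutoff k is calibrated by simulation under H₀ to attain size α, and the resulting power is compared with that of an arbitrary competing size-α test that uses only X₁.

import numpy as np

rng = np.random.default_rng(7)
theta0, theta1, n, alpha, reps = 0.0, 1.0, 10, 0.05, 100000

def log_lambda(x):
    # log of L(theta0)/L(theta1); the normal constants cancel
    return -0.5 * ((x - theta0) ** 2).sum(axis=1) + 0.5 * ((x - theta1) ** 2).sum(axis=1)

x0 = rng.normal(theta0, 1, size=(reps, n))   # data simulated under H0
x1 = rng.normal(theta1, 1, size=(reps, n))   # data simulated under H1

k = np.quantile(log_lambda(x0), alpha)       # size-alpha cutoff: reject when log Lambda <= k
print("size of LRT : %.3f" % (log_lambda(x0) <= k).mean())
print("power of LRT: %.3f" % (log_lambda(x1) <= k).mean())

k2 = np.quantile(x0[:, 0], 1 - alpha)        # competing size-alpha test: reject when X_1 is large
print("power of X1-test: %.3f" % (x1[:, 0] > k2).mean())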


Unbiased Tests

Unbiased Tests

Definition. If a rejection region of size α satisfies γ_C(θ) ≥ α for all θ ∈ Θ₁, we say it is unbiased (or the test is unbiased).

Theorem. A most powerful test for a simple hypothesis is always unbiased.

Proof. Consider a randomized test that ignores the data and simply rejects H₀ with probability α. For example, we can randomly generate U ∼ Ber(α) and reject H₀ if U = 1. Although this is a silly test, its size is α and its power function is identically α. A most powerful test has power no smaller than the power of this silly test, so its power function satisfies γ_C(θ) ≥ α, and hence it is unbiased.
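The "silly" randomized test from the proof can be simulated directly (a sketch): it ignores the data, so its rejection rate, and hence its power function, is α no matter what the true parameter is.

import numpy as np

rng = np.random.default_rng(8)
alpha, reps = 0.05, 200000
for theta in (0.0, 0.5, 2.0):               # any value of the true parameter
    reject = rng.random(reps) < alpha       # U ~ Ber(alpha): reject H0 iff U = 1
    print("theta=%.1f  rejection rate=%.3f" % (theta, reject.mean()))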

Uniformly Most Powerful Unbiased Test

Although the likelihood ratio test is a most powerful test, it may depend on the alternative hypothesis. Hence a uniformly most powerful test may not always exist (see the example in the textbook).

Recall that in estimation we did not just look for the estimator with minimal variance (a silly constant estimator has the smallest variance, 0). Instead, we first restricted ourselves to unbiased estimators and then minimized the variance among all unbiased estimators. Can we do the same for tests?

Definition. If a test is a UMP test among all unbiased tests, then it is called a uniformly most powerful unbiased (UMPU) test.

Remark: Unfortunately, a UMPU test may not exist either. But for exponential-family distributions with p(θ) = θ, a UMPU test (of size α) exists and is based on T = Σᵢ K(Xᵢ).
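As an illustration of an unbiased test built from T = Σᵢ Xᵢ (a sketch with an assumed N(µ, 1) model; identifying this test as the UMPU test is the standard textbook example, not something derived on these slides): the two-sided z-test of H₀: µ = 0 rejects when |X̄| is large, and its power function equals α at µ = 0 and exceeds α everywhere else, so the test is unbiased.

import numpy as np
from scipy.stats import norm

alpha, n = 0.05, 20
z = norm.ppf(1 - alpha / 2)                # two-sided critical value
se = 1 / np.sqrt(n)                        # sd of X-bar when sigma = 1

def power(mu):
    # P(|X-bar| > z * se) when the true mean is mu
    return norm.cdf(-z - mu / se) + 1 - norm.cdf(z - mu / se)

for mu in (-0.5, -0.2, 0.0, 0.2, 0.5):
    print("mu=%+.1f  power=%.3f" % (mu, power(mu)))   # dips to alpha at mu = 0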
