STAT 611. R. J. Carroll


STAT 611, R. J. Carroll, January 17, 1999

Contents

1 OFFICE HOURS, etc.
    1.1 Office Hours
    1.2 Text
    1.3 Grading
    1.4 About the Instructor
2 COURSE SCHEDULE
3 OLD EXAMS FROM OTHER INSTRUCTORS
    3.1 Introduction
    3.2 First Set of Old Exams
    3.3 Second Set of Old Exams
4 HOMEWORK ASSIGNMENTS, 1998
    4.1 Homework #1
    4.2 Homework #2
    4.3 Homework #3
    4.4 Homework #4
    4.5 Homework #5
    4.6 Homework #6
    4.7 Homework #7
    4.8 Homework #8
5 MY EXAMS FROM 1994
6 MY EXAMS FROM 1996
7 MY EXAMS FROM 1998
8 LECTURE REVIEWS

Chapter 1. OFFICE HOURS, etc.

STAT 611, Spring 1998
Raymond J. Carroll
University Distinguished Professor
Professor of Statistics, Nutrition and Toxicology
Department of Statistics
Blocker Building, Room 430
carroll@stat.tamu.edu (Leave messages with M. Randall)

1.1 Office Hours

My office hours are as follows:

    Tuesdays, 4:00-5:00
    Wednesdays, 3:00-5:00
    Thursdays, 10:30-12:00

If you need to speak with me at other times, please make an appointment; ask for Ms. Randall.

1.2 Text

I will nominally follow the book of Casella & Berger. However, I will not follow it too closely, nor will I follow it in the order they use.

1.3 Grading

Homework will count 20% of your final grade. The midterm will count 35%, and the final 45%. The final is cumulative. No late homework assignments will be accepted. I am extremely reluctant to give makeup exams, and will do so only in the most extreme circumstances, e.g., a death in the immediate family.

For all exams, you may bring six regular sheets of paper filled with whatever notes and formulae you think are necessary.

Not all homework problems will be graded. Typically, I will ask the grader to select one problem to grade in detail. The grader will also make up answer sheets. I will give an extra 2% on the final exam for each mistake you find in the writeup the grader distributes (10 points maximum). There is no guarantee that the grader will actually make mistakes!

1.4 About the Instructor

I grew up in D.C., Germany and Wichita Falls, got a B.A. from U.T. Austin in 1971 and a Ph.D. in Statistics from Purdue. From then until 1987 I was on the faculty at the University of North Carolina at Chapel Hill, also spending time at the Universities of Heidelberg and Wisconsin, as well as at the National Heart, Lung & Blood Institute. From 1987 to the present I have been a full professor here at A&M, along with holding visiting appointments at the Australian National University and the National Cancer Institute. I also served a term as head of the department. I currently work as a consultant to the National Cancer Institute, and hold two research grants from that institute.

My research interests lie in the general area of regression. I wrote a book, Transformation and Weighting in Regression (Chapman & Hall, 1988), which is essentially concerned with heteroscedasticity in nonlinear regression. A new book, Measurement Error in Nonlinear Models, appeared in 1995. My special interest is nonlinear modeling under unusual error situations. What I try to do is develop new statistical techniques for solving important practical problems arising out of my consulting. The techniques I've developed have been applied to cancer epidemiology, marine biology, chemometrics, pharmacokinetics, marketing, and image processing, among others. The work has won my research team a number of awards and honors. I'm currently very interested in the problems of regression when some of the predictors are measured with error, and my newest techniques are being used by cancer epidemiologists in their study of the relationship between breast cancer and nutrition.

My hobbies are trout fishing, golf and cycling.

Chapter 2. COURSE SCHEDULE

The attached Table 2.1 is tentative, and probably a bit slower than the course will go. The major difference with Casella & Berger is that I will do some multidimensional problems, and I will do somewhat more with asymptotic theory. Casella & Berger spend an enormous amount of time on confidence intervals because that is their research interest, but the topic is not of particular importance for the purpose of this course. I must say that I don't like many of the things that they do, and only use the book nominally to save you money.

Lecture #   Topic                                                      C&B sections
 1          Review of Stat
 2          Sufficiency
 3          Exponential families
 4          Method of moments estimation
 5          Maximum likelihood estimation
 6          More maximum likelihood estimation
 7          Rao-Blackwell Theorem; Lehmann-Scheffe Theorem
 8          Information, Cramer-Rao inequality
 9          Multivariate Cramer-Rao                                    Class notes
10          Distribution of the sample variance                        Class notes
11          Consistency of the mle
12          Asymptotic normality of the mle
13          Introduction to decision theory
14          Bayes rules
15          Bayes examples; various examples in Chapter 7
16          Midterm over estimation
17          Introduction to hypothesis testing
18          Neyman-Pearson Lemma                                       8.2.1, 8.3.1
19          Monotone likelihood ratio families; UMP tests
20          More on testing; multiparameter tests via conditioning     Class notes
21          Matched pairs and the waterbuck data                       Class notes
22          Generalized LR tests
23          Wald tests
24          Score tests                                                Class notes
25          Local power for setting sample sizes
26          Tests = confidence intervals; Generalized Linear Models    Class notes
27          Generalized Linear Models                                  Class notes
28          Generalized Linear Models                                  Class notes

Table 2.1: This table gives a tentative (note the emphasis) schedule of the course, along with the relevant sections in Casella & Berger.

Chapter 3. OLD EXAMS FROM OTHER INSTRUCTORS

3.1 Introduction

These are old exams given by other instructors of this course in the recent past. Some of the questions might be useful to study for qualifying exams.

3.2 First Set of Old Exams

Problem #1. Let X be a random variable with probability density function f(x|θ) given below:

    f(x|0) = 1 if 0 ≤ x ≤ 1, and 0 otherwise;
    f(x|1) = 6x(1 − x) if 0 ≤ x ≤ 1, and 0 otherwise;
    f(x|2) = 2x if 0 ≤ x ≤ 1, and 0 otherwise.

(a.) (18 points) Find the most powerful level 0.1 test of H_0: θ = 0 versus H_1: θ = 1. Obtain its power.
(b.) (18 points) Is the test obtained in part (a.) the uniformly most powerful level 0.1 test for H_0: θ = 0 versus H_1: θ ∈ {1, 2}? Justify your answer.
Note: If you could not find the MP test in part (a.), use the test with rejection region {3, 4} to answer part (b.).
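A numerical check on Problem #1(a.), as a study aid (my sketch in Python with SciPy, not part of the original exam): the likelihood ratio f(x|1)/f(x|0) = 6x(1 − x) is unimodal in x, so the Neyman-Pearson rejection region is an interval centered at 1/2, and level 0.1 under the uniform null forces the interval to have length 0.1.

    from scipy.integrate import quad

    # MP test of f0(x) = 1 versus f1(x) = 6x(1-x) on [0,1]: reject when the
    # likelihood ratio 6x(1-x) is large, i.e., when x lies in (1/2 - d, 1/2 + d).
    # Under f0 this region has probability 2d, so level 0.1 gives d = 0.05.
    d = 0.05
    lo, hi = 0.5 - d, 0.5 + d

    power, _ = quad(lambda x: 6 * x * (1 - x), lo, hi)  # P(reject) under f1
    print(f"rejection region ({lo}, {hi}); power = {power:.4f}")  # about 0.1495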

Problem #2. Let X_1, ..., X_m be a random sample from an exponential distribution with density f(x|θ) = (1/θ) e^{−x/θ}, x > 0. An important function in reliability and survival analysis is the survival function, S(x) = P[X > x].
(a.) (20 points) Obtain the maximum likelihood estimator of the survival function, S(x), for a specified value x_0 of x.
(b.) (8 points) Derive a lower bound for the variance of unbiased estimators of S(x).
(c.) Investigate the limiting distribution of the m.l.e. of S(x).
(d.) Obtain the UMVU estimator of S(x). Hint: The conditional density of X_1 given Σ_{i=1}^n X_i = t is f(x_1 | Σ X_i = t) = (n − 1)(t − x_1)^{n−2} / t^{n−1}, 0 < x_1 < t.

Problem #3. Let X_1, ..., X_n be independent random variables with cumulative distribution function

    F(x|θ) = 0 for x < 0; = (x/θ)² for 0 ≤ x ≤ θ; = 1 for x > θ.

(a.) (18 points) Obtain a pivot based upon the sufficient statistic for θ. Then obtain a 1 − α confidence interval for θ based upon this pivot.
(b.) (18 points) Suppose now that θ is a random variable with prior distribution π(θ) = 1/θ² for θ > 1, and 0 otherwise. Obtain a 1 − α highest posterior density Bayes region for θ.

Problem #4. Let X_1, ..., X_m and Y_1, ..., Y_n be independent random samples from normal distributions with means μ_1 and μ_2, respectively, and variances both equal to 1. Obtain the likelihood ratio test of H_0: μ_1 = μ_2 versus H_1: μ_1 ≠ μ_2.

Problem #5. Let X_1, ..., X_n be independent Poisson random variables with probability mass function f(x|θ) = e^{−θ} θ^x / x!, x = 0, 1, 2, .... Suppose that θ has prior density π(θ) = θ^{α−1} e^{−θ/β} / {Γ(α) β^α}, θ > 0.
(a.) Obtain the Bayes estimator of θ with respect to squared error loss.
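For Problem #2(a.) and (c.), the mle is Ŝ(x_0) = exp(−x_0/X̄), and the delta method gives its asymptotic variance; the simulation below (Python, assuming NumPy is available; my illustration, not part of the exam) checks that calculation.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, x0, m, reps = 2.0, 1.0, 200, 20_000

    # The mle of S(x0) = exp(-x0/theta) is exp(-x0/Xbar), since the mle of theta is Xbar.
    X = rng.exponential(scale=theta, size=(reps, m))
    S_hat = np.exp(-x0 / X.mean(axis=1))

    # Delta method: sqrt(m){S_hat - S(x0)} -> N(0, x0^2 exp(-2 x0/theta)/theta^2).
    asym_var = x0**2 * np.exp(-2 * x0 / theta) / theta**2
    print(np.var(np.sqrt(m) * (S_hat - np.exp(-x0 / theta))), asym_var)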

(b) Find the bias, variance and mean squared error of the Bayes estimator. Investigate the consistency of this estimator.

Problem #6. Let X_1, ..., X_n be a random sample from the distribution with p.d.f. f(x|θ) = θ x^{θ−1}, 0 < x < 1, θ > 0.
(a.) (12 points) Obtain a method of moments estimator of θ.
(b.) (12 points) Obtain the maximum likelihood estimator of θ.
(c.) (12 points) Obtain the Cramér-Rao lower bound for the variance of unbiased estimators of θ.
(d.) (13 points) Is either of the two above estimators the uniformly minimum variance unbiased (UMVU) estimator of θ? Justify your answer.

Problem #7. (14 points) Let X_1, ..., X_n be a random sample from the Pareto distribution with density f(x|α, β) = β α^β / x^{β+1}, α < x < ∞, α > 0, β > 0. Determine whether or not the distribution is of the exponential class. If it is of an exponential class, find a complete sufficient statistic. If it is not of exponential class, find a minimal sufficient statistic.

Problem #8. Let X be a single observation from the N(μ, 1) distribution, μ > 0. Notice that the unknown mean is assumed to be positive.
(a.) (12 points) Obtain the UMVU estimator of μ.
(b.) (12 points) Obtain the maximum likelihood estimator (MLE) of μ. (Be sure to consider the parameter space.)
(c.) (13 points) Show that the MLE is biased, but has smaller mean squared error than the UMVUE.

Problem #9. Let X be a random variable with probability mass function f(x|θ) given in the table:

    x:
    f(x|0):
    f(x|1):
    f(x|2):

(a.) (18 points) Find the most powerful level 0.2 test of H_0: θ = 0 versus H_1: θ = 1. Obtain its power.
(b.) (18 points) Is the test obtained in part (a.) the uniformly most powerful level 0.1 test for H_0: θ = 0 versus H_1: θ ∈ {1, 2}? Justify your answer.

Note: If you could not find the MP test in part (a.), use the test with rejection region {3, 4} to answer part (b.).

Problem #10. Let X_1, ..., X_m and Y_1, ..., Y_n be independent random samples from uniform distributions on [0, θ_1] and [0, θ_2], respectively.
(a.) (20 points) Obtain the likelihood ratio test of H_0: θ_1 = θ_2 versus H_1: θ_1 ≠ θ_2. Show that it can be expressed in terms of

    T_{m,n} = −2 log[ X_(m)^m Y_(n)^n / max{X_(m), Y_(n)}^{m+n} ],

where X_(m) = max{X_1, ..., X_m} and Y_(n) = max{Y_1, ..., Y_n}.
(b.) (8 points) When H_0 is true, T_{m,n} in part (a.) has a chi-squared distribution with 2 degrees of freedom. Write out an expression for the rejection region of a level α test of H_0 versus H_1.

Problem #11. Let X_1, ..., X_n be independent geometric random variables with probability mass function f(x|θ) = θ(1 − θ)^{x−1}, x = 1, 2, ..., 0 ≤ θ ≤ 1.
(a.) (18 points) Obtain the UMP test of H_0: θ ≤ 0.5 versus H_1: θ > 0.5 and show that it has a one-sided rejection region in terms of Σ_{i=1}^n x_i.
(b.) (18 points) Use the Central Limit Theorem to determine the sample size n so that the level 0.05 UMP test of H_0: θ ≤ 0.5 has power of 0.90 when θ = 5/9. (Note: For the standard normal distribution, Φ(1.282) = 0.90, Φ(1.645) = 0.95, and Φ(1.96) = 0.975.)

Problem #12. Let X_1, ..., X_n be a random sample from a distribution with density f(x|θ) = θ² x e^{−θx}, x > 0, θ > 0.
(a.) (16 points) Consider the problem of testing H_0: θ = θ_0 versus H_1: θ ≠ θ_0. Find the level α likelihood ratio test and express it in terms of some commonly used distribution.
(b.) (10 points) Does a uniformly most powerful level α test exist for testing H_0 versus H_1? Carefully explain your reasoning.
(c.) (12 points) Invert the test in part (a.) to obtain a level 1 − α confidence interval for θ. If you could not work part (a.), you may assume that the test had rejection region R = {x : 2θ_0 Σx_i < a_1 or 2θ_0 Σx_i > a_2}, where a_1 and a_2 are appropriate table values.
(d.) (12 points) Suppose now that θ has a gamma(α, β) prior distribution with density π(θ) = θ^{α−1} e^{−θ/β} / {Γ(α) β^α}, θ > 0, where α and β are known positive constants. Derive the level 1 − α H.P.D. region for θ.

Problem #13. Let X_1, ..., X_n be a random sample from a geometric distribution with pmf f(x; p) = p(1 − p)^x, x = 0, 1, 2, ..., 0 < p < 1.

We are interested in estimating the odds ratio, ξ = (1 − p)/p, with squared error loss. To do this, we reparameterize f to obtain the new p.m.f. f(x; ξ) = ξ^x / (1 + ξ)^{1+x}, x = 0, 1, 2, ..., ξ > 0.
(a.) (14 points) Suppose that ξ has a prior distribution of the form

    π(ξ) = {Γ(α + β) / (Γ(α)Γ(β))} ξ^{α−1} (1 + ξ)^{−(α+β)}, ξ > 0,

where α and β are known positive numbers. Show that the Bayes estimator of ξ with respect to π is T_n = (Σ_{i=1}^n X_i + α) / (n + β − 1).
(b.) (12 points) Obtain the bias and mean squared error of T_n. Hint: Use the results for the mean and variance of negative binomial random variables in Casella and Berger. You need to obtain p as a function of ξ.
(c.) (12 points) Investigate the consistency and asymptotic normality of {T_n}.
(d.) (12 points) Obtain Fisher's information for estimating ξ based on a single observation. Then determine whether or not {T_n} is asymptotically efficient.

Problem #14. (12 points each for (a.) and (b.)) For each of the following distributions, let X_1, ..., X_n be a random sample. (i) State whether or not the distribution is of the exponential class. (ii) If it is of an exponential class, identify c(θ), h(x), w(θ), and t(x) and find the complete sufficient statistic. If it is not of exponential class, find the minimal sufficient statistic.
(a) f(x; θ) = θ x^{θ−1}, 0 < x < 1, θ > 0.
(b) f(x; θ) = 3x²/θ³, 0 < x < θ, θ > 0.

Problem #15. Let X_1, ..., X_n be a random sample from the distribution with density f(x; θ) = θ² x e^{−θx}, x > 0, θ > 0.
(a.) (15 points) Find a method-of-moments estimator of θ. Is it unbiased?
(b.) (15 points) Obtain the UMVU estimator of θ.
(c.) (15 points) Find the Cramér-Rao lower bound on variances of unbiased estimators of θ. Does the variance of the estimator in (b.) attain the lower bound?

Problem #16. Let X_1, ..., X_n be a random sample from a discrete distribution with pmf f(x; θ) = θ(1 − θ)^x, x = 0, 1, ..., 0 < θ < 1.
(a) (15 points) Obtain the maximum likelihood estimator of θ.
(b.) (15 points) Obtain the UMVU estimator of θ. Notice that P[X_1 = 0] = θ.

Problem #17. Let X be a single observation from a distribution with density

    f(x|θ) = 1/2, −1 < x < 1, if θ = 0;
           = (3/4)(1 − x²), −1 < x < 1, if θ = 1.

(a) (20 points) Consider testing H_0: θ = 0 versus H_1: θ = 1. Find the level of significance and power of the test that has rejection region R = {x : x > 0.9}.
(b) (18 points) Is the test in part (a) the most powerful test of its size for testing H_0 versus H_1? If it is, prove that it is. If it is not, find the most powerful test.

Problem #18. Let X_1, ..., X_n be a random sample from a Poisson distribution with pmf f(x; θ) = θ^x e^{−θ}/x!, x = 0, 1, 2, ..., θ > 0.
(a.) (20 points) Obtain the uniformly most powerful test of its size of H_0: θ ≥ 4 versus H_1: θ < 4.
(b.) (18 points) Suppose that n = 100. Use the Central Limit Theorem to obtain an approximately level .05 test of H_0 versus H_1. What is the power of this test for the alternative θ = 1? You may express your answer in terms of the standard normal distribution function, Φ(x). (Note: Φ(1.645) = .95 and Φ(1.96) = .975.)

Problem #19. (24 points) Let X_1, ..., X_n be a random sample from a normal distribution with density f(x|μ, σ) = (√(2π) σ)^{−1} exp{−(x − μ)²/(2σ²)}. Consider the problem of testing H_0: μ = μ_0 versus H_1: μ ≠ μ_0. Find the level α likelihood ratio test and show that it can be based on the statistic

    t = (X̄ − μ_0) / (S/√n),

where X̄ = n^{−1} Σ_{i=1}^n X_i and S² = (n − 1)^{−1} Σ_{i=1}^n (X_i − X̄)².

3.3 Second Set of Old Exams

Problem #1. Suppose that X_1, ..., X_n are independent random variables with X_i ~ Poisson(θ/i), i = 1, ..., n. The parameter θ is unknown and the parameter space is Θ = (0, ∞). For ease of notation, define c_n = n^{−1} Σ_{i=1}^n i^{−1}.
(a) Identify a sufficient statistic and prove that it is sufficient using the factorization theorem.
(b) The maximum likelihood estimator of θ is X̄/c_n, where X̄ = n^{−1} Σ_{i=1}^n X_i. Calculate the mean squared error of the MLE of θ.

Problem #2. Let X_1, ..., X_n be a random sample from the density f(x; θ) = θ(1 + x)^{−(1+θ)} I_{(0,∞)}(x), θ > 0.
(a) Assuming that the parameter space is Θ = (1, ∞), find a method of moments estimator of θ. Hint: A useful change of variable in this problem is y = log(1 + x).

    x       f_0(x)      f_1(x)

Table 3.1: Table for Problem #3 in the second set of old exams.

(b) Derive the maximum likelihood estimator of θ^{−1}.
(c) Find the Cramér-Rao lower bound for the variance of unbiased estimators of θ².
(d) Find the UMVUE of θ. Hint: If Y has the gamma(α, β) distribution (α > 1), then E(1/Y) = [β(α − 1)]^{−1}.

Problem #3. A random variable X has one of the two pmfs given in Table 3.1. It is of interest to test the hypotheses H_0: X has pmf f_0 versus H_1: X has pmf f_1. A single value of X is to be observed. Find the most powerful size .10 test of H_0 versus H_1. Define the rejection region in terms of the value of X. Also, find the power of the test.

Problem #4. X_1, ..., X_n is a random sample from N(θ, 1), where θ could be any real number. It is of interest to estimate the parameter e^{2θ}. Produce an estimator T_n of e^{2θ} with the property that √n(T_n − e^{2θ}) converges in distribution to U, where U ~ N(0, σ²_θ). Give an explicit expression for σ²_θ.

Problem #5. Let X_1, ..., X_n be a random sample from the density f(x; θ) = θ(1 + x)^{−(1+θ)} I_{(0,∞)}(x), where the parameter space is {θ : θ > 0}. It is of interest to test the hypotheses H_0: θ ≤ 1 versus H_1: θ > 1.
(a) Find the uniformly most powerful test of H_0 versus H_1.
(b) Consider testing H_0 versus H_1 using a likelihood ratio test. The log-likelihood ratio is

    log λ(x_1, ..., x_n) = 0 if θ̂ ≤ 1; = −n(log θ̂ + θ̂^{−1} − 1) if θ̂ > 1,

where θ̂ = n / Σ_{i=1}^n log(1 + x_i). Write down the form of the rejection region of the likelihood ratio test in terms of θ̂. Is this test always the same as the test from part (a)? Why or why not? (Hint: Think about the size of the test.)

Problem #6. Let X_1, ..., X_n be a random sample from the density f(x; θ) = e^{−(x−θ)} exp{−e^{−(x−θ)}},

where −∞ < θ < ∞.
(a) (11) Find the Cramér-Rao lower bound for unbiased estimators of θ.
(b) (12) Is there a function of θ for which there exists an estimator whose variance coincides with the Cramér-Rao lower bound? If so, find it.
(c) (12) Derive the size α likelihood ratio test of H_0: θ = θ_0 versus H_1: θ ≠ θ_0. Express the rejection region of the test in terms of a minimal sufficient statistic, and argue that this region is the union of two disjoint intervals.

Problem #7. Consider a population with three kinds of individuals labeled 1, 2 and 3, occurring in the proportions f(1; θ) = θ², f(2; θ) = 2θ(1 − θ), f(3; θ) = (1 − θ)², where 0 < θ < 1. Let X_1, ..., X_n be a random sample from this distribution, and define the statistics N_1, N_2, N_3 by N_i = number of X_j's equal to i, i = 1, 2, 3. (Note that N_1 + N_2 + N_3 = n.)
(a) (6) For any vector (x_1, ..., x_n), let n_i = the number of x_j's equal to i, i = 1, 2, 3. Argue that the joint pmf of X_1, ..., X_n is

    f(x_1, ..., x_n; θ) = θ^{2n_1} {2θ(1 − θ)}^{n_2} (1 − θ)^{2n_3} if each x_j = 1, 2, or 3; and 0 otherwise.

(b) (11) Prove that 2N_1 + N_2 is a sufficient statistic.
(c) (12) Find the form of the most powerful level α test of H_0: θ = θ_0 versus H_1: θ = 1 − θ_0, where θ_0 ∈ (0, 1/2). Express the rejection region in terms of n_1 and n_3.

Problem #8. Consider our old friend from tests I and II, in which X_1, ..., X_n is a random sample from the density f(x; θ) = θ(1 + x)^{−(1+θ)} I_{(0,∞)}(x), where Θ = (0, ∞).
(a) (12) Find a pivotal quantity and use it to find a (1 − α) confidence interval for θ.
(b) (12) Define θ̂ = n / Σ_{i=1}^n log(1 + X_i). Argue that

    ( 1/θ̂ − z_{α/2}/(θ̂√n), 1/θ̂ + z_{α/2}/(θ̂√n) )

is an approximate (1 − α) confidence interval for 1/θ when n is large, where z_p is the (1 − p)th quantile of the standard normal distribution.
(c) Now consider a Bayesian approach to estimating θ in the above model. Suppose the prior density for θ is π(θ) = e^{−θ} I_{(0,∞)}(θ). Find the posterior density π(θ | x_1, ..., x_n) and use it to construct the (1 − α) HPD region for θ. You need not obtain the region precisely; just describe how you would obtain it from π(θ | x_1, ..., x_n).

Chapter 4. HOMEWORK ASSIGNMENTS, 1998

These homeworks may be modified as the semester progresses. It is your responsibility to keep up to date with the correctly assigned homeworks. There may be some errors in the statements of these problems, due to typographical and conceptual errors on my part. I will give a 1% addition to the first exam scores for all those finding errors, under the following conditions: (i) typos are fixed, stated properly and then worked out; (ii) conceptual errors on my part are explained by you.

4.1 Homework #1

Problem #1 Problem 6.1 in Casella & Berger (p. 280).

Problem #2 Problem 6.6 in Casella & Berger (p. 280).
(a) What are the sufficient statistics for α and β?
(b) If α is known, show that the gamma density is a member of the class of exponential families.
(c) If β is known, is the gamma density a member of the class of exponential families? Why or why not?
(d) With neither α nor β known, is the gamma density a member of the multiple parameter exponential family? Why or why not?

Problem #3 Suppose that x_1, ..., x_n are fixed constants. Suppose further that Y_i is normally distributed with mean β_0 + β_1 x_i and variance σ².
(a) What are the sufficient statistics for (β_0, β_1, σ²)?

Problem #4 The Rayleigh family has the density f(x, θ) = 2(x/θ²) exp(−x²/θ²), x > 0, θ > 0. Use the fact that this is an exponential family to compute the mean, variance and 3rd and 4th moments of X², where X is Rayleigh.
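A quick simulation check of Problem #4 (Python, assuming NumPy; my sketch, not part of the assignment): under the Rayleigh density, Y = X² is exponential with mean θ², so E{(X²)^k} = k! θ^{2k}.

    import math
    import numpy as np

    rng = np.random.default_rng(1)
    theta, n = 1.7, 1_000_000

    # Under f(x, theta) = 2(x/theta^2) exp(-x^2/theta^2), Y = X^2 is
    # Exponential with mean theta^2, so the k-th moment of X^2 is k! theta^(2k).
    y = theta**2 * rng.exponential(size=n)  # simulate Y = X^2 directly

    for k in (1, 2, 3, 4):
        print(k, (y**k).mean(), math.factorial(k) * theta**(2 * k))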

Problem #5 Suppose I have a sample X_1, ..., X_n from the normal distribution with mean θ and variance θ. Let X̄ be the sample mean, and s² be the sample variance. Remember that X̄ and s² are independent.
(a) For any 0 ≤ α ≤ 1, compute the mean and variance of the statistic T(α) = αX̄ + (1 − α)s².
(b) Compute the limiting distribution of T(α), i.e., what does n^{1/2}{T(α) − θ} converge to in distribution?
(c) Is there a unique best value of α as n → ∞?

Problem #6 Suppose that we have a sample X_1, ..., X_n from the density f(x, θ) = Γ(x + θ) / {x! Γ(θ) 2^{x+θ}}, x = 0, 1, 2, .... Find a minimal sufficient statistic for θ.

Problem #7 Work problem 6.20 in Casella and Berger (page 280). A sufficient statistic T(X) is complete if E_θ[g{T(X)}] = 0 for all θ implies that g{T(X)} = 0 with probability one for all θ.

4.2 Homework #2

Problem #1 Find the mle of θ in the Rayleigh family of Homework #1.

Problem #2 Find the mle's of (β_0, β_1, σ²) in the linear regression problem of Homework #1.

Problem #3 Suppose that X_1, ..., X_n are a sample with mass function

    pr(X = k) = {(k − 2)(k − 1)/2} (1 − θ)^{k−3} θ³.

Find the mle of θ.

Problem #4 Suppose that X_1, ..., X_n are i.i.d. uniform on the interval [θ, θ²], where θ > 1.
(a) Show that a method of moments estimator of θ is θ̂(mm) = [{8 n^{−1} Σ_{i=1}^n X_i + 1}^{1/2} − 1]/2.
(b) Find the mle for θ.
(c) By combining the central limit theorem and the delta method (Taylor-Slutsky), compute the limiting distribution of θ̂(mm).

Problem #5 Work Problem 7.7 in Casella & Berger (page 332).

Problem #6 Work Problem 7.12 of Casella & Berger (page 333).

Problem #7 Suppose that z_1, ..., z_n are fixed constants, and that the responses Y_1, ..., Y_n are independent and normally distributed with mean z_iβ and variance σ² v(z_i), where the v(z_i) are known constants.
(a) Compute the mle of the parameters.
(b) Compute the mean and variance of the mle β̂.

Problem #8 Suppose that z_1, ..., z_n are fixed constants, and that the responses Y_1, ..., Y_n are independently distributed according to a gamma distribution with mean exp(z_iβ) and variance σ² exp(2z_iβ).
(a) It turns out that there is a function ψ(Y, z, β) such that the mle for β solves Σ_{i=1}^n ψ(Y_i, z_i, β) = 0. What is ψ(·)?
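For Problem #4(c), the delta method applied to g(m) = {(1 + 8m)^{1/2} − 1}/2 gives g'(E X) = 2/(2θ + 1), hence asymptotic variance (θ² − θ)²/{3(2θ + 1)²}; the sketch below (Python, assuming NumPy; my illustration, not a graded solution) checks this by simulation.

    import numpy as np

    rng = np.random.default_rng(2)
    theta, n, reps = 2.0, 400, 20_000

    # Method of moments for X ~ Uniform[theta, theta^2]: E X = (theta + theta^2)/2,
    # so theta_hat solves theta^2 + theta = 2*Xbar.
    x = rng.uniform(theta, theta**2, size=(reps, n))
    mom = (np.sqrt(1 + 8 * x.mean(axis=1)) - 1) / 2

    # Delta method: Var(X) = (theta^2 - theta)^2/12 and g'(EX) = 2/(2*theta + 1),
    # so sqrt(n)(mom - theta) has limiting variance (theta^2-theta)^2/{3(2*theta+1)^2}.
    avar = (theta**2 - theta) ** 2 / (3 * (2 * theta + 1) ** 2)
    print(np.var(np.sqrt(n) * (mom - theta)), avar)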

4.3 Homework #3

Problem #1 If X ~ Poisson(θ), show that X is UMVUE for θ.

Problem #2 If X ~ Binomial(n, θ), show that there exists no unbiased estimator of the odds ratio g(θ) = θ/(1 − θ). HINT: Suppose there does exist an S(X) which is unbiased. Write out E_θ{S(X)} and then find a contradiction.

Problem #3 Suppose that X has the mass function Pr(X = k|θ) = θ(1 − θ)^k, k = 0, 1, 2, .... Find the mle for θ from a sample of size n, and discuss its properties, namely: (a) mean; (b) variance; (c) is it UMVUE?

Problem #4 Suppose that (z_1, ..., z_n) are fixed constants, and that for i = 1, ..., n, X_i is normally distributed with mean z_i and variance θz_i². Find the mle for θ from a sample of size n, and discuss its properties, namely: (a) mean; (b) variance; (c) is it UMVUE? HINT: If Z ~ Normal(0, 1), then E(Z³) = 0 and E(Z⁴) = 3.

Problem #5 Work problem 7.56 in Casella & Berger (page 341).

4.4 Homework #4

Problem #1 Find the Fisher information for the Rayleigh family.

Problem #2 If X_1, ..., X_n are i.i.d. and normally distributed with mean equal to its variance, find the mle and the Fisher information for θ.

Problem #3 Let X be Poisson(λ_x) and let Y be independent of X and distributed as a Poisson(λ_y). Define θ = λ_x/(λ_x + λ_y) and ξ = λ_x + λ_y.
(a) Suppose that θ is known. Show that T = X + Y is sufficient for ξ.
(b) Compute the conditional distribution of X given T.
(c) Conditioning on T, find the UMVUE for θ. I want you to show that this is really a conditional UMVUE, so I want you to cite theorems from class to justify your steps.

Problem #4 Suppose I have a sample X_1, ..., X_n from the normal distribution with mean θ and variance θ². Let X̄ be the sample mean, and s² be the sample variance.
(a) For any 0 ≤ α ≤ 1, compute the mean and variance of the statistic T(α) = αX̄² + (1 − α)s².
(b) Compute the limiting distribution of T(α).

Problem #5 Work Problem 7.55(a) in Casella & Berger (p. 340). Hint #1: An unbiased estimator is I(X_1 = 0), where I(·) is the indicator function. Hint #2: What is the distribution of X_1 given the sufficient statistic?

Problem #6 Suppose that (z_1, ..., z_n) are fixed constants, and that for i = 1, ..., n, X_i is normally distributed with mean z_i and variance θz_i². Find the mle for θ from a sample of size n. Does the mle achieve the Fisher information bound? Do this in two ways: (a) by direct calculation; (b) by using properties of OPEFs.

Problem #7 Suppose that X_1, ..., X_n follow the Weibull model with density f(x|λ, κ) = κλ(λx)^{κ−1} exp{−(λx)^κ}.
(a) What equations must be solved to compute the mle?
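For Problem #3(b), the standard answer is that X | X + Y = t is Binomial(t, θ) with θ = λ_x/(λ_x + λ_y); a quick simulation check (Python, assuming NumPy and SciPy; my illustration, not part of the assignment):

    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(3)
    lam_x, lam_y = 2.0, 3.0
    theta = lam_x / (lam_x + lam_y)

    # For independent Poissons, X | X+Y = t should be Binomial(t, theta).
    x = rng.poisson(lam_x, size=500_000)
    y = rng.poisson(lam_y, size=500_000)
    t0 = 5
    sel = x[(x + y) == t0]

    print(np.round(np.bincount(sel, minlength=t0 + 1) / sel.size, 4))
    print(np.round(binom.pmf(np.arange(t0 + 1), t0, theta), 4))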

4.5 Homework #5

Problem #1 In the Rayleigh family, show directly using the weak law of large numbers that the mle is consistent. Also show it is consistent using the general theory from class about consistency of mle's in exponential families.

Problem #2 What is the asymptotic limit distribution of the mle in the Rayleigh family?

Problem #3 Let X_1, ..., X_n be i.i.d. negative exponential with mean θ.
(a) Find the mle for θ.
(b) Find the mle for pr_θ(X > t_0).
(c) Prove that the mle for pr_θ(X > t_0) is consistent.
(d) Compute the limit distribution for the mle of pr_θ(X > t_0).

Problem #4 Let X_1, ..., X_n be i.i.d. Poisson with mean θ. Its moment generating function is known to be E{exp(tX)} = exp[θ{exp(t) − 1}].
(a) Show that E(X − θ)² = θ, E(X − θ)³ = θ and E(X − θ)⁴ = θ + 3θ². I may have made an error here, so correct it if I have.
(b) Compute the limiting distribution for the mle of θ.
(c) The sample variance s² is unbiased for θ. Compute its limiting distribution.
(d) Compare the limiting variances you found in parts (b) and (c).

Problem #5 Let X_1, ..., X_n be i.i.d. from a one parameter exponential family in canonical form, with the density function p(x|θ) = S(x) exp{θx + d(θ)}.
(a) Show that if the mle exists, it must satisfy X̄ = {E_θ(X)} evaluated at θ = θ̂.
(b) Cite a theorem from class showing that the mle must be consistent.
(c) What is the asymptotic limiting distribution of the mle, in terms of derivatives of the function d(θ)?

Problem #6 Suppose that X_1, ..., X_n follow the Weibull model with density f(x|λ, κ) = κλ(λx)^{κ−1} exp{−(λx)^κ}.
(a) Suppose that κ is known. What is the limit distribution for the mle of λ?
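Problem #4(a) invites you to check the stated moments; a symbolic check (Python, assuming SymPy is available; my sketch) differentiates the mgf of X − θ and confirms E(X − θ)² = θ, E(X − θ)³ = θ, and E(X − θ)⁴ = θ + 3θ².

    import sympy as sp

    t, theta = sp.symbols('t theta', positive=True)

    # Central mgf of Poisson(theta): E exp{t(X - theta)} = exp(-theta*t) * M(t),
    # where M(t) = exp(theta*(e^t - 1)); its k-th derivative at t = 0 is E(X-theta)^k.
    M_c = sp.exp(-theta * t) * sp.exp(theta * (sp.exp(t) - 1))

    for k in (2, 3, 4):
        print(k, sp.expand(sp.diff(M_c, t, k).subs(t, 0)))
    # 2 -> theta, 3 -> theta, 4 -> theta + 3*theta**2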

Problem #7 In many problems, time to event data would naturally be modeled via a negative exponential density. However, in some of these problems, there is the worry that there is a certain probability that the event will never occur. Such a model has the distribution (not density) function

    Pr(X ≤ x | θ, κ) = (1 − κ){1 − exp(−x/θ)} for x < ∞;
    Pr(X = ∞ | θ, κ) = κ.

Note that the value x = ∞ has a positive probability. This model is not in the form of an exponential family, and in fact the data do not even have a density function.
(a) Interpret κ as a cure rate.
(b) Show that the likelihood function for this model is κ^{I(x=∞)} {(1 − κ) exp(−x/θ)/θ}^{I(x<∞)}.
(c) Show that E(X) = ∞ and hence the standard method of moments will not work.
(d) Compute the mle for κ and θ.
(e) Compute the limit distribution for the mle of κ.

4.6 Homework #6

Problem #1 Suppose that, given θ, X is Poisson with mean θ. Let θ have a negative exponential prior distribution with mean θ_0. Let the loss function be L(θ, t) = (t − θ)²/θ.
(a) Show that the posterior distribution of θ is a gamma random variable.
(b) What is the Bayes estimator of θ? Hint: you have been told a characterization of Bayes estimators in terms of minimizing a certain function. You should try to do this minimization explicitly here.

Problem #2 Let X be Binomial(n, θ_1) and let Y be Binomial(n, θ_2). Suppose the loss function is L(θ_1, θ_2, t) = (θ_1 − θ_2 − t)². Let θ_1 and θ_2 have independent prior beta-distributions with parameters (α, β). Find the Bayes estimator for this loss function.

Problem #3 Work problem 7.24 in Casella & Berger (p. 335).

Problem #4 Let X_1, ..., X_n be i.i.d. Normal(0, variance = θ). Suppose I am interested only in the special class of estimators of θ defined by

    F = {T : T_n(m) = (n + m)^{−1} Σ_{i=1}^n X_i²}.

Suppose that the loss function is L(t, θ) = θ^{−2}(t − θ)².
(a) In this class of estimators, which values of m, if any, yield an admissible estimator?
(b) Is m = 0 minimax?
(c) Answer (a) if the loss function is changed to L(t, θ) = θ^{−1}(t − θ)².

Problem #5 One of the more difficult aspects of Bayesian inference done by frequentists is to find a noninformative prior. Jeffreys' prior is the one in which the prior density is proportional to the square root of the Fisher information. Suppose that X_1, ..., X_n are independent and identically distributed Bernoulli(θ).
(a) Find the Jeffreys prior for this model.
(b) Interpret the Jeffreys prior as a uniform prior for arcsin(√θ).

Problem #6 One criticism of the use of beta priors for Bernoulli sampling is that they are unimodal. Thus, various people have proposed the use of a mixture-of-betas prior, namely π(θ) = ε g_B(θ|a, b) + (1 − ε) g_B(θ|c, d), where g_B(θ|a, b) is the beta(a, b) density. Show that this prior is conjugate for Bernoulli sampling.
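For Problem #5, the Fisher information for Bernoulli(θ) is {θ(1 − θ)}^{−1}, so the Jeffreys prior is the Beta(1/2, 1/2) density; the sketch below (Python, assuming NumPy; my illustration, not part of the assignment) checks the arcsin(√θ) interpretation by simulation.

    import numpy as np

    rng = np.random.default_rng(4)

    # Jeffreys prior for Bernoulli(theta) is proportional to
    # theta^(-1/2) (1-theta)^(-1/2), i.e., Beta(1/2, 1/2).
    theta = rng.beta(0.5, 0.5, size=500_000)

    # Change of variables: phi = arcsin(sqrt(theta)) should be uniform on (0, pi/2).
    phi = np.arcsin(np.sqrt(theta))
    hist, _ = np.histogram(phi, bins=10, range=(0, np.pi / 2), density=True)
    print(np.round(hist, 3))  # roughly flat at 2/pi = 0.637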

Problem #7 Suppose that X_1, ..., X_n are iid with a negative exponential distribution with mean 1/θ.
(a) Find the Jeffreys prior for θ.
(b) Compute the posterior distribution for θ.
(c) Compute the posterior distribution for λ = 1/θ.
(d) Discuss computing the posterior mean and mode for λ.

4.7 Homework #7

Problem #1 If X_1, ..., X_n are i.i.d. normal with mean θ and variance 1.0, consider testing the hypothesis H_0: θ ≤ 0 against the alternative H_1: θ > 0. What is the power function of the UMP level α test?

Problem #2 In Problem #1, suppose that θ has a prior Normal distribution with mean 0.0 and variance σ². Consider the 0-1 loss function discussed in class, i.e., the loss is zero if a correct decision is made, and the loss is one otherwise. What is the Bayes procedure for this problem?

Problem #3 Let X_1, ..., X_n be i.i.d. with a common density p(x|θ) = exp{−(x − θ)}, x ≥ θ. Let U = min(X_1, ..., X_n).
(a) Show that U and U − (1/n) are (respectively) an mle and a UMVUE for θ.
(b) In testing H_0: θ ≤ θ_0 against H_1: θ > θ_0 at level α, show that the UMP level α test is of the form to reject H_0 when U > c.
(c) In part (b), express c as a function of θ_0 and α.
(d) In parts (b)-(c), what is the power function for the test?

Problem #4 Suppose that X_1, ..., X_n are a sample with mass function pr(X = k) = {(k − 2)(k − 1)/2} (1 − θ)^{k−3} θ³.
(a) If we wish to test H_0: θ ≤ θ_0 against H_1: θ > θ_0, find the form of the UMP test.
(b) What is the Fisher information and the asymptotic distribution of the mle here?

Problem #5 Let X be a Binomial random variable based on a sample of size n = 10 with success probability θ. Let S = |X − 5|, and suppose this is all that is observed, i.e., I only observe S, and I cannot observe X. Consider testing H_0: θ ≤ 1/3 or θ ≥ 2/3 against H_1: θ = 1/2. Suppose I use the test which rejects H_0 when S = 0 or S = 1.
(a) What is the distribution of S?
(b) Find the level of this test. Remember to consider carefully what level means with this composite hypothesis.
(c) Is the test UMP of its level? Why or why not?

Problem #6 Suppose I take n observations from a multinomial distribution with cell probabilities as arranged in Table 4.1, and data as in Table 4.2. I am interested in testing the hypothesis H_0: θ_yy ≤ θ_yn against the alternative H_1: θ_yy > θ_yn. By thinking carefully, find an appropriate conditional test for this hypothesis. By a conditional test, I mean that you should condition on part or all of the data.

            Yes     No
    Yes     θ_yy    θ_yn
    No      θ_ny    θ_nn

Table 4.1: Table of probabilities for Problem #6. The θ's sum to 1.0.

            Yes     No
    Yes     N_yy    N_yn
    No      N_ny    N_nn

Table 4.2: Table of counts for Problem #6. The N's sum to n.
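For Problem #1 of this homework, the UMP level α test rejects when √n X̄ > z_α, so its power function is β(θ) = Φ(√n θ − z_α); a short computation (Python, assuming SciPy; my sketch, not a graded solution):

    import numpy as np
    from scipy.stats import norm

    # UMP level-alpha test of H0: theta <= 0 vs H1: theta > 0 for N(theta, 1) data:
    # reject when sqrt(n)*Xbar > z_alpha; power is Phi(sqrt(n)*theta - z_alpha).
    alpha, n = 0.05, 25
    z = norm.ppf(1 - alpha)

    for theta in (0.0, 0.2, 0.4, 0.6):
        print(theta, round(norm.cdf(np.sqrt(n) * theta - z), 3))
    # power equals alpha at theta = 0 and increases monotonically in theta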

4.8 Homework #8

Problem #1 Let X_1, ..., X_n be i.i.d. Poisson(θ). Suppose I want to test the hypothesis H_0: θ = θ_0 against the alternative H_1: θ ≠ θ_0.
(a) What is the form of the GLR test here?
(b) What is the form of the Wald test?
(c) What is the form of the score test?
(d) Prove directly that as n → ∞, the score test achieves its nominal level α.

Problem #2 Repeat Problem #1, but for the case of sampling from the normal distribution with mean and variance both equal to θ.

Problem #3 Let X be Binomial(n, θ_1) and let Y be Binomial(n, θ_2). Let S = X + Y.
(a) What is the distribution of X given S? You may find it useful to reparameterize θ_1 = {1 + exp(−Δ)}^{−1} and θ_2 = {1 + exp(−Δ − η)}^{−1}.
(b) Is this distribution a member of the one-parameter exponential family with a monotone likelihood ratio?
(c) Use the result in (a) to find a UMP conditional test of the hypothesis H_0: θ_1 ≤ θ_2 against the alternative H_1: θ_1 > θ_2.
(d) What is the conditional Wald test for this problem? This is a one-sided test, and we did not cover one-sided testing in class. I'm asking that you come up with a reasonable guess.

Problem #4 Suppose we are concerned with the lower endpoint of computer generated random numbers which purport to be uniform on (0, 1). We have a sample X_1, ..., X_n and consider the density f(x, θ) = I(θ ≤ x ≤ 1)/(1 − θ). Consider the following observations: (.87, .84, .79, .33, .02, .97, .20, .47, .51, .29, .58, .69). Suppose we adopt a prior distribution with density π(θ) = (1 + a)(1 − θ)^a.
(a) What kind of prior beliefs does this prior represent?
(b) Compute the posterior density for θ.
(c) Plot this posterior density for a few values of a.
(d) Compute a 95% credible confidence interval for θ, i.e., one which covers 95% of the posterior density.

Problem #5 Consider the same model as in Problem #4, but this time compute a 95% likelihood ratio confidence interval for θ.
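For Problem #1, all three statistics have closed forms in the Poisson model, since θ̂ = X̄ and I(θ) = 1/θ; a sketch (Python, assuming NumPy and SciPy; my illustration of the formulas, not a graded solution):

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    theta0, n = 3.0, 50
    x = rng.poisson(theta0, size=n)
    xbar = x.mean()

    # GLR, Wald and score statistics for H0: theta = theta0 with iid Poisson data;
    # each is asymptotically chi-squared with 1 df under H0.
    glr = 2 * n * (theta0 - xbar + xbar * np.log(xbar / theta0))
    wald = n * (xbar - theta0) ** 2 / xbar     # information evaluated at the mle
    score = n * (xbar - theta0) ** 2 / theta0  # information evaluated at theta0

    crit = chi2.ppf(0.95, df=1)
    print([round(s, 3) for s in (glr, wald, score)],
          "reject?", [s > crit for s in (glr, wald, score)])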

Chapter 5. MY EXAMS FROM 1994

EXAM #1, 1994

PROBLEM #1. Let X_1, X_2, ..., X_n be iid uniform[0, 2θ]. The method of moments estimator of θ is X̄, which has variance θ²/(3n). It is known that n^{1/2}{log(X̄) − log(θ)} → Normal{0, σ²(θ)}. What is σ²(θ)?

PROBLEM #2. Let X_1, X_2, ..., X_n be iid with common density function f_X(x|θ) = exp{−(x − θ)} exp{−e^{−(x−θ)}}, −∞ < θ, x < ∞.
a. Compute the Cramer-Rao lower bound for unbiased estimators of e^θ.

PROBLEM #3. Let Z_1, Z_2, ..., Z_n be fixed constants. Let Y_1, Y_2, ..., Y_n be independent Bernoulli random variables with Pr(Y_i = 1) = H(θZ_i), where H(v) = {1 + e^{−v}}^{−1} and H^{(1)}(v) = H(v){1 − H(v)}.
(a) Show that the mle is the solution to 0 = Σ_{i=1}^n Z_i{Y_i − H(Z_iθ)}.
(b) Find a sufficient statistic for θ.
(c) Compute the Cramer-Rao lower bound for unbiased estimators of θ.

PROBLEM #4. Let X_1, X_2, ..., X_n be iid Normal(0, variance = θ). Consider the estimators T_m(X) = (n + m)^{−1} Σ_{i=1}^n X_i². Suppose the loss function is L(t, θ) = θ^{−2}(t − θ)². Remember that a chi-squared random variable with n degrees of freedom has mean n and variance 2n.
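A numerical check of the delta-method claim in PROBLEM #1 (Python, assuming NumPy; my sketch, not part of the exam): since var(X̄) = θ²/(3n) and d log(θ)/dθ = 1/θ, the limit variance is σ²(θ) = 1/3, free of θ.

    import numpy as np

    rng = np.random.default_rng(6)
    theta, n, reps = 2.0, 500, 20_000

    # X ~ Uniform[0, 2*theta], so Xbar has variance theta^2/(3n); the delta
    # method gives sqrt(n){log(Xbar) - log(theta)} -> N(0, 1/3).
    x = rng.uniform(0, 2 * theta, size=(reps, n))
    print(np.var(np.sqrt(n) * (np.log(x.mean(axis=1)) - np.log(theta))))  # near 1/3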

Is the value m = 0 (which yields the mle) either minimax or admissible among this class of estimators? Why or why not?

PROBLEM #5. Let X_1, X_2, ..., X_n be iid with mean θ and variance θ. If the observations were normally distributed, the mle is θ̂_mle = (1/4 + T_n)^{1/2} − 1/2, where T_n = n^{−1} Σ_{i=1}^n X_i². Suppose the data are not normally distributed, but that instead EX⁴ = θ²K + 6θ³ + θ⁴ (K = 3 at the normal). Of course, EX² = θ + θ². Use these facts to compute the limiting distribution of n^{1/2}(θ̂_mle − θ).

PROBLEM #6. Suppose that X and Y are independent, with X ~ Binomial(n, θ_x) and Y ~ Binomial(n, θ_y). Let S = X + Y, and suppose that

    θ_x = e^Δ / (1 + e^Δ),   1 − θ_x = 1 / (1 + e^Δ);
    θ_y = e^{Δ+η} / (1 + e^{Δ+η}),   1 − θ_y = 1 / (1 + e^{Δ+η}).

Show that Y | S (Y given S) has a distribution which is OPEF, in canonical form with natural parameter η, i.e., Pr(Y = y | S = s) = exp{ηy + d_0(s, η) + r(y, s)}.

PROBLEM #7. Let X_1, X_2 be independent Binomial(n, θ), so that Pr(X = k) = C(n, k) θ^k (1 − θ)^{n−k}. Find the UMVUE for Pr(X ≤ 3) = Σ_{k=0}^3 C(n, k) θ^k (1 − θ)^{n−k}.

EXAM #2, 1994

PROBLEM #1. The inverse Gaussian has the density function f_X(x) = (2πx³)^{−1/2} exp{−(x − μ)²/(2μ²x)} for x ≥ 0.

(a) Show that E(X) = μ, Var(X) = μ³.
(b) If X_1, ..., X_n are iid, find a conjugate prior for μ.
(c) Suppose you want to test Ω_0: {μ ≤ μ_0} versus Ω_1: {μ > μ_0}. Citing class results, show that the UMP level α test chooses Ω_1 when n^{−1/2} Σ_{i=1}^n (X_i − μ_0) ≥ c, for some constant c.
(d) Indicate in detail how to choose c based on a fixed sample of size n. By "in detail", I mean that you must convince me that you can actually get a number within 24 hours.
(e) As n → ∞, where does c converge?
(f) What is the level α Wald test for Ω_0: {μ = μ_0} versus Ω_1: {μ ≠ μ_0}?

PROBLEM #2. Let X_1, X_2, ..., X_n be iid geometric with

    f_X(k|θ) = (1 − θ)θ^k, k = 0, 1, 2, ...;  E(X|θ) = θ(1 − θ)^{−1};  V(X|θ) = θ(1 − θ)^{−2}.

The prior for θ is Beta(α, β):

    π(θ) = θ^{α−1}(1 − θ)^{β−1} Γ(α + β) / {Γ(α)Γ(β)};  Γ(α) = (α − 1)Γ(α − 1).

(a) Find the posterior mean for θ given X = (X_1, ..., X_n).
(b) If the loss function is L(t, θ) = (t − θ)²/(1 − θ), what is the Bayes estimator of θ?
(c) What is the mle of θ? Is there any legitimate choice of (α, β) for which the mle is Bayes in part (b)?
(d) Suppose I have a 0-1 loss function and only two decisions, Ω_0: {θ ≤ 1/2} and Ω_1: {θ > 1/2}. Describe what the Bayes decision procedure is in this case, in detail.


Chapter 6. MY EXAMS FROM 1996

EXAM #1, 1996

Problem #1: Suppose that X_1, ..., X_n are iid Uniform[0, Θ]. Define

    Θ̂_1 = 2X̄;  Θ̂_2 = (n + 1)X_(n)/n;  Θ̂_3(c) = cX_(n),

where X_(1) ≤ X_(2) ≤ ... ≤ X_(n). You will want to remember that

    E(X_(n)) = nΘ/(n + 1);  E(X_(n)²) = nΘ²/(n + 2);  var(X_(n)) = nΘ²/{(n + 2)(n + 1)²}.

(a) Which estimator is method of moments, and which is the MLE?
(b) Comparing Θ̂_1 and Θ̂_2, both of which are unbiased for Θ, which is the better estimator of Θ, and why? To make life easier for you, I have actually essentially computed the variances, but in this problem you are not allowed to look at the answers. You should get your result strictly from theorems from class, without citing the actual variances.
(c) Among the class of estimators Θ̂_3(c) for c > 0, using the loss function L(Θ, s) = (s − Θ)²/Θ², is Θ̂_3(c*) for c* = (n + 1)/n minimax?
(d) What is the limiting distribution of the log of the method of moments estimator?
(e) Suppose in (c) I change the loss function to L(Θ, s) = (s − Θ)², so that the risk functions are all multiplied by Θ². Is Θ̂_3(c*) for c* = (n + 1)/n minimax now?

Problem #2: Suppose that X_1, ..., X_n are iid Beta(Θ, Θ), so that for 0 < x < 1, f(x|Θ) = x^{Θ−1}(1 − x)^{Θ−1} Γ(2Θ)/Γ²(Θ). Here are some facts (X̄ is the sample mean and s² is the sample variance):

    E_Θ(X̄) = 1/2 for all Θ;  var_Θ(X̄) = {4n(2Θ + 1)}^{−1};
    E_Θ(s²) = {4(2Θ + 1)}^{−1};  var_Θ(s²) = κ/n (exact value of κ not important).

You may assume that the gamma function Γ(s) has first derivative Γ^{(1)}(s) and second derivative Γ^{(2)}(s), which are otherwise unspecified.
(a) Compute the sufficient statistics for Θ. Cite the theorem from class which assures you it is sufficient.
(b) Compute the Cramer-Rao lower bound for estimating Θ.
(c) Is the MLE for Θ consistent? Why? Cite and apply theorems from class.

Problem #3: Consider the same setup as in Problem #2.
(a) What is the limit distribution of the MLE?
(b) Why is it that you cannot use the method of moments as applied to X̄ to estimate Θ?
(c) Apply the method of moments to s² to get an estimate of Θ.
(d) Compute the limit distribution of the estimator you defined in (c).

Problem #4 (DO NOT WORK): In many regression problems in statistics, it is useful to change the OPEF family slightly, so that

    f(x|Θ) = exp[ {T(x)Θ + d(Θ)}/φ + S(x) ].

Compute E{T(X)} and var{T(X)} in terms of Θ and φ. Hint: compute the mgf of T(X), and remember that for all Θ, 1 = ∫ f(x|Θ) dx.

Throughout this exam, X = (X_1, ..., X_n).

EXAM #2, 1996

Problem #1: The following are a grab-bag assortment of questions about basic concepts.
(a) State the Neyman-Pearson Lemma for testing Θ_0 = {θ_0} versus Θ_1 = {θ_1}. Do not prove the Lemma!!!!! You may assume that X has a continuous density function.
(b) When studying the GLR (Generalized Likelihood Ratio), Wald and score tests for testing Θ_0 versus Θ_1, based on a sample of size n from iid observations, which did we prove:
    (i) max_{θ ∈ Θ_0} limit_{n→∞} Pr_θ{S(X) = 1} = α;
    (ii) limit_{n→∞} max_{θ ∈ Θ_0} Pr_θ{S(X) = 1} = α.
Do not justify: simply tell me which we did!!!!!
(c) In testing Θ_0 versus Θ_1, when the loss function takes on the values 0 or 1 depending on whether we make the correct decision or not, what is the Bayes decision rule? Do not justify: just write down the answer!!!!! You may assume that the posterior density π(θ|X) is known to you.
(d) What is the name of the theorem in which we showed the fundamental result that if X has the density function f(x|θ), then E_θ[(∂/∂θ) log{f(X|θ)}] = 0? Do not justify: simply tell me the name!!!!!
(e) Define what it means for a density f(x, θ) to have a monotone likelihood ratio in a statistic T(X).
(f) In most of our Taylor series expansions, we expanded the mle θ̂_mle about the true value θ. In one important instance, however, we did the opposite, expanding θ about θ̂_mle. In what context did we do this latter expansion? Do not justify: simply tell me the context!!!!!
(g) Return to (c). Suppose the loss function is

    L(θ, s = 0) = 0 if θ ∈ Θ_0, and = 1 if θ ∈ Θ_1;
    L(θ, s = 1) = 2 if θ ∈ Θ_0, and = 0 if θ ∈ Θ_1.

Now what is the Bayes decision procedure? You don't have to justify your answer if you can do the work in your head.

Problem #2: On your last exam, many of you had a problem with a nontrivial application of the delta method (Taylor-Slutsky). This is a reprise of that problem. Suppose that we have statistics T_n(X) with the property that n^{1/2}{T_n(X) − (4θ + 1)^{−1}} → Normal(0, κ²). What is the limiting distribution of [{1/T_n(X)} − 1]/4?

Problem #3: Suppose that X has a continuous!!! density function f(x|θ) = exp{θT(x) + d(θ) + q(x)}. This is an OPEF in canonical (natural) form, hence has a monotone likelihood ratio, etc. At least twice during the course of the semester I left you a very broad hint that the following problem was coming.
(a) For arbitrary θ'' > θ', in terms of T(X), what is the form of the Neyman-Pearson test of Θ_0 = {θ'} versus Θ_1 = {θ''}? Simply write this down, but do not derive it!!!!!!
(b) Suppose you are testing Θ_0 = {θ ≥ θ_0} against Θ_1 = {θ < θ_0}. Remember that this is a continuous density function. What is the form of the UMP level α test? Simply write this down, but do not derive it!!!!!!
(c) Show that the power function of the test in (b) is monotone nonincreasing. Provide a formal proof based upon contradiction, using (a). Remember that this is a problem with a continuous density function.

Problem #4: Let z_1, z_2, ... be a sequence of fixed (non-random) positive constants with the property that κ_{n,j} = n^{−1} Σ_{i=1}^n z_i^j → κ_j as n → ∞. Let X_1, X_2, ... be independent random variables whose possible values are 0, 1, 2, 3, ... and whose probability mass functions are

    Pr(X_i = x) = exp{x log(z_i) + x log(θ) − z_iθ − log(x!)},

where x! is read as "x factorial". This is a member of the OPEF.
(a) What is the mean of X_i?
(b) What is the mle for θ, based on a sample of size n?
(c) Any unbiased estimator of θ, based on a sample of size n, must have a variance greater than some number. What is that number?
(d) Suppose I tell you that n^{1/2}(θ̂_mle − θ) → Normal{0, γ²(θ)}. What is γ²(θ)?
(e) Suppose that θ has a gamma(a, 1/b) density for a prior, namely π(θ) = b^a θ^{a−1} exp(−bθ)/Γ(a); E(θ) = a/b; var(θ) = a/b². What is the posterior density of θ given (X_1, ..., X_n)?
(f) What is the Bayes estimator of θ with respect to squared error loss?
(g) What is the UMP level α test of the hypothesis Θ_0 = {θ ≤ θ_0} against Θ_1 = {θ > θ_0}? Fill in as many details as you can, without going overboard, on how you would make such a test operational.
(h) What is the level α Wald test of Θ_0 = {θ = θ_0} against Θ_1 = {θ ≠ θ_0}?
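For Problem #4(b), maximizing the log likelihood gives θ̂ = ΣX_i/Σz_i, with Fisher information Σz_i/θ, so γ²(θ) = θ/κ_1; a simulation check (Python, assuming NumPy; the z's below are an arbitrary illustrative choice, and κ_1 is approximated by their mean):

    import numpy as np

    rng = np.random.default_rng(7)
    theta, n, reps = 1.5, 300, 20_000
    z = rng.uniform(0.5, 2.0, size=n)  # fixed positive constants (illustrative)

    # X_i ~ Poisson(z_i * theta): the mle is sum(X_i)/sum(z_i), and the Fisher
    # information is sum(z_i)/theta, so sqrt(n)(mle - theta) -> N(0, theta/kappa_1).
    x = rng.poisson(z * theta, size=(reps, n))
    mle = x.sum(axis=1) / z.sum()
    print(np.var(np.sqrt(n) * (mle - theta)), theta / z.mean())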


Chapter 7. MY EXAMS FROM 1998

EXAM #1, 1998

In what follows, you may quote class results. There is no need to rederive everything. If you find that you are engaged in a long, laborious calculation, you are doing the problem incorrectly. The point of having general theory is to avoid having to do long, laborious calculations. Unless I specifically ask for it, you do not need to show that the mle derived by solving the score equation is actually a maximizer of the log likelihood.

Problem #1 Let X_1 and X_2 be independent Poisson(θ). I am interested in estimating Pr(X = 0) = exp(−θ) = q(θ), and an unbiased estimate of it is I(X_1 = 0), where I(·) is the indicator function.
(a) Find the UMVUE of q(θ).
(b) Construct the Cramer-Rao lower bound for unbiased estimates of q(θ) based on this sample of size 2.
(c) Show that the UMVUE of q(θ) does not achieve the Cramer-Rao lower bound.

Problem #2 Let X_1, ..., X_n be an iid sample from the inverse gamma density f(x|θ) = θ^α exp(−θ/x) / {x^{α+1} Γ(α)}, x > 0, where α is known but θ > 0 is unknown.
(a) Compute the mle for θ.
(b) Show that the mle is consistent.

(c) Compute the limiting normal distribution for the mle of θ.
(d) Does the mle achieve the Cramer-Rao lower bound? Why or why not?

Problem #3 Let X be an observation from the Normal(0, σ² = θ) distribution, where it is known that θ > 1.
(a) Show that X² is a method of moments estimator of θ, as well as UMVUE.
(b) Compute the mle for θ, under the aforementioned restriction that θ > 1.
(c) Show that the MLE has a smaller mean squared error for estimating θ.
(d) If the loss function is L(θ, s) = (s − θ)²/θ², show that the method of moments estimator is inadmissible.
(e) Is the method of moments estimator minimax? Why or why not?

Problem #4 Let X_1, ..., X_n be iid Bernoulli(θ) random variables. Consider estimating q(θ) = θ(1 − θ).
(a) Show that the mle of q(θ) is consistent for q(θ).
(b) What is the limiting normal distribution of the mle for q(θ)?

EXAM #2, 1998

In what follows, you may quote class results. There is no need to rederive everything. If you find that you are engaged in a long, laborious calculation, you are doing the problem incorrectly. The point of having general theory is to avoid having to do long, laborious calculations. Unless I specifically ask for it, you do not need to show that the mle derived by solving the score equation is actually a maximizer of the log likelihood.

Problem #1: The following are a grab-bag assortment of questions about basic concepts. This is also to remind the Statistics students that it is helpful to study old exams.
(a) State and prove the Neyman-Pearson Lemma for testing Θ_0 = {θ_0} versus Θ_1 = {θ_1}. You may assume that X has a continuous density function.
(b) When studying UMP testing in the context of monotone likelihood ratios for continuous densities, for testing Θ_0 = {θ ≤ θ_0} versus Θ_1 = {θ > θ_0}, based on a sample of size n from iid observations, which did we prove:
    (i) max_{θ ∈ Θ_0} limit_{n→∞} Pr_θ{S(X) = 1} = α;
    (ii) limit_{n→∞} max_{θ ∈ Θ_0} Pr_θ{S(X) = 1} = α.
Do not justify: simply tell me which we did!!!!!
(c) In testing Θ_0 versus Θ_1, when the loss function takes on the values 0 or 1 depending on whether we make the correct decision or not, what is the Bayes decision rule? Do not justify: just write down the answer!!!!! You may assume that the posterior density π(θ|X) is known to you.
(d) Suppose that X has the density function f(x|θ). We used throughout the semester that E_θ[(∂/∂θ) log{f(X|θ)}] = 0. Assuming anything you want (except the final answer), prove this result.
(e) Define what it means for a density f(x, θ) to have a monotone likelihood ratio in a statistic T(X).

Problem #2: Suppose that X_1, ..., X_n, ... are iid from a Rayleigh density parameterized in convenient form as f(x|η) = 2xη exp(−ηx²), x > 0.
(a) What is the Fisher information for η in a sample of size n?
(b) What is the mle for η?
(c) What is the limiting distribution for the mle of η, i.e., as n → ∞, what is the distribution of n^{1/2}(η̂_mle − η)?
(d) What is the limiting distribution, in the sense of (c), for 1/η̂_mle?
(e) What is the UMP test for Θ_0 = {η ≤ η_0} versus Θ_1 = {η > η_0}? You can use class results, but you should state them as you go, and please remember to be careful, as this one has somewhat tricky algebra.
(f) What is the conjugate prior density for η?
(g) Describe how you would go about computing a 95% HPD credible region for η. Once you derive the posterior, you can then just give a brief outline.
(h) Work out as many details as you can of the form of the GLR test of Θ_0 = {η = η_0} versus Θ_1 = {η ≠ η_0}.
(i) What is the Wald test of Θ_0 = {η = η_0} versus Θ_1 = {η ≠ η_0}?
(j) What is the score test of Θ_0 = {η = η_0} versus Θ_1 = {η ≠ η_0}?


More information

Mathematical statistics

Mathematical statistics October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

Department of Statistical Science FIRST YEAR EXAM - SPRING 2017

Department of Statistical Science FIRST YEAR EXAM - SPRING 2017 Department of Statistical Science Duke University FIRST YEAR EXAM - SPRING 017 Monday May 8th 017, 9:00 AM 1:00 PM NOTES: PLEASE READ CAREFULLY BEFORE BEGINNING EXAM! 1. Do not write solutions on the exam;

More information

Interval Estimation. Chapter 9

Interval Estimation. Chapter 9 Chapter 9 Interval Estimation 9.1 Introduction Definition 9.1.1 An interval estimate of a real-values parameter θ is any pair of functions, L(x 1,..., x n ) and U(x 1,..., x n ), of a sample that satisfy

More information

1. Fisher Information

1. Fisher Information 1. Fisher Information Let f(x θ) be a density function with the property that log f(x θ) is differentiable in θ throughout the open p-dimensional parameter set Θ R p ; then the score statistic (or score

More information

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it

More information

INTRODUCTION TO BAYESIAN METHODS II

INTRODUCTION TO BAYESIAN METHODS II INTRODUCTION TO BAYESIAN METHODS II Abstract. We will revisit point estimation and hypothesis testing from the Bayesian perspective.. Bayes estimators Let X = (X,..., X n ) be a random sample from the

More information

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper

McGill University. Faculty of Science. Department of Mathematics and Statistics. Part A Examination. Statistics: Theory Paper McGill University Faculty of Science Department of Mathematics and Statistics Part A Examination Statistics: Theory Paper Date: 10th May 2015 Instructions Time: 1pm-5pm Answer only two questions from Section

More information

Introduction to Bayesian Methods

Introduction to Bayesian Methods Introduction to Bayesian Methods Jessi Cisewski Department of Statistics Yale University Sagan Summer Workshop 2016 Our goal: introduction to Bayesian methods Likelihoods Priors: conjugate priors, non-informative

More information

1. (Regular) Exponential Family

1. (Regular) Exponential Family 1. (Regular) Exponential Family The density function of a regular exponential family is: [ ] Example. Poisson(θ) [ ] Example. Normal. (both unknown). ) [ ] [ ] [ ] [ ] 2. Theorem (Exponential family &

More information

Principles of Statistics

Principles of Statistics Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 81 Paper 4, Section II 28K Let g : R R be an unknown function, twice continuously differentiable with g (x) M for

More information

Problem Selected Scores

Problem Selected Scores Statistics Ph.D. Qualifying Exam: Part II November 20, 2010 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. Problem 1 2 3 4 5 6 7 8 9 10 11 12 Selected

More information

March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS

March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS Abstract. We will introduce a class of distributions that will contain many of the discrete and continuous we are familiar with. This class will help

More information

Brief Review on Estimation Theory

Brief Review on Estimation Theory Brief Review on Estimation Theory K. Abed-Meraim ENST PARIS, Signal and Image Processing Dept. abed@tsi.enst.fr This presentation is essentially based on the course BASTA by E. Moulines Brief review on

More information

Chapter 8.8.1: A factorization theorem

Chapter 8.8.1: A factorization theorem LECTURE 14 Chapter 8.8.1: A factorization theorem The characterization of a sufficient statistic in terms of the conditional distribution of the data given the statistic can be difficult to work with.

More information

Statistics 3858 : Maximum Likelihood Estimators

Statistics 3858 : Maximum Likelihood Estimators Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

Stat 5102 Final Exam May 14, 2015

Stat 5102 Final Exam May 14, 2015 Stat 5102 Final Exam May 14, 2015 Name Student ID The exam is closed book and closed notes. You may use three 8 1 11 2 sheets of paper with formulas, etc. You may also use the handouts on brand name distributions

More information

Statistics Ph.D. Qualifying Exam: Part II November 3, 2001

Statistics Ph.D. Qualifying Exam: Part II November 3, 2001 Statistics Ph.D. Qualifying Exam: Part II November 3, 2001 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

Conjugate Priors, Uninformative Priors

Conjugate Priors, Uninformative Priors Conjugate Priors, Uninformative Priors Nasim Zolaktaf UBC Machine Learning Reading Group January 2016 Outline Exponential Families Conjugacy Conjugate priors Mixture of conjugate prior Uninformative priors

More information

Lecture 1: Introduction

Lecture 1: Introduction Principles of Statistics Part II - Michaelmas 208 Lecturer: Quentin Berthet Lecture : Introduction This course is concerned with presenting some of the mathematical principles of statistical theory. One

More information

p y (1 p) 1 y, y = 0, 1 p Y (y p) = 0, otherwise.

p y (1 p) 1 y, y = 0, 1 p Y (y p) = 0, otherwise. 1. Suppose Y 1, Y 2,..., Y n is an iid sample from a Bernoulli(p) population distribution, where 0 < p < 1 is unknown. The population pmf is p y (1 p) 1 y, y = 0, 1 p Y (y p) = (a) Prove that Y is the

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Chapter 7 Maximum Likelihood Estimation 7. Consistency If X is a random variable (or vector) with density or mass function f θ (x) that depends on a parameter θ, then the function f θ (X) viewed as a function

More information

ECE 275B Homework # 1 Solutions Version Winter 2015

ECE 275B Homework # 1 Solutions Version Winter 2015 ECE 275B Homework # 1 Solutions Version Winter 2015 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2

More information

Suggested solutions to written exam Jan 17, 2012

Suggested solutions to written exam Jan 17, 2012 LINKÖPINGS UNIVERSITET Institutionen för datavetenskap Statistik, ANd 73A36 THEORY OF STATISTICS, 6 CDTS Master s program in Statistics and Data Mining Fall semester Written exam Suggested solutions to

More information

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it

More information

APPM/MATH 4/5520 Solutions to Exam I Review Problems. f X 1,X 2. 2e x 1 x 2. = x 2

APPM/MATH 4/5520 Solutions to Exam I Review Problems. f X 1,X 2. 2e x 1 x 2. = x 2 APPM/MATH 4/5520 Solutions to Exam I Review Problems. (a) f X (x ) f X,X 2 (x,x 2 )dx 2 x 2e x x 2 dx 2 2e 2x x was below x 2, but when marginalizing out x 2, we ran it over all values from 0 to and so

More information

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout

More information

Probability and Statistics qualifying exam, May 2015

Probability and Statistics qualifying exam, May 2015 Probability and Statistics qualifying exam, May 2015 Name: Instructions: 1. The exam is divided into 3 sections: Linear Models, Mathematical Statistics and Probability. You must pass each section to pass

More information

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3.

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3. Mathematical Statistics: Homewor problems General guideline. While woring outside the classroom, use any help you want, including people, computer algebra systems, Internet, and solution manuals, but mae

More information

Statistical Theory MT 2006 Problems 4: Solution sketches

Statistical Theory MT 2006 Problems 4: Solution sketches Statistical Theory MT 006 Problems 4: Solution sketches 1. Suppose that X has a Poisson distribution with unknown mean θ. Determine the conjugate prior, and associate posterior distribution, for θ. Determine

More information

Today s Outline. Biostatistics Statistical Inference Lecture 01 Introduction to BIOSTAT602 Principles of Data Reduction

Today s Outline. Biostatistics Statistical Inference Lecture 01 Introduction to BIOSTAT602 Principles of Data Reduction Today s Outline Biostatistics 602 - Statistical Inference Lecture 01 Introduction to Principles of Hyun Min Kang Course Overview of January 10th, 2013 Hyun Min Kang Biostatistics 602 - Lecture 01 January

More information

Statistical Theory MT 2007 Problems 4: Solution sketches

Statistical Theory MT 2007 Problems 4: Solution sketches Statistical Theory MT 007 Problems 4: Solution sketches 1. Consider a 1-parameter exponential family model with density f(x θ) = f(x)g(θ)exp{cφ(θ)h(x)}, x X. Suppose that the prior distribution has the

More information

Testing Statistical Hypotheses

Testing Statistical Hypotheses E.L. Lehmann Joseph P. Romano Testing Statistical Hypotheses Third Edition 4y Springer Preface vii I Small-Sample Theory 1 1 The General Decision Problem 3 1.1 Statistical Inference and Statistical Decisions

More information

ECE 275B Homework # 1 Solutions Winter 2018

ECE 275B Homework # 1 Solutions Winter 2018 ECE 275B Homework # 1 Solutions Winter 2018 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2 < < x n Thus,

More information

The binomial model. Assume a uniform prior distribution on p(θ). Write the pdf for this distribution.

The binomial model. Assume a uniform prior distribution on p(θ). Write the pdf for this distribution. The binomial model Example. After suspicious performance in the weekly soccer match, 37 mathematical sciences students, staff, and faculty were tested for the use of performance enhancing analytics. Let

More information

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Review Quiz 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Cramér Rao lower bound (CRLB). That is, if where { } and are scalars, then

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Statistics STAT:5100 (22S:193), Fall Sample Final Exam B

Statistics STAT:5100 (22S:193), Fall Sample Final Exam B Statistics STAT:5 (22S:93), Fall 25 Sample Final Exam B Please write your answers in the exam books provided.. Let X, Y, and Y 2 be independent random variables with X N(µ X, σ 2 X ) and Y i N(µ Y, σ 2

More information

Final Exam. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given.

Final Exam. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given. 1. (6 points) True/False. Please read the statements carefully, as no partial credit will be given. (a) If X and Y are independent, Corr(X, Y ) = 0. (b) (c) (d) (e) A consistent estimator must be asymptotically

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Statistics. Lecture 2 August 7, 2000 Frank Porter Caltech. The Fundamentals; Point Estimation. Maximum Likelihood, Least Squares and All That

Statistics. Lecture 2 August 7, 2000 Frank Porter Caltech. The Fundamentals; Point Estimation. Maximum Likelihood, Least Squares and All That Statistics Lecture 2 August 7, 2000 Frank Porter Caltech The plan for these lectures: The Fundamentals; Point Estimation Maximum Likelihood, Least Squares and All That What is a Confidence Interval? Interval

More information

Probability and Estimation. Alan Moses

Probability and Estimation. Alan Moses Probability and Estimation Alan Moses Random variables and probability A random variable is like a variable in algebra (e.g., y=e x ), but where at least part of the variability is taken to be stochastic.

More information

STAT 450: Final Examination Version 1. Richard Lockhart 16 December 2002

STAT 450: Final Examination Version 1. Richard Lockhart 16 December 2002 Name: Last Name 1, First Name 1 Stdnt # StudentNumber1 STAT 450: Final Examination Version 1 Richard Lockhart 16 December 2002 Instructions: This is an open book exam. You may use notes, books and a calculator.

More information

Statistics GIDP Ph.D. Qualifying Exam Theory Jan 11, 2016, 9:00am-1:00pm

Statistics GIDP Ph.D. Qualifying Exam Theory Jan 11, 2016, 9:00am-1:00pm Statistics GIDP Ph.D. Qualifying Exam Theory Jan, 06, 9:00am-:00pm Instructions: Provide answers on the supplied pads of paper; write on only one side of each sheet. Complete exactly 5 of the 6 problems.

More information

Mathematical statistics

Mathematical statistics October 18 th, 2018 Lecture 16: Midterm review Countdown to mid-term exam: 7 days Week 1 Chapter 1: Probability review Week 2 Week 4 Week 7 Chapter 6: Statistics Chapter 7: Point Estimation Chapter 8:

More information

This does not cover everything on the final. Look at the posted practice problems for other topics.

This does not cover everything on the final. Look at the posted practice problems for other topics. Class 7: Review Problems for Final Exam 8.5 Spring 7 This does not cover everything on the final. Look at the posted practice problems for other topics. To save time in class: set up, but do not carry

More information

FIRST YEAR EXAM Monday May 10, 2010; 9:00 12:00am

FIRST YEAR EXAM Monday May 10, 2010; 9:00 12:00am FIRST YEAR EXAM Monday May 10, 2010; 9:00 12:00am NOTES: PLEASE READ CAREFULLY BEFORE BEGINNING EXAM! 1. Do not write solutions on the exam; please write your solutions on the paper provided. 2. Put the

More information

Qualifying Exam in Probability and Statistics.

Qualifying Exam in Probability and Statistics. Part 1: Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section

More information

Miscellaneous Errors in the Chapter 6 Solutions

Miscellaneous Errors in the Chapter 6 Solutions Miscellaneous Errors in the Chapter 6 Solutions 3.30(b In this problem, early printings of the second edition use the beta(a, b distribution, but later versions use the Poisson(λ distribution. If your

More information

Parameter Estimation

Parameter Estimation Parameter Estimation Chapters 13-15 Stat 477 - Loss Models Chapters 13-15 (Stat 477) Parameter Estimation Brian Hartman - BYU 1 / 23 Methods for parameter estimation Methods for parameter estimation Methods

More information

MAT 271E Probability and Statistics

MAT 271E Probability and Statistics MAT 71E Probability and Statistics Spring 013 Instructor : Class Meets : Office Hours : Textbook : Supp. Text : İlker Bayram EEB 1103 ibayram@itu.edu.tr 13.30 1.30, Wednesday EEB 5303 10.00 1.00, Wednesday

More information

STAT 512 sp 2018 Summary Sheet

STAT 512 sp 2018 Summary Sheet STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}

More information

STAT 730 Chapter 4: Estimation

STAT 730 Chapter 4: Estimation STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum

More information

All other items including (and especially) CELL PHONES must be left at the front of the room.

All other items including (and especially) CELL PHONES must be left at the front of the room. TEST #2 / STA 5327 (Inference) / Spring 2017 (April 24, 2017) Name: Directions This exam is closed book and closed notes. You will be supplied with scratch paper, and a copy of the Table of Common Distributions

More information

STAT215: Solutions for Homework 2

STAT215: Solutions for Homework 2 STAT25: Solutions for Homework 2 Due: Wednesday, Feb 4. (0 pt) Suppose we take one observation, X, from the discrete distribution, x 2 0 2 Pr(X x θ) ( θ)/4 θ/2 /2 (3 θ)/2 θ/4, 0 θ Find an unbiased estimator

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Class 26: review for final exam 18.05, Spring 2014

Class 26: review for final exam 18.05, Spring 2014 Probability Class 26: review for final eam 8.05, Spring 204 Counting Sets Inclusion-eclusion principle Rule of product (multiplication rule) Permutation and combinations Basics Outcome, sample space, event

More information

Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses.

Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses. Stat 300A Theory of Statistics Homework 7: Solutions Nikos Ignatiadis Due on November 28, 208 Solutions should be complete and concisely written. Please, use a separate sheet or set of sheets for each

More information

For iid Y i the stronger conclusion holds; for our heuristics ignore differences between these notions.

For iid Y i the stronger conclusion holds; for our heuristics ignore differences between these notions. Large Sample Theory Study approximate behaviour of ˆθ by studying the function U. Notice U is sum of independent random variables. Theorem: If Y 1, Y 2,... are iid with mean µ then Yi n µ Called law of

More information

General Bayesian Inference I

General Bayesian Inference I General Bayesian Inference I Outline: Basic concepts, One-parameter models, Noninformative priors. Reading: Chapters 10 and 11 in Kay-I. (Occasional) Simplified Notation. When there is no potential for

More information

A Course in Statistical Theory

A Course in Statistical Theory A Course in Statistical Theory David J. Olive Southern Illinois University Department of Mathematics Mailcode 4408 Carbondale, IL 62901-4408 dolive@.siu.edu January 2008, notes September 29, 2013 Contents

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

3 Continuous Random Variables

3 Continuous Random Variables Jinguo Lian Math437 Notes January 15, 016 3 Continuous Random Variables Remember that discrete random variables can take only a countable number of possible values. On the other hand, a continuous random

More information

Chapter 7. Hypothesis Testing

Chapter 7. Hypothesis Testing Chapter 7. Hypothesis Testing Joonpyo Kim June 24, 2017 Joonpyo Kim Ch7 June 24, 2017 1 / 63 Basic Concepts of Testing Suppose that our interest centers on a random variable X which has density function

More information

Chapter 3. Point Estimation. 3.1 Introduction

Chapter 3. Point Estimation. 3.1 Introduction Chapter 3 Point Estimation Let (Ω, A, P θ ), P θ P = {P θ θ Θ}be probability space, X 1, X 2,..., X n : (Ω, A) (IR k, B k ) random variables (X, B X ) sample space γ : Θ IR k measurable function, i.e.

More information

CS 361: Probability & Statistics

CS 361: Probability & Statistics October 17, 2017 CS 361: Probability & Statistics Inference Maximum likelihood: drawbacks A couple of things might trip up max likelihood estimation: 1) Finding the maximum of some functions can be quite

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics

More information

STA 732: Inference. Notes 2. Neyman-Pearsonian Classical Hypothesis Testing B&D 4

STA 732: Inference. Notes 2. Neyman-Pearsonian Classical Hypothesis Testing B&D 4 STA 73: Inference Notes. Neyman-Pearsonian Classical Hypothesis Testing B&D 4 1 Testing as a rule Fisher s quantification of extremeness of observed evidence clearly lacked rigorous mathematical interpretation.

More information

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes Neyman-Pearson paradigm. Suppose that a researcher is interested in whether the new drug works. The process of determining whether the outcome of the experiment points to yes or no is called hypothesis

More information

STATISTICS SYLLABUS UNIT I

STATISTICS SYLLABUS UNIT I STATISTICS SYLLABUS UNIT I (Probability Theory) Definition Classical and axiomatic approaches.laws of total and compound probability, conditional probability, Bayes Theorem. Random variable and its distribution

More information