Mathematics Ph.D. Qualifying Examination, STAT 52800 Probability, January 2018


Mathematics Ph.D. Qualifying Examination
STAT 52800 Probability, January 2018

NOTE: Answer all questions completely. Justify every step. Time allowed: 3 hours.

1. Let $X_1, \ldots, X_n$ be a random sample from a normal distribution $N(\theta, 1)$. Answer the following questions.
(1) Consider the hypotheses $H_0: \theta = \theta_0$ against $H_1: \theta > \theta_0$. Derive a uniformly most powerful (UMP) test at the level $\alpha \in (0,1)$ of significance. A random sample of size $n = 100$ resulted in the sample mean $\bar{x}_n = 10.160$; do you reject $H_0$ when $\theta_0 = 10$ at the level $\alpha = .05$ of significance?
(2) Derive the likelihood ratio test at the level $\alpha \in (0,1)$ of significance for testing $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$, where $\theta_0$ is specified. (Give the rejection rule.)
(3) Is the test in (2) an unbiased test of $H_0$ against $H_1$? Justify your conclusion.
(4) Is the test in (2) a uniformly most powerful test of $H_0$ against $H_1$? Justify your conclusion.

2. Let $X_1, \ldots, X_n$ denote a random sample from $N(0, \theta)$, where the variance $\theta$ is an unknown positive number. Show that there exists a uniformly most powerful test with significance level $\alpha$ for testing the simple hypothesis $H_0: \theta = \theta'$, where $\theta'$ is a fixed positive number, against the alternative composite hypothesis $H_1: \theta > \theta'$.

3. Show that $Y = |X|$ is a complete sufficient statistic for $\theta > 0$, where $X$ has the pdf $f(x; \theta) = 1/(2\theta)$ for $-\theta < x < \theta$, zero elsewhere. Show that $Y = |X|$ and $Z = \mathrm{sgn}(X)$ (i.e., $Z = 1$ if $X > 0$ and $Z = -1$ if $X < 0$) are independent.

4. Suppose $f(x; \theta)$ is twice continuously differentiable w.r.t. $\theta \in \Theta$, with score function $S(\theta; x) = \frac{\partial}{\partial\theta} \log f(x; \theta)$, Fisher information $I(\theta) = E\big([S(\theta; X)]^2\big)$, and Hessian function $H(\theta) = E\big(\frac{\partial^2}{\partial\theta^2} \log f(X; \theta)\big)$ (provided that the latter two expected values are finite). Derive
$$E(S(\theta; X)) = 0, \qquad I(\theta) = -H(\theta).$$
In deriving the above two equalities, what assumptions are needed (list as many as you think are needed)?

5. Let $X_1, \ldots, X_n$ be i.i.d. with $E(X_1) = \mu$ and $\mathrm{Var}(X_1) = \sigma^2 \in (0, \infty)$. Let $\bar{X} = (X_1 + \cdots + X_n)/n$ and $S_n^2 = \sum_{i=1}^n (X_i - \bar{X})^2/(n-1)$. Show that
$$T_n = \frac{\sqrt{n}\,(\bar{X} - \mu)}{S_n}$$
converges in distribution to the standard normal.
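As a numerical sanity check for the last part of Problem 1(1): assuming the UMP test takes the usual one-sided z-form (reject when $\sqrt{n}\,(\bar{x} - \theta_0)$ exceeds the upper-$\alpha$ standard normal quantile), the decision at $n = 100$, $\bar{x}_n = 10.160$, $\theta_0 = 10$, $\alpha = .05$ can be sketched in a few lines; the test form is an assumption here, not part of the exam statement.

```python
from scipy.stats import norm

# Sketch for Problem 1(1), assuming the UMP test rejects H0: theta = theta0
# in favor of H1: theta > theta0 when sqrt(n) * (xbar - theta0) > z_alpha.
n, xbar, theta0, alpha = 100, 10.160, 10.0, 0.05

z_stat = n ** 0.5 * (xbar - theta0)   # = 1.6
z_crit = norm.ppf(1 - alpha)          # upper 5% quantile, about 1.645

print(f"z = {z_stat:.3f}, critical value = {z_crit:.3f}")
print("reject H0" if z_stat > z_crit else "do not reject H0")
```

Under this form of the test the statistic 1.6 falls just below the 1.645 cutoff.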

6. Suppose $X_1, \ldots, X_n$ are independent and identically distributed with density $f(x; \theta) = \theta x^{\theta - 1}$, $0 < x < 1$, where $\theta > 0$ is unknown.
(1) Find the maximum likelihood estimator of $\theta$.
(2) Assuming the asymptotic theory for the MLE applies, give the asymptotic standard deviation for your estimator and use it to construct a 90% confidence interval for $\theta$.
(3) Describe the assumptions needed to justify the work required in (2). Without bogging down in this question, discuss how to check that those assumptions apply in this situation.
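For Problem 6, a brief simulation sketch of parts (1)-(2), assuming the MLE works out to $\hat{\theta} = -n/\sum_i \log X_i$ and the per-observation Fisher information to $1/\theta^2$ (so the asymptotic standard deviation is roughly $\hat{\theta}/\sqrt{n}$); the true value and sample below are hypothetical and used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 2.0, 200                       # hypothetical values for illustration
x = rng.uniform(size=n) ** (1.0 / theta_true)  # U^(1/theta) has density theta * x^(theta-1) on (0,1)

theta_hat = -n / np.log(x).sum()               # assumed MLE form
se_hat = theta_hat / np.sqrt(n)                # plug-in asymptotic standard deviation
z = 1.645                                      # upper 5% normal quantile for a 90% interval
print(theta_hat, (theta_hat - z * se_hat, theta_hat + z * se_hat))
```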

Mathematics Qualifying Examination
January 2015
STAT 52800 - Mathematical Statistics

NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal, t and chi-square) are allowed. Time: 3 hours.

1. Suppose that the random vector $X = (X_1, \ldots, X_n)'$ has the multivariate normal distribution $N_n(\mu, \sigma^2 I_n)$, where $\mu = (\mu_1, \ldots, \mu_n)'$ and $I_n$ is the $n \times n$ identity matrix. Let $\bar{X} = \frac{1}{n}\,\mathbf{1}_n' X$, with $\mathbf{1}_n = (1, 1, \ldots, 1)'$, be the usual average.
a) Find the exact distribution of the quadratic form $Q = n(\bar{X})^2$.
b) Evaluate $E(Q)$ when $n = 10$, $\sigma^2 = 4$ and $\mu_i = 2$ for all $i$.

2. Consider the SLR model $Y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, with $\epsilon_i \sim N(0, \sigma^2)$ i.i.d., $i = 1, 2, \ldots, n$. Let $\eta_0 \equiv E(Y \mid x = x_0) = \beta_0 + \beta_1 x_0$ be the mean response of $Y$ evaluated at some fixed value $x_0$ of $x$.
a) Derive the LSE $\hat{\eta}_0$ of $\eta_0$.
b) Calculate the mean and variance of this estimator $\hat{\eta}_0$.
c) Obtain a $(1 - \alpha)\,100\%$ confidence interval for $\eta_0$.

3. Let $X_1, X_2, \ldots, X_n$ be a random sample from a continuous distribution whose p.d.f. is
$$f(x \mid \lambda, \gamma) = \lambda e^{-\lambda(x - \gamma)}, \qquad x > \gamma,$$
with $\lambda > 0$ and $\gamma \in \mathbb{R}$.
a) Find the minimal sufficient statistic for $\theta \equiv (\lambda, \gamma)$.
b) Assuming that $\lambda = \lambda_0$ is known, find the MLE for $\gamma$ and obtain its pdf. Is it complete (prove or disprove)?
c) Assuming that $\gamma = \gamma_0$ is known, find the MLE for $\lambda$ and obtain its pdf. Is it complete (prove or disprove)?
d) Find the (joint) MLE $\hat{\theta}_n \equiv (\hat{\lambda}_n, \hat{\gamma}_n)$ for $\theta = (\lambda, \gamma)$.
e) Prove that $\hat{\lambda}_n$ and $\hat{\gamma}_n$ of part d) are independent r.v.'s.
f) Find the MLE, $\hat{\psi}_n$, of the reliability
$$\psi(\lambda, \gamma) = \Pr(X \geq t_0 \mid \lambda, \gamma) = \int_{t_0}^{\infty} f(x \mid \lambda, \gamma)\,dx$$
at some known $t_0 > \gamma$.
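A quick Monte Carlo check of Problem 1(b): if, as one can derive, $E(Q) = \sigma^2 + n\bar{\mu}^2$ (here $4 + 10 \cdot 2^2 = 44$), the simulated mean of $Q$ should be close to 44. The closed form is used here only as a check to be verified, not as the graded answer.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 10, 4.0
mu = np.full(n, 2.0)                        # mu_i = 2 for all i

reps = 200_000
X = rng.normal(loc=mu, scale=np.sqrt(sigma2), size=(reps, n))
Q = n * X.mean(axis=1) ** 2                 # Q = n * (Xbar)^2
print(Q.mean())                             # should be near sigma2 + n * mubar^2 = 44
```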

4. Refer to problem 3 above.
a) Construct an appropriate $\alpha$-level likelihood ratio (LR) test of $H_0: \lambda \leq \lambda_0$ against $H_1: \lambda > \lambda_0$ (when $\gamma$ is known, say $\gamma = 0$).
b) Find an expression for the power function of this test.
c) Obtain an explicit expression for the critical value of the LR test in terms of the appropriate quantile of a well-known distribution.

5. Refer to problem 3 above and assume now that $\lambda = 1$. Let $\xi(\gamma) = \gamma^2$. Use results you obtained in problem 3 to derive an explicit expression for the UMVUE $\hat{\xi}_n$ of $\xi$.

6. Suppose we have four identical coins and we would like to test $H_0: p \leq 1/2$ versus $H_1: p > 1/2$, where $p \in (0, 1)$ is the unknown probability of a Head for any one of the coins. We decide to perform the following experiment: each one of the coins will be tossed repeatedly until the first Head occurs. Let $X_i$ denote the number of Tails counted until the first Head occurs for the $i$th coin, $i = 1, 2, 3, 4$.
a) Construct, based on the data $X_1, \ldots, X_4$, a size $\alpha = 3/16$ UMP test of $H_0$ versus $H_1$.
b) What is the power of the above UMP test at $p = 1/4$ and $p = 3/4$?
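For Problem 6, the null distribution of $T = X_1 + \cdots + X_4$ is negative binomial (total tails before the fourth head), and a natural candidate test rejects for small $T$; the region $\{T \leq 1\}$ is taken below as an assumption, only to illustrate how the size $3/16$ and the powers in part (b) can be checked numerically.

```python
from scipy.stats import nbinom

# T = X1 + ... + X4: total tails before each of the four coins shows its first head.
# Under success probability p, T ~ NegBinomial(r=4, p) (failures before the 4th success).
# Candidate rejection region (assumption): reject H0 when T <= 1.
size = nbinom.cdf(1, 4, 0.5)       # 3/16 = 0.1875 at p = 1/2
power_hi = nbinom.cdf(1, 4, 0.75)  # about 0.633 at p = 3/4
power_lo = nbinom.cdf(1, 4, 0.25)  # about 0.016 at p = 1/4
print(size, power_hi, power_lo)
```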

Mathematics Qualifying Examination
August 2014
STAT 52800 - Mathematical Statistics

NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal, t and chi-square) are allowed. Time: 3 hours.

1. Let $Y_1, Y_2, \ldots, Y_n$ be a random sample from a Bernoulli distribution with parameter $\theta$, where $\theta$ is restricted to the interval $\Theta \equiv (0, 3/4]$. Find the MLE of $\nu = \theta(1 - \theta)$.

2. Let $X_1, X_2, \ldots, X_n$ be i.i.d. observations from the gamma distribution
$$f(x \mid \lambda) = \frac{x^{\alpha - 1} e^{-x/\lambda}}{\Gamma(\alpha)\lambda^{\alpha}}, \qquad 0 < x < \infty,$$
with known shape parameter $\alpha > 0$ and unknown scale parameter $\lambda$.
a) What is the sufficient statistic for $\lambda$? Is it minimal?
b) Find the maximum likelihood estimator (MLE) of the reliability at a known $t_0 > 0$, given by $\gamma(\lambda) = \int_{t_0}^{\infty} f(t \mid \lambda)\,dt$.
c) Find the Cramér-Rao lower bound for the variance of an unbiased estimator of $\gamma(\lambda)$ based on $X_1, \ldots, X_n$ or a suitable transformation thereof.
d) Is the MLE of $\gamma(\lambda)$ consistent and asymptotically efficient? Support your assertions.

3. Let $X_1, X_2, \ldots, X_n$ be a random sample from $N(0, \sigma^2)$.
a) Find the UMVUE for $\sigma^2$.
b) Show that your answer to part a) above is statistically independent of $X_{(1)}/X_{(2)}$, the ratio of the first two order statistics of the given sample.

4. Suppose that the independent random variables $Y_1, Y_2, \ldots, Y_n$ satisfy $Y_i = \beta x_i + \epsilon_i$, for $i = 1, 2, \ldots, n$, where $x_1, \ldots, x_n$ are some known constants, $\beta$ is an unknown regression parameter, and the $\epsilon_i \sim N(0, \sigma^2)$ are i.i.d. with $\sigma^2$ a known constant.
a) Construct the likelihood ratio test (LRT) of $H_0: \beta = 0$ versus $H_1: \beta \neq 0$.
b) Suppose that $n = 100$, $\sum_{i=1}^{100} x_i^2 = 10$ and $\sigma^2 = 5$. Find the exact rejection region of the size $\alpha = 0.05$ LRT you constructed in part (a).

5. Suppose we have four identical coins and we would like to test $H_0: p = 0.5$ versus $H_1: p > 0.5$, where $p$ is the unknown probability that any one of these coins comes up heads when it is flipped. We decided to perform the following experiment: each coin will be flipped repeatedly until the first head occurs. Let $X_i$ denote the number of tails that occur before the first head occurs when the $i$th coin is flipped, $i = 1, 2, 3, 4$. Use these data, $X_1, X_2, X_3, X_4$, to construct a uniformly most powerful (UMP) test of size $\alpha = 3/16$ of $H_0$ versus $H_1$.
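For Problem 2(b), assuming the MLE of the scale works out to $\hat{\lambda} = \bar{X}/\alpha$ for this parameterization, invariance gives the plug-in reliability estimate as a gamma upper-tail probability; the shape, threshold, and sample below are hypothetical and only illustrate the computation.

```python
import numpy as np
from scipy.stats import gamma

alpha, t0 = 2.0, 3.0                               # hypothetical known shape and threshold
x = np.array([1.2, 0.8, 2.5, 3.1, 1.7, 0.9])       # hypothetical sample

lam_hat = x.mean() / alpha                         # assumed MLE of the scale parameter
rel_hat = gamma.sf(t0, alpha, scale=lam_hat)       # plug-in estimate of P(X > t0) by invariance
print(lam_hat, rel_hat)
```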

6. Consider the problem of testing, based on a sample of size 1, the hypothesis $H_0: X \sim N(0,1)$ against the alternative hypothesis $H_1: X \sim \epsilon N(0,1) + (1 - \epsilon)\,C(0,1)$ for some unknown $\epsilon > 0$, where $C(0,1)$ stands for the standard Cauchy distribution with density $\frac{1}{\pi(1 + x^2)}$, $-\infty < x < \infty$.
a) Are $H_0$ and $H_1$ both simple hypotheses?
b) Show that a UMP test of level .01 exists for this problem and that in fact it coincides with the usual test for testing that the mean $\theta$ of a normal distribution with variance 1 equals zero (which is to reject $H_0$ if $|x| \geq 2.58$).

7. Let $X$ and $Y$ be two Bernoulli random variables such that $P(X = 1) = P(Y = 1) = p$, $p \in (0,1)$, and $P(X = Y = 1) = \theta$.
a) Prove that $\theta \leq p$.
b) Calculate the (simple) correlation between $X$ and $Y$, namely $\rho_{XY} \equiv \mathrm{Cor}(X, Y)$.
c) For $X$ as above, let $\mathcal{M}_p \equiv \{g : E_p(g(X)) = 0 \text{ and } \mathrm{Var}_p(g(X)) = 1\}$. Find all members of the class of functions $\mathcal{M}_p$. (Hint: there are two such members in $\mathcal{M}_p$.)
d) For $X$ and $Y$ as above, define the maximal correlation over $\mathcal{M}_p$ as
$$\rho_M \equiv \sup_{g, u \in \mathcal{M}_p} \mathrm{Cor}(g(X), u(Y)).$$
Calculate $\rho_M$ and compare it to $\rho_{XY}$.
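Two small numerical facts relevant to Problem 6(b), computed with scipy: the level of the rule "reject when $|x| \geq 2.58$" under the $N(0,1)$ null is about .01, and the standard Cauchy places far more mass beyond 2.58 than the standard normal does. These are checks of the stated cutoff, not the UMP argument itself.

```python
from scipy.stats import norm, cauchy

cut = 2.58
print(2 * norm.sf(cut))     # level under N(0,1): about 0.0099
print(2 * cauchy.sf(cut))   # two-sided tail mass under Cauchy(0,1): about 0.235
```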

Mathematics Ph.D. Qualifying Examination
January 2013
STAT 52800 Mathematical Statistics

NOTE: Answer all four questions completely. Justify every step. A calculator and some statistical tables (normal, t and chi-square) are allowed. Time allowed: 3 hours.

1. Suppose that $X_1, \ldots, X_n$ is a random sample from the probability density function
$$f(x \mid \theta) = \frac{2x}{\theta} e^{-x^2/\theta}, \qquad 0 < x < \infty,$$
where $\theta > 0$ is an unknown parameter.
(a) Find the method of moments estimator (MME) of $\theta$ and the maximum likelihood estimator (MLE) of $\theta$.
(b) Are the MME and MLE above (i) consistent? (ii) unbiased? Give reasons.
(c) Which estimator, MME or MLE, do you favor using and why?
(d) Based on the asymptotic distribution of the MLE for $\theta$, construct a 95% confidence interval for $\theta$.

2. Let $Y_1 < Y_2 < \cdots < Y_n$ be the order statistics of a random sample from a lognormal$(\theta, 1)$ distribution, where $\theta > 0$ is an unknown parameter.
(a) Find the minimum variance unbiased estimator (MVUE) of $\theta$ or of $\psi = e^{\theta}$. The choice is yours.
(b) Find the maximum likelihood estimator (MLE) of $\psi = e^{\theta}$ and of $\theta$.
(c) If $\hat{\psi}_n$ is either the MVUE or the MLE of $\psi$, show that $n^r(\hat{\psi}_n - \psi) \to 0$ in probability as $n \to \infty$, for any $r \in [0, 0.5)$.
(d) Derive an unbiased estimator $\eta_n$ of $\eta = \Phi(-\theta)$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal variable. Starting from this $\eta_n$, how will you derive the MVUE of $\eta$?

3. Let $X_1, \ldots, X_n$ be IID beta$(1, \nu^{-1})$, where $0 < \nu < \infty$ is an unknown parameter. Note that instead of working with $X$, you may prefer to work with $Y = -\ln(1 - X)$.
(a) What is the maximum likelihood estimator (MLE) of $\nu$? Is the MLE of $\nu$ also a complete sufficient statistic for $\nu$?
(b) What is the best 5% level critical region for testing $H_0: \nu \leq 5$ versus $H_1: \nu > 5$? In what sense is it the best?
(c) How big a sample is needed so that the above test will attain a 10% probability of type II error at $\nu = 6$?
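A simulation sketch for Problem 1(d), assuming the MLE works out to $\hat{\theta} = \sum_i X_i^2/n$ and the per-observation Fisher information to $1/\theta^2$; under this density $X^2$ is exponential with mean $\theta$, which is how the hypothetical sample below is generated.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true, n = 3.0, 150                                # hypothetical values for illustration
x = np.sqrt(rng.exponential(scale=theta_true, size=n))  # X^2 ~ Exp(mean theta)

theta_hat = np.mean(x ** 2)                  # assumed MLE form
se_hat = theta_hat / np.sqrt(n)              # plug-in asymptotic standard deviation
print(theta_hat, (theta_hat - 1.96 * se_hat, theta_hat + 1.96 * se_hat))
```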

(d) Develop a sequential probability ratio test for testing $H_0: \nu = 5$ versus $H_2: \nu = 6$ that will attain a 5% probability of type I error and a 10% probability of type II error.

4. (a) The table below gives the number of days spent in the ICU by all patients admitted during the week of December 23-29, 2012. This information is gathered after all such patients have been discharged from the ICU.

Table 1: Duration of Stay in ICU
#Days      1   2   3   4   5   total
#Patients  20  30  12   9   9     80

Construct a 95% confidence interval for $\mu$, the average number of days spent in the ICU by each patient.

(b) Now suppose that we consider a different survey design. The surveyor visits the ICU at noon of each Sunday in December 2012 and makes a list of all patients in the ICU. Later on she collects the number of days these patients spent in the ICU.

Table 2: Duration of Stay in ICU Using New Survey
#Days      1   2   3   4   5   total
#Patients  10  30  18  19  23    100

Based on the data in Table 2, construct a 95% confidence interval for $\mu$.
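For Problem 4(a), one standard construction is the large-sample interval $\bar{x} \pm 1.96\, s/\sqrt{n}$ computed from the grouped counts in Table 1; the sketch below shows only that computation and does not address the different sampling scheme of part (b), which plausibly over-represents longer stays and calls for a separate treatment.

```python
import numpy as np

days = np.array([1, 2, 3, 4, 5])
counts = np.array([20, 30, 12, 9, 9])        # Table 1, n = 80 discharged patients

x = np.repeat(days, counts)                  # expand the grouped data
n, xbar, s = x.size, x.mean(), x.std(ddof=1)
half = 1.96 * s / np.sqrt(n)
print(xbar, (xbar - half, xbar + half))
```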

Mathematics Ph.D. Qualifying Examination
August 2012
STAT 52800 Mathematical Statistics

NOTE: Answer all four questions completely. Justify every step. A calculator and some statistical tables (normal, t and chi-square) are allowed. Time allowed: 3 hours.

1. Suppose that $X_1, \ldots, X_n$ is a random sample from the probability density function
$$f(x \mid \theta) = \frac{r x^{r-1}}{\theta} e^{-x^r/\theta}, \qquad 0 < x < \infty,$$
where $r > 1$ is a known constant and $\theta > 0$ is an unknown parameter.
(a) Find an estimator of $\theta$ by the method of moments.
(b) Find the MLE of $\theta$.
(c) Are the estimators in (a) and (b) consistent? (Show why or why not.)
(d) Which estimator, (a) or (b), would you favor using and why?
(e) Based on the MLE for $\theta$, find an unbiased estimator of $\theta$.
(f) Based on the asymptotic distribution of the MLE for $\theta$, construct a 95% confidence interval for $\theta$.

2. Let $X_1, \ldots, X_n$ be a random sample from a Poisson$(\lambda)$ distribution, where $\lambda > 0$ is the mean parameter.
(a) Find the uniformly minimum variance unbiased estimator (UMVUE) $\tilde{\psi}_n$ and the maximum likelihood estimator (MLE) $\hat{\psi}_n$ of $\psi = e^{-\lambda}$.
(b) Compare the UMVUE and the MLE of $\psi$. Are the two similar or quite different? Explain.
(c) Show that, for any $r \in [0, 0.5)$, the UMVUE satisfies $n^r(\tilde{\psi}_n - \psi) \to 0$ in probability as $n \to \infty$.
(d) Derive the UMVUE $\eta_n$ of $\eta = P\{X_i \geq 1\}$. Is it also a consistent estimator of $\eta$?

3. Let $X_1, \ldots, X_n$ be IID beta$(\nu^{-1}, 1)$, where $0 < \nu < \infty$ is an unknown parameter.
(a) Derive the distribution of $Y_i = -\ln X_i$, for $1 \leq i \leq n$.
(b) What is the maximum likelihood estimator (MLE) of $\nu$? Is the MLE of $\nu$ also a complete sufficient statistic for $\nu$?
(c) What is the best 5% level critical region for testing $H_0: \nu \leq 10$ versus $H_1: \nu > 10$? In what sense is it the best?
(d) How big a sample is needed so that the above test will attain a 10% probability of type II error at $\nu = 5.6$?
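For Problem 2(a)-(b), a quick numerical comparison, assuming the target is $\psi = e^{-\lambda} = P(X_1 = 0)$, the MLE is $e^{-\bar{X}}$ by invariance, and the UMVUE takes the classical form $((n-1)/n)^{T}$ with $T = \sum_i X_i$; the sample is simulated at a hypothetical $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(3)
lam_true, n = 2.0, 50                        # hypothetical values for illustration
x = rng.poisson(lam_true, size=n)
T = x.sum()

mle = np.exp(-x.mean())                      # MLE of exp(-lambda), by invariance
umvue = ((n - 1) / n) ** T                   # assumed UMVUE form, unbiased for exp(-lambda)
print(mle, umvue, np.exp(-lam_true))         # the two estimates are typically very close
```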

(e) Develop a sequential probability ratio test for testing $H_0: \nu = 5$ versus $H_2: \nu = 5.6$ that will attain a 5% probability of type I error and a 10% probability of type II error.

4. A photocopy machine is fitted with a counter that records the number of copies made between successive breakdowns of the machine. After the repair person fixes the machine she resets the counter to zero. Below are the data on the number of copies made between successive breakdowns:

7638, 1037, 5982, 20292, 20132, 13110, 4438, 4075, 12517, 14869, 2571, 32891

The summary statistics are: $n = 12$, $\bar{x} = 11629.3$, $s_x = 9360.1$. We are interested in finding a 95% confidence interval for $\mu$, the average number of copies made before the machine breaks down. First, decide whether the data follow an exponential distribution. If they do, find a parametric confidence interval; if they do not, find a non-parametric confidence interval for $\mu$.
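A sketch of one possible route through Problem 4: compare the sample mean and standard deviation (they are equal for an exponential population) as an informal check of the exponential assumption, and, if that model is adopted, use the fact that $2\sum_i X_i/\mu \sim \chi^2_{2n}$ to obtain an exact 95% interval for $\mu$. The choice of diagnostic and the decision between the parametric and non-parametric interval are left to the candidate; this is only one option.

```python
import numpy as np
from scipy.stats import chi2

x = np.array([7638, 1037, 5982, 20292, 20132, 13110,
              4438, 4075, 12517, 14869, 2571, 32891])
n, xbar, s = x.size, x.mean(), x.std(ddof=1)
print(n, xbar, s)                            # 12, ~11629.3, ~9360.1

# Exact interval for the mean mu under an exponential model:
# 2 * sum(x) / mu ~ chi-square with 2n degrees of freedom.
lo = 2 * x.sum() / chi2.ppf(0.975, 2 * n)
hi = 2 * x.sum() / chi2.ppf(0.025, 2 * n)
print(lo, hi)
```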