Solutions: Homework 3


1. Suppose that the random variables $Y_1,\dots,Y_n$ satisfy

$$Y_i = \beta x_i + \varepsilon_i, \qquad i = 1,\dots,n,$$

where $x_1,\dots,x_n \in \mathbb{R}$ are fixed values and $\varepsilon_1,\dots,\varepsilon_n \overset{\text{IID}}{\sim} \text{Normal}(0, \sigma^2)$ with $\sigma^2 \in \mathbb{R}^+$ known. Find $\hat\beta = \text{MLE}(\beta)$.

Solution: Observe that $Y_i \overset{\text{IND}}{\sim} \text{Normal}(\beta x_i, \sigma^2)$, $i = 1,\dots,n$. With this information, the log-likelihood function relative to an observed sample $y = (y_1,\dots,y_n) \in \mathbb{R}^n$ is

$$\ell(\beta; y) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \beta x_i)^2.$$

The function $\ell(\,\cdot\,; y): \mathbb{R} \to \mathbb{R}$ is continuously differentiable in $\beta$. In particular,

$$\ell'(\beta; y) = \frac{1}{\sigma^2}\sum_{i=1}^n x_i (y_i - \beta x_i) = \frac{1}{\sigma^2}\Big[\sum_{i=1}^n x_i y_i - \beta S_{xx}\Big], \qquad S_{xx} := \sum_{i=1}^n x_i^2.$$

Notice that

$$\ell'(\beta; y) > 0 \iff \beta < \frac{\sum_{i=1}^n x_i y_i}{S_{xx}}, \qquad \ell'(\beta; y) < 0 \iff \beta > \frac{\sum_{i=1}^n x_i y_i}{S_{xx}}.$$

Also $\lim_{\beta\to\pm\infty} \ell(\beta; y) = -\infty$. This means that $\ell(\,\cdot\,; y): \mathbb{R} \to \mathbb{R}$ has a global maximum (which depends on $y$) at

$$\beta_{\text{MAX}}(y) = \frac{\sum_{i=1}^n x_i y_i}{S_{xx}}.$$

Finally, the MLE of $\beta$ is the function $\beta_{\text{MAX}}: \mathbb{R}^n \to \mathbb{R}$ evaluated at the sample $Y = (Y_1,\dots,Y_n)$, i.e.

$$\hat\beta = \beta_{\text{MAX}}(Y) = \frac{\sum_{i=1}^n x_i Y_i}{S_{xx}}.$$

Find the distribution of $\hat\beta$.

Solution: Observe that $\hat\beta$ can be expressed as

$$\hat\beta = \sum_{i=1}^n c_i Y_i, \qquad c_i = \frac{x_i}{S_{xx}}.$$

The last relation says that $\hat\beta$ is a linear combination of independent random variables with Normal distributions. Then $\hat\beta \sim \text{Normal}(\mu, \tau^2)$, where

$$\mu = E[\hat\beta] = \sum_{i=1}^n c_i E[Y_i] = \sum_{i=1}^n \frac{x_i}{S_{xx}}(\beta x_i) = \beta, \qquad \tau^2 = \text{Var}[\hat\beta] = \sum_{i=1}^n c_i^2 \,\text{Var}[Y_i] = \frac{\sigma^2}{S_{xx}^2}\sum_{i=1}^n x_i^2 = \frac{\sigma^2}{S_{xx}}.$$
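As a quick numerical sanity check (not part of the original solution), the following Python sketch simulates this model; the values chosen for $\beta$, $\sigma$, and the design points are arbitrary illustrative assumptions.

```python
# Monte Carlo sketch: beta_hat = sum(x_i * Y_i) / S_xx should be centered at beta
# with variance sigma^2 / S_xx, matching the Normal(beta, sigma^2/S_xx) law above.
import numpy as np

rng = np.random.default_rng(0)
beta, sigma = 2.0, 1.5                  # arbitrary illustrative values
x = np.linspace(0.5, 3.0, 20)           # fixed design points x_1, ..., x_n
S_xx = np.sum(x**2)

reps = 100_000
Y = beta * x + rng.normal(0.0, sigma, size=(reps, len(x)))  # Y_i = beta*x_i + eps_i
beta_hat = Y @ x / S_xx                 # the MLE, computed for each simulated sample

print(np.mean(beta_hat))                  # ~ beta (unbiased)
print(np.var(beta_hat), sigma**2 / S_xx)  # the two should agree closely
```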

Find the CRLB for estimating $\beta$ with $\hat\beta$.

Solution: Because $Y_1,\dots,Y_n$ are random variables whose distribution belongs to the Exponential Family,

$$\text{CRLB}(\beta) = \frac{1}{-E[\ell''(\beta; Y)]}, \qquad \text{where } \ell''(\beta; y) = -\frac{S_{xx}}{\sigma^2}$$

for any observed sample $y \in \mathbb{R}^n$. This shows that $\ell''(\beta; Y)$ is constant as a function of $Y$. Then

$$-E[\ell''(\beta; Y)] = \frac{S_{xx}}{\sigma^2} \;\Longrightarrow\; \text{CRLB}(\beta) = \frac{\sigma^2}{S_{xx}}.$$

Show that $\hat\beta = \text{UMVUE}(\beta)$.

Solution: We proved earlier that $\hat\beta \sim \text{Normal}(\beta, \sigma^2/S_{xx})$. From this it follows that $\hat\beta$ is an unbiased estimator of $\beta$. Also, $\hat\beta$ has a distribution that belongs to the Exponential Family, and

$$\text{Var}[\hat\beta] = \frac{\sigma^2}{S_{xx}} = \text{CRLB}(\beta) \quad \forall\, \beta \in \mathbb{R}.$$

This proves that $\hat\beta = \text{UMVUE}(\beta)$.

2. Show that $T = \frac{1}{n}\sum_{i=1}^n X_i^2$ is the UMVUE for $\sigma^2 \in \mathbb{R}^+$ if $X_1,\dots,X_n \overset{\text{IID}}{\sim} \text{Normal}(0, \sigma^2)$.

Solution: Given that the distributions of $X_1,\dots,X_n$ belong to the Exponential Family,

$$\text{CRLB}(\sigma^2) = \frac{1}{-E[\ell''(\sigma^2; X)]}, \qquad X = (X_1,\dots,X_n).$$

The associated log-likelihood function relative to an observed sample $x = (x_1,\dots,x_n) \in \mathbb{R}^n$ is

$$\ell(\sigma^2; x) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n x_i^2,$$

so that

$$\ell''(\sigma^2; x) = \frac{n}{2(\sigma^2)^2} - \frac{1}{(\sigma^2)^3}\sum_{i=1}^n x_i^2.$$

With this information,

$$E[\ell''(\sigma^2; X)] = \frac{n}{2(\sigma^2)^2} - \frac{n\sigma^2}{(\sigma^2)^3} = -\frac{n}{2(\sigma^2)^2} \;\Longrightarrow\; \text{CRLB}(\sigma^2) = \frac{2(\sigma^2)^2}{n}.$$

On the other hand, $T$ can be written as

$$T = \frac{\sigma^2}{n}\,U, \qquad U = \sum_{i=1}^n W_i, \quad W_i = \frac{X_i^2}{\sigma^2}.$$

Because $W_1,\dots,W_n \overset{\text{IID}}{\sim} \chi^2(1)$, we have $U \sim \chi^2(n)$. Then $T$ has a distribution that belongs to the Exponential Family (Change of Variables), and

$$E[T] = \frac{\sigma^2}{n}\,E[U] = \sigma^2,$$

so $T$ is an unbiased estimator of $\sigma^2$.
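As an illustrative check (not part of the original solution), the sketch below simulates $T$; its sample mean should sit at $\sigma^2$ and, anticipating the variance computation that follows, its sample variance should sit at $2(\sigma^2)^2/n$. The values of $\sigma^2$ and $n$ are arbitrary.

```python
# Monte Carlo sketch: T = mean(X_i^2) has mean sigma^2 and variance 2*sigma^4/n.
import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 2.0, 50, 200_000      # arbitrary illustrative values

X = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
T = np.mean(X**2, axis=1)

print(np.mean(T))                       # ~ sigma^2 (unbiased)
print(np.var(T), 2 * sigma2**2 / n)     # ~ CRLB(sigma^2); the two should agree
```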

Also,

$$\text{Var}[T] = \frac{(\sigma^2)^2}{n^2}\,\text{Var}[U] = \frac{(\sigma^2)^2}{n^2}\,(2n) = \frac{2(\sigma^2)^2}{n} = \text{CRLB}(\sigma^2) \quad \forall\, \sigma^2 \in \mathbb{R}^+.$$

From these calculations it follows that $T = \text{UMVUE}(\sigma^2)$.

3. Let $X_1,\dots,X_n \overset{\text{IID}}{\sim} \text{Bernoulli}(\theta)$ with $\theta \in (0,1)$. Find the Bayes Estimator $\hat\theta_B$ of $\theta$ with respect to the Uniform$(0,1)$ prior under the Loss Function

$$L(t, \theta) = \frac{(t - \theta)^2}{\theta(1-\theta)}, \qquad t \in \mathbb{R}, \quad \theta \in (0,1).$$

Solution: The first step is to deduce the posterior distribution of $\theta$ given an observed sample $x = (x_1,\dots,x_n) \in \{0,1\}^n$. With the provided information,

$$\pi(\theta \mid x) \propto \Big[\prod_{i=1}^n \theta^{x_i}(1-\theta)^{1-x_i}\Big]\,I(\theta \in (0,1)) = \theta^s(1-\theta)^{n-s}\,I(\theta \in (0,1)), \qquad s = \sum_{i=1}^n x_i.$$

Notice that this expression is the functional part of a Beta$(\alpha, \beta)$ density with parameters $\alpha = s + 1$ and $\beta = n - s + 1$; then $\theta \mid x \sim \text{Beta}(\alpha, \beta)$. The next step is to calculate the posterior expected loss for our observed sample. In this case,

$$E[L(t, \theta) \mid x] = \frac{1}{B(\alpha, \beta)}\int_{(0,1)} \frac{(t-\theta)^2}{\theta(1-\theta)}\,\theta^s(1-\theta)^{n-s}\,d\theta, \qquad t \in \mathbb{R}.$$

Observe that

$$f(\theta) = \theta^{s-1}(1-\theta)^{n-s-1}\,I(\theta \in (0,1))$$

is proportional to a Beta$(\alpha^\star, \beta^\star)$ density with parameters $\alpha^\star = s$ and $\beta^\star = n - s$. Because of this observation,

$$E[L(t, \theta) \mid x] = \frac{B(\alpha^\star, \beta^\star)}{B(\alpha, \beta)}\,E[(t - \theta^\star)^2 \mid x], \qquad \theta^\star \mid x \sim \text{Beta}(\alpha^\star, \beta^\star).$$

As a consequence of the above relation, the value $t_{\text{MIN}}(x) \in \mathbb{R}$ (which depends on $x$) that minimizes $E[L(t, \theta) \mid x]$ is the same one that minimizes $E[(t - \theta^\star)^2 \mid x]$. This last problem has an explicit solution (Squared Error Loss):

$$t_{\text{MIN}}(x) = E[\theta^\star \mid x] = \frac{\alpha^\star}{\alpha^\star + \beta^\star} = \frac{s}{n} = \bar{x}.$$

Finally, the Bayes Estimator is the function $t_{\text{MIN}}: \{0,1\}^n \to \mathbb{R}$ evaluated at the sample $X = (X_1,\dots,X_n)$, i.e.

$$\hat\theta_B = t_{\text{MIN}}(X) = \bar{X}.$$
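To make the weighted-loss argument concrete, here is a small numerical sketch (an illustration under a made-up data set, not part of the original solution): it grids over candidate values $t$ and confirms that the posterior expected loss is minimized near $\bar{x} = s/n$, rather than at the unweighted posterior mean $(s+1)/(n+2)$.

```python
# Numerical sketch: minimize E[(t-theta)^2/(theta(1-theta)) | x] over a grid of t.
import numpy as np

rng = np.random.default_rng(2)
x = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])   # an illustrative Bernoulli sample
n, s = len(x), x.sum()                         # n = 10, s = 3

post = rng.beta(s + 1, n - s + 1, size=400_000)  # theta | x ~ Beta(s+1, n-s+1)
t_grid = np.linspace(0.01, 0.99, 197)
# posterior expected loss at each candidate t (finite when 0 < s < n)
loss = [np.mean((t - post)**2 / (post * (1 - post))) for t in t_grid]

print(t_grid[np.argmin(loss)])   # ~ s/n = 0.3, not the plain posterior mean 4/12
print(s / n)
```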

Solutions: Homework 4

1. Let $X_1,\dots,X_n$ be a random sample from a Normal$(\theta, \sigma^2)$ population ($\sigma^2$ known). Consider estimating $\theta$ using Squared Error Loss and a Normal$(\mu, \tau^2)$ prior distribution for $\theta$. Let $\delta^\pi$ be the Bayes Estimator of $\theta$. Show that the posterior distribution of $\theta$ is Normal$(m, v^2)$ with parameters

$$m = \frac{n\tau^2}{n\tau^2 + \sigma^2}\,\bar{x} + \frac{\sigma^2}{n\tau^2 + \sigma^2}\,\mu, \qquad v^2 = \frac{\sigma^2\tau^2}{n\tau^2 + \sigma^2}.$$

Solution: Our task reduces to identifying the functional part of the posterior density of $\theta$ given an observed sample $x = (x_1,\dots,x_n) \in \mathbb{R}^n$. Ignoring all terms that do not depend on $\theta$,

$$\pi(\theta \mid x) \propto \exp\Big[-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \theta)^2\Big]\,\exp\Big[-\frac{(\theta - \mu)^2}{2\tau^2}\Big]\,I(\theta \in \mathbb{R}) \propto \exp\Big[-\frac{n(\theta - \bar{x})^2}{2\sigma^2}\Big]\,\exp\Big[-\frac{(\theta - \mu)^2}{2\tau^2}\Big].$$

Completing the square in $\theta$,

$$\pi(\theta \mid x) \propto \exp\Big[-\frac{1}{2}\Big(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\Big)\theta^2 + \Big(\frac{n\bar{x}}{\sigma^2} + \frac{\mu}{\tau^2}\Big)\theta\Big] \propto \exp\Big[-\frac{(\theta - m)^2}{2v^2}\Big],$$

with

$$m = \frac{n\bar{x}/\sigma^2 + \mu/\tau^2}{n/\sigma^2 + 1/\tau^2} = \frac{n\tau^2}{n\tau^2 + \sigma^2}\,\bar{x} + \frac{\sigma^2}{n\tau^2 + \sigma^2}\,\mu, \qquad v^2 = \frac{1}{n/\sigma^2 + 1/\tau^2} = \frac{\sigma^2\tau^2}{n\tau^2 + \sigma^2},$$

which means that $\theta \mid x \sim \text{Normal}(m, v^2)$.

Find the Bayes Estimator $\hat\theta_B$ of $\theta$ under Squared Error Loss.

Solution: In general, when the posterior distribution admits at least second-order moments, the Bayes Estimator under Squared Error Loss is the Posterior Mean. In our case, the Normal posterior has finite moments of all orders. Then

$$\delta^\pi(X) = E[\theta \mid X] = a\bar{X} + b, \qquad a = \frac{n\tau^2}{n\tau^2 + \sigma^2}, \quad b = \frac{\sigma^2}{n\tau^2 + \sigma^2}\,\mu.$$
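The posterior parameters can also be checked numerically (an illustrative sketch with arbitrary constants, not part of the original solution): evaluate prior times likelihood on a grid and compare the grid mean and variance to $m$ and $v^2$.

```python
# Grid sketch: brute-force posterior for Normal data with a Normal prior.
import numpy as np

rng = np.random.default_rng(3)
n, theta_true, sigma2, mu, tau2 = 25, 1.0, 2.0, 0.0, 4.0  # arbitrary values
x = rng.normal(theta_true, np.sqrt(sigma2), size=n)
xbar = x.mean()

m = (n * tau2 * xbar + sigma2 * mu) / (n * tau2 + sigma2)
v2 = sigma2 * tau2 / (n * tau2 + sigma2)

grid = np.linspace(m - 6 * np.sqrt(v2), m + 6 * np.sqrt(v2), 20_001)
log_post = -n * (grid - xbar)**2 / (2 * sigma2) - (grid - mu)**2 / (2 * tau2)
w = np.exp(log_post - log_post.max())
w /= w.sum()                            # normalized grid posterior

print((w * grid).sum(), m)              # grid posterior mean     ~ m
print((w * (grid - m)**2).sum(), v2)    # grid posterior variance ~ v^2
```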

2. Complete parts (a)-(c) of Problem 7.6.

Solution: For any constants $a \in \mathbb{R}$ and $b \in \mathbb{R}$, the estimator $\delta(X) = a\bar{X} + b$ has a Normal$(a\theta + b,\, a^2\sigma^2/n)$ distribution given $\theta$ (invariance under linear transformations). The associated Risk Function (Squared Error Loss) is

$$R(\theta, \delta) = \big(E[\delta(X)] - \theta\big)^2 + \text{Var}[\delta(X)] = \big((a - 1)\theta + b\big)^2 + \frac{a^2\sigma^2}{n} = \big(b - (1 - a)\theta\big)^2 + \frac{a^2\sigma^2}{n}.$$

In particular, for $\delta^\pi(X) = a_1\bar{X} + b_1$ the coefficients $a_1 \in \mathbb{R}^+$ and $b_1 \in \mathbb{R}$ satisfy the relation $b_1 = (1 - a_1)\mu$. Then

$$R(\theta, \delta^\pi) = \big(b_1 - (1 - a_1)\theta\big)^2 + \frac{a_1^2\sigma^2}{n} = (1 - a_1)^2(\mu - \theta)^2 + \frac{a_1^2\sigma^2}{n} = c^2(\mu - \theta)^2 + (1 - c)^2\frac{\sigma^2}{n},$$

where $c = 1 - a_1 = \sigma^2/(n\tau^2 + \sigma^2)$. Finally, the associated Bayes Risk of the Bayes Estimator is

$$B(\pi, \delta^\pi) = E[R(\theta, \delta^\pi)] = c^2\,E[(\mu - \theta)^2] + (1 - c)^2\frac{\sigma^2}{n} = c^2\tau^2 + (1 - c)^2\frac{\sigma^2}{n}$$

$$= \frac{\sigma^4\tau^2}{(n\tau^2 + \sigma^2)^2} + \frac{n\tau^4\sigma^2}{(n\tau^2 + \sigma^2)^2} = \frac{\sigma^2\tau^2(\sigma^2 + n\tau^2)}{(n\tau^2 + \sigma^2)^2} = \frac{\sigma^2\tau^2}{n\tau^2 + \sigma^2} = v^2.$$

Solutions: Homework 5

Consider a random sample of size $n$ from a distribution with discrete pdf $f(x \mid p) = p(1 - p)^x$ for $x = 0, 1, 2, \dots$ and $0$ otherwise. Before proceeding, notice that $E(X_1) = (1 - p)/p$ and $\text{Var}(X_1) = (1 - p)/p^2$.

1. The MLE of $p$ is the value that solves

$$\frac{d}{dp}\log f(\mathbf{x} \mid p) = 0 \;\Rightarrow\; \frac{d}{dp}\log\big[p^n(1 - p)^{\sum_i x_i}\big] = 0 \;\Rightarrow\; \frac{n}{p} - \frac{\sum_i x_i}{1 - p} = 0 \;\Rightarrow\; n(1 - p) = p\sum_i x_i \;\Rightarrow\; \hat{p} = \frac{n}{\sum_i x_i + n}.$$

Now since

$$\frac{d^2}{dp^2}\log f(\mathbf{x} \mid p) = -\frac{n}{p^2} - \frac{\sum_i x_i}{(1 - p)^2} < 0,$$

$\hat{p}$ is indeed a maximum. Further, the likelihood at $p = 1$ and at $p = 0$ is zero. Thus, the MLE of $p$ is $\hat{p} = n/(\sum_i x_i + n)$.

2. By the invariance property of MLEs, the MLE of $\theta = (1 - p)/p$ is

$$\hat\theta = \frac{1 - \hat{p}}{\hat{p}} = \frac{1}{\hat{p}} - 1 = \bar{x}.$$
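The closed-form $\hat{p}$ from part 1 can be verified against a brute-force maximization of the log-likelihood (an illustrative sketch with arbitrary values, not part of the original solution; NumPy's geometric sampler counts trials, so we subtract 1 to get failures).

```python
# Numerical sketch: p_hat = n/(sum(x)+n) should maximize n*log(p) + sum(x)*log(1-p).
import numpy as np

rng = np.random.default_rng(4)
n, p_true = 200, 0.4                    # arbitrary illustrative values
x = rng.geometric(p_true, size=n) - 1   # failures before the first success

p_grid = np.linspace(1e-3, 1 - 1e-3, 10_000)
loglik = n * np.log(p_grid) + x.sum() * np.log(1 - p_grid)

print(p_grid[np.argmax(loglik)])        # numerical maximizer
print(n / (x.sum() + n))                # closed-form MLE p_hat
```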

3. The CRLB associated with unbiased estimators of $\theta$ is

$$\frac{[d\theta/dp]^2}{n\,I_1(p)}.$$

Now $d\theta/dp = \frac{d}{dp}(1 - p)/p = -1/p^2$ and

$$n\,I_1(p) = E\Big[-\frac{d^2}{dp^2}\log f(\mathbf{X} \mid p)\Big] = E\Big[\frac{n}{p^2} + \frac{\sum_i X_i}{(1 - p)^2}\Big] = \frac{n}{p^2} + \frac{n(1 - p)/p}{(1 - p)^2} = \frac{n}{p^2} + \frac{n}{p(1 - p)} = \frac{n}{p^2(1 - p)}.$$

Thus the CRLB associated with unbiased estimators of $\theta$ is

$$\frac{[-1/p^2]^2}{n/(p^2(1 - p))} = \frac{1 - p}{np^2}.$$

4. Since the variance of $\bar{X}$ attains the CRLB (i.e., $\text{Var}(\bar{X}) = (1/n)\text{Var}(X_1) = (1 - p)/(np^2)$), $\hat\theta$ is UMVUE for $\theta$.

5. First notice that $\hat\theta$ is unbiased for $\theta$; that is, $E(\bar{X}) = (1 - p)/p$. Then since

$$\lim_{n\to\infty} E[(\hat\theta - \theta)^2] = \lim_{n\to\infty} \text{Var}(\bar{X}) = \lim_{n\to\infty} \frac{1 - p}{np^2} = 0,$$

$\hat\theta$ is MSE consistent (MSEC).

6. From the asymptotic results for MLEs we have

$$\sqrt{n}(\hat{p} - p) \xrightarrow{d} N(0,\, 1/I_1(p)) \;\Rightarrow\; \sqrt{n}(\hat{p} - p) \xrightarrow{d} N(0,\, p^2(1 - p)).$$

Then from the Delta Method results we have

$$\sqrt{n}(\hat\theta - \theta) \xrightarrow{d} N\big(0,\, (d\theta/dp)^2/I_1(p)\big) \;\Rightarrow\; \sqrt{n}(\hat\theta - \theta) \xrightarrow{d} N\big(0,\, (1/p^4)\,p^2(1 - p)\big) \;\Rightarrow\; \sqrt{n}(\hat\theta - \theta) \xrightarrow{d} N\big(0,\, (1 - p)/p^2\big).$$

Thus, we can say (somewhat informally) that $\hat\theta \,\dot\sim\, N\big(\theta,\, (1 - p)/(np^2)\big)$.
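The limit in part 6 can be probed by simulation (an illustrative sketch, not part of the original solution; the variance of $\sqrt{n}(\hat\theta - \theta)$ is in fact exactly $(1 - p)/p^2$ here, and the delta method adds the approximate normality).

```python
# Monte Carlo sketch for the delta-method limit of theta_hat = x_bar.
import numpy as np

rng = np.random.default_rng(5)
n, p, reps = 400, 0.4, 20_000              # arbitrary illustrative values
theta = (1 - p) / p

x = rng.geometric(p, size=(reps, n)) - 1   # failures before the first success
z = np.sqrt(n) * (x.mean(axis=1) - theta)  # scaled estimation error

print(np.mean(z))                          # ~ 0
print(np.var(z), (1 - p) / p**2)           # the two should agree closely
```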

7. First, the risk function associated with $\hat\theta = \bar{X}$, under the loss $L(t, \theta) = (t - \theta)^2/(\theta(\theta + 1))$. (Notice that in what follows the risk functions are functions of $p$; if so desired, they could be made functions of $\theta$.) Since $\hat\theta$ is unbiased for $\theta$,

$$R_{\hat\theta}(p) = E[L(\hat\theta, \theta)] = \frac{1}{\theta(\theta + 1)}\,E[(\hat\theta - \theta)^2] = \frac{1}{\theta(\theta + 1)}\,\text{Var}(\bar{X}) = \frac{p^2}{1 - p}\cdot\frac{1 - p}{np^2} = \frac{1}{n},$$

using $\theta(\theta + 1) = \frac{1 - p}{p}\cdot\frac{1}{p} = \frac{1 - p}{p^2}$.

Now for the risk function associated with $\tilde\theta = n\bar{X}/(n + 1)$:

$$R_{\tilde\theta}(p) = E[L(\tilde\theta, \theta)] = \frac{1}{\theta(\theta + 1)}\Big[\text{Var}\big(n\bar{X}/(n + 1)\big) + \text{bias}\big(n\bar{X}/(n + 1)\big)^2\Big]$$

$$= \frac{p^2}{1 - p}\Big[\frac{n(1 - p)}{(n + 1)^2 p^2} + \frac{(1 - p)^2}{(n + 1)^2 p^2}\Big] = \frac{p^2}{1 - p}\cdot\frac{(1 - p)(n + 1 - p)}{(n + 1)^2 p^2} = \frac{n + 1 - p}{(n + 1)^2}.$$

[Figure: the risk functions $R_{\tilde\theta}(p)$ and $R_{\hat\theta}(p)$ plotted against $p \in (0, 1)$; $R_{\hat\theta}(p) = 1/n$ is constant, while $R_{\tilde\theta}(p) = (n + 1 - p)/(n + 1)^2$ lies below it for all $p$.]
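A few lines of code reproduce the comparison behind the figure (an illustrative sketch under the risk formulas derived above, with an arbitrary choice of $n$).

```python
# Sketch of the risk comparison: R_hat(p) = 1/n versus R_tilde(p) = (n+1-p)/(n+1)^2.
import numpy as np

n = 10                                  # arbitrary illustrative sample size
p = np.linspace(0.05, 0.95, 10)
R_hat = np.full_like(p, 1.0 / n)
R_tilde = (n + 1 - p) / (n + 1)**2

for pi, rh, rt in zip(p, R_hat, R_tilde):
    print(f"p={pi:.2f}  R_hat={rh:.4f}  R_tilde={rt:.4f}")
# R_tilde(p) < R_hat(p) for every p: the shrunken estimator n*x_bar/(n+1)
# dominates x_bar under this weighted loss.
```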

Solutions: Homework 6

1. (C&B 7.58, pg. 366) Let $X$ be one observation from the pdf

$$f(x \mid \theta) = \Big(\frac{\theta}{2}\Big)^{|x|}(1 - \theta)^{1 - |x|}, \qquad x = -1, 0, 1, \quad \theta \in [0, 1].$$

(a) Find the MLE of $\theta$.

To find the MLE of $\theta$ from one observation of $X$, it is enough to determine the value of $\theta \in [0, 1]$ that maximizes $f(x \mid \theta)$ for each of the 3 possible values of $X$. Notice that $f(x = -1 \mid \theta) = f(x = 1 \mid \theta) = \theta/2$, which is maximized at $\theta = 1$. Similarly, $f(x = 0 \mid \theta) = 1 - \theta$, which is maximized at $\theta = 0$. Therefore the MLE is

$$\hat\theta = \begin{cases} 1 & x = \pm 1 \\ 0 & x = 0 \end{cases} \;\Longrightarrow\; \hat\theta = |x|.$$

(b) Define an estimator

$$T(X) = \begin{cases} 2 & x = 1 \\ 0 & \text{otherwise.} \end{cases}$$

Show that $T(X)$ is an unbiased estimator of $\theta$. Notice that

$$E_\theta[T(X)] = 2\,P_\theta(X = 1) = 2\,(\theta/2) = \theta.$$

(c) Find an estimator that is better than $T(X)$ and prove that it is better.

From context, I am working under the assumption that we are looking for a better unbiased estimator. First notice that

$$E(\hat\theta) = P(|X| = 1) = P(X = -1 \text{ or } X = 1) = P(X = -1) + P(X = 1) = \theta/2 + \theta/2 = \theta.$$

Thus, $\hat\theta$ is unbiased. Note further that

$$\text{Var}(T(X)) = E(T(X)^2) - [E(T(X))]^2 = 4\,P(X = 1) - \theta^2 = 4\,(\theta/2) - \theta^2 = 2\theta - \theta^2$$

and

$$\text{Var}(\hat\theta) = E(\hat\theta^2) - [E(\hat\theta)]^2 = P(|X| = 1) - \theta^2 = \theta - \theta^2.$$

Finally, for any $\theta \in [0, 1]$ we have $\text{Var}(\hat\theta) \le \text{Var}(T(X))$, and therefore the MLE $\hat\theta$ is a better estimator than $T(X)$.

Alternatively, one can show that $\hat\theta$ is a complete and sufficient statistic for $\theta$ and then conclude by Method 1 that $\hat\theta$ is UMVUE. First, to show that $\hat\theta$ is sufficient, use the factorization theorem: let $g(\theta, s) = (\theta/2)^s(1 - \theta)^{1 - s}$ for $s = |x|$ and $h(x) = I[x \in \{-1, 0, 1\}]$. Then $f(x \mid \theta) = g(s, \theta)\,h(x)$, and by the factorization theorem $|x|$ is sufficient for $\theta$. To see that $|x|$ is also complete, suppose that $m(|x|)$ is a function such that $E_\theta[m(|X|)] = 0$ for all $\theta \in [0, 1]$. Then

$$0 = E_\theta[m(|X|)] = m(1)\,(\theta/2) + m(0)\,(1 - \theta) + m(1)\,(\theta/2) = m(1)\,\theta + m(0)\,(1 - \theta)$$

for all $\theta \in [0, 1]$. In particular, for $\theta = 0$ we have $0 = m(0)$, and for $\theta = 1$ we have $0 = m(1)$. Therefore, if $0 = E_\theta[m(|X|)]$ for all $\theta$, then $m(|x|) = 0$ for $x \in \{-1, 0, 1\}$, so $P_\theta(m(|X|) = 0) = 1$ for all $\theta \in [0, 1]$. By definition, $|x|$ is complete. Since $\hat\theta$ is a function of a complete and sufficient statistic, it is UMVUE and therefore better than $T(X)$.
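Because $X$ takes only three values, everything in this problem can be verified by exact enumeration rather than simulation; the sketch below (an illustration, not part of the original solution) tabulates both estimators over a grid of $\theta$.

```python
# Exact-enumeration sketch: E[theta_hat] = E[T] = theta, and Var(theta_hat) <= Var(T).
import numpy as np

theta = np.linspace(0.0, 1.0, 11)
# columns: P(X=-1), P(X=0), P(X=1) for each theta on the grid
probs = np.stack([theta / 2, 1 - theta, theta / 2], axis=1)

theta_hat = np.array([1.0, 0.0, 1.0])   # |x| evaluated at x = -1, 0, 1
T = np.array([0.0, 0.0, 2.0])           # T(x): 2 if x = 1, else 0

for est, name in [(theta_hat, "theta_hat"), (T, "T")]:
    mean = probs @ est                  # equals theta for both (unbiased)
    var = probs @ est**2 - mean**2      # theta - theta^2 vs. 2*theta - theta^2
    print(name, mean, var, sep="\n")
```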

2. Let $X_1,\dots,X_n$ be iid $N(\theta, 1)$, $\theta \in \mathbb{R}$.

(a) By Method 1, and since we can assume that $\sum X_i$ is complete (and sufficient), if we can find an unbiased estimator of $\tau(\theta) = \theta^2$ that is a function of $\sum X_i$, then we are done. As a first guess, consider $\bar{X}^2$. Note that $\bar{X} \sim N(\theta, 1/n)$ and thus

$$E(\bar{X}^2) = \text{Var}(\bar{X}) + E(\bar{X})^2 = 1/n + \theta^2.$$

This is obviously not an unbiased estimator of $\tau(\theta)$, but $T(\mathbf{X}) = \bar{X}^2 - 1/n$ is. Since $T(\mathbf{X})$ is unbiased and a function of a complete and sufficient statistic (we have shown $\sum X_i$ to be sufficient for $\theta$ under $N(\theta, 1)$), $T(\mathbf{X})$ is UMVUE.

(b) To find $\text{Var}(T(\mathbf{X}))$, notice that $Z = \sqrt{n}(\bar{X} - \theta) \sim N(0, 1)$ and $Z^2 \sim \chi^2(1)$, with $E(Z^2) = 1$ and $\text{Var}(Z^2) = 2$. Additionally, we will use the fact that the skewness (defined as the third central moment) is zero for symmetric distributions; that is, for $X \sim N(\mu, \sigma^2)$ we have $E(X - \mu)^3 = 0$, so for $Z \sim N(0, 1)$, $E(Z^3) = 0$. Now we can rewrite

$$T(\mathbf{X}) = \bar{X}^2 - \frac{1}{n} = \big[\sqrt{n}(\bar{X} - \theta) + \sqrt{n}\theta\big]^2\,\frac{1}{n} - \frac{1}{n} = \big[Z + \sqrt{n}\theta\big]^2\,\frac{1}{n} - \frac{1}{n}.$$

Hence the variance of $T(\mathbf{X})$ is

$$\text{Var}(T(\mathbf{X})) = \frac{1}{n^2}\,\text{Var}\big(Z^2 + 2\sqrt{n}\theta Z + n\theta^2\big) = \frac{1}{n^2}\big[\text{Var}(Z^2) + 4n\theta^2\,\text{Var}(Z) + 4\sqrt{n}\theta\,\text{Cov}(Z^2, Z)\big]$$

$$= \frac{1}{n^2}\big[2 + 4n\theta^2 + 4\sqrt{n}\theta\,\big(E(Z^3) - E(Z^2)E(Z)\big)\big] = \frac{1}{n^2}\big[2 + 4n\theta^2\big] \quad \text{since } E(Z) = 0 \text{ and } E(Z^3) = 0 \text{ (see above)}$$

$$= \frac{2}{n^2} + \frac{4\theta^2}{n}.$$

(c) Since the $X_i$ are iid, the CRLB for unbiased estimators of $\tau(\theta) = \theta^2$ is $[\tau'(\theta)]^2/(n\,I_1(\theta))$, with $[\tau'(\theta)]^2 = 4\theta^2$ and

$$I_1(\theta) = E\Big[-\frac{\partial^2}{\partial\theta^2}\log f(X_1 \mid \theta)\Big] = E\Big[-\frac{\partial^2}{\partial\theta^2}\Big(-\tfrac{1}{2}\log(2\pi) - \tfrac{1}{2}(X_1 - \theta)^2\Big)\Big] = E\Big[-\frac{\partial}{\partial\theta}(X_1 - \theta)\Big] = E[1] = 1.$$

Thus the CRLB is $4\theta^2/n$.

(d) Notice that

$$\text{Var}(T(\mathbf{X})) = \frac{2}{n^2} + \frac{4\theta^2}{n} = \frac{2}{n^2} + \text{CRLB} > \text{CRLB},$$

so $T(\mathbf{X})$ is UMVUE even though it does not attain the CRLB.
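Parts (b) and (d) invite a quick Monte Carlo confirmation (an illustrative sketch with arbitrary $\theta$ and $n$, not part of the original solution): the simulated variance of $\bar{X}^2 - 1/n$ should sit at $2/n^2 + 4\theta^2/n$, strictly above the CRLB.

```python
# Monte Carlo sketch: Var(X_bar^2 - 1/n) = 2/n^2 + 4*theta^2/n for N(theta, 1) data.
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 1.5, 30, 200_000       # arbitrary illustrative values

X = rng.normal(theta, 1.0, size=(reps, n))
T = X.mean(axis=1)**2 - 1.0 / n

print(np.mean(T))                       # ~ theta^2 (unbiased)
print(np.var(T), 2 / n**2 + 4 * theta**2 / n)
print(4 * theta**2 / n)                 # the CRLB, which Var(T) strictly exceeds
```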

3. (C&B 7.60, pg. 366) Find the best unbiased estimator of $1/\beta$ when $X_1,\dots,X_n$ are iid Gamma$(\alpha, \beta)$ and $\alpha$ is known.

We will show that $T = \sum X_i$ is a complete and sufficient statistic and use Method 1. Notice that

$$f(x \mid \alpha, \beta) = \frac{1}{\Gamma(\alpha)\beta^\alpha}\,x^{\alpha - 1}\exp\{-x/\beta\}\,I[x > 0] = c(\beta)\,h(x)\,\exp\{q(\beta)\,t(x)\}\,I[x > 0],$$

where $c(\beta) = \beta^{-\alpha}$, $h(x) = x^{\alpha - 1}/\Gamma(\alpha)$, $q(\beta) = -1/\beta$, and $t(x) = x$. Notice further that the support of the distribution, $A = \{x : x > 0\}$, does not depend on $\beta$, and that $\{q(\beta) : \beta > 0\} = \{-1/\beta : \beta > 0\} = (-\infty, 0)$ is open in $\mathbb{R}$. Thus the Gamma$(\alpha, \beta)$ distribution with $\alpha$ known is a member of the exponential family, and the necessary requirements are fulfilled to conclude that $T = \sum X_i$ is complete and sufficient for $\beta$.

A logical first choice for an unbiased estimator of $1/\beta$ is $1/T$. Note that $T \sim \text{Gamma}(n\alpha, \beta)$, and also that

$$E(1/T) = \int_0^\infty \frac{1}{t}\cdot\frac{1}{\Gamma(n\alpha)\beta^{n\alpha}}\,t^{n\alpha - 1}e^{-t/\beta}\,dt = \frac{\Gamma(n\alpha - 1)\beta^{n\alpha - 1}}{\Gamma(n\alpha)\beta^{n\alpha}}\int_0^\infty \frac{1}{\Gamma(n\alpha - 1)\beta^{n\alpha - 1}}\,t^{n\alpha - 2}e^{-t/\beta}\,dt = \frac{1}{(n\alpha - 1)\beta},$$

where the last integral equals 1 (it is the integral of a Gamma$(n\alpha - 1, \beta)$ density) and we used $\Gamma(n\alpha) = (n\alpha - 1)\Gamma(n\alpha - 1)$.

So define $T^\star = (n\alpha - 1)/T$. Going through the same procedure, it is easy to show that $T^\star$ is an unbiased estimator of $1/\beta$, and since it is a function of $T$ (which is complete and sufficient), it is UMVUE.
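Finally, the unbiasedness of $T^\star = (n\alpha - 1)/T$ is easy to confirm by simulation (an illustrative sketch with arbitrary $\alpha$, $\beta$, and $n$; NumPy's gamma sampler uses the same shape-scale parameterization as the Gamma$(\alpha, \beta)$ density here).

```python
# Monte Carlo sketch: E[(n*alpha - 1)/T] = 1/beta when T = sum(X_i) ~ Gamma(n*alpha, beta).
import numpy as np

rng = np.random.default_rng(7)
alpha, beta, n, reps = 2.0, 3.0, 15, 200_000  # arbitrary illustrative values

X = rng.gamma(alpha, beta, size=(reps, n))
T_star = (n * alpha - 1) / X.sum(axis=1)

print(np.mean(T_star), 1 / beta)        # the two should agree closely
```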