CHAPTER 2

Exercise 1. Suppose that $Y = (Y_1,\dots,Y_n)$ is a random sample from an $\mathrm{Exp}(\lambda)$ distribution. Then we may write
\[ f_Y(y) = \prod_{i=1}^n \lambda e^{-\lambda y_i} = \underbrace{\lambda^n e^{-\lambda\sum_{i=1}^n y_i}}_{g_\lambda(T(y))}\ \underbrace{1}_{h(y)}. \]
It follows that $T(Y) = \sum_{i=1}^n Y_i$ is a sufficient statistic for $\lambda$.

Exercise 2. Suppose that $Y = (Y_1,\dots,Y_n)$ is a random sample from an $\mathrm{Exp}(\lambda)$ distribution. Then the ratio of the joint pdfs at two different realizations of $Y$, $x$ and $y$, is
\[ \frac{f(x;\lambda)}{f(y;\lambda)} = \frac{\lambda^n e^{-\lambda\sum_i x_i}}{\lambda^n e^{-\lambda\sum_i y_i}} = e^{-\lambda\left(\sum_i x_i - \sum_i y_i\right)}. \]
The ratio is constant in $\lambda$ iff $\sum_i y_i = \sum_i x_i$. Hence, by Lemma 1, $T(Y) = \sum_{i=1}^n Y_i$ is a minimal sufficient statistic for $\lambda$.

Exercise 3. The $Y_i$ are identically distributed, hence they have the same expectation, say $E(Y)$, for all $i = 1,\dots,n$. Here, for $y\in[0,\theta]$, the density is $f(y) = \frac{2}{\theta^2}(\theta-y)$, and we have:
\[ E(Y) = \frac{2}{\theta^2}\int_0^\theta y(\theta-y)\,dy = \frac{2}{\theta^2}\int_0^\theta (\theta y - y^2)\,dy = \frac{2}{\theta^2}\left[\frac{\theta}{2}y^2 - \frac13 y^3\right]_0^\theta = \frac13\theta. \]
Bias:
\[ E(T(Y)) = E(3\bar Y) = 3\,\frac1n\sum_{i=1}^n E(Y_i) = 3\,\frac1n\,n\,\frac13\theta = \theta. \]
That is, $\mathrm{bias}(T(Y)) = E(T(Y)) - \theta = 0$.
Variance: the $Y_i$ are identically distributed, hence they have the same variance, say $\mathrm{var}(Y) = E(Y^2) - [E(Y)]^2$, for all $i = 1,\dots,n$. We need to calculate $E(Y^2)$:
\[ E(Y^2) = \frac{2}{\theta^2}\int_0^\theta y^2(\theta-y)\,dy = \frac{2}{\theta^2}\int_0^\theta (\theta y^2 - y^3)\,dy = \frac{2}{\theta^2}\left[\frac{\theta}{3}y^3 - \frac14 y^4\right]_0^\theta = \frac16\theta^2. \]
Hence $\mathrm{var}(Y) = E(Y^2) - [E(Y)]^2 = \frac16\theta^2 - \frac19\theta^2 = \frac1{18}\theta^2$. This gives
\[ \mathrm{var}(T(Y)) = 9\,\mathrm{var}(\bar Y) = 9\,\frac1{n^2}\sum_{i=1}^n\mathrm{var}(Y_i) = 9\,\frac1n\,\frac{\theta^2}{18} = \frac{\theta^2}{2n}. \]
Consistency: $T(Y)$ is unbiased, so it is enough to check whether its variance tends to zero as $n$ tends to infinity. Indeed, $\mathrm{var}(T(Y)) = \theta^2/(2n)\to 0$ as $n\to\infty$, so $T(Y) = 3\bar Y$ is a consistent estimator of $\theta$.
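The bias and variance calculations above are easy to sanity-check by simulation. Below is a minimal Python sketch (numpy assumed; the values of theta and n are illustrative, not taken from the exercise). It samples from $f(y) = \frac{2}{\theta^2}(\theta-y)$ by inverting the cdf $F(y) = 1-(1-y/\theta)^2$ and confirms that $3\bar Y$ is unbiased with variance close to $\theta^2/(2n)$.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 3.0, 50, 200_000      # illustrative values, not from the exercise

    # Inverse-cdf sampling from f(y) = (2/theta^2)(theta - y) on [0, theta]:
    # F(y) = 1 - (1 - y/theta)^2, hence F^{-1}(u) = theta * (1 - sqrt(1 - u)).
    y = theta * (1.0 - np.sqrt(1.0 - rng.random((reps, n))))

    t = 3.0 * y.mean(axis=1)               # T(Y) = 3 * Ybar, one value per replication
    print(t.mean(), theta)                 # sample mean of T is close to theta (unbiased)
    print(t.var(), theta**2 / (2 * n))     # sample variance of T is close to theta^2/(2n)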

Exercise 4. We have $X_i\sim\mathrm{Ber}(p)$ i.i.d. for $i = 1,\dots,n$. Also, $\bar X = \frac1n\sum_{i=1}^n X_i$.
(a) For an estimator $\hat\vartheta$ to be consistent for $\vartheta$ we require that $\mathrm{MSE}(\hat\vartheta)\to 0$ as $n\to\infty$. We have
\[ \mathrm{MSE}(\hat\vartheta) = \mathrm{var}(\hat\vartheta) + [\mathrm{bias}(\hat\vartheta)]^2. \]
We will now calculate the variance and bias of $\hat p = \bar X$:
\[ E(\bar X) = \frac1n\sum_{i=1}^n E(X_i) = \frac1n\,np = p. \]
Hence $\bar X$ is an unbiased estimator of $p$. Further,
\[ \mathrm{var}(\bar X) = \frac1{n^2}\sum_{i=1}^n\mathrm{var}(X_i) = \frac1{n^2}\,npq = \frac1n pq. \]
Hence $\mathrm{MSE}(\bar X) = \frac1n pq\to 0$ as $n\to\infty$; that is, $\bar X$ is a consistent estimator of $p$.
(b) The estimator $\hat\vartheta$ is asymptotically unbiased for $\vartheta$ if $E(\hat\vartheta)\to\vartheta$ as $n\to\infty$, that is, the bias tends to zero as $n\to\infty$. Here we have $\widehat{pq} = \bar X(1-\bar X)$. Note that we can write
\[ E(\widehat{pq}) = E[\bar X(1-\bar X)] = E[\bar X - \bar X^2] = E[\bar X] - E[\bar X^2], \]
where
\[ E[\bar X^2] = \mathrm{var}(\bar X) + [E(\bar X)]^2 = \frac1n pq + p^2. \]
That is,
\[ E(\widehat{pq}) = p - \left(\frac1n pq + p^2\right) = pq - \frac1n pq = \frac{pq(n-1)}{n}\to pq \quad\text{as } n\to\infty. \]
Hence, the estimator is asymptotically unbiased for $pq$.

Exercise 5. Here we have a single parameter $p$ and $g(p) = p$. By definition,
\[ \mathrm{CRLB}(p) = \frac{\left\{\frac{dg(p)}{dp}\right\}^2}{-E\left\{\frac{d^2\log P(Y=y;p)}{dp^2}\right\}}, \tag{1} \]
where the joint pmf of $Y = (Y_1,\dots,Y_n)^T$, with $Y_i\sim\mathrm{Ber}(p)$ independently, is
\[ P(Y=y;p) = \prod_{i=1}^n p^{y_i}(1-p)^{1-y_i} = p^{\sum y_i}(1-p)^{n-\sum y_i}. \]
For the numerator of (1) we get $dg(p)/dp = 1$. Further, for brevity denote $P = P(Y=y;p)$. For the denominator of (1) we calculate
\[ \log P = \sum y_i\log p + \left(n - \sum y_i\right)\log(1-p) \]
and
\[ \frac{d\log P}{dp} = \frac{\sum y_i}{p} - \frac{n-\sum y_i}{1-p}, \qquad \frac{d^2\log P}{dp^2} = -\frac{\sum y_i}{p^2} - \frac{n-\sum y_i}{(1-p)^2}. \]
Hence, since $E(Y_i) = p$ for all $i$, we get
\[ -E\left\{\frac{d^2\log P}{dp^2}\right\} = \frac{E\left(\sum Y_i\right)}{p^2} + \frac{n - E\left(\sum Y_i\right)}{(1-p)^2} = \frac np + \frac{n}{1-p} = \frac{n}{p(1-p)}. \]
Hence $\mathrm{CRLB}(p) = \frac{p(1-p)}{n}$. Since $\mathrm{var}(\bar Y) = \frac{p(1-p)}{n}$, it means that $\mathrm{var}(\bar Y)$ achieves the bound, and so $\bar Y$ has the minimum variance among all unbiased estimators of $p$.
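Both conclusions can be checked numerically. A minimal sketch (numpy assumed; p and n are illustrative choices, not from the exercises):

    import numpy as np

    rng = np.random.default_rng(1)
    p, n, reps = 0.3, 20, 200_000                    # illustrative values
    xbar = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

    # Exercise 4(b): E[Xbar(1 - Xbar)] equals pq(n-1)/n, approaching pq as n grows
    print(np.mean(xbar * (1 - xbar)), p * (1 - p) * (n - 1) / n)

    # Exercise 5: var(Xbar) attains the CRLB p(1-p)/n
    print(xbar.var(), p * (1 - p) / n)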

Exercise 6. From lectures, we know that the joint pdf of $n$ independent normal rvs is
\[ f(y;\mu,\sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\left\{-\frac1{2\sigma^2}\sum_{i=1}^n (y_i-\mu)^2\right\}. \]
Denote $f = f(y;\mu,\sigma^2)$. Taking the log of the pdf we obtain
\[ \log f = -\frac n2\log(2\pi\sigma^2) - \frac1{2\sigma^2}\sum_{i=1}^n (y_i-\mu)^2. \]
Thus, we have
\[ \frac{\partial\log f}{\partial\mu} = \frac1{\sigma^2}\sum_{i=1}^n (y_i-\mu), \qquad \frac{\partial\log f}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac1{2\sigma^4}\sum_{i=1}^n (y_i-\mu)^2. \]
It follows that
\[ \frac{\partial^2\log f}{\partial\mu^2} = -\frac{n}{\sigma^2}, \qquad \frac{\partial^2\log f}{\partial\mu\,\partial\sigma^2} = -\frac1{\sigma^4}\sum_{i=1}^n (y_i-\mu) \]
and
\[ \frac{\partial^2\log f}{\partial(\sigma^2)^2} = \frac{n}{2\sigma^4} - \frac1{\sigma^6}\sum_{i=1}^n (y_i-\mu)^2. \]
Hence, taking the expectation of each of the second derivatives (with the sign changed), we obtain the Fisher information matrix
\[ M = \begin{pmatrix} \dfrac{n}{\sigma^2} & 0 \\ 0 & \dfrac{n}{2\sigma^4} \end{pmatrix}. \]
Now, let $g(\mu,\sigma^2) = \mu + \sigma^2$. Then we have $\partial g/\partial\mu = 1$ and $\partial g/\partial\sigma^2 = 1$. So
\[ \mathrm{CRLB}(g(\mu,\sigma^2)) = (1,1)\begin{pmatrix} \dfrac{n}{\sigma^2} & 0 \\ 0 & \dfrac{n}{2\sigma^4} \end{pmatrix}^{-1}\begin{pmatrix}1\\1\end{pmatrix} = (1,1)\begin{pmatrix} \dfrac{\sigma^2}{n} & 0 \\ 0 & \dfrac{2\sigma^4}{n} \end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix} = \frac{\sigma^2}{n}\left(1 + 2\sigma^2\right). \]
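The matrix computation at the end can be reproduced directly. The sketch below (numpy assumed; n and sigma^2 are illustrative values) evaluates $(1,1)\,M^{-1}\,(1,1)^T$ and compares it with the closed form $\frac{\sigma^2}{n}(1+2\sigma^2)$:

    import numpy as np

    n, sigma2 = 25, 2.0                              # illustrative values
    M = np.array([[n / sigma2, 0.0],
                  [0.0, n / (2 * sigma2**2)]])       # Fisher information for (mu, sigma^2)
    grad = np.array([1.0, 1.0])                      # gradient of g(mu, sigma^2) = mu + sigma^2
    print(grad @ np.linalg.inv(M) @ grad)            # CRLB via the matrix formula
    print(sigma2 / n * (1 + 2 * sigma2))             # closed form; the two agree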

Exercise 7. Suppose that $Y_1,\dots,Y_n$ are independent Poisson($\lambda$) random variables. Then we know that $T = \sum_{i=1}^n Y_i$ is a sufficient statistic for $\lambda$. Now we need to find the distribution of $T$. We showed in Exercise 11 that the mgf of a Poisson($\lambda$) rv is $M_Y(z) = e^{\lambda(e^z-1)}$. Hence, we may write (we use the argument $z$ so as not to confuse it with the values of $T$, denoted by $t$)
\[ M_T(z) = \prod_{i=1}^n M_{Y_i}(z) = \prod_{i=1}^n e^{\lambda(e^z-1)} = e^{n\lambda(e^z-1)}. \]
Hence $T\sim\mathrm{Poisson}(n\lambda)$, and so its probability mass function is
\[ P(T=t) = \frac{(n\lambda)^t e^{-n\lambda}}{t!}, \quad t = 0,1,2,\dots \]
Next, suppose that $E\{h(T)\} = 0$ for all $\lambda > 0$. Then we have
\[ E\{h(T)\} = \sum_{t=0}^\infty h(t)\,\frac{(n\lambda)^t e^{-n\lambda}}{t!} = 0, \quad\text{i.e.,}\quad \sum_{t=0}^\infty \frac{h(t)}{t!}(n\lambda)^t = 0 \]
for all $\lambda > 0$. Thus, every coefficient $h(t)/t!$ is zero, so that $h(t) = 0$ for all $t = 0,1,2,\dots$. Since $T$ takes on the values $t = 0,1,2,\dots$ with probability 1, it means that $P\{h(T)=0\} = 1$ for all $\lambda$. Hence $T = \sum_{i=1}^n Y_i$ is a complete sufficient statistic.

Exercise 8. $S = \sum_{i=1}^n Y_i$ is a complete sufficient statistic for $\lambda$. We have seen that $T = \bar Y = S/n$ is an MVUE for $\lambda$. Now we will find a unique MVUE of $\phi = \lambda^2$. We have
\[ E(T^2) = E\left(\frac1{n^2}S^2\right) = \frac1{n^2}E(S^2) = \frac1{n^2}\left[\mathrm{var}(S) + [E(S)]^2\right] = \frac1{n^2}\left[n\lambda + n^2\lambda^2\right] = \frac1n\lambda + \lambda^2 = \frac1n E(T) + \lambda^2. \]
It means that $E\left[T^2 - \frac1n T\right] = \lambda^2$, i.e., $T^2 - \frac1n T = \bar Y^2 - \frac1n\bar Y$ is an unbiased estimator of $\lambda^2$. It is a function of a complete sufficient statistic, hence it is the unique MVUE of $\lambda^2$.
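A quick simulation check of the unbiasedness claim in Exercise 8 (numpy assumed; lambda and n are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    lam, n, reps = 1.5, 10, 500_000                  # illustrative values
    ybar = rng.poisson(lam, size=(reps, n)).mean(axis=1)
    est = ybar**2 - ybar / n                         # proposed MVUE of lambda^2
    print(est.mean(), lam**2)                        # close agreement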

Exercise 9. We may write
\[ P(Y=y;\lambda) = \frac{\lambda^y e^{-\lambda}}{y!} = \frac1{y!}\exp\{y\log\lambda - \lambda\} = \frac1{y!}\exp\{(\log\lambda)\,y - \lambda\}. \]
Thus, we have $a(\lambda) = \log\lambda$, $b(y) = y$, $c(\lambda) = -\lambda$ and $h(y) = \frac1{y!}$. That is, $P(Y=y;\lambda)$ has a representation of the form required by Definition 10.

Exercise 10. (a) Here, for $y > 0$, we have
\[ f(y\mid\lambda,\alpha) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\,y^{\alpha-1}e^{-\lambda y} = \exp\left\{-\lambda y + (\alpha-1)\log y + \log\left[\frac{\lambda^\alpha}{\Gamma(\alpha)}\right]\right\}. \]
This has the required form of Definition 10, where $p = 2$ and
\[ a_1(\lambda,\alpha) = -\lambda,\quad a_2(\lambda,\alpha) = \alpha-1,\quad b_1(y) = y,\quad b_2(y) = \log y,\quad c(\lambda,\alpha) = \log\left[\frac{\lambda^\alpha}{\Gamma(\alpha)}\right],\quad h(y) = 1. \]
(b) By Theorem 8 (lecture notes) we have that $S_1(Y) = \sum_{i=1}^n Y_i$ and $S_2(Y) = \sum_{i=1}^n \log Y_i$ are the joint complete sufficient statistics for $\lambda$ and $\alpha$.

Exercise 11. To obtain the Method of Moments estimator we compare the population and the sample moments. For a one-parameter distribution we obtain $\hat\theta$ as the solution of
\[ E(Y) = \bar Y. \tag{2} \]
Here, for $y\in[0,\theta]$, we have (as in Exercise 3):
\[ E(Y) = \frac{2}{\theta^2}\int_0^\theta y(\theta-y)\,dy = \frac{2}{\theta^2}\left[\frac{\theta}{2}y^2 - \frac13 y^3\right]_0^\theta = \frac13\theta. \]
Then by (2) we get the method of moments estimator of $\theta$: $\hat\theta = 3\bar Y$.
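The sufficiency statement in Exercise 10(b) can be illustrated numerically: the Gamma log-likelihood depends on the sample only through $S_1 = \sum_i Y_i$ and $S_2 = \sum_i \log Y_i$. A sketch (numpy and scipy assumed; all parameter values illustrative):

    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(3)
    lam, alpha, n = 2.0, 3.0, 40                     # illustrative values
    y = rng.gamma(shape=alpha, scale=1.0 / lam, size=n)

    # Full-data log-likelihood
    full = np.sum(alpha * np.log(lam) - gammaln(alpha) + (alpha - 1) * np.log(y) - lam * y)

    # The same quantity computed from the sufficient statistics alone
    s1, s2 = y.sum(), np.log(y).sum()
    suff = n * (alpha * np.log(lam) - gammaln(alpha)) + (alpha - 1) * s2 - lam * s1

    print(full, suff)                                # identical up to rounding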

The by () we get the method of momets estimator ofθ: θ 3Y Exercise 1 (a) First, we will show that the distributio belogs to a expoetial family Here, fory > ad kow α, we have f(y λ,α) λα Γ(α) yα 1 e λy α 1 λα y Γ(α) e λy { [ ]} λ y α 1 α exp log Γ(α) e λy { [ ]} λ y α 1 α exp λy +log Γ(α) This has the required form of Defiitio 11, wherep 1 ad a(λ) λ By Theorem 8 (lecture otes) we have that is the complete sufficiet statistic forλ (b) The likelihood fuctio is The the log-likelihood is L(λ;y) b(y) y [ ] λ α c(λ) log Γ(α) h(y) y α 1 S(Y ) Y i λ α Γ(α) yα 1 i e λy i λ α ) y α 1 i e λy i ( Γ(α) elog { λ α Γ(α) } e (α 1) logy i e λ y i l(λ;y) logl(λ;y ) αlogλ logγ(α)+(α 1) The, we obtai the followig derivative ofl(λ;y ) with respect toλ: dl dλ α1 λ y i logy i λ y i This, set to zero, gives λ α y α i y

Hence $\mathrm{MLE}(\lambda) = \alpha/\bar Y$. So, we get
\[ \mathrm{MLE}[g(\lambda)] = \mathrm{MLE}\left[\frac1\lambda\right] = \frac1{\mathrm{MLE}(\lambda)} = \frac{\bar Y}{\alpha} = \frac1{n\alpha}\sum_{i=1}^n Y_i = \frac1{n\alpha}S(Y). \]
That is, $\mathrm{MLE}[g(\lambda)]$ is a function of the complete sufficient statistic $S(Y) = \sum_{i=1}^n Y_i$.
(c) To show that it is an unbiased estimator of $g(\lambda)$ we calculate:
\[ E[g(\hat\lambda)] = E\left(\frac1{n\alpha}\sum_{i=1}^n Y_i\right) = \frac1{n\alpha}\sum_{i=1}^n E(Y_i) = \frac1{n\alpha}\,n\,\frac\alpha\lambda = \frac1\lambda. \]
It is an unbiased estimator and a function of a complete sufficient statistic; hence, by the Corollary given in Lectures, it is the unique $\mathrm{MVUE}(g(\lambda))$.

Exercise 13. (a) The likelihood is
\[ L(\beta_0,\beta_1;y) = (2\pi\sigma^2)^{-n/2}\exp\left\{-\frac1{2\sigma^2}\sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)^2\right\}. \]
Now, maximizing this is equivalent to minimizing
\[ S(\beta_0,\beta_1) = \sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)^2, \]
which is the criterion we use to find the least squares estimators. Hence, the maximum likelihood estimators are the same as the least squares estimators.
(c) The estimates of $\beta_0$ and $\beta_1$ are
\[ \hat\beta_0 = \bar Y - \hat\beta_1\bar x = 94.13, \qquad \hat\beta_1 = \frac{\sum_i x_iy_i - n\bar x\bar y}{\sum_i x_i^2 - n\bar x^2} = -1.266. \]
Hence the estimate of the mean response at a given $x$ is
\[ \hat E(Y\mid x) = 94.13 - 1.266\,x. \]
For the temperature of $x = 40$ degrees we obtain the estimate of the expected hardness equal to $\hat E(Y\mid x=40) = 43.483$.
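The closed-form estimates in Exercise 13(c) can be mimicked on simulated data. In the sketch below (numpy assumed) the coefficients echo the exercise's fitted values, but the data are synthetic and sigma is an illustrative choice:

    import numpy as np

    rng = np.random.default_rng(5)
    n, b0, b1, sigma = 30, 94.13, -1.266, 2.0        # b0, b1 echo Exercise 13(c); sigma illustrative
    x = rng.uniform(20, 60, size=n)
    y = b0 + b1 * x + rng.normal(0, sigma, size=n)

    sxx = np.sum((x - x.mean()) ** 2)
    b1_hat = np.sum((x - x.mean()) * y) / sxx        # = (sum x_i y_i - n xbar ybar) / (sum x_i^2 - n xbar^2)
    b0_hat = y.mean() - b1_hat * x.mean()
    print(b0_hat, b1_hat)
    print(np.polyfit(x, y, deg=1))                   # same fit: [slope, intercept]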

Exercise 14. The LS estimator of $\beta_1$ is
\[ \hat\beta_1 = \frac{\sum_i x_iY_i - n\bar x\bar Y}{\sum_i x_i^2 - n\bar x^2}. \]
We will see that it has a normal distribution and is unbiased, and we will find its variance. Now, normality is clear from the fact that we may write
\[ \hat\beta_1 = \frac{\sum_i x_iY_i - \bar x\sum_i Y_i}{s_{xx}} = \frac1{s_{xx}}\sum_{i=1}^n (x_i-\bar x)\,Y_i, \]
where $s_{xx} = \sum_i x_i^2 - n\bar x^2 = \sum_i (x_i-\bar x)^2$, so that $\hat\beta_1$ is a linear function of $Y_1,\dots,Y_n$, each of which is normally distributed. Next, we have
\[ E(\hat\beta_1) = \frac1{s_{xx}}\sum_{i=1}^n (x_i-\bar x)E(Y_i) = \frac1{s_{xx}}\left\{\sum_i x_iE(Y_i) - \bar x\sum_i E(Y_i)\right\} = \frac1{s_{xx}}\left\{\sum_i x_i(\beta_0+\beta_1 x_i) - \bar x\sum_i(\beta_0+\beta_1 x_i)\right\} \]
\[ = \frac1{s_{xx}}\left\{n\beta_0\bar x + \beta_1\sum_i x_i^2 - n\beta_0\bar x - n\beta_1\bar x^2\right\} = \frac1{s_{xx}}\left\{\sum_i x_i^2 - n\bar x^2\right\}\beta_1 = \beta_1. \]
Finally, since the $Y_i$'s are independent, we have
\[ \mathrm{var}(\hat\beta_1) = \frac1{s_{xx}^2}\sum_{i=1}^n (x_i-\bar x)^2\,\mathrm{var}(Y_i) = \frac1{s_{xx}^2}\,s_{xx}\,\sigma^2 = \frac{\sigma^2}{s_{xx}}. \]
Hence $\hat\beta_1\sim N(\beta_1,\sigma^2/s_{xx})$, and a $100(1-\alpha)\%$ confidence interval for $\beta_1$ is
\[ \hat\beta_1 \pm t_{n-2,\frac\alpha2}\,\frac{S}{\sqrt{s_{xx}}}, \qquad\text{where } S^2 = \frac1{n-2}\sum_{i=1}^n \left(Y_i - \hat\beta_0 - \hat\beta_1 x_i\right)^2 \text{ is the MVUE for }\sigma^2. \]

Exercise 15. (a) The likelihood is
\[ L(\theta;y) = \prod_{i=1}^n \theta y_i^{\theta-1} = \theta^n\left(\prod_{i=1}^n y_i\right)^{\theta-1}, \]
and so the log-likelihood is
\[ \ell(\theta;y) = n\log\theta + (\theta-1)\log\left(\prod_{i=1}^n y_i\right). \]
Thus, solving the equation
\[ \frac{d\ell}{d\theta} = \frac n\theta + \log\left(\prod_{i=1}^n y_i\right) = 0, \]
we obtain the maximum likelihood estimator of $\theta$ as $\hat\theta = -n/\log\left(\prod_{i=1}^n Y_i\right)$.
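The MLE from Exercise 15(a) is easy to verify by simulation, since $f(y;\theta) = \theta y^{\theta-1}$ on $(0,1)$ is the Beta$(\theta,1)$ density (numpy assumed; theta and n are illustrative):

    import numpy as np

    rng = np.random.default_rng(6)
    theta, n = 2.5, 500                              # illustrative values
    y = rng.beta(theta, 1.0, size=n)                 # density theta * y^(theta-1) on (0, 1)
    theta_hat = -n / np.log(y).sum()                 # MLE from Exercise 15(a)
    print(theta_hat)                                 # close to theta for large n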

(b) Since
\[ \frac{d^2\ell}{d\theta^2} = -\frac{n}{\theta^2}, \]
we have
\[ \mathrm{CRLB}(\theta) = \frac1{-E\left(\frac{d^2\ell}{d\theta^2}\right)} = \frac{\theta^2}{n}. \]
Thus, for large $n$, $\hat\theta\sim\mathrm{AN}(\theta,\theta^2/n)$.
(c) Here we have to replace $\mathrm{CRLB}(\theta)$ with its estimator to obtain the approximate pivot. This gives
\[ Q(Y,\theta) = \frac{\hat\theta-\theta}{\hat\theta/\sqrt n}\sim\mathrm{AN}(0,1)\ \text{approx.} \]
Then
\[ P\left(-z_{\frac\alpha2} < \frac{\hat\theta-\theta}{\hat\theta/\sqrt n} < z_{\frac\alpha2}\right)\approx 1-\alpha, \]
where $z_{\frac\alpha2}$ is such that $P(|Z| < z_{\frac\alpha2}) = 1-\alpha$, $Z\sim N(0,1)$. It may be rearranged to yield
\[ P\left(\hat\theta - z_{\frac\alpha2}\frac{\hat\theta}{\sqrt n} < \theta < \hat\theta + z_{\frac\alpha2}\frac{\hat\theta}{\sqrt n}\right)\approx 1-\alpha. \]
Hence, an approximate $100(1-\alpha)\%$ confidence interval for $\theta$ is $\hat\theta \pm z_{\frac\alpha2}\,\hat\theta/\sqrt n$. Finally, the approximate 90% confidence interval for $\theta$ is
\[ \hat\theta \pm 1.6449\,\frac{\hat\theta}{\sqrt n}, \qquad\text{where } \hat\theta = -n/\log\left(\textstyle\prod_{i=1}^n Y_i\right). \]
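The coverage of this approximate interval can be assessed by repeating the experiment many times (numpy assumed; theta, n and the number of replications are illustrative):

    import numpy as np

    rng = np.random.default_rng(7)
    theta, n, reps, z = 2.5, 100, 20_000, 1.6449     # z = z_{0.05} for a 90% interval
    y = rng.beta(theta, 1.0, size=(reps, n))         # f(y) = theta * y^(theta-1) on (0, 1)
    th = -n / np.log(y).sum(axis=1)                  # MLE, one per replication
    half = z * th / np.sqrt(n)
    cover = (th - half <= theta) & (theta <= th + half)
    print(cover.mean())                              # close to the nominal 0.90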

Exercise 16. (a) For a Poisson distribution we have $\mathrm{MLE}(\lambda)$ equal to $\hat\lambda = \bar Y$ and
\[ \hat\lambda\sim\mathrm{AN}\left(\lambda,\frac\lambda n\right). \]
Hence,
\[ \hat\lambda_1 - \hat\lambda_2\sim\mathrm{AN}\left(\lambda_1-\lambda_2,\ \frac{\lambda_1}{n_1}+\frac{\lambda_2}{n_2}\right). \]
So, after standardization, we get the approximate pivot for $\lambda_1-\lambda_2$:
\[ Q(Y,\lambda_1-\lambda_2) = \frac{\hat\lambda_1-\hat\lambda_2 - (\lambda_1-\lambda_2)}{\sqrt{\frac{\hat\lambda_1}{n_1}+\frac{\hat\lambda_2}{n_2}}}\sim N(0,1)\ \text{approx.} \]
Then, for $z_{\frac\alpha2}$ such that $P(|Z| < z_{\frac\alpha2}) = 1-\alpha$, $Z\sim N(0,1)$, we may write
\[ P\left(-z_{\frac\alpha2} < \frac{\hat\lambda_1-\hat\lambda_2 - (\lambda_1-\lambda_2)}{\sqrt{\frac{\hat\lambda_1}{n_1}+\frac{\hat\lambda_2}{n_2}}} < z_{\frac\alpha2}\right)\approx 1-\alpha, \]
which gives
\[ P\left(\hat\lambda_1-\hat\lambda_2 - z_{\frac\alpha2}\sqrt{\frac{\hat\lambda_1}{n_1}+\frac{\hat\lambda_2}{n_2}} < \lambda_1-\lambda_2 < \hat\lambda_1-\hat\lambda_2 + z_{\frac\alpha2}\sqrt{\frac{\hat\lambda_1}{n_1}+\frac{\hat\lambda_2}{n_2}}\right)\approx 1-\alpha. \]
That is, a $100(1-\alpha)\%$ CI for $\lambda_1-\lambda_2$ is
\[ \bar Y_1 - \bar Y_2 \pm z_{\frac\alpha2}\sqrt{\frac{\bar Y_1}{n_1}+\frac{\bar Y_2}{n_2}}. \]
(b) Denote by $Y_i$ the density of seedlings of tree A at square-metre area $i$ around the tree, and by $X_i$ the density of seedlings of tree B at square-metre area $i$ around the tree. Then we may assume that $Y_i\sim\mathrm{Poisson}(\lambda_1)$ i.i.d. and $X_i\sim\mathrm{Poisson}(\lambda_2)$ i.i.d. We are interested in the difference in the mean density, i.e., in $\lambda_1-\lambda_2$. From the data we get
\[ \hat\lambda_1-\hat\lambda_2 = 1, \qquad \frac{\hat\lambda_1}{n_1}+\frac{\hat\lambda_2}{n_2} = \frac37. \]
Hence, the approximate 99% CI for $\lambda_1-\lambda_2$ is
\[ \left[\,1 - 2.5758\sqrt{\tfrac37},\ 1 + 2.5758\sqrt{\tfrac37}\,\right] = [-0.69,\ 2.69]. \]
The CI includes zero; hence, at the 1% significance level, there is no evidence to reject $H_0:\lambda_1=\lambda_2$ against $H_1:\lambda_1\neq\lambda_2$. That is, there is no evidence to say, at the 1% significance level, that tree A produced a higher density of seedlings than tree B did.
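The interval in part (b) is a one-line computation. In the sketch below (numpy and scipy assumed), the sample summaries passed in are one combination consistent with the summary statistics above ($\hat\lambda_1-\hat\lambda_2 = 1$, $\hat\lambda_1/n_1+\hat\lambda_2/n_2 = 3/7$); they are an assumption, since the raw counts are not reproduced here.

    import numpy as np
    from scipy.stats import norm

    def poisson_diff_ci(ybar1, n1, ybar2, n2, level=0.99):
        # Wald interval for lambda_1 - lambda_2 from Exercise 16(a)
        z = norm.ppf(1 - (1 - level) / 2)            # 2.5758 for level = 0.99
        half = z * np.sqrt(ybar1 / n1 + ybar2 / n2)
        return ybar1 - ybar2 - half, ybar1 - ybar2 + half

    print(poisson_diff_ci(2.0, 7, 1.0, 7))           # assumed summaries; approx (-0.69, 2.69)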

Exercise 17. (a) $Y_i\sim\mathrm{Ber}(p)$ i.i.d., $i = 1,\dots,n$, and we are interested in testing the hypothesis $H_0: p = p_0$ against $H_1: p = p_1$. The likelihood is
\[ L(p;y) = p^{\sum y_i}(1-p)^{n-\sum y_i}, \]
and so we get the likelihood ratio
\[ \lambda(p) = \frac{L(p_0;y)}{L(p_1;y)} = \frac{p_0^{\sum y_i}(1-p_0)^{n-\sum y_i}}{p_1^{\sum y_i}(1-p_1)^{n-\sum y_i}} = \left\{\frac{p_0(1-p_1)}{p_1(1-p_0)}\right\}^{\sum y_i}\left\{\frac{1-p_0}{1-p_1}\right\}^{n}. \]
Then, the critical region is
\[ R = \{y:\ \lambda(p)\le a\}, \]
where $a$ is a constant chosen to give significance level $\alpha$. It means that we reject the null hypothesis if
\[ \left\{\frac{p_0(1-p_1)}{p_1(1-p_0)}\right\}^{\sum y_i}\left\{\frac{1-p_0}{1-p_1}\right\}^{n}\le a, \]
which is equivalent to
\[ \left\{\frac{p_0(1-p_1)}{p_1(1-p_0)}\right\}^{\sum y_i}\le b, \]
or, after taking logs of both sides, to
\[ \sum y_i\,\log\left\{\frac{p_0(1-p_1)}{p_1(1-p_0)}\right\}\le c, \]
where $b$ and $c$ are constants chosen to give significance level $\alpha$. When $p_1 > p_0$ we have
\[ \log\left\{\frac{p_0(1-p_1)}{p_1(1-p_0)}\right\} < 0. \]
Hence, the critical region can be written as
\[ R = \{y:\ \bar y\ge d\}, \]
for some constant $d$ chosen to give significance level $\alpha$. By the central limit theorem, we have (when the null hypothesis is true, i.e., $p = p_0$):
\[ \bar Y\sim\mathrm{AN}\left(p_0,\ \frac{p_0(1-p_0)}{n}\right). \]
Hence,
\[ Z = \frac{\bar Y - p_0}{\sqrt{p_0(1-p_0)/n}}\sim\mathrm{AN}(0,1), \]
and we may write
\[ \alpha = P(\bar Y\ge d\mid p = p_0) = P(Z\ge z_\alpha), \qquad\text{where } z_\alpha = \frac{d-p_0}{\sqrt{p_0(1-p_0)/n}}. \]
Hence $d = p_0 + z_\alpha\sqrt{\frac{p_0(1-p_0)}{n}}$ and the critical region is
\[ R = \left\{y:\ \bar y\ge p_0 + z_\alpha\sqrt{\frac{p_0(1-p_0)}{n}}\right\}. \]
(b) The critical region does not depend on $p_1$; hence it is the same for all $p_1 > p_0$, and so there is a uniformly most powerful test for $H_0: p = p_0$ against $H_1: p > p_0$. The power function is
\[ \beta(p) = P(\bar Y\in R\mid p) = P\left(\bar Y\ge p_0 + z_\alpha\sqrt{\frac{p_0(1-p_0)}{n}}\ \middle|\ p\right) = P\left(\frac{\bar Y - p}{\sqrt{p(1-p)/n}}\ge \frac{p_0 - p + z_\alpha\sqrt{\frac{p_0(1-p_0)}{n}}}{\sqrt{p(1-p)/n}}\right) = 1 - \Phi\{g(p)\}, \]
where
\[ g(p) = \frac{p_0 + z_\alpha\sqrt{\frac{p_0(1-p_0)}{n}} - p}{\sqrt{p(1-p)/n}} \]
and $\Phi$ denotes the cumulative distribution function of the standard normal distribution.
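The power function $\beta(p) = 1-\Phi\{g(p)\}$ translates directly into code (numpy and scipy assumed):

    import numpy as np
    from scipy.stats import norm

    def power(p, p0, n, alpha=0.05):
        # beta(p) = 1 - Phi(g(p)) for the one-sided test of H0: p = p0 vs H1: p > p0
        z = norm.ppf(1 - alpha)
        g = (p0 + z * np.sqrt(p0 * (1 - p0) / n) - p) / np.sqrt(p * (1 - p) / n)
        return 1 - norm.cdf(g)

    print(power(0.2, p0=0.1, n=30))                  # approx 0.554, as in Question 18(b)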

Question 18. (a) Let us denote by $Y_i\sim\mathrm{Ber}(p)$ the response of mouse $i$ to the drug candidate. Then, from Question 17, we have the following critical region:
\[ R = \left\{y:\ \bar y\ge p_0 + z_\alpha\sqrt{\frac{p_0(1-p_0)}{n}}\right\}. \]
Here $p_0 = 0.1$, $n = 30$, $\alpha = 0.05$, $z_\alpha = 1.6449$. It gives
\[ R = \{y:\ \bar y\ge 0.19\}. \]
From the sample we have $\hat p = \bar y = 6/30 = 0.2\ge 0.19$; that is, there is evidence to reject the null hypothesis at the significance level $\alpha = 0.05$.
(b) The power function is $\beta(p) = 1-\Phi\{g(p)\}$, where
\[ g(p) = \frac{p_0 + z_{0.05}\sqrt{\frac{p_0(1-p_0)}{n}} - p}{\sqrt{p(1-p)/n}}, \qquad z_{0.05} = 1.6449. \]
When $n = 30$, $p_0 = 0.1$ and $p = 0.2$ we obtain $g(0.2) = -0.1356$ and
\[ \Phi(-0.1356) = 1 - \Phi(0.1356) = 1 - 0.5539 = 0.4461. \]
It gives the power equal to $\beta(0.2) = 0.5539$. It means that the probability of a type II error is $0.4461$, which is rather high. This is because the value under the alternative hypothesis is close to the null value and also the number of observations is not large. To find what is needed to get the power $\beta(0.2) = 0.8$ we calculate:
\[ g(0.2) = \frac{-0.1 + z_{0.05}\sqrt{0.09/n}}{\sqrt{0.16/n}} = -\frac{\sqrt n}{4} + 0.75\,z_{0.05}. \]
For $\beta(0.2) = 1-\Phi\{g(0.2)\}$ to be equal to $0.8$ it means that $\Phi\{g(0.2)\} = 0.2$. From statistical tables we obtain that $g(0.2) = -0.8416$. Hence, it gives, for $z_{0.05} = 1.6449$,
\[ n = \left(4\times 0.8416 + 3\times 1.6449\right)^2 = 68.9. \]
At least 69 mice are needed to obtain a test with power as high as 0.8 for detecting that the proportion is 0.2 rather than 0.1.
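The hand calculation for the required sample size can be reproduced as follows (numpy and scipy assumed); it solves $g(0.2) = -0.8416$ for $n$ exactly as above:

    import numpy as np
    from scipy.stats import norm

    p0, p1, alpha, target = 0.1, 0.2, 0.05, 0.8
    za, zb = norm.ppf(1 - alpha), norm.ppf(target)   # 1.6449 and 0.8416
    # Solving g(p1) = -zb for n:
    # (p0 - p1 + za*sqrt(p0*(1-p0)/n)) / sqrt(p1*(1-p1)/n) = -zb
    root_n = (zb * np.sqrt(p1 * (1 - p1)) + za * np.sqrt(p0 * (1 - p0))) / (p1 - p0)
    print(root_n**2, np.ceil(root_n**2))             # 68.9 -> at least 69 mice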