MATH 47 / SPRING 2013 ASSIGNMENT: DUE FEBRUARY 4 (FINALIZED)

Please include a cover sheet that provides a complete-sentence answer to each of the following three questions:
(a) In your opinion, what were the main ideas covered in the assignment?
(b) Were there any topics that you believe should have been covered, but were not?
(c) What problems did you have with the assignment, if any?

Write your solutions carefully and neatly, preferably in complete and grammatically correct sentences. Justify all of your arguments.

1. Question 5.7., page 333.
2. Question 5.4.17, page 319.
3. Question 5.4.18, page 320.
4. Question 5.5., page 323.
5. Question 5.6.6, page 330.
6. Question 5.6.7, page 330.
7. Question 5.3.1, page 309.
8. Question 5.3.7, page 310.
Solutions to Selected Problems

Solution (5.3.7). The true mean either does or does not lie in an interval of the given form (involving an estimate $\bar y$). But if we consider a random interval of the form
\[
I(\bar Y) = \left(\bar Y - 0.96\,\frac{\sigma}{\sqrt n},\ \bar Y + 1.06\,\frac{\sigma}{\sqrt n}\right),
\]
then it is a valid question to ask: what is the probability that $\mu$ lies in this interval? Using the fact that $Z = \frac{\bar Y - \mu}{\sigma/\sqrt n}$ is a standard normal random variable, we have
\[
P\left(\bar Y - 0.96\,\frac{\sigma}{\sqrt n} < \mu < \bar Y + 1.06\,\frac{\sigma}{\sqrt n}\right)
= P\left(-1.06 < \frac{\bar Y - \mu}{\sigma/\sqrt n} < 0.96\right)
= P(-1.06 < Z < 0.96) = 0.8315 - 0.1446 = 0.6869.
\]
So the probability that $\mu$ lies in the interval $I(\bar Y)$ is about $p = 0.687$. We now view each of our samples $\bar Y$ as a Bernoulli trial, where $\mu$ lies in $I(\bar Y)$ with success probability $p = 0.6869$. Let $X$ be the number of successes out of $N = 5$ experiments. Then $X \sim B(N, p)$ is binomially distributed, and we have
\[
P\bigl(\mu\ \text{lies in at least 4 intervals}\ I(\bar y)\bigr) = P(X \ge 4) = P(X = 4) + P(X = 5)
= \binom{5}{4} p^4 (1-p)^1 + \binom{5}{5} p^5 (1-p)^0 = 0.5014.
\]

Solution (5.4.18). We want to compare the variances of $\hat\theta_1 = \frac{6}{5}\,Y_{\max}$ and $\hat\theta_2 = 6\,Y_{\min}$. Using Theorem 3.10.1, we show that
\[
f_{Y_{\max}}(y) = \frac{5y^4}{\theta^5}, \qquad 0 \le y \le \theta.
\]
We have
\[
E(Y_{\max}) = \int_0^\theta y \cdot \frac{5y^4}{\theta^5}\,dy = \frac{5\theta}{6}, \qquad
E(Y_{\max}^2) = \int_0^\theta y^2 \cdot \frac{5y^4}{\theta^5}\,dy = \frac{5\theta^2}{7},
\]
so that
\[
\operatorname{Var}(Y_{\max}) = E(Y_{\max}^2) - E(Y_{\max})^2 = \left(\frac{5}{7} - \frac{25}{36}\right)\theta^2 = \frac{5}{252}\,\theta^2.
\]
Hence
\[
\operatorname{Var}(\hat\theta_1) = \frac{36}{25}\operatorname{Var}(Y_{\max}) = \frac{36}{25}\left(\frac{5}{7} - \frac{25}{36}\right)\theta^2 = \frac{1}{35}\,\theta^2.
\]
By the symmetry of the pdf of $Y$ about its mean, we must have $\operatorname{Var}(Y_{\min}) = \operatorname{Var}(Y_{\max})$. Hence
\[
\operatorname{Var}(\hat\theta_2) = 36\operatorname{Var}(Y_{\min}) = 36\left(\frac{5}{7} - \frac{25}{36}\right)\theta^2 = \frac{5}{7}\,\theta^2 > \operatorname{Var}(\hat\theta_1).
\]
Hence $\hat\theta_1$ is the more efficient estimator for the parameter $\theta$.
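The figures above are easy to double-check numerically. The following Python sketch (not part of the assignment) recomputes $P(-1.06 < Z < 0.96)$ and the binomial probability $P(X \ge 4)$, and then estimates $\operatorname{Var}(\hat\theta_1)$ and $\operatorname{Var}(\hat\theta_2)$ by Monte Carlo; the sample size $n = 5$ and the estimators $\hat\theta_1 = \frac{6}{5}Y_{\max}$, $\hat\theta_2 = 6Y_{\min}$ are taken from the solutions, while the value $\theta = 1$ and the number of replications are arbitrary choices.

# Numerical check of Solutions 5.3.7 and 5.4.18 (standard library only).
import math
import random

# --- Solution 5.3.7: P(-1.06 < Z < 0.96), then the binomial follow-up ---
def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p = phi(0.96) - phi(-1.06)
print(round(p, 4))                          # about 0.6869

p_at_least_4 = (math.comb(5, 4) * p**4 * (1 - p)
                + math.comb(5, 5) * p**5)
print(round(p_at_least_4, 4))               # about 0.5014

# --- Solution 5.4.18: Monte Carlo variances of the two estimators ---
theta = 1.0                                 # arbitrary; variances scale with theta^2
random.seed(0)
t1, t2 = [], []
for _ in range(200_000):
    sample = [random.uniform(0, theta) for _ in range(5)]
    t1.append(6 / 5 * max(sample))          # theta1_hat = (6/5) * Y_max
    t2.append(6 * min(sample))              # theta2_hat = 6 * Y_min

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(round(var(t1), 4), 1 / 35)            # about 0.0286, matching theta^2/35
print(round(var(t2), 4), 5 / 7)             # about 0.714, matching 5*theta^2/7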
Solution (5.5.). To show that the estimator $\hat\lambda = \frac{1}{n}\sum X_i$ is EFFICIENT, we need to prove that it is unbiased and that its variance agrees with the Cramér-Rao lower bound. It is clearly an unbiased estimator for $\lambda$:
\[
E(\hat\lambda) = \frac{1}{n}\sum E(X_i) = \lambda.
\]
Its variance is
\[
\operatorname{Var}(\hat\lambda) = \frac{1}{n^2}\sum \operatorname{Var}(X_i) = \frac{\lambda}{n}.
\]
For the Cramér-Rao lower bound, we have
\[
\frac{\partial}{\partial\lambda}\ln f_X(k;\lambda) = \frac{\partial}{\partial\lambda}\bigl(-\lambda + k\ln\lambda - \ln k!\bigr) = \frac{k}{\lambda} - 1,
\]
\[
E\!\left[\left(\frac{\partial \ln f_X(X;\lambda)}{\partial\lambda}\right)^{\!2}\right]
= \frac{1}{\lambda^2}\,E\bigl[(X-\lambda)^2\bigr] = \frac{1}{\lambda^2}\operatorname{Var}(X) = \frac{1}{\lambda},
\]
\[
\text{Cramér-Rao:}\qquad
\left\{ n\,E\!\left[\left(\frac{\partial \ln f_X(X;\lambda)}{\partial\lambda}\right)^{\!2}\right]\right\}^{-1}
= \left\{\frac{n}{\lambda}\right\}^{-1} = \frac{\lambda}{n}.
\]
As this bound agrees with the variance of $\hat\lambda$, we conclude that $\hat\lambda$ is an efficient estimator.

Solution (5.6.6). To see if $W = \prod_{i=1}^{n} Y_i$ is a sufficient statistic, we apply the factorization theorem to show that the likelihood function may be written as $L(\theta) = g(w;\theta)\,h(y_1,\dots,y_n)$, where $h$ does not depend on $\theta$ at all. Note that $g$ is only allowed to depend on $\theta$ and on the estimator $W$. The likelihood function is
\[
L(\theta) = \prod_{i=1}^{n} f_Y(y_i;\theta) = \prod_{i=1}^{n} \theta y_i^{\theta-1}
= \theta^n \Bigl(\prod_i y_i\Bigr)^{\theta-1}
= \bigl[\theta^n w^{\theta}\bigr]\prod_i y_i^{-1}
= g(w;\theta)\,h(y_1,\dots,y_n), \qquad \text{with } w = \prod_i y_i,
\]
where $g = \theta^n w^{\theta}$ and $h = \prod_i y_i^{-1}$ have the desired properties. This shows that $W$ is a sufficient estimator.

Now let's compute the maximum likelihood estimator. In this case, the pdf is supported on the interval $[0,1]$, which DOES NOT depend on the parameter $\theta$. So calculus will most likely find the estimator we want. We already computed the likelihood function above, and
\[
\ln L(\theta) = n\ln\theta + \theta\ln w - \ln w, \qquad
\frac{d}{d\theta}\ln L(\theta) = \frac{n}{\theta} + \ln w.
\]
Critical point: $\theta = -\dfrac{n}{\ln w}$. The second derivative satisfies $\dfrac{d^2}{d\theta^2}\ln L(\theta) = -\dfrac{n}{\theta^2} < 0$, so that any critical point is automatically a local max. Hence
\[
\hat\theta = -\frac{n}{\ln W}
\]
is the maximum likelihood estimator, and it is a function of the sufficient estimator $W$.
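Both calculations above lend themselves to a quick numerical sanity check. The Python sketch below (not from the text) estimates $\operatorname{Var}(\hat\lambda)$ by simulation and compares it with the Cramér-Rao bound $\lambda/n$, and then evaluates the MLE $\hat\theta = -n/\ln W$ on data drawn from $f_Y(y;\theta) = \theta y^{\theta-1}$ via the inverse-CDF recipe $Y = U^{1/\theta}$; the parameter values $\lambda = 3$, $n = 40$, $\theta = 2.5$ and the sample sizes are arbitrary choices.

# Sanity checks for Solutions 5.5 and 5.6.6 (standard library only).
import math
import random

random.seed(1)

# --- Poisson: simulated Var(lambda_hat) should be close to lambda / n ---
lam, n, reps = 3.0, 40, 20_000

def poisson(lam):
    """Knuth's multiplication method; fine for small lambda."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam_hats = [sum(poisson(lam) for _ in range(n)) / n for _ in range(reps)]
mean = sum(lam_hats) / reps
var = sum((x - mean) ** 2 for x in lam_hats) / reps
print(round(var, 4), round(lam / n, 4))     # both close to 0.075

# --- f(y; theta) = theta * y^(theta - 1) on [0, 1]: Y = U^(1/theta) ---
theta, m = 2.5, 1000
ys = [random.random() ** (1 / theta) for _ in range(m)]
theta_mle = -m / sum(math.log(y) for y in ys)
print(round(theta_mle, 3))                  # close to the true value 2.5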
Solution (5.6.7). (a) As in the last problem, we will show that $\hat\theta = Y_{\min}$ is sufficient for $\theta$ by factoring its likelihood function. Since the pdf is nonzero on the interval $[\theta,\infty)$, which depends on $\theta$, an indicator function will help us to understand the finer properties of the likelihood function. To that end, observe that
\[
\prod_{i=1}^{n} I_{[\theta,\infty)}(y_i) = I_{[\theta,\infty)}(y_{\min}),
\]
since $y_1,\dots,y_n \ge \theta$ if and only if the smallest of the $y_i$'s is at least $\theta$. Hence
\[
L(\theta) = \prod_{i=1}^{n} f_Y(y_i;\theta) = \prod_{i=1}^{n} e^{-(y_i-\theta)} I_{[\theta,\infty)}(y_i)
= I_{[\theta,\infty)}(y_{\min}) \exp\Bigl[n\theta - \sum_i y_i\Bigr]
= \bigl[e^{n\theta}\, I_{[\theta,\infty)}(y_{\min})\bigr]\exp\Bigl(-\sum_i y_i\Bigr)
= g(y_{\min};\theta)\,h(y_1,\dots,y_n),
\]
where $g = e^{n\theta}\, I_{[\theta,\infty)}(y_{\min})$ and $h = \exp\bigl(-\sum_i y_i\bigr)$ have the desired properties. This factorization shows that $Y_{\min}$ is a sufficient estimator.

(b) To show that $Y_{\max}$ is NOT a sufficient estimator, we cannot use the factorization theorem directly. Indeed, we cannot present ALL possible factorizations of $L(\theta)$, so we cannot know if there is one lurking about that shows $Y_{\max}$ is sufficient. Instead, suppose that $Y_{\max}$ IS a sufficient estimator. Then the factorization theorem would say that $L(\theta) = g(y_{\max};\theta)\,h(y_1,\dots,y_n)$ with $h$ free of $\theta$. Since $L(\theta) > 0$ for $\theta \le y_{\min}$, the factor $h(y_1,\dots,y_n)$ is nonzero, so the set of $\theta$ for which $L(\theta) = 0$ would be $\{\theta : g(y_{\max};\theta) = 0\}$, determined by the value of $y_{\max}$ alone. But we already computed $L(\theta)$:
\[
L(\theta) = \bigl[e^{n\theta}\, I_{[\theta,\infty)}(y_{\min})\bigr]\exp\Bigl(-\sum_i y_i\Bigr),
\]
which vanishes if and only if $\theta > y_{\min}$. Samples with the same maximum can have different minima, so this zero set is not determined by $y_{\max}$ alone. So $Y_{\max}$ cannot be a sufficient estimator.
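The factorization argument can also be seen in a small numerical experiment. The Python sketch below (an illustration, not part of the solution) evaluates the likelihood $L(\theta) = e^{n\theta}\,I_{[\theta,\infty)}(y_{\min})\,e^{-\sum y_i}$ on a few hand-picked samples: samples sharing the same minimum give likelihood functions that are proportional in $\theta$, while samples sharing only the same maximum do not even have the same zero set. The sample values and the grid of $\theta$'s are arbitrary.

# Illustration of sufficiency of Y_min (and non-sufficiency of Y_max) for the
# shifted exponential likelihood computed in Solution 5.6.7.
import math

def likelihood(theta, ys):
    if theta > min(ys):
        return 0.0
    return math.exp(len(ys) * theta - sum(ys))

a = [1.0, 2.0, 5.0]          # min 1.0, max 5.0
b = [1.0, 4.0, 5.0]          # same minimum as a
c = [3.0, 4.5, 5.0]          # same maximum as a, different minimum

thetas = [0.5, 0.9, 2.0, 2.9]

print([likelihood(t, a) / likelihood(t, b) for t in thetas if likelihood(t, b) > 0])
# constant ratio wherever both are nonzero: consistent with Y_min being sufficient

print([(likelihood(t, a), likelihood(t, c)) for t in thetas])
# at theta = 2.0 and 2.9 the likelihood is zero for sample a but positive for
# sample c, even though a and c share the same maximum: Y_max cannot be sufficient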
Solution (5.7.). To show that $S_n^2 = \frac{1}{n}\sum_{i=1}^{n} Y_i^2$ is a consistent sequence of estimators for $\sigma^2 = \operatorname{Var}(Y)$, we fix $\varepsilon > 0$ and show that $P\bigl(|S_n^2 - \sigma^2| \ge \varepsilon\bigr) \to 0$ as $n \to \infty$. We will apply Chebyshev's inequality to show this. Set $W_n = S_n^2$. Then
\[
E(W_n) = E(Y^2) = E(Y^2) - E(Y)^2 = \sigma^2
\]
(the $Y_i$ in this problem have mean zero, so the second moment of $Y$ equals its variance), and
\[
\operatorname{Var}(W_n) = \operatorname{Var}\Bigl(\frac{1}{n}\sum_i Y_i^2\Bigr) = \frac{1}{n^2}\operatorname{Var}\Bigl(\sum_i Y_i^2\Bigr) = \frac{\operatorname{Var}(Y^2)}{n},
\]
where we have used the independence of the $Y_i$ to conclude that $\operatorname{Var}\bigl(\sum_i Y_i^2\bigr) = \sum_i \operatorname{Var}(Y_i^2)$. Since $Y$ is normally distributed, the variance of $Y^2$ is finite. Applying Chebyshev's inequality gives
\[
P\bigl(|S_n^2 - \sigma^2| \ge \varepsilon\bigr) = P\bigl(|W_n - E(W_n)| \ge \varepsilon\bigr)
\le \frac{\operatorname{Var}(W_n)}{\varepsilon^2} = \frac{\operatorname{Var}(Y^2)}{n\varepsilon^2} \to 0
\quad \text{as } n \to \infty.
\]
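A short simulation also illustrates the consistency statement. The Python sketch below (assuming, as in the solution, that the $Y_i$ are mean-zero normal) estimates $P(|S_n^2 - \sigma^2| \ge \varepsilon)$ for a few values of $n$; the choices $\sigma = 1$, $\varepsilon = 0.25$ and the number of repetitions are arbitrary.

# Empirical check of consistency: the fraction of runs with |S_n^2 - sigma^2| >= eps
# should shrink as n grows, roughly like Var(Y^2)/(n*eps^2) = 2*sigma^4/(n*eps^2).
import random

random.seed(2)
sigma, eps, reps = 1.0, 0.25, 5000

for n in [10, 100, 1000]:
    bad = 0
    for _ in range(reps):
        s2 = sum(random.gauss(0, sigma) ** 2 for _ in range(n)) / n
        if abs(s2 - sigma ** 2) >= eps:
            bad += 1
    print(n, bad / reps)     # decreases toward 0 as n increases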