An Introduction to Signal Detection and Estimation - Second Edition
Chapter IV: Selected Solutions
H. V. Poor, Princeton University
April 26, 2005

Exercise 1:

a) $$\hat\theta_{MAP}(y) = \arg\max_{0\le\theta\le 1} \log w(\theta\,|\,y) = \arg\max_{0\le\theta\le 1}\,\theta y = 1, \qquad y > 0.$$

b) $$\hat\theta_{MMSE}(y) = \frac{\int_0^1 \theta e^{\theta y}\,d\theta}{\int_0^1 e^{\theta y}\,d\theta} = \frac{e^y}{e^y - 1} - \frac{1}{y} = \frac{y e^y - e^y + 1}{y(e^y - 1)}.$$

Exercise 3:

We have
$$w(\theta\,|\,y) = \frac{\theta^y e^{-\theta}\,e^{-\alpha\theta}}{\int_0^\infty \theta^y e^{-\theta}\,e^{-\alpha\theta}\,d\theta} = \frac{\theta^y e^{-(\alpha+1)\theta}(1+\alpha)^{y+1}}{y!}, \qquad \theta > 0.$$

So:
$$\hat\theta_{MMSE}(y) = \frac{(1+\alpha)^{y+1}}{y!}\int_0^\infty \theta^{y+1} e^{-(\alpha+1)\theta}\,d\theta = \frac{y+1}{\alpha+1};$$

$$\hat\theta_{MAP}(y) = \arg\max_{\theta > 0}\,\bigl[\,y\log\theta - (\alpha+1)\theta\,\bigr] = \frac{y}{\alpha+1};$$

and $\hat\theta_{ABS}(y)$ solves
$$\int_0^{\hat\theta_{ABS}(y)} w(\theta\,|\,y)\,d\theta = \frac12,$$
which reduces to
$$\sum_{n=0}^{y} \frac{\bigl[(\alpha+1)\hat\theta_{ABS}(y)\bigr]^n}{n!} = \frac12\,e^{(\alpha+1)\hat\theta_{ABS}(y)}.$$

Note that the series on the left-hand side is the truncated power series expansion of $\exp\{(\alpha+1)\hat\theta_{ABS}(y)\}$, so that $(\alpha+1)\hat\theta_{ABS}(y)$ is the value at which this truncated series equals half of its untruncated value, when the truncation point is $y$.
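The closed forms above can be sanity-checked numerically. The sketch below is illustrative and not part of the original solutions; the values of $y$ and $\alpha$ are arbitrary test choices. It integrates the posterior $\theta^y e^{-(\alpha+1)\theta}(1+\alpha)^{y+1}/y!$ directly and compares against the stated MMSE, MAP, and ABS answers:

```python
import math

# Illustrative numerical check (not from the text); y and alpha are arbitrary.
y, alpha = 4, 0.5

# Posterior density w(theta | y) = theta^y e^{-(alpha+1)theta} (1+alpha)^{y+1} / y!
def posterior(theta):
    return (theta**y * math.exp(-(alpha + 1) * theta)
            * (1 + alpha)**(y + 1) / math.factorial(y))

# Crude Riemann quadrature for the posterior mean (the MMSE estimate).
h = 0.001
grid = [h * i for i in range(40000)]
mean = sum(t * posterior(t) * h for t in grid)
assert abs(mean - (y + 1) / (alpha + 1)) < 1e-2      # MMSE = (y+1)/(alpha+1)

# The MAP estimate maximizes y*log(theta) - (alpha+1)*theta; grid-search check.
best = max(grid[1:], key=lambda t: y * math.log(t) - (alpha + 1) * t)
assert abs(best - y / (alpha + 1)) < 1e-2            # MAP = y/(alpha+1)

# ABS estimate = posterior median; verify the truncated-series condition
# sum_{n=0}^{y} x^n/n! = e^x/2 at x = (alpha+1)*theta_ABS.
cdf, t_abs = 0.0, None
for t in grid:
    cdf += posterior(t) * h
    if cdf >= 0.5:
        t_abs = t
        break
x = (alpha + 1) * t_abs
lhs = sum(x**n / math.factorial(n) for n in range(y + 1))
assert abs(lhs - math.exp(x) / 2) < 0.05 * math.exp(x)
print("Exercise 3 checks passed")
```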
Exercise 7:

We have
$$p_\theta(y) = \begin{cases} e^{-y+\theta} & \text{if } y \ge \theta \\ 0 & \text{if } y < \theta \end{cases}
\qquad \text{and} \qquad
w(\theta) = \begin{cases} 1 & \text{if } \theta \in (0,1) \\ 0 & \text{if } \theta \notin (0,1). \end{cases}$$

Thus
$$w(\theta\,|\,y) = \frac{e^{-y+\theta}}{\int_0^{\min\{1,y\}} e^{-y+\theta}\,d\theta} = \frac{e^\theta}{e^{\min\{1,y\}}-1} \qquad \text{for } 0 \le \theta \le \min\{1,y\},$$
and $w(\theta\,|\,y) = 0$ otherwise. This implies:
$$\hat\theta_{MMSE}(y) = \frac{\int_0^{\min\{1,y\}} \theta e^{\theta}\,d\theta}{e^{\min\{1,y\}}-1} = \frac{[\min\{1,y\}-1]\,e^{\min\{1,y\}}+1}{e^{\min\{1,y\}}-1};$$

$$\hat\theta_{MAP}(y) = \arg\max_{0\le\theta\le\min\{1,y\}} e^\theta = \min\{1,y\};$$

and $\hat\theta_{ABS}(y)$ solves
$$\int_0^{\hat\theta_{ABS}(y)} e^\theta\,d\theta = \frac{e^{\min\{1,y\}}-1}{2},$$
which has the solution
$$\hat\theta_{ABS}(y) = \log\left[\frac{e^{\min\{1,y\}}+1}{2}\right].$$

Exercise 8:

a) We have
$$w(\theta\,|\,y) = \frac{e^{-y+\theta}\,e^{-\theta}}{\int_0^y e^{-y+\theta}\,e^{-\theta}\,d\theta} = \frac1y, \qquad 0 \le \theta \le y,$$
and $w(\theta\,|\,y) = 0$ otherwise. That is, given $Y = y$, $\Theta$ is uniformly distributed on the interval $[0,y]$. From this we have immediately that
$$\hat\theta_{MMSE}(y) = \hat\theta_{ABS}(y) = \frac y2.$$

b) We have $MMSE = E\{\mathrm{Var}(\Theta\,|\,Y)\}$. Since $w(\theta\,|\,y)$ is uniform on $[0,y]$, $\mathrm{Var}(\Theta\,|\,Y) = Y^2/12$. We have
$$p(y) = \int_0^y e^{-y}\,d\theta = y e^{-y}, \qquad y > 0,$$
from which
$$MMSE = \frac{E\{Y^2\}}{12} = \frac1{12}\int_0^\infty y^3 e^{-y}\,dy = \frac{3!}{12} = \frac12.$$
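The Exercise 8 answers lend themselves to a quick Monte Carlo check. The sketch below is illustrative (the seed and trial count are arbitrary): it draws $\Theta$ from the $\mathrm{Exp}(1)$ prior, forms $Y = \Theta + W$ with $W \sim \mathrm{Exp}(1)$ (which gives $p_\theta(y) = e^{-(y-\theta)}$, $y \ge \theta$), and confirms that the estimate $Y/2$ achieves a mean-square error near $1/2$:

```python
import math
import random

# Illustrative Monte Carlo check of Exercise 8 (not from the text).
random.seed(2)
trials = 200000
sq_err = 0.0
for _ in range(trials):
    theta = -math.log(1.0 - random.random())     # Exp(1) prior draw
    y = theta - math.log(1.0 - random.random())  # add Exp(1) noise: Y = Theta + W
    sq_err += (theta - y / 2) ** 2               # error of the estimate Y/2
mmse = sq_err / trials
assert abs(mmse - 0.5) < 0.02                    # MMSE = 1/2 from part b)
print("Exercise 8 check passed")
```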
c) In this case,
$$p_\theta(y) = e^{-\sum_{k=1}^n y_k + n\theta}, \qquad 0 < \theta < \min\{y_1,\ldots,y_n\},$$
from which
$$\hat\theta_{MAP}(y) = \arg\max_{0<\theta<\min\{y_1,\ldots,y_n\}} \exp\left\{-\sum_{k=1}^n y_k + (n-1)\theta\right\} = \min\{y_1,\ldots,y_n\}.$$

Exercise 13:

a) We have
$$p_\theta(y) = \theta^{T(y)}(1-\theta)^{n-T(y)}, \qquad \text{where } T(y) = \sum_{k=1}^n y_k.$$
Rewriting this as
$$p_\theta(y) = C(\phi)\,e^{\phi T(y)}, \qquad \text{with } \phi = \log(\theta/(1-\theta)) \text{ and } C(\phi) = (1+e^\phi)^{-n},$$
we see from the Completeness Theorem for Exponential Families that $T(y)$ is a complete sufficient statistic for $\phi$, and hence for $\theta$. (Assuming $\theta$ ranges throughout $(0,1)$.) Thus, any unbiased function of $T$ is an MVUE for $\theta$. Since $E_\theta\{T(Y)\} = n\theta$, such an estimate is given by
$$\hat\theta_{MV}(y) = \frac{T(y)}{n} = \frac1n\sum_{k=1}^n y_k.$$

b) $$\hat\theta_{ML}(y) = \arg\max_{0<\theta<1} \theta^{T(y)}(1-\theta)^{n-T(y)} = T(y)/n = \hat\theta_{MV}(y).$$
Since the MLE equals the MVUE, we have immediately that $E_\theta\{\hat\theta_{ML}(Y)\} = \theta$. The variance of $\hat\theta_{ML}(Y)$ is easily computed to be $\theta(1-\theta)/n$.

c) We have
$$\frac{\partial}{\partial\theta}\log p_\theta(Y) = \frac{T(Y)}{\theta} - \frac{n-T(Y)}{1-\theta},$$
from which
$$I_\theta = \frac n\theta + \frac n{1-\theta} = \frac n{\theta(1-\theta)}.$$
The CRLB is thus
$$CRLB = \frac1{I_\theta} = \frac{\theta(1-\theta)}n = \mathrm{Var}_\theta\bigl(\hat\theta_{ML}(Y)\bigr).$$
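The Exercise 13 conclusion, that $T(y)/n$ is unbiased with variance attaining the CRLB $\theta(1-\theta)/n$, can be checked by simulation. The sketch below is illustrative, with $\theta$, $n$, and the trial count chosen arbitrarily:

```python
import random
import statistics

# Illustrative Monte Carlo check of Exercise 13 (not from the text).
random.seed(0)
theta, n, trials = 0.3, 50, 20000
estimates = []
for _ in range(trials):
    t = sum(1 if random.random() < theta else 0 for _ in range(n))
    estimates.append(t / n)                # theta_hat = T(y)/n

mean_hat = statistics.fmean(estimates)
var_hat = statistics.pvariance(estimates)
crlb = theta * (1 - theta) / n             # = 0.0042 here
assert abs(mean_hat - theta) < 0.005       # unbiased: E{theta_hat} = theta
assert abs(var_hat - crlb) < 0.0005        # efficient: variance attains the CRLB
print("Exercise 13 check passed")
```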
Exercise 15:

We have
$$p_\theta(y) = \frac{e^{-\theta}\theta^y}{y!}, \qquad y = 0, 1, 2, \ldots$$
Thus
$$\frac{\partial}{\partial\theta}\log p_\theta(y) = \frac{\partial}{\partial\theta}\bigl(-\theta + y\log\theta\bigr) = -1 + \frac y\theta
\qquad \text{and} \qquad
\frac{\partial^2}{\partial\theta^2}\log p_\theta(y) = -\frac y{\theta^2} < 0.$$
So $\hat\theta_{ML}(y) = y$. Since $Y$ is Poisson, we have $E_\theta\{\hat\theta_{ML}(Y)\} = \mathrm{Var}_\theta(\hat\theta_{ML}(Y)) = \theta$. So $\hat\theta_{ML}$ is unbiased. Fisher's information is given by
$$I_\theta = -E_\theta\left\{\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \frac{E_\theta\{Y\}}{\theta^2} = \frac1\theta.$$
So the CRLB is $\theta$, which equals $\mathrm{Var}_\theta(\hat\theta_{ML}(Y))$. (Hence the MLE is an MVUE in this case.)

Exercise 20:

a) Note that $Y_1, Y_2, \ldots, Y_n$ are independent, with $Y_k$ having the $\mathcal N(0,\, 1+\theta s_k^2)$ distribution. Thus
$$\frac{\partial}{\partial\theta}\log p_\theta(y) = -\frac12\sum_{k=1}^n\frac{\partial}{\partial\theta}\left[\log(1+\theta s_k^2) + \frac{y_k^2}{1+\theta s_k^2}\right] = -\frac12\sum_{k=1}^n\left[\frac{s_k^2}{1+\theta s_k^2} - \frac{y_k^2 s_k^2}{(1+\theta s_k^2)^2}\right],$$
from which the likelihood equation becomes
$$\sum_{k=1}^n \frac{s_k^2\left(y_k^2 - 1 - \hat\theta_{ML}(y)\,s_k^2\right)}{\left(1+\hat\theta_{ML}(y)\,s_k^2\right)^2} = 0.$$

b) $$I_\theta = -E_\theta\left\{\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \sum_{k=1}^n\frac{s_k^4\,E_\theta\{Y_k^2\}}{(1+\theta s_k^2)^3} - \frac12\sum_{k=1}^n\frac{s_k^4}{(1+\theta s_k^2)^2} = \frac12\sum_{k=1}^n\frac{s_k^4}{(1+\theta s_k^2)^2}.$$
So the CRLB is
$$\frac1{I_\theta} = \frac{2}{\sum_{k=1}^n s_k^4/(1+\theta s_k^2)^2}.$$

c) With $s_k \equiv 1$, the likelihood equation yields the solution
$$\hat\theta_{ML}(y) = \left(\frac1n\sum_{k=1}^n y_k^2\right) - 1,$$
which is seen to yield a maximum of the likelihood function.

d) We have
$$E_\theta\bigl\{\hat\theta_{ML}(Y)\bigr\} = \frac1n\sum_{k=1}^n E_\theta\{Y_k^2\} - 1 = \theta.$$
Similarly, since the $Y_k$'s are independent,
$$\mathrm{Var}_\theta\bigl(\hat\theta_{ML}(Y)\bigr) = \frac1{n^2}\sum_{k=1}^n \mathrm{Var}_\theta(Y_k^2) = \frac1{n^2}\,n\,2(1+\theta)^2 = \frac{2(1+\theta)^2}n.$$
Thus the bias of the MLE is 0, and the variance of the MLE equals the CRLB. (Hence the MLE is an MVUE in this case.)

Exercise 22:

a) Note that
$$p_\theta(y) = \frac1{\theta^n}\exp\left\{\frac1\theta\sum_{k=1}^n\log F(y_k)\right\}\prod_{k=1}^n \frac{f(y_k)}{F(y_k)},$$
which implies that the statistic $\sum_{k=1}^n \log F(y_k)$ is a complete sufficient statistic for $\theta$, via the Completeness Theorem for Exponential Families. We have
$$E_\theta\left\{\sum_{k=1}^n\log F(Y_k)\right\} = n\,E_\theta\{\log F(Y_1)\} = \frac n\theta\int_{-\infty}^\infty \log F(y_1)\,[F(y_1)]^{(1-\theta)/\theta} f(y_1)\,dy_1.$$
Noting that $d\log F(y_1) = \frac{f(y_1)}{F(y_1)}\,dy_1$, and that $[F(y_1)]^{1/\theta} = \exp\{\log F(y_1)/\theta\}$, we can make the substitution $x = -\log F(y_1)$ to yield
$$E_\theta\left\{\sum_{k=1}^n\log F(Y_k)\right\} = -\frac n\theta\int_0^\infty x\,e^{-x/\theta}\,dx = -n\theta.$$
Thus we have $E_\theta\{\hat\theta_{MV}(Y)\} = \theta$, which implies that $\hat\theta_{MV}$ is an MVUE, since it is an unbiased function of a complete sufficient statistic.
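The unbiasedness result just shown, that $-\frac1n\sum_k \log F(Y_k)$ has mean $\theta$, can be checked by simulation for a concrete choice of $F$. The sketch below is illustrative and assumes $f$ uniform on $[0,1]$ (so $F(y)=y$); inverse-transform sampling then gives $Y = U^\theta$, since $P(Y \le y) = y^{1/\theta}$ under the stated model. The values of $\theta$, $n$, and the trial count are arbitrary:

```python
import math
import random

# Illustrative check (not from the text): with F(y) = y on [0,1], the model
# density is (1/theta) y^{(1-theta)/theta}, so Y = U^theta with U uniform.
random.seed(1)
theta, n, trials = 0.8, 25, 4000
est = []
for _ in range(trials):
    ys = [(1.0 - random.random()) ** theta for _ in range(n)]  # U in (0,1]
    est.append(-sum(math.log(y) for y in ys) / n)  # -(1/n) sum log F(Y_k)
mean_est = sum(est) / len(est)
assert abs(mean_est - theta) < 0.02                # E{estimate} = theta
print("Unbiasedness check passed")
```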
b) [Correction: Note that for the given prior, the prior mean should be $E\{\Theta\} = c/(m-1)$.]

It is straightforward to see that $w(\theta\,|\,y)$ is of the same form as the prior, with $c$ replaced by $c - \sum_{k=1}^n \log F(y_k)$ and $m$ replaced by $n+m$. Thus, by inspection,
$$E\{\Theta\,|\,Y\} = \frac{c - \sum_{k=1}^n \log F(Y_k)}{n+m-1},$$
which was to be shown. [Again, the necessary correction has been made.]

c) In this example, the prior and posterior distributions have the same form; the only change is that the parameters of that distribution are updated as new data is observed. A prior with this property is said to be a reproducing prior. The prior parameters $c$ and $m$ can be thought of as coming from an earlier sample of size $m$. As $n$ becomes large compared to $m$, the importance of these prior parameters in the estimate diminishes. Note that $-\frac1n\sum_{k=1}^n \log F(Y_k)$ behaves like $-E\{\log F(Y_1)\}$ for large $n$. Thus, with $n \gg m$, the estimate is approximately given by the MVUE of Part a). Alternatively, with $n \ll m$, the estimate is approximately the prior mean $c/(m-1)$. Between these two extremes, there is a balance between prior and observed information.

Exercise 23:

a) The log-likelihood function is
$$\log p(y\,|\,A,\phi) = -\frac1{2\sigma^2}\sum_{k=1}^n\left[y_k - A\sin\left(\frac{2\pi k}n + \phi\right)\right]^2 - \frac n2\log(2\pi\sigma^2).$$
The likelihood equations are thus:
$$\sum_{k=1}^n\left[y_k - \hat A\sin\left(\frac{2\pi k}n + \hat\phi\right)\right]\sin\left(\frac{2\pi k}n + \hat\phi\right) = 0$$
and
$$\hat A\sum_{k=1}^n\left[y_k - \hat A\sin\left(\frac{2\pi k}n + \hat\phi\right)\right]\cos\left(\frac{2\pi k}n + \hat\phi\right) = 0.$$
These equations are solved by the estimates:
$$\hat A_{ML} = \sqrt{y_c^2 + y_s^2}, \qquad \hat\phi_{ML} = \tan^{-1}\left(\frac{y_c}{y_s}\right),$$
where
$$y_c = \frac{\sum_{k=1}^n y_k\cos(2\pi k/n)}{\sum_{k=1}^n \cos^2(2\pi k/n)}$$
and
$$y_s = \frac{\sum_{k=1}^n y_k\sin(2\pi k/n)}{\sum_{k=1}^n \sin^2(2\pi k/n)}.$$

b) Appending the prior to the above problem yields the MAP estimates:
$$\hat\phi_{MAP} = \hat\phi_{ML}$$
and
$$\hat A_{MAP} = \frac{\hat A_{ML} + \sqrt{\hat A_{ML}^2 + \frac{8(1+\alpha)\sigma^2}{n}}}{2(1+\alpha)},$$
where $\alpha \triangleq 2\sigma^2/(n\beta^2)$.

c) Note that when $\beta \to \infty$ (the prior "diffuses"), the MAP estimate of $A$ does not approach the MLE of $A$. However, as $n \to \infty$, the MAP estimate does approach the MLE.
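The structure of the amplitude and phase estimators can be sanity-checked on noiseless data. The sketch below is illustrative, assuming the signal model $y_k = A\sin(2\pi k/n + \phi)$ with arbitrary test values of $A$, $\phi$, and $n$; in that case $y_c$ and $y_s$ recover $A\sin\phi$ and $A\cos\phi$ exactly, so the ML formulas return $A$ and $\phi$:

```python
import math

# Illustrative noiseless check of the amplitude/phase estimators (not from
# the text). A, phi (in (-pi/2, pi/2)), and n are arbitrary test values.
A, phi, n = 1.7, 0.6, 200
k_rng = range(1, n + 1)
y = [A * math.sin(2 * math.pi * k / n + phi) for k in k_rng]

# y_c and y_s as ratios of correlation sums to energy sums.
y_c = (sum(yk * math.cos(2 * math.pi * k / n) for k, yk in zip(k_rng, y))
       / sum(math.cos(2 * math.pi * k / n) ** 2 for k in k_rng))
y_s = (sum(yk * math.sin(2 * math.pi * k / n) for k, yk in zip(k_rng, y))
       / sum(math.sin(2 * math.pi * k / n) ** 2 for k in k_rng))

A_ml = math.sqrt(y_c ** 2 + y_s ** 2)   # amplitude estimate
phi_ml = math.atan(y_c / y_s)           # phase estimate
assert abs(A_ml - A) < 1e-9
assert abs(phi_ml - phi) < 1e-9
print("Amplitude/phase check passed")
```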