An Introduction to Signal Detection and Estimation - Second Edition, Chapter IV: Selected Solutions


H. V. Poor, Princeton University, April 26, 2005

Exercise 1:

a)
$$\hat{\theta}_{\rm MAP}(y) = \arg\max_{1 \le \theta \le e}\left[\log\theta - \theta y - \log\theta\right] = \arg\max_{1 \le \theta \le e}\left[-\theta y\right] = 1.$$

b)
$$\hat{\theta}_{\rm MMSE}(y) = \frac{\int_1^e \theta\, e^{-\theta y}\,d\theta}{\int_1^e e^{-\theta y}\,d\theta} = \frac{1}{y} + \frac{e^{-y} - e^{1-ey}}{e^{-y} - e^{-ey}}.$$

Exercise 3:

The posterior density of $\Theta$ given $Y = y$ is
$$w(\theta|y) = \frac{\theta^y e^{-\theta}\, e^{-\alpha\theta}}{\int_0^\infty \theta^y e^{-\theta}\, e^{-\alpha\theta}\,d\theta} = \frac{\theta^y e^{-(\alpha+1)\theta}\,(1+\alpha)^{y+1}}{y!}, \qquad \theta > 0.$$

So:
$$\hat{\theta}_{\rm MMSE}(y) = \frac{(1+\alpha)^{y+1}}{y!}\int_0^\infty \theta^{y+1} e^{-(\alpha+1)\theta}\,d\theta = \frac{y+1}{\alpha+1};$$
$$\hat{\theta}_{\rm MAP}(y) = \arg\max_{\theta > 0}\left[y\log\theta - (\alpha+1)\theta\right] = \frac{y}{\alpha+1};$$
and $\hat{\theta}_{\rm ABS}(y)$ solves
$$\int_0^{\hat{\theta}_{\rm ABS}(y)} w(\theta|y)\,d\theta = \frac{1}{2},$$
which reduces to
$$\sum_{k=0}^{y} \frac{\left[(\alpha+1)\hat{\theta}_{\rm ABS}(y)\right]^k}{k!} = \frac{1}{2}\,e^{(\alpha+1)\hat{\theta}_{\rm ABS}(y)}.$$
Note that the series on the left-hand side is the truncated power series expansion of $\exp\{(\alpha+1)\hat{\theta}_{\rm ABS}(y)\}$, so $\hat{\theta}_{\rm ABS}(y)$ is the value at which this truncated series equals half of its untruncated value, when the truncation point is $y$.
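One way to sanity-check the median characterization above: under the posterior as written, $\Theta$ given $Y=y$ is gamma with shape $y+1$ and rate $\alpha+1$, so its median should satisfy the truncated-series identity. A minimal numerical sketch (the values $y = 4$ and $\alpha = 0.5$ are arbitrary choices, not from the text):

```python
# Check that the posterior median theta_ABS of a Gamma(y+1, rate alpha+1)
# density satisfies: sum_{k=0}^{y} z^k/k! = (1/2) e^z with z = (alpha+1)*theta_ABS.
from math import exp, factorial
from scipy.stats import gamma

y, alpha = 4, 0.5
theta_abs = gamma.ppf(0.5, a=y + 1, scale=1.0 / (alpha + 1))  # posterior median
z = (alpha + 1) * theta_abs
truncated = sum(z**k / factorial(k) for k in range(y + 1))
print(theta_abs, truncated, exp(z) / 2)  # the last two numbers agree
```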

Exercise 7:

We have
$$p_\theta(y) = \begin{cases} e^{-y+\theta}, & y \ge \theta \\ 0, & y < \theta \end{cases}
\qquad\text{and}\qquad
w(\theta) = \begin{cases} 1, & \theta \in (0,1) \\ 0, & \theta \notin (0,1). \end{cases}$$
Thus
$$w(\theta|y) = \frac{e^{-y+\theta}}{\int_0^{\min\{1,y\}} e^{-y+\theta}\,d\theta} = \frac{e^{\theta}}{e^{\min\{1,y\}} - 1} \quad\text{for } 0 \le \theta \le \min\{1,y\},$$
and $w(\theta|y) = 0$ otherwise. This implies:
$$\hat{\theta}_{\rm MMSE}(y) = \frac{\int_0^{\min\{1,y\}} \theta e^{\theta}\,d\theta}{e^{\min\{1,y\}} - 1} = \frac{\left[\min\{1,y\} - 1\right] e^{\min\{1,y\}} + 1}{e^{\min\{1,y\}} - 1};$$
$$\hat{\theta}_{\rm MAP}(y) = \arg\max_{0 \le \theta \le \min\{1,y\}} e^{\theta} = \min\{1,y\};$$
and $\hat{\theta}_{\rm ABS}(y)$ solves
$$\frac{\int_0^{\hat{\theta}_{\rm ABS}(y)} e^{\theta}\,d\theta}{e^{\min\{1,y\}} - 1} = \frac{1}{2},$$
which has the solution
$$\hat{\theta}_{\rm ABS}(y) = \log\left(\frac{e^{\min\{1,y\}} + 1}{2}\right).$$

Exercise 8:

a) We have
$$w(\theta|y) = \frac{e^{-y+\theta}\,e^{-\theta}}{\int_0^y e^{-y+\theta}\,e^{-\theta}\,d\theta} = \frac{1}{y} \quad\text{for } 0 \le \theta \le y,$$
and $w(\theta|y) = 0$ otherwise. That is, given $Y = y$, $\Theta$ is uniformly distributed on the interval $[0, y]$. From this we have immediately that
$$\hat{\theta}_{\rm MMSE}(y) = \hat{\theta}_{\rm ABS}(y) = \frac{y}{2}.$$

b) We have ${\rm MMSE} = E\{{\rm Var}(\Theta|Y)\}$. Since $w(\theta|y)$ is uniform on $[0,y]$, ${\rm Var}(\Theta|Y) = Y^2/12$. We have
$$p(y) = e^{-y}\int_0^y d\theta = y\,e^{-y}, \qquad y > 0,$$
from which
$${\rm MMSE} = \frac{E\{Y^2\}}{12} = \frac{1}{12}\int_0^\infty y^3 e^{-y}\,dy = \frac{3!}{12} = \frac{1}{2}.$$
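The value ${\rm MMSE} = 1/2$ in Exercise 8(b) can be confirmed by simulation. The sketch below assumes the model implied by the densities above, namely $\Theta \sim {\rm Exp}(1)$ and $Y = \Theta + N$ with $N \sim {\rm Exp}(1)$ independent, which gives the marginal $p(y) = y e^{-y}$:

```python
# Monte Carlo check that the conditional-mean estimate Y/2 attains MMSE = 1/2.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
theta = rng.exponential(1.0, n)       # prior draws, Theta ~ Exp(1)
y = theta + rng.exponential(1.0, n)   # observations; marginally Gamma(2,1)
print(np.mean((theta - y / 2) ** 2))  # ~0.5
```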

c) In this case,
$$p_\theta(y) = \exp\left\{-\sum_{k=1}^n y_k + n\theta\right\}, \qquad 0 < \theta < \min\{y_1, \ldots, y_n\},$$
from which
$$\hat{\theta}_{\rm MAP}(y) = \arg\max_{0 < \theta < \min\{y_1,\ldots,y_n\}} \exp\left\{-\sum_{k=1}^n y_k + (n-1)\theta\right\} = \min\{y_1, \ldots, y_n\}.$$

Exercise 13:

a) We have
$$p_\theta(y) = \theta^{T(y)}(1-\theta)^{n-T(y)}, \qquad\text{where}\quad T(y) = \sum_{k=1}^n y_k.$$
Rewriting this as
$$p_\theta(y) = C(\phi)\,e^{\phi T(y)},$$
with $\phi = \log(\theta/(1-\theta))$ and $C(\phi) = (1+e^{\phi})^{-n}$, we see from the Completeness Theorem for Exponential Families that $T(y)$ is a complete sufficient statistic for $\phi$, and hence for $\theta$. (Assuming $\theta$ ranges throughout $(0,1)$.) Thus any unbiased function of $T$ is an MVUE for $\theta$. Since $E_\theta\{T(Y)\} = n\theta$, such an estimate is given by
$$\hat{\theta}_{\rm MV}(y) = \frac{T(y)}{n} = \frac{1}{n}\sum_{k=1}^n y_k.$$

b)
$$\hat{\theta}_{\rm ML}(y) = \arg\max_{0 < \theta < 1} \theta^{T(y)}(1-\theta)^{n-T(y)} = \frac{T(y)}{n} = \hat{\theta}_{\rm MV}(y).$$
Since the MLE equals the MVUE, we have immediately that $E_\theta\{\hat{\theta}_{\rm ML}(Y)\} = \theta$. The variance of $\hat{\theta}_{\rm ML}(Y)$ is easily computed to be $\theta(1-\theta)/n$.

c) We have
$$\frac{\partial}{\partial\theta}\log p_\theta(Y) = \frac{T(Y)}{\theta} - \frac{n - T(Y)}{1-\theta},$$
from which
$$I_\theta = \frac{n}{\theta} + \frac{n}{1-\theta} = \frac{n}{\theta(1-\theta)}.$$
The CRLB is thus
$${\rm CRLB} = \frac{1}{I_\theta} = \frac{\theta(1-\theta)}{n} = {\rm Var}_\theta(\hat{\theta}_{\rm ML}(Y)).$$
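Since Exercise 13 concludes that the sample mean meets the CRLB exactly, this is easy to see empirically. A minimal sketch (the values of $\theta$, $n$, and the number of trials are arbitrary choices):

```python
# Empirical check that Var(T(Y)/n) = theta*(1-theta)/n for Bernoulli samples.
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 0.3, 50, 200_000
estimates = rng.binomial(n, theta, trials) / n   # T(y)/n over many repetitions
print(estimates.var(), theta * (1 - theta) / n)  # the two agree closely
```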

Exercise 15:

We have
$$p_\theta(y) = \frac{e^{-\theta}\theta^y}{y!}, \qquad y = 0, 1, 2, \ldots$$
Thus
$$\frac{\partial}{\partial\theta}\log p_\theta(y) = \frac{\partial}{\partial\theta}\left(-\theta + y\log\theta\right) = -1 + \frac{y}{\theta}$$
and
$$\frac{\partial^2}{\partial\theta^2}\log p_\theta(y) = -\frac{y}{\theta^2} < 0.$$
So
$$\hat{\theta}_{\rm ML}(y) = y.$$
Since $Y$ is Poisson, we have $E_\theta\{\hat{\theta}_{\rm ML}(Y)\} = {\rm Var}_\theta(\hat{\theta}_{\rm ML}(Y)) = \theta$, so $\hat{\theta}_{\rm ML}$ is unbiased. Fisher's information is given by
$$I_\theta = -E_\theta\left\{\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \frac{E_\theta\{Y\}}{\theta^2} = \frac{1}{\theta}.$$
So the CRLB is $\theta$, which equals ${\rm Var}_\theta(\hat{\theta}_{\rm ML}(Y))$. (Hence the MLE is an MVUE in this case.)

Exercise 20:

a) Note that $Y_1, Y_2, \ldots, Y_n$ are independent, with $Y_k$ having the ${\cal N}(0,\, 1+\theta s_k^2)$ distribution. Thus
$$\frac{\partial}{\partial\theta}\log p_\theta(y) = -\frac{1}{2}\,\frac{\partial}{\partial\theta}\sum_{k=1}^n\left[\log(1+\theta s_k^2) + \frac{y_k^2}{1+\theta s_k^2}\right] = -\frac{1}{2}\sum_{k=1}^n\left[\frac{s_k^2}{1+\theta s_k^2} - \frac{y_k^2 s_k^2}{(1+\theta s_k^2)^2}\right],$$
from which the likelihood equation becomes
$$\sum_{k=1}^n \frac{s_k^2\left(y_k^2 - 1 - \hat{\theta}_{\rm ML}(y)\,s_k^2\right)}{\left(1 + \hat{\theta}_{\rm ML}(y)\,s_k^2\right)^2} = 0.$$

b)
$$I_\theta = -E_\theta\left\{\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \sum_{k=1}^n\left[\frac{s_k^4\,E_\theta\{Y_k^2\}}{(1+\theta s_k^2)^3} - \frac{s_k^4}{2(1+\theta s_k^2)^2}\right] = \frac{1}{2}\sum_{k=1}^n \frac{s_k^4}{(1+\theta s_k^2)^2}.$$
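For unequal $s_k$ the likelihood equation in part a) has no closed form, but it is monotone enough to be solved by one-dimensional root finding. A quick illustration on synthetic data (the amplitudes, the true $\theta$, and the bracketing interval below are arbitrary choices):

```python
# Solve the Exercise 20 likelihood equation numerically for synthetic data.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
theta_true, n = 1.5, 20_000
s = rng.uniform(0.5, 2.0, n)                         # known amplitudes s_k
y = rng.normal(0.0, np.sqrt(1 + theta_true * s**2))  # Y_k ~ N(0, 1 + theta*s_k^2)

def lik_eq(theta):
    """Left-hand side of the likelihood equation from part a)."""
    return np.sum(s**2 * (y**2 - 1 - theta * s**2) / (1 + theta * s**2) ** 2)

print(brentq(lik_eq, 1e-6, 50.0))  # close to theta_true = 1.5 for large n
```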

So the CRLB is s 4 (1+θs ) c With s = 1 the lielihood equatio yields the solutio ˆθ ML (y) = ( 1 ) y 1 which is see to yield a maximum of the lielihood fuctio d We have E θ ˆθML (Y ) ( ) 1 = E θ Y 1=θ Similarly sice the Y s are idepedet Var θ (ˆθML (Y ) = 1 Var θ ( Y ) = 1 (! + θ) = (1 + θ) Thus the bias of the MLE is the variace of the MLE equals the CRLB (Hece the MLE is a MVUE i this case) Exercise : a Note that p θ (y) =exp log F (y )/θ which impies that the statistic log F (y ) is a complete sufficiet statistic for θ via the Completeess Theorem for Expoetial Families We have E θ log F (Y ) = E θ log F (Y 1 ) = log F (Y 1 )F(y 1 )] (1 θ)/θ f(y 1 )dy 1 θ Notig that d log F (y 1 )= f(y 1) dy F (y 1 ) 1 that F (y 1 )] 1/θ = explog F (y 1 )/θ we ca mae the substitutio x = log F (y 1 ) to yield E θ log F (Y ) = xe x/θ dx = θ θ Thus we have E θ ˆθMV (Y ) = θ which implies that ˆθ MV is a MVUE sice it is a ubiased fuctio of a complete sufficiet statistic 5

b) [Correction: Note that, for the given prior, the prior mean should be $E\{\Theta\} = c/(m-1)$.] It is straightforward to see that $w(\theta|y)$ is of the same form as the prior, with $c$ replaced by $c - \sum_{k=1}^n \log F(y_k)$ and $m$ replaced by $n + m$. Thus, by inspection,
$$E\{\Theta|Y\} = \frac{c - \sum_{k=1}^n \log F(Y_k)}{n + m - 1},$$
which was to be shown. [Again, the necessary correction has been made.]

c) In this example, the prior and posterior distributions have the same form; the only change is that the parameters of that distribution are updated as new data are observed. A prior with this property is said to be a reproducing prior. The prior parameters $c$ and $m$ can be thought of as coming from an earlier sample of size $m$. As $n$ becomes large compared to $m$, the importance of these prior parameters in the estimate diminishes. Note that $\frac{1}{n}\sum_{k=1}^n \log F(Y_k)$ behaves like $E\{\log F(Y_1)\} = -\theta$ for large $n$. Thus, with $n \gg m$, the estimate is approximately given by the MVUE of Part a). Alternatively, with $n \ll m$, the estimate is approximately the prior mean $c/(m-1)$. Between these two extremes there is a balance between prior and observed information.

Exercise 23:

a) The log-likelihood function is
$$\log p(y|A,\phi) = -\frac{1}{2\sigma^2}\sum_{k=1}^n\left[y_k - A\sin\left(\frac{2\pi k}{n} + \phi\right)\right]^2 - \frac{n}{2}\log(2\pi\sigma^2).$$
The likelihood equations are thus:
$$\sum_{k=1}^n\left[y_k - \hat{A}\sin\left(\frac{2\pi k}{n} + \hat{\phi}\right)\right]\sin\left(\frac{2\pi k}{n} + \hat{\phi}\right) = 0$$
and
$$\hat{A}\sum_{k=1}^n\left[y_k - \hat{A}\sin\left(\frac{2\pi k}{n} + \hat{\phi}\right)\right]\cos\left(\frac{2\pi k}{n} + \hat{\phi}\right) = 0.$$
These equations are solved by the estimates
$$\hat{A}_{\rm ML} = 2\sqrt{y_c^2 + y_s^2} \qquad\text{and}\qquad \hat{\phi}_{\rm ML} = \tan^{-1}\left(\frac{y_c}{y_s}\right),$$
where
$$y_c = \frac{1}{n}\sum_{k=1}^n y_k\cos\left(\frac{2\pi k}{n}\right) \qquad\text{and}\qquad y_s = \frac{1}{n}\sum_{k=1}^n y_k\sin\left(\frac{2\pi k}{n}\right).$$

b) Appending the prior to the above problem yields the MAP estimates:
$$\hat{\phi}_{\rm MAP} = \hat{\phi}_{\rm ML}$$
and
$$\hat{A}_{\rm MAP} = \frac{\hat{A}_{\rm ML} + \sqrt{\hat{A}_{\rm ML}^2 + 8(1+\alpha)\sigma^2/n}}{2(1+\alpha)},$$
where $\alpha \equiv 2\sigma^2/(n\beta^2)$.

c) Note that when $\beta \to \infty$ (the prior "diffuses"), the MAP estimate of $A$ does not approach the MLE of $A$. However, as $n \to \infty$, the MAP estimate does approach the MLE.
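The structure of parts a)-c) can be checked on synthetic data. The sketch below assumes the sinusoid-in-Gaussian-noise model and the amplitude-prior parametrization as reconstructed above (with $\alpha \equiv 2\sigma^2/(n\beta^2)$); all numeric values are arbitrary choices. It shows $\hat{A}_{\rm MAP}$ approaching $\hat{A}_{\rm ML}$ as $n$ grows, as asserted in part c):

```python
# Compute A_ML, phi_ML from y_c, y_s, then the MAP amplitude, for growing n.
import numpy as np

rng = np.random.default_rng(4)
A, phi, sigma, beta = 2.0, 0.8, 1.0, 1.5  # true parameters (arbitrary)

for n in (10, 100, 10_000):
    k = np.arange(1, n + 1)
    y = A * np.sin(2 * np.pi * k / n + phi) + rng.normal(0, sigma, n)
    y_c = np.mean(y * np.cos(2 * np.pi * k / n))
    y_s = np.mean(y * np.sin(2 * np.pi * k / n))
    A_ml = 2 * np.hypot(y_c, y_s)              # 2*sqrt(y_c^2 + y_s^2)
    phi_ml = np.arctan2(y_c, y_s)              # tan^{-1}(y_c / y_s)
    alpha = 2 * sigma**2 / (n * beta**2)
    A_map = (A_ml + np.sqrt(A_ml**2 + 8 * (1 + alpha) * sigma**2 / n)) / (2 * (1 + alpha))
    print(n, round(A_ml, 3), round(A_map, 3), round(phi_ml, 3))  # A_map -> A_ml
```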