An Introduction to Signal Detection and Estimation - Second Edition Chapter IV: Selected Solutions


H. V. Poor
Princeton University
April 2005

Exercise 1:

a. & b. We have
$$\hat{\theta}_{MAP}(y) = \arg\max_{1 \le \theta \le e}\left[\log\theta - \theta y - \log\theta\right] = 1$$
and
$$\hat{\theta}_{MMSE}(y) = \frac{\int_1^e \theta\, e^{-\theta y}\,d\theta}{\int_1^e e^{-\theta y}\,d\theta} = \frac{1}{y} + \frac{e^{-y} - e^{1-ey}}{e^{-y} - e^{-ey}}.$$

Exercise 3:

The posterior density is
$$w(\theta \mid y) = \frac{\theta^{y} e^{-\theta} e^{-\alpha\theta}}{\int_0^\infty \theta^{y} e^{-\theta} e^{-\alpha\theta}\,d\theta} = \frac{\theta^{y} e^{-(\alpha+1)\theta}\,(1+\alpha)^{y+1}}{y!}.$$
So:
$$\hat{\theta}_{MMSE}(y) = \frac{(1+\alpha)^{y+1}}{y!}\int_0^\infty \theta^{y+1} e^{-(\alpha+1)\theta}\,d\theta = \frac{y+1}{\alpha+1};$$
$$\hat{\theta}_{MAP}(y) = \arg\max_{\theta > 0}\left[y\log\theta - (\alpha+1)\theta\right] = \frac{y}{\alpha+1};$$
and $\hat{\theta}_{ABS}(y)$ solves
$$\int_0^{\hat{\theta}_{ABS}(y)} w(\theta \mid y)\,d\theta = \frac{1}{2},$$
which reduces to
$$\sum_{k=0}^{y} \frac{\left[(\alpha+1)\hat{\theta}_{ABS}(y)\right]^{k}}{k!} = \frac{1}{2}\, e^{(\alpha+1)\hat{\theta}_{ABS}(y)}.$$
Note that the series on the left-hand side is the truncated power series expansion of $\exp\{(\alpha+1)\hat{\theta}_{ABS}(y)\}$, so that $\hat{\theta}_{ABS}(y)$ is the value at which this truncated series equals half of its untruncated value, when the truncation point is $y$.
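As an illustrative numerical check of the Exercise 3 results (not part of the original solutions), the sketch below computes the three Bayes estimates for hypothetical values of $\alpha$ and $y$, using the fact that the posterior above is a Gamma distribution with shape $y+1$ and rate $\alpha+1$; numpy and scipy are assumed to be available.

```python
# Illustrative sketch: the posterior in Exercise 3 is Gamma(shape = y+1, rate = alpha+1),
# so its mean, mode and median give the MMSE, MAP and ABS estimates.  We also verify that
# the median satisfies the truncated-exponential-series equation derived above.
import numpy as np
from math import factorial
from scipy.stats import gamma
from scipy.optimize import brentq

alpha, y = 1.5, 4            # hypothetical prior parameter and observed count

theta_mmse = (y + 1) / (alpha + 1)                               # posterior mean
theta_map = y / (alpha + 1)                                      # posterior mode
theta_abs = gamma.ppf(0.5, a=y + 1, scale=1.0 / (alpha + 1))     # posterior median

# The median should solve  sum_{k=0}^{y} [(alpha+1)t]^k / k!  =  exp{(alpha+1)t} / 2.
def truncated_series_gap(t):
    m = (alpha + 1) * t
    return sum(m**k / factorial(k) for k in range(y + 1)) - 0.5 * np.exp(m)

root = brentq(truncated_series_gap, 1e-9, 50.0)
print(theta_mmse, theta_map, theta_abs, root)    # the last two values should agree
```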

Exercise 7:

We have
$$p_\theta(y) = \begin{cases} e^{-y+\theta}, & y \ge \theta \\ 0, & y < \theta \end{cases}
\qquad\text{and}\qquad
w(\theta) = \begin{cases} 1, & \theta \in (0,1) \\ 0, & \theta \notin (0,1). \end{cases}$$
Thus
$$w(\theta \mid y) = \frac{e^{-y+\theta}}{\int_0^{\min\{1,y\}} e^{-y+\theta}\,d\theta} = \frac{e^{\theta}}{e^{\min\{1,y\}} - 1}, \qquad 0 \le \theta \le \min\{1,y\},$$
and $w(\theta \mid y) = 0$ otherwise. This implies:
$$\hat{\theta}_{MMSE}(y) = \frac{\int_0^{\min\{1,y\}} \theta\, e^{\theta}\,d\theta}{e^{\min\{1,y\}} - 1} = \frac{\left[\min\{1,y\} - 1\right]e^{\min\{1,y\}} + 1}{e^{\min\{1,y\}} - 1};$$
$$\hat{\theta}_{MAP}(y) = \arg\max_{0 \le \theta \le \min\{1,y\}} e^{\theta} = \min\{1,y\};$$
and $\hat{\theta}_{ABS}(y)$ solves
$$\frac{\int_0^{\hat{\theta}_{ABS}(y)} e^{\theta}\,d\theta}{e^{\min\{1,y\}} - 1} = \frac{1}{2},$$
which has the solution
$$\hat{\theta}_{ABS}(y) = \log\!\left(\frac{e^{\min\{1,y\}} + 1}{2}\right).$$

Exercise 8:

a. We have
$$w(\theta \mid y) = \frac{e^{-y+\theta}\, e^{-\theta}}{\int_0^{y} e^{-y+\theta}\, e^{-\theta}\,d\theta} = \frac{1}{y}, \qquad 0 \le \theta \le y,$$
and $w(\theta \mid y) = 0$ otherwise. That is, given $Y = y$, $\Theta$ is uniformly distributed on the interval $[0, y]$. From this we have immediately that
$$\hat{\theta}_{MMSE}(y) = \hat{\theta}_{ABS}(y) = \frac{y}{2}.$$

b. We have $\mathrm{MMSE} = E\{\mathrm{Var}(\Theta \mid Y)\}$. Since $w(\theta \mid y)$ is uniform on $[0,y]$, $\mathrm{Var}(\Theta \mid Y) = Y^2/12$. We have
$$p(y) = \int_0^y e^{-y}\,d\theta = y\, e^{-y}, \qquad y > 0,$$
from which
$$\mathrm{MMSE} = \frac{E\{Y^2\}}{12} = \frac{1}{12}\int_0^\infty y^3 e^{-y}\,dy = \frac{3!}{12} = \frac{1}{2}.$$
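The following Monte Carlo sketch (an illustration, not part of the original solutions) checks the Exercise 8(b) result by drawing $\Theta$ from the exponential prior and setting $Y = \Theta + W$ with $W$ an independent unit-mean exponential, so that $p_\theta(y) = e^{-(y-\theta)}$ for $y \ge \theta$; numpy is assumed available.

```python
# Monte Carlo check of Exercise 8(b): the MMSE estimator is Y/2 and the attained MMSE is 1/2.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
theta = rng.exponential(1.0, size=n)   # draw from the exponential prior
w = rng.exponential(1.0, size=n)       # observation noise, so Y = theta + W has density e^{-(y-theta)}, y >= theta
y = theta + w

mmse_hat = np.mean((theta - y / 2.0) ** 2)
print(mmse_hat)                        # should be close to 0.5
```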

c. In this case
$$p_\theta(y) = e^{-\sum_{k=1}^{n} y_k + n\theta}, \qquad 0 < \theta < \min\{y_1, \ldots, y_n\},$$
from which
$$\hat{\theta}_{MAP}(y) = \arg\max_{0 < \theta < \min\{y_1,\ldots,y_n\}} \exp\left\{-\sum_{k=1}^{n} y_k + (n-1)\theta\right\} = \min\{y_1, \ldots, y_n\}.$$

Exercise 13:

a. We have
$$p_\theta(y) = \theta^{T(y)}\,(1-\theta)^{(n - T(y))}, \qquad\text{where } T(y) = \sum_{k=1}^{n} y_k.$$
Rewriting this as
$$p_\theta(y) = C(\phi)\, e^{\phi\, T(y)}, \qquad\text{with } \phi = \log\!\left(\frac{\theta}{1-\theta}\right) \text{ and } C(\phi) = \left(1 + e^{\phi}\right)^{-n},$$
we see from the Completeness Theorem for Exponential Families that $T(y)$ is a complete sufficient statistic for $\phi$, and hence for $\theta$. (Assuming $\theta$ ranges throughout $(0,1)$.) Thus any unbiased function of $T$ is an MVUE for $\theta$. Since $E_\theta\{T(Y)\} = n\theta$, such an estimate is given by
$$\hat{\theta}_{MV}(y) = \frac{T(y)}{n} = \frac{1}{n}\sum_{k=1}^{n} y_k.$$

b.
$$\hat{\theta}_{ML}(y) = \arg\max_{0 < \theta < 1} \theta^{T(y)}\,(1-\theta)^{(n - T(y))} = \frac{T(y)}{n} = \hat{\theta}_{MV}(y).$$
Since the MLE equals the MVUE, we have immediately that $E_\theta\{\hat{\theta}_{ML}(Y)\} = \theta$. The variance of $\hat{\theta}_{ML}(Y)$ is easily computed to be $\theta(1-\theta)/n$.

c. We have
$$\frac{\partial}{\partial\theta}\log p_\theta(Y) = \frac{T(Y)}{\theta} - \frac{n - T(Y)}{1-\theta},$$
from which
$$I_\theta = \frac{n}{\theta} + \frac{n}{1-\theta} = \frac{n}{\theta(1-\theta)}.$$
The CRLB is thus
$$\mathrm{CRLB} = \frac{1}{I_\theta} = \frac{\theta(1-\theta)}{n} = \mathrm{Var}_\theta\!\left(\hat{\theta}_{ML}(Y)\right).$$
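As a quick illustration of the Exercise 13 conclusion (not part of the original solutions), the sketch below checks by simulation that the sample mean of i.i.d. Bernoulli observations has variance equal to the CRLB $\theta(1-\theta)/n$; numpy is assumed available.

```python
# Monte Carlo check: for Bernoulli data the sample mean is both the MLE and the MVUE,
# and its variance matches the Cramer-Rao lower bound theta*(1-theta)/n.
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 0.3, 50, 200_000

samples = rng.binomial(1, theta, size=(trials, n))
theta_ml = samples.mean(axis=1)          # T(y)/n for each simulated record

print(theta_ml.var())                    # empirical variance of the MLE
print(theta * (1 - theta) / n)           # CRLB
```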

Exercise 15:

We have
$$p_\theta(y) = \frac{e^{-\theta}\,\theta^{y}}{y!}, \qquad y = 0, 1, 2, \ldots$$
Thus
$$\frac{\partial}{\partial\theta}\log p_\theta(y) = \frac{\partial}{\partial\theta}\left(-\theta + y\log\theta\right) = -1 + \frac{y}{\theta}$$
and
$$\frac{\partial^2}{\partial\theta^2}\log p_\theta(y) = -\frac{y}{\theta^2} < 0.$$
So
$$\hat{\theta}_{ML}(y) = y.$$
Since $Y$ is Poisson, we have $E_\theta\{\hat{\theta}_{ML}(Y)\} = \mathrm{Var}_\theta(\hat{\theta}_{ML}(Y)) = \theta$. So $\hat{\theta}_{ML}$ is unbiased. Fisher's information is given by
$$I_\theta = -E_\theta\left\{\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \frac{E_\theta\{Y\}}{\theta^2} = \frac{1}{\theta}.$$
So the CRLB is $\theta$, which equals $\mathrm{Var}_\theta(\hat{\theta}_{ML}(Y))$. (Hence the MLE is an MVUE in this case.)

Exercise 20:

a. Note that $Y_1, Y_2, \ldots, Y_n$ are independent, with $Y_k$ having the $\mathcal{N}(0,\, 1+\theta s_k^2)$ distribution. Thus
$$\frac{\partial}{\partial\theta}\log p_\theta(y) = -\frac{1}{2}\,\frac{\partial}{\partial\theta}\sum_{k=1}^{n}\left[\log\!\left(2\pi(1+\theta s_k^2)\right) + \frac{y_k^2}{1+\theta s_k^2}\right] = \frac{1}{2}\sum_{k=1}^{n}\left[\frac{s_k^2\, y_k^2}{(1+\theta s_k^2)^2} - \frac{s_k^2}{1+\theta s_k^2}\right],$$
from which the likelihood equation becomes
$$\sum_{k=1}^{n}\frac{s_k^2\left(y_k^2 - 1 - \hat{\theta}_{ML}(y)\, s_k^2\right)}{\left(1+\hat{\theta}_{ML}(y)\, s_k^2\right)^2} = 0.$$

b.
$$I_\theta = -E_\theta\left\{\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \sum_{k=1}^{n}\frac{s_k^4\, E_\theta\{Y_k^2\}}{(1+\theta s_k^2)^3} - \frac{1}{2}\sum_{k=1}^{n}\frac{s_k^4}{(1+\theta s_k^2)^2} = \frac{1}{2}\sum_{k=1}^{n}\frac{s_k^4}{(1+\theta s_k^2)^2}.$$
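The sketch below (an illustration, not part of the original solutions) solves the Exercise 20 likelihood equation numerically for one simulated record $y_k \sim \mathcal{N}(0, 1+\theta s_k^2)$; for $s_k \equiv 1$ the root should match the closed form $(1/n)\sum_k y_k^2 - 1$ given in part (c) on the next page. numpy and scipy are assumed available, and the chosen values of $\theta$ and $s_k$ are made up for the example.

```python
# Numerically solving the likelihood equation of Exercise 20(a) for simulated data.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
theta_true, n = 0.8, 500
s = np.ones(n)                                    # try unequal s_k as well, adjusting the bracket below
y = rng.normal(0.0, np.sqrt(1.0 + theta_true * s**2))

def likelihood_eq(theta):
    return np.sum(s**2 * (y**2 - 1.0 - theta * s**2) / (1.0 + theta * s**2) ** 2)

# Bracket chosen so that 1 + theta*s_k^2 stays positive for s_k = 1.
theta_ml = brentq(likelihood_eq, -0.9, 50.0)
print(theta_ml, np.mean(y**2) - 1.0)              # the two values agree when s_k = 1
```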

So the CRLB is
$$\mathrm{CRLB} = \frac{2}{\sum_{k=1}^{n} s_k^4/(1+\theta s_k^2)^2}.$$

c. With $s_k \equiv 1$, the likelihood equation yields the solution
$$\hat{\theta}_{ML}(y) = \left(\frac{1}{n}\sum_{k=1}^{n} y_k^2\right) - 1,$$
which is seen to yield a maximum of the likelihood function.

d. We have
$$E_\theta\{\hat{\theta}_{ML}(Y)\} = \left(\frac{1}{n}\sum_{k=1}^{n} E_\theta\{Y_k^2\}\right) - 1 = \theta.$$
Similarly, since the $Y_k$'s are independent,
$$\mathrm{Var}_\theta\!\left(\hat{\theta}_{ML}(Y)\right) = \frac{1}{n^2}\sum_{k=1}^{n}\mathrm{Var}_\theta(Y_k^2) = \frac{1}{n^2}\cdot n\cdot 2(1+\theta)^2 = \frac{2(1+\theta)^2}{n}.$$
Thus the bias of the MLE is $0$, and the variance of the MLE equals the CRLB. (Hence the MLE is an MVUE in this case.)

Exercise 22:

a. Note that
$$p_\theta(y) = \prod_{k=1}^{n}\frac{1}{\theta}\left[F(y_k)\right]^{(1-\theta)/\theta} f(y_k) = \theta^{-n}\left[\prod_{k=1}^{n}\frac{f(y_k)}{F(y_k)}\right]\exp\left\{\frac{1}{\theta}\sum_{k=1}^{n}\log F(y_k)\right\},$$
which implies that the statistic $\sum_{k=1}^{n}\log F(y_k)$ is a complete sufficient statistic for $\theta$, via the Completeness Theorem for Exponential Families. We have
$$E_\theta\left\{-\sum_{k=1}^{n}\log F(Y_k)\right\} = -\,n\, E_\theta\{\log F(Y_1)\} = -\frac{n}{\theta}\int \log F(y_1)\left[F(y_1)\right]^{(1-\theta)/\theta} f(y_1)\,dy_1.$$
Noting that $d\log F(y_1) = \frac{f(y_1)}{F(y_1)}\,dy_1$ and that $\left[F(y_1)\right]^{1/\theta} = \exp\{\log F(y_1)/\theta\}$, we can make the substitution $x = -\log F(y_1)$ to yield
$$E_\theta\left\{-\sum_{k=1}^{n}\log F(Y_k)\right\} = \frac{n}{\theta}\int_0^\infty x\, e^{-x/\theta}\,dx = n\theta.$$
Thus we have $E_\theta\{\hat{\theta}_{MV}(Y)\} = \theta$, which implies that $\hat{\theta}_{MV}$ is an MVUE since it is an unbiased function of a complete sufficient statistic.
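The following Monte Carlo sketch (an illustration, not part of the original solutions) checks the unbiasedness argument of Exercise 22(a): if $F(Y_1)$ has density $(1/\theta)u^{(1-\theta)/\theta}$ on $(0,1)$, then $-\log F(Y_1)$ is exponential with mean $\theta$, so $-(1/n)\sum_k \log F(Y_k)$ is unbiased for $\theta$. numpy is assumed available.

```python
# Monte Carlo check that -(1/n) * sum_k log F(Y_k) is unbiased for theta.
import numpy as np

rng = np.random.default_rng(3)
theta, n, trials = 0.7, 20, 100_000

u = rng.uniform(size=(trials, n)) ** theta      # U = V**theta has density (1/theta)*u^{1/theta - 1} on (0,1)
theta_mv = -np.log(u).mean(axis=1)              # -(1/n) * sum_k log F(Y_k) for each simulated record

print(theta_mv.mean())                          # should be close to theta = 0.7
```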

b. [Correction: Note that for the given prior, the prior mean should be $E\{\Theta\} = c/(m-1)$.] It is straightforward to see that $w(\theta \mid y)$ is of the same form as the prior, with $c$ replaced by $c - \sum_{k=1}^{n}\log F(y_k)$ and $m$ replaced by $n + m$. Thus, by inspection,
$$E\{\Theta \mid Y\} = \frac{c - \sum_{k=1}^{n}\log F(Y_k)}{n + m - 1},$$
which was to be shown. [Again, the necessary correction has been made.]

c. In this example the prior and posterior distributions have the same form; the only change is that the parameters of that distribution are updated as new data is observed. A prior with this property is said to be a reproducing prior. The prior parameters $c$ and $m$ can be thought of as coming from an earlier sample of size $m$. As $n$ becomes large compared to $m$, the importance of these prior parameters in the estimate diminishes. Note that $\sum_{k=1}^{n}\log F(Y_k)$ behaves like $n\,E\{\log F(Y_1)\}$ for large $n$. Thus, with $n \gg m$, the estimate is approximately given by the MVUE of Part a. Alternatively, with $n \ll m$, the estimate is approximately the prior mean $c/(m-1)$. Between these two extremes there is a balance between prior and observed information.

Exercise 23:

a. The log-likelihood function is
$$\log p(y \mid A, \phi) = -\frac{1}{2\sigma^2}\sum_{k=1}^{n}\left[y_k - A\sin\!\left(\frac{2\pi k}{n} + \phi\right)\right]^2 - \frac{n}{2}\log(2\pi\sigma^2).$$
The likelihood equations are thus:
$$\sum_{k=1}^{n}\left[y_k - \hat{A}\sin\!\left(\frac{2\pi k}{n} + \hat{\phi}\right)\right]\sin\!\left(\frac{2\pi k}{n} + \hat{\phi}\right) = 0;$$
$$\hat{A}\sum_{k=1}^{n}\left[y_k - \hat{A}\sin\!\left(\frac{2\pi k}{n} + \hat{\phi}\right)\right]\cos\!\left(\frac{2\pi k}{n} + \hat{\phi}\right) = 0.$$
These equations are solved by the estimates
$$\hat{A}_{ML} = 2\sqrt{y_c^2 + y_s^2} \qquad\text{and}\qquad \hat{\phi}_{ML} = \tan^{-1}\!\left(\frac{y_c}{y_s}\right),$$
where
$$y_c = \frac{1}{n}\sum_{k=1}^{n} y_k\cos\!\left(\frac{2\pi k}{n}\right) \qquad\text{and}\qquad y_s = \frac{1}{n}\sum_{k=1}^{n} y_k\sin\!\left(\frac{2\pi k}{n}\right).$$
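As an illustrative check of the part (a) estimates (not part of the original solutions), the sketch below simulates one noisy sinusoid record and recovers the amplitude and phase from $y_c$ and $y_s$; numpy is assumed available, and the true parameter values are made up for the example.

```python
# Recovering (A, phi) from y_c and y_s for a simulated record y_k = A*sin(2*pi*k/n + phi) + noise.
import numpy as np

rng = np.random.default_rng(4)
n, A, phi, sigma = 200, 1.3, 0.6, 0.4
k = np.arange(1, n + 1)
y = A * np.sin(2 * np.pi * k / n + phi) + rng.normal(0.0, sigma, size=n)

y_c = np.mean(y * np.cos(2 * np.pi * k / n))
y_s = np.mean(y * np.sin(2 * np.pi * k / n))

A_ml = 2.0 * np.hypot(y_c, y_s)
phi_ml = np.arctan2(y_c, y_s)        # arctan2 resolves the quadrant of tan^{-1}(y_c / y_s)
print(A_ml, phi_ml)                  # should be near the true values (1.3, 0.6)
```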

b. Appending the prior to the above problem yields the MAP estimates:
$$\hat{\phi}_{MAP} = \hat{\phi}_{ML}$$
and
$$\hat{A}_{MAP} = \frac{\hat{A}_{ML} + \sqrt{\hat{A}_{ML}^2 + 8(1+\alpha)\sigma^2/n}}{2(1+\alpha)}, \qquad\text{where } \alpha \triangleq \frac{2\sigma^2}{n\beta^2}.$$

c. Note that when $\beta \to \infty$ (the prior diffuses), the MAP estimate of $A$ does not approach the MLE of $A$. However, as $n \to \infty$, the MAP estimate does approach the MLE.
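The sketch below (an illustration, not part of the original solutions) cross-checks the part (b) amplitude formula under the assumption that $A$ has a Rayleigh prior with parameter $\beta$ and $\phi$ a uniform prior; it compares the closed form above with a direct numerical maximization of the resulting log-posterior in $A$, with $\phi$ fixed at its MAP/ML value. numpy and scipy are assumed available, and the parameter values are hypothetical.

```python
# Cross-check of the MAP amplitude formula against numerical maximization of the
# (assumed Rayleigh-prior) log-posterior in A.
import numpy as np
from scipy.optimize import minimize_scalar

n, sigma, beta, A_ml = 100, 0.5, 2.0, 1.1        # hypothetical values
alpha = 2 * sigma**2 / (n * beta**2)

A_map_formula = (A_ml + np.sqrt(A_ml**2 + 8 * (1 + alpha) * sigma**2 / n)) / (2 * (1 + alpha))

# Negative log-posterior in A (A-independent terms dropped), using
# sum_k sin^2(2*pi*k/n + phi_hat) = n/2 and sum_k y_k*sin(2*pi*k/n + phi_hat) = n*A_ml/2.
def neg_log_post(A):
    return -((n / (2 * sigma**2)) * (A_ml * A - A**2 / 2) + np.log(A) - A**2 / (2 * beta**2))

A_map_numeric = minimize_scalar(neg_log_post, bounds=(1e-6, 10.0), method="bounded").x
print(A_map_formula, A_map_numeric)              # the two values should agree
```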
