ECE531 Screencast 11.5: Uniformly Most Powerful Decision Rules

Transcription:

ECE531 Screencast 11.5: Uniformly Most Powerful Decision Rules
D. Richard Brown III, Worcester Polytechnic Institute

Monotone Likelihood Ratio

Although UMP decision rules exist only in special cases, many real problems fall into these special cases. One such class is the class of binary composite hypothesis testing problems with monotone likelihood ratios.

Definition (Lehmann, Testing Statistical Hypotheses, p. 78). The real-parameter family of densities p_x(y) for x ∈ X ⊆ R is said to have monotone likelihood ratio if there exists a real-valued function T(y) that depends only on y such that, for any x_1 ∈ X and x_0 ∈ X with x_0 < x_1, the distributions p_{x_0}(y) and p_{x_1}(y) are distinct and the ratio

    L_{x_1/x_0}(y) := p_{x_1}(y) / p_{x_0}(y) = f(x_0, x_1, T(y))

is a non-decreasing function of T(y).

Monotone Likelihood Ratio: Example

Consider the Laplacian density p_x(y) = (b/2) e^{-b|y-x|} with b > 0 and X = [0, ∞). Let's see if this family of densities has monotone likelihood ratio.

    L_{x_1/x_0}(y) := p_{x_1}(y) / p_{x_0}(y) = e^{b(|y - x_0| - |y - x_1|)}

Since b > 0, L_{x_1/x_0}(y) will be non-decreasing in T(y) = y provided that |y - x_0| - |y - x_1| is non-decreasing in y for all 0 ≤ x_0 < x_1. We can write

    |y - x_0| - |y - x_1| = { x_0 - x_1        for y < x_0
                            { 2y - x_1 - x_0   for x_0 ≤ y ≤ x_1
                            { x_1 - x_0        for y > x_1.

Note that x_0 - x_1 < 0 and x_1 - x_0 > 0 are constants with respect to y, while 2y - x_1 - x_0 is increasing in y. Hence the Laplacian family of densities p_x(y) = (b/2) e^{-b|y-x|} with b > 0 and X = [0, ∞) has monotone likelihood ratio in T(y) = y.
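As a sanity check on the piecewise argument above, the following Python sketch evaluates the Laplacian likelihood ratio on a grid of observations and confirms it is non-decreasing in y. The values of b, x_0, and x_1 are arbitrary example choices, not from the slides.

```python
import math

# Numerical sanity check (not a proof) that the Laplacian family
# p_x(y) = (b/2) exp(-b|y - x|) has a likelihood ratio that is
# non-decreasing in T(y) = y. The values b, x0, x1 are arbitrary
# example choices satisfying 0 <= x0 < x1.
b, x0, x1 = 2.0, 0.5, 1.5

def likelihood_ratio(y):
    # L_{x1/x0}(y) = exp(b * (|y - x0| - |y - x1|))
    return math.exp(b * (abs(y - x0) - abs(y - x1)))

# Grid of exactly representable observations from -2.0 to 4.0.
ys = [i / 4 for i in range(-8, 17)]
ratios = [likelihood_ratio(y) for y in ys]
assert all(r2 >= r1 for r1, r2 in zip(ratios, ratios[1:]))
print("likelihood ratio is non-decreasing on the grid")
```

The ratio is constant at e^{b(x_0 - x_1)} below x_0, grows through the interval [x_0, x_1], and is constant at e^{b(x_1 - x_0)} above x_1, matching the piecewise expression above.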

Existence of a UMP Decision Rule for Monotone LR

Theorem (Lehmann, Testing Statistical Hypotheses, p. 78). Let X be a subinterval of the real line. Fix λ ∈ X and define the hypotheses H_0 : x ∈ X_0 = {x ≤ λ} versus H_1 : x ∈ X_1 = {x > λ}. If the densities p_x(y) are distinct for all x ∈ X and the family has monotone likelihood ratio in T(y), then the decision rule

    ρ(y) = { 1   if T(y) > v
           { γ   if T(y) = v
           { 0   if T(y) < v

where v and γ are selected so that P_{fp, x=λ} = α, is UMP for testing H_0 : x ∈ X_0 versus H_1 : x ∈ X_1.

Coin Flipping Example

Suppose we have a coin with Prob(heads) = x, where 0 ≤ x ≤ 1 is the unknown state. We flip the coin n times and observe the number of heads Y = y ∈ {0, ..., n}. We know that, conditioned on the state x, the observation Y is distributed as

    p_x(y) = C(n, y) x^y (1 - x)^{n-y}

where C(n, y) denotes the binomial coefficient. Fix 0 < λ < 1 and define our composite binary hypotheses as

    H_0 : Y ~ p_x(y) for x ∈ [0, λ] = X_0
    H_1 : Y ~ p_x(y) for x ∈ (λ, 1] = X_1

Does there exist a UMP decision rule with significance level α? The densities p_x(y) are distinct for all x ∈ X, and this family has a monotone likelihood ratio in T(y) = y (check this).
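The "check this" step can be verified numerically. A minimal Python sketch, with n, x_0, and x_1 as arbitrary example choices:

```python
# Numerical check (not a proof) that the binomial family
# p_x(y) = C(n, y) x^y (1-x)^{n-y} has monotone likelihood ratio
# in T(y) = y. The values n, x0, x1 are example choices with x0 < x1.
n, x0, x1 = 10, 0.3, 0.6

def lr(y):
    # L_{x1/x0}(y) = p_{x1}(y) / p_{x0}(y); the C(n, y) factors cancel.
    return (x1**y * (1 - x1)**(n - y)) / (x0**y * (1 - x0)**(n - y))

ratios = [lr(y) for y in range(n + 1)]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
print("likelihood ratio is strictly increasing in y")
```

Each unit step in y multiplies the ratio by (x_1(1 - x_0)) / (x_0(1 - x_1)), which exceeds 1 whenever x_0 < x_1, so the ratio is strictly increasing in y.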

Coin Flipping Example

According to the theorem, a UMP decision rule will be

    ρ(y) = { 1   if y > v
           { γ   if y = v
           { 0   if y < v

with v and γ selected so that P_{fp, x=λ} = α. We just have to find v and γ.

Coin Flipping Example

To find v, we can use our procedure from simple binary N-P hypothesis testing. We want to find the smallest value of v such that

    P_{fp, x=λ}(δ_v) = Prob[Y > v | x = λ] = Σ_{j=v+1}^{n} C(n, j) λ^j (1 - λ)^{n-j} ≤ α.

Once we find the appropriate value of v, if P_{fp, x=λ}(δ_v) = α, then we are done; we can set γ to any arbitrary value in [0, 1] in this case. If P_{fp, x=λ}(δ_v) < α, then we have to find γ such that P_{fp, x=λ}(ρ) = α. The UMP decision rule is just the usual randomization ρ = (1 - γ)δ_v + γδ_{v-ε} and, in this case,

    γ = (α - Σ_{j=v+1}^{n} C(n, j) λ^j (1 - λ)^{n-j}) / (C(n, v) λ^v (1 - λ)^{n-v}).
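The search for v and the randomization weight γ can be carried out directly from the binomial tail probabilities. A minimal Python sketch, with n, λ, and α as illustrative example values:

```python
from math import comb

# Find the threshold v and randomization weight gamma for the
# coin-flipping UMP rule. n, lam (λ), alpha (α) are example values.
n, lam, alpha = 10, 0.5, 0.1

def tail(v):
    # P_{fp, x=λ}(δ_v) = Prob[Y > v | x = λ]
    return sum(comb(n, j) * lam**j * (1 - lam)**(n - j)
               for j in range(v + 1, n + 1))

# Smallest v whose false-positive probability does not exceed alpha.
v = next(v for v in range(n + 1) if tail(v) <= alpha)

# Randomize on {Y = v} so the size is exactly alpha.
p_v = comb(n, v) * lam**v * (1 - lam)**(n - v)
gamma = (alpha - tail(v)) / p_v

size = tail(v) + gamma * p_v
print(v, gamma, size)   # size should equal alpha (up to float rounding)
```

For these example values the sketch yields v = 7, since Prob[Y > 6 | λ] ≈ 0.172 > α while Prob[Y > 7 | λ] ≈ 0.055 ≤ α, and γ makes up the remaining probability mass on {Y = 7}.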

Power Function

For binary composite hypothesis testing problems with X a subinterval of the real line and a particular decision rule ρ, we can define the power function of ρ as

    β(x) := Prob(ρ decides H_1 | state is x).

For each x ∈ X_1, β(x) is the probability of a true positive (the probability of detection). For each x ∈ X_0, β(x) is the probability of a false positive. Hence, a plot of β(x) versus x displays both the probability of a false positive and the probability of detection of ρ for all states x ∈ X.

Our UMP decision rule for the coin flipping problem specifies that we always decide H_1 when we observe v + 1 or more heads, and we decide H_1 with probability γ if we observe exactly v heads. Hence,

    β(x) = Σ_{j=v+1}^{n} C(n, j) x^j (1 - x)^{n-j} + γ C(n, v) x^v (1 - x)^{n-v}.
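This power function is straightforward to evaluate numerically. A Python sketch, with n = 10, λ = 0.5, α = 0.1 as illustrative example values (the corresponding v and γ are computed inline as in the threshold construction):

```python
from math import comb

# Evaluate the power function β(x) of the randomized UMP rule.
# n, lam, alpha are illustrative example values; v and gamma are
# derived from them as in the threshold construction.
n, lam, alpha = 10, 0.5, 0.1

def tail(x, v):
    # Σ_{j=v+1}^{n} C(n, j) x^j (1-x)^{n-j}
    return sum(comb(n, j) * x**j * (1 - x)**(n - j)
               for j in range(v + 1, n + 1))

v = next(v for v in range(n + 1) if tail(lam, v) <= alpha)
gamma = (alpha - tail(lam, v)) / (comb(n, v) * lam**v * (1 - lam)**(n - v))

def beta(x):
    # β(x) = Σ_{j>v} C(n,j) x^j (1-x)^{n-j} + γ C(n,v) x^v (1-x)^{n-v}
    return tail(x, v) + gamma * comb(n, v) * x**v * (1 - x)**(n - v)

# The size of the test is β(λ) = α, and β increases with the state x.
print(round(beta(lam), 6), round(beta(0.8), 3))
```

Since the rule ρ is non-decreasing in y and the binomial distribution is stochastically increasing in x, β(x) is non-decreasing in x, with β(λ) = α at the boundary between X_0 and X_1.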

Coin Flipping Example: Power Function

[Figure: power function β(x) versus state x for λ = 0.5 and α = 0.1, with curves for n = 2, 4, 6, 8, 10.]