ECE531 Screencast 9.2: N-P Detection with an Infinite Number of Possible Observations


Transcription:

ECE531 Screencast 9.2: N-P Detection with an Infinite Number of Possible Observations
D. Richard Brown III, Worcester Polytechnic Institute

Neyman-Pearson Hypothesis Testing: Finite/Infinite Obs.

Finite number of possible observations:
- Compute the likelihood ratios $L_\ell = \frac{P_{\ell,1}}{P_{\ell,0}}$.
- Pick the threshold $v$ to be the smallest value such that $P_{fp} = \sum_{\ell : L_\ell > v} P_{\ell,0} \le \alpha$.
- If $P_{fp}(\delta_v) < \alpha$, compute the randomization $\gamma$ so that $P_{fp} = \alpha$.
- The N-P decision rule is then
  $$\rho_{NP}(y_\ell) = \begin{cases} 1 & L_\ell > v \\ \gamma & L_\ell = v \\ 0 & L_\ell < v \end{cases}$$

Infinite number of possible observations:
- Compute the likelihood ratio $L(y) = \frac{p_1(y)}{p_0(y)}$.
- Pick the threshold $v$ to be the smallest value such that $P_{fp} = \int_{y : L(y) > v} p_0(y)\,dy \le \alpha$.
- If $P_{fp}(\delta_v) < \alpha$, compute the randomization $\gamma$ so that $P_{fp} = \alpha$.
- The N-P decision rule is then
  $$\rho_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > v \\ \gamma & \text{if } L(y) = v \\ 0 & \text{if } L(y) < v \end{cases}$$
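
To make the finite-observation recipe concrete, here is a minimal numerical sketch (not from the screencast) using a hypothetical four-symbol observation alphabet; the pmfs P0, P1 and the level alpha below are made-up example values.

```python
import numpy as np

# Hypothetical conditional pmfs P_{l,0} and P_{l,1} over a four-symbol alphabet
# (made-up values for illustration), and a significance level alpha.
P0 = np.array([0.50, 0.30, 0.15, 0.05])
P1 = np.array([0.10, 0.20, 0.30, 0.40])
alpha = 0.10

L = P1 / P0  # likelihood ratios L_l

# Smallest threshold v such that the sum of P_{l,0} over {l : L_l > v} is <= alpha.
v = min(t for t in np.unique(L) if P0[L > t].sum() <= alpha)

# If the false-positive probability is strictly below alpha, randomize on L_l = v.
pfp_above = P0[L > v].sum()
mass_at_v = P0[L == v].sum()
gamma = (alpha - pfp_above) / mass_at_v if mass_at_v > 0 else 0.0

print(f"v = {v:.3f}, gamma = {gamma:.3f}")
print("P_fp =", pfp_above + gamma * mass_at_v)  # exactly alpha
```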

Suppose a transmitter sends one of two scalar signals $a_0$ or $a_1$ and the signal arrives at a receiver corrupted by zero-mean additive white Gaussian noise (AWGN) with variance $\sigma^2$. Suppose we have a scalar observation. Signal model when $x_j = a_j$:
$$Y = a_j + \eta$$
where $a_j$ is the scalar signal and $\eta \sim \mathcal{N}(0, \sigma^2)$. Hence
$$p_j(y) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y - a_j)^2}{2\sigma^2}\right)$$
We want to use N-P hypothesis testing to maximize
$$P_D = \mathrm{Prob}(\text{decide } H_1 \mid a_1 \text{ was sent})$$
subject to the constraint
$$P_{fp} = \mathrm{Prob}(\text{decide } H_1 \mid a_0 \text{ was sent}) \le \alpha.$$
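
A short sketch (not from the screencast) that evaluates the conditional densities above for assumed example values a0 = 0, a1 = 1, sigma = 1, checking the explicit Gaussian formula against scipy.stats.norm and confirming that the likelihood ratio is increasing in y, which is used on the later slides.

```python
import numpy as np
from scipy.stats import norm

a0, a1, sigma = 0.0, 1.0, 1.0  # assumed example values

def p(y, a):
    """Conditional density p_j(y): Gaussian with mean a_j and variance sigma^2."""
    return np.exp(-(y - a)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

y = np.linspace(-3.0, 3.0, 601)
assert np.allclose(p(y, a0), norm.pdf(y, loc=a0, scale=sigma))
assert np.allclose(p(y, a1), norm.pdf(y, loc=a1, scale=sigma))

# Likelihood ratio L(y) = p_1(y)/p_0(y); for a1 > a0 it increases with y.
L = p(y, a1) / p(y, a0)
print(bool(np.all(np.diff(L) > 0)))  # True
```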

The N-P decision rule will be of the form
$$\rho_{NP}(y) = \begin{cases} 1 & \text{if } L(y) > v \\ \gamma & \text{if } L(y) = v \\ 0 & \text{if } L(y) < v \end{cases}$$
where $v \ge 0$ and $0 \le \gamma(y) \le 1$ are such that $P_{fp}(\rho_{NP}) = \alpha$.

We need to find the smallest $v$ such that
$$\int_{\mathcal{Y}_v} p_0(y)\,dy \le \alpha$$
where $\mathcal{Y}_v = \{y \in \mathcal{Y} : L(y) > v\}$.
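
Even without the monotonicity shortcut used on the later slides, the smallest such v can be found by direct numerical search. Here is a rough sketch under the same assumed Gaussian model (a0 = 0, a1 = 1, sigma = 1) with alpha = 0.05, approximating the integral over {y : L(y) > v} on a grid and bisecting on v, since P_fp(v) is non-increasing in v.

```python
import numpy as np

a0, a1, sigma, alpha = 0.0, 1.0, 1.0, 0.05  # assumed example values

def p(y, a):
    return np.exp(-(y - a)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# Crude grid approximation of the observation space for the integral over Y_v.
y = np.linspace(-10.0, 10.0, 200_001)
dy = y[1] - y[0]
L = p(y, a1) / p(y, a0)

def pfp(v):
    """Approximate P_fp(delta_v): integral of p_0(y) over {y : L(y) > v}."""
    return p(y[L > v], a0).sum() * dy

# P_fp(v) is non-increasing in v, so bisect for the smallest v with pfp(v) <= alpha.
lo, hi = 0.0, float(L.max())
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if pfp(mid) <= alpha else (mid, hi)
print(f"v = {hi:.4f}, P_fp(v) = {pfp(hi):.4f}")  # P_fp is essentially alpha, so no randomization needed
```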

Example: Likelihood Ratio for $a_0 = 0$, $a_1 = 1$, $\sigma^2 = 1$

[Figure: plot of the likelihood ratio $L(y) = p_1(y)/p_0(y)$ versus the observation $y$ for $y \in [-3, 3]$; the curve is monotonically increasing.]
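
A few lines of matplotlib (my own sketch, not the original figure code) that regenerate a plot like the one above:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

a0, a1, sigma = 0.0, 1.0, 1.0  # values stated in the slide title
y = np.linspace(-3.0, 3.0, 400)
L = norm.pdf(y, loc=a1, scale=sigma) / norm.pdf(y, loc=a0, scale=sigma)

plt.plot(y, L)
plt.xlabel("observation y")
plt.ylabel("likelihood ratio L(y) = p1(y)/p0(y)")
plt.title("Likelihood ratio for a0 = 0, a1 = 1, sigma^2 = 1")
plt.show()
```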

Note that, since $a_1 > a_0$, the likelihood ratio $L(y) = \frac{p_1(y)}{p_0(y)}$ is monotonically increasing. This means that finding $v$ is equivalent to finding a threshold $\tau$ so that
$$\int_\tau^\infty p_0(y)\,dy \le \alpha \quad\Leftrightarrow\quad Q\!\left(\frac{\tau - a_0}{\sigma}\right) \le \alpha \quad\Leftrightarrow\quad \tau \ge \sigma Q^{-1}(\alpha) + a_0$$

[Figure: the observation axis marked with $a_0$, $a_1$, and the threshold $\tau$, showing the decision regions $\bar{\mathcal{Y}}_v$ (decide $H_0$) and $\mathcal{Y}_v$ (decide $H_1$).]

The N-P decision rule is then
$$\rho_{NP}(y) = \begin{cases} 1 & \text{if } y \ge \sigma Q^{-1}(\alpha) + a_0 \\ 0 & \text{if } y < \sigma Q^{-1}(\alpha) + a_0 \end{cases}$$
and
$$P_D = \int_{\sigma Q^{-1}(\alpha) + a_0}^{\infty} p_1(y)\,dy.$$
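
Putting the closed-form result into code: a minimal sketch for the assumed values a0 = 0, a1 = 1, sigma = 1, alpha = 0.05, where Q and Q^{-1} are computed with scipy's Gaussian survival function and its inverse, followed by a quick Monte Carlo check of the resulting threshold test.

```python
import numpy as np
from scipy.stats import norm

a0, a1, sigma, alpha = 0.0, 1.0, 1.0, 0.05  # assumed example values

# Threshold tau = sigma * Q^{-1}(alpha) + a0 (Q^{-1} = inverse survival function).
tau = sigma * norm.isf(alpha) + a0
P_fp = norm.sf((tau - a0) / sigma)  # = alpha by construction
P_D = norm.sf((tau - a1) / sigma)   # integral of p_1 from tau to infinity
print(f"tau = {tau:.4f}, P_fp = {P_fp:.4f}, P_D = {P_D:.4f}")

# Monte Carlo sanity check of the rule rho(y) = 1{y >= tau}.
rng = np.random.default_rng(0)
y0 = a0 + sigma * rng.standard_normal(1_000_000)  # observations under H_0
y1 = a1 + sigma * rng.standard_normal(1_000_000)  # observations under H_1
print(np.mean(y0 >= tau), np.mean(y1 >= tau))     # close to P_fp and P_D
```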

Final Comments on Neyman-Pearson Hypothesis Testing

1. N-P decision rules are useful in asymmetric risk scenarios or in scenarios where one must guarantee a certain probability of false alarm.
2. N-P decision rules are always based on simple likelihood ratio comparisons. The comparison threshold is chosen to satisfy the significance level constraint.
3. Randomization may be necessary for N-P decision rules. Without randomization, the power of the test may not be maximized subject to the significance level constraint.
4. The original N-P paper: J. Neyman and E. S. Pearson, "On the Problem of the Most Efficient Tests of Statistical Hypotheses," Philosophical Transactions of the Royal Society of London, Series A, Containing Papers of a Mathematical or Physical Character, Vol. 231 (1933), pp. 289-337. Available on jstor.org.