Composite tests
Chapter 5: Correction
I claimed that the above, which is the most general case, was captured by the below. This is not correct, since it is limited to the case that C_2 - C_1 is positive semi-definite. The slides have been corrected.
Chapter 6: UMP - Uniformly most powerful tests
Consider the case when the value of A is unknown, but assume A > 0.
UMP: a test that is optimal no matter the value of A (a concept similar to MVU in estimation theory).
Strategy to obtain a UMP test:
1. Design the test as if A were known.
2. Show that the test does not need knowledge of the value of A.
Chapter 6: UMP - Uniformly most powerful tests
Step 1: Design the test as if A is known.
- Cancel multiplicative constants.
- Remove the exponentials by taking the logarithm.
- Cancel the x^2[n] terms.
- Manipulate a bit, then scale.
The test statistic does not depend on A. The threshold seems to, but does not.
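A sketch of these steps, assuming the DC-level-in-WGN model x[n] = A + w[n] with w[n] ~ N(0, σ²) (the slide's own equations are not reproduced, so this follows the standard derivation for that model):

```latex
\frac{p(\mathbf{x};A,\mathcal{H}_1)}{p(\mathbf{x};\mathcal{H}_0)}
= \frac{\exp\!\left(-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right)}
       {\exp\!\left(-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}x^2[n]\right)} > \gamma
\;\Rightarrow\;
\frac{A}{\sigma^2}\sum_{n=0}^{N-1}x[n] - \frac{NA^2}{2\sigma^2} > \ln\gamma
\;\Rightarrow\;
\frac{1}{N}\sum_{n=0}^{N-1}x[n] > \frac{\sigma^2\ln\gamma}{NA} + \frac{A}{2} \equiv \gamma'
```

Taking the log cancels the constants and the x²[n] terms; dividing by NA/σ² is valid (without flipping the inequality) precisely because A > 0.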
Chapter 6: UMP - Uniformly most powerful tests
Step 2: Show that the test does not need knowledge of the value of A.
The threshold is set from P_FA alone; it does not depend on A.
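A sketch of the threshold calculation, assuming the DC-level-in-WGN model (under H_0 the sample mean is N(0, σ²/N)):

```latex
P_{FA} = \Pr\!\left(\bar{x} > \gamma';\,\mathcal{H}_0\right)
       = Q\!\left(\frac{\gamma'}{\sqrt{\sigma^2/N}}\right)
\;\Rightarrow\;
\gamma' = \sqrt{\frac{\sigma^2}{N}}\,Q^{-1}(P_{FA})
```

The threshold involves only P_FA, σ², and N — never A.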
Chapter 6: UMP - Uniformly most powerful tests
Compute P_D. Performance depends on A (even though the test itself does not).
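The P_D computation can be spot-checked numerically. A sketch, assuming the DC-level-in-WGN model x[n] = A + w[n], w[n] ~ N(0, σ²); the parameter values are illustrative:

```python
import numpy as np
from scipy.stats import norm

# Assumed model: x[n] = A + w[n], w[n] ~ N(0, sigma2), n = 0..N-1.
# UMP one-sided test (A > 0): decide H1 if the sample mean exceeds gamma.
rng = np.random.default_rng(0)
N, sigma2, A, Pfa = 10, 1.0, 1.0, 0.1        # illustrative values

gamma = np.sqrt(sigma2 / N) * norm.isf(Pfa)  # threshold: depends on Pfa only
# Closed form: P_D = Q( Q^{-1}(P_FA) - sqrt(N A^2 / sigma2) )
Pd_theory = norm.sf(norm.isf(Pfa) - np.sqrt(N * A**2 / sigma2))

# Monte Carlo check under H1
trials = 200_000
x1 = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
Pd_mc = np.mean(x1.mean(axis=1) > gamma)
print(Pd_theory, Pd_mc)  # the two agree to a few decimals
```

Note that P_D improves with A even though the detector never uses A: performance depends on the unknown parameter, the test does not.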
Chapter 6: UMP - Uniformly most powerful tests
Recap: a test is UMP if, for all possible values of the unknown parameter(s), it maximizes P_D for a given P_FA.
Chapter 6: One-sided vs. two-sided
Consider now: A < 0. Same steps as before.
Step 1: Design the test as if A is known. The next step was to divide by A; with A < 0 this flips the inequality, so we decide H_1 when the statistic falls below the threshold.
Chapter 6: One-sided vs. two-sided
This causes problems, since one test cannot cover both signs:
- For A > 0, decide H_1 if the statistic exceeds the threshold: a UMP exists (one-sided).
- For A < 0, decide H_1 if the statistic falls below the threshold (one-sided).
For the two-sided problem (sign of A unknown), a UMP does not exist. An educated guess would be to decide H_1 if the magnitude of the statistic exceeds a threshold. This will turn out to be well motivated by the GLRT that comes shortly.
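The educated two-sided guess can be sketched in code, assuming the DC-level-in-WGN model; splitting P_FA over the two tails sets the threshold:

```python
import numpy as np
from scipy.stats import norm

# Assumed model: x[n] = A + w[n], w[n] ~ N(0, sigma2), sign of A unknown.
# Two-sided test: decide H1 if |sample mean| > gamma.
rng = np.random.default_rng(1)
N, sigma2, Pfa = 10, 1.0, 0.1                    # illustrative values
gamma = np.sqrt(sigma2 / N) * norm.isf(Pfa / 2)  # half of Pfa in each tail

# Monte Carlo check of the false-alarm rate under H0 (A = 0)
trials = 200_000
x0 = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
Pfa_mc = np.mean(np.abs(x0.mean(axis=1)) > gamma)
print(Pfa_mc)  # close to the target Pfa
```

This test works for either sign of A, at the cost of being UMP for neither one-sided problem.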
Chapter 6: Karlin-Rubin Thm - A condition for UMP
If the likelihood ratio is monotonic in the test statistic T(x), and the hypotheses are one-sided in the parameter, then "decide H_1 if T(x) > γ" is UMP. This follows directly from the Neyman-Pearson theorem.
Chapter 6: Karlin-Rubin Thm - A condition for UMP
Application: the exponential family. If p(θ) is increasing, then the log-likelihood ratio is monotonic in T(x).
In our case (DC level), we have p(θ) = θ/σ², which is increasing.
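A sketch, assuming the common one-parameter exponential-family form (the notation p(θ) for the natural-parameter function follows the slide):

```latex
p(\mathbf{x};\theta) = h(\mathbf{x})\exp\big(p(\theta)\,T(\mathbf{x}) - q(\theta)\big)
\;\Rightarrow\;
\ln\frac{p(\mathbf{x};\theta_1)}{p(\mathbf{x};\theta_0)}
= \big(p(\theta_1)-p(\theta_0)\big)\,T(\mathbf{x}) - \big(q(\theta_1)-q(\theta_0)\big)
```

If p(θ) is increasing and θ_1 > θ_0, the coefficient of T(x) is positive, so the log-likelihood ratio is monotonically increasing in T(x) and the Karlin-Rubin condition holds.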
Chapter 6: Composite testing - Bayesian approach
With likelihoods containing unknown parameters, we can integrate away the unknowns using a prior.
A case that is very common and fully doable is x = Hθ + w, with known matrix H and a Gaussian prior on θ.
If the prior is unknown, use a non-informative one (see the estimation theory book).
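The integration step written out, assuming a prior p(θ) on the unknown parameters:

```latex
p(\mathbf{x};\mathcal{H}_i) = \int p(\mathbf{x}\,|\,\theta;\mathcal{H}_i)\,p(\theta)\,d\theta,
\qquad
\text{decide } \mathcal{H}_1 \text{ if } \frac{p(\mathbf{x};\mathcal{H}_1)}{p(\mathbf{x};\mathcal{H}_0)} > \gamma
```

Once the parameters are integrated out, both hypotheses are simple and the Neyman-Pearson likelihood ratio test applies as usual.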
Chapter 6: GLRT - Finite data records
The generalized likelihood ratio test is a heuristic for finite data records, but can be proven optimal asymptotically in the size of the data record:
L_G(x) = p(x; θ̂_1, H_1) / p(x; θ̂_0, H_0)
where θ̂_1 is the MLE of θ under H_1, and θ̂_0 is the MLE of θ under H_0.
Chapter 6: GLRT - Finite data records
Example: non-coherent detection. The GLRT replaces H with its ML estimate.
Chapter 6: GLRT - Finite data records
Example (DC level, unknown A): the GLRT is
L_G(x) = p(x; Â, H_1) / p(x; H_0) > γ.
But from estimation theory, we have that the MLE of A is Â = (1/N) Σ x[n] = x̄.
Substituting, taking logs, and simplifying gives: decide H_1 if |x̄| > γ'.
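A numerical sketch of this example (assumed model: DC level A in WGN with known variance σ²), verifying that the GLRT statistic reduces to the closed form N·x̄²/σ²:

```python
import numpy as np

# Assumed model: x[n] = A + w[n], w[n] ~ N(0, sigma2), A unknown.
def glrt_stat(x, sigma2):
    """2 * ln L_G(x): the GLRT with the MLE A_hat = mean(x) plugged in."""
    xbar = x.mean()
    ll1 = -np.sum((x - xbar) ** 2) / (2 * sigma2)  # log-lik. at A = A_hat
    ll0 = -np.sum(x ** 2) / (2 * sigma2)           # log-lik. under H0 (A = 0)
    return 2.0 * (ll1 - ll0)

rng = np.random.default_rng(2)
x = 0.5 + rng.normal(0.0, 1.0, size=100)
stat = glrt_stat(x, sigma2=1.0)
closed_form = len(x) * x.mean() ** 2 / 1.0
print(np.isclose(stat, closed_form))  # prints True
```

Since the statistic is N·x̄²/σ², thresholding it is equivalent to deciding H_1 when |x̄| exceeds a threshold, which is exactly the "educated guess" from the two-sided discussion.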
Chapter 6: GLRT - Large data records
"Large" in this case does not mean that we use Szegő and the Fourier transform; here we consider large N with independent measurements.
Two assumptions:
1. The signal is weak. This means that A is not enormous; reasonable, since otherwise the problem is simple.
2. The MLE attains its asymptotic form (from estimation theory).
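Assumption 2 written out (the standard asymptotic result from estimation theory):

```latex
\hat{\theta} \overset{a}{\sim} \mathcal{N}\!\left(\theta,\; \mathbf{I}^{-1}(\theta)\right)
```

i.e. the MLE is asymptotically Gaussian, unbiased, and attains the CRLB given by the inverse Fisher information.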
Chapter 6: GLRT - Large data records
Theorem setup:
- θ_r: the parameter vector to be detected; differs between H_0 and H_1.
- θ_s: nuisance parameters; equal under H_0 and H_1 (e.g. the noise variance).
Hypotheses to test for: H_0: θ_r = θ_r0 vs. H_1: θ_r ≠ θ_r0, with θ_s unknown under both.
Definition of the GLRT: note that the MLEs of θ_s differ under H_0 and H_1.
Chapter 6: GLRT - Large data records
Theorem statement: asymptotically (large N),
- 2 ln L_G(x) ~ χ²_r under H_0 (chi-squared, r degrees of freedom),
- 2 ln L_G(x) ~ χ'²_r(λ) under H_1 (non-central chi-squared, r degrees of freedom),
with non-centrality parameter
λ = (θ_r1 − θ_r0)^T [ I_θrθr − I_θrθs I_θsθs^{-1} I_θsθr ] (θ_r1 − θ_r0),
where θ_r1 is the true value of θ_r under H_1, and the Fisher information blocks are evaluated at the true value of θ_s (under H_1/H_0).
The Fisher information matrix does not depend on H_0 or H_1; think of it like this: given x, what is the Fisher information for (θ_r, θ_s)?
Chapter 6: GLRT - Large data records
Remarks on the theorem:
- The term I_θrθs I_θsθs^{-1} I_θsθr cancels when there are no nuisance parameters.
- Since the Fisher information is positive definite, λ is degraded by nuisance parameters.
- A larger λ separates the pdfs more, thus better P_D with larger λ. So, not surprisingly, nuisance parameters degrade our detection capability.
- Note: the test is still difficult in practice, since it is still given by the GLRT and we need to find the MLEs.
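The theorem can be spot-checked on the DC-level example (r = 1, no nuisance parameters), where 2 ln L_G = N·x̄²/σ²; under H_0 this should follow a chi-squared distribution with 1 degree of freedom:

```python
import numpy as np
from scipy.stats import chi2

# Assumed model under H0: x[n] = w[n], w[n] ~ N(0, sigma2).
rng = np.random.default_rng(3)
N, sigma2, trials = 100, 1.0, 100_000
x0 = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
stat = N * x0.mean(axis=1) ** 2 / sigma2       # 2 ln L_G under H0

# Empirical tail probability vs. the chi-squared (1 DoF) reference
thresh = chi2.isf(0.05, df=1)                  # 95% point of chi2 with 1 DoF
tail_mc = np.mean(stat > thresh)
print(tail_mc)  # close to 0.05
```

For this Gaussian example the chi-squared result is in fact exact for any N, not just asymptotically; in general the theorem only promises agreement for large N.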