STAT 801: Mathematical Statistics. Hypothesis Testing
- Wilfrid Stevenson
- 6 years ago
Hypothesis testing: a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two hypotheses: H0: θ ∈ Θ0 or H1: θ ∈ Θ1, where Θ0 and Θ1 are a partition of the model Pθ; θ ∈ Θ. That is, Θ0 ∪ Θ1 = Θ and Θ0 ∩ Θ1 = ∅.

A rule for making the required choice can be described in two ways:

1. In terms of the set R = {X : we choose Θ1 if we observe X}, called the rejection or critical region of the test.

2. In terms of a function φ(x) which is equal to 1 for those x for which we choose Θ1 and 0 for those x for which we choose Θ0.

For technical reasons which will come up soon I prefer to use the second description. However, each φ corresponds to a unique rejection region Rφ = {x : φ(x) = 1}.

The Neyman–Pearson approach treats the two hypotheses asymmetrically. The hypothesis H0 is referred to as the null hypothesis (traditionally the hypothesis that some treatment has no effect).

Definition: The power function of a test φ (or of the corresponding critical region Rφ) is

π(θ) = Pθ(X ∈ Rφ) = Eθ(φ(X)).

We are interested in optimality theory, that is, the problem of finding the best φ. A good φ will evidently have π(θ) small for θ ∈ Θ0 and large for θ ∈ Θ1. There is generally a trade-off, however, which can be made in many ways.

Simple versus simple testing

Finding a best test is easiest when the hypotheses are very precise.

Definition: A hypothesis Hi is simple if Θi contains only a single value θi.

The simple versus simple testing problem arises when we test θ = θ0 against θ = θ1, so that Θ has only two points in it. This problem is of importance as a technical tool, not because it is a realistic situation.

Suppose that the model specifies that if θ = θ0 then the density of X is f0(x), and if θ = θ1 then the density of X is f1(x). How should we choose φ? To answer the question we begin by studying the problem of minimizing the total error probability.
Type I error: the error made when θ = θ0 but we choose H1, that is, X ∈ Rφ. Type II error: the error made when θ = θ1 but we choose H0.

The level of a simple versus simple test is α = Pθ0(we make a Type I error), that is,

α = Pθ0(X ∈ Rφ) = Eθ0(φ(X)).

The other error probability, denoted β, is

β = Pθ1(X ∉ Rφ) = Eθ1(1 − φ(X)).

We first minimize α + β, the total error probability, given by

α + β = Eθ0(φ(X)) + Eθ1(1 − φ(X)) = ∫ [φ(x) f0(x) + (1 − φ(x)) f1(x)] dx.
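The minimization of α + β is easy to check numerically. The sketch below (my own illustration, not from the notes) uses the Binomial(5, p) setup that appears later, testing p = 1/2 against p = 3/4, and brute-forces every rejection region to confirm that {x : f1(x) > f0(x)} minimizes the total error probability.

```python
from itertools import combinations
from math import comb

# Assumed illustrative setup: X ~ Binomial(5, p), f0 has p = 1/2, f1 has p = 3/4.
n, p0, p1 = 5, 0.5, 0.75

def pmf(x, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# The rule minimizing alpha + beta rejects exactly where f1(x) > f0(x).
best = {x for x in range(n + 1) if pmf(x, p1) > pmf(x, p0)}
total_error = (sum(pmf(x, p0) for x in best)
               + sum(pmf(x, p1) for x in range(n + 1) if x not in best))

# Brute force over all 2^6 rejection regions confirms no region does better.
totals = [sum(pmf(x, p0) for x in R)
          + sum(pmf(x, p1) for x in range(n + 1) if x not in R)
          for k in range(n + 2)
          for R in combinations(range(n + 1), k)]
```

Here the likelihood-ratio region turns out to be {4, 5}, since f1(x)/f0(x) = 3^x/32 exceeds 1 exactly when x ≥ 4.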
Problem: choose, for each x, either the value 0 or the value 1, in such a way as to minimize the integral. But for each x the quantity φ(x) f0(x) + (1 − φ(x)) f1(x) is between f0(x) and f1(x). To make it small we take φ(x) = 1 if f1(x) > f0(x) and φ(x) = 0 if f1(x) < f0(x). It makes no difference what we do for those x for which f1(x) = f0(x). Notice: divide both sides of these inequalities by f0(x) to get the condition in terms of the likelihood ratio f1(x)/f0(x).

Theorem: For each fixed λ the quantity β + λα is minimized by any φ which has

φ(x) = 1 if f1(x)/f0(x) > λ and φ(x) = 0 if f1(x)/f0(x) < λ.

Neyman and Pearson suggested that in practice the two kinds of errors might well have unequal consequences. They suggested that rather than minimize any quantity of the form above you pick the more serious kind of error, label it Type I, and require your rule to hold the probability α of a Type I error to be no more than some prespecified level α0. (This value α0 is typically 0.05 these days, chiefly for historical reasons.) The Neyman–Pearson approach is then to minimize β subject to the constraint α ≤ α0. Usually this is really equivalent to the constraint α = α0 (because if you use α < α0 you could make R larger, keep α ≤ α0 and make β smaller). For discrete models, however, this may not be possible.

Example: Suppose X is Binomial(n, p) and either p = p0 = 1/2 or p = p1 = 3/4. If R is any critical region (so R is a subset of {0, 1, ..., n}) then P1/2(X ∈ R) = k/2^n for some integer k. If we want α0 = 0.05 with, say, n = 5, we have to recognize that the possible values of α are 0, 1/32 = 0.03125, 2/32 = 0.0625 and so on.

Possible rejection regions for α0 = 0.05:

Region         α      β
R1 = ∅         0      1
R2 = {x = 0}   1/32   1 − (1/4)^5
R3 = {x = 5}   1/32   1 − (3/4)^5

So R3 minimizes β subject to α ≤ 0.05. Raise α0 slightly to 0.0625: the possible rejection regions are R1, R2, R3 and R4 = R2 ∪ R3. The first three have the same α and β as before while R4 has α = α0 = 0.0625 and β = 1 − (3/4)^5 − (1/4)^5. Thus R4 is optimal!
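The table above can be reproduced by an exhaustive search. The following sketch (my own code, with the tolerance constant an implementation detail) finds the rejection region minimizing β among all regions with level at most α0, for the Binomial(5, p) example.

```python
from itertools import combinations
from math import comb

# Binomial(5, p) example from the notes: p0 = 1/2, p1 = 3/4.
n, p0, p1 = 5, 0.5, 0.75

def pmf(x, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

def best_region(alpha0):
    # Exhaustively search all 2^6 rejection regions with level <= alpha0,
    # keeping the one with the smallest Type II error probability.
    best, best_beta = None, None
    for k in range(n + 2):
        for R in combinations(range(n + 1), k):
            alpha = sum(pmf(x, p0) for x in R)
            if alpha <= alpha0 + 1e-12:
                beta = 1 - sum(pmf(x, p1) for x in R)
                if best_beta is None or beta < best_beta:
                    best, best_beta = set(R), beta
    return best, best_beta
```

With alpha0 = 0.05 the search returns R3 = {5}; raising alpha0 to 2/32 returns R4 = {0, 5}, matching the discussion above.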
Problem: if all trials are failures, the optimal R chooses p = 3/4 rather than p = 1/2. But p = 1/2 makes 5 failures much more likely than does p = 3/4.

Problem: discreteness. Here's how we get around the problem. First we expand the set of possible values of φ to include numbers between 0 and 1. Values of φ(x) between 0 and 1 represent the chance that we choose H1 given that we observe x; the idea is that we actually toss a (biased) coin to decide! This tactic will show us the kinds of rejection regions which are sensible. In practice we then restrict our attention to levels α0 for which the best φ is always either 0 or 1. In the binomial example we will insist that the value of α0 be either 0 or Pθ0(X ≥ 5) or Pθ0(X ≥ 4) or ...

Smaller example: with n = 3 there are 4 possible values of X and 2^4 possible rejection regions. Here is a table of the levels for each possible rejection region R:

R                               α
∅                               0
{3}, {0}                        1/8
{0,3}                           2/8
{1}, {2}                        3/8
{0,1}, {0,2}, {1,3}, {2,3}      4/8
{0,1,3}, {0,2,3}                5/8
{1,2}                           6/8
{0,1,2}, {1,2,3}                7/8
{0,1,2,3}                       1
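For this n = 3 example, the gain from randomization can be checked directly. The sketch below (my own code, again with p0 = 1/2 and p1 = 3/4) compares the best non-randomized level-2/8 test, which rejects on {0, 3}, with the randomized test that rejects outright when X = 3 and rejects with probability 1/3 when X = 2.

```python
from math import comb

# n = 3 example: p0 = 1/2 vs p1 = 3/4.
n, p0, p1 = 3, 0.5, 0.75

def pmf(x, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Best non-randomized level-2/8 test: reject on {0, 3}.
beta_nonrand = 1 - (pmf(0, p1) + pmf(3, p1))

# Randomized test: reject X = 3 outright; if X = 2, reject with probability 1/3.
phi = {0: 0.0, 1: 0.0, 2: 1 / 3, 3: 1.0}
level = sum(phi[x] * pmf(x, p0) for x in range(n + 1))
beta_rand = 1 - sum(phi[x] * pmf(x, p1) for x in range(n + 1))
```

Both tests have level 2/8, but the randomized test has β = 28/64 against 36/64 for the non-randomized one.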
Best level 2/8 test has rejection region {0, 3}: β = 1 − [(3/4)^3 + (1/4)^3] = 36/64. The best level 2/8 test using randomization rejects when X = 3 and, when X = 2, tosses a coin with P(H) = 1/3, then rejects if you get H. Its level is 1/8 + (1/3)(3/8) = 2/8; its probability of a Type II error is β = 1 − [(3/4)^3 + (1/3)(3)(3/4)^2(1/4)] = 28/64.

Definition: A hypothesis test is a function φ(x) whose values are always in [0, 1]. If we observe X = x then we choose H1 with conditional probability φ(x). In this case we have

π(θ) = Eθ(φ(X)), α = E0(φ(X)) and β = 1 − E1(φ(X)).

Note that a test using a rejection region C is equivalent to φ(x) = 1(x ∈ C).

The Neyman–Pearson Lemma: In testing f0 against f1 the probability β of a Type II error is minimized, subject to α ≤ α0, by the test function

φ(x) = 1 if f1(x)/f0(x) > λ, γ if f1(x)/f0(x) = λ, and 0 if f1(x)/f0(x) < λ,

where λ is the largest constant such that

P0(f1(X)/f0(X) ≥ λ) ≥ α0 and P0(f1(X)/f0(X) ≤ λ) ≥ 1 − α0,

and where γ is any number chosen so that

E0(φ(X)) = P0(f1(X)/f0(X) > λ) + γ P0(f1(X)/f0(X) = λ) = α0.

The value of γ is unique if P0(f1(X)/f0(X) = λ) > 0.

Example: Binomial(n, p) with p0 = 1/2 and p1 = 3/4: the ratio f1/f0 is 3^x 2^{-n}. If n = 5 this ratio is one of 1, 3, 9, 27, 81, 243 divided by 32. Suppose α0 = 0.05. λ must be one of the possible values of f1/f0. If we try λ = 243/32 then

P0(3^X 2^{-5} ≥ 243/32) = P0(X = 5) = 1/32 < 0.05

while

P0(3^X 2^{-5} ≥ 81/32) = P0(X ≥ 4) = 6/32 > 0.05.

So λ = 81/32.
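The lemma's recipe for λ and γ can be implemented directly. The following sketch (my own code) does so for the Binomial(5, p) example; here the second condition on λ in the lemma holds automatically once the first does, so only the first is checked.

```python
from math import comb

# Binomial(5, p) example: p0 = 1/2 vs p1 = 3/4.
n, p0, p1 = 5, 0.5, 0.75

def pmf(x, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Likelihood ratio f1(x)/f0(x); here it equals 3^x / 32.
ratio = {x: pmf(x, p1) / pmf(x, p0) for x in range(n + 1)}

def np_test(alpha0):
    # lambda: the largest attainable value with P0(ratio >= lambda) >= alpha0.
    lam = max(l for l in set(ratio.values())
              if sum(pmf(x, p0) for x in ratio if ratio[x] >= l) >= alpha0)
    p_gt = sum(pmf(x, p0) for x in ratio if ratio[x] > lam)
    p_eq = sum(pmf(x, p0) for x in ratio if ratio[x] == lam)
    # gamma: randomization probability making the level exactly alpha0.
    gamma = 0.0 if p_eq == 0 else (alpha0 - p_gt) / p_eq
    return lam, gamma
```

For alpha0 = 0.05 this returns λ = 81/32 and γ = 0.12, matching the worked computation that follows.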
Since

P0(3^X 2^{-5} > 81/32) = P0(X = 5) = 1/32,

we must solve

P0(X = 5) + γ P0(X = 4) = 0.05

for γ and find

γ = (0.05 − 1/32)/(5/32) = 0.12.

NOTE: No-one ever uses this procedure. Instead the value of α0 used in discrete problems is chosen to be a possible value of the rejection probability when γ = 0 (or γ = 1). When the sample size is large you can come very close to any desired α0 with a non-randomized test.

If α0 = 6/32 then we can either take λ = 27/32 with γ = 0 or λ = 81/32 with γ = 1. However, our definition of λ in the theorem makes λ = 81/32 and γ = 1. When the theorem is used for continuous distributions it can be the case that the cdf of f1(X)/f0(X) has a flat spot where it is equal to 1 − α0. This is the point of the word "largest" in the theorem.

Example: If X1, ..., Xn are iid N(µ, 1) and we have µ0 = 0 and µ1 > 0 then

f1(X1, ..., Xn)/f0(X1, ..., Xn) = exp{µ1 ΣXi − nµ1²/2 − µ0 ΣXi + nµ0²/2},

which simplifies to

exp{µ1 ΣXi − nµ1²/2}.

Now choose λ so that

P0(exp{µ1 ΣXi − nµ1²/2} > λ) = α0.

We can make this exactly equal because f1(X)/f0(X) has a continuous distribution. Rewrite the probability as

P0(ΣXi > [log(λ) + nµ1²/2]/µ1) = 1 − Φ([log(λ) + nµ1²/2]/[n^{1/2} µ1]).

Let zα be the upper α critical point of N(0, 1); setting this probability equal to α0 gives

zα0 = [log(λ) + nµ1²/2]/[n^{1/2} µ1].

Solve to get a formula for λ in terms of zα0, n and µ1. The rejection region looks complicated: reject if a complicated statistic is larger than a λ which has a complicated formula. But in calculating λ we re-expressed the rejection region in terms of

ΣXi > zα0 n^{1/2}.

The key feature is that this rejection region is the same for any µ1 > 0. [WARNING: in the algebra above I used µ1 > 0.] This is why the Neyman–Pearson lemma is a lemma!

Definition: In the general problem of testing Θ0 against Θ1 the level of a test function φ is

α = sup{Eθ(φ(X)) : θ ∈ Θ0}.

The power function is

π(θ) = Eθ(φ(X)).

A test φ is a Uniformly Most Powerful level α0 test if
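The algebra relating λ to zα0 can be checked numerically. The sketch below (my own code; n and µ1 are illustrative choices, not from the notes) computes λ from the formula above and verifies that the likelihood-ratio cutoff and the simple cutoff ΣXi > zα0 √n describe the same region.

```python
from statistics import NormalDist
from math import exp, sqrt

# Illustrative choices (assumed): n observations of N(mu, 1), mu0 = 0, mu1 > 0.
n, mu1, alpha0 = 25, 0.4, 0.05
z = NormalDist().inv_cdf(1 - alpha0)             # upper alpha0 point of N(0,1)
lam = exp(sqrt(n) * mu1 * z - n * mu1 ** 2 / 2)  # solve z = [log(lam) + n*mu1^2/2]/(sqrt(n)*mu1)

def lr(s):
    # Likelihood ratio as a function of s = sum of the Xi (with mu0 = 0).
    return exp(mu1 * s - n * mu1 ** 2 / 2)

s_cut = z * sqrt(n)  # the simple cutoff for sum(Xi)
```

Since ΣXi has the N(0, n) distribution under the null, the region {ΣXi > z √n} has probability exactly α0 under µ = 0.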
1. φ has level α ≤ α0, and

2. if φ′ has level α′ ≤ α0 then for every θ ∈ Θ1 we have

Eθ(φ′(X)) ≤ Eθ(φ(X)).

Proof of the Neyman–Pearson lemma: Given a test φ with level strictly less than α0 we can define the test

φ′(x) = [(1 − α0)/(1 − α)] φ(x) + (α0 − α)/(1 − α),

which has level α0 and β smaller than that of φ. Hence we may assume without loss of generality that α = α0 and minimize β subject to α = α0. However, the argument which follows doesn't actually need this.

Lagrange multipliers: Suppose you want to minimize f(x) subject to g(x) = 0. Consider first the function

h_λ(x) = f(x) + λ g(x).

If x_λ minimizes h_λ then for any other x

f(x_λ) ≤ f(x) + λ[g(x) − g(x_λ)].

Now suppose you can find a value of λ such that the solution x_λ has g(x_λ) = 0. Then for any x we have

f(x_λ) ≤ f(x) + λ g(x),

and for any x satisfying the constraint g(x) = 0 we have

f(x_λ) ≤ f(x).

This proves that for this special value of λ the quantity x_λ minimizes f(x) subject to g(x) = 0. Notice that to find x_λ you set the usual partial derivatives equal to 0; then to find the special x_λ you add in the condition g(x_λ) = 0.

Return to the proof of the NP lemma: For each λ > 0 we have seen that λα + β is minimized by φ_λ = 1(f1(x)/f0(x) ≥ λ). As λ increases the level of φ_λ decreases from 1 when λ = 0 to 0 when λ = ∞. There is thus a value λ0 where for λ < λ0 the level is larger than α0 while for λ > λ0 the level is no more than α0. Temporarily let δ = P0(f1(X)/f0(X) = λ0). If δ = 0 define φ = φ_{λ0}. If δ > 0 define

φ(x) = 1 if f1(x)/f0(x) > λ0, γ if f1(x)/f0(x) = λ0, and 0 if f1(x)/f0(x) < λ0,

where P0(f1(X)/f0(X) > λ0) + γδ = α0. You can check that γ ∈ [0, 1]. Now φ has level α0 and, according to the theorem above, minimizes λ0 α + β. Suppose φ′ is some other test with level α′ ≤ α0. Then

λ0 α_φ + β_φ ≤ λ0 α_φ′ + β_φ′.

We can rearrange this as

β_φ′ ≥ β_φ + (α_φ − α_φ′) λ0.

Since

α_φ′ ≤ α0 = α_φ,
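The key step, that φ_λ minimizes λα + β over all tests, can be verified by brute force in a discrete example. The sketch below (my own code; the choice λ = 2 is arbitrary) checks this for the Binomial(5, p) problem used throughout.

```python
from itertools import combinations
from math import comb

# Binomial(5, p) example: p0 = 1/2 vs p1 = 3/4.
n, p0, p1 = 5, 0.5, 0.75

def pmf(x, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

ratio = {x: pmf(x, p1) / pmf(x, p0) for x in range(n + 1)}

def cost(R, lam):
    # lam * alpha + beta for the rejection region R.
    alpha = sum(pmf(x, p0) for x in R)
    beta = 1 - sum(pmf(x, p1) for x in R)
    return lam * alpha + beta

lam = 2.0
R_lam = {x for x in range(n + 1) if ratio[x] >= lam}
all_costs = [cost(set(R), lam) for k in range(n + 2)
             for R in combinations(range(n + 1), k)]
```

No rejection region achieves a smaller value of λα + β than the likelihood-ratio region R_lam.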
the second term is non-negative and β_φ′ ≥ β_φ, which proves the Neyman–Pearson Lemma.

Example application of the NP lemma: Binomial(n, p), testing p = p0 against p = p1 for a p1 > p0. The NP test is of the form

φ(x) = 1(X > k) + γ 1(X = k),

where we choose k so that

Pp0(X > k) ≤ α0 < Pp0(X ≥ k)

and γ ∈ [0, 1) so that

α0 = Pp0(X > k) + γ Pp0(X = k).

This rejection region depends only on p0 and not on p1, so this test is UMP for p = p0 against p > p0. Since this test has level α0 even for the larger null hypothesis, it is also UMP for p ≤ p0 against p > p0.

Application of the NP lemma: In the N(µ, 1) model consider Θ1 = {µ : µ > 0} and Θ0 = {0} or Θ0 = {µ : µ ≤ 0}. The UMP level α0 test of H0: µ ∈ Θ0 against H1: µ ∈ Θ1 is

φ(X1, ..., Xn) = 1(n^{1/2} X̄ > zα0).

Proof: For either choice of Θ0 this test has level α0 because for µ ≤ 0 we have

Pµ(n^{1/2} X̄ > zα0) = Pµ(n^{1/2}(X̄ − µ) > zα0 − n^{1/2}µ) = P(N(0, 1) > zα0 − n^{1/2}µ) ≤ P(N(0, 1) > zα0) = α0.

(Notice the use of µ ≤ 0. The central point is that the critical point is determined by the behaviour on the edge of the null hypothesis.) Now if φ′ is any other level α0 test then we have

E0(φ′(X1, ..., Xn)) ≤ α0.

Fix a µ > 0. According to the NP lemma,

Eµ(φ′(X1, ..., Xn)) ≤ Eµ(φµ(X1, ..., Xn)),

where φµ rejects if

fµ(X1, ..., Xn)/f0(X1, ..., Xn) > λ

for a suitable λ. But we just checked that this test has a rejection region of the form

n^{1/2} X̄ > zα0,

which is the rejection region of φ. The NP lemma produces the same test for every µ > 0 chosen as an alternative. So we have shown that φµ = φ for any µ > 0, and φ is UMP.

This phenomenon is somewhat general. What happened was this. For any µ > µ0 the likelihood ratio fµ/fµ0 is an increasing function of ΣXi. The rejection region of the NP test is thus always a region of the form ΣXi > k. The value of the constant k is determined by the requirement that the test have level α0, and this depends on µ0, not on µ1.
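The power function of this UMP test has a closed form, π(µ) = P(N(0, 1) > zα0 − √n µ), so both claims in the proof, that the level over µ ≤ 0 is attained at the edge µ = 0 and that the power exceeds α0 on the alternative, are easy to check. A sketch (my own code; n = 16 is an illustrative choice):

```python
from statistics import NormalDist
from math import sqrt

# Illustrative choice (assumed): n = 16 observations of N(mu, 1).
n, alpha0 = 16, 0.05
z = NormalDist().inv_cdf(1 - alpha0)

def power(mu):
    # pi(mu) = P(N(0,1) > z - sqrt(n) * mu) for the test 1(sqrt(n) * Xbar > z).
    return 1 - NormalDist().cdf(z - sqrt(n) * mu)
```

The function is increasing in µ, equals α0 at µ = 0, falls below α0 for µ < 0 and rises above it for µ > 0.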
Definition: The family {fθ; θ ∈ Θ}, Θ ⊂ R, has monotone likelihood ratio with respect to a statistic T(X) if for each θ1 > θ0 the likelihood ratio fθ1(X)/fθ0(X) is a monotone increasing function of T(X).

Theorem: For a family with monotone likelihood ratio in T(X), the Uniformly Most Powerful level α0 test of θ ≤ θ0 (or of θ = θ0) against the alternative θ > θ0 is

φ(x) = 1 if T(x) > tα, γ if T(x) = tα, and 0 if T(x) < tα,
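The Poisson family is a standard example with monotone likelihood ratio in T(X) = X for a single observation. The sketch below (my own illustration; θ0 = 2 and α0 = 0.05 are assumed values, not from the notes) computes the cutoff tα and randomization probability γ of the UMP test described by the theorem.

```python
from math import exp, factorial

# Assumed illustration: one observation X ~ Poisson(theta), MLR in T(X) = X.
theta0, alpha0 = 2.0, 0.05

def pmf(x):
    return exp(-theta0) * theta0 ** x / factorial(x)

def ump_cutoff(alpha0):
    # Smallest t with P_{theta0}(X > t) <= alpha0, then gamma to hit alpha0 exactly.
    for t in range(200):
        tail = 1 - sum(pmf(x) for x in range(t + 1))
        if tail <= alpha0:
            gamma = (alpha0 - tail) / pmf(t)
            return t, gamma
```

The returned test rejects outright for X > tα and rejects with probability γ when X = tα, so its level is exactly α0 at θ = θ0.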
where

Pθ0(T(X) > tα) + γ Pθ0(T(X) = tα) = α0.

A typical family where this works is a one-parameter exponential family. Usually there is no UMP test.

Example: test µ = µ0 against the two-sided alternative µ ≠ µ0. There is no UMP level α test. If there were, its power at µ > µ0 would have to be as high as that of the one-sided level α test, and so its rejection region would have to be the same as that test's, rejecting for large positive values of X̄ − µ0. But it would also have to have power as good as the one-sided test for the alternative µ < µ0, and so would have to reject for large negative values of X̄ − µ0. This would make its level too large.

The favourite test, the usual two-sided test, rejects for large values of |X̄ − µ0|. This test maximizes power subject to two constraints: first, it has level α; second, its power is minimized at µ = µ0. The second condition means the power on the alternative is larger than on the null.
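The argument that no UMP test exists for the two-sided problem can be illustrated numerically: each one-sided test beats the two-sided test on its own side of µ0, so no single test dominates everywhere. A sketch (my own code; n = 16 and µ0 = 0 are assumed for illustration):

```python
from statistics import NormalDist
from math import sqrt

# Assumed setup: n observations of N(mu, 1), testing mu = 0 two-sided at level alpha0.
n, alpha0 = 16, 0.05
Z = NormalDist()
z_one = Z.inv_cdf(1 - alpha0)       # one-sided cutoff
z_two = Z.inv_cdf(1 - alpha0 / 2)   # two-sided cutoff

def power_one_sided(mu):
    # Reject when sqrt(n) * Xbar > z_one.
    return 1 - Z.cdf(z_one - sqrt(n) * mu)

def power_two_sided(mu):
    # Reject when |sqrt(n) * Xbar| > z_two.
    return (1 - Z.cdf(z_two - sqrt(n) * mu)) + Z.cdf(-z_two - sqrt(n) * mu)
```

At µ > 0 the right-sided test is more powerful than the two-sided test; at µ < 0 the two-sided test wins (the right-sided test's power there falls below α), and by symmetry the left-sided test beats both on that side.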
VALUATIVE CRITERIA BRIAN OSSERMAN Intuitively, one can think o separatedness as (a relative version o) uniqueness o limits, and properness as (a relative version o) existence o (unique) limits. It is not
More informationBasic mathematics of economic models. 3. Maximization
John Riley 1 January 16 Basic mathematics o economic models 3 Maimization 31 Single variable maimization 1 3 Multi variable maimization 6 33 Concave unctions 9 34 Maimization with non-negativity constraints
More informationDefinition: Let f(x) be a function of one variable with continuous derivatives of all orders at a the point x 0, then the series.
2.4 Local properties o unctions o several variables In this section we will learn how to address three kinds o problems which are o great importance in the ield o applied mathematics: how to obtain the
More informationChristoffel symbols and Gauss Theorema Egregium
Durham University Pavel Tumarkin Epiphany 207 Dierential Geometry III, Solutions 5 (Week 5 Christoel symbols and Gauss Theorema Egregium 5.. Show that the Gauss curvature K o the surace o revolution locally
More informationCSE 312 Final Review: Section AA
CSE 312 TAs December 8, 2011 General Information General Information Comprehensive Midterm General Information Comprehensive Midterm Heavily weighted toward material after the midterm Pre-Midterm Material
More informationTelescoping Decomposition Method for Solving First Order Nonlinear Differential Equations
Telescoping Decomposition Method or Solving First Order Nonlinear Dierential Equations 1 Mohammed Al-Reai 2 Maysem Abu-Dalu 3 Ahmed Al-Rawashdeh Abstract The Telescoping Decomposition Method TDM is a new
More informationSEPARATED AND PROPER MORPHISMS
SEPARATED AND PROPER MORPHISMS BRIAN OSSERMAN The notions o separatedness and properness are the algebraic geometry analogues o the Hausdor condition and compactness in topology. For varieties over the
More information2. ETA EVALUATIONS USING WEBER FUNCTIONS. Introduction
. ETA EVALUATIONS USING WEBER FUNCTIONS Introduction So ar we have seen some o the methods or providing eta evaluations that appear in the literature and we have seen some o the interesting properties
More information14.30 Introduction to Statistical Methods in Economics Spring 2009
MIT OpenCourseWare http://ocw.mit.edu.30 Introduction to Statistical Methods in Economics Spring 009 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. .30
More informationLecture notes on statistical decision theory Econ 2110, fall 2013
Lecture notes on statistical decision theory Econ 2110, fall 2013 Maximilian Kasy March 10, 2014 These lecture notes are roughly based on Robert, C. (2007). The Bayesian choice: from decision-theoretic
More information2.6 Two-dimensional continuous interpolation 3: Kriging - introduction to geostatistics. References - geostatistics. References geostatistics (cntd.
.6 Two-dimensional continuous interpolation 3: Kriging - introduction to geostatistics Spline interpolation was originally developed or image processing. In GIS, it is mainly used in visualization o spatial
More information6.4 Type I and Type II Errors
6.4 Type I and Type II Errors Ulrich Hoensch Friday, March 22, 2013 Null and Alternative Hypothesis Neyman-Pearson Approach to Statistical Inference: A statistical test (also known as a hypothesis test)
More informationMath-3 Lesson 8-5. Unit 4 review: a) Compositions of functions. b) Linear combinations of functions. c) Inverse Functions. d) Quadratic Inequalities
Math- Lesson 8-5 Unit 4 review: a) Compositions o unctions b) Linear combinations o unctions c) Inverse Functions d) Quadratic Inequalities e) Rational Inequalities 1. Is the ollowing relation a unction
More informationThe basics of frame theory
First version released on 30 June 2006 This version released on 30 June 2006 The basics o rame theory Harold Simmons The University o Manchester hsimmons@ manchester.ac.uk This is the irst part o a series
More informationSpring 2012 Math 541B Exam 1
Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote
More informationTheory of Statistical Tests
Ch 9. Theory of Statistical Tests 9.1 Certain Best Tests How to construct good testing. For simple hypothesis H 0 : θ = θ, H 1 : θ = θ, Page 1 of 100 where Θ = {θ, θ } 1. Define the best test for H 0 H
More informationA NOTE ON HENSEL S LEMMA IN SEVERAL VARIABLES
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 125, Number 11, November 1997, Pages 3185 3189 S 0002-9939(97)04112-9 A NOTE ON HENSEL S LEMMA IN SEVERAL VARIABLES BENJI FISHER (Communicated by
More information8 Testing of Hypotheses and Confidence Regions
8 Testing of Hypotheses and Confidence Regions There are some problems we meet in statistical practice in which estimation of a parameter is not the primary goal; rather, we wish to use our data to decide
More informationsimple if it completely specifies the density of x
3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely
More informationECE531 Lecture 4b: Composite Hypothesis Testing
ECE531 Lecture 4b: Composite Hypothesis Testing D. Richard Brown III Worcester Polytechnic Institute 16-February-2011 Worcester Polytechnic Institute D. Richard Brown III 16-February-2011 1 / 44 Introduction
More informationSEPARATED AND PROPER MORPHISMS
SEPARATED AND PROPER MORPHISMS BRIAN OSSERMAN Last quarter, we introduced the closed diagonal condition or a prevariety to be a prevariety, and the universally closed condition or a variety to be complete.
More informationTesting Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata
Maura Department of Economics and Finance Università Tor Vergata Hypothesis Testing Outline It is a mistake to confound strangeness with mystery Sherlock Holmes A Study in Scarlet Outline 1 The Power Function
More informationOn the GLR and UMP tests in the family with support dependent on the parameter
STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 3, September 2015, pp 221 228. Published online in International Academic Press (www.iapress.org On the GLR and UMP tests
More informationObjectives. By the time the student is finished with this section of the workbook, he/she should be able
FUNCTIONS Quadratic Functions......8 Absolute Value Functions.....48 Translations o Functions..57 Radical Functions...61 Eponential Functions...7 Logarithmic Functions......8 Cubic Functions......91 Piece-Wise
More informationHypothesis Testing. BS2 Statistical Inference, Lecture 11 Michaelmas Term Steffen Lauritzen, University of Oxford; November 15, 2004
Hypothesis Testing BS2 Statistical Inference, Lecture 11 Michaelmas Term 2004 Steffen Lauritzen, University of Oxford; November 15, 2004 Hypothesis testing We consider a family of densities F = {f(x; θ),
More informationCLASS NOTES MATH 527 (SPRING 2011) WEEK 6
CLASS NOTES MATH 527 (SPRING 2011) WEEK 6 BERTRAND GUILLOU 1. Mon, Feb. 21 Note that since we have C() = X A C (A) and the inclusion A C (A) at time 0 is a coibration, it ollows that the pushout map i
More informationELEG 3143 Probability & Stochastic Process Ch. 4 Multiple Random Variables
Department o Electrical Engineering University o Arkansas ELEG 3143 Probability & Stochastic Process Ch. 4 Multiple Random Variables Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Two discrete random variables
More information2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)?
ECE 830 / CS 76 Spring 06 Instructors: R. Willett & R. Nowak Lecture 3: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we
More information