1 Hypothesis Testing

Definition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution.

Suppose the family of population distributions is indexed by the d-dimensional vector θ ∈ Θ ⊆ ℝᵈ. We shall deal with hypotheses of the form θ ∈ Θ₀ ⊆ Θ.

3-1
2 Statistical Hypotheses

Definition 3.2 A statistical test requires two hypotheses. The hypothesis to be tested is called the null hypothesis, H₀. The alternative hypothesis, H₁, is the hypothesis which will be accepted as true if we conclude that H₀ is false.

Definition 3.3 If a statistical hypothesis completely specifies the population distribution then it is called a simple hypothesis; otherwise it is called a composite hypothesis.

3-2
3 The Critical Region

Definition 3.4 The critical region (or rejection region) associated with a statistical test is a subset of the sample space such that we reject the null hypothesis in favour of the alternative if, and only if, the observed sample falls within this set. Usually the critical region is specified in terms of a test statistic.

3-3
4 Hypothesis Testing Procedure

1. Specify the null hypothesis, H₀, which will be tested.
2. Specify the alternative hypothesis, H₁.
3. Specify the test statistic which will be used to test the hypothesis and define the critical region for the test.
4. Collect the data.
5. Reject H₀ if the observed value of the test statistic lies in the critical region; otherwise conclude that we cannot reject H₀.

3-4
5 The Null Hypothesis

H₀ is generally the sceptical hypothesis. Hypothesis testing can be thought of as the search for evidence against H₀ in favour of H₁. We generally do not conclude that H₀ is true; rather, we conclude that there is insufficient evidence to prove it false. It is generally thought to be worse to declare H₀ false when it is not than to fail to reject H₀ when it is actually false. For this reason we generally try to limit the probability of rejecting H₀ when it is true.

3-5
6 Size of a Test

Definition 3.5 Suppose that we are testing H₀: θ ∈ Θ₀ versus H₁: θ ∉ Θ₀ and that our rejection region is of the form {x : x ∈ C}. Then the size of the rejection region (or test) is defined to be

α = sup_{θ ∈ Θ₀} P_θ(X ∈ C)

The size of the test is the highest probability of rejecting H₀ when it is true. Generally we decide on a value of α and then find the set C such that the test has size α.

3-6
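As a concrete illustration of Definition 3.5 (an example added by the editor, not part of the original notes, with hypothetical numbers): test H₀: p ≤ 0.5 versus H₁: p > 0.5 for X ~ Binomial(10, p), rejecting when X ≥ 8. Since the upper tail probability increases with p, the supremum over Θ₀ is attained at the boundary p = 0.5, so the size can be computed directly:

```python
from math import comb

def binom_tail(n, p, c):
    # P_p(X >= c) for X ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

# The tail probability is increasing in p, so the supremum over p <= 0.5
# is attained at the boundary p = 0.5.
size = binom_tail(10, 0.5, 8)  # exact value 56/1024
```

Note that with a discrete test statistic only certain sizes are attainable, which is one reason the level of a test (introduced later in these notes) is a useful relaxation.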
7 Likelihood Ratio Tests

Definition 3.6 Suppose that x₁, ..., xₙ are the observed values of a random sample from a single-parameter distribution and we wish to test the simple null hypothesis H₀: θ = θ₀ versus H₁: θ ≠ θ₀. Let θ̂ be the maximum likelihood estimate of θ. Then the likelihood ratio test has rejection region

C = { x₁, ..., xₙ : L(θ₀ | x₁, ..., xₙ) / L(θ̂ | x₁, ..., xₙ) ≤ k }

where k is chosen such that P_{θ₀}(X ∈ C) = α.

3-7
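For a concrete case of Definition 3.6 (an illustration added by the editor, with hypothetical data): take a N(θ, σ²) sample with σ known. Since the MLE is θ̂ = x̄, the ratio simplifies to λ(x) = exp(−n(x̄ − θ₀)²/(2σ²)), so λ ≤ k is equivalent to |x̄ − θ₀| being large. A minimal sketch checking that the generic ratio and the closed form agree:

```python
import math

def normal_loglik(theta, xs, sigma=1.0):
    # log-likelihood of N(theta, sigma^2) for the sample xs
    n = len(xs)
    return (-n / 2 * math.log(2 * math.pi * sigma**2)
            - sum((x - theta)**2 for x in xs) / (2 * sigma**2))

def lrt_statistic(theta0, xs, sigma=1.0):
    # lambda(x) = L(theta0 | x) / L(theta_hat | x); theta_hat = xbar for the normal mean
    xbar = sum(xs) / len(xs)
    return math.exp(normal_loglik(theta0, xs, sigma) - normal_loglik(xbar, xs, sigma))

xs = [0.3, -0.1, 0.8, 0.5]  # hypothetical sample
lam = lrt_statistic(0.0, xs)
# closed form for the normal mean: lambda = exp(-n * xbar^2 / 2) when theta0 = 0
closed_form = math.exp(-len(xs) * (sum(xs) / len(xs))**2 / 2)
```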
8 Generalized Likelihood Ratio Test

Definition 3.7 Suppose that x₁, ..., xₙ are the observed values of a random sample from a distribution depending on the parameter vector θ ∈ Θ and we wish to test H₀: θ ∈ Θ₀ versus H₁: θ ∉ Θ₀. Then the generalized likelihood ratio test statistic is

λ(x) = sup_{θ ∈ Θ₀} L(θ | x₁, ..., xₙ) / sup_{θ ∈ Θ} L(θ | x₁, ..., xₙ) = L(θ̂₀ | x₁, ..., xₙ) / L(θ̂ | x₁, ..., xₙ)

where θ̂ is the maximum likelihood estimator and θ̂₀ is the maximum likelihood estimator constrained to be in the set Θ₀. The generalized likelihood ratio test has rejection region C = {x₁, ..., xₙ : λ(x) ≤ k} where k is chosen such that sup_{θ ∈ Θ₀} P_θ(λ(X) ≤ k) = α.

3-8
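To illustrate Definition 3.7 with a composite null (an example added by the editor, hypothetical data): for a N(θ, 1) sample with Θ₀ = {θ : θ ≤ θ₀}, the constrained MLE is θ̂₀ = min(x̄, θ₀), so λ(x) = 1 whenever x̄ ≤ θ₀ and λ(x) = exp(−n(x̄ − θ₀)²/2) otherwise:

```python
import math

def glrt_normal_one_sided(theta0, xs):
    # lambda(x) for H0: theta <= theta0 with N(theta, 1) data;
    # constrained MLE is min(xbar, theta0), unconstrained MLE is xbar
    n, xbar = len(xs), sum(xs) / len(xs)
    theta0_hat = min(xbar, theta0)
    # log L(theta) differs from log L(xbar) by -n * (xbar - theta)^2 / 2
    return math.exp(-n * (xbar - theta0_hat)**2 / 2)

inside = glrt_normal_one_sided(1.0, [0.2, 0.4, 0.0])   # xbar = 0.2 <= 1.0, so lambda = 1
outside = glrt_normal_one_sided(0.0, [0.5, 0.7, 0.3])  # xbar = 0.5 > 0.0, so lambda < 1
```

When x̄ falls inside Θ₀ the constrained and unconstrained maxima coincide and the data give no evidence against H₀ at all; small λ only arises when x̄ is far above θ₀.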
9 Likelihood Ratio Tests and Sufficient Statistics

Suppose that instead of recording x₁, ..., xₙ we only record t = T(x), where T is a sufficient statistic for θ. Let f_T(t | θ) be the density (or mass) function for T. Then the likelihood for θ based on t is

L*(θ | t) ∝ f_T(t | θ)

How do tests based on L* relate to those based on L?

3-9
10 Likelihood Ratio Tests and Sufficient Statistics

Theorem 3.1 Suppose that X is a random sample from a population with density f(x | θ) and T(X) is a sufficient statistic for θ. Consider testing H₀: θ ∈ Θ₀ versus H₁: θ ∉ Θ₀. Let λ(x) be the likelihood ratio test statistic based on the sample x and let λ*(t) be the likelihood ratio test statistic based on the sufficient statistic t = T(x). Then for every x in the sample space,

λ*(T(x)) = λ(x).

3-10
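Theorem 3.1 can be checked numerically in a toy case (an editor-added illustration with hypothetical data): for N(θ, 1) data, T(X) = X̄ is sufficient with T ~ N(θ, 1/n), and both λ(x) and λ*(T(x)) reduce to exp(−n(x̄ − θ₀)²/2):

```python
import math

def lam_full_sample(theta0, xs):
    # LRT statistic from the full sample: L(theta0 | x) / L(xbar | x) for N(theta, 1)
    xbar = sum(xs) / len(xs)
    return math.exp(-0.5 * (sum((x - theta0)**2 for x in xs)
                            - sum((x - xbar)**2 for x in xs)))

def lam_sufficient(theta0, t, n):
    # LRT statistic computed from T = xbar alone, using T ~ N(theta, 1/n)
    return math.exp(-n * (t - theta0)**2 / 2)

xs = [1.2, 0.7, 1.9, 0.4]  # hypothetical sample
t = sum(xs) / len(xs)
```

The agreement follows from the identity Σ(xᵢ − θ₀)² − Σ(xᵢ − x̄)² = n(x̄ − θ₀)², which is exactly the factorization-theorem cancellation the theorem exploits.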
11 Error Probabilities

Definition 3.8 When testing a statistical hypothesis there are two types of errors that can be made. Rejecting H₀ when it is actually true is called a Type I Error. Failing to reject H₀ when it is false is called a Type II Error.

Suppose our test specifies: Reject H₀ ⟺ T(x₁, ..., xₙ) ∈ C. The probability of making a Type I error is

α = P(T(X₁, ..., Xₙ) ∈ C | H₀ is true)

The probability of making a Type II error is

β = P(T(X₁, ..., Xₙ) ∉ C | H₁ is true)

3-11
12 Error Probabilities

For testing two simple hypotheses with critical region C:

The probability of making a Type I error is α = P_{θ₀}(T(X₁, ..., Xₙ) ∈ C)
The probability of making a Type II error is β = P_{θ₁}(T(X₁, ..., Xₙ) ∉ C)

For testing composite hypotheses:

The probability of making a Type I error is α = sup_{θ ∈ Θ₀} P_θ(T(X₁, ..., Xₙ) ∈ C)
The probability of making a Type II error is β = sup_{θ ∈ Θ₁} P_θ(T(X₁, ..., Xₙ) ∉ C)

3-12
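A worked example of the simple-versus-simple case (added by the editor, hypothetical numbers): test H₀: θ = 0 versus H₁: θ = 1 for N(θ, 1) data with n = 9, rejecting when X̄ > 0.5. Since X̄ ~ N(θ, 1/9), both error probabilities follow from the standard normal cdf:

```python
import math

def phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n, c = 9, 0.5
alpha = 1 - phi((c - 0) * math.sqrt(n))  # P_0(Xbar > c): Type I error probability
beta = phi((c - 1) * math.sqrt(n))       # P_1(Xbar <= c): Type II error probability
# With the cutoff midway between the two means, alpha and beta are equal by symmetry.
```

Moving the cutoff c trades one error probability for the other, which is why the next slides fix α and then study the power 1 − β.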
13 The Power Function of a Test

Definition 3.9 The power function, β(θ), of a statistical test is the probability of rejecting the null hypothesis as a function of the true value θ. If the rejection region is {x : T(x) ∈ C} then

β(θ) = P_θ(T(X) ∈ C)

The ideal power function would be

β(θ) = 0 for θ ∈ Θ₀, and β(θ) = 1 for θ ∉ Θ₀.

Such a power function is never possible, however.

3-13
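As an added illustration (hypothetical numbers, not from the notes): for N(θ, 1) data with n = 9, the test that rejects when X̄ > 0.5 has power function β(θ) = 1 − Φ((0.5 − θ)√n), which rises smoothly from near 0 to near 1 rather than jumping as the ideal power function would:

```python
import math

def power(theta, c=0.5, n=9):
    # beta(theta) = P_theta(Xbar > c) for Xbar ~ N(theta, 1/n)
    z = (c - theta) * math.sqrt(n)
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

values = [power(th) for th in (-0.5, 0.0, 0.5, 1.0, 1.5)]
# strictly increasing in theta; at theta = c the power is exactly 1/2
```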
14 Size and Level of Tests

Note that the size of a test is given by

α = sup_{θ ∈ Θ₀} β(θ)

Sometimes it is not possible to construct a test with size α and so we instead consider tests with level α.

Definition 3.10 Consider a test of H₀: θ ∈ Θ₀ with power function β(θ). The test is said to be a level α test if

sup_{θ ∈ Θ₀} β(θ) ≤ α

3-14
15 Unbiased Tests

Definition 3.11 Suppose that we are testing H₀: θ ∈ Θ₀ versus H₁: θ ∈ Θ₁ based on a test with power function β(θ). Then the test is said to be an unbiased test if, for every θ₀ ∈ Θ₀ and θ₁ ∈ Θ₀ᶜ,

β(θ₁) ≥ β(θ₀).

Note that a size α test is unbiased if, and only if, β(θ) ≥ α for every θ ∈ Θ₀ᶜ.

3-15
16 Uniformly Most Powerful Tests

We cannot simultaneously minimize the probabilities of Type I and Type II errors. Instead we usually control the size or level of the test to be some value α and then try to minimize the probability of Type II errors within the class of all tests with level α. If a single test has the lowest probability of Type II error (highest power) for all possible true values of the parameter, then it is called the uniformly most powerful test.

3-16
17 Uniformly Most Powerful Tests

Definition 3.12 A test of H₀: θ ∈ Θ₀ versus H₁: θ ∈ Θ₁ based on the critical region C is said to be the uniformly most powerful (UMP) test of level α if

1. sup_{θ ∈ Θ₀} P_θ(X ∈ C) ≤ α.
2. For any other critical region D with sup_{θ ∈ Θ₀} P_θ(X ∈ D) ≤ α we have P_θ(X ∈ C) ≥ P_θ(X ∈ D) for all θ ∈ Θ₁.

3-17
18 Neyman-Pearson Theorem

Theorem 3.2 Suppose that we are testing the simple hypotheses H₀: θ = θ₀ versus H₁: θ = θ₁. Let L(θ | x) be the likelihood for the parameters and let C be a subset of the sample space such that P_{θ₀}(X ∈ C) = α and there exists a constant k > 0 such that

L(θ₀ | x) / L(θ₁ | x) ≤ k for all x ∈ C, and
L(θ₀ | x) / L(θ₁ | x) > k for all x ∉ C.

Then the test with critical region C is the most powerful test among all tests of level α.

3-18
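For the simple-versus-simple normal case (a sketch added by the editor, hypothetical values): with one observation from N(θ, 1) and θ₁ > θ₀, the ratio L(θ₀ | x)/L(θ₁ | x) = exp(−½(x − θ₀)² + ½(x − θ₁)²) is decreasing in x, so the Neyman-Pearson region {ratio ≤ k} takes the one-sided form {x ≥ c}:

```python
import math

def np_ratio(x, theta0=0.0, theta1=1.0):
    # L(theta0 | x) / L(theta1 | x) for a single N(theta, 1) observation;
    # the normalizing constants cancel in the ratio
    log_ratio = -0.5 * (x - theta0)**2 + 0.5 * (x - theta1)**2
    return math.exp(log_ratio)

ratios = [np_ratio(x) for x in (-1.0, 0.0, 0.5, 1.0, 2.0)]
# strictly decreasing in x: small ratios (strong evidence for theta1) <=> large x,
# so thresholding the ratio at k is the same as thresholding x at some cutoff c
```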
19 Neyman-Pearson Theorem

Corollary Suppose that we are testing the simple hypotheses H₀: θ = θ₀ versus H₁: θ = θ₁. Let T(X) be a sufficient statistic for θ and suppose that g(t | θ) is the sampling pdf (or pmf) of T when θ is the true parameter value. Then the most powerful test has critical region R (a subset of the sample space of T) satisfying

t ∈ R if g(t | θ₁) ≥ k g(t | θ₀), and
t ∉ R if g(t | θ₁) < k g(t | θ₀)

for some fixed k ≥ 0. The size of the test is α = P_{θ₀}(T ∈ R).

3-19
20 Uniformly Most Powerful Tests

UMP tests of level α generally do not exist! A situation where this is often the case is for two-sided tests of the form H₀: θ = θ₀ versus H₁: θ ≠ θ₀. For one-sided tests of the form H₀: θ ≥ θ₀ versus H₁: θ < θ₀, or H₀: θ ≤ θ₀ versus H₁: θ > θ₀, we can sometimes find a UMP test.

3-20
21 Karlin-Rubin Theorem

Definition 3.13 A family of distributions with pdf (or pmf) f(x | θ) is said to have a monotone likelihood ratio if there exists a statistic t = T(x) such that for every pair θ₁ > θ₂,

L(θ₁ | x) / L(θ₂ | x) = g(t | θ₁) / g(t | θ₂)

is an increasing function of t.

Theorem 3.3 Suppose that the family of distributions f(x | θ) has a monotone likelihood ratio. Then the uniformly most powerful size α test of H₀: θ ≤ θ₀ versus H₁: θ > θ₀ has critical region

C = {x : T(x) > t₀}

where t₀ is such that P_{θ₀}(T > t₀) = α.

3-21
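A quick check of Definition 3.13 (an example added by the editor, hypothetical parameter values): for a Poisson(θ) sample, L(θ₁ | x)/L(θ₂ | x) = (θ₁/θ₂)^t exp(−n(θ₁ − θ₂)) with t = Σxᵢ, which is increasing in t whenever θ₁ > θ₂, so the family has a monotone likelihood ratio in T = ΣXᵢ:

```python
import math

def poisson_lr(t, n, theta1, theta2):
    # likelihood ratio L(theta1 | x) / L(theta2 | x) for a Poisson(theta) sample,
    # written as a function of the sufficient statistic t = sum(x)
    return (theta1 / theta2)**t * math.exp(-n * (theta1 - theta2))

n, theta1, theta2 = 5, 2.0, 1.0  # theta1 > theta2
lrs = [poisson_lr(t, n, theta1, theta2) for t in range(0, 8)]
# increasing in t, as the monotone likelihood ratio property requires;
# by Theorem 3.3 the UMP test of H0: theta <= theta0 rejects for large sum(x)
```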
22 p-values of Tests

Definition 3.14 A p-value p(X) for testing H₀: θ ∈ Θ₀ is a test statistic such that

1. 0 ≤ p(x) ≤ 1 for every sample point x.
2. P_θ(p(X) ≤ α) ≤ α for every θ ∈ Θ₀ and every 0 ≤ α ≤ 1.

Small values of p(x) give evidence against H₀. Typically the p-value is calculated from another test statistic, although it is a test statistic in its own right.

3-22
23 p-values of Tests

Theorem 3.4 Suppose that we wish to test H₀: θ ∈ Θ₀ versus H₁: θ ∉ Θ₀ and let W(X) be a test statistic such that large values of W give evidence against H₀ in favour of H₁. For every sample point x define

p(x) = sup_{θ ∈ Θ₀} P_θ(W(X) ≥ W(x))

Then p(X) is a valid p-value.

3-23
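As a worked example of Theorem 3.4 (added by the editor, hypothetical numbers): for a N(θ, 1) sample with W(X) = X̄ and H₀: θ ≤ 0, the probability P_θ(X̄ ≥ x̄) is increasing in θ, so the supremum over Θ₀ is attained at the boundary θ = 0, giving p(x) = 1 − Φ(x̄√n):

```python
import math

def p_value(xbar, n, theta0=0.0):
    # p(x) = sup_{theta <= theta0} P_theta(Xbar >= xbar) = P_{theta0}(Xbar >= xbar),
    # since the tail probability increases with theta
    z = (xbar - theta0) * math.sqrt(n)
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

pvals = [p_value(xb, n=9) for xb in (0.0, 0.3, 0.6, 0.9)]
# p = 1/2 when xbar sits exactly at the null boundary, and decreases as xbar grows
```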
24 Decision Theory

Decision theory is a method of doing inference based on specifying how much of a loss incorrect decisions can produce. The method is applicable to all forms of inference. It is widely used in Bayesian inference, which we shall examine later, but can equally well be applied in frequentist inference. Here we shall briefly introduce the concept and its application in hypothesis testing, where it seems most natural.

3-24
25 Decision Rules

Definition 3.15 Suppose X₁, ..., Xₙ is a random sample and we wish to make inference on a parameter θ. A decision rule δ(X₁, ..., Xₙ) specifies what decision we would take based on the sample.

In point estimation, a decision rule is just an estimator. In hypothesis testing, suppose that we have a rejection region C; then a decision rule could be

δ(X) = Do not reject H₀ if X ∉ C; Reject H₀ if X ∈ C.

For convenience we will label these two decisions a₀ (do not reject H₀) and a₁ (reject H₀) respectively.

3-25
26 Loss Functions

Definition 3.16 A loss function L(θ, δ) is a function of the parameter θ and the decision rule δ(X), and specifies what loss is incurred in using the decision rule δ(X) when θ is the true parameter value.

The loss function is specified by the analyst and should be chosen to reflect the seriousness of errors in inference. In estimation, common loss functions are:

Absolute error loss: L(θ, θ̂) = |θ̂ − θ|.
Squared error loss: L(θ, θ̂) = (θ̂ − θ)².

Both of these loss functions are symmetric about the estimator, but there is no need for that in general. If overestimation is considered more serious than underestimation, for example, then the loss function could reflect that.

3-26
27 Loss Function for Hypothesis Testing

The only losses that can be made in a hypothesis testing framework are in making a Type I or Type II error. One very simple loss function is called 0-1 loss:

L(θ, a₀) = 0 if θ ∈ Θ₀, 1 if θ ∉ Θ₀
L(θ, a₁) = 1 if θ ∈ Θ₀, 0 if θ ∉ Θ₀

This loss function can be generalized if we do not consider Type I and Type II errors to be equally bad:

L(θ, a₀) = 0 if θ ∈ Θ₀, C_II if θ ∉ Θ₀
L(θ, a₁) = C_I if θ ∈ Θ₀, 0 if θ ∉ Θ₀

3-27
28 General Loss Functions for Hypothesis Testing

More general loss functions can take into account that the cost of a Type I or Type II error may be different depending on the value of θ. Consider the one-sided test H₀: θ ≤ θ₀ versus H₁: θ > θ₀. In this case we may consider a loss function of the type

L(θ, a₀) = 0 if θ ≤ θ₀, c_II(θ − θ₀) if θ > θ₀
L(θ, a₁) = c_I(θ₀ − θ) if θ ≤ θ₀, 0 if θ > θ₀

If deviations when we reject H₀ are more serious than those when we fail to reject H₀, we could have different functions of θ − θ₀ in the two parts.

3-28
29 The Risk Function

Definition 3.17 The risk function of a decision rule δ(X) is the expected value of the loss function:

R(θ, δ) = E_θ[L(θ, δ(X))].

The risk function will depend on the true value θ and on what decision rule and loss function we have specified for the problem. Often the decision rule is chosen to minimize the risk. Doing this uniformly for all possible θ is generally not possible, but it can be within certain classes.

3-29
30 Risk Function for Hypothesis Testing

Suppose we have a test procedure δ(X) as defined on Page 78, with corresponding power function β(θ) = P_θ(X ∈ C). The risk function is given by

R(θ, δ) = L(θ, a₀) P_θ(δ(X) = a₀) + L(θ, a₁) P_θ(δ(X) = a₁)
        = L(θ, a₀) P_θ(X ∉ C) + L(θ, a₁) P_θ(X ∈ C)
        = L(θ, a₀)(1 − β(θ)) + L(θ, a₁) β(θ)

For a generalized 0-1 loss function this becomes

R(θ, δ) = C_I β(θ) if θ ∈ Θ₀, C_II(1 − β(θ)) if θ ∉ Θ₀

3-30
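Putting the pieces together (an illustration added by the editor, with hypothetical costs and cutoffs): for N(θ, 1) data with n = 9, H₀: θ ≤ 0, and the test rejecting when X̄ > 0.5, the generalized 0-1 risk is C_I β(θ) on Θ₀ and C_II(1 − β(θ)) off it, so at the boundary θ₀ the risk is C_I times the size of the test:

```python
import math

def power(theta, c, n):
    # beta(theta) = P_theta(Xbar > c) for Xbar ~ N(theta, 1/n)
    z = (c - theta) * math.sqrt(n)
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

def risk(theta, c, n, theta0=0.0, C_I=1.0, C_II=5.0):
    # generalized 0-1 loss risk for H0: theta <= theta0:
    # C_I * beta(theta) on the null, C_II * (1 - beta(theta)) on the alternative
    b = power(theta, c, n)
    return C_I * b if theta <= theta0 else C_II * (1 - b)

c, n = 0.5, 9
alpha = power(0.0, c, n)  # size of the test: power at the null boundary
```

Because the power function is increasing, risk on the alternative shrinks as θ moves away from θ₀, mirroring the claim on the next slide that minimizing this risk at a fixed size amounts to maximizing power.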
31 Minimizing the Risk Function

For the generalized 0-1 loss function, the issue of minimizing risk for a test of a given size is essentially the same problem as maximizing power. In the Neyman-Pearson setup we would have

R(θ, δ) = C_I α if θ = θ₀, C_II(1 − β(θ₁)) if θ = θ₁

so the minimum risk test is the same as the most powerful test. In general, the issue of minimizing risk is highly related to that of maximizing power, although the specific form of the loss function will also play a key role.

3-31
More informationHomework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses.
Stat 300A Theory of Statistics Homework 7: Solutions Nikos Ignatiadis Due on November 28, 208 Solutions should be complete and concisely written. Please, use a separate sheet or set of sheets for each
More informationLecture 21: October 19
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 21: October 19 21.1 Likelihood Ratio Test (LRT) To test composite versus composite hypotheses the general method is to use
More informationMIT Spring 2016
Decision Theoretic Framework MIT 18.655 Dr. Kempthorne Spring 2016 1 Outline Decision Theoretic Framework 1 Decision Theoretic Framework 2 Decision Problems of Statistical Inference Estimation: estimating
More informationDetection Theory. Composite tests
Composite tests Chapter 5: Correction Thu I claimed that the above, which is the most general case, was captured by the below Thu Chapter 5: Correction Thu I claimed that the above, which is the most general
More informationTopic 10: Hypothesis Testing
Topic 10: Hypothesis Testing Course 003, 2016 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.
More informationInterval Estimation. Chapter 9
Chapter 9 Interval Estimation 9.1 Introduction Definition 9.1.1 An interval estimate of a real-values parameter θ is any pair of functions, L(x 1,..., x n ) and U(x 1,..., x n ), of a sample that satisfy
More informationF & B Approaches to a simple model
A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 215 http://www.astro.cornell.edu/~cordes/a6523 Lecture 11 Applications: Model comparison Challenges in large-scale surveys
More information8.1-4 Test of Hypotheses Based on a Single Sample
8.1-4 Test of Hypotheses Based on a Single Sample Example 1 (Example 8.6, p. 312) A manufacturer of sprinkler systems used for fire protection in office buildings claims that the true average system-activation
More information8 Testing of Hypotheses and Confidence Regions
8 Testing of Hypotheses and Confidence Regions There are some problems we meet in statistical practice in which estimation of a parameter is not the primary goal; rather, we wish to use our data to decide
More informationLecture 1: Bayesian Framework Basics
Lecture 1: Bayesian Framework Basics Melih Kandemir melih.kandemir@iwr.uni-heidelberg.de April 21, 2014 What is this course about? Building Bayesian machine learning models Performing the inference of
More information2.5 Hypothesis Testing
118 CHAPTER 2. ELEMENTS OF STATISTICAL INFERENCE 2.5 Hypothesis Testing We assume that Y 1,...,Y n have a joint distribution which depends on the unknown parametersϑ = ϑ 1,...,ϑ p ) T. The set of all possible
More informationMathematical statistics
October 18 th, 2018 Lecture 16: Midterm review Countdown to mid-term exam: 7 days Week 1 Chapter 1: Probability review Week 2 Week 4 Week 7 Chapter 6: Statistics Chapter 7: Point Estimation Chapter 8:
More informationIntroductory Econometrics. Review of statistics (Part II: Inference)
Introductory Econometrics Review of statistics (Part II: Inference) Jun Ma School of Economics Renmin University of China October 1, 2018 1/16 Null and alternative hypotheses Usually, we have two competing
More informationLecture 21. Hypothesis Testing II
Lecture 21. Hypothesis Testing II December 7, 2011 In the previous lecture, we dened a few key concepts of hypothesis testing and introduced the framework for parametric hypothesis testing. In the parametric
More informationParameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn
Parameter estimation and forecasting Cristiano Porciani AIfA, Uni-Bonn Questions? C. Porciani Estimation & forecasting 2 Temperature fluctuations Variance at multipole l (angle ~180o/l) C. Porciani Estimation
More informationComparing Variations of the Neyman-Pearson Lemma
Comparing Variations of the Neyman-Pearson Lemma William Jake Johnson Department of Mathematical Sciences Montana State University April 29, 2016 A writing project submitted in partial fulfillment of the
More informationLecture 8: Information Theory and Statistics
Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and Estimation I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 22, 2015
More informationStatistical Inference. Hypothesis Testing
Statistical Inference Hypothesis Testing Previously, we introduced the point and interval estimation of an unknown parameter(s), say µ and σ 2. However, in practice, the problem confronting the scientist
More informationTwo examples of the use of fuzzy set theory in statistics. Glen Meeden University of Minnesota.
Two examples of the use of fuzzy set theory in statistics Glen Meeden University of Minnesota http://www.stat.umn.edu/~glen/talks 1 Fuzzy set theory Fuzzy set theory was introduced by Zadeh in (1965) as
More informationDepartment of Mathematics
Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2017 Lecture 20: Significance Tests, I Relevant textbook passages: Larsen Marx [8]: Sections 7.2, 7.4, 7.5;
More informationSTAT 135 Lab 5 Bootstrapping and Hypothesis Testing
STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,
More informationLecture 7 Introduction to Statistical Decision Theory
Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7
More informationStatistics 3858 : Maximum Likelihood Estimators
Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,
More informationTopic 15: Simple Hypotheses
Topic 15: November 10, 2009 In the simplest set-up for a statistical hypothesis, we consider two values θ 0, θ 1 in the parameter space. We write the test as H 0 : θ = θ 0 versus H 1 : θ = θ 1. H 0 is
More informationTUTORIAL 8 SOLUTIONS #
TUTORIAL 8 SOLUTIONS #9.11.21 Suppose that a single observation X is taken from a uniform density on [0,θ], and consider testing H 0 : θ = 1 versus H 1 : θ =2. (a) Find a test that has significance level
More information