Derivation of Monotone Likelihood Ratio Using Two Sided Uniformly Normal Distribution Techniques

Derivation of Monotone Likelihood Ratio Using Two Sided Uniformly Normal Distribution Techniques

D. A. Farinde

International Science Index, Mathematical and Computational Sciences, Vol. 7, No. 10, 2013 (waset.org/publication/769)

Abstract. In this paper, two-sided uniformly normal distribution techniques were used in the derivation of the monotone likelihood ratio. The approach mainly employed the parameters of the distribution for the class of all size-α tests. The derivation technique is fast, direct and less burdensome when compared with some existing methods.

Keywords: Neyman-Pearson Lemma, Normal distribution.

I. INTRODUCTION

In statistical hypothesis testing, a uniformly most powerful (UMP) test is a hypothesis test which has the greatest power 1-β among all possible tests of a given size α. The Neyman-Pearson lemma further states that the likelihood-ratio test is UMP for testing simple (point) hypotheses. Let X denote a random vector taken from a parameterized family of probability density functions (pdf) or probability mass functions (pmf) $f(x \mid \theta)$, which depends on the unknown deterministic parameter $\theta \in \Theta$. The parameter space $\Theta$ is partitioned into two disjoint sets $\Theta_0$ and $\Theta_1$. Let $H_0$ denote the hypothesis that $\theta \in \Theta_0$ and $H_1$ denote the hypothesis that $\theta \in \Theta_1$. The binary test of hypotheses is performed using a test function $\varphi(x) \in \{0, 1\}$, meaning that $H_1$ is in force if the measurement $x \in R$ and that $H_0$ is in force if the measurement $x \in A$, where $\{R, A\}$ is a disjoint covering of the measurement space. Hence, a test function $\varphi$ of size α is uniformly most powerful if, for any other test function $\varphi'$ of size α, we have $E_\theta[\varphi'(X)] \le E_\theta[\varphi(X)]$ for every $\theta \in \Theta_1$.

To introduce the UMP test to the Normal distribution, we consider the standard normal member of the family of Normal distributions. Section II contains a review of the UMP test. In Section III we explain hypothesis testing and uniformly most powerful tests. Section IV is devoted to the results in the standard normal distribution.

D. A. Farinde is with the Department of Mathematics & Statistics, Lagos State Polytechnic, Ikorodu, Nigeria (e-mail: jophyab200@yahoo.com, danielfarinde@yahoo.co.uk). Sponsored by Lagos State Polytechnic, Ikorodu, through the Year 2012 conference grant of the Tertiary Education Trust Fund (TETFund).

II. UNIFORMLY MOST POWERFUL (UMP) TEST

Reference [4] pioneered modern frequentist statistics as a model-based approach to statistical induction anchored on the notion of a statistical model $G = \{f(z;\theta),\ \theta \in \Theta \subset \mathbb{R}^{\dim\Theta}\}$, $z \in \mathbb{R}^n$. Fisher proposed to begin with a pre-specified G viewed as a hypothetical infinite population. He framed the specification of G as a response to the question: of what population is this a random sample? A mis-specified G would vitiate any procedure relying on z or the likelihood function. Reference [5] argued for inductive inference spearheaded by his significance testing, and [8] argued for inductive behaviour based on Neyman-Pearson testing. However, neither account gave a satisfactory answer to the canonical question: when do data provide evidence for a substantive claim or hypothesis? Over the last three decades, Fisher's specification problem has been recast in the form of model selection problems. The essential question is how an infinite set of all possible models that could have given rise to the data can be narrowed down to a single statistical model. These models may be nested or non-nested. For the non-nested case, see [2], [3], [1], [6], [11], and [9]. In the nested case, we consider a parametric family of densities and two hypotheses, $H_0$ and $H_1$.
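For later reference, the Neyman-Pearson lemma invoked in the Introduction can be stated in its standard form (as in [7]; the test-function notation $\varphi$ and the constant $k$ are ours): for testing the simple hypotheses $H_0: \theta = \theta_0$ against $H_1: \theta = \theta_1$, the test

\[
\varphi(x) =
\begin{cases}
1, & f(x \mid \theta_1) > k\, f(x \mid \theta_0), \\
0, & f(x \mid \theta_1) < k\, f(x \mid \theta_0),
\end{cases}
\]

with $k \ge 0$ and the randomisation on the boundary $f(x \mid \theta_1) = k\, f(x \mid \theta_0)$ chosen so that $E_{\theta_0}[\varphi(X)] = \alpha$, is a most powerful size-$\alpha$ test, and every most powerful size-$\alpha$ test coincides with a test of this form (almost everywhere).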
When the support (domain) of the density depends on the parameter, the theories of hypothesis testing and model selection are not as well developed. For a testing problem of the type $H_0: \theta \le \theta_1$ or $\theta \ge \theta_2$ against $H_1: \theta_1 < \theta < \theta_2$, when the class of all size-α tests is considered and the family is a one-parameter exponential family of distributions, a uniformly most powerful (UMP) test is known to exist. However, under these conditions a UMP test does not exist for

$H_0: \theta_1 \le \theta \le \theta_2$ against $H_1: \theta < \theta_1$ or $\theta > \theta_2$, nor for
$H_0: \theta = \theta_0$ against $H_1: \theta \ne \theta_0$.

Whereas, if the class of size-α tests is reduced to the class of unbiased tests and the family of distributions is of Pólya type, a UMP test within this restricted class is known to exist for the two latter types of testing problem. When a UMP test does not exist, we may use the same approach as in estimation problems: impose a restriction on the tests to be considered and find the optimal test within the class of tests satisfying the restriction. Two such types of restriction are unbiasedness and invariance.
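The non-existence claims above can be illustrated numerically. The following sketch is our own illustration, assuming the $N(\theta, 1)$ family used later in Section IV, with illustrative choices of $n$ and α: each one-sided size-α test is powerful on its own side of $\theta_0 = 0$ but has power far below α on the other side, so no single test can be uniformly most powerful against $H_1: \theta \ne 0$.

# Illustration (not from the paper): power of the two one-sided size-alpha
# tests for H0: theta = 0 in the N(theta, 1) family with n = 25.
# Each is UMP against its own one-sided alternative, but each has power
# below alpha on the other side, so no test can be UMP against theta != 0.
import numpy as np
from scipy.stats import norm

n, alpha = 25, 0.05
c = norm.ppf(1 - alpha) / np.sqrt(n)   # Xbar ~ N(0, 1/n) under H0

for theta in (-0.5, -0.2, 0.2, 0.5):
    power_right = norm.sf((c - theta) * np.sqrt(n))    # test: reject if xbar >= c
    power_left = norm.cdf((-c - theta) * np.sqrt(n))   # test: reject if xbar <= -c
    print(f"theta = {theta:+.1f}:  right-sided power = {power_right:.3f},"
          f"  left-sided power = {power_left:.3f}")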

Under some distributional assumptions, the power function $\beta_\varphi(\theta) = E_\theta[\varphi(X)]$ of any test $\varphi$ is continuous in $\theta$. Let us consider the class of size-α unbiased tests. In this situation the two-sided test is

$\varphi(x) = 1$ if $T(x) < c_1$ or $T(x) > c_2$; $\varphi(x) = \gamma_i$ if $T(x) = c_i$, $i = 1, 2$; $\varphi(x) = 0$ otherwise,

where $c_1, c_2, \gamma_1, \gamma_2$ are obtained from $E_{\theta_0}[\varphi(X)] = \alpha$ and $E_{\theta_0}[T(X)\varphi(X)] = \alpha\, E_{\theta_0}[T(X)]$. Such a test is therefore the UMP size-α unbiased test for testing $H_0$ against $H_1$. Consider now a family of distributions whose support depends on its parameter. In such a situation the UMP test is known for the uniform and double exponential distributions; see [7].

III. HYPOTHESIS TESTING: UNIFORMLY MOST POWERFUL TESTS

We give the definition of a uniformly most powerful test in a general setting which includes one-sided and two-sided tests. We take the null hypothesis to be $H_0: \theta \in \Omega_0$ and the alternative to be $H_1: \theta \in \Omega_1$. We write the power function as $\mathrm{Pow}(\theta, d)$ to make its dependence on the decision function $d$ explicit.

Definition: A decision function $d^*$ is a uniformly most powerful (UMP) decision function (or test) at significance level $\alpha_0$ if
(1) $\mathrm{Pow}(\theta, d^*) \le \alpha_0$ for all $\theta \in \Omega_0$;
(2) for every decision function $d$ which satisfies (1), we have $\mathrm{Pow}(\theta, d) \le \mathrm{Pow}(\theta, d^*)$ for all $\theta \in \Omega_1$.

Do UMP tests ever exist? If the alternative hypothesis is one-sided, then they do for certain distributions and statistics. We proceed by defining the needed property of the population distribution and the statistic.

Definition: Let $T = t(X_1, X_2, \ldots, X_n)$ be a statistic and let $f(x_1, x_2, \ldots, x_n \mid \theta)$ be the joint density of the random sample. We say that $f(x_1, \ldots, x_n \mid \theta)$ has a monotone likelihood ratio in the statistic $T$ if for all $\theta_1 < \theta_2$ the ratio $f(x_1, \ldots, x_n \mid \theta_2)/f(x_1, \ldots, x_n \mid \theta_1)$ depends on $(x_1, \ldots, x_n)$ only through $t(x_1, \ldots, x_n)$, and the ratio is an increasing function of $t(x_1, \ldots, x_n)$.

Example: Consider a Bernoulli distribution for the population, i.e. we are looking at a population proportion. So each $X_i \in \{0, 1\}$ and $p = P(X_i = 1)$. The joint density is

$f(x_1, \ldots, x_n \mid p) = p^{\sum x_i}(1-p)^{n - \sum x_i} = \left(\frac{p}{1-p}\right)^{n\bar{x}} (1-p)^n$, where $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$.

Let $p_1 < p_2$; we have

$\frac{f(x_1, \ldots, x_n \mid p_2)}{f(x_1, \ldots, x_n \mid p_1)} = \left(\frac{p_2(1-p_1)}{p_1(1-p_2)}\right)^{n\bar{x}} \left(\frac{1-p_2}{1-p_1}\right)^n$.

So the ratio depends on the sample only through the sample mean $\bar{x}$ and it is an increasing function of $\bar{x}$. (It is an easy algebra exercise to check that $p_2(1-p_1)/(p_1(1-p_2)) > 1$ if $p_1 < p_2$.)

Example: Now consider a normal population with unknown mean $\mu$ and known variance $\sigma^2$. The joint density is

$f(x_1, \ldots, x_n \mid \mu) = (2\pi\sigma^2)^{-n/2} \exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2\right)$.

Now let $\mu_1 < \mu_2$. A little algebra shows

$\frac{f(x_1, \ldots, x_n \mid \mu_2)}{f(x_1, \ldots, x_n \mid \mu_1)} = \exp\!\left(\frac{n(\mu_2 - \mu_1)\bar{x}}{\sigma^2} + \frac{n(\mu_1^2 - \mu_2^2)}{2\sigma^2}\right)$.

So the ratio depends on $x_1, \ldots, x_n$ only through $\bar{x}$, and the ratio is an increasing function of $\bar{x}$.

Theorem 1: Suppose $f(x_1, \ldots, x_n \mid \theta)$ has a monotone likelihood ratio in the statistic $T = t(x_1, \ldots, x_n)$. Consider hypothesis testing with null hypothesis $H_0: \theta \le \theta_0$ and alternative hypothesis $H_1: \theta > \theta_0$. Let $c$ and $\alpha_0$ be constants such that $P_{\theta_0}(T \ge c) = \alpha_0$. Then the test that rejects the null hypothesis if $T \ge c$ is a UMP test at significance level $\alpha_0$.

Example: We consider the example of a normal population with known variance and unknown mean. We saw that the likelihood ratio is monotone in the sample mean. So if we reject the null hypothesis when $\bar{X} \ge c$, this will be a UMP test with significance level $\alpha_0 = P_{\mu_0}(\bar{X} \ge c)$. Given a desired significance level $\alpha_0$, we choose $c$ so that this equation holds. Then the theorem tells us we have a UMP test: for every $\mu > \mu_0$, our test makes the power as large as possible.

Example: We consider the example of a Bernoulli distribution for the population (population proportion). To be concrete, suppose the null hypothesis is $H_0: p \le 0.1$ and the alternative is $H_1: p > 0.1$. We have a random sample of size $n = 20$. Let $\hat{p}$ be the sample proportion. By what we have already done, the test that rejects the null hypothesis when $\hat{p} \ge c$ will be a UMP test. We want to choose $c$ so that $P_{0.1}(\hat{p} \ge c) = \alpha_0$. However, $\hat{p}$ is a discrete random variable (it can only be $0/20, 1/20, 2/20, \ldots, 19/20, 20/20$), so this is not always possible. Suppose we want a significance level of 0.005. Using software (or a table of the binomial distribution) we find that $P(\hat{p} \ge 6/20 \mid p = 0.1) \approx 0.011$ and $P(\hat{p} \ge 7/20 \mid p = 0.1) \approx 0.0024$. So we must take $c = 7/20$. Then the test that rejects the null if $\hat{p} \ge 7/20$ is a UMP test at significance level 0.005.
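The critical value in this Bernoulli example can be reproduced with a few lines of code. The following sketch is ours (it assumes scipy is available; the variable names are illustrative): it searches for the smallest rejection threshold whose tail probability under the null does not exceed α.

# Critical value for the UMP one-sided binomial test of H0: p <= 0.1
# vs H1: p > 0.1 with n = 20 and alpha = 0.005 (values from the example above).
from scipy.stats import binom

n, p0, alpha = 20, 0.1, 0.005

# Find the smallest count k with P(X >= k | p = p0) <= alpha.
for k in range(n + 1):
    tail = binom.sf(k - 1, n, p0)   # sf(k - 1) = P(X >= k)
    if tail <= alpha:
        print(f"reject H0 when p-hat >= {k}/{n}  (tail probability {tail:.4f})")
        break

# The two tail probabilities quoted in the text:
print(binom.sf(5, n, p0))  # P(X >= 6) ~ 0.011
print(binom.sf(6, n, p0))  # P(X >= 7) ~ 0.0024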

What about two-sided alternatives? It can be shown that there is no UMP test in that setting.

IV. UNIFORMLY MOST POWERFUL (UMP) TEST

Let $X_1, \ldots, X_n$ be an independent random sample. A test $\varphi$ for testing $H_0: \theta \in \Theta_0$ against $H_1: \theta \in \Theta_1$ is said to be a uniformly most powerful test of size α if it is of size α, i.e. $\sup_{\theta \in \Theta_0} E_\theta[\varphi(X)] = \alpha$, and it has no smaller power than any other test in the class of level-α tests, i.e. for every $\theta \in \Theta_1$, $E_\theta[\varphi(X)] \ge E_\theta[\varphi^*(X)]$ for every level-α test $\varphi^*$.

The well-known theorem provides a UMP test of size α for the one-sided testing problem $H_0: \theta \le \theta_0$ against $H_1: \theta > \theta_0$ whenever the p.d.f. of $X$ has a monotone likelihood ratio. The theorem holds for such distributions provided they have a monotone likelihood ratio in $T(X)$. For testing $H_0: \theta \le \theta_0$ against $H_1: \theta > \theta_0$, a test of the form

$\varphi(x) = 1$ if $T(x) > c$; $\varphi(x) = \gamma$ if $T(x) = c$; $\varphi(x) = 0$ if $T(x) < c$

has a non-decreasing power function and is UMP of its size, provided its size is positive. Reference [10] explains that for every $\alpha$, $0 < \alpha \le 1$, and every $\theta_0$, there exist numbers $c$ and $0 \le \gamma \le 1$ such that the test given above is the UMP size-α test of $H_0$ against $H_1$.

Examples of Uniformly Most Powerful Tests

If the same most powerful test (MPT) is obtained when the composite alternative $H_1: \theta > \theta_0$ is replaced by each specified value $\theta_1 > \theta_0$, i.e. by the simple alternative $H_1': \theta = \theta_1$, then the resulting test is said to be a UMPT. That is, when $H_0$ is simple and $H_1$ is composite (one-sided), a UMPT exists. On the other hand, when the alternative is two-sided, generally no UMPT exists.

Example 1. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution with density $f(x; \theta)$, $\theta > 0$. For testing $H_0: \theta = \theta_0$ against $H_1: \theta > \theta_0$, where $\theta_0$ is specified, what is the UMPT? In this case $H_0$ is simple and $H_1$ is composite. Let $H_1': \theta = \theta_1$, with $\theta_1 > \theta_0$, be a simple hypothesis. Then the MPT for $H_0: \theta = \theta_0$ against $H_1': \theta = \theta_1$ rejects when the likelihood ratio

$\prod_{i=1}^n \frac{f(x_i; \theta_1)}{f(x_i; \theta_0)} \ge k$.

Taking the natural logarithm, absorbing into the constant all terms that do not depend on the data (such as $\ln k$ and the terms involving $\ln\theta_0$ and $\ln\theta_1$), and dividing through by $n$, the rejection region reduces to a condition of the form $\bar{t} \ge c$, where $\bar{t}$ is the sample average of a statistic that does not involve $\theta_1$ and $c$ is chosen to give size α. Since the same MPT is obtained for each simple hypothesis $H_1': \theta = \theta_1 > \theta_0$, this test is the UMPT.

Example 2. Let $X_1, X_2, \ldots, X_n$ be a random sample from $N(\theta, 1)$. Find the UMPT for testing $H_0: \theta = 0$ against $H_1: \theta > 0$. Here $H_0$ is simple and $H_1$ is composite. Consider a specific alternative hypothesis $H_1': \theta = \theta_1 > 0$. Then an application of the Neyman-Pearson lemma to test $H_0: \theta = 0$ against $H_1': \theta = \theta_1$ gives

$\frac{\exp\!\left(-\frac{1}{2}\sum_{i=1}^n (x_i - \theta_1)^2\right)}{\exp\!\left(-\frac{1}{2}\sum_{i=1}^n x_i^2\right)} = \exp\!\left(\theta_1 \sum_{i=1}^n x_i - \frac{n\theta_1^2}{2}\right) \ge k$,

so that, taking logarithms and using $\theta_1 > 0$,

$\bar{x} \ge \frac{\ln k}{n\theta_1} + \frac{\theta_1}{2} = c$.

Therefore the test rejects when $\bar{x} \ge c$, where $c$ is determined such that $P_{\theta = 0}(\bar{X} \ge c) = \alpha$ and not by $\theta_1$ (hence it is independent of $\theta_1$), and the critical region would be the same if we had selected another value $\theta_1 > 0$. Therefore the test given by $\bar{x} \ge c$ is a UMPT.
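To see the "uniformly most powerful" property of Example 2 in action, the following Monte Carlo sketch (our own illustration; $n$, α, $\theta_1$ and the competing test are arbitrary choices, not values from the paper) compares the power of the UMP test $\bar{x} \ge c$ with that of another exact size-α test that ignores most of the sample.

# Monte Carlo sketch of Example 2: N(theta, 1), H0: theta = 0 vs H1: theta > 0.
# Compare the UMP test "xbar >= c" with a size-alpha test based on X1 only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, alpha, theta1, reps = 25, 0.05, 0.5, 100_000

c_ump = norm.ppf(1 - alpha) / np.sqrt(n)   # critical value for xbar
c_x1 = norm.ppf(1 - alpha)                 # critical value for X1 alone

x = rng.normal(theta1, 1.0, size=(reps, n))
power_ump = np.mean(x.mean(axis=1) >= c_ump)
power_x1 = np.mean(x[:, 0] >= c_x1)
print(f"power of UMP test (xbar >= c): {power_ump:.3f}")   # ~ 0.80
print(f"power of X1-only test:         {power_x1:.3f}")    # ~ 0.13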

Vol:7, No:0, 203 International Science Index, Mathematical and Computational Sciences Vol:7, No:0, 203 waset.org/publication/769 Divide through by n, we have Therefore, Since the same MPT will be obtained for each simple hypothesis :, is the UMPT. Example 2 Let X, X 2,..., X n be a random sample of from,, find the UMPT for testing : 0 : 0. : against :, we want to find a UMPT. Here H 0 is simple and H is composite. Consider a specific alternative hypothesis : 0. Then an application of Neyman-Pearson lemma to test : 0 against : gives 2 2 2 2 0 0 0 0 2 2 2 0 ln ln 2 2 ln Therefore where c is determined such that 0 and not by (hence independent of and the critical region will be the same if we had selected another value of 0. Therefore the test given by is a UMPT. Example 3 Let X, X 2,..., X n be a random sample from 0,. Find the UMPT for testing : :. Consider a particular simple alternative hypothesis :. Then the MPT for testing against is given by 2 2 2 2 2 2 ln ln 2 ln ln ln 2 2 2 ln ln 2 2 2 2 2 ln 2 2 2 2 2 2 International Scholarly and Scientific Research & Innovation 7(0) 203 54 scholar.waset.org/307-6892/769

Vol:7, No:0, 203 where 2 ln International Science Index, Mathematical and Computational Sciences Vol:7, No:0, 203 waset.org/publication/769 Observe that as long as the MPT will remain the same for each simple alternative hypothesis : where c is determined once. α, the probability of type error, is specified, and independent of. Thus,. Since the critical region is independent of, the test obtained here is UMPT. If we were testing : against :, it can be verified that the corresponding UMPT is. Example 4 Let X, X 2,..., X n be a random sample from where, 0, 0 What is the UMPT for testing : against :? Let us consider the alternative :. Then MPT for : against : is given by ln ln ln ln ln Therefore,,Since 0 Since the same MPT will be obtained for each simple hypothesis then :, is the UMPT. REFERENCES [] Commenges, D., Sayyareh, A., letenneur, L., Guedj, J, and Bar-hen, A, (2008), Estimating a Difference of Kullback-Leibler Risks Using a Normalized Difference of AIC, The Annals of Applied Statistics, Vol. 2,No. 3, pp.23-42. [2] Cox, D.R. (96), Test of Separate Families of Hypothesis, Proceeding of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol., pp.05 23. [3] Cox, D.R. (962), Further Result on Tests of Separate Families of Hypotheses, Journal of the Royal Statistical Society, Vol. B, No. 24 pp.406 424. [4] Fisher, R.A. (922), On the mathematical foundations of theoretical statistics, Philosophical Transactions of the Royal Society Vol. A, No.222. pp.309 3689. [5] Fisher, R.A. (955), Statistical Methods and Scientific Induction, Journal of the Royal Statistical Society, (989), Vol. B, No. 7, pp.69 78. [6] Fisher, G. and McAleer, M. (98), Alternative Procedures and Associated Tests of Significance for Non-Nested Hypotheses, Journal of Econometrics, Vol. 6, pp.03 9. [7] Lehmann, E.L., (986). Testing Statistical Hypotheses New York, John Wiley, II edition. [8] Neyman, J. (956), Note on an article by Sir Ronald Fisher, Journal of the Royal Statistical Society, Vol. B, No. 8, pp.288 294. [9] Sayyareh, A., Obeidi, R., and Bar-hen, A. (20), Empirical comparison of some model selection criteria, Communication in Statistics Simulation and Computation, Vol.40 pp.72 86. [0] Sayyareh, A., Barmalzan G., and Haidari, A, (20), Two Sided Uniformly most powerful test for Pitman Family, Applied Mathematical Sciences Vol. 5, No.74, pp.3649 3660. [] Vuong, Q.H. (989), Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses, Econometrica, Vol. 57, No. 2, 307 333. International Scholarly and Scientific Research & Innovation 7(0) 203 542 scholar.waset.org/307-6892/769