EXACT AND ASYMPTOTICALLY ROBUST PERMUTATION TESTS

By Eun Yi Chung and Joseph P. Romano
Stanford University

Technical Report No. 2011-05
May 2011

This research was supported in part by National Science Foundation grant DMS-0707085.

Department of Statistics
STANFORD UNIVERSITY
Stanford, California 94305-4065
http://statistics.stanford.edu

Exact and Asymptotically Robust Permutation Tests

EunYi Chung, Department of Economics, Stanford University
Joseph P. Romano, Departments of Statistics and Economics, Stanford University

May 5, 2011

Abstract

Given independent samples from P and Q, two-sample permutation tests allow one to construct exact level α tests when the null hypothesis is P = Q. On the other hand, when comparing or testing particular parameters θ of P and Q, such as their means or medians, permutation tests need not be level α, or even approximately level α in large samples. Under very weak assumptions for comparing estimators, we provide a general test procedure whereby the asymptotic validity of the permutation test holds while retaining the exact rejection probability α in finite samples when the underlying distributions are identical. A quite general theory is possible based on a coupling construction, as well as a key contiguity argument for the binomial and hypergeometric distributions. The ideas are broadly applicable, and special attention is given to a nonparametric k-sample Behrens-Fisher problem, whereby a permutation test is constructed which is exact level α under the hypothesis of identical distributions, but has asymptotic rejection probability α under the more general null hypothesis of equality of means. A Monte Carlo simulation study is performed.

2010 MSC subject classifications. Primary 62E20; secondary 62G10.

KEY WORDS: Behrens-Fisher problem; Coupling; Permutation test.

Research supported by NSF Grant DMS-0707085.

1 Introduction

In this article, we consider the behavior of two-sample (and later also k-sample) permutation tests for testing problems when the fundamental assumption of identical distributions need not hold. Assume $X_1, \ldots, X_m$ are i.i.d. according to a probability distribution P, and independently, $Y_1, \ldots, Y_n$ are i.i.d. Q. The underlying model specifies a family of pairs of distributions (P, Q) in some space Ω. For the problems considered here, Ω specifies a nonparametric model, such as the set of all pairs of distributions. Let N = m + n, and write

$$Z = (Z_1, \ldots, Z_N) = (X_1, \ldots, X_m, Y_1, \ldots, Y_n). \qquad (1)$$

Let $\bar\Omega = \{(P, Q) \in \Omega : P = Q\}$. Under the assumption $(P, Q) \in \bar\Omega$, the joint distribution of $(Z_1, \ldots, Z_N)$ is the same as that of $(Z_{\pi(1)}, \ldots, Z_{\pi(N)})$, where $(\pi(1), \ldots, \pi(N))$ is any permutation of $\{1, \ldots, N\}$. It follows that, when testing any null hypothesis $H_0: (P, Q) \in \Omega_0$, where $\Omega_0 \subseteq \bar\Omega$, an exact level α test can be constructed by a permutation test. To review how, let $\mathbf{G}_N$ denote the set of all permutations π of $\{1, \ldots, N\}$. Then, given any test statistic $T_{m,n} = T_{m,n}(Z_1, \ldots, Z_N)$, recompute $T_{m,n}$ for all permutations π; that is, compute $T_{m,n}(Z_{\pi(1)}, \ldots, Z_{\pi(N)})$ for all $\pi \in \mathbf{G}_N$, and let their ordered values be

$$T_{m,n}^{(1)} \le T_{m,n}^{(2)} \le \cdots \le T_{m,n}^{(N!)}.$$

Fix a nominal level α, 0 < α < 1, and let k be defined by $k = N! - [\alpha N!]$, where $[\alpha N!]$ denotes the largest integer less than or equal to $\alpha N!$. Let $M^+(z)$ and $M^0(z)$ be the number of values $T_{m,n}^{(j)}(z)$ ($j = 1, \ldots, N!$) which are greater than $T^{(k)}(z)$ and equal to $T^{(k)}(z)$, respectively. Set

$$a(z) = \frac{\alpha N! - M^+(z)}{M^0(z)}.$$

Define the randomization test function φ(Z) to be equal to 1, a(Z), or 0 according to whether $T_{m,n}(Z) > T_{m,n}^{(k)}(Z)$, $T_{m,n}(Z) = T_{m,n}^{(k)}(Z)$, or $T_{m,n}(Z) < T_{m,n}^{(k)}(Z)$, respectively. Then, under any $(P, Q) \in \bar\Omega$,

$$E_{P,Q}[\varphi(X_1, \ldots, X_m, Y_1, \ldots, Y_n)] = \alpha.$$
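To make the construction concrete, here is a minimal sketch (our illustration, not code from the paper) of the randomized test function φ by brute-force enumeration of all N! permutations; this is feasible only for very small N, and in practice one samples permutations, as discussed in Section 5. The statistic passed in as `stat` is a placeholder.

```python
import numpy as np
from itertools import permutations  # enumerating all N! permutations: tiny N only


def randomization_test(x, y, stat, alpha=0.05):
    """Exact level-alpha randomization test; returns phi(Z) in {0, a(Z), 1}."""
    z = np.concatenate([x, y])
    m, N = len(x), len(x) + len(y)
    # Recompute the statistic over every permutation of the pooled data.
    vals = np.sort([stat(z[list(pi[:m])], z[list(pi[m:])])
                    for pi in permutations(range(N))])
    nfact = len(vals)                      # N!
    k = nfact - int(np.floor(alpha * nfact))
    t_k = vals[k - 1]                      # T^(k), the k-th ordered value
    m_plus = int(np.sum(vals > t_k))       # M^+(z)
    m_zero = int(np.sum(vals == t_k))      # M^0(z)
    a = (alpha * nfact - m_plus) / m_zero  # randomization on the boundary
    t_obs = stat(x, y)
    return 1.0 if t_obs > t_k else (a if t_obs == t_k else 0.0)


# Example: phi for the one-sided mean-difference statistic with m = n = 4;
# E[phi] = alpha holds exactly whenever P = Q.
# randomization_test(x, y, lambda u, v: np.sqrt(len(u)) * (u.mean() - v.mean()))
```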

Also, define the permutation distribution as

$$\hat R_{m,n}^T(t) = \frac{1}{N!} \sum_{\pi \in \mathbf{G}_N} I\{T_{m,n}(Z_{\pi(1)}, \ldots, Z_{\pi(N)}) \le t\}, \qquad (2)$$

where $\mathbf{G}_N$ denotes the N! permutations of $\{1, 2, \ldots, N\}$. Roughly speaking (after accounting for discreteness), the permutation test rejects $H_0$ if the test statistic $T_{m,n}$ (evaluated at the original data set) exceeds $T_{m,n}^{(k)}$, the 1 − α quantile of this permutation distribution.

However, problems arise if $\Omega_0$ is strictly bigger than $\bar\Omega$. Since a permuted data set no longer has the same distribution as the original data set, the argument leading to the exact construction of a level α test fails, and faulty inferences can occur. To be concrete, consider constructing a permutation test based on the difference of sample means, $T_{m,n} = m^{1/2}(\bar X_m - \bar Y_n)$. Note that we are not taking the absolute difference, so that the test is one-sided: we reject for large positive values of the difference. First of all, one needs to be very careful in deciding what family of distributions $\Omega_0$ is being tested under the null hypothesis. If the null specifies P = Q, then without further assumptions, a test based on $\bar X_m - \bar Y_n$ is not appropriate. Even though P = Q guarantees that the permutation construction results in probability of rejection equal to α, the test clearly will not have any power against distributions P and Q whose means are identical but P ≠ Q. The test is only warranted if it can be assumed that lack of equality of distributions is accompanied by a corresponding change in population means. Such an assumption may be inappropriate. Consider the case where one group receives a treatment and the other a placebo. Then, no treatment effect may arguably be considered equivalent to both groups receiving a placebo, in which case the distributions would be the same. However, even in this case, if there is an effect due to treatment, P and Q may differ not only in location but also in other aspects of the distribution, such as scale and shape. Moreover, if the two groups being compared are distinct in a way other than the assignment of treatment or placebo, as in comparing educational achievement between boys and girls, then it is especially crucial to clarify what is being tested and the implicit underlying assumptions. In such cases, the permutation test based on the difference of sample means is only appropriate as a test of equality of population means. However, the permutation test then no longer controls the level of the test, even in large samples.

As is well-known (Romano, 1990), the permutation test possesses a certain asymptotic robustness as a test of difference in means if m/n → 1 as n → ∞, or if the underlying variances of P and Q are equal, in the sense that the rejection probability under the null hypothesis of equal means tends to the nominal level. Without equal variances and comparable sample sizes, the rejection probability can be much larger than the nominal level, which is a concern. Because of the lack of robustness and the increased probability of a Type 1 error, rejection of the null may incorrectly be interpreted as rejection of equal means, when in fact it is caused by unequal variances and unequal sample sizes. (A numerical illustration is sketched below.) Even more alarming is the possibility of rejecting a one-sided null hypothesis in favor of a positive mean difference when in fact the difference in means is negative. Further note that there is also the possibility that the rejection probability can be much less than the nominal level, which by continuity implies the test is biased and has little power of detecting a true difference in means. The situation is even worse when basing a test on a difference in sample medians, in the sense that, regardless of sample sizes, the asymptotic rejection probability of the permutation test will be α only under very stringent conditions, which essentially means only in the case where the underlying distributions are the same.

However, in a very insightful paper in the context of random censoring models, Neuhaus (1993) first realized that by proper studentization of a test statistic, the permutation test can result in asymptotically valid inference even when the underlying distributions are not the same. Later, Janssen (1997) showed that, in the case of the difference of sample means, by proper studentization of the test statistic, the permutation test is a valid asymptotic approach. In particular, his results imply that, if the underlying population means are identical (and population variances are finite and may differ), then the asymptotic rejection probability of the permutation test is α. Furthermore, the use of the permutation test retains the property that the exact rejection probability is α if the underlying distributions are identical. This result has been extended to other specific problems, such as comparing variances by Pauly (2011) and the two-sample Wilcoxon test by Neubert and Brunner (2007). Other results on permutation tests are presented in Janssen (2005), Janssen and Pauls (2003), and Janssen and Pauls (2005).

The goal of this paper is to obtain a quite general result of the same phenomenon. That is, when basing a permutation test on some test statistic as a test of a parameter (usually a difference of parameters associated with marginal distributions), we would like to retain the exactness property when P = Q, and also have the rejection probability be approximately α for the more general null hypothesis specifying the parameter (such as the difference being zero). Of course, there are many alternatives for obtaining asymptotic tests, such as the bootstrap or subsampling. However, we do not wish to give up the exactness property under P = Q, and resampling methods do not have such finite sample properties.
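The over-rejection described above is easy to reproduce. The following Monte Carlo sketch (ours; the sample sizes and variances are hypothetical choices) applies the unstudentized one-sided mean-difference permutation test with the smaller sample drawn from the higher-variance population; by the variance mismatch made precise in Theorem 2.1 below, the empirical rejection rate comes out well above the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)


def perm_pvalue(x, y, stat, n_perm=999):
    """One-sided permutation p-value by random sampling of permutations."""
    z = np.concatenate([x, y])
    m = len(x)
    t_obs = stat(x, y)
    hits = 0
    for _ in range(n_perm):
        zp = rng.permutation(z)
        hits += stat(zp[:m], zp[m:]) >= t_obs
    return (1 + hits) / (1 + n_perm)


def diff_means(x, y):
    return np.sqrt(len(x)) * (x.mean() - y.mean())


# Equal means, unequal variances, unequal sample sizes: P = N(0, 25), Q = N(0, 1).
m, n, alpha = 10, 50, 0.05
rej = np.mean([perm_pvalue(rng.normal(0, 5, m), rng.normal(0, 1, n), diff_means) <= alpha
               for _ in range(1000)])
print(rej)  # well above 0.05: the test is not even asymptotically level alpha here
```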

The main problem becomes: what is the asymptotic behavior of $\hat R_{m,n}^T(\cdot)$ defined in (2) for general test statistic sequences $T_{m,n}$ when the underlying distributions differ? Only for suitable test statistics is it possible to achieve both finite sample exactness when the underlying distributions are equal and a large sample rejection probability near the nominal level when the underlying distributions need not be equal. In this sense, our results are both exact and asymptotically robust for heterogeneous populations.

This paper provides a framework for testing a parameter that depends on P and Q. We construct a general test procedure for which the asymptotic validity of the permutation test holds in a general setting. Assuming that the estimators are asymptotically linear and that consistent estimators of their asymptotic variance are available, we provide a test that has asymptotic rejection probability equal to the nominal level α, but still retains the exact rejection probability of α in finite samples if P = Q. It is not even required that the estimators be based on differentiable functionals; some methods like the bootstrap would not necessarily be even asymptotically valid under such conditions, let alone retain the finite sample exactness property when P = Q.

The arguments of the paper are quite different from those of Janssen and previous authors, and hold in great generality. For example, they immediately apply to comparing means, variances, or medians. The key idea is to show that the permutation distribution behaves like the unconditional distribution of the test statistic when all N observations are i.i.d. from the mixture distribution pP + (1 − p)Q, where p is such that m/N → p. This seems intuitive because the permutation distribution permutes the observations, so that a permuted sample is almost like a sample from the mixture distribution. In order to make this idea precise, a coupling argument is given in Section 3.3. Of course, the permutation distribution depends on all permuted samples (for a given original data set). But even a single permuted data set cannot exactly be viewed as a sample from pP + (1 − p)Q. Indeed, the first m observations from the mixture would include $B_m$ observations from P and the rest from Q, where $B_m$ has the binomial distribution based on m trials and success probability p. On the other hand, for a permuted sample, if $H_m$ denotes the number of observations from P, then $H_m$ has the hypergeometric distribution with mean mp. The key argument that allows for such a general result concerns the contiguity of the distributions of $B_m$ and $H_m$.

Section 3 highlights the main technical ideas required for the proofs. Section 4 applies these ideas to the k-sample Behrens-Fisher problem, though no assumption of normality is required. Once again, exact level α is achieved when all k distributions are equal, but the asymptotic rejection probability equals the nominal level under the null hypothesis of mean equality (under a finite variance assumption). Lastly, Monte Carlo simulation studies illustrating our results are presented in Section 5. All proofs are reserved for the appendix.

2 Robust Studentized Two-sample Test

In this section, we consider the general problem of inference from the permutation distribution when comparing parameters from two populations. Specifically, assume $X_1, \ldots, X_m$ are i.i.d. P and, independently, $Y_1, \ldots, Y_n$ are i.i.d. Q. Let θ(·) be a real-valued parameter, defined on some space of distributions $\mathcal{P}$. The problem is to test the null hypothesis

$$H_0: \theta(P) = \theta(Q). \qquad (3)$$

Of course, when P = Q, one can construct permutation tests with exact level α. Unfortunately, if P ≠ Q, the test need not be valid in the sense that the probability of a Type 1 error need not be α, even asymptotically. Thus, our goal is to construct a procedure that has asymptotic rejection probability equal to α quite generally, but also retains the exactness property in finite samples when P = Q.

We will assume that estimators are available that are asymptotically linear. Specifically, assume that, under P, there exists an estimator $\hat\theta_m = \hat\theta_m(X_1, \ldots, X_m)$ which satisfies

$$m^{1/2}[\hat\theta_m - \theta(P)] = \frac{1}{m^{1/2}} \sum_{i=1}^m f_P(X_i) + o_P(1). \qquad (4)$$

Similarly, we assume that, based on the $Y_j$ (under Q),

$$n^{1/2}[\hat\theta_n - \theta(Q)] = \frac{1}{n^{1/2}} \sum_{j=1}^n f_Q(Y_j) + o_Q(1). \qquad (5)$$

The functions $f_P$ and $f_Q$ determining the linear approximation can of course depend on the underlying distributions. Different forms of differentiability guarantee such linear expansions in the special case when $\hat\theta_m$ takes the form of an empirical plug-in estimate $\theta(\hat P_m)$, where $\hat P_m$ is the empirical measure constructed from $X_1, \ldots, X_m$, but we will not need to assume such stronger conditions. We will argue that our assumptions of asymptotic linearity already imply a result about the permutation distribution corresponding to the statistic $m^{1/2}[\hat\theta_m(X_1, \ldots, X_m) - \hat\theta_n(Y_1, \ldots, Y_n)]$, without having to impose any differentiability assumptions. However, we will assume the expansion (4) holds not just for i.i.d. samples under P and under Q, but also when sampling i.i.d. observations from the mixture distribution $\bar P = pP + qQ$, where q = 1 − p. This is a weak assumption, and it replaces having to study the permutation distribution based on variables that are no longer independent nor identically distributed with a simple assumption about the behavior under an i.i.d. sequence. Indeed, we will argue that in all cases the permutation distribution behaves asymptotically like the unconditional limiting sampling distribution of the studied

statistic sequence when sampling i.i.d. observations from $\bar P$.

Theorem 2.1. Assume $X_1, \ldots, X_m$ are i.i.d. P and, independently, $Y_1, \ldots, Y_n$ are i.i.d. Q. Consider testing the null hypothesis (3) based on a test statistic of the form

$$T_{m,n} = m^{1/2}[\hat\theta_m(X_1, \ldots, X_m) - \hat\theta_n(Y_1, \ldots, Y_n)],$$

where the estimators satisfy (4) and (5). Further assume $E_P f_P(X_i) = 0$ and $0 < E_P f_P^2(X_i) \equiv \sigma^2(P) < \infty$, and the same with P replaced by Q. Let m, n → ∞, with N = m + n, $p_m = m/N$, $q_m = n/N$, and $p_m \to p \in (0, 1)$ with

$$p_m - p = O(m^{-1/2}). \qquad (6)$$

Assume the estimator sequence also satisfies (4) with P replaced by $\bar P = pP + qQ$, with $\sigma^2(\bar P) < \infty$. Then, the permutation distribution of $T_{m,n}$ given by (2) satisfies

$$\sup_t \big|\hat R_{m,n}^T(t) - \Phi(t/\tau(\bar P))\big| \xrightarrow{P} 0,$$

where

$$\tau^2(\bar P) = \sigma^2(\bar P) + \frac{p}{1-p}\, \sigma^2(\bar P) = \frac{1}{1-p}\, \sigma^2(\bar P). \qquad (7)$$

Remark 2.1. Under $H_0$, the true unconditional sampling distribution of $T_{m,n}$ is asymptotically normal with mean 0 and variance

$$\sigma^2(P) + \frac{p}{1-p}\, \sigma^2(Q), \qquad (8)$$

which does not equal $\tau^2(\bar P)$ defined by (7) in general.

Example 2.1. (Difference of Means) As is well-known, even for the case of comparing population means by sample means, equality of (7) and (8) holds if and only if p = 1/2 or $\sigma^2(P) = \sigma^2(Q)$.
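The mismatch in Example 2.1 is easy to quantify numerically. The following sketch (with hypothetical parameter values of our choosing) evaluates (7) and (8) for the unstudentized mean difference; under equal means, $\sigma^2(\bar P) = p\,\sigma^2(P) + (1-p)\,\sigma^2(Q)$.

```python
# Worked check of Remark 2.1 / Example 2.1; parameter values are hypothetical.
sigma2_P, sigma2_Q, p = 1.0, 25.0, 0.2
q = 1 - p
sigma2_mix = p * sigma2_P + q * sigma2_Q    # variance of pP + qQ under equal means
tau2 = sigma2_mix / q                       # permutation variance, eq. (7): 25.25
true_var = sigma2_P + (p / q) * sigma2_Q    # sampling variance, eq. (8): 7.25
print(tau2, true_var)                       # unequal unless p = 1/2 or equal variances
```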

Example 2.2. (Difference of Medians) Let F and G denote the c.d.f.s corresponding to P and Q. Let θ(F) denote the median of F, i.e., $\theta(F) = \inf\{x : F(x) \ge 1/2\}$. Then, it is well known (Serfling, 1980) that, if F is continuously differentiable at θ(P) with derivative F′ (and the same with F replaced by G), then

$$m^{1/2}[\theta(\hat P_m) - \theta(P)] = \frac{1}{m^{1/2}} \sum_{i=1}^m \frac{\frac12 - I\{X_i \le \theta(P)\}}{F'(\theta(P))} + o_P(1)$$

and, similarly,

$$n^{1/2}[\theta(\hat Q_n) - \theta(Q)] = \frac{1}{n^{1/2}} \sum_{j=1}^n \frac{\frac12 - I\{Y_j \le \theta(Q)\}}{G'(\theta(Q))} + o_Q(1).$$

Thus, we can apply Theorem 2.1 and conclude that, when θ(P) = θ(Q) = θ, the permutation distribution of $T_{m,n}$ is approximately a normal distribution with mean 0 and variance

$$\frac{1}{4(1-p)[pF'(\theta) + (1-p)G'(\theta)]^2}$$

in large samples. On the other hand, the true sampling distribution is approximately a normal distribution with mean 0 and variance

$$v^2(P, Q) \equiv \frac{1}{4[F'(\theta)]^2} + \frac{p}{1-p} \cdot \frac{1}{4[G'(\theta)]^2}. \qquad (9)$$

Thus, the permutation distribution and the true unconditional sampling distribution behave differently asymptotically unless F′(θ) = G′(θ). Since we do not assume P = Q, this condition is a strong assumption. Hence, the permutation test for testing equality of medians is generally not valid in the sense that the rejection probability tends to a value that is far from the nominal level α.

Remark 2.2. The assumption (6) is of course a little stronger than the more basic assumption m/N → p, where no rate is required for the difference between m/N and p. Of course, we are free to choose p as m/N in any situation, so the assumption is rather innocuous. (Indeed, for any $m_0$ and $N_0$ with $m_0/N_0 = p$, we can always let m and N tend to infinity with $m = km_0$ and $N = kN_0$ as k → ∞.) Alternatively, we can replace (6) with the more basic assumption m/N → p as long as we slightly strengthen the assumption that the statistic has a linear expansion under $\bar P = pP + qQ$ to also have a linear expansion under the sequences $\bar P_{m,n} = \frac{m}{N} P + \frac{n}{N} Q$, which is a rather weak form of locally uniform triangular array convergence. We prefer to assume the convergence hypothesis based on an i.i.d. sequence from a fixed $\bar P$, though it is really a matter of choice. Usually, we can appeal to some basic convergence in

distribution results with ease; but if linear expansions are available (or can be derived) which are uniform in the underlying probability distribution near $\bar P$, then such results can be used instead with the weaker hypothesis $p_m \to p$.

The main goal now is to show how studentizing the test statistic leads to a general correction.

Theorem 2.2. Assume the setup and conditions of Theorem 2.1. Further assume that $\hat\sigma_m(X_1, \ldots, X_m)$ is a consistent estimator of σ(P) when $X_1, \ldots, X_m$ are i.i.d. P. Assume consistency also under Q and $\bar P$, so that $\hat\sigma_n(V_1, \ldots, V_n) \xrightarrow{P} \sigma(\bar P)$ as n → ∞ when the $V_i$ are i.i.d. $\bar P$. Define the studentized test statistic

$$S_{m,n} = \frac{T_{m,n}}{V_{m,n}}, \qquad (10)$$

where

$$V_{m,n}^2 = \hat\sigma_m^2(X_1, \ldots, X_m) + \frac{m}{n}\, \hat\sigma_n^2(Y_1, \ldots, Y_n),$$

and consider the permutation distribution defined in (2) with T replaced by S. Then,

$$\sup_t \big|\hat R_{m,n}^S(t) - \Phi(t)\big| \xrightarrow{P} 0. \qquad (11)$$

Thus, the permutation distribution is asymptotically standard normal, as is the true unconditional limiting distribution of the test statistic $S_{m,n}$. Indeed, as mentioned in Remark 2.1, the true unconditional limiting distribution of $T_{m,n}$ is normal with mean 0 and variance given by (8). But, when sampling m observations from P and n from Q, $V_{m,n}^2$ tends in probability to (8), and hence the limiting distribution of $S_{m,n}$ is standard normal, the same as that of the permutation distribution.

Example 2.1. (continued) As proved by Janssen (1997), even when the underlying distributions may have different variances and the sample sizes differ, permutation tests based on the studentized statistic

$$S_{m,n} = \frac{m^{1/2}(\bar X_m - \bar Y_n)}{\sqrt{S_X^2 + \frac{m}{n} S_Y^2}},$$

where $S_X^2 = \frac{1}{m}\sum_{i=1}^m (X_i - \bar X_m)^2$ and $S_Y^2 = \frac{1}{n}\sum_{j=1}^n (Y_j - \bar Y_n)^2$, allow one to construct a test that attains asymptotic rejection probability α when P ≠ Q, while providing the additional advantage of maintaining exact level α when P = Q.
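For the mean comparison of Example 2.1, the studentized procedure of Theorem 2.2 can be sketched as follows (our illustration, with permutations sampled rather than enumerated; the exactness argument of Section 1 is unaffected by such sampling).

```python
import numpy as np


def studentized_stat(x, y):
    """S_{m,n} of (10) for means: Janssen's (1997) studentization."""
    m, n = len(x), len(y)
    t = np.sqrt(m) * (x.mean() - y.mean())
    v2 = x.var() + (m / n) * y.var()   # S_X^2 + (m/n) S_Y^2, 1/m and 1/n conventions
    return t / np.sqrt(v2)


def studentized_perm_test(x, y, alpha=0.05, n_perm=999, seed=None):
    """One-sided permutation test based on S_{m,n}; returns (reject, p-value)."""
    rng = np.random.default_rng(seed)
    z = np.concatenate([x, y])
    m = len(x)
    s_obs = studentized_stat(x, y)
    s_perm = np.empty(n_perm)
    for b in range(n_perm):
        zp = rng.permutation(z)
        s_perm[b] = studentized_stat(zp[:m], zp[m:])
    p_val = (1 + np.sum(s_perm >= s_obs)) / (1 + n_perm)
    return p_val <= alpha, p_val
```

Rerunning the earlier N(0, 25) versus N(0, 1) experiment with this studentized statistic brings the rejection rate close to the nominal 5%, while exactness under P = Q is retained by the construction of Section 1.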

Example 2.2. (continued) Define the studentized median statistic

$$M_{m,n} = \frac{m^{1/2}[\theta(\hat P_m) - \theta(\hat Q_n)]}{\hat v_{m,n}},$$

where $\hat v_{m,n}$ is a consistent estimator of v(P, Q) defined in (9). There are several choices for a consistent estimator of v(P, Q). Examples include the usual kernel estimator (Devroye and Wagner, 1980), the bootstrap estimator (Efron, 1979), and the smoothed bootstrap (Hall, DiCiccio, and Romano, 1989).

Remark 2.3. Suppose that the true unconditional distribution of a test statistic $T_{m,n}$ is, under the null hypothesis, asymptotically given by a distribution R(·). Typically, a test rejects when $T_{m,n} > r_{m,n}$, where $r_{m,n}$ is nonrandom, as happens in many classical settings. Then, we typically have $r_{m,n} \to r(1-\alpha) \equiv R^{-1}(1-\alpha)$. Assume that $T_{m,n}$ converges to some limit law $R^*(\cdot)$ under some sequence of alternatives which are contiguous to some distribution satisfying the null. Then, the power of the test against such a sequence would tend to $1 - R^*(r(1-\alpha))$. The point here is that, under the conditions of Theorem 2.2, the permutation test based on a random critical value $\hat r_{m,n}$ obtained from the permutation distribution satisfies, under the null, $\hat r_{m,n} \xrightarrow{P} r(1-\alpha)$. But then, contiguity implies the same behavior under a sequence of contiguous alternatives. Thus, the permutation test has the same limiting local power as the classical test which uses the nonrandom critical value. So, to first order, there is no loss in power in using a permutation critical value. Of course, there are big gains, because the permutation test applies much more broadly than the usual parametric models, in that it retains the level exactly across a broad class of distributions and is at least asymptotically justified for a large nonparametric family.

3 Four Technical Ingredients

In this section, we discuss four separate ingredients from which the main results flow. These results are separated out so they can easily be applied to other problems and so that the main technical arguments are highlighted. The first two apply more generally to randomization tests, not just permutation tests, and are stated as such.

3.1 Hoeffding's Condition

Suppose data $X^n$ has distribution $P_n$ in $\mathcal{X}_n$, and $\mathbf{G}_n$ is a finite group of transformations g of $\mathcal{X}_n$ onto itself. For a given statistic $T_n = T_n(X^n)$, let $\hat R_n^T(\cdot)$ denote the randomization

distribution of $T_n$, defined by

$$\hat R_n^T(t) = \frac{1}{|\mathbf{G}_n|} \sum_{g \in \mathbf{G}_n} I\{T_n(gX^n) \le t\}. \qquad (12)$$

(In the case of permutation tests, $X^n$ corresponds to $Z = (X_1, \ldots, X_m, Y_1, \ldots, Y_n)$ and g varies over the permutations of $\{1, \ldots, N\}$.) Hoeffding (1952) gave a sufficient condition to derive the limiting behavior of $\hat R_n^T(\cdot)$. This condition is verified repeatedly in the proofs, but we add the result that the condition is also necessary.

Theorem 3.1. Let $G_n$ and $G_n'$ be independent and uniformly distributed over $\mathbf{G}_n$ (and independent of $X^n$). Suppose, under $P_n$,

$$(T_n(G_n X^n), T_n(G_n' X^n)) \xrightarrow{d} (T, T'), \qquad (13)$$

where T and T′ are independent, each with common c.d.f. $R^T(\cdot)$. Then, for all continuity points t of $R^T(\cdot)$,

$$\hat R_n^T(t) \xrightarrow{P} R^T(t). \qquad (14)$$

Conversely, if (14) holds for some limiting c.d.f. $R^T(\cdot)$ whenever t is a continuity point, then (13) holds.

The reason we think it is important to add the necessity part of the result is that our methodology is somewhat different from that of other authors mentioned in the introduction, who take a more conditional approach to proving limit theorems. After all, the permutation distribution is indeed a distribution conditional on the observed set of observations (without regard to ordering). However, the theorem shows that a sufficient condition is obtained by verifying an unconditional weak convergence property, which may look surprising at first in that it includes the additional auxiliary randomization $G_n'$ in its statement. Nevertheless, simple arguments (see the appendix) show the condition is indeed necessary, and so taking such an approach is not fanciful.

3.2 Slutsky's Theorem for Randomization Distributions

Consider the general setup of Subsection 3.1. The result below describes Slutsky's theorem in the context of randomization distributions. In this context, the randomization distributions are random themselves, and therefore the usual Slutsky's theorem does not quite apply. Because of its utility in the proofs of our main results, we highlight the

statement. Given sequences of statistics $T_n$, $A_n$, and $B_n$, let $\hat R_n^{AT+B}(\cdot)$ denote the randomization distribution corresponding to the statistic sequence $A_n T_n + B_n$; i.e., replace $T_n$ in (12) by $A_n T_n + B_n$, so

$$\hat R_n^{AT+B}(t) \equiv \frac{1}{|\mathbf{G}_n|} \sum_{g \in \mathbf{G}_n} I\{A_n(gX^n) T_n(gX^n) + B_n(gX^n) \le t\}. \qquad (15)$$

Theorem 3.2. Let $G_n$ and $G_n'$ be independent and uniformly distributed over $\mathbf{G}_n$ (and independent of $X^n$). Assume $T_n$ satisfies (13). Also, assume

$$A_n(G_n X^n) \xrightarrow{P} a \qquad (16)$$

and

$$B_n(G_n X^n) \xrightarrow{P} b, \qquad (17)$$

for constants a and b. Let $R^{aT+b}(\cdot)$ denote the distribution of aT + b, where T is the limiting random variable assumed in (13). Then,

$$\hat R_n^{AT+B}(t) \xrightarrow{P} R^{aT+b}(t),$$

if the distribution $R^{aT+b}(\cdot)$ of aT + b is continuous at t. (Of course, $R^{aT+b}(t) = R^T\big(\frac{t-b}{a}\big)$ if a > 0.)

Remark 3.1. Under the randomization hypothesis that the distribution of $X^n$ is the same as that of $gX^n$ for any $g \in \mathbf{G}_n$, the conditions (16) and (17) are equivalent to the assumptions that $A_n(X^n) \xrightarrow{P} a$ and $B_n(X^n) \xrightarrow{P} b$, i.e., convergence in probability based on the original sample $X^n$ without first transforming by a random $G_n$. For more on the randomization hypothesis, see Section 15.2 of Lehmann and Romano (2005).

3.3 A Coupling Construction

Consider the general situation where k samples are observed from possibly different distributions. Specifically, assume for $i = 1, \ldots, k$ that $X_{i,1}, \ldots, X_{i,n_i}$ is a sample of $n_i$ i.i.d. observations from $P_i$. All $N \equiv \sum_i n_i$ observations are mutually independent. Put all the observations together in one vector

$$Z = (X_{1,1}, \ldots, X_{1,n_1}, X_{2,1}, \ldots, X_{2,n_2}, \ldots, X_{k,1}, \ldots, X_{k,n_k}).$$

The basic intuition driving the results concerning the behavior of the permutation distribution stems from the following. Since the permutation distribution considers the

empirical distribution of a statistic evaluated at all permutations of the data, it clearly does not depend on the ordering of the observations. Let $n_i/N$ denote the proportion of observations in the ith sample, and assume that $n_i \to \infty$ in such a way that, for some $p_i > 0$,

$$p_i - \frac{n_i}{N} = O(N^{-1/2}). \qquad (18)$$

Then the permutation distribution based on Z should behave approximately like the permutation distribution based on a sample of N i.i.d. observations $\bar Z = (\bar Z_1, \ldots, \bar Z_N)$ from the mixture distribution

$$\bar P \equiv p_1 P_1 + \cdots + p_k P_k.$$

Of course, we can think of the N observations generated from $\bar P$ as arising out of a two-stage process: for $i = 1, \ldots, N$, first draw an index j at random with probability $p_j$; then, conditional on the outcome being j, sample $\bar Z_i$ from $P_j$. However, aside from the fact that the ordering of the observations in Z is clearly that of $n_1$ observations from $P_1$, followed by $n_2$ observations from $P_2$, etc., the original sampling scheme is still only approximately like that of sampling from $\bar P$. For example, the number of observations $\bar Z_i$ out of the N which are from $P_1$ is binomial with parameters N and $p_1$ (and so has mean $p_1 N \approx n_1$), while the number of observations from $P_1$ in the original sample Z is exactly $n_1$. Along the same lines, let $\pi = (\pi(1), \ldots, \pi(N))$ denote a random permutation of $\{1, \ldots, N\}$. Then, if we consider random permutations of both Z and $\bar Z$, the number of observations in the first $n_1$ coordinates of the permuted Z which came from $P_1$ has the hypergeometric distribution, while the corresponding number for the permuted $\bar Z$ is still binomial.

We can make a more precise statement by constructing a certain coupling of Z and $\bar Z$. That is, except for ordering, we can construct $\bar Z$ to include almost the same set of observations as Z. The simple idea goes as follows. Given Z, we will construct observations $\bar Z_1, \ldots, \bar Z_N$ via the two-stage process described above, using the observations in Z to make up the $\bar Z_i$ as much as possible. First, draw an index j among $\{1, \ldots, k\}$ at random with probability $p_j$; then, conditionally on the outcome being j, set $\bar Z_1 = X_{j,1}$. Next, if the next index i drawn among $\{1, \ldots, k\}$ at random with probability $p_i$ is different from the index j from which $\bar Z_1$ was sampled, then set $\bar Z_2 = X_{i,1}$; otherwise, if i = j as in the first step, set $\bar Z_2 = X_{j,2}$. In other words, we continue to use the observations in Z to fill in the observations $\bar Z_i$. However, after a certain point, we will get stuck, because we

will have already exhausted all the $n_j$ observations from the jth population governed by $P_j$. If this happens and the index j is drawn again, then just sample a new observation $X_{j,n_j+1}$ from $P_j$. Continue in this manner, so that as many as possible of the original observations in Z are used in the construction of $\bar Z$.

Now, we have both Z and $\bar Z$, and at this point they have many of the same observations in common. The number of observations which differ, say D, is the (random) number of added observations required to fill up $\bar Z$. (Note that we are obviously using the word differ here to mean the observations are generated from different mechanisms, though in fact there may be a positive probability that the observations are still equal if the underlying distributions have atoms. Still, we count such observations as differing.) Moreover, we can reorder the observations in $\bar Z$ by a permutation $\pi_0$ so that $Z_i$ and $\bar Z_{\pi_0(i)}$ agree for all i except for some hopefully small (random) number D of them. To do this, recall that Z has its observations in order, i.e., the first $n_1$ observations arose from $P_1$, the next set of $n_2$ observations came from $P_2$, etc. Thus, to couple Z and $\bar Z$, simply put the observations in $\bar Z$ which came from $P_1$ first, up to $n_1$ of them. That is, if the number of observations in $\bar Z$ from $P_1$ is greater than or equal to $n_1$, then the spots $i = 1, \ldots, n_1$ are filled with observations in $\bar Z$ which came from $P_1$, and any beyond the first $n_1$ are put aside for now. On the other hand, if the number of observations in $\bar Z$ which came from $P_1$ is less than $n_1$, fill up as many of the first $n_1$ spots as possible and leave the remaining slots among the first $n_1$ spots blank for now. Next, move on to the observations in $\bar Z$ which came from $P_2$ and repeat the above procedure for the spots $n_1 + 1, \ldots, n_1 + n_2$, i.e., start filling up these spots with as many of the observations in $\bar Z$ which came from $P_2$ as possible, up to $n_2$ of them. After going through all the distributions $P_i$ from which the observations in $\bar Z$ came, one must then complete the reordering: simply fill up the empty spots with the remaining observations that were put aside. (At this point, it does not matter where each of the remaining observations gets inserted; but, to be concrete, fill the empty slots by inserting the put-aside observations in chronological order from when they were constructed.) This permuting of the observations in $\bar Z$ corresponds to a permutation $\pi_0$ and satisfies $Z_i = \bar Z_{\pi_0(i)}$ for all indices i except for D of them.

For example, suppose there are k = 2 populations. Suppose that $N_1$ of the $\bar Z$ observations came from $P_1$, and so $N - N_1$ came from $P_2$. Of course, $N_1$ is random and has the binomial distribution with parameters N and $p_1$. If $N_1 \ge n_1$, then the above construction yields that the first $n_1$ observations of Z and $\bar Z_{\pi_0}$ completely agree. Furthermore, if $N_1 > n_1$, then the number of observations in $\bar Z$ from $P_2$ is $N - N_1 < N - n_1 = n_2$, and $N - N_1$ of the last $n_2$ indices of Z match those of $\bar Z_{\pi_0}$, with the rest differing. In this situation,

we have

$$Z = (X_1, \ldots, X_{n_1}, Y_1, \ldots, Y_{n_2})$$

and

$$\bar Z_{\pi_0} = (X_1, \ldots, X_{n_1}, Y_1, \ldots, Y_{N-N_1}, X_{n_1+1}, \ldots, X_{N_1}),$$

so that Z and $\bar Z_{\pi_0}$ differ only in the last $N_1 - n_1$ places. In the opposite situation, where $N_1 < n_1$, Z and $\bar Z_{\pi_0}$ are equal in the first $N_1$ and the last $n_2$ places, differing only in spots $N_1 + 1, \ldots, n_1$.

The number of observations D where Z and $\bar Z_{\pi_0}$ differ is random, and we now analyze how large it is. Let $N_j$ denote the number of observations in $\bar Z$ which are generated from $P_j$. Then, $(N_1, \ldots, N_k)$ has the multinomial distribution based on N trials and success probabilities $(p_1, \ldots, p_k)$. In terms of the $N_j$, the number of differing observations in the above coupling construction is

$$D = \sum_{j=1}^k \max(N_j - n_j, 0).$$

If we assume $p_j > 0$ for all j, then by the usual central limit theorem, $N_j - Np_j = O_P(N^{1/2})$, which together with (18) yields

$$N_j - n_j = (N_j - Np_j) + (Np_j - n_j) = O_P(N^{1/2}).$$

It follows that $D = O_P(N^{1/2})$, and so D/N converges to 0 in probability. It also follows that

$$E(D) \le \sum_{j=1}^k E|N_j - n_j| \le \sum_{j=1}^k \big( E|N_j - Np_j| + |Np_j - n_j| \big) \le \sum_{j=1}^k \{E[(N_j - Np_j)^2]\}^{1/2} + O(N^{1/2}) = \sum_{j=1}^k [Np_j(1-p_j)]^{1/2} + O(N^{1/2}) = O(N^{1/2}).$$

In summary, the coupling construction shows that, with high probability, only a vanishing fraction of the N observations in Z and $\bar Z_{\pi_0}$ differ.
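The order of D is easy to check by simulation. The sketch below (ours, with arbitrary k = 3 and mixture weights) draws $(N_1, \ldots, N_k)$ from the multinomial distribution and evaluates $D = \sum_j \max(N_j - n_j, 0)$; the ratio of the average D to $N^{1/2}$ stabilizes as N grows, in line with $E(D) = O(N^{1/2})$.

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.2, 0.3, 0.5])            # hypothetical mixture weights, k = 3

for N in [100, 1_000, 10_000, 100_000]:
    n = np.floor(p * N).astype(int)      # sample sizes with n_j/N ~ p_j
    n[-1] = N - n[:-1].sum()             # make the sizes add up to N
    counts = rng.multinomial(N, p, size=5_000)      # draws of (N_1, ..., N_k)
    D = np.maximum(counts - n, 0).sum(axis=1)       # number of differing spots
    print(N, D.mean(), D.mean() / np.sqrt(N))       # last column roughly constant
```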

distribution P. Whether or not this can be done requires some knowledge of the form of the statistic, but intuitively it should hold if the statistic cannot strongly be affected by a change in a small proportion of the observations; its validity though must be established on a case by case basis. The point is that it is a worthwhile and beneficial route to pursue because the behavior of the permutation distribution under N i.i.d. observations is typically much easier to analyze than under the more general setting when observations have possibly different distributions. Furthermore, the behavior under i.i.d. observations seems fundamental as this is the requirement for the randomization hypothesis to hold, i.e. the requirement to yield exact finite sample inference. To be more specific, suppose π and π are independent random permutations, and independent of the Z i and Z i. Suppose we can show that (T N ( Z π ), T N ( Z π )) d (T, T ), (9) where T and T are independent with common c.d.f R( ). Then, by Theorem 3., the randomization distribution based on T N converges in probability to R( ) when all observations are i.i.d. according to P. But since ππ 0 (meaning π composed with π 0 so π 0 is applied first) and π π 0 are also independent random permutations, (9) also implies (T N ( Z ππ0 ), T N ( Z π π 0 )) d (T, T ), Using the coupling construction to construct Z, suppose it can be shown that T N ( Z ππ0 ) T N (Z π ) P 0. (20) Then, it also follows that T N ( Z π π 0 ) T N (Z π ) P 0, and so by Slutsky s Theorem, it follows that (T N (Z π ), T N (Z π )) d (T, T ). (2) Therefore, again by Theorem 3., the randomization distribution also converges in probability to R( ) under the original model of k samples from possibly different distributions. In summary, the coupling construction of Z, Z and π 0 and the one added requirement (20) allow us to reduce the study of the permutation distribution under possibly k different distributions to the i.i.d. case when all N observations are i.i.d. according to P. We summarize this as follows. Lemma 3.. Assume (9) and (20). Then, (2) holds, and so the permutation distri- 6

distribution based on k samples from possibly different distributions behaves asymptotically as if all observations were i.i.d. from the mixture distribution $\bar P$, and satisfies

$$\hat R_{m,n}^T(t) \xrightarrow{P} R(t),$$

if t is a continuity point of the distribution R of T in (19).

Example 3.1 (Difference of Sample Means). To appreciate what is involved in the verification of (20), consider the two-sample problem of Theorem 2.1, in the special case of testing equality of means. The unknown variances may differ and are assumed finite. Consider the test statistic $T_{m,n} = m^{1/2}[\bar X_m - \bar Y_n]$. By the coupling construction, $\bar Z_{\pi\pi_0}$ and $Z_\pi$ have the same components except in at most D places. Now,

$$T_{m,n}(\bar Z_{\pi\pi_0}) - T_{m,n}(Z_\pi) = m^{1/2}\Big[\frac{1}{m}\sum_{i=1}^m \big(\bar Z_{\pi\pi_0(i)} - Z_{\pi(i)}\big)\Big] - m^{1/2}\Big[\frac{1}{n}\sum_{j=m+1}^N \big(\bar Z_{\pi\pi_0(j)} - Z_{\pi(j)}\big)\Big].$$

All of the terms in the above two sums are zero except for at most D of them. But any nonzero term like $\bar Z_{\pi\pi_0(i)} - Z_{\pi(i)}$ has variance bounded above by $2\max(\mathrm{Var}(X_1), \mathrm{Var}(Y_1)) < \infty$. Note that the above random variable has mean zero under the null hypothesis that $E(X_i) = E(Y_j)$. To bound its variance, condition on D and π, and note that it has conditional mean 0 and conditional variance bounded above by

$$\frac{m}{\min(m^2, n^2)}\, 2\max(\mathrm{Var}(X_1), \mathrm{Var}(Y_1))\, D,$$

and hence unconditional variance bounded above by

$$\frac{m}{\min(m^2, n^2)}\, 2\max(\mathrm{Var}(X_1), \mathrm{Var}(Y_1))\, O(N^{1/2}) = O(N^{-1/2}) = o(1),$$

implying (20). In words, we have shown that the behavior of the permutation distribution under the original sampling scheme can be deduced from the behavior of the permutation distribution when all observations are i.i.d. with mixture distribution $\bar P$.

Two final points are relevant. First, the limiting distribution R is typically the same as the limit of the true unconditional distribution of $T_N$ under $\bar P$. The true limiting distribution under $(P_1, \ldots, P_k)$ need not be the same as under $\bar P$. However, suppose the choice of test statistic $T_N$ is such that it is an asymptotic pivot, in the sense that its limiting distribution does not depend on the underlying

probability distributions. Then, typically, the randomization or permutation distribution under $(P_1, \ldots, P_k)$ will asymptotically reflect the true unconditional distribution of $T_N$, resulting in asymptotically valid inference. Indeed, the general results in Section 2 yield many examples of this phenomenon. However, that these statements need qualification is made clear by the following two (somewhat contrived) examples.

Example 3.2. Here, we illustrate a situation where the coupling works, but the true sampling distribution does not behave like the permutation distribution under the mixture model $\bar P$. In the two-sample setup with m = n, suppose $X_1, \ldots, X_n$ are i.i.d. uniform on the set of x with |x| < 1, and $Y_1, \ldots, Y_n$ are i.i.d. uniform on the set of y with 2 < |y| < 3. So, $E(X_i) = E(Y_j) = 0$. Consider the test statistic $T_{n,n}$ defined as

$$T_{n,n}(X_1, \ldots, X_n, Y_1, \ldots, Y_n) = n^{-1/2} \sum_{i=1}^n \big[ I\{|Y_i| > 2\} - I\{|X_i| < 2\} \big].$$

Under the true sampling scheme, $T_{n,n}$ is zero with probability one. However, if all 2n observations are sampled from the mixture model, it is easy to see that $T_{n,n}$ is asymptotically normal with mean 0 and variance 1/2, which is also the limit of the permutation distribution (in probability). So here, the permutation distribution under the given distributions is the same as under $\bar P$, though it does not reflect the actual true unconditional sampling distribution.

Example 3.3. Here, we consider a situation where both populations are indeed identical, so there is no need for a coupling argument. However, the point is that the permutation distribution does not behave like the true unconditional sampling distribution. Assume $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_n$ are all i.i.d. N(0, 1), and consider the test statistic

$$T_{n,n}(X_1, \ldots, X_n, Y_1, \ldots, Y_n) = n^{-1/2} \sum_{i=1}^n (X_i + Y_i).$$

Unconditionally, $T_{n,n}$ converges in distribution to N(0, 2). However, the permutation distribution places mass one at $n^{1/2}(\bar X_n + \bar Y_n)$, because the statistic $T_{n,n}$ is permutation invariant.

Certainly, the moral of these examples is that the statistic needs to reflect an actual comparison between P and Q, such as a difference between the same functional evaluated at P and Q.

3.4 An Auxiliary Contiguity Result

Fix m and n, with N = m + n. Eventually, m = m(n) → ∞ as n → ∞. Set $p_m = m/N$. Let $P_m$ be the binomial distribution based on m trials and success probability $p_m$. Also, let $Q_m$ be the hypergeometric distribution representing the number of objects labeled X when m objects are sampled without replacement from N objects, of which m are labeled X and n are labeled Y.

Lemma 3.2. Assume the above setup with $p_m \to p \in (0, 1)$ as m → ∞. Let $B_m$ be a random variable having distribution $P_m$. Consider the likelihood ratio $L_m(x) = dQ_m(x)/dP_m(x)$.

(i) The limiting distribution of $L_m(B_m)$ satisfies

$$L_m(B_m) \xrightarrow{d} L \equiv \frac{1}{\sqrt q}\, \exp\Big(-\frac{p}{2q} Z^2\Big), \qquad (22)$$

where Z ∼ N(0, 1) denotes a standard normal random variable and q = 1 − p.

(ii) $Q_m$ and $P_m$ are mutually contiguous.

Remark 3.2. With $B_m$ having the binomial distribution with parameters m and $p_m$ as in Lemma 3.2, also let $B_m'$ have the binomial distribution with parameters m and p. Then, the distributions of $B_m$ and $B_m'$ are contiguous if and only if $p_m - p = O(m^{-1/2})$, not just $p_m \to p$.
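Lemma 3.2(i) can also be checked numerically. The sketch below (ours) evaluates the likelihood ratio $L_m(B_m)$ of the hypergeometric to the binomial distribution at binomial draws and compares its quantiles with those of the claimed limit $q^{-1/2}\exp(-pZ^2/2q)$; the two sets of quantiles agree increasingly well as m grows.

```python
import numpy as np
from scipy.stats import binom, hypergeom

rng = np.random.default_rng(3)
p = 0.4                                   # hypothetical limit of p_m
for m in [50, 500, 5000]:
    n = int(round(m * (1 - p) / p))       # so that p_m = m/N is close to p
    N = m + n
    pm = m / N
    b = rng.binomial(m, pm, size=100_000)                 # B_m ~ P_m
    L = hypergeom.pmf(b, N, m, m) / binom.pmf(b, m, pm)   # L_m(B_m) = dQ_m/dP_m
    z = rng.standard_normal(100_000)
    L_lim = np.exp(-p * z**2 / (2 * (1 - p))) / np.sqrt(1 - p)
    print(m, L.mean(), np.quantile(L, [0.1, 0.5, 0.9]),
          np.quantile(L_lim, [0.1, 0.5, 0.9]))
```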

Lemma 3.3. Suppose $V_1, \ldots, V_m$ are i.i.d. according to the mixture distribution $\bar P \equiv pP + qQ$, where $p \in (0, 1)$, q = 1 − p, and P and Q are two probabilities (on some general space). Assume, for some sequence $W_m$ of statistics,

$$W_m(V_1, \ldots, V_m) \xrightarrow{P} t, \qquad (23)$$

for some constant t (which can depend on P, Q, and p). Let m, n → ∞, with N = m + n, $p_m = m/N$, $q_m = n/N$, and $p_m \to p \in (0, 1)$ with

$$p_m - p = O(m^{-1/2}). \qquad (24)$$

Further, let $X_1, \ldots, X_m$ be i.i.d. P and $Y_1, \ldots, Y_n$ be i.i.d. Q. Let $(Z_1, \ldots, Z_N) = (X_1, \ldots, X_m, Y_1, \ldots, Y_n)$, and let $(\pi(1), \ldots, \pi(N))$ denote a random permutation of $\{1, \ldots, N\}$ (independent of all other variables). Then,

$$W_m(Z_{\pi(1)}, \ldots, Z_{\pi(m)}) \xrightarrow{P} t. \qquad (25)$$

Remark 3.3. The importance of Lemma 3.3 is that it allows us to deduce the behavior of the statistic $W_m$ under the randomization or permutation distribution from the basic assumption of how $W_m$ behaves under i.i.d. observations from the mixture distribution $\bar P$. Note that in (23), the convergence in probability assumption is required when the $V_i$ are i.i.d. $\bar P$ (so the P over the arrow is just a generic symbol for convergence in probability).

Remark 3.4. As mentioned in Remark 2.2, the assumption (24) is stronger than the more basic assumption m/N → p, where no rate is required for the difference between m/N and p. Alternatively, we can replace (24) with the more basic assumption m/N → p as long as we slightly strengthen the requirement (23) to $W_m(Z_{m,1}, \ldots, Z_{m,m}) \xrightarrow{P} t$ when $Z_{m,1}, \ldots, Z_{m,m}$ are i.i.d. according to the mixture distribution $p_m P + q_m Q$ (rather than pP + qQ), so that the data distribution at time m depends on m. We prefer to assume the convergence hypothesis based on an i.i.d. sequence, though it is really a matter of choice. Usually, we can appeal to some basic convergence in probability results with ease, but if convergence in probability results are available (or can be derived) which are uniform in the underlying probability distribution, then such results can be used instead with the weaker hypothesis $p_m \to p$.

4 Nonparametric k-sample Behrens-Fisher Problem

From our general considerations, we are now guided by the principle that the large sample distribution of the test statistic should not depend on the underlying distributions; that is, it should be asymptotically pivotal under the null. Of course, it can be something other than normal, and we next consider the important problem of testing equality of means of k samples (where a limiting Chi-squared distribution is obtained).

The problem studied is the nonparametric one-way layout in the analysis of variance. Assume we observe k independent samples of i.i.d. observations. Specifically, assume $X_{i,1}, \ldots, X_{i,n_i}$ are i.i.d. $P_i$. Some of our results hold for fixed $n_1, \ldots, n_k$, but we also have asymptotic results as $N \equiv \sum_i n_i \to \infty$. Let $n = (n_1, \ldots, n_k)$; the notation n → ∞ will mean $\min_i n_i \to \infty$. The $P_i$ are unknown probability distributions on the real line, assumed to have finite variance. Let μ(P) and σ²(P) denote the mean and variance of P, respectively. The problem of interest is to test the null hypothesis

$$H_0: \mu(P_1) = \cdots = \mu(P_k)$$

against the alternative

$$H_1: \mu(P_i) \ne \mu(P_j) \text{ for some } i, j.$$

The classical approach is to assume $P_i$ is normal $N(\mu_i, \sigma^2)$ with a common variance. Here, we will impose neither normality nor the assumption of common variance.

One approach used to robustify the usual F-test is to apply a permutation test. The underlying distributions need not be normal for the permutation approach to yield exact level α tests, but what is needed is that $P_i$ is just $P_j$ shifted, for all i and j. To put it another way, it must be the case that the c.d.f. $F_i$ corresponding to $P_i$ satisfies $F_i(x) = F(x - \mu_i)$ for some unknown F and constants $\mu_i$ (which can then be taken to be the means of the $F_i$, assuming the means exist). In other words, under $H_0$, the observations must be mutually independent and identically distributed. Of course, this is much weaker than the usual normal theory assumptions. Unfortunately, a permutation test applied to the usual F-statistic will fail to control the probability of a Type 1 error, even asymptotically. The goal here is to construct a method that retains exact control of the probability of a Type 1 error when the observations are i.i.d., but also asymptotically controls the probability of a Type 1 error under very weak assumptions, specifically finite variances of the underlying distributions.

The first step is the choice of test statistic. In order to preserve the good power properties of the classical test under normality, consider the generalized likelihood ratio for testing $H_0$ against $H_1$ under the normal model where it is assumed $P_i = N(\mu_i, \sigma_i^2)$. If, for now, we further assume that the $\sigma_i$ are known, then it is easily checked that the

generalized likelihood ratio test rejects for large values of

$$T_{n,0} = \sum_{i=1}^k \frac{n_i}{\sigma_i^2} \Bigg[ \bar X_{n,i} - \frac{\sum_{j=1}^k n_j \bar X_{n,j}/\sigma_j^2}{\sum_{j=1}^k n_j/\sigma_j^2} \Bigg]^2, \qquad (26)$$

where $\bar X_{n,i} = \sum_{j=1}^{n_i} X_{i,j}/n_i$. Since the $\sigma_i$ will not be assumed known, we replace $\sigma_i$ in (26) with $S_{n,i}$, where

$$S_{n,i}^2 = \frac{1}{n_i} \sum_{j=1}^{n_i} (X_{i,j} - \bar X_{n,i})^2,$$

yielding

$$T_{n,1} = \sum_{i=1}^k \frac{n_i}{S_{n,i}^2} \Bigg[ \bar X_{n,i} - \frac{\sum_{j=1}^k n_j \bar X_{n,j}/S_{n,j}^2}{\sum_{j=1}^k n_j/S_{n,j}^2} \Bigg]^2. \qquad (27)$$

We need the limiting behavior of $T_{n,1}$, not just under normality or equal distributions. (Some relatively recent large sample approaches to this specific problem, which do not retain our finite sample exactness property, are given in Rice and Gaines (1989) and Krishnamoorthy, Lu and Mathew (2007).)

Lemma 4.1. Consider the above set-up with $0 < \sigma_i^2 = \sigma^2(P_i) < \infty$. Assume $n_i \to \infty$ with $n_i/N \to p_i > 0$. Then, under $H_0$, both $T_{n,0}$ and $T_{n,1}$ converge in distribution to the Chi-squared distribution with k − 1 degrees of freedom.

Let $\hat R_{n,1}(\cdot)$ denote the permutation distribution corresponding to $T_{n,1}$. In words, $T_{n,1}$ is recomputed over all permutations of the data. Specifically, if we let $(Z_1, \ldots, Z_N) = (X_{1,1}, \ldots, X_{1,n_1}, X_{2,1}, \ldots, X_{2,n_2}, \ldots, X_{k,1}, \ldots, X_{k,n_k})$, then $\hat R_{n,1}(t)$ is formally equal to the right side of (2), with $T_{m,n}$ replaced by $T_{n,1}$.

Theorem 4.1. Consider the above set-up with $0 < \sigma^2(P_i) < \infty$. Assume $n_i \to \infty$ with $n_i/N \to p_i > 0$. Then, under $H_0$,

$$\hat R_{n,1}(t) \xrightarrow{P} G_{k-1}(t),$$

where $G_d$ denotes the Chi-squared distribution with d degrees of freedom. Moreover, if $P_1, \ldots, P_k$ satisfy $H_0$, then the probability that the permutation test rejects $H_0$ tends to the nominal level α.
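For reference, the statistic (27) is straightforward to compute. The sketch below (ours) uses the $1/n_i$ variance convention adopted for $S_{n,i}^2$ above, and can be plugged into the permutation scheme of Section 1 in place of $T_{m,n}$.

```python
import numpy as np


def t_n1(samples):
    """Studentized k-sample statistic T_{n,1} of equation (27).

    samples: list of 1-d arrays, one per group; compare to chi^2 with k-1 df.
    """
    n = np.array([len(s) for s in samples], dtype=float)
    means = np.array([np.mean(s) for s in samples])
    var_ = np.array([np.var(s) for s in samples])   # S_{n,i}^2 with the 1/n_i convention
    w = n / var_                                    # weights n_i / S_{n,i}^2
    grand = np.sum(w * means) / np.sum(w)           # weighted grand mean
    return float(np.sum(w * (means - grand) ** 2))
```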

5 Simulation Results

Monte Carlo simulation studies illustrating our results are presented in this section. Table 1 tabulates the rejection probabilities of one-sided tests for the studentized permutation median test, where the nominal level considered is α = 0.05. The simulation results confirm that the studentized permutation median test is valid in the sense that it approximately attains level α in large samples. Odd sample sizes are selected in the Monte Carlo simulation for simplicity. We consider several pairs of distinct sampling distributions that share the same median, as listed in the first column of Table 1. For each situation, 10,000 simulations were performed. Within a given simulation, the permutation test was calculated by randomly sampling 999 permutations. Note that neither the exactness properties nor the asymptotic properties are changed at all (as long as the number of permutations sampled tends to infinity). For a discussion of stochastic approximations to the permutation distribution, see the end of Section 15.2 in Lehmann and Romano (2005) and Section 4 in Romano (1989).

As is well-known, when the underlying distributions of two distinct independent samples are not identical, the unstudentized permutation median test is not valid in the sense that the rejection probability is far from the nominal level α = 0.05. For example, although a logistic distribution with location parameter 0 and scale parameter 1 and a continuous uniform distribution with support ranging from −10 to 10 have the same median of 0, the rejection probability for the sample sizes examined is between 0.099 and 0.226 and moves further away from the nominal level α = 0.05 as the sample sizes increase. In contrast, the studentized permutation test results in a rejection probability that tends to the nominal level α asymptotically.

We apply the bootstrap method (Efron, 1979) to estimate the variance of the median in the simulation: the quantity $\frac{1}{4 f_P^2(\theta)\, m}$ is estimated by

$$\sum_{l=1}^m \big[ X_{(l)} - \theta(\hat P_m) \big]^2\, P\big(\theta(\hat P_m^*) = X_{(l)}\big),$$

where $\theta(\hat P_m^*)$ denotes the median of a bootstrap sample and, for an odd number m,

$$P\big(\theta(\hat P_m^*) = X_{(l)}\big) = P\Big(\mathrm{Binomial}\big(m, \tfrac{l}{m}\big) > \tfrac{m}{2}\Big) - P\Big(\mathrm{Binomial}\big(m, \tfrac{l-1}{m}\big) > \tfrac{m}{2}\Big).$$

As noted earlier, there exist other choices, such as the kernel estimator and the smoothed bootstrap estimator.
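This bootstrap variance is available in closed form, with no resampling, since the bootstrap median must be one of the order statistics. A sketch of the computation (ours) follows; `scipy.stats.binom.sf` supplies the binomial tail probabilities.

```python
import numpy as np
from scipy.stats import binom


def median_var_boot(x):
    """Closed-form bootstrap estimate of the variance of the sample median.

    Assumes an odd sample size m, matching the simulations above.
    """
    x = np.sort(np.asarray(x, dtype=float))
    m = len(x)
    assert m % 2 == 1, "formula assumes odd m"
    l = np.arange(1, m + 1)
    # P(bootstrap median <= X_(l)) = P(Binomial(m, l/m) > m/2)
    tail = binom.sf(m // 2, m, l / m)
    probs = np.diff(np.concatenate(([0.0], tail)))  # P(median* = X_(l))
    med = x[m // 2]                                 # sample median for odd m
    return float(np.sum(probs * (x - med) ** 2))


# The studentized median statistic M_{m,n} then uses
# hat v_{m,n}^2 = m * (median_var_boot(x) + median_var_boot(y)).
```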

We emphasize, however, that using the bootstrap to obtain an estimate of the standard error does not destroy the exactness of permutation tests under identical distributions.

Table 1: Monte Carlo simulation results for the studentized permutation median test (one-sided, α = 0.05).

                                            m =    5       13      15      101     101     201     401
                                            n =    5       21      101     101     201     201     401
N(0,1) vs N(0,5)
    Not studentized                                0.079   0.524   0.324   0.2309  0.2266  0.2266  0.2249
    Studentized                                    0.0802  0.458   0.095   0.065   0.057   0.057   0.053
N(0,1) vs t(5)
    Not studentized                                0.0646  0.87    0.24    0.769   0.849   0.849   0.853
    Studentized                                    0.0707  0.556   0.0904  0.0776  0.066   0.066   0.06
Logistic(0,1) vs U(−10,10)
    Not studentized                                0.099   0.43    0.237   0.2258  0.2233  0.2233  0.226
    Studentized                                    0.077   0.249   0.0923  0.0686  0.0574  0.0574  0.0574
Laplace(ln 2, 1) vs Exp(1)
    Not studentized                                0.0420  0.0462  0.0477  0.048   0.0493  0.046   0.050
    Studentized                                    0.0386  0.0422  0.0444  0.0502  0.0485  0.0505  0.053

6 Conclusion

When the fundamental assumption of identical distributions need not hold, two-sample permutation tests are invalid unless quite stringent conditions are satisfied, depending on the precise nature of the problem. For example, the two-sample permutation test based on the difference of sample means is asymptotically valid only when either the distributions have the same variance or the sample sizes are comparable. Thus, a careful interpretation of rejecting the null is necessary; rejecting the null based on a permutation test does not necessarily imply the rejection of the null hypothesis that some real-valued parameter θ(F, G) equals some specified value $\theta_0$. We provide a framework that allows one to obtain asymptotic rejection probability α in two-sample permutation tests. One great advantage of the proposed test is that it retains the exactness property in finite samples when P = Q, a desirable property that bootstrap and subsampling methods fail to possess. To summarize, if the true goal is to test whether the parameter of interest θ equals some specified value $\theta_0$, permutation tests based on a correctly studentized statistic are an attractive choice. When testing the equality of means, for example, the permutation t-test based on a studentized statistic obtains asymptotic rejection probability α in large samples.