

Re-sampling and exchangeable arrays

Peter McCullagh
Department of Statistics, University of Chicago
5734 University Ave, Chicago IL 60637

November 1997; revised January 1999

Summary

The non-parametric, or re-sampling, bootstrap for a single unstructured sample corresponds to the algebraic operation of monoid composition, with a uniform distribution on the monoid. With this interpretation, the notion of re-sampling can be extended to designs having a certain group-invariance property. Two types of exchangeable array structures are considered in some detail, namely the one-way layout and the two-way row-column exchangeable design. Although in both cases there is a unique group under which the sampling distribution of the observations is exchangeable, the choice of monoid is not unique. Different choices of monoid can lead to drastically different, and in some cases quite misleading, inferences.

Some key words: Bootstrap; Exchangeability; Group invariance; Monoid composition; Product monoid; Symmetric function.

1. Introduction

Suppose that inference is required for a scalar parameter in the form of confidence intervals. A consistent, asymptotically normal, estimator T is given, but no consistent estimate of variance is available. Under what conditions can re-sampling techniques be used to approximate the distribution of T, and thereby generate a first-order correct confidence interval? Two circumstances in which problems of this nature can arise are as follows. (i) The distribution of the data has a simple structure, possibly with independent components, but the functional form of T is complicated. (ii) The functional form of T is simple, possibly even linear, but the dependence structure in the data is sufficiently complicated that it is not feasible to derive a consistent variance estimate. Most re-sampling investigations have focused on problems of the first type.
The present paper aims at rather specialized problems of the second type, in which the dependence among observations is specified by group invariance. That is to say, we consider structures in which Y has the same distribution as gY for each g in the given group G. We do this by first interpreting the standard re-sampling bootstrap as monoid composition in which the

[Footnote: This research was supported in part by National Science Foundation Grant DMS 97-05347.]

bootstrap distribution is uniform on the monoid. In this context, a monoid is any semigroup that includes G. The purpose, as always, is to generate pseudo-observations in such a way that the bootstrap distribution of a statistic is a faithful estimate of the sampling distribution of the same statistic (Efron and Tibshirani, 1993, chapter 2). Two specific cases are considered in detail. For the one-way layout, we consider two linear statistics and six monoids, all containing the relevant wreath product group. All six bootstrap variances are calculated analytically for both statistics. Unfortunately, it appears that the choice of monoid is critical, and what appears to be the most natural choice does not necessarily give the best variance estimate. For the two-way row-column exchangeable array, one statistic and two monoids are considered. Although there exists a satisfactory consistent variance estimate, neither bootstrap scheme produces a consistent estimate. In fact, it is shown that no re-sampling scheme of the type considered in this paper can yield a consistent estimate of var(Ȳ) in row-column exchangeable arrays. Finally, we conclude with some remarks concerning exchangeability in regression models.

2. Re-sampling

2.1 Monoid composition

Let Ω be the set of n observed subjects or sampling units. The observation vector y is a real-valued function on Ω in the vector space R^Ω. The value of y on unit ω is usually denoted by y(ω) = y_ω, and the list of these values is a vector in R^Ω. Let G be the symmetric group acting on Ω: each element of G is an invertible function from Ω to itself. The composition of y with φ ∈ G is a new function y∘φ from Ω to R, whose value on unit ω is given by

(y∘φ)(ω) = y_{φ(ω)}.

Evidently, y∘φ is a permutation of y. Further, if φ is chosen uniformly at random from the group, y∘φ is a random permutation. For given y, the distribution of the statistic T(y∘φ) is called the permutation distribution of T.
Evidently, the permutation distribution is discrete, with at most n! points of support. Let M be the set of all n^n transformations of Ω to itself. This set includes the group, but it also includes the non-invertible transformations in which several points of Ω are mapped to the same image. The composition y∘φ is again a function from Ω to R. For statistical purposes, y∘φ is a sample of the y-values taken with replacement. In particular, if φ is chosen uniformly at random from M, then y∘φ, considered as a function from M into R^Ω, is a random variable whose distribution is called the bootstrap distribution of y. In algebraic terminology, M is a monoid (semi-group with identity) acting on Ω. There are in fact many monoids acting on Ω: the symmetric group itself is an example of a monoid. The particular monoid described above, the one that occurs in bootstrap re-sampling, is called the full monoid on Ω.
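The correspondence between with-replacement sampling and composition with a random element of the full monoid can be sketched in a few lines of Python. This is a minimal illustration, not code from the paper; the function names are mine.

```python
import random

def random_map(n):
    """Uniform draw from the full monoid on {0, ..., n-1}: an arbitrary
    (not necessarily invertible) map of the index set to itself."""
    return [random.randrange(n) for _ in range(n)]

def compose(y, phi):
    """Monoid composition: (y o phi)(i) = y[phi(i)]."""
    return [y[i] for i in phi]

y = [2.0, 5.0, 1.0, 7.0]
y_star = compose(y, random_map(len(y)))  # an ordinary bootstrap sample of y
```

Choosing phi uniformly from the n^n maps is exactly the usual bootstrap draw of n values with replacement.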

2.2 Bootstrap exchangeability

A list of n independent and identically distributed random variables indexed by Ω is a vector in R^Ω whose sampling distribution is exchangeable, or invariant under permutation. In other words, for each g in the symmetric group, Y and Y∘g have the same sampling distribution. Under certain conditions, this property of the sampling distribution is transferred to permutation and bootstrap distributions. For fixed y, the bootstrap random variable is a function Y(φ) = y∘φ from the monoid M into R^Ω. The composition of Y with a group element g is given by

Y∘g = (y∘φ)∘g = y∘(φ∘g) = y∘φ′.

If, for each φ ∈ M, all elements of the set

{φ∘g : g ∈ G}

have equal probability, then the bootstrap distribution is exchangeable. In other words, the distribution need not be uniform on M, but it must be uniform on each of the group orbits in M. To see what the group orbits of M are, consider an example in which Ω is a set of four integers, and consider the functions

φ₁ = {1↦4, 2↦4, 3↦3, 4↦2},  φ₂ = {1↦1, 2↦2, 3↦3, 4↦3}.

The inverse images are

φ₁⁻¹ = {1↦∅, 2↦{4}, 3↦{3}, 4↦{1,2}},  φ₂⁻¹ = {1↦{1}, 2↦{2}, 3↦{3,4}, 4↦∅},

so φ₁⁻¹(Ω) is the partition 12|3|4, and φ₂⁻¹(Ω) is the partition 1|2|34. These set partitions have the same block sizes, corresponding to the number partition 4 = 2+1+1, so they belong to the same group orbit. The group itself constitutes one such orbit, so the permutation distribution is exchangeable. The number of orbits, or points in M/G, is equal to the number of partitions of the number n. The group itself corresponds to the orbit (1^n) in which each element of Ω has a distinct image. The orbit (2, 1^{n−2}) is that subset of M in which two points have the same image and all other points have distinct images. The bootstrap distribution is exchangeable if and only if the distribution on M is uniform on each of these p(n) orbits.
The uniform distribution on M satisfies the conditions for exchangeability, so the bootstrap distribution is exchangeable.
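The orbit computation in the example above can be checked mechanically: the orbit of a map φ is labelled by the multiset of its preimage block sizes, that is, by a partition of n. A short Python sketch (the function name is mine):

```python
from collections import Counter

def orbit_label(phi):
    """Group orbit of a map phi in M, identified by the partition of n
    formed by the sizes of the non-empty preimage blocks of phi."""
    sizes = Counter(phi.values()).values()
    return tuple(sorted(sizes, reverse=True))

phi1 = {1: 4, 2: 4, 3: 3, 4: 2}   # preimage partition 12|3|4
phi2 = {1: 1, 2: 2, 3: 3, 4: 3}   # preimage partition 1|2|34
same_orbit = orbit_label(phi1) == orbit_label(phi2)   # both have block sizes 2+1+1
```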

2.3 Symmetric functions and bootstrap cumulants

The bootstrap distribution is the uniform distribution on the monoid. Another way of saying the same thing is to express the bootstrap re-sampling scheme component-wise as

y*_i = φ_i^t y_t,

with implicit summation over repeated indices. Independently for each i, φ_i is a random variable whose distribution is uniform multinomial on Ω. The joint cumulants of degree r are

cum_r(φ_{i₁}^{t₁}, …, φ_{i_r}^{t_r}) = δ_{i₁…i_r} κ^{t₁,…,t_r},

where δ is the indicator function for equality of the indices, and the κs are the cumulants of the uniform multinomial. The multinomial moments are μ^{t₁…t_r} = δ^{t₁…t_r}/n, from which we obtain

κ^r = δ^r/n,
κ^{r,s} = δ^{rs}/n − δ^r δ^s/n²,
κ^{r,s,t} = δ^{rst}/n − δ^r δ^{st}[3]/n² + 2 δ^r δ^s δ^t/n³,

and so on, using the formulae for cumulants in terms of moments. Let k_r be the sample k-statistic of degree r, so k₁ is the sample mean and k₂ the sample variance. In general, k_r is the unique symmetric polynomial of degree r whose i.i.d. sampling expectation is κ_r, the rth cumulant of the sampling distribution. Denote by k*_r the random variable k_r(y*). A straightforward calculation shows that the bootstrap distribution satisfies the following identities:

E(k*₁ | y) = k₁;
n var(k*₁ | y) = E(k*₂ | y) = (n−1)k₂/n;
n² cum₃(k*₁ | y) = n cov(k*₁, k*₂ | y) = E(k*₃ | y) = (n−1)(n−2)k₃/n²;
n³ cum₄(k*₁ | y) = n² cum₃(k*₁, k*₁, k*₂ | y) = n cov(k*₁, k*₃ | y) = n var(k*₂ | y) = E(k*₄ | y)
  = (n−1){(n² − 6n + 6)k₄ − 6n k₂₂}/n³.

In the final expression, k₂₂ is the polykay whose i.i.d. expectation is κ₂² (McCullagh, 1987, chapter 4). More generally, for any linear statistic T = l^i y_i, we have

E(T* | y) = Σ l^i · E(k*₁ | y),
var(T* | y) = Σ (l^i)² · E(k*₂ | y),
cum₃(T* | y) = Σ (l^i)³ · E(k*₃ | y),
cum₄(T* | y) = Σ (l^i)⁴ · E(k*₄ | y),

and so on for all higher-order cumulants. As we shall see in section 5, these are special cases of identities satisfied by some, but not all, bootstrap re-sampling schemes.
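The first two identities can be verified by brute force for a tiny sample, since the bootstrap distribution is uniform over the n^n maps. A Python check (my own sketch, not from the paper):

```python
from itertools import product
from statistics import mean, variance

def exact_boot_var_mean(y):
    """Exact bootstrap variance of the sample mean, obtained by enumerating
    all n^n elements of the full monoid with equal weight."""
    n = len(y)
    means = [mean(y[i] for i in phi) for phi in product(range(n), repeat=n)]
    mu = mean(means)
    return mean((m - mu) ** 2 for m in means)

y = [1.0, 4.0, 6.0]
n = len(y)
lhs = n * exact_boot_var_mean(y)   # n var(k1* | y)
rhs = (n - 1) * variance(y) / n    # (n-1) k2 / n, with k2 the sample variance
```

Here lhs and rhs agree, confirming n var(k*₁ | y) = (n−1)k₂/n for this sample.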

2.4 Bootstrap

In the sections that follow, we consider random arrays whose sampling distribution is invariant under a specified group G. In this paper, a bootstrap distribution is defined as follows. We first choose a monoid M such that G ⊂ M. Second, we choose a probability distribution on the monoid, uniform on group orbits, and independent of y. Given y, the bootstrap distribution is the distribution of y∘φ, where φ has the specified distribution on M. In most cases, the distribution is uniform on the monoid, but this is not critical.

3. The one-way exchangeable array

3.1 Re-sampling schemes

Consider an array of b blocks, each of size n, having the property that the permuted array of random variables has the same joint sampling distribution as the original array. In this context, permutation refers to the wreath product group, in which blocks are permuted by an element φ from S_b, the symmetric group acting on the blocks, and observations in each block are permuted independently by composition with τ ∈ S_n. In other words, each permutation has components (φ; τ₁, …, τ_b), where φ permutes blocks and τ_i permutes observations within a block. The action on the array is given either by

σ : (i, j) ↦ (φ(i), τ_i(j))   (1)

or by

σ : (i, j) ↦ (φ(i), τ_{φ(i)}(j)),   (2)

depending on the order of operations. In other words, under (1) the (i, j) element of y∘σ is y_{φ(i), τ_i(j)}; under (2) we get y_{φ(i), τ_{φ(i)}(j)}, which is different. The sampling distribution of Y is exchangeable if, for each σ in the group, the distribution of Y∘σ is the same as the distribution of Y. So far as the definition of exchangeability is concerned, the actions (1) and (2) are effectively equivalent because, to each σ in the group, there corresponds a σ′ = (φ; τ_{φ(1)}, …, τ_{φ(b)}) such that y∘σ′ under (1) is equal to y∘σ under (2). Nevertheless, for semi-group operations, it is essential to distinguish between the two modes of action.
There are various ways in which re-sampling might operate in this context, but all such schemes are based on an extension of the group to a monoid. Six extensions are described and studied.

Boot-I: First permute the observations independently within the blocks. Then pick a random sample of the blocks with replacement.

Boot-II: First pick a random sample of the blocks with replacement. Then permute the observations independently within the blocks.

Boot-III: First permute the blocks. Then pick a random sample with replacement from each of the blocks.

Boot-IV: First select a random sample with replacement from each block. Then pick a random sample of the blocks.

Boot-V: First pick a random sample of the blocks with replacement. Then pick a random sample with replacement from each of the blocks.

Boot-VI: Disregard the block structure, select a random sample of size nb with replacement, and arrange in blocks of size n.

Each of these re-sampling schemes consists of two parts. The first part is a set of transformations from the labels {(i, j)} to itself. The second part is a distribution on this set of transformations. Each set of transformations is a monoid containing the wreath product group as a subset. Each bootstrap distribution is uniform on the associated monoid, and hence invariant under the group. For monoids II, III and V, the transformation is given by (1); for I and IV, the transformation is given by (2). In the case of transformation (1), duplicate blocks are permuted or sampled independently. Under (2), however, identical samples are obtained from duplicate blocks. The order of operations is immaterial in the wreath product of groups, but order does matter for semi-groups.

Composite re-sampling schemes can also be constructed, in which a randomization device is used to select a re-sampling scheme from the list above. A die is cast, generating a sequence of independent values. Depending on the outcome of the die, one of the sampling schemes listed above is chosen to generate a transformation φ. This is equivalent to using the full monoid M_VI with a non-uniform distribution on the monoid. The distribution is, however, uniform on each group orbit. The relations among the monoids are as follows:

G ⊂ M_I ⊂ M_II ⊂ M_V ⊂ M_VI;  G ⊂ M_III ⊂ M_IV ⊂ M_V ⊂ M_VI.

The number of elements in each monoid is as follows:

|G| = b!(n!)^b;   |M_I| = Σ_{1≤j≤b} S_{bj} j! (n!)^j;   |M_II| = b^b (n!)^b;
|M_III| = b! n^{nb};   |M_IV| = Σ_{1≤j≤b} S_{bj} j! n^{nj};   |M_V| = b^b n^{nb};
|M_VI| = (nb)^{nb};

where S_{bj} is Stirling's number of the second kind.

3.2 Bootstrap distributions

We consider here two scalar statistics, one invariant under the group, the other non-invariant. The sample mean is an invariant statistic that is linear in the observations.
For a non-invariant statistic, we suppose that m₁ + m₂ = n, and that in each block the first m₁ observations have treatment level 1 and the remainder have treatment level 2. Then, if Ȳ₁ is the average of the observations on treatment 1, and Ȳ₂ the average on treatment 2, the treatment difference Ȳ₁ − Ȳ₂ is a within-blocks contrast that is linear in the data. We distinguish between the sampling distributions of these statistics, which depend on unknown parameters, and the bootstrap distributions, which are computable but depend on the particular choice of monoid. To keep the calculations as simple as possible, we focus only on the first two moments. On the assumption that the sampling distribution is exchangeable and that observations in different blocks are independent, the mean and variance of the sample mean are

E(Ȳ) = μ,  and  var(Ȳ) = (n σ_b² + σ²)/(nb).
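Before comparing bootstrap variances, two of the six re-sampling schemes of section 3.1 can be sketched concretely as random transformations of a list of b blocks. This is a rough illustration under my own naming, not code from the paper:

```python
import random

def boot_II(y):
    """Boot-II: pick blocks with replacement, then permute independently
    within each chosen block (action (1): duplicate blocks are treated
    independently)."""
    b = len(y)
    blocks = [list(y[random.randrange(b)]) for _ in range(b)]
    for blk in blocks:
        random.shuffle(blk)
    return blocks

def boot_V(y):
    """Boot-V: pick blocks with replacement, then sample with replacement
    within each chosen block."""
    b = len(y)
    chosen = [y[random.randrange(b)] for _ in range(b)]
    return [[blk[random.randrange(len(blk))] for _ in blk] for blk in chosen]
```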

Similar calculations show that the mean and variance of Ȳ₁ − Ȳ₂ are

E(Ȳ₁ − Ȳ₂) = 0,  and  var(Ȳ₁ − Ȳ₂) = σ²(1/m₁ + 1/m₂)/b.

The variance components here are defined by

var(Y_ij) = σ_b² + σ²,  cov(Y_ij, Y_ij′) = σ_b²  for j ≠ j′.

Both variances can be estimated in very natural ways using the two invariant quadratic forms

S_b² = Σ_i y_{i.}²/n − y_{..}²/(nb),
S_e² = Σ_ij y_ij² − Σ_i y_{i.}²/n,

whose expectations are

E(S_b²) = (b−1)(n σ_b² + σ²),  E(S_e²) = b(n−1)σ².

It is conventional to estimate var(Ȳ) by S_b²/(nb(b−1)), and var(Ȳ₁ − Ȳ₂) by S_e²(1/m₁ + 1/m₂)/(b²(n−1)). Within the class of unbiased symmetric quadratic forms, these estimates are unique.

In all cases, the bootstrap mean of Ȳ* is equal to the sample mean, and the bootstrap mean of Ȳ*₁ − Ȳ*₂ is zero. The bootstrap variances B² are quadratic functions of the data given in Table 1, where s² = S_e²/(b(n−1)) is the within-blocks mean square.

Table 1. Bootstrap variances of two statistics.

Monoid     n²b² B²(Ȳ)               (b m₁m₂/n) B²(Ȳ₁ − Ȳ₂)
G          0                        s²
I          n S_b²                   (2 − b⁻¹) s²
II         n S_b²                   s²
III        S_e²                     s²(n−1)/n
IV         (2 − b⁻¹)S_e² + n S_b²   (2 − b⁻¹) s²(n−1)/n
V          S_e² + n S_b²            s²(n−1)/n
VI         S_e² + S_b²              (S_e² + S_b²)/(nb)
unbiased   nb S_b²/(b−1)            s²

For the invariant statistic, monoids I and II provide the best variance estimate, differing from the unbiased estimate by a factor of (b−1)/b. Monoids III and VI give misleadingly small variances, even asymptotically. In a certain limiting sense, if σ_b² ≠ 0, it can be argued that monoids IV and V give consistent variance estimates, but these are too large in finite samples, and certainly inferior to I and II. For the non-invariant statistic, the permutation variance and the monoid-II variance are arguably the only correct values. Monoids I, IV and VI produce variances that are misleadingly large.
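The entry for monoids I and II can be confirmed directly: under either scheme the bootstrap grand mean is an average of b block means drawn i.i.d. with replacement from the observed block means, so its bootstrap variance is the plug-in variance of the block means divided by b. A quick numerical check (my own sketch):

```python
from statistics import mean

def boot_I_II_var_mean(y):
    """Exact bootstrap variance of the grand mean under monoids I and II:
    block means are drawn i.i.d. with replacement from the observed block
    means, so var(Ybar*) is their plug-in variance divided by b."""
    b = len(y)
    bm = [mean(blk) for blk in y]
    g = mean(bm)
    return sum((v - g) ** 2 for v in bm) / b ** 2

y = [[1.0, 3.0], [2.0, 6.0], [4.0, 8.0]]
n, b = len(y[0]), len(y)
grand = mean(v for blk in y for v in blk)
Sb2 = n * sum((mean(blk) - grand) ** 2 for blk in y)
# n^2 b^2 times the bootstrap variance should equal n*Sb^2, as in Table 1:
lhs = n ** 2 * b ** 2 * boot_I_II_var_mean(y)
rhs = n * Sb2
```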

Fisher's (1935, section 21) analysis of Darwin's data, based on a randomization argument, is equivalent to re-sampling from the wreath product group. Although monoid II gives a bootstrap variance that is a reasonably good approximation to the sampling variance in both cases, it does not appear that there is a single scheme that is uniformly best for all statistics. Further, what appear at first sight to be natural extensions of the group to a monoid (IV and V) can yield quite different bootstrap distributions. In more complicated designs, where it is not feasible to compute a table of bootstrap variances and to compare these with the sampling variance of the statistic, it would not be easy to tell which re-sampling scheme, if any, is appropriate as a reference distribution.

4. Row-column exchangeable arrays

4.1 Introduction

A random m × n array Y with elements Y_ij is said to be row-column exchangeable if, for any permutations τ ∈ S_m and τ′ ∈ S_n, the distribution of the permuted array Y∘σ = {Y_{τ(i), τ′(j)}} is the same as the distribution of Y. We assume that the moments of Y exist, at least up to second order, and that inference is required for the mean value μ = E(Y_ij). Exchangeability implies that all components have the same marginal distribution, and hence the same mean. In addition, all second moments are determined by the four parameters

σ² = var(Y_ij),
σ_r² = cov(Y_ij, Y_{ij′}),  j ≠ j′,
σ_c² = cov(Y_ij, Y_{i′j}),  i ≠ i′,
γ = cov(Y_ij, Y_{i′j′}),  i ≠ i′, j ≠ j′.

Exchangeability does not imply γ = 0, but the applications that we have in mind concern dissociated arrays, in which components that are neither in the same row nor in the same column are independent. Henceforth, therefore, we assume that γ = 0.
In addition, we assume positive definiteness for all m, n, which implies that σ_r² ≥ 0, σ_c² ≥ 0, and

σ_e² = σ² − σ_r² − σ_c² ≥ 0.

4.2 Variances and symmetric quadratic forms

The only summary statistics of interest are those that are invariant under the group S_m × S_n acting on the rows and columns, the so-called symmetric functions. The obvious estimate of μ, the average of the components μ̂ = Ȳ.., is a symmetric function of degree one. Modulo scalar multiples, this is the unique symmetric function of degree one. Under mild conditions, if m and n are both large, μ̂ is approximately normally distributed. The sampling variance is

var(μ̂) = (n σ_r² + m σ_c² + σ_e²)/(mn).   (3)

An estimate of this combination is required in order to set approximate confidence limits for μ. The space of symmetric quadratic forms has dimension four. It is spanned by the familiar sums of squares, whose expectations under the model are as follows.

Statistic                                  Expectation
S_m² = Y_..²/(mn)                          mnμ² + σ_e² + n σ_r² + m σ_c²
S_r² = Σ_i Y_{i.}²/n − S_m²                (m−1)σ_e² + n(m−1)σ_r²
S_c² = Σ_j Y_{.j}²/m − S_m²                (n−1)σ_e² + m(n−1)σ_c²
S_e² = Σ_ij Y_ij² − S_c² − S_r² − S_m²     (m−1)(n−1)σ_e²

It follows that there is a unique symmetric quadratic form whose expectation is the desired combination (3). This quadratic form is a linear combination of mean squares,

(MS_r + MS_c − MS_e)/(mn),   (4)

where MS_r = S_r²/(m−1), MS_c = S_c²/(n−1) and MS_e = S_e²/((m−1)(n−1)). Unfortunately, this variance estimate is not guaranteed to be non-negative. The conventional estimate of variance

s² = {(MS_r − MS_e)⁺ + (MS_c − MS_e)⁺ + MS_e}/(mn)

is positive, but not quadratic. Nevertheless, if m and n are both large, the probability that s² differs from (4) is exponentially small. The interval μ̂ ± k_{α/2} s contains μ with probability 1 − α + o(1). This is a first-order approximate confidence interval for μ.

4.3 Re-sampling schemes

Let Ω_r be the set of m row labels, Ω_c the set of n column labels, and Ω_r × Ω_c the set of ordered pairs. The full monoids acting on these sets are denoted by M_r, M_c and M_rc respectively. Two specific re-sampling schemes are considered as follows.

Boot-I: Select a random sample of size mn with replacement from the observed table. In other words, choose φ uniformly at random from M_rc.

Boot-II: Select a random sample of the rows, φ_r ∈ M_r, and a random sample of the columns, φ_c ∈ M_c. The bootstrap table is the set of cells (φ_r, φ_c) applied to Ω_r × Ω_c. In other words, choose φ uniformly at random from the product monoid M_r × M_c and set Y* = y∘φ.

For given y, the bootstrap distribution of Y*, or of any derived statistic such as Ȳ*, obviously depends on the distribution of φ in M_rc.
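The two schemes can be realized concretely as follows (a rough Python sketch; the function names are mine):

```python
import random

def boot_I_full(y):
    """Boot-I: a uniform element of M_rc - every cell of the bootstrap
    table is drawn independently from the mn observed cells."""
    m, n = len(y), len(y[0])
    cells = [v for row in y for v in row]
    return [[random.choice(cells) for _ in range(n)] for _ in range(m)]

def boot_II_product(y):
    """Boot-II: a uniform element of the product monoid M_r x M_c -
    resample row labels and column labels independently with replacement."""
    m, n = len(y), len(y[0])
    rows = [random.randrange(m) for _ in range(m)]
    cols = [random.randrange(n) for _ in range(n)]
    return [[y[i][j] for j in cols] for i in rows]
```

Under boot_II_product, bootstrap cells sharing a resampled row label come from the same observed row, so part of the row-column dependence structure is preserved; boot_I_full destroys it entirely.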
In the first scheme above, the distribution is uniform on the (mn)^{mn} points of M_rc; in the second scheme, the distribution is uniform on the m^m n^n points of the subset M_r × M_c. In both schemes, the monoid includes the product group, so both bootstrap distributions are row-column exchangeable. In fact, M_rc includes the full symmetric group, so the first bootstrap distribution is completely exchangeable. This property seems undesirable for structured arrays.
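For later comparison with the bootstrap variances, the sums of squares of section 4.2 and the two estimates, (4) and its positive-part modification s², can be computed as follows (a sketch with my own function name; y is a list of m rows of length n):

```python
def variance_estimates(y):
    """Sums of squares for an m x n array, the unbiased estimate
    (MSr + MSc - MSe)/(mn) of var(mu-hat), and the positive-part
    estimate s^2."""
    m, n = len(y), len(y[0])
    total = sum(v for row in y for v in row)
    Sm = total ** 2 / (m * n)
    Sr = sum(sum(row) ** 2 for row in y) / n - Sm
    Sc = sum(sum(row[j] for row in y) ** 2 for j in range(n)) / m - Sm
    Se = sum(v ** 2 for row in y for v in row) - Sc - Sr - Sm
    MSr, MSc, MSe = Sr / (m - 1), Sc / (n - 1), Se / ((m - 1) * (n - 1))
    unbiased = (MSr + MSc - MSe) / (m * n)
    s2 = (max(MSr - MSe, 0.0) + max(MSc - MSe, 0.0) + MSe) / (m * n)
    return unbiased, s2
```

Since (x)⁺ ≥ x, the positive-part estimate s² can never fall below the unbiased combination (4), and it is always non-negative.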

The bootstrap mean of Ȳ* is equal to the sample mean in both cases. The bootstrap variances and their sampling expectations are given below; each entry is to be divided by mn.

Estimate   Variance (× 1/(mn))                            Sampling expectation (× 1/(mn))
Unbiased   S_r²/(m−1) + S_c²/(n−1) − S_e²/((m−1)(n−1))    n σ_r² + m σ_c² + σ_e²
Boot-I     (S_r² + S_c² + S_e²)/(mn)                      σ_r²(m−1)/m + σ_c²(n−1)/n + σ_e²(mn−1)/(mn)
Boot-II    S_r²/m + S_c²/n + S_e²/(mn)                    σ_r² n(m−1)/m + σ_c² m(n−1)/n + σ_e²(3mn−2m−2n+1)/(mn)

It can be seen that the Boot-I variance is misleadingly small unless σ_r² = σ_c² = 0. The Boot-II variance behaves asymptotically as (n σ_r² + m σ_c² + 3σ_e²)(1 + o_p(1))/(mn). In a certain asymptotic sense, the estimate is consistent except at the point σ_r² = σ_c² = 0, where the variance is too large by a factor of three. So, although the Boot-II variance comes close for most parameter values, neither variance estimate is consistent for var(μ̂). If the variance components are small, say σ_r² = O(n⁻¹), σ_c² = O(m⁻¹) and σ_e² = O(1), neither bootstrap estimator is consistent, but the unbiased estimator and s² are quite satisfactory.

4.4 Positive quadratic functions

All of the re-sampling schemes considered here are generated by monoid composition, Y* = y∘φ, in which the distribution of φ in M does not depend on y. Given φ, the transformation from y to Y* is linear. Since, by assumption, the distribution of φ in M is independent of y, each bootstrap moment or cumulant of order r is a homogeneous polynomial in y of degree r. In particular, the bootstrap mean of any linear statistic is a linear function of y, and the bootstrap variance of any linear statistic is a positive quadratic function of y. This property holds for all six re-sampling schemes described in section 3, as well as Boot-I and Boot-II above. In the case of the row-column exchangeable array, however, it was observed that there is a unique unbiased symmetric quadratic statistic (4) that estimates the variance of μ̂.
However, this estimator is not positive definite, so there is no bootstrap distribution for which this is the variance of Ȳ*. In addition, since there are only four invariant quadratic statistics, it is not difficult to show that, within the class of symmetric quadratic forms, every consistent estimator permits negative values. In other words, there does not exist a symmetric positive definite quadratic estimator that is consistent for (n σ_r² + m σ_c² + σ_e²)/(mn). In fact, we can dispense with the symmetry condition. If there exists an asymmetric positive definite quadratic estimator, it can be symmetrized by averaging over permutations, yielding a symmetric statistic that remains positive definite, quadratic, and also consistent. This is a contradiction, so we conclude that no consistent estimator exists within the class of positive quadratic functions. But every bootstrap variance of Ȳ* is a positive quadratic function of y. So we conclude that no re-sampling scheme can yield a consistent estimate of the variance of μ̂. It is not simply the case that the two re-sampling schemes exhibited above give inconsistent variance estimates; every re-sampling scheme in which the distribution of φ is independent of y has the same defect.

4.5 Hybrid re-sampling schemes

Another bootstrap scheme that has been proposed in the present context is as follows. First decompose the array $y$ into `row effects', `column effects' and residuals: $y_{ij} = \bar y_{..} + r_i + c_j + e_{ij}$, where the row and column sums of $e_{ij}$ are all zero. Then generate bootstrap observations by sampling from the rows, sampling from the columns, and sampling independently from the residuals. That is to say, $r^* = r\varphi_r$, $c^* = c\varphi_c$ and $e^* = e\varphi_e$, with $\varphi_r \in M_r$, $\varphi_c \in M_c$ and $\varphi_e \in M_{rc}$. The components in the re-constituted array are $y^*_{ij} = \bar y_{..} + r^*_i + c^*_j + e^*_{ij}$. Note that $r^*$, $c^*$ and $e^*$ do not satisfy the zero-sum constraint, so there is variation in the mean of the bootstrap samples. This hybrid scheme is not a bootstrap re-sampling scheme in the sense that the term is used in this paper, because $y^*$ is not obtained from $y$ by monoid composition. Nevertheless, given $\varphi_r$, $\varphi_c$ and $\varphi_e$, the transformation from $y$ to $y^*$ is linear. Further, the distributions are uniform on $M_r$, $M_c$ and $M_{rc}$. Consequently, the variance of $\bar Y^*$ is a positive symmetric quadratic form in $y$. By the argument given in the preceding section, the bootstrap variance of $\bar Y^*$ cannot be a consistent estimate of the sampling variance. In fact, the bootstrap variance given by this hybrid scheme is identical to the Boot-II variance using the product monoid.

4.6 Assumptions

Most of the re-sampling schemes considered in this paper have the property that the distribution of $\varphi$ is uniform on some monoid. Uniformity, however, is not critically important: generally speaking, the distributions are not uniform on the full monoid. What is important is that the distribution of $\varphi$ on $M$ should be independent of $y$. It is this property that guarantees that the bootstrap moments and cumulants of order $r$ are homogeneous polynomials in $y$ of degree $r$. If this assumption is violated, the bootstrap variance need not be a quadratic form in $y$.
So it is not impossible that a modified re-sampling scheme might be devised such that the bootstrap variance is consistent. To give an example of a re-sampling scheme in which the critical assumption is violated, suppose we first calculate $S_r^2$, $S_c^2$ and $S_e^2$, the row, column and error sums of squares. Given these values, let $Z_i$ be a sequence of independent Bernoulli variables with parameter $S_e^2/(S_e^2 + S_r^2 + S_c^2)$. If $Z_i = 0$, the $i$th bootstrap sample is drawn uniformly from the product monoid; otherwise it is drawn uniformly from the full monoid. The bootstrap distribution is a mixture of Boot-I and Boot-II. The variance in this case is a ratio of homogeneous polynomials of degree 4 and 2. It seems most unlikely that a randomized bootstrap scheme of this sort could yield a consistent variance estimate, but I have been unable to prove this.
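A minimal sketch of the two monoid samplers and this randomized mixture (pure Python; function names are illustrative):

```python
import random

def sums_of_squares(y):
    """Row, column and error sums of squares for an m x n array y."""
    m, n = len(y), len(y[0])
    ybar = sum(map(sum, y)) / (m * n)
    rbar = [sum(row) / n for row in y]
    cbar = [sum(y[i][j] for i in range(m)) / m for j in range(n)]
    S2r = n * sum((r - ybar) ** 2 for r in rbar)
    S2c = m * sum((c - ybar) ** 2 for c in cbar)
    S2e = sum((y[i][j] - rbar[i] - cbar[j] + ybar) ** 2
              for i in range(m) for j in range(n))
    return S2r, S2c, S2e

def full_monoid(y, rng):
    """Boot-I: resample all mn cells i.i.d. with replacement."""
    m, n = len(y), len(y[0])
    flat = [v for row in y for v in row]
    return [[rng.choice(flat) for _ in range(n)] for _ in range(m)]

def product_monoid(y, rng):
    """Boot-II: resample row labels and column labels independently."""
    m, n = len(y), len(y[0])
    rows = [rng.randrange(m) for _ in range(m)]
    cols = [rng.randrange(n) for _ in range(n)]
    return [[y[i][j] for j in cols] for i in rows]

def mixture_resample(y, rng):
    """Draw Z with parameter S_e^2/(S_e^2 + S_r^2 + S_c^2): product monoid
    if Z = 0, full monoid if Z = 1.  The law of phi now depends on y."""
    S2r, S2c, S2e = sums_of_squares(y)
    z = 1 if rng.random() < S2e / (S2e + S2r + S2c) else 0
    return full_monoid(y, rng) if z == 1 else product_monoid(y, rng)

y = [[1.0, 2.5, 3.0, 4.2], [2.1, 3.0, 4.4, 5.0], [0.0, 1.3, 2.2, 3.1]]
ystar = mixture_resample(y, random.Random(7))
```

Because the Bernoulli parameter is a ratio of quadratic forms in $y$, the resulting bootstrap variance is a ratio of homogeneous polynomials rather than a quadratic form, as noted above.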

5. Pivotal statistics

The two examples discussed above are intended to illustrate a common type of problem in which the target statistic has a natural estimator, $\hat\theta$, that is asymptotically normally distributed, but no consistent variance estimate is readily available. The purpose of re-sampling is then to obtain a variance estimate in order to set approximate confidence intervals for $\theta$. If this can be done successfully, the often difficult analytic exercise of computing a consistent variance estimate is avoided. When a consistent estimate of the variance, $s^2$, is available, the nature of the problem is drastically altered. First, the need for bootstrapping is greatly diminished because a first-order approximate confidence interval can be obtained directly from the normal approximation. Second, if re-sampling is to be used, it is more natural to use the bootstrap distribution of the approximately pivotal statistic $(\bar Y^* - \bar y)/s^*$. The purpose of the bootstrap is then to improve on the first-order normal approximation.

Each family of sampling distributions considered in this paper is characterized by two properties, group invariance and independence of certain components. In the one-way layout, for example, observations in distinct blocks are independent. Monoid composition automatically preserves group invariance, but it need not preserve the independence structure. In the single-sample problem, the permutation distribution is exchangeable but it does not have independent components. In the one-way layout, both group invariance and independence of blocks are preserved by re-sampling plans II, V and VI only. For the two-way exchangeable array, both re-sampling plans have the property that if $I, I'$ are disjoint sets of rows, and $J, J'$ disjoint sets of columns, then $\bar Y^*_{IJ}$ and $\bar Y^*_{I'J'}$ are independent. The hybrid scheme in section 4.5 has the same property. In each of these re-sampling schemes, the bootstrap distribution belongs to the family of sampling distributions under consideration.
Any identity that is satisfied by the family of sampling distributions is automatically satisfied by the bootstrap distribution. For the one-way layout, the family of sampling distributions has the property that $nb\,\mathrm{var}(\bar Y) = E(\mathrm{MS}_b)$. It follows immediately, as a special case, for bootstraps II, V and VI, that $nb\,\mathrm{var}(\bar Y^* \mid y) = E(\mathrm{MS}^*_b \mid y)$. By the same argument, in the row-column exchangeable array,

$$mn\,\mathrm{var}(\bar Y^* \mid y) = E(\mathrm{MS}^*_r + \mathrm{MS}^*_c - \mathrm{MS}^*_e \mid y)$$

for both monoid re-sampling schemes and the hybrid. As a consequence, for each of these re-sampling schemes, the studentized statistic $(\bar Y^* - \bar y)/s^*$ is standard normal to first order. Here $s^2$ is the consistent variance estimate or any asymptotically equivalent statistic. For the pivotal statistic, the bootstrap distributions are in first-order agreement with the sampling distribution of $(\bar Y - \theta)/s$. It is unclear whether any bootstrap distribution is correct to the next order.

6. Linear Regression

6.1 Simple linear regression

Let $\Omega$ be the set of units, and let the response vector $Y$ be a random variable in $V = R^\Omega$. Given a covariate vector $z \in R^\Omega$, the expected value of $Y$ satisfies the linear model $\mu = E(Y \mid z) = \alpha + \beta z$.

We draw a distinction between the theoretical residuals $\epsilon = Y - \mu$, which are assumed to be i.i.d., and the fitted residuals $\hat\epsilon = Y - PY$, which are not exchangeable. Re-sampling operates on the theoretical residuals. For any $\beta$, we define $\epsilon_\beta = Y - \beta z$: if $\beta$ is the true value, this vector is exchangeable. Consider the statistic $T = \langle \epsilon_\beta, z\rangle_{V/\mathbf 1}$, the adjusted sum of products $\sum (z_i - \bar z)\epsilon_i$, which is also equal to $(\hat\beta - \beta)\|z\|^2$, where $\|z\|^2 = \sum (z_i - \bar z)^2$. For re-sampling purposes, we define $\epsilon^* = \epsilon_\beta \varphi$, with $\varphi$ chosen uniformly from the group or from the monoid. The re-sampling distribution of $T$ is the distribution of $T^* = \langle \epsilon^*, z\rangle_{V/\mathbf 1}$, which has mean zero and permutation variance

$$\mathrm{var}(T^*) = \|\epsilon_\beta\|^2\,\|z\|^2/(n-1) = \|z\|^2\{(n-2)s^2 + (\hat\beta-\beta)^2\|z\|^2\}/(n-1),$$

where $s^2$ is the residual mean square after regression on $z$. If monoid re-sampling is used, the variance is reduced by the factor $(n-1)/n$. The preceding argument based on exchangeability differs from standard practice as described by Efron and Tibshirani (1993) or Davison and Hinkley (1997). Although $T = \langle \epsilon_\beta, z\rangle$, these authors consider the reference distribution $\langle \hat\epsilon\varphi, z\rangle$, treating the fitted residuals as exchangeable. As a partial fix for practical work, Davison and Hinkley recommend re-sampling from the modified fitted residuals, but these are not exchangeable either. It is often a reasonable approximation to suppose that the studentized statistic has a fixed distribution independent of $\beta$ and $z$, but dependent on $n$. In this case, a confidence interval for $\beta$ can be obtained as the set

$$\{\beta : |\hat\beta - \beta|\,\|z\|\,\sqrt{n-1} \le C\,\{(n-2)s^2 + (\hat\beta-\beta)^2\|z\|^2\}^{1/2}\}.$$

The usual $t$-ratio is $t = (\hat\beta - \beta)\|z\|/s$, so the preceding interval coincides with the usual normal-theory interval if

$$C = \frac{\sqrt{n-1}\;t_{n-2,\alpha}}{\sqrt{n-2 + t_{n-2,\alpha}^2}}.$$

The exact version of the preceding argument requires $C$ to be chosen from the empirical distribution of the standardized values

$$\frac{\langle \epsilon^*, z\rangle\,\sqrt{n-1}}{\|\epsilon^*\|\,\|z\|},$$

with $\varphi$ chosen uniformly from the group. This exercise must be repeated for a range of $\beta$ values in order to construct the required interval.
In other words, exact confidence sets are based on the permutation $t$-distribution.
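The permutation distribution of the standardized values can be simulated directly; the sketch below (illustrative names; the number of permutations is arbitrary) returns a Monte Carlo sample for a given trial value of $\beta$:

```python
import random

def standardized_values(y, z, beta, nperm=999, seed=0):
    """Permutation sample of <eps*, z> sqrt(n-1) / (||eps*|| ||z||), with
    eps = y - beta*z, phi uniform on the symmetric group, and all inner
    products and norms computed after centring (i.e. in V/1)."""
    rng = random.Random(seed)
    n = len(y)
    eps = [yi - beta * zi for yi, zi in zip(y, z)]
    zbar = sum(z) / n
    zc = [zi - zbar for zi in z]
    znorm = sum(v * v for v in zc) ** 0.5
    ebar = sum(eps) / n                               # invariant under phi
    enorm = sum((v - ebar) ** 2 for v in eps) ** 0.5  # likewise invariant
    vals = []
    for _ in range(nperm):
        e = eps[:]
        rng.shuffle(e)
        t = sum((ei - ebar) * zi for ei, zi in zip(e, zc))
        vals.append(t * (n - 1) ** 0.5 / (enorm * znorm))
    return vals

y = [1.0, 2.2, 2.9, 4.1, 5.0, 6.2]
z = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
vals = standardized_values(y, z, beta=1.0, nperm=200)
```

A value of $\beta$ belongs to the confidence set when its observed standardized value, computed with $\varphi$ equal to the identity, does not fall in the tails of the returned sample; scanning a grid of $\beta$ values yields the permutation-$t$ interval.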

6.2 Multiple regression

In the standard multiple linear regression model, $Y = X\beta + \epsilon$, the components of the observation vector $Y$ are not exchangeable, but the (theoretical) residual vector $\epsilon$ in $R^\Omega$ is taken to be i.i.d. and thus exchangeable. Exchangeability here refers to the group of permutations acting on the set of units. Let $\mathcal X$ be the $p$-dimensional subspace of $R^\Omega$ spanned by the columns of $X$. What is required is a notion of exchangeability modulo translations in $\mathcal X$. One possible definition is as follows. We say that the random variable $Y$ is exchangeable modulo $\mathcal X$ if there exists a non-random point $x \in \mathcal X$ such that the random variable $Y - x$ is exchangeable in $V = R^\Omega$. In the standard linear regression model, the observation vector is exchangeable modulo $\mathcal X$ because, with $x = X\beta$, $Y - x = \epsilon$ has i.i.d. components. As we will see, this definition is not especially useful for re-sampling purposes because the point $x$ is typically unknown.

Consider now an extension of the linear regression model, $E(Y) = X\beta + \gamma z$, in which $z$ is a given vector in $R^\Omega$, but not in $\mathcal X$. A confidence interval is required for $\gamma$ under the assumption that $Y - \gamma z$ is exchangeable modulo $\mathcal X$. For purposes of testing and estimation, we choose the conventional statistic, the quotient-space inner product,

$$T = \langle Y, z\rangle_{V/\mathcal X} = \langle \gamma z + \epsilon, z\rangle_{V/\mathcal X} = \hat\gamma\,\langle z, z\rangle_{V/\mathcal X},$$

in which all vectors are regarded as elements of the quotient space $R^\Omega$ modulo $\mathcal X$. In the more familiar matrix notation, $T = Y^TW(I-P)z$, where $P = X(X^TWX)^{-1}X^TW$ is the orthogonal projection on to $\mathcal X$. In the present context of exchangeable errors, the inner product matrix must be invariant. We assume that the off-diagonal elements are zero, so $W = I$. For re-sampling purposes, it is inappropriate to use $Y - \gamma z$ directly because its components are far from exchangeable. Nor can we use the quotient-space version $Y - \gamma z + \mathcal X$ because this is not a point in $R^\Omega$. The only remaining option is to use the projection $\hat\epsilon_\gamma = (I-P)(y - \gamma z)$ on to $\mathcal X^\perp$.
However, the residual vector thus defined does not satisfy the definition of exchangeability modulo $\mathcal X$. Nevertheless, we take $\hat\epsilon_\gamma$ as given, and proceed as if these residuals were exchangeable. That is to say, we construct re-sampled residuals $\epsilon^* = \hat\epsilon_\gamma\varphi$, where $\varphi$ is chosen uniformly at random either from the symmetric group or from the monoid on $\Omega$. Then, for each $\gamma$, we compare the observed value of $T$,

$$t_{\mathrm{obs}}(\gamma) = \hat\epsilon_\gamma^T(I-P)z = (\hat\gamma - \gamma)\,z^T(I-P)z,$$

with the bootstrap distribution of the values

$$T^*(\varphi) = (\hat\epsilon_\gamma\varphi)^T(I-P)z$$

for $\varphi$ in $G$ or in $M$. The set of values of $\gamma$ for which $t_{\mathrm{obs}}(\gamma)$ does not fall in the tails of the relevant distribution constitutes a confidence set for $\gamma$, although the coverage probability may not coincide with the nominal level.
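The quotient-space computations can be checked numerically. The sketch below (pure Python, with $W = I$; names and test data are illustrative) computes $\hat\gamma = z^T(I-P)y / z^T(I-P)z$ by projecting the columns of $X$ out of both $y$ and $z$; by the Frisch-Waugh argument this agrees with the coefficient of $z$ in the ordinary least-squares fit of $y$ on $(X, z)$:

```python
def solve(A, b):
    """Solve the square system Ax = b by Gaussian elimination with
    partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def residual(X, v):
    """(I - P)v, where P projects onto the column span of X."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xtv = [sum(X[i][a] * v[i] for i in range(n)) for a in range(p)]
    c = solve(XtX, Xtv)
    return [v[i] - sum(X[i][a] * c[a] for a in range(p)) for i in range(n)]

def gamma_hat(X, z, y):
    """Quotient-space estimate zT(I-P)y / zT(I-P)z."""
    rz, ry = residual(X, z), residual(X, y)
    return sum(a * b for a, b in zip(rz, ry)) / sum(a * a for a in rz)

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
z = [1.0, 0.0, 2.0, 1.0, 3.0]
y = [2.0, 1.0, 4.0, 3.0, 6.0]
g = gamma_hat(X, z, y)
```

As a check, `g` agrees with the last coefficient obtained by solving the full normal equations for the design $(X, z)$.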

On the assumption that the constant functions are in $\mathcal X$, i.e. that the model includes an intercept, and that the re-sampling distribution is uniform on the group, the bootstrap mean of $T^*$ is zero and the variance is

$$\mathrm{var}(T^* \mid \hat\epsilon_\gamma) = \|\hat\epsilon_\gamma\|^2\|z\|^2/(n-1) = \{(n-p-1)s^2 + (\hat\gamma-\gamma)^2\|z\|^2\}\,\|z\|^2/(n-1),$$

where $\|z\|^2 = z^T(I-P)z$, and $s^2$ is the residual mean square after regression on both $X$ and $z$. In the more familiar normal-theory framework, $T$ is normally distributed with mean zero and variance $\sigma^2\|z\|^2$ under the hypothesis. By comparison, the re-sampling variance is too small by the factor $(n-p)/(n-1)$. If we make the approximation that the re-sampling distribution is normal, the bootstrap interval for $\gamma$ becomes

$$\{\gamma : (n-1)(\hat\gamma-\gamma)^2\|z\|^2 \le \{(n-p-1)s^2 + (\hat\gamma-\gamma)^2\|z\|^2\}\,\chi^2_{1,\alpha}\}.$$

In terms of the more familiar $F$-ratio, $F = (\hat\gamma-\gamma)^2\|z\|^2/s^2$, on $(1, n-p-1)$ degrees of freedom, the bootstrap interval is

$$\{\gamma : (n-1)F/(n-p-1+F) \le \chi^2_{1,\alpha}\}.$$

If sampling is done uniformly on the monoid, i.e. with replacement, the set is

$$\{\gamma : nF/(n-p-1+F) \le \chi^2_{1,\alpha}\}.$$

Both intervals are too narrow, and could be improved by multiplicative adjustment of the bootstrap variance or by adjustment of the residuals as recommended by Davison and Hinkley (1997). But even with this correction, the coverage probability does not achieve the nominal level.

6.3 Rotational exchangeability

The preceding derivation is unsatisfactory because it was assumed that $(I-P)(Y - \gamma z)$ is exchangeable in $R^\Omega$. This assumption is clearly false because the residuals are known to lie in the subspace $\mathcal X^\perp$. Unless $\mathcal X$ is an invariant subspace, i.e. either $\mathcal X = 0$, $\mathbf 1$ or $\mathbf 1^\perp$, the residuals cannot be exchangeable. The difficulty arises from the fact that the re-sampled residuals have a component in $\mathcal X$, which does not contribute to the statistic $T$. One possible remedy is to replace $e^* = \hat e\varphi$ by

$$e^0 = Qe^*\,\|\hat e\|/\|Qe^*\|,$$

where $Q = I - P$. In other words, the re-sampled residuals are projected on to $\mathcal X^\perp$ and re-scaled to have the same norm as $\hat e$.
Although similar in spirit, this remedy differs from Davison and Hinkley's (1997, p. 259) recommendation of using modified residuals for the same purpose. A serious drawback with $e^0$ is that if there exists a $\varphi$ such that $e^* \in \mathcal X$, then $Qe^*$ is zero and $e^0$ is undefined. The existence of such a $\varphi$ is assured if monoid re-sampling is used. If re-sampling is restricted to the group, however, there does not ordinarily exist a $\varphi \in G$ such that $Qe^* = 0$. One exception, however, occurs if $\mathcal X$ is a factorial model for a $2^k$ factorial design in which p n k.

An alternative, but closely related, option is to consider the sub-group of $GL(V)$ that leaves the subspace $\mathcal X$ invariant. It is enough, in fact, to consider the group $G_{\mathcal X}$ of orthogonal transformations on $V$ that leave $\mathcal X$ invariant. We say that $Y$ is spherically symmetric modulo $\mathcal X$ if, for each $\varphi \in G_{\mathcal X}$, the distribution of $\varphi Y$ is the same as the distribution of $Y$. Invariance under $G_{\mathcal X}$ is a weaker assumption than spherical symmetry for the distribution of $\epsilon$ in $V$. For example, in a blocked design, spherical symmetry modulo block effects is usually more sensible than full spherical symmetry. Apart from the normal-theory model, full spherical symmetry is rarely an appealing assumption.

We proceed on the basis of $G_{\mathcal X}$-exchangeability modulo $\mathcal X$. Since $\mathcal X$ is invariant, we operate directly on the vector $\hat\epsilon_\gamma = Y - \gamma z$ by an element $\varphi$ in $G_{\mathcal X}$. By assumption, the rotated vector $\epsilon^* = \varphi\hat\epsilon_\gamma$ has the same distribution as $\hat\epsilon_\gamma$. The calculations proceed as above, except that $\varphi$ is chosen uniformly with respect to Haar measure on $G_{\mathcal X}$. This amounts to saying that $\epsilon^*$ is uniformly distributed on the surface of the sphere of radius $(\hat\epsilon_\gamma^T(I-P)\hat\epsilon_\gamma)^{1/2}$ in $V/\mathcal X$. In this case, the bootstrap distributional calculations coincide with normal-theory calculations for the conditional distribution of $T$ given the sufficient statistics $X^TY$ and $s_0^2$. So the one-sided bootstrap confidence interval for $\gamma$ is

$$(-\infty,\ \hat\gamma + t_{n-p-1,\alpha}\,s/\|z\|),$$

where $\|z\|$ is the quotient-space norm $(z^T(I-P)z)^{1/2}$. The preceding argument is closely related to Box (1976, section 3.10) and Dawid (1977), who note that rotational symmetry is sufficient to derive the traditional $t$- and $F$-statistics for linear models.
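In simulation terms, a draw of $\epsilon^*$ under this rotation scheme is a point chosen uniformly on the sphere in $\mathcal X^\perp$ with the stated radius. A sketch (pure Python, Gram-Schmidt basis; names are illustrative, and the columns of $X$ are assumed linearly independent):

```python
import random

def orthonormal_basis(X):
    """Gram-Schmidt orthonormal basis for the column span of X."""
    n, p = len(X), len(X[0])
    basis = []
    for a in range(p):
        v = [X[i][a] for i in range(n)]
        for q in basis:
            c = sum(vi * qi for vi, qi in zip(v, q))
            v = [vi - c * qi for vi, qi in zip(v, q)]
        nv = sum(vi * vi for vi in v) ** 0.5
        basis.append([vi / nv for vi in v])
    return basis

def rotate_residual(eps_hat, X, rng):
    """One draw of eps*: uniform on the sphere in the orthogonal complement
    of span(X), with radius ||(I-P) eps_hat||."""
    basis = orthonormal_basis(X)
    def proj_perp(v):
        # Remove the component of v lying in span(X).
        for q in basis:
            c = sum(vi * qi for vi, qi in zip(v, q))
            v = [vi - c * qi for vi, qi in zip(v, q)]
        return v
    target = proj_perp(eps_hat)
    radius = sum(v * v for v in target) ** 0.5
    # An isotropic Gaussian projected onto X-perp and rescaled is uniform
    # on the sphere of the given radius in X-perp.
    g = proj_perp([rng.gauss(0.0, 1.0) for _ in range(len(eps_hat))])
    norm_g = sum(v * v for v in g) ** 0.5
    return [radius * v / norm_g for v in g]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
eps_hat = [1.0, -2.0, 0.5, 3.0]
estar = rotate_residual(eps_hat, X, random.Random(3))
```

By construction, `estar` is orthogonal to every column of $X$ and has the same quotient-space norm as `eps_hat`, so the degenerate case $Qe^* = 0$ that troubles the projection-and-rescale remedy cannot arise here.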
Note that, since $G_{\mathcal X}$ is infinite, the distinction between sampling with and without replacement does not arise.

7. Conclusions

The main purpose of this paper is to test the limits of re-sampling as an inferential technique. Under what conditions is the bootstrap distribution of a statistic a faithful estimate of the sampling distribution? The question has been investigated by studying the distributions of linear statistics computed on exchangeable arrays. It is found that, for non-pivotal statistics, the bootstrap distribution is ordinarily not a consistent estimate of the sampling distribution. For the two-way exchangeable array, no re-sampling scheme exists such that the bootstrap variance of $\bar Y$ consistently estimates the sampling variance of this statistic. Outside of the i.i.d. case, it appears that re-sampling is not a reliable method for obtaining a consistent variance estimate.

References

Box, G.E.P. (1976) Science and statistics. J. Am. Statist. Assoc. 71, 791–799.
Davison, A. and Hinkley, D.V. (1997) Bootstrap Methods and their Application. Cambridge: Cambridge University Press.
Dawid, A.P. (1977) Spherical matrix distributions and a multivariate model. J. R. Statist. Soc. B 39, 254–261.
Efron, B. and Tibshirani, R. (1993) An Introduction to the Bootstrap. London: Chapman and Hall.
Fisher, R.A. (1935) Design of Experiments. Edinburgh: Oliver and Boyd.
McCullagh, P. (1987) Tensor Methods in Statistics. London: Chapman and Hall.