Foundations for Envelope Models and Methods


R. Dennis Cook and Xin Zhang

October 6, 2014

R. Dennis Cook is Professor, School of Statistics, University of Minnesota, Minneapolis, MN 55455 (E-mail: dennis@stat.umn.edu). Xin Zhang is Assistant Professor, Department of Statistics, Florida State University, Tallahassee, FL 32306 (E-mail: henry@stat.fsu.edu).

Abstract

Envelopes were recently proposed by Cook, Li and Chiaromonte (2010) as a method for reducing estimative and predictive variations in multivariate linear regression. We extend their formulation, proposing a general definition of an envelope and a general framework for adapting envelope methods to any estimation procedure. We apply the new envelope methods to weighted least squares, generalized linear models and Cox regression. Simulations and illustrative data analyses show the potential of envelope methods to significantly improve standard methods in linear discriminant analysis, logistic regression and Poisson regression.

Key Words: Generalized linear models; Grassmannians; Weighted least squares.

1 Introduction

The overarching goal of envelope models and methods is to increase efficiency in multivariate parameter estimation and prediction. The development has so far been restricted to the multivariate linear model,

$$Y_i = \alpha + \beta X_i + \varepsilon_i, \quad i = 1, \ldots, n, \qquad (1.1)$$

where $\varepsilon_i \in \mathbb{R}^r$ is a normal error vector that has mean 0 and constant covariance $\Sigma_{Y|X} > 0$ and is independent of $X$, $\alpha \in \mathbb{R}^r$, and $\beta \in \mathbb{R}^{r \times p}$ is the regression coefficient matrix in which we are primarily interested. Efficiency gains in the estimation of $\beta$ are achieved by reparameterizing this model in terms of special projections of the response vector $Y \in \mathbb{R}^r$ or the predictor vector $X \in \mathbb{R}^p$. In this article we propose extensions of envelope methodology beyond the linear model to quite general multivariate contexts.

Since envelopes represent nascent methodology that is likely unfamiliar to many, we outline in Section 1.1 response envelopes as developed by Cook, Li and Chiaromonte (2010). An example is given in Section 1.2 to illustrate their potential advantages in application. A review of the literature on envelopes is provided in Section 1.3, and the specific goals of this article are outlined in Section 1.4.

1.1 Response envelopes

Response envelopes for model (1.1) gain efficiency in the estimation of $\beta$ by incorporating the projection $P_{\mathcal{E}}Y$ onto the smallest subspace $\mathcal{E} \subseteq \mathbb{R}^r$ with the properties (1) the distribution of $Q_{\mathcal{E}}Y \mid X$ does not depend on the value of the non-stochastic predictor $X$, where $Q_{\mathcal{E}} = I_r - P_{\mathcal{E}}$, and (2) $P_{\mathcal{E}}Y$ is independent of $Q_{\mathcal{E}}Y$ given $X$. These conditions imply that the distribution of $Q_{\mathcal{E}}Y$ is not affected by $X$, either marginally or through an association with $P_{\mathcal{E}}Y$. Consequently, changes in the predictor affect the distribution of $Y$ only via $P_{\mathcal{E}}Y$. We refer to $P_{\mathcal{E}}Y$ informally as the material part of $Y$ and to $Q_{\mathcal{E}}Y$ as the immaterial part of $Y$.

The notion of an envelope arises in the formal construction of the response projection $P_{\mathcal{E}}$, as guided by the following two definitions, which are not restricted to the linear model context. Let $\mathbb{R}^{m \times n}$ be the set of all real $m \times n$ matrices and let $\mathbb{S}^{k \times k}$ be the set of all real symmetric $k \times k$ matrices. If $A \in \mathbb{R}^{m \times n}$, then $\mathrm{span}(A) \subseteq \mathbb{R}^m$ is the subspace spanned by the columns of $A$.

Definition 1. A subspace $\mathcal{R} \subseteq \mathbb{R}^p$ is said to be a reducing subspace of $M \in \mathbb{R}^{p \times p}$ if $\mathcal{R}$ decomposes $M$ as $M = P_{\mathcal{R}}MP_{\mathcal{R}} + Q_{\mathcal{R}}MQ_{\mathcal{R}}$. If $\mathcal{R}$ is a reducing subspace of $M$, we say that $\mathcal{R}$ reduces $M$.

This definition of a reducing subspace is equivalent to that given by Cook, Li and Chiaromonte (2010). It is common in the literature on invariant subspaces and functional analysis (Conway 1990), although the underlying notion of reduction differs from the usual understanding in statistics.
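Definition 1 is easy to check numerically for a given subspace. The following minimal NumPy sketch (our own illustration, not from the paper) verifies the decomposition $M = P_{\mathcal{R}}MP_{\mathcal{R}} + Q_{\mathcal{R}}MQ_{\mathcal{R}}$ for the span of a basis matrix:

```python
import numpy as np

def proj(G):
    """Orthogonal projection onto span(G)."""
    return G @ np.linalg.pinv(G)

def reduces(G, M, tol=1e-10):
    """Definition 1: span(G) reduces M iff M = P M P + Q M Q."""
    P = proj(G)
    Q = np.eye(M.shape[0]) - P
    return np.allclose(M, P @ M @ P + Q @ M @ Q, atol=tol)

# A span of eigenvectors of M always reduces M; a generic subspace does not.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A @ A.T                                      # symmetric positive definite
vecs = np.linalg.eigh(M)[1]
print(reduces(vecs[:, :2], M))                   # True
print(reduces(rng.standard_normal((4, 2)), M))   # False (almost surely)
```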

Definition 2. (Cook et al. 2010) Let $M \in \mathbb{S}^{p \times p}$ and let $\mathcal{B} \subseteq \mathrm{span}(M)$. Then the $M$-envelope of $\mathcal{B}$, denoted by $\mathcal{E}_M(\mathcal{B})$, is the intersection of all reducing subspaces of $M$ that contain $\mathcal{B}$.

Definition 2 is the formal definition of an envelope, which is central to our developments. We will often identify the subspace $\mathcal{B}$ as the span of a specified matrix $U$, $\mathcal{B} = \mathrm{span}(U)$. To avoid proliferation of notation in such cases, we will also use the matrix as the argument to $\mathcal{E}_M$: $\mathcal{E}_M(U) := \mathcal{E}_M(\mathrm{span}(U))$.

The response projection $P_{\mathcal{E}}$ is then defined formally as the projection onto $\mathcal{E}_{\Sigma_{Y|X}}(\beta)$, which by construction is the smallest reducing subspace of $\Sigma_{Y|X}$ that contains $\mathrm{span}(\beta)$ (or envelopes $\mathrm{span}(\beta)$, hence the name "envelope"). Model (1.1) can be parameterized in terms of $P_{\mathcal{E}}$ by using a basis. Let $u = \dim(\mathcal{E}_{\Sigma_{Y|X}}(\beta))$ and let $(\Gamma, \Gamma_0) \in \mathbb{R}^{r \times r}$ be an orthogonal matrix with $\Gamma \in \mathbb{R}^{r \times u}$ and $\mathrm{span}(\Gamma) = \mathcal{E}_{\Sigma_{Y|X}}(\beta)$. This leads directly to the envelope version of model (1.1),

$$Y_i = \alpha + \Gamma\eta X_i + \varepsilon_i, \quad \text{with } \Sigma_{Y|X} = \Gamma\Omega\Gamma^T + \Gamma_0\Omega_0\Gamma_0^T, \quad i = 1, \ldots, n, \qquad (1.2)$$

where $\beta = \Gamma\eta$, $\eta \in \mathbb{R}^{u \times p}$ gives the coordinates of $\beta$ relative to the basis $\Gamma$, and $\Omega$ and $\Omega_0$ are positive definite matrices. While $\eta$, $\Omega$ and $\Omega_0$ depend on the basis $\Gamma$ selected to represent $\mathcal{E}_{\Sigma_{Y|X}}(\beta)$, the parameters of interest $\beta$ and $\Sigma_{Y|X}$ depend only on $\mathcal{E}_{\Sigma_{Y|X}}(\beta)$ and not on the basis.

All parameters in (1.2) can be estimated by maximizing the likelihood from (1.2), with the envelope dimension $u$ determined by standard methods like likelihood ratio testing, information criteria, cross-validation or a hold-out sample, as described by Cook et al. (2010). In particular, the envelope estimator $\hat\beta_{\mathrm{env}}$ of $\beta$ is just the projection of the maximum likelihood estimator $\hat\beta$ onto the estimated envelope, $\hat\beta_{\mathrm{env}} = P_{\hat\Gamma}\hat\beta$, and $\sqrt{n}\{\mathrm{vec}(\hat\beta_{\mathrm{env}}) - \mathrm{vec}(\beta)\}$ is asymptotically normal with mean 0 and the covariance matrix given by Cook et al. (2010), where vec is the vectorization operator that stacks the columns of a matrix and $u$ is assumed to be known.
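The estimator $\hat\beta_{\mathrm{env}} = P_{\hat\Gamma}\hat\beta$ is a single matrix product once a semi-orthogonal basis of the estimated envelope is available. A minimal sketch with simulated stand-ins for $\hat\beta$ and $\hat\Gamma$ (how $\hat\Gamma$ is actually obtained is the subject of the estimation methods discussed later):

```python
import numpy as np

def envelope_estimator(beta_hat, Gamma_hat):
    """Project the standard estimator onto span(Gamma_hat).

    beta_hat : (r, p) coefficient estimate from the standard fit.
    Gamma_hat: (r, u) semi-orthogonal basis of the estimated envelope.
    """
    P = Gamma_hat @ Gamma_hat.T   # projection, since Gamma_hat' Gamma_hat = I_u
    return P @ beta_hat

# Stand-in inputs with r = 5 responses, p = 2 predictors, u = 1.
rng = np.random.default_rng(1)
beta_hat = rng.standard_normal((5, 2))
Gamma_hat, _ = np.linalg.qr(rng.standard_normal((5, 1)))
beta_env = envelope_estimator(beta_hat, Gamma_hat)
```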

1.2 Cattle data and response envelopes

The data for this illustration resulted from an experiment to compare two treatments for the control of an intestinal parasite in cattle: thirty animals were randomly assigned to each of the two treatments and their weights (in kilograms) were recorded at weeks 2, 4, ..., 18 and 19 after treatment (Kenward 1987). Because of the nature of a cow's digestive system, the treatments were not expected to have an immediate measurable effect on weight. The objectives of the study were to determine whether the treatments had differential effects on weight and, if so, about when those effects were first manifested.

We begin by considering the multivariate linear model (1.1), where $Y_i \in \mathbb{R}^{10}$ is the vector of cattle weights from week 2 to week 19, and the binary predictor $X_i$ is either 0 or 1, indicating the two treatments. Then $\alpha = E(Y \mid X = 0)$ is the mean profile for one treatment and $\beta = E(Y \mid X = 1) - E(Y \mid X = 0)$ is the mean profile difference between treatments. Fitting by maximum likelihood yields the profile plot of the fitted response vectors given in the top panel of Figure 1.1. The maximum absolute ratio of an element in $\hat\beta$ to its bootstrap standard error is only about 1.3, suggesting that there were no inter-treatment differences at any time. However, the likelihood ratio test statistic for the hypothesis $\beta = 0$ is about 27 with 10 degrees of freedom, which indicates that the two treatments did have different effects on cattle weights. Further analysis is necessary to gain a better understanding of the treatment effects. The literature on longitudinal data is rich with ways of modeling mean profiles as a function of time and structuring $\Sigma_{Y|X}$ to reflect dependence over time. Although not designed specifically for longitudinal data, an envelope analysis offers a viable alternative that does not require prior selection of models for the mean profile or the covariance structure.

Turning to a fit of the envelope model (1.2), likelihood ratio testing and the Bayesian information criterion (BIC) both give a strong indication that $u = 1$, so only a single linear combination of the elements of $Y$ is relevant to comparing treatments. Details for determining an envelope dimension can be found in Cook et al. (2010), and we illustrate dimension selection in the real data applications of Section 6. The corresponding fitted weight profiles are given in the bottom plot of Figure 1.1. The two profile plots in Figure 1.1 are quite similar, except that the envelope profiles are notably closer in the early weeks, supporting the notion that there is a lag between treatment application and effect. The absolute ratios of elements in $\hat\beta_{\mathrm{env}}$ to their bootstrap standard errors were all smaller than 2.5 before week 10 and all larger than 3.4 from week 10 on. This finding gives an answer to the original question: the two treatments have different effects on cattle weight growth starting no later than week 10.

[Figure 1.1 about here: two profile plots of fitted weight (kg) against week, with curves for X = 0 and X = 1; the top panel is labeled "Standard method" and the bottom panel "Envelope method".]

Figure 1.1: Cattle data: The top plot is obtained by a standard likelihood analysis and the bottom plot is obtained by the corresponding envelope analysis.

To illustrate the working mechanism of the envelope estimator $\hat\beta_{\mathrm{env}} = P_{\hat\Gamma}\hat\beta$ graphically, as shown in Figure 1.2, we use only the weights at weeks 12 and 14 as the bivariate response $Y = (Y_1, Y_2)^T$. The maximum likelihood estimator of the first element $\beta_1$ of $\beta$ under model (1.1) is just the difference between the treatment means at week 12. In terms of Figure 1.2, this corresponds to projecting each data point onto the horizontal axis and comparing the means of the resulting univariate samples, as represented schematically by the two relatively flat densities for $Y_1 \mid (X = 0)$ and $Y_1 \mid (X = 1)$ along the horizontal axis of Figure 1.2. These densities are close, indicating that it may take a very large sample size to detect the difference between the marginal means that is evident in the figure. In contrast, the envelope estimator for $\beta_1$ first projects the data onto the estimated envelope $\mathrm{span}(\hat\Gamma)$ and then onto the horizontal axis, as illustrated in Figure 1.2. The result is represented schematically by two peaked densities on the horizontal axis. These densities are relatively well separated, reflecting the efficiency gains provided by an envelope analysis. The standard estimator is $\hat\beta = (5.5, 4.8)^T$ with bootstrap standard errors $(4.2, 4.4)^T$, while the envelope estimator is $\hat\beta_{\mathrm{env}} = (5.4, 5.1)^T$ with bootstrap standard errors $(1.12, 1.07)^T$.

[Figure 1.2 about here: scatterplot of weight on week 14 against weight on week 12, with the estimated envelope drawn as a line and the directions $\hat\Gamma^T Y$ and $\hat\Gamma_0^T Y$ marked.]

Figure 1.2: Cattle data with the 30 animals receiving one treatment marked as o's and the 30 animals receiving the other marked as x's. Representative projection paths, labeled E for the envelope analysis and S for the standard analysis, are shown on the plot. Along the E path, only the material information $P_{\hat\Gamma}Y$ is used for the envelope estimator.

1.3 Overview of available envelope methods

Envelopes were introduced by Cook, Li and Chiaromonte (2010) for response reduction in the multivariate linear model with normal errors. They proved that the maximum likelihood estimator $\hat\beta_{\mathrm{env}}$ of $\beta$ from model (1.2) is $\hat\beta_{\mathrm{env}} = P_{\hat\Gamma}\hat\beta$ and that the asymptotic variance of $\mathrm{vec}(\hat\beta_{\mathrm{env}})$ is never larger than that of $\mathrm{vec}(\hat\beta)$. The reduction in variation achieved by the envelope estimator can be substantial when the immaterial variation $\Omega_0 = \mathrm{var}(\Gamma_0^T Y)$ is large relative to the material variation $\Omega = \mathrm{var}(\Gamma^T Y)$. For instance, using the Frobenius norm, we typically see substantial gains when $\|\Omega_0\|_F \gg \|\Omega\|_F$, as happens in Figure 1.2.

When some predictors are of special interest, Su and Cook (2011) proposed the partial envelope model, with the goal of improving the efficiency of the estimated coefficients corresponding to those particular predictors. They used the $\Sigma_{Y|X}$-envelope of $\mathrm{span}(\beta_1)$, $\mathcal{E}_{\Sigma_{Y|X}}(\beta_1)$, to develop a partial envelope estimator of $\beta_1$ in the partitioned multivariate linear regression

$$Y_i = \alpha + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i, \quad i = 1, \ldots, n, \qquad (1.3)$$

where $\beta_1 \in \mathbb{R}^r$ is the parameter vector of interest, $X = (X_1, X_2^T)^T$ and the remaining terms are as defined for model (1.1). The partial envelope estimator $\hat\beta_{1,\mathrm{env}} = P_{\hat\Gamma}\hat\beta_1$ has the potential to yield efficiency gains beyond those for the full envelope, particularly when $\mathcal{E}_{\Sigma_{Y|X}}(\beta) = \mathbb{R}^r$, so that the full envelope offers no gain.

Cook, Helland and Su (2013) used envelopes to study predictor reduction in the multivariate linear regression (1.1), where the predictors are stochastic with $\mathrm{var}(X) = \Sigma_X$. Their reasoning led them to parameterize the linear model in terms of $\mathcal{E}_{\Sigma_X}(\beta^T)$, which again achieved substantial efficiency gains in the estimation of $\beta$ and in prediction. They also showed that the SIMPLS algorithm (de Jong 1993; see also ter Braak and de Jong, 1998) for partial least squares provides a $\sqrt{n}$-consistent estimator of $\mathcal{E}_{\Sigma_X}(\beta^T)$, and demonstrated that the envelope estimator $\hat\beta_{\mathrm{env}} = \hat\beta P_{\hat\Gamma(S_X)}^T$ typically outperforms the SIMPLS estimator in practice, where we use $P_{\mathcal{A}(V)} := P_{A(V)} = A(A^TVA)^{-1}A^TV$ to denote the projection onto $\mathcal{A} = \mathrm{span}(A)$ in the $V$ inner product, and $Q_{\mathcal{A}(V)} = I - P_{\mathcal{A}(V)}$.

Given the dimension of the envelope, the envelope estimators in these three articles are all maximum likelihood estimators based on normality assumptions, but they are also $\sqrt{n}$-consistent with only finite fourth moments. It can be shown that the partially maximized log-likelihood functions $L_n(\Gamma) = -(n/2)J(\Gamma)$ for estimation of a basis $\Gamma$ of $\mathcal{E}_{\Sigma_{Y|X}}(\beta)$, $\mathcal{E}_{\Sigma_{Y|X}}(\beta_1)$ or $\mathcal{E}_{\Sigma_X}(\beta^T)$ all have the same form, with

$$J(\Gamma) = \log|\Gamma^T\widehat{M}\Gamma| + \log|\Gamma^T(\widehat{M} + \widehat{U})^{-1}\Gamma|, \qquad (1.4)$$

where the positive definite matrix $\widehat{M}$ and the positive semi-definite matrix $\widehat{U}$ depend on context. The estimated basis is $\hat\Gamma = \arg\min J(\Gamma)$, where the minimization is carried out over a set of semi-orthogonal matrices whose dimensions depend on the envelope being estimated. Estimates of the remaining parameters are then simple functions of $\hat\Gamma$. We represent sample covariance matrices as $S_{(\cdot)}$, defined with divisor $n$: $S_X = \sum_{i=1}^n(X_i - \bar X)(X_i - \bar X)^T/n$, $S_{XY} = \sum_{i=1}^n(X_i - \bar X)(Y_i - \bar Y)^T/n$, and $S_{Y|X}$ denotes the covariance matrix of the residuals from the linear fit of $Y$ on $X$. To determine the estimators of the response envelope $\mathcal{E}_{\Sigma_{Y|X}}(\beta)$, the predictor envelope $\mathcal{E}_{\Sigma_X}(\beta^T)$ and the partial envelope $\mathcal{E}_{\Sigma_{Y|X}}(\beta_1)$, we take $\{\widehat M, \widehat M + \widehat U\} = \{S_{Y|X}, S_Y\}$, $\{S_{X|Y}, S_X\}$ and $\{S_{Y|X}, S_{Y|X_2}\}$, respectively. For instance, a basis of the response envelope for the cattle data was estimated by maximizing $L_n(\Gamma)$ with $\{\widehat M, \widehat M + \widehat U\} = \{S_{Y|X}, S_Y\}$.
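Evaluating the objective (1.4) at a trial basis requires only two log-determinants. The sketch below (ours) does this for the response-envelope choice $\{\widehat M, \widehat M + \widehat U\} = \{S_{Y|X}, S_Y\}$ with simulated data; an actual estimator would still minimize $J$ over semi-orthogonal matrices, as discussed later.

```python
import numpy as np

def J(Gamma, M, MU):
    """Objective (1.4): log|G'MG| + log|G'(M+U)^{-1}G| for semi-orthogonal G."""
    d1 = np.linalg.slogdet(Gamma.T @ M @ Gamma)[1]
    d2 = np.linalg.slogdet(Gamma.T @ np.linalg.inv(MU) @ Gamma)[1]
    return d1 + d2

# Response-envelope inputs: M = S_{Y|X}, M + U = S_Y, covariances with divisor n.
rng = np.random.default_rng(2)
n, r = 100, 5
X = rng.standard_normal((n, 1))
Y = X @ rng.standard_normal((1, r)) + rng.standard_normal((n, r))
Yc, Xc = Y - Y.mean(0), X - X.mean(0)
B = np.linalg.lstsq(Xc, Yc, rcond=None)[0]   # OLS coefficients
R = Yc - Xc @ B                              # residuals
S_Y = Yc.T @ Yc / n
S_YgX = R.T @ R / n
Gamma = np.linalg.qr(rng.standard_normal((r, 2)))[0]   # trial basis, u = 2
print(J(Gamma, S_YgX, S_Y))
```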

Still in the context of multivariate linear regression, Schott (2013) used saddle point approximations to improve a likelihood ratio test for the envelope dimension. Su and Cook (2013) adapted envelopes for the estimation of multivariate means with heteroscedastic errors, and Su and Cook (2012) introduced a different type of envelope construction, called inner envelopes, that can produce efficiency gains when envelopes offer no gains. Cook and Zhang (2014b) introduced envelopes for simultaneously reducing the predictors and responses and showed synergistic effects of simultaneous envelopes in improving both estimation efficiency and prediction. Cook and Zhang (2014a) proposed a fast and stable 1D algorithm for envelope estimation.

1.4 Organization and notation

The previous studies of envelope models and methods are all limited to multivariate linear regression. While envelope constructions seem natural and intuitive in that setting, nothing is available to guide the construction of envelopes in other contexts like generalized linear models. In this article, we introduce the constructive principle that an asymptotically normal estimator $\hat\phi$ of a parameter vector $\phi$ may be improved by enveloping $\mathrm{span}(\phi)$ with respect to the asymptotic covariance of $\hat\phi$. This principle recovers past estimators and allows for extensions to many other contexts. While the past envelope applications are rooted in normal likelihood-based linear regression, the new constructive principle not only broadens the existing methods beyond linear regression but also allows generic moment-based estimation alternatives.

The rest of this article is organized as follows. In Section 2 we give a general constructive envelope definition. Based on this definition, we study envelope regression in Section 3 for generalized linear models and other applications. In Section 4, we extend envelope estimation beyond the scope of regression applications and lay general estimation foundations. We propose moment-based and objective-function-based estimation procedures in Section 4.1 and turn to envelopes in likelihood-based estimation in Section 4.2. Simulation results are given in Section 5, and illustrative analyses are given in Section 6. Although our focus in this article is on vector-valued parameters, we describe in a Supplement to this article how envelopes for matrix parameters can be constructed generally. Due to space limitations, we focus our new applications on generalized linear models. The Supplement also contains applications to weighted least squares and Cox regression, along with additional simulation results, proofs and other technical details.

The following additional notation and definitions will be used in our exposition. The Grassmannian consisting of the set of all $u$-dimensional subspaces of $\mathbb{R}^r$, $u \leq r$, is denoted by $\mathcal{G}_{u,r}$. If $\sqrt{n}(\hat\theta - \theta)$ converges to a normal random vector with mean 0 and covariance matrix $V$, we write its asymptotic covariance matrix as $\mathrm{avar}(\sqrt{n}\,\hat\theta) = V$. We will use the operators $\mathrm{vec}: \mathbb{R}^{a \times b} \to \mathbb{R}^{ab}$, which vectorizes an arbitrary matrix by stacking its columns, and $\mathrm{vech}: \mathbb{R}^{a \times a} \to \mathbb{R}^{a(a+1)/2}$, which vectorizes a symmetric matrix by stacking the unique elements of its columns. Let $A \otimes B$ denote the Kronecker product of two matrices $A$ and $B$. Envelope estimators will be written with the subscript "env", unless the estimator is uniquely associated with the envelope construction itself, in which case the subscript designation is unnecessary. We use $\hat\theta_\alpha$ to denote an estimator of a parameter $\theta$ given another parameter $\alpha$. For instance, $\hat\Sigma_{X,\Gamma}$ is an estimator of $\Sigma_X$ given $\Gamma$.

2 A constructive definition of envelopes

The following proposition summarizes some key algebraic properties of envelopes. For a matrix $M \in \mathbb{S}^{p \times p}$, let $\lambda_i$ and $P_i$, $i = 1, \ldots, q$, be its distinct eigenvalues and corresponding eigenprojections, so that $M = \sum_{i=1}^q \lambda_i P_i$. Define $f^\star: \mathbb{R}^{p \times p} \to \mathbb{R}^{p \times p}$ as $f^\star(M) = \sum_{i=1}^q f(\lambda_i)P_i$, where $f: \mathbb{R} \to \mathbb{R}$ is a function such that $f(x) = 0$ if and only if $x = 0$.

Proposition 1. (Cook et al. 2010)
1. If $M \in \mathbb{S}^{p \times p}$ has $q \leq p$ eigenspaces, then the $M$-envelope of $\mathcal{B} \subseteq \mathrm{span}(M)$ can be constructed as $\mathcal{E}_M(\mathcal{B}) = \sum_{i=1}^q P_i\mathcal{B}$;
2. With $f$ and $f^\star$ as previously defined, $\mathcal{E}_M(\mathcal{B}) = \mathcal{E}_M(f^\star(M)\mathcal{B})$;
3. If $f$ is strictly monotonic, then $\mathcal{E}_M(\mathcal{B}) = \mathcal{E}_{f^\star(M)}(\mathcal{B}) = \mathcal{E}_M(f^\star(M)\mathcal{B})$.

From the first part of this proposition, we see that the $M$-envelope of $\mathcal{B}$ is the sum of the projections of $\mathcal{B}$ onto the eigenspaces of $M$; the second part gives a variety of substitutes for $\mathcal{B}$ that do not change the envelope; and part 3 implies that we may replace $M$ by $f^\star(M)$ for a strictly monotonic $f$ without affecting the envelope.
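Part 1 of Proposition 1 yields a direct numerical construction of an envelope: project $\mathcal{B}$ onto each eigenspace of $M$ and collect the nonzero pieces. The sketch below is our own and assumes the eigenspaces of $M$ can be identified by grouping eigenvalues with a tolerance.

```python
import numpy as np

def envelope_basis(M, B, tol=1e-8):
    """Orthonormal basis of the M-envelope of span(B), via Proposition 1(1):
    E_M(B) is the sum of the projections of span(B) onto the eigenspaces of M."""
    vals, vecs = np.linalg.eigh(M)
    pieces, start = [], 0
    for j in range(1, len(vals) + 1):            # group equal eigenvalues
        if j == len(vals) or abs(vals[j] - vals[start]) > tol:
            V = vecs[:, start:j]                 # basis of one eigenspace
            PB = V @ (V.T @ B)                   # P_i B
            U, s, _ = np.linalg.svd(PB, full_matrices=False)
            pieces.append(U[:, s > tol])         # orthonormal basis of span(P_i B)
            start = j
    return np.hstack(pieces)

# Example: B lies in the span of two eigenvectors with distinct eigenvalues,
# so the envelope is 2-dimensional even though B is a single vector.
G = np.linalg.qr(np.arange(16.0).reshape(4, 4) + np.eye(4))[0]
M = G @ np.diag([4.0, 3.0, 2.0, 1.0]) @ G.T
B = G[:, [0]] + G[:, [1]]
print(envelope_basis(M, B).shape[1])             # 2
```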

Envelopes arose in the studies reviewed in Section 1.3 as natural consequences of postulating reductions in $Y$ or $X$. However, those studies provide no guidance on how to employ parallel reasoning in more complex settings like generalized linear models, or in settings without a clear regression structure. In terms of Definition 2, the previous studies offer no guidance on how to choose the matrix $M$ and the subspace $\mathcal{B}$ for use in general multivariate problems, particularly since there are many ways to represent the same envelope, as indicated in parts 2 and 3 of Proposition 1. We next propose a broad criterion to guide these selections.

Let $\hat\theta$ denote an estimator of a parameter vector $\theta \in \Theta \subseteq \mathbb{R}^m$ based on a sample of size $n$. Let $\theta_t$ denote the true value of $\theta$ and assume, as is often the case, that $\sqrt{n}(\hat\theta - \theta_t)$ converges in distribution to a normal random vector with mean 0 and covariance matrix $V(\theta_t) > 0$ as $n \to \infty$. To accommodate the presence of nuisance parameters, we decompose $\theta$ as $\theta = (\psi^T, \phi^T)^T$, where $\phi \in \mathbb{R}^p$, $p \leq m$, is the parameter vector of interest and $\psi \in \mathbb{R}^{m-p}$ is the nuisance parameter vector. The asymptotic covariance matrix of $\hat\phi$ is represented by $V_{\phi\phi}(\theta_t)$, the $p \times p$ lower right block of $V(\theta_t)$. Then we construct an envelope for improving $\hat\phi$ as follows.

Definition 3. The envelope for the parameter $\phi \in \mathbb{R}^p$ is defined as $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t) \subseteq \mathbb{R}^p$.

This definition of an envelope expands the previous approaches reviewed in Section 1.3 in a variety of ways. First, it links the envelope to a particular pre-specified method of estimation through the covariance matrix $V_{\phi\phi}(\theta_t)$, while normal-theory maximum likelihood is the only method of estimation allowed by the previous approaches. The goal of an envelope is to improve that pre-specified estimator, perhaps a maximum likelihood, least squares or robust estimator. Second, the matrix to be reduced here, $V_{\phi\phi}(\theta_t)$, is dictated by the method of estimation. Third, the matrix to be reduced can now depend on the parameter being estimated, in addition to perhaps other parameters.

Definition 3 reproduces the partial envelopes for $\beta_1$ reviewed in Section 1.3 and the envelopes for $\beta$ when it is a vector, that is, when $r = 1$ and $p > 1$ or when $r > 1$ and $p = 1$. It also reproduces the partially maximized log-likelihood function (1.4) by setting $M = V_{\phi\phi}(\theta_t)$ and $U = \phi_t\phi_t^T$. To apply Definition 3 to the partial envelope of $\beta_1$ based on model (1.3), note that the asymptotic covariance matrix of the maximum likelihood estimator of $\beta_1$ is $V_{\beta_1\beta_1} = (\Sigma_X^{-1})_{11}\Sigma_{Y|X}$, where $(\Sigma_X^{-1})_{11}$ is the $(1,1)$ element of the inverse of $\Sigma_X = \lim_{n\to\infty} n^{-1}\sum_{i=1}^n X_iX_i^T$. Consequently, by Proposition 1, $\mathcal{E}_{V_{\beta_1\beta_1}}(\beta_1) = \mathcal{E}_{\Sigma_{Y|X}}(\beta_1)$, and thus Definition 3 recovers the partial envelopes of Su and Cook (2011). To construct the partially maximized log-likelihood (1.4), we set $M = V_{\beta_1\beta_1}$ and $U = \beta_1\beta_1^T$. Then using sample versions gives

$$J(\Gamma) = \log|(S_X^{-1})_{11}\Gamma^T S_{Y|X}\Gamma| + \log|\Gamma^T\{(S_X^{-1})_{11}S_{Y|X} + \hat\beta_1\hat\beta_1^T\}^{-1}\Gamma| = \log|\Gamma^T S_{Y|X}\Gamma| + \log|\Gamma^T S_{Y|X_2}^{-1}\Gamma|,$$

dropping additive constants that do not depend on $\Gamma$; this is the partially maximized log-likelihood of Su and Cook (2011). It is important to note that, although $\mathcal{E}_{\Sigma_{Y|X}}(\beta_1) = \mathcal{E}_{V_{\beta_1\beta_1}}(\beta_1)$, Definition 3 requires that we use $V_{\beta_1\beta_1} = (\Sigma_X^{-1})_{11}\Sigma_{Y|X}$ and not $\Sigma_{Y|X}$ alone. The response envelope of Cook et al. (2010) can be seen as a special case of the partial envelope model in which $X_2$ is absent. This implies that Definition 3 can also reproduce the response envelope objective function, which is obtained by replacing $S_{Y|X_2}$ with $S_Y$ in the function $J(\Gamma)$ of (1.4).

As another illustration, consider $X$ reduction in model (1.1) with $r = 1$ and $p > 1$. To emphasize the scalar response, let $\sigma^2_{Y|X} = \mathrm{var}(\varepsilon)$, with sample residual variance $s^2_{Y|X}$. The ordinary least squares estimator of $\beta$ has asymptotic variance $V_{\beta\beta} = \sigma^2_{Y|X}\Sigma_X^{-1}$. Direct application of Definition 3 then leads to the $\sigma^2_{Y|X}\Sigma_X^{-1}$-envelope of $\mathrm{span}(\beta)$, $\mathcal{E}_{\sigma^2_{Y|X}\Sigma_X^{-1}}(\beta)$. However, it follows from Proposition 1 that this envelope is equal to $\mathcal{E}_{\Sigma_X}(\beta)$, which is the envelope used by Cook et al. (2013) when establishing connections with partial least squares. To construct the corresponding version of (1.4), let $M = V_{\beta\beta}$ and $U = \beta\beta^T$. Then substituting sample quantities gives

$$J(\Gamma) = \log|s^2_{Y|X}\Gamma^T S_X^{-1}\Gamma| + \log|\Gamma^T\{s^2_{Y|X}S_X^{-1} + \hat\beta\hat\beta^T\}^{-1}\Gamma| = \log|\Gamma^T S_X^{-1}\Gamma| + \log|\Gamma^T(S_X - S_{XY}S_{XY}^T/s^2_Y)^{-1}\Gamma|, \qquad (2.1)$$

again dropping additive constants; this is the partially maximized log-likelihood of Cook et al. (2013). While $\mathcal{E}_{\Sigma_X}(\beta) = \mathcal{E}_{\sigma^2_{Y|X}\Sigma_X^{-1}}(\beta)$, we must still use $M = V_{\beta\beta} = \sigma^2_{Y|X}\Sigma_X^{-1}$ in the construction of $J(\Gamma)$.

Definition 3 in combination with $J(\Gamma)$ can also be used to derive envelope estimators for new problems. For example, consider enveloping a multivariate mean $\mu$ in the model $Y = \mu + \varepsilon$, where $\varepsilon \sim N(0, \Sigma)$. We take $\phi = \mu$ and $\hat\mu = n^{-1}\sum_{i=1}^n Y_i$. Then $M = V_{\mu\mu} = \Sigma$, which is the asymptotic covariance matrix of $\hat\mu$, $U = \mu\mu^T$, and $M + U = E(YY^T)$. Substituting sample versions of $M$ and $U$ leads to the same objective function $J(\Gamma)$ as that obtained by deriving the envelope estimator from scratch.
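For the multivariate mean example, the sample versions are simply $\widehat M = S_Y$ and $\widehat M + \widehat U = n^{-1}\sum_i Y_iY_i^T$. A small self-contained illustration (our own construction) that evaluates the resulting $J(\Gamma)$ at two trial bases:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 200, 4
Y = rng.standard_normal((n, r)) + np.array([2.0, 0.0, 0.0, 0.0])

Yc = Y - Y.mean(0)
M_hat = Yc.T @ Yc / n            # sample covariance, divisor n
MU_hat = Y.T @ Y / n             # M + U = sample version of E(YY')

def J(Gamma):
    """Objective (1.4) for the mean envelope."""
    return (np.linalg.slogdet(Gamma.T @ M_hat @ Gamma)[1]
            + np.linalg.slogdet(Gamma.T @ np.linalg.inv(MU_hat) @ Gamma)[1])

e1 = np.eye(r)[:, [0]]           # aligned with the mean direction
e2 = np.eye(r)[:, [1]]           # orthogonal to it
print(J(e1), J(e2))              # J is smaller near the true envelope
```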

3 Envelope regression

In this section we study envelope estimators in a general regression setting. We first discuss the likelihood-based approach to envelope regression in Section 3.1 and then discuss specific applications, including generalized linear models (GLMs), in Sections 3.2-3.4.

3.1 Conditional and unconditional inference in regression

Let $Y \in \mathbb{R}^1$ and $X \in \mathbb{R}^p$ have a joint distribution with parameters $\theta := (\alpha^T, \beta^T, \psi^T)^T \in \mathbb{R}^{q+p+s}$, so the joint density or mass function can be written as $f(Y, X \mid \theta) = g(Y \mid \alpha, \beta^TX)\,h(X \mid \psi)$, and the observed data are $\{Y_i, X_i\}$, $i = 1, \ldots, n$. We take $\beta$ to be the parameter vector of interest and, prior to the introduction of envelopes, we restrict $\alpha$, $\beta$ and $\psi$ to a product space. Let $L_n(\theta) = \sum_{i=1}^n \log f(Y_i, X_i \mid \theta)$ be the full log-likelihood, let the log-likelihood conditional on $X$ be represented by $C_n(\alpha, \beta) = \sum_{i=1}^n \log g(Y_i \mid \alpha, \beta^TX_i)$, and let $M_n(\psi) = \sum_{i=1}^n \log h(X_i \mid \psi)$ be the marginal log-likelihood for $\psi$. Then we can decompose $L_n(\theta) = C_n(\alpha, \beta) + M_n(\psi)$. Since our primary interest lies in $\beta$ and $X$ is ancillary, estimators are typically obtained as

$$(\hat\alpha, \hat\beta) = \arg\max_{\alpha,\beta} C_n(\alpha, \beta). \qquad (3.1)$$

Our goal here is to improve the pre-specified estimator $\hat\beta$ by introducing the envelope $\mathcal{E}_{V_{\beta\beta}}(\beta_t)$, where $V_{\beta\beta} = V_{\beta\beta}(\theta_t) = \mathrm{avar}(\sqrt{n}\,\hat\beta)$, according to Definition 3. Let $(\Gamma, \Gamma_0) \in \mathbb{R}^{p \times p}$ denote an orthogonal matrix where $\Gamma \in \mathbb{R}^{p \times u}$ is a basis for $\mathcal{E}_{V_{\beta\beta}}(\beta_t)$. Since $\beta_t \in \mathcal{E}_{V_{\beta\beta}}(\beta_t)$, we can write $\beta_t = \Gamma\eta_t$ for some $\eta_t \in \mathbb{R}^u$. Because $V_{\beta\beta}(\theta_t)$ typically depends on the distribution of $X$ and $\mathcal{E}_{V_{\beta\beta}}(\beta_t)$ reduces $V_{\beta\beta}(\theta_t)$, the marginal $M_n$ will depend on $\Gamma$. The log-likelihood then becomes $L_n(\alpha, \eta, \psi_1, \Gamma) = C_n(\alpha, \eta, \Gamma) + M_n(\psi_1, \Gamma)$, where $\psi_1$ represents any parameters remaining after incorporating $\Gamma$. Since both $C_n$ and $M_n$ depend on $\Gamma$, the predictors are no longer ancillary after incorporating the envelope structure, and estimation must be carried out by maximizing $\{C_n(\alpha, \eta, \Gamma) + M_n(\psi_1, \Gamma)\}$.

The relationship between $X$ and $\mathcal{E}_{V_{\beta\beta}}(\beta_t)$ that is embodied in $M_n(\psi_1, \Gamma)$ could be complicated, depending on the distribution of $X$. However, as described in the following proposition, it simplifies considerably when $E(X \mid \Gamma^TX)$ is a linear function of $\Gamma^TX$. This is well known as the linearity condition in the sufficient dimension reduction literature, where $\Gamma$ denotes a basis for the central subspace (Cook 1996). Background on the linearity condition, which is widely regarded as restrictive but nonetheless rather mild, is available from Cook (1998), Li and Wang (2007) and many other articles on sufficient dimension reduction. For instance, if $X$ follows an elliptically contoured distribution, the linearity condition is guaranteed for any $\Gamma$ (Eaton 1986).

Proposition 2. Assume that $E(X \mid \Gamma^TX)$ is a linear function of $\Gamma^TX$. Then $\mathcal{E}_{V_{\beta\beta}}(\beta_t) = \mathcal{E}_{\Sigma_X}(\beta_t)$.

An implication of Proposition 2 is that, for some positive definite matrices $\Omega \in \mathbb{R}^{u \times u}$ and $\Omega_0 \in \mathbb{R}^{(p-u) \times (p-u)}$, $\Sigma_X = \Gamma\Omega\Gamma^T + \Gamma_0\Omega_0\Gamma_0^T$, and thus $M_n$ must depend on $\Gamma$ through the marginal covariance $\Sigma_X$. Consequently, we can write $M_n(\Sigma_X, \psi_2) = M_n(\Gamma, \Omega, \Omega_0, \psi_2)$, where $\psi_2$ represents any remaining parameters in the marginal function. If $X$ is normal with mean $\mu_X$ and variance $\Sigma_X$, then $\psi_2 = \mu_X$ and $M_n(\Gamma, \Omega, \Omega_0, \psi_2) = M_n(\Gamma, \Omega, \Omega_0, \mu_X)$ is the marginal normal log-likelihood. In this case, it is possible to maximize $M_n$ over all of its parameters except $\Gamma$, as stated in the following proposition.

Proposition 3. Assume that $X \in \mathbb{R}^p$ is multivariate normal $N(\mu_X, \Sigma_X)$ and that $\Gamma \in \mathbb{R}^{p \times u}$ is a semi-orthogonal basis matrix for $\mathcal{E}_{\Sigma_X}(\beta_t)$. Then $\hat\mu_{X,\Gamma} = \bar X$, $\hat\Sigma_{X,\Gamma} = P_\Gamma S_X P_\Gamma + Q_\Gamma S_X Q_\Gamma$ and, up to additive constants,

$$M_n(\Gamma) := \max_{\Omega,\Omega_0,\mu_X} M_n(\Gamma, \Omega, \Omega_0, \mu_X) \qquad (3.2)$$
$$= -\frac{n}{2}\left\{\log|\Gamma^T S_X\Gamma| + \log|\Gamma_0^T S_X\Gamma_0|\right\} = -\frac{n}{2}\left\{\log|\Gamma^T S_X\Gamma| + \log|\Gamma^T S_X^{-1}\Gamma| + \log|S_X|\right\}, \qquad (3.3)$$

where $(\Gamma, \Gamma_0) \in \mathbb{R}^{p \times p}$ is an orthogonal basis for $\mathbb{R}^p$. Moreover, the global maximum of $M_n(\Gamma)$ is attained at all subsets of $u$ eigenvectors of $S_X$.

This proposition indicates that if $X$ is marginally normal, the envelope estimators are

$$(\hat\alpha_{\mathrm{env}}, \hat\eta, \hat\Gamma) = \arg\max\{C_n(\alpha, \eta, \Gamma) + M_n(\Gamma)\}. \qquad (3.4)$$

In particular, the envelope estimator of $\beta$ is $\hat\beta_{\mathrm{env}} = \hat\Gamma\hat\eta$ and, from Proposition 3, $\hat\Sigma_{X,\mathrm{env}} = P_{\hat\Gamma}S_XP_{\hat\Gamma} + Q_{\hat\Gamma}S_XQ_{\hat\Gamma}$. It also follows from Proposition 3 that one role of $M_n$ is to pull $\mathrm{span}(\hat\Gamma)$ toward the reducing subspaces of $S_X$, although it will not necessarily coincide with any such subspace.
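The marginal term (3.3) is again a log-determinant computation, and Proposition 3's claim that the global maximum of $M_n(\Gamma)$ is attained at subsets of $u$ eigenvectors of $S_X$ is easy to verify numerically. A minimal sketch (ours; additive constants are dropped as in the display):

```python
import numpy as np

def M_n(Gamma, S_X, n):
    """Partially maximized marginal log-likelihood (3.3), up to constants."""
    ld1 = np.linalg.slogdet(Gamma.T @ S_X @ Gamma)[1]
    ld2 = np.linalg.slogdet(Gamma.T @ np.linalg.inv(S_X) @ Gamma)[1]
    return -0.5 * n * (ld1 + ld2 + np.linalg.slogdet(S_X)[1])

# Sanity check: eigenvectors of S_X maximize M_n (Proposition 3).
rng = np.random.default_rng(4)
n, p, u = 300, 6, 2
X = rng.standard_normal((n, p)) @ np.diag([3, 2, 1, 1, 1, 1])
Xc = X - X.mean(0)
S_X = Xc.T @ Xc / n
eig_basis = np.linalg.eigh(S_X)[1][:, -u:]
rand_basis = np.linalg.qr(rng.standard_normal((p, u)))[0]
print(M_n(eig_basis, S_X, n) >= M_n(rand_basis, S_X, n))   # True
```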

3.2 Generalized linear models with canonical link

In the generalized linear model setting (Agresti 2002), $Y$ belongs to an exponential family with probability mass or density function

$$f(Y_i \mid \vartheta_i, \varphi) = \exp\{[Y_i\vartheta_i - b(\vartheta_i)]/a(\varphi) + c(Y_i, \varphi)\}, \quad i = 1, \ldots, n,$$

where $\vartheta$ is the natural parameter and $\varphi > 0$ is the dispersion parameter. We consider the canonical link, under which $\vartheta(\alpha, \beta) = \alpha + \beta^TX$ is a monotonic differentiable function of $E(Y \mid X, \vartheta, \varphi)$. We also restrict discussion to one-parameter families, so the dispersion parameter $\varphi$ is not needed.

The conditional log-likelihood takes the form $\log f(Y \mid \vartheta) = Y\vartheta - b(\vartheta) + c(Y) := C(\vartheta)$, where $\vartheta = \alpha + \beta^TX$ is the canonical parameter. Then the Fisher information matrix for $(\alpha, \beta)$ evaluated at the true parameters is

$$F(\alpha_t, \beta_t) = E\left(-C''(\alpha_t + \beta_t^TX)\begin{pmatrix} 1 & X^T \\ X & XX^T \end{pmatrix}\right), \qquad (3.5)$$

where $C''(\alpha_t + \beta_t^TX)$ is the second derivative of $C(\vartheta)$ evaluated at $\alpha_t + \beta_t^TX$.

To construct the asymptotic covariance of $\hat\beta$ from the Fisher information matrix, and to introduce notation, we transform $(\alpha, \beta)$ into orthogonal parameters (Cox and Reid 1987). Specifically, we transform $(\alpha, \beta)$ to $(a, \beta)$ so that $a$ and $\beta$ have asymptotically independent maximum likelihood estimators. Define the weights $W(\vartheta) = C''(\vartheta)/E\{C''(\vartheta)\}$, so that $E(W) = 1$, and write $\vartheta = \alpha + \beta^TE(WX) + \beta^T\{X - E(WX)\} := a + \beta^T\{X - E(WX)\}$. Then the new parameterization $(a, \beta)$ has Fisher information matrix

$$F(a_t, \beta_t) = E(-C'')\begin{pmatrix} 1 & 0 \\ 0 & \Sigma_{X(W)} \end{pmatrix},$$

where $\Sigma_{X(W)} = E\{W[X - E(WX)][X - E(WX)]^T\}$. We now have orthogonal parameters and $\mathrm{avar}(\sqrt{n}\,\hat\beta) = V_{\beta\beta}(a_t, \beta_t) = \{E(-C'')\,\Sigma_{X(W)}\}^{-1}$, while $\mathcal{E}_{V_{\beta\beta}}(\beta_t)$ is the corresponding envelope. The parameterizations $(a, \beta)$ and $(\alpha, \beta)$ lead to equivalent implementations since we are interested only in $\beta$. Therefore, we use $(\alpha, \beta)$ in the estimation procedure that follows and use the orthogonal parameters $(a, \beta)$ to derive asymptotic properties in Section 3.3.

The conditional log-likelihood, which varies for different exponential family distributions of $Y \mid X$, can be written as $C_n(\alpha, \beta) = \sum_{i=1}^n C(\vartheta_i)$, where $\vartheta_i = \alpha + \beta^TX_i$; the different functions $C(\vartheta)$ are summarized in Table 1.

Table 1: Mean functions, conditional log-likelihoods and their derivatives for various exponential family distributions; $A(\vartheta) = 1 + \exp(\vartheta)$.

| | $\mu := E(Y \mid X, \alpha, \beta)$ | $C(\vartheta)$ | $C'(\vartheta)$ | $-C''(\vartheta)$ ($\propto W$) |
|---|---|---|---|---|
| Normal | $\vartheta$ | $Y\vartheta - \vartheta^2/2$ | $Y - \vartheta$ | $1$ |
| Poisson | $\exp(\vartheta)$ | $Y\vartheta - \exp(\vartheta)$ | $Y - \exp(\vartheta)$ | $\exp(\vartheta)$ |
| Logistic | $\exp(\vartheta)/A(\vartheta)$ | $Y\vartheta - \log A(\vartheta)$ | $Y - \exp(\vartheta)/A(\vartheta)$ | $\exp(\vartheta)/A^2(\vartheta)$ |
| Exponential | $-\vartheta^{-1} > 0$ | $Y\vartheta + \log(-\vartheta)$ | $Y + 1/\vartheta$ | $\vartheta^{-2}$ |

We next briefly review Fisher scoring, the standard iterative method for maximizing $C_n(\alpha, \beta)$, as background for the alternating envelope algorithm (Algorithm 1).
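For concreteness, the Table 1 quantities for the Poisson and logistic families can be coded directly; the sketch below (our own) also forms the empirical weights $\widehat W$, normalized to average 1, that appear in the update (3.6) below.

```python
import numpy as np

# Table 1 quantities: mu(theta) and -C''(theta) for two one-parameter families.
FAMILIES = {
    "poisson":  {"mu": np.exp,
                 "neg_C2": np.exp},
    "logistic": {"mu": lambda t: np.exp(t) / (1 + np.exp(t)),
                 "neg_C2": lambda t: np.exp(t) / (1 + np.exp(t)) ** 2},
}

def weights(theta, family):
    """W(theta) = C''(theta)/mean(C''(theta)); the sign cancels, so mean(W) = 1."""
    w = FAMILIES[family]["neg_C2"](theta)
    return w / w.mean()

def score(Y, theta, family):
    """C'(theta) = Y - mu(theta) for canonical links."""
    return Y - FAMILIES[family]["mu"](theta)

theta = np.array([-1.0, 0.0, 1.0])
Y = np.array([0.0, 1.0, 2.0])
print(weights(theta, "poisson"), score(Y, theta, "logistic"))
```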

At each iteration of the Fisher scoring method, the update step for $\hat\beta$ can be summarized in the form of a weighted least squares (WLS) estimator as follows:

$$\hat\beta \leftarrow S_{X(\widehat W)}^{-1}S_{XV(\widehat W)}, \qquad (3.6)$$

where $\widehat W = W(\hat\vartheta) = W(\hat\alpha + \hat\beta^TX)$ is the weight at the current iteration, $V = \hat\vartheta + \{Y - \mu(\hat\vartheta)\}/\widehat W$ is a pseudo-response variable at the current iteration, the weighted covariance $S_{X(\widehat W)}$ is the sample estimator of $\Sigma_{X(W)}$, and the sample weighted cross-covariance $S_{XV(\widehat W)}$ is defined similarly. Upon convergence of the Fisher scoring process, the estimator $\hat\beta = S_{X(\widehat W)}^{-1}S_{XV(\widehat W)}$ is a function of the estimators $\hat\alpha$, $\hat\vartheta$, $\widehat W$ and $V$ at the final iteration.

Assuming normal predictors, it follows from Section 3.1 that the full log-likelihood can be written as $L_n(\alpha, \Gamma, \eta) = C_n(\alpha, \beta) + M_n(\Gamma)$, where $\beta = \Gamma\eta$ is the coefficient vector of interest and $M_n(\Gamma)$ is given in Proposition 3. For fixed $\Gamma \in \mathbb{R}^{p \times u}$, the Fisher scoring method for fitting the GLM of $Y$ on $\Gamma^TX$ leads to $\hat\eta_\Gamma = (\Gamma^TS_{X(\widehat W)}\Gamma)^{-1}\Gamma^TS_{XV(\widehat W)}$. Therefore, the partially maximized log-likelihood for $\Gamma$ based on the Fisher scoring algorithm is

$$L_n(\Gamma) = C_n(\Gamma) + M_n(\Gamma) = C_n(\hat\alpha, \Gamma\hat\eta_\Gamma) - \frac{n}{2}\left\{\log|\Gamma^TS_X\Gamma| + \log|\Gamma^TS_X^{-1}\Gamma| + \log|S_X|\right\}, \qquad (3.7)$$

where the optimization over $\Gamma$ treats $\hat\eta_\Gamma$ as a function of $\Gamma$ instead of as fixed. Since we are able to compute the analytical form of the matrix derivative $\partial L_n(\Gamma)/\partial\Gamma$, which is summarized in Lemma 1 of Supplement Section C, the alternating update of $\Gamma$ based on $\hat\eta_\Gamma$ is more efficient than updating $\Gamma$ with fixed $\eta$. We summarize this alternating algorithm for fitting GLM envelope estimators as Algorithm 1, where the alternation between steps (2a) and (2b) typically converges in only a few rounds. Step (2a) is solved by using the sg_min Matlab package by Ross A. Lippert (http://web.mit.edu/~ripper/www/software/) with the analytical form of $\partial L_n(\Gamma)/\partial\Gamma$, and Fisher scoring is the default method in the Matlab function glmfit.

Algorithm 1: The alternating algorithm for the GLM envelope estimator.
1. Initialize $\hat\alpha$, $\hat\beta$, $\hat\vartheta$, $\widehat W$ and $V$ using the standard estimators. Initialize $\hat\Gamma$ using the 1D algorithm (Algorithm 2) discussed in Section 4.1.
2. Alternate the following steps until $\hat\beta_{\mathrm{env}}$ converges or the maximum number of iterations is reached.
   (2a) Update $\hat\Gamma$ by maximizing (3.7) over the Grassmann manifold $\mathcal{G}_{u,p}$, where $C_n(\Gamma)$ is calculated from $\hat\alpha$, $\hat\beta$, $\hat\vartheta$, $\widehat W$ and $V$ at the current step.
   (2b) Update $\hat\alpha$ and $\hat\eta$ by Fisher scoring, fitting the GLM of $Y$ on $\hat\Gamma^TX$. Then set $\hat\beta_{\mathrm{env}} = \hat\Gamma\hat\eta$ and simultaneously update $\hat\vartheta$, $\widehat W$ and $V$.
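Step (2b) of Algorithm 1 is an ordinary GLM fit, and the update (3.6) can be written in a few lines. A minimal sketch for Poisson regression with the canonical link (our own construction; weighted covariances use divisor $n$ as in the paper's convention, and a production version would add a convergence test):

```python
import numpy as np

def fisher_scoring_poisson(X, Y, iters=25):
    """Fisher scoring for Poisson regression with canonical link, written in
    the weighted-covariance form of update (3.6). Returns (alpha, beta)."""
    n, p = X.shape
    alpha, beta = np.log(Y.mean() + 0.1), np.zeros(p)
    for _ in range(iters):
        theta = alpha + X @ beta
        mu = np.exp(theta)                    # mu(theta); also -C'' for Poisson
        W = mu / mu.mean()                    # W = C''/E(C''), so mean(W) = 1
        V = theta + (Y - mu) / W              # pseudo-response
        xbar_w = (W[:, None] * X).mean(0)     # sample version of E(WX)
        Xc = X - xbar_w
        S_XW = (W[:, None] * Xc).T @ Xc / n   # S_{X(W)}
        S_XVW = Xc.T @ (W * V) / n            # S_{XV(W)}
        beta = np.linalg.solve(S_XW, S_XVW)
        alpha = np.mean(W * V) - beta @ xbar_w
    return alpha, beta

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 3))
Y = rng.poisson(np.exp(0.5 + X @ np.array([0.4, -0.3, 0.0]))).astype(float)
print(fisher_scoring_poisson(X, Y))
```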

3.3 Asymptotic properties with normal predictors

In this section we describe the asymptotic properties of envelope estimators in regression when $\alpha$ (that is, the orthogonal parameter $a$ in the GLM) and $\beta$ are orthogonal parameter vectors and $X$ is normally distributed. We also contrast the asymptotic behavior of the envelope estimator $\hat\beta_{\mathrm{env}}$ with that of the estimator $\hat\beta$ from $C_n(\alpha, \beta)$, and comment on other settings at the end of the section. The results in this section apply to any likelihood-based envelope regression, including the envelope estimators in GLMs.

The parameters involved in the coordinate representation of the envelope model are $\alpha$, $\eta$, $\Omega$, $\Omega_0$ and $\Gamma$. Since the parameters $\eta$, $\Omega$ and $\Omega_0$ depend on the basis $\Gamma$, and the objective function is invariant under orthogonal transformations of $\Gamma$, the estimators of these parameters are not unique. Hence, we consider only the asymptotic properties of the estimable functions $\alpha$, $\beta = \Gamma\eta$ and $\Sigma_X = \Gamma\Omega\Gamma^T + \Gamma_0\Omega_0\Gamma_0^T$, which are invariant under the choice of basis $\Gamma$ and thus have unique maximizers. Under the normality assumption $X \sim N(\mu_X, \Sigma_X)$, we neglect the mean vector $\mu_X$ since it is orthogonal to all of the other parameters. We define the following parameters $\xi$ and estimable functions $h$:

$$\xi = \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \\ \xi_4 \\ \xi_5 \end{pmatrix} := \begin{pmatrix} \alpha \\ \eta \\ \mathrm{vec}(\Gamma) \\ \mathrm{vech}(\Omega) \\ \mathrm{vech}(\Omega_0) \end{pmatrix}, \qquad h = \begin{pmatrix} h_1(\xi) \\ h_2(\xi) \\ h_3(\xi) \end{pmatrix} := \begin{pmatrix} \alpha \\ \beta \\ \mathrm{vech}(\Sigma_X) \end{pmatrix}.$$

Since the number of free parameters in $h$ is $q + p + p(p+1)/2$ and the number of free parameters in $\xi$ is $q + u + (p-u)u + u(u+1)/2 + (p-u)(p-u+1)/2 = q + u + p(p+1)/2$, the envelope model reduces the total number of parameters by $p - u$.

Proposition 4. Assume that for $i = 1, \ldots, n$ the predictors $X_i$ are independent copies of a normal random vector $X$ with mean $\mu_X$ and variance $\Sigma_X > 0$, and that the data $(Y_i, X_i)$ are independent copies of $(Y, X)$ with finite fourth moments. Assume also that $\alpha$ and $\beta$ are orthogonal parameter vectors. Then, as $n \to \infty$, $\sqrt{n}(\hat\beta_{\mathrm{env}} - \beta_t)$ converges to a normal vector with mean 0 and covariance matrix

$$\mathrm{avar}(\sqrt{n}\,\hat\beta_{\mathrm{env}}) = \mathrm{avar}(\sqrt{n}\,\hat\beta_\Gamma) + \mathrm{avar}(\sqrt{n}\,Q_\Gamma\hat\beta_\eta) = P_\Gamma V_{\beta\beta}P_\Gamma + (\eta^T \otimes \Gamma_0)M^{-1}(\eta \otimes \Gamma_0^T) \leq \mathrm{avar}(\sqrt{n}\,\hat\beta),$$

where $M = (\eta \otimes \Gamma_0^T)V_{\beta\beta}^{-1}(\eta^T \otimes \Gamma_0) + \Omega \otimes \Omega_0^{-1} + \Omega^{-1} \otimes \Omega_0 - 2I_{u(p-u)}$, $\Omega = \Gamma^T\Sigma_X\Gamma$ and $\Omega_0 = \Gamma_0^T\Sigma_X\Gamma_0$.

The conditional log-likelihood is reflected in $\mathrm{avar}(\sqrt{n}\,\hat\beta_{\mathrm{env}})$ primarily through the asymptotic variance $V_{\beta\beta}$, while $\Omega$ and $\Omega_0$ stem from the normal marginal likelihood of $X$. The span of $\Gamma$ reduces both $V_{\beta\beta}$ and $\Sigma_X$, so the envelope serves as a link between the conditional and marginal likelihoods in the asymptotic variance. The first addend, $\mathrm{avar}(\sqrt{n}\,\hat\beta_\Gamma)$, is the asymptotic covariance of the estimator given the envelope. The second addend, $\mathrm{avar}(\sqrt{n}\,Q_\Gamma\hat\beta_\eta)$, reflects the asymptotic cost of estimating the envelope, and this term is orthogonal to the first. Moreover, the envelope estimator is always at least as efficient as the usual estimator $\hat\beta$. A vector $\mathrm{SE}(\hat\beta_{\mathrm{env}})$ of asymptotic standard errors for the elements of $\hat\beta_{\mathrm{env}}$ can be constructed by first using the plug-in method to obtain an estimate $\widehat{\mathrm{avar}}(\sqrt{n}\,\hat\beta_{\mathrm{env}})$ of $\mathrm{avar}(\sqrt{n}\,\hat\beta_{\mathrm{env}})$ and then setting $\mathrm{SE}(\hat\beta_{\mathrm{env}}) = \mathrm{diag}^{1/2}\{\widehat{\mathrm{avar}}(\sqrt{n}\,\hat\beta_{\mathrm{env}})/n\}$.

An important special case of Proposition 4 is given in the following corollary.

Corollary 1. Under the same conditions as in Proposition 4, if we assume further that $\Sigma_X = \sigma^2I_p$, $\sigma^2 > 0$, then $\mathcal{E}_{V_{\beta\beta}}(\beta_t) = \mathrm{span}(\beta_t)$ and $\mathrm{avar}(\sqrt{n}\,\hat\beta_{\mathrm{env}}) = \mathrm{avar}(\sqrt{n}\,\hat\beta) = V_{\beta\beta}$.

Corollary 1 tells us that if we have normal predictors with isotropic covariance, then the envelope estimator is asymptotically equivalent to the standard estimator: enveloping offers no gain, but there is no loss either. This implies that there must be some degree of collinearity among the predictors before envelopes can offer gains. We illustrate this conclusion in the simulations of Section 5.2 for logistic and Poisson regressions, and in Supplement Section B for least squares estimators.

Experience has shown that (3.4) provides a useful envelope estimator when the predictors satisfy the linearity condition but are not multivariate normal. In this case there is a connection between the desired envelope $\mathcal{E}_{V_{\beta\beta}}(\beta_t)$ and the marginal distribution of $X$, as shown in Proposition 2, and $\hat\beta_{\mathrm{env}}$ is still a $\sqrt{n}$-consistent estimator of $\beta$.

If the linearity condition is substantially violated, we can still use the objective function $\{C_n(\alpha, \beta) + M_n(\Gamma)\}$ to estimate $\beta$ within $\mathcal{E}_{\Sigma_X}(\beta_t)$, but this envelope may no longer equal $\mathcal{E}_{V_{\beta\beta}}(\beta_t)$. Nevertheless, as demonstrated later in Corollary 2, this still has the potential to yield an estimator of $\beta$ with smaller asymptotic variance than $\hat\beta$, although further work is required to characterize the gains in this setting. Alternatively, the general estimation procedure proposed in Section 4.1 yields $\sqrt{n}$-consistent estimators without requiring the linearity condition.

3.4 Other regression applications

Based on our general framework, the envelope model can be adapted to other regression types. In Supplement Sections A.1 and A.2 we give derivation and implementation details for envelopes in WLS regression, and in Supplement Section A.3 we include details for envelopes in Cox regression. The theoretical results of this section also apply to the WLS and Cox regression models, as discussed in Supplement Section A. The envelope methods proposed here are based on sample estimators of asymptotic covariance matrices, which may be sensitive to outliers. However, the idea of envelopes can be extended to robust estimation procedures (see, for example, Yohai et al. 1991).

4 Envelope estimation beyond regression

Having seen that Definition 3 recovers past envelopes, and having applied it in the context of GLMs, we next turn to its general use in estimation. We propose three generic estimation procedures for envelope models and methods in general: one based on moments, one based on a generic objective function and one based on a likelihood. The estimation frameworks in this section include the previous sections as special cases, and the algorithms in this section can be applied in any regression context.

4.1 Moment-based and objective-function-based estimation

Definition 3, combined with the 1D algorithm (Cook and Zhang 2014a), which is restated in its population version as Algorithm 2, gives a generic moment-based estimator of the envelope $\mathcal{E}_M(U)$, requiring only the matrices $\widehat M$ and $\widehat U$.

Algorithm 2: The 1D algorithm.
1. Set initial values $g_0 = G_0 = 0$.
2. For $k = 0, \ldots, u-1$, the $(k+1)$th direction $g_{k+1} \in \mathbb{R}^q$ in the envelope $\mathcal{E}_M(U)$ is obtained as follows:
   (a) Let $G_k = (g_1, \ldots, g_k)$ if $k \geq 1$ and let $(G_k, G_{0k})$ be an orthogonal basis for $\mathbb{R}^q$.
   (b) Define the stepwise objective function
   $$J_k(w) = \log(w^TA_kw) + \log(w^TB_k^{-1}w), \qquad (4.1)$$
   where $A_k = G_{0k}^TMG_{0k}$, $B_k = G_{0k}^TMG_{0k} + G_{0k}^TUG_{0k}$ and $w \in \mathbb{R}^{q-k}$.
   (c) Solve $w_{k+1} = \arg\min_w J_k(w)$ subject to the length constraint $w^Tw = 1$.
   (d) Define $g_{k+1} = G_{0k}w_{k+1}$ to be the unit-length $(k+1)$th stepwise direction.

Setting $\widehat U = \hat\phi\hat\phi^T \in \mathbb{S}^{q \times q}$ and $\widehat M$ equal to a $\sqrt{n}$-consistent estimator of $V_{\phi\phi}(\theta_t) \in \mathbb{S}^{q \times q}$, it follows from Cook and Zhang (2014a, Proposition 6) that the 1D algorithm provides a $\sqrt{n}$-consistent estimator $P_{\widehat G_u} = \widehat G_u\widehat G_u^T$ of the projection onto $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t) \subseteq \mathbb{R}^p$, where $\widehat G_u$ is obtained from the sample version of the 1D algorithm and $u = \dim(\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t))$ is assumed to be known. The moment-based envelope estimator $\hat\phi_{\mathrm{env}} = P_{\widehat G_u}\hat\phi$ is then a $\sqrt{n}$-consistent estimator of $\phi_t$. For example, the moment-based estimator for GLMs is obtained by letting $\widehat U = \hat\beta\hat\beta^T$ and $\widehat M = \{n^{-1}\sum_{i=1}^n(-C''(\hat\vartheta_i))\,S_{X(\widehat W)}\}^{-1}$ in the 1D algorithm. Consequently, a distributional assumption such as normality is not a requirement for estimators based on the 1D algorithm to be useful, a conclusion that is supported by previous work and by our experience. However, the weight $W(\vartheta)$ depends on the parameter $\beta$, so iterative updates of the weights and estimators could be used to refine the final estimator, as we discuss at the end of Section 4.2.

The 1D algorithm can also play the role of finding starting values for other optimizations. For instance, in envelope GLMs, we use the 1D algorithm to get starting values for maximizing (3.7). Moreover, the likelihood-based objective function in (3.7) has the property that $L_n(\Gamma) = L_n(\Gamma O)$ for any orthogonal matrix $O \in \mathbb{R}^{u \times u}$. Maximization of $L_n$ is thus over the Grassmannian $\mathcal{G}_{u,p}$. Since $u(p-u)$ real numbers are required to specify an element of $\mathcal{G}_{u,p}$ uniquely, optimization of $L_n$ is essentially over $u(p-u)$ real dimensions, and can be time consuming and sensitive to starting values when this dimension is large. The 1D algorithm mitigates these computational issues: in our experience, with the 1D estimator as the starting value for step (2a) of Algorithm 1, the iterative Grassmann manifold optimization of $L_n(\Gamma)$ in (3.7) typically converges after only a few iterations.
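Algorithm 2 can be implemented directly. The sketch below (ours, using scipy.optimize) handles the unit-length constraint in step (c) by normalizing $w$ inside the objective, which makes $J_k$ scale-invariant, and uses a few random restarts in place of the more careful starting values of Cook and Zhang (2014a).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import null_space

def one_d_algorithm(M, U, u):
    """1D algorithm (Algorithm 2): build an orthonormal envelope basis
    one direction at a time by minimizing J_k over unit vectors."""
    q = M.shape[0]
    G = np.zeros((q, 0))
    for _ in range(u):
        G0 = null_space(G.T) if G.shape[1] else np.eye(q)  # complement basis
        A = G0.T @ M @ G0
        Binv = np.linalg.inv(G0.T @ (M + U) @ G0)
        def Jk(v):
            w = v / np.linalg.norm(v)          # enforce unit length
            return np.log(w @ A @ w) + np.log(w @ Binv @ w)
        # random restarts guard (imperfectly) against local minima
        starts = np.random.default_rng(0).standard_normal((5, A.shape[0]))
        best = min((minimize(Jk, v0) for v0 in starts), key=lambda r: r.fun)
        w = best.x / np.linalg.norm(best.x)
        G = np.hstack([G, G0 @ w[:, None]])    # g_{k+1} = G_{0k} w
    return G

# Example: U concentrated in a 2-dimensional reducing subspace of M.
M = np.diag([5.0, 4.0, 1.0, 0.5])
b = np.array([[1.0, 1.0, 0.0, 0.0]]).T
print(np.round(one_d_algorithm(M, b @ b.T, 2), 3))  # should span e1, e2
```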

To gain intuition about the potential advantages of the moment-based envelope estimator, assume that a basis $\Gamma$ of $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t)$ is known and write the envelope estimator as $P_\Gamma\hat\phi$. Then, since the envelope reduces $V_{\phi\phi}(\theta_t)$, we have

$$\mathrm{avar}(\sqrt{n}\,\hat\phi) = V_{\phi\phi}(\theta_t) = P_\Gamma V_{\phi\phi}(\theta_t)P_\Gamma + Q_\Gamma V_{\phi\phi}(\theta_t)Q_\Gamma,$$
$$\mathrm{avar}(\sqrt{n}\,P_\Gamma\hat\phi) = P_\Gamma V_{\phi\phi}(\theta_t)P_\Gamma \leq V_{\phi\phi}(\theta_t).$$

These relationships allow some straightforward intuition by writing $\sqrt{n}(\hat\phi - \phi_t) = \sqrt{n}(P_\Gamma\hat\phi - \phi_t) + \sqrt{n}\,Q_\Gamma\hat\phi$. The second term $\sqrt{n}\,Q_\Gamma\hat\phi$ is asymptotically normal with mean 0 and variance $Q_\Gamma V_{\phi\phi}(\theta_t)Q_\Gamma$, and is asymptotically independent of $\sqrt{n}(P_\Gamma\hat\phi - \phi_t)$. Consequently, we think of $Q_\Gamma\hat\phi$ as the immaterial information in $\hat\phi$. The envelope estimator then achieves efficiency gains by essentially eliminating the immaterial variation $Q_\Gamma V_{\phi\phi}(\theta_t)Q_\Gamma$, the greatest gains being achieved when $Q_\Gamma V_{\phi\phi}(\theta_t)Q_\Gamma$ is large relative to $P_\Gamma V_{\phi\phi}(\theta_t)P_\Gamma$. Of course, we will typically need to estimate $\Gamma$ in practice, which will mitigate the asymptotic advantages available when $\Gamma$ is known. But when the immaterial variation is large compared to the cost of estimating the envelope, substantial gains will still be achieved.

Although we do not have an expression for the asymptotic variance of the moment-based estimator $\hat\phi_{\mathrm{env}} = P_{\widehat G_u}\hat\phi$, the bootstrap can be used to estimate it. Depending on context, cross-validation, an information criterion like AIC or BIC, or sequential hypothesis testing can be used to aid selection of $u$, as in Cook et al. (2010, 2013) and Su and Cook (2011).

If $\hat\phi$ is obtained by minimizing an objective function, $\hat\phi = \arg\min_{\phi\in\mathbb{R}^q}F_n(\phi)$, then, as an alternative to the moment-based estimator $\hat\phi_{\mathrm{env}} = P_{\widehat G_u}\hat\phi$, an envelope estimator can be constructed as $\hat\phi_{\mathrm{env}} = \widehat G_u\hat\eta$, where $\hat\eta = \arg\min_{\eta\in\mathbb{R}^u}F_n(\widehat G_u\eta)$. We refer to this as the objective-function-based estimator. Sometimes the two approaches are identical, as for response envelopes (Cook et al. 2010) and partial envelopes (Su and Cook 2011). In Section 4.2 we specialize the objective function approach to maximum likelihood estimators.
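As noted above, no closed-form asymptotic variance is available for the moment-based estimator $\hat\phi_{\mathrm{env}} = P_{\widehat G_u}\hat\phi$, so the bootstrap is the natural tool. A generic case-resampling sketch (ours; the estimator argument would wrap the pre-specified fit, the 1D algorithm and the projection):

```python
import numpy as np

def bootstrap_se(data, estimator, B=200, seed=0):
    """Nonparametric bootstrap standard errors for an envelope estimator.

    data     : (n, d) array of cases (rows).
    estimator: function mapping a data array to a parameter vector.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    reps = np.stack([estimator(data[rng.integers(0, n, n)]) for _ in range(B)])
    return reps.std(axis=0, ddof=1)

# Toy usage: standard errors of a sample mean vector (stand-in for phi_env).
data = np.random.default_rng(6).standard_normal((100, 3))
print(bootstrap_se(data, lambda d: d.mean(axis=0)))
```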

4.2 Envelopes for maximum likelihood estimators

In this section we broaden the envelope estimators in regression to generic likelihood-based estimators. Consider estimating $\theta$ as $\hat\theta = \arg\max_{\theta\in\Theta}L_n(\theta)$, where $L_n(\theta)$ is a log-likelihood that is twice continuously differentiable in an open neighborhood of $\theta_t$. Then under standard conditions $\sqrt{n}(\hat\theta - \theta_t)$ is asymptotically normal with mean 0 and covariance matrix $V(\theta_t) = F^{-1}(\theta_t)$, where $F$ is the Fisher information matrix for $\theta$. The asymptotic covariance matrix of $\hat\phi$, $V_{\phi\phi}(\theta_t)$, is the lower right block of $V(\theta_t)$.

Recall that for envelope models we can write the log-likelihood as $L_n(\theta) = L_n(\psi, \phi) = L_n(\psi, \Gamma\eta)$. To obtain the MLE, we need to maximize $L_n(\psi, \Gamma\eta)$ over $\psi$, $\eta$ and $\Gamma \in \mathbb{R}^{p \times u}$, where $\Gamma$ is a semi-orthogonal basis for $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t)$ and $u = \dim(\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t))$. We first discuss the potential advantages of envelopes by considering the maximization of $L_n(\psi, \Gamma\eta)$ over $\psi$ and $\eta$ with known $\Gamma$. Since $\phi_t \in \mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t)$, we have $\phi_t = \Gamma\eta_t$ for some $\eta_t \in \mathbb{R}^u$. Consequently, for known $\Gamma$, the envelope estimators become

$$(\hat\psi_\Gamma, \hat\eta_\Gamma) = \arg\max_{\psi,\eta}L_n(\psi, \Gamma\eta), \qquad (4.2)$$
$$\hat\phi_\Gamma = \Gamma\hat\eta_\Gamma, \qquad (4.3)$$
$$\hat\theta_\Gamma = (\hat\psi_\Gamma^T, \hat\phi_\Gamma^T)^T. \qquad (4.4)$$

Any basis for $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t)$ will give the same solution $\hat\phi_\Gamma$: for an orthogonal matrix $O$, write $\phi = \Gamma OO^T\eta$; then $\hat\eta_{\Gamma O} = O^T\hat\eta_\Gamma$.

The estimator $\hat\phi_\Gamma$ given in (4.3) is in general different from the moment-based estimator $P_\Gamma\hat\phi$ discussed near the end of Section 4.1. However, as implied by the following proposition, the two estimators have the same asymptotic distribution, which provides some support for the simple projection estimator of Section 4.1.

Proposition 5. As $n \to \infty$, $\sqrt{n}(\hat\phi_\Gamma - \phi_t)$ converges to a normal random vector with mean 0 and asymptotic covariance $\mathrm{avar}(\sqrt{n}\,\hat\phi_\Gamma) = P_\Gamma V_{\phi\phi}(\theta_t)P_\Gamma \leq V_{\phi\phi}(\theta_t)$. The $\sqrt{n}$-consistent nuisance parameter estimator also satisfies $\mathrm{avar}(\sqrt{n}\,\hat\psi_\Gamma) \leq \mathrm{avar}(\sqrt{n}\,\hat\psi)$.

The following corollary to Proposition 5 characterizes the asymptotic variance when an arbitrary envelope is used.

Corollary 2. If $\Gamma$ is a basis for an arbitrary envelope $\mathcal{E}_M(\phi_t)$, where $M$ is a symmetric positive definite matrix, then

$$\mathrm{avar}(\sqrt{n}\,\hat\phi_\Gamma) = \Gamma\{\Gamma^TV_{\phi\phi}^{-1}(\theta_t)\Gamma\}^{-1}\Gamma^T \leq V_{\phi\phi}(\theta_t). \qquad (4.5)$$

The above expression shows that $\Gamma$ is intertwined with $V_{\phi\phi}(\theta_t)$ in the asymptotic covariance of the envelope estimator $\hat\phi_\Gamma$, while an envelope constructed by Definition 3 makes $\mathrm{avar}(\sqrt{n}\,\hat\phi_\Gamma)$ more interpretable because the material and immaterial variations are separable.

As formulated, this likelihood context allows us to construct the envelope estimator $\hat\phi_\Gamma$ when a basis $\Gamma$ for $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t)$ is known, but it does not by itself provide a basis estimator $\hat\Gamma$. However, a basis can be estimated by using the 1D algorithm (Algorithm 2 in Section 4.1), setting $M = V_{\phi\phi}(\theta)$ and $U = \phi\phi^T$ and plugging in the pre-specified estimator $\hat\theta = \arg\max_{\theta\in\Theta}L_n(\theta)$ to get $\sqrt{n}$-consistent estimators of $M$ and $U$. The envelope estimator is then $\hat\phi_{\mathrm{env}} = \hat\phi_{\hat\Gamma}$, where $\hat\Gamma = \widehat G_u$. We then have the following proposition.

Proposition 6. If the estimated basis $\hat\Gamma$ for $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t)$ is obtained by the 1D algorithm (Algorithm 2), then the envelope estimator $\hat\phi_{\mathrm{env}} = \hat\phi_{\hat\Gamma}$ is a $\sqrt{n}$-consistent estimator of $\phi$.

The envelope estimator $\hat\phi_{\mathrm{env}}$ depends on the pre-specified estimator $\hat\theta$ through $\widehat M = V_{\phi\phi}(\hat\theta)$ and $\widehat U = \hat\phi\hat\phi^T$. Although it is $\sqrt{n}$-consistent, we have found empirically that, when $M$ depends non-trivially on $\theta_t$, it can often be improved by iterating, so that the current estimate of $\theta$ is used to construct estimates of $M$ and $U$. The iteration can be implemented as Algorithm 3.

Algorithm 3: The iterative envelope algorithm for MLEs.
1. Initialize $\hat\theta_0 = \hat\theta$, and let $\widehat M_k$ and $\widehat U_k$ be the estimators of $M$ and $U$ based on the $k$th envelope estimator $\hat\theta_k$ of $\theta$, so that $\widehat M_0$ and $\widehat U_0$ are based only on the pre-specified estimator.
2. For $k = 0, 1, \ldots$, iterate as follows until a measure of the change between $\hat\phi_{k+1}$ and $\hat\phi_k$ is sufficiently small:
   (a) Using the 1D algorithm, construct an estimated basis $\hat\Gamma_k$ for $\mathcal{E}_{V_{\phi\phi}(\theta_t)}(\phi_t)$ using $\widehat M_k = V_{\phi\phi}(\hat\theta_k)$ and $\widehat U_k = \hat\phi_k\hat\phi_k^T$.
   (b) Set $\hat\theta_{k+1} = \hat\theta_{\hat\Gamma_k}$ from (4.4).

5 Simulations

In this section we present a few simulations to support and illustrate the foundations discussed in previous sections. In Section 5.1 we simulate two datasets to illustrate the workings of envelope estimation in logistic regression. In Sections 5.2 and 5.3, we simulated 100 datasets for each of three sample sizes: n = 50, 200 and 800. The true dimension of the envelope was always