Estimating the error distribution in nonparametric multiple regression with applications to model testing

Natalie Neumeyer & Ingrid Van Keilegom

Preprint No. 2008-01, July 2008

DEPARTMENT MATHEMATIK, ARBEITSBEREICH MATHEMATISCHE STATISTIK UND STOCHASTISCHE PROZESSE, BUNDESSTR. 55, D-20146 HAMBURG

Estimating the error distribution in nonparametric multiple regression with applications to model testing

Natalie Neumeyer and Ingrid Van Keilegom

July 7, 2008

Abstract

In this paper we consider the estimation of the error distribution in a heteroscedastic nonparametric regression model with multivariate covariates. As estimator we consider the empirical distribution function of residuals, which are obtained from multivariate local polynomial fits of the regression and variance functions, respectively. Weak convergence of the empirical residual process to a Gaussian process is proved. We also consider various applications for testing model assumptions in nonparametric multiple regression. The obtained model tests are able to detect local alternatives that converge to zero at $n^{-1/2}$-rate, independent of the covariate dimension. We consider in detail a test for additivity of the regression function.

Key words: Additive model, Goodness-of-fit, Hypothesis testing, Nonparametric regression, Residual distribution, Semiparametric regression.

Department of Mathematics, University of Hamburg, Bundesstrasse 55, 20146 Hamburg, Germany. E-mail: neumeyer@math.uni-hamburg.de

Institute of Statistics, Université catholique de Louvain, Voie du Roman Pays 20, 1348 Louvain-la-Neuve, Belgium. E-mail: ingrid.vankeilegom@uclouvain.be

1 Introduction

In mathematical statistics nonparametric regression models constitute very important methods of analyzing relations between observed random variables. In this paper we regard the often neglected case of multivariate covariates, which is of special importance in applications. To this end consider the random vector $(X, Y)$, where $X$ is $d$-dimensional and $Y$ is one-dimensional, and suppose the relation between $X$ and $Y$ is given by
$$Y = m(X) + \sigma(X)\varepsilon, \qquad (1.1)$$
where $m(\cdot) = E(Y \mid X = \cdot)$, $\sigma^2(\cdot) = \mathrm{Var}(Y \mid X = \cdot)$, and where it is assumed that $\varepsilon$ and $X$ are independent. We are interested in estimating the distribution of the error $\varepsilon$, and in applying this estimated error distribution to develop tests for model assumptions.

As an estimator for the error distribution function we consider the empirical distribution of residuals that are obtained from multivariate local polynomial fits of the regression and variance functions, respectively. We show weak convergence of the corresponding empirical residual process to a Gaussian process. Comparable results in a model with univariate covariates ($d = 1$) were developed by Akritas and Van Keilegom (2001). In the case of multivariate covariates we are only aware of the work by Müller, Schick and Wefelmeyer (2007) for a partially linear model. So far, estimating the error distribution in the nonparametric model has not been considered in the literature in the case of multivariate covariates. Moreover, the proofs as given by Akritas and Van Keilegom (2001) are not straightforwardly generalized to the case of multivariate covariates.

Akritas and Van Keilegom's (2001) results for the nonparametric regression model with univariate covariates have been successfully applied to develop tests for model assumptions. In this context Van Keilegom, González Manteiga and Sánchez Sellero (2008) consider goodness-of-fit tests for the regression function, and Dette, Neumeyer and Van Keilegom (2007) propose a goodness-of-fit test for the variance function. For comparison of several independent regression models, Pardo Fernández, Van Keilegom and González Manteiga (2007) and Pardo Fernández (2007) investigated tests for equality of regression functions and tests for equality of error distributions, respectively. Neumeyer, Dette and Nagel (2005) suggested a goodness-of-fit test for the error distribution. Thanks to the results on estimation of the error distribution developed in this paper, all of the above tests are also valid in the important case of multivariate covariates.

An important advantage of the proposed test statistics is that they are able to detect local alternatives that converge to zero at $n^{-1/2}$-rate, independent of the dimension of the covariate, whereas for the classical tests based on smoothing (with bandwidth parameter $h$) this rate is $n^{-1/2} h^{-d/4}$, and this will be substantially slower when $d$ is large.
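To fix ideas, the following is a minimal simulation sketch of model (1.1) with a two-dimensional covariate. The particular choices of $m$, $\sigma$ and the error law below are illustrative assumptions, not prescribed by the model; later sketches in this paper reuse the quantities defined here.

```python
# Sketch: simulate data from model (1.1), Y = m(X) + sigma(X) * eps, with
# d = 2, compact support R_X = [0,1]^2 and eps independent of X.
# m and sigma below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def m(x):      # regression function E(Y | X = x); additive in this example
    return np.sin(np.pi * x[..., 0]) + x[..., 1] ** 2

def sigma(x):  # conditional standard deviation, bounded away from zero
    return 0.5 + 0.25 * x[..., 0]

n, d = 500, 2
X = rng.uniform(0.0, 1.0, size=(n, d))
eps = rng.standard_normal(n)        # E(eps) = 0, Var(eps) = 1
Y = m(X) + sigma(X) * eps
```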

Moreover, the new theory opens various possibilities to test for parametric or semiparametric models in the context of multiple regression. We explain the general idea for these tests and consider testing for additivity of the regression function as a detailed example. Here we prove weak convergence of the residual empirical process on which the test statistics are based.

The paper is organized as follows. In Section 2 we define the estimator of the error distribution and give the asymptotic results under regularity conditions. In Section 3 we explain in general how the results can be applied for model testing. The case of testing for additivity of the regression function is considered in detail in Section 4. All proofs are given in an Appendix.

2 Estimation of the error distribution

As mentioned in the Introduction, the aim of this section is to propose and study an estimator of the distribution of the error $\varepsilon$ under model (1.1). Let $F_X(x) = P(X \le x)$ and $F_\varepsilon(y) = P(\varepsilon \le y)$, and let $f_X(x)$ and $f_\varepsilon(y)$ denote the probability density functions of $X$ and $\varepsilon$. Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be an i.i.d. sample taken from model (1.1), where we denote the components of $X_i$ by $(X_{i1}, \ldots, X_{id})$ $(i = 1, \ldots, n)$.

We start by estimating the regression function $m(x)$ and the variance function $\sigma^2(x)$ for an arbitrary point $x = (x_1, \ldots, x_d)$ in the support $R_X$ of $X$ in $\mathbb{R}^d$, which we suppose to be compact. We estimate $m(x)$ by a local polynomial estimator of degree $p$ [see Fan and Gijbels (1996) or Ruppert and Wand (1994), among others], i.e. $\hat m(x) = \hat\beta_0$, where $\hat\beta_0$ is the first component of the vector $\hat\beta$, which is the solution of the local minimization problem
$$\min_\beta \sum_{i=1}^n \{Y_i - P_i(\beta, x, p)\}^2 K_h(X_i - x), \qquad (2.1)$$
where $P_i(\beta, x, p)$ is a polynomial of order $p$ built up with all $0 \le k \le p$ products of factors of the form $X_{ij} - x_j$ $(j = 1, \ldots, d)$. The vector $\beta$ is the vector of length $\sum_{k=0}^p d^k$, consisting of all coefficients of this polynomial. Here, for $u = (u_1, \ldots, u_d) \in \mathbb{R}^d$, $K(u) = \prod_{j=1}^d k(u_j)$ is a $d$-dimensional product kernel, $k$ is a univariate kernel function, $h = (h_1, \ldots, h_d)$ is a $d$-dimensional bandwidth vector converging to zero when $n$ tends to infinity, and

$K_h(u) = \prod_{j=1}^d k(u_j/h_j)/h_j$. To estimate $\sigma^2(x)$, define
$$\hat\sigma^2(x) = \hat\gamma_0 - \hat\beta_0^2, \qquad (2.2)$$
where $\hat\gamma_0$ is defined in the same way as $\hat\beta_0$, but with $Y_i$ replaced by $Y_i^2$ in (2.1) $(i = 1, \ldots, n)$. See also Härdle and Tsybakov (1997), where this estimator is considered for a one-dimensional covariate. Next, let, for $i = 1, \ldots, n$,
$$\hat\varepsilon_i = \frac{Y_i - \hat m(X_i)}{\hat\sigma(X_i)},$$
and define the estimator of the error distribution $F_\varepsilon(y)$ by
$$\hat F_\varepsilon(y) = n^{-1} \sum_{i=1}^n I(\hat\varepsilon_i \le y). \qquad (2.3)$$

We will need the following conditions:

(C1) $k$ is a symmetric probability density function supported on $[-1, 1]$, $k$ is $d$ times continuously differentiable, and $k^{(j)}(\pm 1) = 0$ for $j = 0, \ldots, d-1$.

(C2) $h_j$ $(j = 1, \ldots, d)$ satisfies $h_j/h \to c_j$ for some $0 < c_j < \infty$ and some baseline bandwidth $h$ satisfying $nh^{2p+2} \to 0$ and $nh^{3d+\delta} \to \infty$ for some small $\delta > 0$.

(C3) All partial derivatives of $F_X$ up to order $2d+1$ exist on the interior of $R_X$, they are uniformly continuous, and $\inf_{x \in R_X} f_X(x) > 0$.

(C4) All partial derivatives of $m$ and $\sigma$ up to order $p+2$ exist on the interior of $R_X$, they are uniformly continuous, and $\inf_{x \in R_X} \sigma(x) > 0$.

(C5) $F_\varepsilon$ is twice continuously differentiable, $\sup_y |y^2 f_\varepsilon'(y)| < \infty$, and $E|Y|^6 < \infty$.

Note that (C2) implies that the order $p$ of the local polynomial fit should satisfy $p + 1 > (3d)/2$; e.g. when $d = 1$ we can take $p = 1$, when $d = 2$ a local cubic fit suffices, etc. Also, note that the condition $nh^{2p+2} \to 0$ in (C2) comes from the fact that the asymptotic bias, which is of order $O(h^{p+1})$, should be asymptotically negligible with respect to terms of order $O(n^{-1/2})$. However, the order of this bias can be refined, in a similar way as was done in e.g. Fan and Gijbels (1996) (p. 62) when $d = 1$, which leads to the following refined condition: $nh^{2p+4} \to 0$ when $p$ is even, and $nh^{2p+2} \to 0$ when $p$ is odd. Taking this refinement into account, we get that $p = 0$ suffices when $X$ is one-dimensional, and this coincides with what has been done in Akritas and Van Keilegom (2001).
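The estimators (2.1)-(2.3) can be sketched numerically as follows. This is an illustration only: the quartic kernel, the bandwidth and the degree $p = 3$ for $d = 2$ are ad-hoc choices, and no attempt is made to verify (C1)-(C2). The sketch reuses the simulated data from the Introduction.

```python
# Sketch: local polynomial fits of m and sigma^2 as in (2.1)-(2.2), and the
# residual empirical distribution function (2.3).
from itertools import combinations_with_replacement
import numpy as np

def kern(u):   # univariate quartic kernel supported on [-1, 1]
    return np.where(np.abs(u) <= 1.0, (15.0 / 16.0) * (1.0 - u ** 2) ** 2, 0.0)

def local_poly(X, Z, x0, h, p=3):
    """Degree-p local polynomial estimate of E(Z | X = x0): beta_0 of (2.1)."""
    U = (X - x0) / h
    w = np.prod(kern(U), axis=1) / np.prod(h)     # product kernel K_h(X_i - x0)
    cols = [np.ones(len(X))]
    for k in range(1, p + 1):                     # all monomials of degree <= p
        for idx in combinations_with_replacement(range(X.shape[1]), k):
            cols.append(np.prod((X - x0)[:, list(idx)], axis=1))
    D = np.column_stack(cols)
    sw = np.sqrt(w)                               # weighted least squares
    beta, *_ = np.linalg.lstsq(D * sw[:, None], Z * sw, rcond=None)
    return beta[0]

h = np.array([0.25, 0.25])                        # ad-hoc bandwidth vector
m_hat = np.array([local_poly(X, Y, x, h) for x in X])
g_hat = np.array([local_poly(X, Y ** 2, x, h) for x in X])
s_hat = np.sqrt(np.clip(g_hat - m_hat ** 2, 1e-8, None))  # sigma_hat via (2.2)
eps_hat = (Y - m_hat) / s_hat                     # residuals
F_hat = lambda y: np.mean(eps_hat <= y)           # estimator (2.3)
```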

We are now ready to state the two main results of this section.

Theorem 2.1 Assume (C1)-(C5). Then,
$$\hat F_\varepsilon(y) - F_\varepsilon(y) = n^{-1} \sum_{i=1}^n \big\{ I(\varepsilon_i \le y) - F_\varepsilon(y) + \varphi(\varepsilon_i, y) \big\} + R_n(y),$$
where $\sup_{-\infty < y < \infty} |R_n(y)| = o_P(n^{-1/2})$ and
$$\varphi(z, y) = f_\varepsilon(y) \Big\{ z + \frac{y}{2} [z^2 - 1] \Big\}.$$

Corollary 2.2 Assume (C1)-(C5). Then, the process $n^{1/2}(\hat F_\varepsilon(y) - F_\varepsilon(y))$ $(-\infty < y < \infty)$ converges weakly to a zero-mean Gaussian process $Z(y)$ with covariance function
$$\mathrm{Cov}(Z(y_1), Z(y_2)) = E\big[\{I(\varepsilon \le y_1) - F_\varepsilon(y_1) + \varphi(\varepsilon, y_1)\}\{I(\varepsilon \le y_2) - F_\varepsilon(y_2) + \varphi(\varepsilon, y_2)\}\big].$$

Note that the above results are considerably more difficult to obtain than the corresponding results for $d = 1$ obtained by Akritas and Van Keilegom (2001) by using local constant estimators. A direct extension of the results in the latter paper to dimensions $d$ larger than one is in fact not possible, because that would lead to two contradictory conditions on the bandwidth, namely on the one hand $nh^{3d+\delta} \to \infty$, and on the other hand $nh^4 \to 0$ (where the latter condition comes from the bias term). It was therefore necessary to consider other estimators of $m$ and $\sigma$ that improve the rate of convergence of the bias term. We chose to use local polynomial estimators, because of their nice bias properties and also because of their excellent behavior in practice, which has been widely demonstrated in the literature (see e.g. Fan and Gijbels (1996)). Also note that, without any exception, all papers in the literature related to estimation or testing problems involving the nonparametric estimation of the error distribution are developed for $d = 1$.
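The covariance function in Corollary 2.2 is an expectation with respect to the error distribution only, so for any given $F_\varepsilon$ it can be evaluated by Monte Carlo. The following sketch does this for standard normal errors; the Gaussian choice is an assumed example, not a restriction imposed by the theory.

```python
# Sketch: Monte Carlo evaluation of Cov(Z(y1), Z(y2)) from Corollary 2.2
# for standard normal errors.
import numpy as np
from scipy.stats import norm

def influence(e, y):
    # I(eps <= y) - F(y) + phi(eps, y), with phi from Theorem 2.1
    phi = norm.pdf(y) * (e + 0.5 * y * (e ** 2 - 1.0))
    return (e <= y).astype(float) - norm.cdf(y) + phi

e = np.random.default_rng(1).standard_normal(200_000)
y1, y2 = -0.5, 1.0
cov = np.mean(influence(e, y1) * influence(e, y2))
print(f"Cov(Z({y1}), Z({y2})) approx {cov:.4f}")
```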

3 Model tests

Tests for various hypotheses can be based on the suggested estimated error distribution. Due to the curse of dimensionality in nonparametric multiple regression, investigators often prefer parametric models
$$m \in \{m_\vartheta \mid \vartheta \in \Theta\}, \qquad (3.1)$$
or semiparametric models such as partially linear models,
$$m(x_1, \ldots, x_d) = \beta_1 x_1 + \cdots + \beta_{d-1} x_{d-1} + g(x_d) \quad \text{for some } \beta_1, \ldots, \beta_{d-1} \in \mathbb{R}, \qquad (3.2)$$
single index models
$$m(x_1, \ldots, x_d) = g(\beta_1 x_1 + \cdots + \beta_d x_d) \quad \text{for some } \beta_1, \ldots, \beta_d \in \mathbb{R}, \qquad (3.3)$$
or additive models
$$m(x_1, \ldots, x_d) = m_1(x_1) + \cdots + m_d(x_d) \qquad (3.4)$$
with univariate nonparametric functions $g$ and $m_1, \ldots, m_d$, respectively. Hence there is a great interest in goodness-of-fit tests for the regression function. The results displayed in Section 2 can be applied to test for each of the hypotheses (3.1)-(3.4), and in the next section we consider in detail the testing for an additive regression model (3.4).

General testing procedures for semiparametric regression models have also been considered by Rodríguez-Póo, Sperlich and Vieu (2005) and Chen and Van Keilegom (2006). Their tests are based on smoothing techniques (with a bandwidth $h$), and they are able to detect local alternatives of the order $n^{-1/2} h^{-d/4}$. The tests proposed here can, however, detect local alternatives that converge to zero at $n^{-1/2}$-rate, which will be substantially faster when $d$ is large.

The general idea to apply the new results for hypothesis testing is to compare the estimated error distribution $\hat F_\varepsilon$ under the full nonparametric model (as in Section 2) with the empirical distribution function $\tilde F_\varepsilon$ of residuals estimated under the null model, and to apply Kolmogorov-Smirnov or Cramér-von Mises tests based on the process $\sqrt{n}(\hat F_\varepsilon(\cdot) - \tilde F_\varepsilon(\cdot))$, which converges to a Gaussian process. Analogous tests for hypothesis (3.1) were proposed by Van Keilegom, González Manteiga and Sánchez Sellero (2008) when the covariate is one-dimensional. Similar tests for the hypothesis $\sigma^2 \in \{\sigma^2_\vartheta \mid \vartheta \in \Theta\}$ for the variance function (which includes tests for homoscedasticity as a special case) were considered by Dette, Neumeyer and Van Keilegom (2007), whereas Neumeyer, Dette and Nagel (2005) test goodness-of-fit of the error distribution, i.e. $F_\varepsilon \in \{F_\vartheta \mid \vartheta \in \Theta\}$. Thanks to the results of Section 2, the three latter tests are now also valid when the covariate is multi-dimensional.
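In code, the comparison underlying all of these tests reduces to Kolmogorov-Smirnov and Cramér-von Mises functionals of the difference of two residual empirical distribution functions. A minimal sketch follows, with `eps_hat` and `eps_tilde` assumed to be the residual vectors under the full and the null model, respectively; critical values would come from a bootstrap, as discussed in Section 4.

```python
# Sketch: KS and CvM functionals of sqrt(n) (F_hat - F_tilde).
import numpy as np

def ks_cvm(eps_hat, eps_tilde):
    n = len(eps_hat)
    s_hat, s_til = np.sort(eps_hat), np.sort(eps_tilde)
    edf = lambda s, y: np.searchsorted(s, y, side="right") / len(s)
    jumps = np.concatenate([s_hat, s_til])   # sup is attained at jump points
    T_ks = np.sqrt(n) * np.max(np.abs(edf(s_hat, jumps) - edf(s_til, jumps)))
    T_cvm = n * np.mean((edf(s_hat, s_hat) - edf(s_til, s_hat)) ** 2)
    return T_ks, T_cvm
```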

Moreover, the tests by Pardo Fernández, Van Keilegom and González Manteiga (2007) and Pardo Fernández (2007) for equality of regression functions, i.e. $m_1 = \cdots = m_k$, and equality of error distributions, i.e. $F_{\varepsilon_1} = \cdots = F_{\varepsilon_k}$, respectively, are now carried over to the case of multivariate covariates. Those tests are in the context of $k$ independent regression models
$$Y_{ij} = m_i(X_{ij}) + \sigma_i(X_{ij})\varepsilon_{ij}, \qquad j = 1, \ldots, n_i, \; i = 1, \ldots, k.$$

All the considered tests provide the possibility to detect local alternatives of rate $n^{-1/2}$, independent of the covariate dimension $d$. As explained already above, this is in big contrast with smoothing-based tests, which are based e.g. on the $L_2$-distance between the nonparametrically and parametrically estimated regression function in the case of hypothesis (3.1). These tests usually can only detect local alternatives of the rate $n^{-1/2} h^{-d/4}$.

4 Testing for additivity

In this section we consider in detail the application of the residual-based empirical process to testing additivity of a multivariate regression function. Different tests for additivity of regression models were proposed by Gozalo and Linton (2001), Dette and von Lieres und Wilkau (2001), and Yang, Park, Xue and Härdle (2006), among others. Our aim is to test validity of the hypothesis $H_0$ of an additive regression model, i.e.
$$H_0: m(x_1, \ldots, x_d) = m_1(x_1) + \cdots + m_d(x_d) + c \quad \text{for all } (x_1, \ldots, x_d) \in R_X, \qquad (4.1)$$
against the general nonparametric alternative as considered in Section 2. Here in model (4.1) we assume $E[m_l(X_{il})] = 0$ for all $l = 1, \ldots, d$ to identify the univariate regression functions.

Let $\hat m$, $\hat\sigma^2$ and $\hat F_\varepsilon$ denote the estimators defined in (2.1), (2.2) and (2.3), respectively. Further, let $X_{i,-l} = (X_{i1}, \ldots, X_{i,l-1}, X_{i,l+1}, \ldots, X_{id})$ and denote its density by $f_{X_{-l}}$, whereas the density of $X_{il}$ is denoted by $f_{X_l}$. To estimate the additive regression components we apply the marginal integration estimator [see Newey (1994), Tjøstheim and Auestad (1994), Linton and Nielsen (1995)] and define
$$\tilde m_l(x_l) = \frac{1}{n} \sum_{j=1}^n \hat m(X_{j1}, \ldots, X_{j,l-1}, x_l, X_{j,l+1}, \ldots, X_{jd}) - \bar Y_n, \qquad (4.2)$$
where $\bar Y_n = n^{-1} \sum_{j=1}^n Y_j$.
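A direct, if computationally naive ($O(n^2)$ smoother evaluations), sketch of the marginal integration estimator (4.2) and of the additive fit $\tilde m$ defined in (4.3) just below, reusing `local_poly` from the sketch in Section 2:

```python
# Sketch: marginal integration estimators (4.2) and (4.3) for general d.
import numpy as np

def m_tilde_l(X, Y, x_l, l, h):
    """Estimate of the l-th additive component at the point x_l, as in (4.2)."""
    Xg = X.copy()
    Xg[:, l] = x_l                                 # freeze coordinate l at x_l
    vals = [local_poly(X, Y, xg, h) for xg in Xg]  # m_hat at modified points
    return np.mean(vals) - Y.mean()

def m_tilde(X, Y, x, h):
    """Additive regression estimator (4.3) at a point x."""
    comps = sum(m_tilde_l(X, Y, x[l], l, h) for l in range(X.shape[1]))
    return comps + Y.mean()
```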

Let
$$\tilde m(x_1, \ldots, x_d) = \tilde m_1(x_1) + \cdots + \tilde m_d(x_d) + \bar Y_n \qquad (4.3)$$
denote the additive regression estimator, and let $\tilde F_\varepsilon$ be the empirical distribution function of the residuals
$$\tilde\varepsilon_i = \frac{Y_i - \tilde m(X_i)}{\hat\sigma(X_i)}, \qquad i = 1, \ldots, n,$$
estimated under the null model. Tests for additivity can be based on Kolmogorov-Smirnov or Cramér-von Mises type functionals of the process $n^{1/2}(\hat F_\varepsilon - \tilde F_\varepsilon)$, given by
$$T_{KS} = n^{1/2} \sup_{-\infty < y < \infty} |\hat F_\varepsilon(y) - \tilde F_\varepsilon(y)|, \qquad T_{CM} = n \int (\hat F_\varepsilon(y) - \tilde F_\varepsilon(y))^2 \, d\hat F_\varepsilon(y).$$
Please note that $\hat F_\varepsilon$ consistently estimates $F_\varepsilon$, whereas $\tilde F_\varepsilon$ consistently estimates the distribution $F_{\tilde\varepsilon}$ of $\tilde\varepsilon_i = (Y_i - m_1(X_{i1}) - \cdots - m_d(X_{id}) - c)\sigma^{-1}(X_i)$ for $c = E[Y_i]$ and $m_l(x_l) = E[m(X_{i1}, \ldots, X_{i,l-1}, x_l, X_{i,l+1}, \ldots, X_{id})]$, where $m(x) = E[Y_i \mid X_i = x]$. Exactly as in Theorem 2.1 by Van Keilegom, González Manteiga and Sánchez Sellero (2008), it follows that $F_{\tilde\varepsilon} = F_\varepsilon$ is equivalent to the null hypothesis (4.1).

To derive the following asymptotic results we will need an additional assumption.

(C6) All partial derivatives of $f_X$ and $f_{X_{-l}}$ $(l = 1, \ldots, d)$ up to order $p+1$ exist and are uniformly continuous.

Theorem 4.1 Assume (C1)-(C6) and the null hypothesis (4.1). Then,
$$\hat F_\varepsilon(y) - \tilde F_\varepsilon(y) = \frac{f_\varepsilon(y)}{n} \sum_{i=1}^n \varepsilon_i H(X_i) + R_n(y),$$
where $\sup_{-\infty < y < \infty} |R_n(y)| = o_P(n^{-1/2})$. Here, $H$ is defined by
$$H(X_i) = 1 - \sum_{l=1}^d \sigma(X_i) \frac{g_l(X_{il}) f_{X_{-l}}(X_{i,-l})}{f_X(X_i)} + (d-1)\,\sigma(X_i) \int \frac{f_X(x)}{\sigma(x)} \, dx,$$
where for $l = 1, \ldots, d$,
$$g_l(X_{il}) = \int \frac{f_X(x_1, \ldots, x_{l-1}, X_{il}, x_{l+1}, \ldots, x_d)}{\sigma(x_1, \ldots, x_{l-1}, X_{il}, x_{l+1}, \ldots, x_d)} \, d(x_1, \ldots, x_{l-1}, x_{l+1}, \ldots, x_d).$$

Corollary 4.2 Assume (C1)-(C6) and the null hypothesis (4.1). Then, the process $n^{1/2}(\hat F_\varepsilon(y) - \tilde F_\varepsilon(y))$ $(-\infty < y < \infty)$ converges weakly to $f_\varepsilon(y) Z$, where $Z$ is a zero-mean normal random variable with variance $\mathrm{Var}(Z) = E[H^2(X)]$.

Please note that for a univariate model ($d = 1$) the limiting process is degenerate, as could be expected, because then each regression model is additive. Further, in models with homoscedastic variance the dominating part of the expansion in Theorem 4.1 simplifies to
$$\frac{f_\varepsilon(y)}{n} \sum_{i=1}^n \varepsilon_i \sum_{l=1}^d \Big\{ 1 - \frac{f_{X_l}(X_{il}) f_{X_{-l}}(X_{i,-l})}{f_X(X_i)} \Big\},$$
which vanishes in the case where all covariate components are independent.

Corollary 4.3 Assume (C1)-(C6) and the null hypothesis (4.1). Then,
$$T_{KS} \stackrel{d}{\longrightarrow} \sup_{-\infty < y < \infty} f_\varepsilon(y)\,|Z|, \qquad T_{CM} \stackrel{d}{\longrightarrow} \int f_\varepsilon^2(y) \, dF_\varepsilon(y) \; Z^2,$$
where $Z$ is defined in Corollary 4.2.

The proof of Corollary 4.3 is very similar to the proof of Corollary 3.3 in Van Keilegom, González Manteiga and Sánchez Sellero (2008) and is therefore omitted. To apply the test we recommend the application of the smooth residual bootstrap. A description of the method and asymptotic theory for the univariate case can be found in Neumeyer (2006).

Remark 4.4 Consider the local alternative
$$H_{1n}: m(x_1, \ldots, x_d) = m_1(x_1) + \cdots + m_d(x_d) + c + n^{-1/2} r(x_1, \ldots, x_d) \quad \text{for all } (x_1, \ldots, x_d) \in R_X,$$
where $E[m_l(X_{il})] = 0$ $(l = 1, \ldots, d)$ and the function $r$ satisfies $E(r^2(X)) < \infty$. Then it can be shown that under $H_{1n}$,
$$T_{KS} \stackrel{d}{\longrightarrow} \sup_{-\infty < y < \infty} f_\varepsilon(y)\,|Z + b|, \qquad T_{CM} \stackrel{d}{\longrightarrow} \int f_\varepsilon^2(y) \, dF_\varepsilon(y) \; (Z + b)^2,$$
for some $b \in \mathbb{R}$. The proof is similar to the proof of Theorem 3.4 in Van Keilegom, González Manteiga and Sánchez Sellero (2008), and we therefore refer to that paper for more details.
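A rough sketch of such a smooth residual bootstrap for the additivity test is given below, reusing `local_poly`, `m_tilde` and `ks_cvm` from the earlier sketches. The smoothing bandwidth `a_n`, the number of replications, the standardization of the residuals, and the reuse of the original variance fit inside the bootstrap loop are ad-hoc simplifications here, not the tuned choices of Neumeyer (2006); the computation is also deliberately naive and slow.

```python
# Sketch: smooth residual bootstrap p-value for the KS additivity test.
import numpy as np

def additivity_test(X, Y, h, B=200, a_n=0.1, seed=2):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m_hat = np.array([local_poly(X, Y, x, h) for x in X])    # full model fit
    s_hat = np.sqrt(np.clip(
        np.array([local_poly(X, Y ** 2, x, h) for x in X]) - m_hat ** 2,
        1e-8, None))
    m_til = np.array([m_tilde(X, Y, x, h) for x in X])       # null model fit
    T_obs, _ = ks_cvm((Y - m_hat) / s_hat, (Y - m_til) / s_hat)
    e = (Y - m_til) / s_hat
    e = (e - e.mean()) / e.std()                             # standardized
    exceed = 0
    for _ in range(B):
        e_star = rng.choice(e, n) + a_n * rng.standard_normal(n)  # smoothed
        Y_star = m_til + s_hat * e_star                      # data under H_0
        m_hat_s = np.array([local_poly(X, Y_star, x, h) for x in X])
        m_til_s = np.array([m_tilde(X, Y_star, x, h) for x in X])
        T_star, _ = ks_cvm((Y_star - m_hat_s) / s_hat,
                           (Y_star - m_til_s) / s_hat)
        exceed += (T_star >= T_obs)
    return T_obs, exceed / B                                 # statistic, p-value
```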

Remark 4.5 Assume we want to test for a separable model with regression function $m(x_1, \ldots, x_d) = G(m_1(x_1), \ldots, m_d(x_d))$ with known link function $G$, where functions $q_l$ $(l = 1, \ldots, d)$ are known (or consistently estimable) such that
$$\int G(m_1(x_1), \ldots, m_d(x_d)) \, q_l(x_{-l}) \, dx_{-l} = m_l(x_l),$$
where $x_{-l} = (x_1, \ldots, x_{l-1}, x_{l+1}, \ldots, x_d)$. With
$$\tilde m_l(x_l) = \int \hat m(x_1, \ldots, x_d) \, q_l(x_{-l}) \, dx_{-l} \quad \text{and} \quad \tilde m(x_1, \ldots, x_d) = G(\tilde m_1(x_1), \ldots, \tilde m_d(x_d)),$$
the analogous testing procedure as explained for testing additivity can be applied.

Remark 4.6 Combining the methods developed in Section 2 with those considered by Pardo Fernández, Van Keilegom and González Manteiga (2007) and Neumeyer and Sperlich (2006), one can also test for equality of additive components when $k$ regression models
$$Y_{ij} = m_i(X_{ij}) + \sigma_i(X_{ij})\varepsilon_{ij}, \qquad j = 1, \ldots, n_i, \; i = 1, \ldots, k,$$
with additive structure
$$m_i(X_{ij}) = m_i(Z_{ij}, W_{ij}) = r_i(Z_{ij}) + g_i(W_{ij}), \qquad E[r_i(Z_{ij})] = 0, \quad i = 1, \ldots, k,$$
are given and one is interested in the hypothesis $H_0: r_1 = r_2 = \cdots = r_k$. For $i = 1, \ldots, k$ denote by $\hat F_{\varepsilon_i}$ the empirical distribution function of the residuals $\hat\varepsilon_{ij} = (Y_{ij} - \hat m_i(X_{ij}))/\hat\sigma_i(X_{ij})$ $(j = 1, \ldots, n_i)$, and by $\tilde F_{\varepsilon_i}$ the empirical distribution function of the residuals $\tilde\varepsilon_{ij} = (Y_{ij} - \hat r(Z_{ij}) - \hat g_i(W_{ij}))/\hat\sigma_i(X_{ij})$, where $\hat g_i$ denotes the marginal integration estimator for $g_i$ (within the $i$th sample), and $\hat r$ denotes a marginal integration estimator for $r = r_1 = \cdots = r_k$ under $H_0$, obtained from the pooled sample [see Neumeyer and Sperlich (2006)]. A test can be obtained from comparing $\hat F_{\varepsilon_i}$ with $\tilde F_{\varepsilon_i}$ $(i = 1, \ldots, k)$ in the same manner as shown by Pardo Fernández, Van Keilegom and González Manteiga (2007) in the context of testing for equality of regression functions.

Appendix: Proofs

In this Appendix the proofs will be given of the main theorems and of several lemmas.

Lemma A.1 Assume (C1)-(C5). Then,
$$\|\hat m - m\|_{d+\alpha} = o_P(1), \qquad \|\hat\sigma - \sigma\|_{d+\alpha} = o_P(1),$$
where $0 < \alpha < \delta/2$, $\delta$ is defined as in condition (C2), and where for any function $f$ defined on $R_X$ and $k = (k_1, \ldots, k_d)$,
$$\|f\|_{d+\alpha} = \max_{k. \le d} \sup_{x \in R_X} |D^k f(x)| + \max_{k. = d} \sup_{x, x' \in R_X} \frac{|D^k f(x) - D^k f(x')|}{\|x - x'\|^\alpha},$$
$$D^k = \frac{\partial^{k.}}{\partial x_1^{k_1} \cdots \partial x_d^{k_d}}, \qquad k. = \sum_{j=1}^d k_j,$$
and $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^d$.

Proof. First note that it follows from Theorem 6 in Masry (1996) that
$$\sup_{x \in R_X} |\hat m(x) - m(x)| = O_P((nh^d)^{-1/2} (\log n)^{1/2}) + O(h^{p+1}) = o_P(1).$$
Next, note that based on this result it can now be shown that for $k. \le d$,
$$\sup_{x \in R_X} |D^k \hat m(x) - D^k m(x)| = O_P((nh^{d+2k.})^{-1/2} (\log n)^{1/2}) + O(h^{p+1-k.}) = o_P(1),$$
and for $k. = d$,
$$\sup_{x, x' \in R_X} \frac{|D^k \hat m(x) - D^k m(x) - D^k \hat m(x') + D^k m(x')|}{\|x - x'\|^\alpha} = O_P((nh^{3d+2\alpha})^{-1/2} (\log n)^{1/2}) + O(h^{p+1-d-\alpha}) = o_P(1).$$
A detailed proof of the latter two results can be found in Proposition 3.2 and Theorem 3.2 in Ojeda (2008), respectively, for the case where $d = 1$. In the multiple regression case the proof is similar but more technical, and is therefore omitted. For $\hat\sigma - \sigma$, note that we can again apply Theorem 6 in Masry (1996), but with $Y_i$ replaced by $Y_i^2$ $(i = 1, \ldots, n)$. Hence, the same reasoning as for $\hat m - m$ applies.

Lemma A.2 Assume (C1)-(C5). Then,
$$\int \frac{\hat m(x) - m(x)}{\sigma(x)} f_X(x) \, dx = n^{-1} \sum_{i=1}^n \varepsilon_i + o_P(n^{-1/2}),$$
and
$$\int \frac{\hat\sigma(x) - \sigma(x)}{\sigma(x)} f_X(x) \, dx = \frac{1}{2}\, n^{-1} \sum_{i=1}^n \{\varepsilon_i^2 - 1\} + o_P(n^{-1/2}).$$

Proof. We prove the first statement. The second one can be shown in a very similar way. The proof is based on the notion of equivalent kernels, introduced by Fan and Gijbels (1996), p. 63-64. Although their development is valid in the case where $d = 1$, it can be seen (but the proof is technical) that equivalent kernels can be extended to the context of multiple regression. In fact, for the extension to $d > 1$ one needs to stack the $\sum_{k=0}^p d^k$ coefficients of the local polynomial of order $p$ into one big vector, and apply the same kind of development as in the case $d = 1$. Details are omitted. This development yields that the estimator $\hat m(x)$ can be written in the form of a Nadaraya-Watson type estimator:
$$\hat m(x) = \sum_{i=1}^n W_0^n\Big(\frac{x - X_i}{h}\Big) Y_i,$$
where the weights $W_0^n(\cdot)$ depend on $x$ and satisfy $\sum_{i=1}^n W_0^n\big(\frac{x - X_i}{h}\big) = 1$ and
$$W_0^n(u) = \frac{1}{n h_1 \cdots h_d} \frac{1}{f_X(x)} K^*(u) \{1 + o_P(1)\}$$
uniformly in $u \in [-1, 1]^d$ and $x \in R_X$. Here, $K^*(\cdot)$ is the so-called equivalent kernel and is a product kernel $K^*(u_1, \ldots, u_d) = \prod_{j=1}^d k^*(u_j)$, where $k^*$ is a univariate kernel satisfying
$$\int u^q k^*(u) \, du = \delta_{0q} \qquad (0 \le q \le p). \qquad (A.1)$$
It now follows that we can write
$$\hat m(x) - m(x) = n^{-1} f_X^{-1}(x) \sum_{i=1}^n K_h^*(x - X_i)(Y_i - m(x)) \{1 + o_P(1)\}, \qquad (A.2)$$
and hence (we take $\sigma \equiv 1$ for simplicity)
$$\int (\hat m(x) - m(x)) f_X(x) \, dx = n^{-1} \sum_{i=1}^n \int K_h^*(x - X_i)(Y_i - m(x)) \, dx \, \{1 + o_P(1)\} = \Big[ n^{-1} \sum_{i=1}^n \varepsilon_i + O(h^{p+1}) \Big] \{1 + o_P(1)\} = n^{-1} \sum_{i=1}^n \varepsilon_i + o_P(n^{-1/2}),$$
where the second equality follows from a Taylor expansion of $m(X_i) - m(x)$ of order $p+1$ and from equation (A.1), and where the last equality follows from condition (C2).
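The first expansion in Lemma A.2 lends itself to a quick simulation check: with a uniform design on $[0,1]^2$ (so that $f_X \equiv 1$), the left-hand integral can be approximated on a grid and compared with $n^{-1}\sum_i \varepsilon_i$. The sketch below reuses `X`, `Y`, `h`, `m`, `sigma`, `eps` and `local_poly` from the earlier sketches; the grid and its spacing are ad-hoc choices.

```python
# Sketch: numerical check of the first statement of Lemma A.2.
import numpy as np

g = np.linspace(0.05, 0.95, 15)
grid = np.array([(a, b) for a in g for b in g])   # interior grid on [0,1]^2
m_hat_g = np.array([local_poly(X, Y, x, h) for x in grid])
lhs = np.mean((m_hat_g - m(grid)) / sigma(grid))  # approximates the integral
rhs = eps.mean()                                  # n^{-1} sum_i eps_i
print(f"integral term: {lhs:.4f}   mean of errors: {rhs:.4f}")
```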

Lemma A.3 Assume (C1)-(C5). Then,
$$\sup_{-\infty < y < \infty} \Big| n^{-1} \sum_{i=1}^n \big\{ I(\hat\varepsilon_i \le y) - I(\varepsilon_i \le y) - F_{\hat\varepsilon}(y) + F_\varepsilon(y) \big\} \Big| = o_P(n^{-1/2}),$$
where $F_{\hat\varepsilon}$ is the distribution of $\hat\varepsilon = (Y - \hat m(X))/\hat\sigma(X)$ conditionally on the data $(X_j, Y_j)$, $j = 1, \ldots, n$ (i.e. considering $\hat m$ and $\hat\sigma$ as fixed functions).

Proof. The proof is very similar to that of Lemma 1 in Akritas and Van Keilegom (2001). Therefore, we will restrict attention to explaining the main changes with respect to that proof. First note that Lemma A.1 implies that, with probability tending to one, $(\hat m - m)/\sigma$ and $\hat\sigma/\sigma$ belong to $C_1^{d+\alpha}(R_X)$ and $C_2^{d+\alpha}(R_X)$ respectively, where $0 < \alpha < \delta/2$ is as in Lemma A.1. Here, $C_1^{d+\alpha}(R_X)$ is the class of $d$ times differentiable functions $f$ defined on $R_X$ such that $\|f\|_{d+\alpha} \le 1$ (with $\|f\|_{d+\alpha}$ defined in Lemma A.1), and $C_2^{d+\alpha}(R_X)$ is the class of $d$ times differentiable functions $f$ defined on $R_X$ such that $\|f\|_{d+\alpha} \le 2$ and $\inf_{x \in R_X} f(x) \ge 1/2$. Next, note that for any $\epsilon > 0$, the $\epsilon^2$-bracketing numbers of these two classes are bounded by
$$N_{[\,]}(\epsilon^2, C_1^{d+\alpha}(R_X), L_2(P)) \le \exp(K \epsilon^{-2d/(d+\alpha)}),$$
$$N_{[\,]}(\epsilon^2, C_2^{d+\alpha}(R_X), L_2(P)) \le \exp(K \epsilon^{-2d/(d+\alpha)}),$$
where $P$ is the joint probability measure of $(X, \varepsilon)$ and $K > 0$ [see Theorem 2.7.1 in van der Vaart and Wellner (1996)]. It now follows, using the same arguments as in Akritas and Van Keilegom (2001), that the $\epsilon$-bracketing number of the class
$$\mathcal{F}_1 = \big\{ (x, e) \mapsto I(e \le y f_2(x) + f_1(x)) : y \in \mathbb{R}, \; f_1 \in C_1^{d+\alpha}(R_X) \text{ and } f_2 \in C_2^{d+\alpha}(R_X) \big\}$$
is at most $O(\epsilon^{-2} \exp(K \epsilon^{-2d/(d+\alpha)}))$, and hence
$$\int_0^\infty \sqrt{\log N_{[\,]}(\epsilon, \mathcal{F}_1, L_2(P))} \, d\epsilon < \infty.$$
The rest of the proof is now exactly the same as in Akritas and Van Keilegom (2001) and is therefore omitted.

Proof of Theorem 2.1. From Lemma A.3 it follows that
$$\hat F_\varepsilon(y) - F_\varepsilon(y) = n^{-1} \sum_{i=1}^n \big\{ I(\varepsilon_i \le y) - F_\varepsilon(y) \big\} + \int \Big\{ F_\varepsilon\Big( \frac{y\,\hat\sigma(x) + \hat m(x) - m(x)}{\sigma(x)} \Big) - F_\varepsilon(y) \Big\} \, dF_X(x) + o_P(n^{-1/2})$$
$$= n^{-1} \sum_{i=1}^n \big\{ I(\varepsilon_i \le y) - F_\varepsilon(y) \big\} + f_\varepsilon(y) \int \sigma^{-1}(x) \big\{ y(\hat\sigma(x) - \sigma(x)) + \hat m(x) - m(x) \big\} \, dF_X(x)$$
$$\quad + \frac{1}{2} \int f_\varepsilon'(\xi_x) \, \sigma^{-2}(x) \big\{ y(\hat\sigma(x) - \sigma(x)) + \hat m(x) - m(x) \big\}^2 \, dF_X(x) + o_P(n^{-1/2}),$$
for some $\xi_x$ between $y$ and $\sigma^{-1}(x)\{y\hat\sigma(x) + \hat m(x) - m(x)\}$. The third term above is $O_P((nh^d)^{-1} \log n) = o_P(n^{-1/2})$, which follows from the proof of Lemma A.1 and from conditions (C2) and (C5), while the second term equals $n^{-1} \sum_{i=1}^n \varphi(\varepsilon_i, y) + o_P(n^{-1/2})$, which follows from Lemma A.2.

Proof of Corollary 2.2. The proof is very similar to that of Theorem 2 in Akritas and Van Keilegom (2001) and is therefore omitted.

Lemma A.4 Assume (C1)-(C5) and (4.1). Then, for $\tilde m$ defined in (4.3),
$$\|\tilde m - m\|_{d+\alpha} = o_P(1),$$
where $0 < \alpha < \delta/2$, with $\delta$ defined in condition (C2) and $\|\cdot\|_{d+\alpha}$ defined in Lemma A.1.

Proof. From (4.3) and (4.1) we have for $x = (x_1, \ldots, x_d)$
$$\tilde m(x) - m(x) = \sum_{l=1}^d (\tilde m_l(x_l) - m_l(x_l)) + \bar Y_n - c,$$
where $\bar Y_n - c = n^{-1} \sum_{i=1}^n (Y_i - E[Y_i]) = O_P(n^{-1/2})$, and it does not depend on $x$. For ease of notation we consider $\tilde m_l(x_l) - m_l(x_l)$ in detail for $l = 1$. With the assumption $E[m_l(X_{il})] = 0$ $(l = 1, \ldots, d)$ we obtain
$$\tilde m_1(x_1) - m_1(x_1) = \frac{1}{n} \sum_{j=1}^n \big( \hat m(x_1, X_{j2}, \ldots, X_{jd}) - m(x_1, X_{j2}, \ldots, X_{jd}) \big) + \sum_{l=2}^d \frac{1}{n} \sum_{j=1}^n m_l(X_{jl}) + c - \bar Y_n,$$

where the terms in the last line are of order $O_P(n^{-1/2})$ and independent of $x_1$. Hence, we obtain
$$\sup_{x_1} |\tilde m_1(x_1) - m_1(x_1)| \le \sup_x |\hat m(x) - m(x)| + O_P(n^{-1/2}) = o_P(1)$$
by Lemma A.1. For the derivatives note that for $k \le d$ and for $j = 1, \ldots, n$,
$$m_1^{(k)}(x_1) = \frac{\partial^k m(x_1, X_{j2}, \ldots, X_{jd})}{\partial x_1^k}, \qquad \tilde m_1^{(k)}(x_1) = \frac{1}{n} \sum_{j=1}^n \frac{\partial^k \hat m(x_1, X_{j2}, \ldots, X_{jd})}{\partial x_1^k},$$
and hence
$$\sup_{x_1} |\tilde m_1^{(k)}(x_1) - m_1^{(k)}(x_1)| \le \sup_x \Big| \frac{\partial^k \hat m(x)}{\partial x_1^k} - \frac{\partial^k m(x)}{\partial x_1^k} \Big| = o_P(1)$$
by Lemma A.1. Further, we have
$$\sup_{x_1, x_1'} \frac{|\tilde m_1^{(k)}(x_1) - m_1^{(k)}(x_1) - \tilde m_1^{(k)}(x_1') + m_1^{(k)}(x_1')|}{|x_1 - x_1'|^\alpha}$$
$$\le \sup_{x_1, x_1'} \frac{1}{n} \sum_{j=1}^n \frac{\big| \partial^k \hat m(x_1, X_{j,-1})/\partial x_1^k - \partial^k m(x_1, X_{j,-1})/\partial x_1^k - \partial^k \hat m(x_1', X_{j,-1})/\partial x_1^k + \partial^k m(x_1', X_{j,-1})/\partial x_1^k \big|}{\|(x_1, X_{j,-1})^t - (x_1', X_{j,-1})^t\|^\alpha}$$
$$\le \sup_{x, x'} \frac{|D^{(k,0,\ldots,0)}(\hat m - m)(x) - D^{(k,0,\ldots,0)}(\hat m - m)(x')|}{\|x - x'\|^\alpha} = o_P(1)$$
by Lemma A.1.

Lemma A.5 Assume (C1)-(C6) and (4.1). Then, for $l = 1, \ldots, d$,
$$\int \frac{\tilde m_l(x_l) - m_l(x_l)}{\sigma(x)} \, dF_X(x) = \frac{1}{n} \sum_{i=1}^n \sigma(X_i)\varepsilon_i \frac{g_l(X_{il}) f_{X_{-l}}(X_{i,-l})}{f_X(X_i)} - \frac{1}{n} \sum_{i=1}^n \big[ \sigma(X_i)\varepsilon_i + m_l(X_{il}) \big] \int \frac{f_X(x)}{\sigma(x)} \, dx + o_P(n^{-1/2}),$$
where $g_l$ is defined in Theorem 4.1.

Proof. For ease of notation we only consider the case $l = 1$. Then from (A.2) and the proof of Lemma A.2 we have, by standard arguments (using condition (C6)),
$$\int \frac{\tilde m_1(x_1) - m_1(x_1)}{\sigma(x)} \, dF_X(x)$$
$$= \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \sigma(X_i)\varepsilon_i \int \frac{K_h^*(x_1 - X_{i1}, X_{j2} - X_{i2}, \ldots, X_{jd} - X_{id})}{f_X(x_1, X_{j2}, \ldots, X_{jd})} \, \frac{dF_X(x)}{\sigma(x)} + \Big[ (c - \bar Y_n) + \sum_{k=2}^d \frac{1}{n} \sum_{j=1}^n m_k(X_{jk}) \Big] \int \frac{f_X(x)}{\sigma(x)} \, dx + o_P(n^{-1/2})$$
$$= \frac{1}{n h_1} \sum_{i=1}^n \sigma(X_i)\varepsilon_i \int k^*\Big( \frac{x_1 - X_{i1}}{h_1} \Big) \frac{f_{X_{-1}}(X_{i2}, \ldots, X_{id})}{f_X(x_1, X_{i2}, \ldots, X_{id})} \, \frac{dF_X(x)}{\sigma(x)} + \frac{1}{n} \sum_{j=1}^n \big[ c - Y_j + m_2(X_{j2}) + \cdots + m_d(X_{jd}) \big] \int \frac{f_X(x)}{\sigma(x)} \, dx + o_P(n^{-1/2})$$
$$= \frac{1}{n} \sum_{i=1}^n \sigma(X_i)\varepsilon_i \frac{f_{X_{-1}}(X_{i2}, \ldots, X_{id})}{f_X(X_i)} \int \frac{f_X(X_{i1}, x_2, \ldots, x_d)}{\sigma(X_{i1}, x_2, \ldots, x_d)} \, d(x_2, \ldots, x_d) - \frac{1}{n} \sum_{j=1}^n \big[ \sigma(X_j)\varepsilon_j + m_1(X_{j1}) \big] \int \frac{f_X(x)}{\sigma(x)} \, dx + o_P(n^{-1/2}),$$
which is the desired expansion.

Proof of Theorem 4.1. Exactly as in the proof of Theorem 2.1 we obtain the expansion
$$\tilde F_\varepsilon(y) - F_\varepsilon(y) = n^{-1} \sum_{i=1}^n \big\{ I(\varepsilon_i \le y) - F_\varepsilon(y) \big\} + f_\varepsilon(y) \int \sigma^{-1}(x) \big\{ y(\hat\sigma(x) - \sigma(x)) + \tilde m(x) - m(x) \big\} \, dF_X(x) + o_P(n^{-1/2})$$
uniformly with respect to $y$, and hence
$$\hat F_\varepsilon(y) - \tilde F_\varepsilon(y) = f_\varepsilon(y) \Big[ \int \sigma^{-1}(x) \{\hat m(x) - m(x)\} \, dF_X(x) - (\bar Y_n - c) \int \sigma^{-1}(x) \, dF_X(x) - \sum_{l=1}^d \int \sigma^{-1}(x) \{\tilde m_l(x_l) - m_l(x_l)\} \, dF_X(x) \Big] + o_P(n^{-1/2}).$$

Now Lemma A.2 and Lemma A.5 yield
$$\hat F_\varepsilon(y) - \tilde F_\varepsilon(y) = f_\varepsilon(y) \Big( \frac{1}{n} \sum_{i=1}^n \varepsilon_i - (\bar Y_n - c) \int \sigma^{-1}(x) \, dF_X(x) - \sum_{l=1}^d \Big\{ \frac{1}{n} \sum_{i=1}^n \sigma(X_i)\varepsilon_i \frac{g_l(X_{il}) f_{X_{-l}}(X_{i,-l})}{f_X(X_i)} - \frac{1}{n} \sum_{i=1}^n \big[ \sigma(X_i)\varepsilon_i + m_l(X_{il}) \big] \int \frac{f_X(x)}{\sigma(x)} \, dx \Big\} \Big) + o_P(n^{-1/2})$$
$$= f_\varepsilon(y) \, \frac{1}{n} \sum_{i=1}^n \varepsilon_i \Big[ 1 - \sum_{l=1}^d \sigma(X_i) \frac{g_l(X_{il}) f_{X_{-l}}(X_{i,-l})}{f_X(X_i)} + (d-1)\,\sigma(X_i) \int \frac{f_X(x)}{\sigma(x)} \, dx \Big] + o_P(n^{-1/2}),$$
and the assertion follows. (Here the terms involving $m_l(X_{il})$ cancel, since under (4.1) we have $\bar Y_n - c = n^{-1} \sum_{i=1}^n [\sigma(X_i)\varepsilon_i + \sum_{l=1}^d m_l(X_{il})]$.)

Acknowledgments. The second author acknowledges support from IAP research network nr. P6/03 of the Belgian government (Belgian Science Policy).

References

M. G. Akritas and I. Van Keilegom (2001). Nonparametric estimation of the residual distribution. Scand. J. Statist. 28, 549-567.

S. X. Chen and I. Van Keilegom (2006). A goodness-of-fit test for parametric and semiparametric models in multiresponse regression. Submitted; paper available at http://www.stat.ucl.ac.be/ispub/dp/2006/dp0616.pdf (under revision for Bernoulli).

H. Dette, N. Neumeyer and I. Van Keilegom (2007). A new test for the parametric form of the variance function in non-parametric regression. J. Roy. Statist. Soc. Ser. B 69, 903-917.

H. Dette and C. von Lieres und Wilkau (2001). Testing additivity by kernel based methods - what is a reasonable test? Bernoulli 7, 669-697.

J. Fan and I. Gijbels (1996). Local Polynomial Modelling and Its Applications. Chapman & Hall, London.

P. L. Gozalo and O. B. Linton (2001). A nonparametric test of additivity in generalized nonparametric regression with estimated parameters. J. Econometrics 104, 1-48.

W. Härdle and A. Tsybakov (1997). Local polynomial estimators of the volatility function in nonparametric autoregression. J. Econometrics 81, 223-242.

O. B. Linton and J. P. Nielsen (1995). A kernel method of estimating structured nonparametric regression based on marginal integration. Biometrika 82, 93-101.

E. Masry (1996). Multivariate local polynomial regression for time series: uniform strong consistency and rates. J. Time Series Analysis 17, 571-599.

U. U. Müller, A. Schick and W. Wefelmeyer (2007). Estimating the error distribution function in semiparametric regression. Statist. Decisions 25, 1-18.

N. Neumeyer (2006). Smooth residual bootstrap for empirical processes of nonparametric regression residuals. Submitted; paper available at http://www.math.uni-hamburg.de/research/ims.html (under revision for Scand. J. Statist.).

N. Neumeyer, H. Dette and E. R. Nagel (2005). Bootstrap tests for the error distribution in linear and nonparametric regression models. Austr. New Zealand J. Statist. 48, 129-156.

N. Neumeyer and S. Sperlich (2006). Comparison of separable components in different samples. Scand. J. Statist. 33, 477-501.

W. K. Newey (1994). Kernel estimation of partial means. Econometric Theory 10, 233-253.

J. L. Ojeda (2008). Hölder continuity properties of the local polynomial estimator. Submitted; paper available at http://www.unizar.es/galdeano/preprints/lista.html.

J. C. Pardo Fernández (2007). Comparison of error distributions in nonparametric regression. Statist. Probab. Lett. 77, 350-356.

J. C. Pardo Fernández, I. Van Keilegom and W. González Manteiga (2007). Comparison of regression curves based on the estimation of the error distribution. Statistica Sinica 17, 1115-1137.

J. M. Rodríguez-Póo, S. Sperlich and P. Vieu (2005). An adaptive specification test for semiparametric models. Submitted; paper available at http://papers.ssrn.com/sol3/.

D. Ruppert and M. P. Wand (1994). Multivariate locally weighted least squares regression. Ann. Statist. 22, 1346-1370.

D. Tjøstheim and B. H. Auestad (1994). Nonparametric identification of nonlinear time series: projections. J. Amer. Statist. Assoc. 89, 1398-1409.

A. W. van der Vaart and J. A. Wellner (1996). Weak Convergence and Empirical Processes. Springer, New York.

I. Van Keilegom, W. González Manteiga and C. Sánchez Sellero (2008). Goodness-of-fit tests in parametric regression based on the estimation of the error distribution. TEST, to appear.

L. Yang, B. U. Park, L. Xue and W. Härdle (2006). Estimation and testing for varying coefficients in additive models with marginal integration. J. Amer. Statist. Assoc. 101, 1212-1227.