Department of Statistics Purdue University West Lafayette, IN USA


Effect of Mean on Variance Function Estimation in Nonparametric Regression by Lie Wang, Lawrence D. Brown, T. Tony Cai (University of Pennsylvania) and Michael Levine (Purdue University). Technical Report #06-08, Department of Statistics, Purdue University, West Lafayette, IN, USA. October 2006

Effect Of Mean On Variance Function Estimation In Nonparametric Regression

Lie Wang, Lawrence D. Brown, T. Tony Cai and Michael Levine

March 16, 2006

Abstract

Variance function estimation in nonparametric regression is considered and the minimax rate of convergence is derived. We are particularly interested in the effect of the unknown mean on the estimation of the variance function. Our results indicate that, contrary to common practice, it is often not desirable to base the estimator of the variance function on the residuals from an optimal estimator of the mean. Instead, it is desirable to use estimators of the mean with minimal bias. In addition, the results also correct the optimal rate claimed in Hall and Carroll (1989, JRSSB).

Keywords: Minimax estimation, nonparametric regression, variance estimation.

AMS 2000 Subject Classification: Primary: 62G08, 62G20.

Lie Wang, Lawrence D. Brown and T. Tony Cai: Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA. The research of Tony Cai was supported in part by NSF Grant DMS. Michael Levine: Department of Statistics, Purdue University, West Lafayette, IN

1 Introduction

Consider the heteroscedastic nonparametric regression model

y_i = f(x_i) + √V(x_i) z_i,  i = 1, ..., n,  (1)

where x_i = i/n and the z_i are independent with zero mean, unit variance and uniformly bounded fourth moments. Both the mean function f and the variance function V are defined on [0, 1] and are unknown. The main object of interest is the variance function V. The estimation accuracy is measured both globally by the mean integrated squared error and locally by the mean squared error at a point. We wish to study the effect of the unknown mean f on the estimation of the variance function V. In particular, we are interested in the case where the difficulty in estimation of V is driven by the degree of smoothness of the mean f.

The effect of not knowing the mean f on the estimation of V has been studied before in Hall and Carroll (1989). The main conclusion of their paper is that it is possible to characterize explicitly how the smoothness of the unknown mean function influences the rate of convergence of the variance estimator. In association with this they claim an explicit minimax rate of convergence for the variance estimator under pointwise risk. For example, they state that the "classical" rate of convergence (n^{-4/5}) for a twice differentiable variance function is achievable if and only if f is in the Lipschitz class of order at least 1/3. More precisely, Hall and Carroll (1989) stated that, under the pointwise mean squared error loss, the minimax rate of convergence for estimating V is

max{n^{-4α/(1+2α)}, n^{-2β/(1+2β)}}  (4)

if f has α derivatives and V has β derivatives. We shall show here that this result is in fact incorrect.

In the present paper we revisit the problem in the same setting as in Hall and Carroll (1989). We show that the minimax rate of convergence under both the pointwise squared error and the global integrated mean squared error is

max{n^{-4α}, n^{-2β/(1+2β)}}

if f has α derivatives and V has β derivatives. The derivation of the minimax lower bound is involved and is based on a moment matching technique and a two-point testing argument. A key step is to study a hypothesis testing problem where the alternative hypothesis is a Gaussian location mixture with a special moment matching property. The minimax upper bound is obtained using kernel smoothing of the squared first-order differences.

Our results have two interesting implications. Firstly, if V is known to belong to a regular parametric model, such as the set of positive polynomials of a given order, the cutoff for the smoothness of f on the estimation of V is 1/4, not 1/2 as stated in Hall and Carroll (1989). That is, if f has at least 1/4 derivative then the minimax rate of convergence for estimating V is solely determined by the smoothness of V, as if f were known. On the other hand, if f has less than 1/4 derivative then the minimax rate depends on the relative smoothness of both f and V and will be completely driven by the roughness of f. Secondly, contrary to common practice, our results indicate that it is often not desirable to base the estimator V̂ of the variance function V on the residuals from an optimal estimator f̂ of f. In fact, the result shows that it is desirable to use an f̂ with minimal bias. The main reason is that the bias and variance of f̂ have quite different effects on the estimation of V. The bias of f̂ cannot be removed or even reduced in the second stage smoothing of the squared residuals, while the variance of f̂ can be incorporated easily.

The paper is organized as follows. Section 2 presents an upper bound for the minimax risk while Section 3 derives a rate-sharp lower bound for the minimax risk under both the global and local losses. The lower and upper bounds together yield the minimax rate of convergence. Section 4 discusses the obtained results and their implications for practical variance estimation in nonparametric regression.
The proofs are given in Section 5.

2 Upper bound

In this section we shall construct a kernel estimator based on the squares of the first-order differences. Such, and more general, difference-based kernel estimators of the variance function have been considered, for example, in Müller and Stadtmüller (1987 and 1993). For estimating a constant variance, difference-based estimators have a long history. See von Neumann (1941, 1942), Rice (1984), Hall, Kay and Titterington (1990) and Munk, Bissantz, Wagner and Freitag (2005).
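For the constant-variance case mentioned above, the first-order difference idea can be sketched in a few lines: since E(y_i − y_{i+1})² = 2σ² + (f(x_i) − f(x_{i+1}))², averaging the halved squared differences estimates σ² with a bias driven only by the local variation of f. A minimal sketch in Python (the test function and noise level are our own illustrative choices):

```python
import numpy as np

def rice_variance(y):
    """Difference-based estimator of a constant noise variance
    (Rice, 1984): E(y[i] - y[i+1])^2 = 2*sigma^2 + (f(x[i]) - f(x[i+1]))^2,
    so averaging halved squared differences estimates sigma^2 when the
    mean varies slowly between neighboring design points."""
    d = np.diff(np.asarray(y, dtype=float))
    return np.sum(d ** 2) / (2 * len(d))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.5, size=x.size)  # true sigma^2 = 0.25
sigma2_hat = rice_variance(y)
```

Here the smooth mean contributes only O(n^{-2}) bias per difference, so the estimate is close to the true value 0.25.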

Define the Lipschitz class Λ^α(M) in the usual way:

Λ^α(M) = {g : for all 0 ≤ x, y ≤ 1 and k = 0, ..., ⌊α⌋, |g^(k)(x)| ≤ M and |g^(⌊α⌋)(x) − g^(⌊α⌋)(y)| ≤ M |x − y|^{α′}}

where ⌊α⌋ is the largest integer less than α and α′ = α − ⌊α⌋. We shall assume that f ∈ Λ^α(M_f) and V ∈ Λ^β(M_V). We say that the function f "has α derivatives" if f ∈ Λ^α(M_f) and that V "has β derivatives" if V ∈ Λ^β(M_V).

For i = 1, 2, ..., n − 1, set D_i = y_i − y_{i+1}. Then one can write

D_i = δ_i + √2 V_i^{1/2} ε_i,

where δ_i = f(x_i) − f(x_{i+1}), V_i = (V(x_i) + V(x_{i+1}))/2 and ε_i has zero mean and unit variance. We construct an estimator by applying kernel smoothing to the squared differences D_i², which have means δ_i² + 2V_i. Let K(x) be a kernel function satisfying

K(x) is supported on [−1, 1], ∫K(x)dx = 1, ∫K(x)x^i dx = 0 for i = 1, 2, ..., ⌊β⌋, and ∫K²(x)dx < ∞.

It is well known in kernel regression that special care is needed in order to avoid significant, sometimes dominant, boundary effects. We shall use the boundary kernels with asymmetric support, given in Gasser and Müller (1979 and 1984), to control the boundary effects. For any t ∈ [0, 1], there exists a boundary kernel function K_t(x) with support [−1, t] satisfying the same conditions as K(x), i.e.

∫_{−1}^{t} K_t(x)dx = 1, ∫_{−1}^{t} K_t(x)x^i dx = 0 for i = 1, 2, ..., ⌊β⌋, and ∫_{−1}^{t} K_t²(x)dx ≤ k < ∞ for all t ∈ [0, 1].

We can also make K_t(x) → K(x) as t → 1 (but this is not necessary here). See Gasser, Müller and Mammitzsch (1985). For any 0 < h < 1/2, x ∈ [0, 1], and i = 1, 2, ..., n − 1, let

K_i^h(x) = (1/h) ∫_{(x_{i−1}+x_i)/2}^{(x_i+x_{i+1})/2} K((x − u)/h) du  when x ∈ (h, 1 − h),

K_i^h(x) = (1/h) ∫_{(x_{i−1}+x_i)/2}^{(x_i+x_{i+1})/2} K_t((x − u)/h) du  when x = th for some t ∈ [0, 1],

K_i^h(x) = (1/h) ∫_{(x_{i−1}+x_i)/2}^{(x_i+x_{i+1})/2} K_t((u − x)/h) du  when x = 1 − th for some t ∈ [0, 1],

where for convenience we take the integral from 0 to (x_1 + x_2)/2 instead of from (x_1 + x_0)/2 to (x_1 + x_2)/2 when i = 1, and the integral from (x_{n−1} + x_{n−2})/2 to 1 when i = n − 1. Then we can see that for any 0 ≤ x ≤ 1, Σ_{i=1}^{n−1} K_i^h(x) = 1. Define the estimator V̂ as

V̂(x) = Σ_{i=1}^{n−1} K_i^h(x) D_i²/2.  (7)

Similar to the mean function estimation problem, the optimal bandwidth h_n can easily be seen to be h_n = O(n^{−1/(1+2β)}) for V ∈ Λ^β(M_V). For this optimal choice of the bandwidth, we have the following theorem.

Theorem 1 Under the regression model (1), where x_i = i/n and the z_i are independent with zero mean, unit variance and uniformly bounded fourth moments, let the estimator V̂ be given as in (7) with the bandwidth h = O(n^{−1/(1+2β)}). Then there exists some constant C_0 > 0 depending only on α, β, M_f and M_V such that for sufficiently large n,

sup_{f ∈ Λ^α(M_f), V ∈ Λ^β(M_V)}  sup_{0 ≤ x* ≤ 1}  E(V̂(x*) − V(x*))² ≤ C_0 max{n^{−4α}, n^{−2β/(1+2β)}}  (8)

and the corresponding bound holds for the global risk E‖V̂ − V‖₂².  (9)

Remark 1: The uniform rate of convergence given in (8) immediately yields the pointwise rate of convergence at any fixed point x* ∈ [0, 1].

Remark 2: The upper bound given in (8) is smaller than the minimax risk lower bound given in Hall and Carroll (1989). The lower bound in their paper is incorrect and their upper bound is not optimal. See Sections 3 and 4 for further discussion.
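As a sketch of how an estimator of the form (7) can be computed, the snippet below smooths the halved squared differences D_i²/2 with bandwidth h = n^{−1/(1+2β)}. For simplicity it uses plain renormalized Epanechnikov weights everywhere instead of the Gasser–Müller boundary kernels, and the mean and variance functions are our own illustrative choices, not from the paper:

```python
import numpy as np

def diff_variance_estimator(y, x_eval, beta=2.0):
    """Kernel smoothing of halved squared first-order differences:
    D_i = y_i - y_{i+1} has E(D_i^2 / 2) = delta_i^2 / 2 + V_i, so for a
    slowly varying mean the smoothed D_i^2 / 2 estimate V.  Bandwidth
    h = n^(-1/(1+2*beta)) as in Theorem 1; interior weights only, with
    simple renormalization instead of boundary kernels."""
    n = len(y)
    xi = np.arange(1, n + 1) / n                    # design points i/n
    d2 = 0.5 * np.diff(y) ** 2                      # D_i^2 / 2, i = 1..n-1
    mid = 0.5 * (xi[:-1] + xi[1:])                  # centers of the differences
    h = n ** (-1.0 / (1.0 + 2.0 * beta))
    out = np.empty(len(x_eval))
    for j, x0 in enumerate(x_eval):
        u = (x0 - mid) / h
        w = np.maximum(0.75 * (1.0 - u ** 2), 0.0)  # Epanechnikov kernel
        out[j] = np.dot(w, d2) / w.sum()            # renormalized weights
    return out

rng = np.random.default_rng(1)
n = 5000
xi = np.arange(1, n + 1) / n
V_true = 0.5 + 0.25 * np.sin(2 * np.pi * xi)        # smooth variance function
y = np.cos(3 * xi) + np.sqrt(V_true) * rng.normal(size=n)
vhat = diff_variance_estimator(y, np.array([0.25, 0.5, 0.75]))
```

The smooth mean cos(3x) contributes only a negligible δ_i² term here, so vhat tracks V_true at the evaluation points up to the usual kernel bias and noise.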

3 Lower Bound

In this section, we derive a lower bound for the minimax risk of estimating the variance function V under the regression model (1). The lower bound shows that the upper bound given in the previous section is rate-sharp. As in Hall and Carroll (1989) we shall assume in the lower bound argument that the errors are normally distributed, i.e., z_i ~ N(0, 1).

Theorem 2 Under the regression model (1) with z_i iid N(0, 1),

inf_{V̂} sup_{f ∈ Λ^α(M_f), V ∈ Λ^β(M_V)} E‖V̂ − V‖₂² ≥ C_1 max{n^{−4α}, n^{−2β/(1+2β)}}  (10)

and for any fixed x* ∈ (0, 1),

inf_{V̂} sup_{f ∈ Λ^α(M_f), V ∈ Λ^β(M_V)} E(V̂(x*) − V(x*))² ≥ C_1 max{n^{−4α}, n^{−2β/(1+2β)}}  (11)

where C_1 > 0 is a constant depending only on α, β, M_f and M_V.

It follows immediately from Theorems 1 and 2 that the minimax rate of convergence for estimating V under both the global and local losses is max{n^{−4α}, n^{−2β/(1+2β)}}.

The proof of this theorem can be naturally divided into two parts. The first step is to show

inf_{V̂} sup E(V̂(x*) − V(x*))² ≥ C n^{−2β/(1+2β)}.  (12)

This part is standard and relatively easy. Brown and Levine (2006) contains a detailed proof of this assertion for the case β = 2. Their argument can be easily generalized to other values of β. We omit the details. The proof of the second step,

inf_{V̂} sup E(V̂(x*) − V(x*))² ≥ C n^{−4α},  (13)

is much more involved. The derivation of the lower bound (13) is based on a moment matching technique and a two-point testing argument. One of the main steps is to study a complicated hypothesis testing problem where the alternative hypothesis is a Gaussian location mixture with a special moment matching property. More specifically, let X_1, ..., X_n be iid with distribution P and consider the following hypothesis testing problem between

H_0 : P = P_0 = N(0, 1 + θ_n²)

and

H_1 : P = P_1 = ∫ N(θ_n v, 1) G(dv),

where θ_n > 0 is a constant and G is a distribution of the mean v with compact support. The distribution G is chosen in such a way that, for some positive integer q depending on α, the first q moments of G match exactly the corresponding moments of the standard normal distribution. The existence of such a distribution is given in the following lemma from Karlin and Studden (1966).

Lemma 1 For any fixed positive integer q, there exist a B < ∞ and a symmetric distribution G on [−B, B] such that G and the standard normal distribution have the same first q moments, i.e.

∫_{−B}^{B} x^j G(dx) = ∫_{−∞}^{+∞} x^j φ(x) dx,  j = 1, 2, ..., q,

where φ denotes the density of the standard normal distribution.

The moment matching property makes the testing between the two hypotheses "difficult". The lower bound (13) then follows from a two-point argument with an appropriately chosen θ_n. Technical details of the proof are given in Section 5.

Remark 3: For α between 1/8 and 1/4, a much simpler proof can be given with a two-point mixture for P_1 which matches the mean and variance, but not the higher moments, of P_0 and P_1. However, this simpler proof fails for smaller α. It appears to be necessary in general to match higher moments of P_0 and P_1.

Remark 4: Hall and Carroll (1989) gave the lower bound c·max{n^{−4α/(1+2α)}, n^{−2β/(1+2β)}} for the minimax risk. This bound is larger than the lower bound given in our Theorem 2 and, as we have noted, it is incorrect. This is due to a miscalculation in Appendix C of their paper. A key step in that proof is to find some d ≥ 0 such that a certain integral, involving a standard normal random variable N_1, is nonzero. But the integrand in that integral is an odd function, so the integral is identically 0 for all d.
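The Karlin–Studden lemma can be made concrete: an m-point Gauss–Hermite quadrature rule (probabilists' weight) yields a symmetric distribution on finitely many points whose moments match those of N(0, 1) up to order 2m − 1, and one can check numerically that the resulting location mixture is then nearly indistinguishable from N(0, 1 + θ²) in Hellinger affinity, while a two-point mixture as in Remark 3 is easier to detect. The sketch below is illustrative only; the value θ = 0.3, the choice q = 7, and the integration grid are our own assumptions:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# A discrete symmetric G matching the first q = 7 moments of N(0,1):
# an m-point Gauss-Hermite rule is exact for polynomials of degree
# <= 2m - 1 against the normal density, so m = 5 points suffice.
nodes, weights = hermegauss(5)          # probabilists' Hermite rule
weights = weights / weights.sum()       # normalize to probability weights

std_normal_moments = {1: 0, 2: 1, 3: 0, 4: 3, 5: 0, 6: 15, 7: 0}
matched = all(abs(float(np.dot(weights, nodes ** j)) - m) < 1e-8
              for j, m in std_normal_moments.items())

def affinity(theta, v, w, grid):
    """Per-observation Hellinger affinity between d0 = N(0, 1 + theta^2)
    and the location mixture d1 = sum_k w_k N(theta * v_k, 1),
    computed by Riemann summation on a fine grid."""
    phi = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)
    d0 = phi(grid / np.sqrt(1 + theta ** 2)) / np.sqrt(1 + theta ** 2)
    d1 = sum(wk * phi(grid - theta * vk) for vk, wk in zip(v, w))
    return float(np.sum(np.sqrt(d0 * d1)) * (grid[1] - grid[0]))

grid = np.linspace(-12.0, 12.0, 40001)
a_matched = affinity(0.3, nodes, weights, grid)            # moments matched
a_twopoint = affinity(0.3, np.array([-1.0, 1.0]),          # only mean and
                      np.array([0.5, 0.5]), grid)          # variance matched
```

The moment-matched G keeps the affinity closer to 1 than the two-point mixture does, which is exactly why the matching makes the two hypotheses hard to tell apart.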

4 Discussion

Variance function estimation in regression is more typically based on the residuals from a preliminary estimator f̂ of the mean function. Such estimators have the form

V̌(x) = Σ_i w_i(x)(y_i − f̂(x_i))²,  (14)

where the w_i(x) are weight functions. A natural and common approach is to subtract in (14) an optimal estimator f̂ of the mean function f. See, for example, Hall and Carroll (1989), Neumann (1994), Ruppert, Wand, Holst and Hössjer (1997), and Fan and Yao (1998). When the unknown mean function is smooth, this approach often works well, since the bias in f̂ is negligible and V can be estimated as well as when f is identically zero. However, when the mean function is not smooth, using the residuals from an optimally smoothed f̂ will lead to a sub-optimal estimator of V. For example, Hall and Carroll (1989) used a kernel estimator with optimal bandwidth for f̂ and showed that the resulting variance estimator attains the rate max{n^{−4α/(1+2α)}, n^{−2β/(1+2β)}} over f ∈ Λ^α(M_f) and V ∈ Λ^β(M_V). This rate is strictly slower than the minimax rate when 4α/(1+2α) < 2β/(1+2β), or equivalently, α < β/(2β+2).

Consider the example where V belongs to a regular parametric family, such as {V(x) = exp(ax + b) : a, b ∈ R}. As Hall and Carroll have noted, this case is equivalent to the case β = ∞ in results like Theorems 1 and 2. Then the rate of convergence for this estimator becomes the nonparametric rate n^{−4α/(1+2α)} for α < 1/2, while the optimal rate is the usual parametric rate n^{−1} for all α ≥ 1/4 and is n^{−4α} for 0 < α < 1/4.

The main reason for the poor performance of such an estimator in the non-smooth setting is the "large" bias in f̂. An optimal estimator f̂ of f balances the squared bias and variance. However, the bias and variance of f̂ have significantly different effects on the estimation of V. The bias of f̂ cannot be further reduced in the second stage smoothing of the squared residuals, while the variance of f̂ can be incorporated easily.
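The contrast between residual-based and difference-based estimation is easy to see in a small simulation (entirely illustrative: the square-wave mean, the local-average mean fit, and the noise level are our own choices, not the paper's). When f is rough, the bias of the smoothed mean fit leaks into the squared residuals, while first-order differencing cancels the mean except at the jumps:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = np.arange(1, n + 1) / n
f = 0.5 * np.sign(np.sin(40 * np.pi * x))   # rough (square-wave) mean
sigma2 = 0.25                               # constant true variance
y = f + rng.normal(0.0, np.sqrt(sigma2), n)

# (a) residual-based: the smoothed mean fit cannot track the jumps,
#     so its bias inflates the variance estimate
h = 0.05
fhat = np.array([y[np.abs(x - x0) < h].mean() for x0 in x])
var_resid = np.mean((y - fhat) ** 2)

# (b) difference-based: the mean cancels except at the jump points
var_diff = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
```

With this setup var_diff stays close to 0.25, while var_resid absorbs most of the mean's energy, mean(f²) ≈ 0.25, as spurious "variance".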
For f ∈ Λ^α(M_f), the maximum bias of an optimal estimator f̂ is of order n^{−α/(1+2α)}, which becomes the dominant factor in the risk of V̌ when α < β/(2β+2). To minimize the effect of the mean function in such a setting one needs to use an estimator f̂(x_i) with minimal bias. Note that our approach is, in effect, using a very crude estimator f̂ of f with f̂(x_i) = y_{i+1}. Such an estimator has high variance and low bias. As we have seen in Section 2, the large variance of f̂ does not pose a problem (in terms of

rates) for estimating V. Hence for estimating the variance function V an optimal f̂ is the one with minimum possible bias, not the one with minimum mean squared error. (Here we should of course exclude the obvious, and not useful, unbiased estimator f̂(x_i) = y_i.)

Another implication of our results is that the unknown mean function does not have any first-order effect on estimating V as long as f has more than 1/4 derivatives. When α > 1/4, the variance estimator is essentially adaptive over f ∈ Λ^α(M_f). In other words, if f is known to have more than 1/4 derivatives, the variance function V can be estimated with the same degree of first-order precision as if f were completely known. However, when α < 1/4, the rate of convergence for estimating V is entirely determined by the degree of smoothness of the mean function f.

5 Proofs

5.1 Upper Bound: Proof of Theorem 1

We shall only prove (8); inequality (9) is a direct consequence of (8). Recall that

D_i = δ_i + √2 V_i^{1/2} ε_i,

where δ_i = f(x_i) − f(x_{i+1}), V_i = (V(x_i) + V(x_{i+1}))/2 and ε_i has zero mean and unit variance. Without loss of generality, suppose h = n^{−1/(1+2β)}. It is easy to see that for any x* ∈ [0, 1], Σ_i K_i^h(x*) = 1, and that K_i^h(x*) = 0 when x* > (x_i + x_{i+1})/2 + h or x* ≤ (x_i + x_{i−1})/2 − h. We also have K_i^h(x*) ≤ c/(nh) for some constant c, with

K_{x*}(u) = K(u) when x* ∈ (h, 1 − h); K_{x*}(u) = K_t(u) when x* = th for some t ∈ [0, 1]; and K_{x*}(u) = K_t(−u) when x* = 1 − th for some t ∈ [0, 1].

For all f ∈ Λ^α(M_f) and V ∈ Λ^β(M_V), we bound the mean squared error of V̂ at x*. Suppose α ≤ 1/4; otherwise n^{−4α} < n^{−2β/(1+2β)} for any β. Since for any i,

|δ_i| = |f(x_i) − f(x_{i+1})| ≤ M_f |x_i − x_{i+1}|^α = M_f n^{−α},

the contribution of the δ_i² terms to the bias of V̂(x*) is at most of order n^{−2α}, and hence of order n^{−4α} in the squared error. Note that for any x, y ∈ [0, 1], Taylor's theorem yields

V(y) = Σ_{j=0}^{⌊β⌋} (V^{(j)}(x)/j!) (y − x)^j + R,  with |R| ≤ (M_V/⌊β⌋!) |x − y|^β.

Since the kernel functions have vanishing moments, for j = 1, 2, ..., ⌊β⌋,

|Σ_{i=1}^{n−1} K_i^h(x*)(x_i − x*)^j| ≤ c′ n^{−1}

for some constant c′ > 0, and similarly |Σ_{i=1}^{n−1} K_i^h(x*)(x_{i+1} − x*)^j| ≤ c′ n^{−1}. So the bias of V̂(x*) due to the smoothness of V is bounded, uniformly in x*, by a constant multiple of h^β + n^{−1}, for some constant that does not depend on x*. Then we have

The last inequality is due to the fact that each weight K_i^h(x*) vanishes outside a window of length at most 3h. On the other hand, the variance term can be bounded using the independence of the ε_i, where μ₄ denotes the uniform bound for the fourth moments of the ε_i. Putting the four terms together we have, uniformly for all x* ∈ [0, 1], f ∈ Λ^α(M_f) and V ∈ Λ^β(M_V),

E(V̂(x*) − V(x*))² ≤ C_0 max{n^{−4α}, n^{−2β/(1+2β)}}

for some constant C_0 > 0. This proves (8).

5.2 Lower Bound: Proof of Theorem 2

We shall only prove the lower bound for the pointwise squared error loss; the same proof with minor modifications immediately yields the lower bound under integrated squared error. Note that, to prove inequality (13), we only need to focus on the case where α < 1/4, since otherwise n^{−2β/(1+2β)} is always greater than n^{−4α} for sufficiently large n and then (13) follows directly from (12). For a given 0 < α < 1/4, there exists an integer q such that (q + 1)α > 1. For convenience we take q to be an odd integer. From Lemma 1, there is a positive constant B < ∞ and a symmetric distribution G on [−B, B] such that G and N(0, 1) have the

same first q moments. Let τ_i, i = 1, ..., n, be independent variables with the distribution G. Set θ_n = c n^{−α} for a sufficiently small constant c > 0, f_0 ≡ 0, V_0(x) = 1 + θ_n² and V_1(x) = 1. Let g(x) = 1 − 2n|x| for x ∈ [−1/(2n), 1/(2n)] and 0 otherwise. Define the random function f_1 by

f_1(x) = θ_n Σ_{i=1}^{n} τ_i g(x − x_i).

Then it is easy to see that f_1 ∈ Λ^α(M_f) for all realizations of the τ_i. Moreover, the f_1(x_i) = θ_n τ_i are independent and identically distributed. Now consider testing the following hypotheses,

H_0 : y_i = f_0(x_i) + √(V_0(x_i)) ξ_i  versus  H_1 : y_i = f_1(x_i) + √(V_1(x_i)) ξ_i,

where the ξ_i are independent N(0, 1) variables which are also independent of the τ_i. Denote by P_0 and P_1 the joint distributions of the y_i under H_0 and H_1, respectively. Note that for any estimator V̂ of V, a standard two-point inequality (16) bounds the maximum risk under the two hypotheses from below in terms of (V_0(x*) − V_1(x*))² = θ_n⁴ and the Hellinger affinity ρ(P_0, P_1) between P_0 and P_1. See, for example, Le Cam (1986). Let p_0 and p_1 be the probability density functions of P_0 and P_1 with respect to the Lebesgue measure μ; then ρ(P_0, P_1) = ∫ √(p_0 p_1) dμ. The minimax lower bound (13) follows immediately from the two-point bound (16) if we show that for any n the Hellinger affinity ρ(P_0, P_1) ≥ C for some constant C > 0. (C may depend on q, but does not depend on n.)

Note that under H_0, y_i ~ N(0, 1 + θ_n²) and its density d_0 can be written as d_0(t) = ∫ φ(t − θ_n v) φ(v) dv. Since the y_i are independent, ρ(P_0, P_1) = (∫ √(d_0 d_1) dt)^n, where under H_1 the density of y_i is d_1(t) = ∫ φ(t − θ_n v) G(dv). Note that the Hellinger affinity is bounded below by the total variation affinity,

∫ √(d_0 d_1) dt ≥ 1 − (1/2) ∫ |d_0(t) − d_1(t)| dt.

Taylor expansion yields

φ(t − θ_n v) = φ(t) Σ_{k=0}^{∞} (θ_n v)^k H_k(t)/k!,

where H_k(t) is the corresponding Hermite polynomial. And from the construction of the distribution G,

∫ v^i G(dv) = ∫ v^i φ(v) dv  for i = 0, 1, ..., q.

So the first q terms in the expansions of d_0 and d_1 cancel, and

d_1(t) − d_0(t) = φ(t) Σ_{k > q} (θ_n^k/k!) H_k(t) (∫ v^k G(dv) − ∫ v^k φ(v) dv).  (17)

Suppose q + 1 = 2p for some integer p. It can be seen that the moments of G are bounded in terms of B, and the even moments of the standard normal distribution are the double factorials (2i − 1)!! ≜ (2i − 1) × (2i − 3) × ··· × 3 × 1. So from (17),

and then the sum over k > q can be controlled term by term, using the moment bounds above together with bounds on ∫ φ(t)|H_{2i}(t)| dt for the Hermite polynomials H_{2i}. For sufficiently large n, θ_n < 1/2, and it then follows from the above inequalities that

∫ |d_1(t) − d_0(t)| dt ≤ c θ_n^{q+1}  (18)

for some constant c depending only on q.

Then from (18),

ρ(P_0, P_1) = (∫ √(d_0 d_1) dt)^n ≥ (1 − (1/2) ∫ |d_0 − d_1| dt)^n ≥ (1 − c n^{−α(q+1)})^n,

where c is a constant that only depends on q. Since α(q + 1) ≥ 1, lim_{n→∞} (1 − c n^{−α(q+1)})^n ≥ e^{−c} > 0, and the theorem then follows from (16).

Acknowledgment

TTC would like to thank William Studden for discussions on the finite moment matching problem and for the reference Karlin and Studden (1966).

References

[1] Brown, L. D. and Levine, M. (2006). Variance estimation in nonparametric regression via the difference sequence method. Technical Report.
[2] Fan, J. and Yao, Q. (1998). Efficient estimation of conditional variance functions in stochastic regression. Biometrika 85.
[3] Hall, P. and Carroll, R. J. (1989). Variance function estimation in regression: the effect of estimating the mean. J. Roy. Statist. Soc. Ser. B 51.
[4] Müller, H. G. and Stadtmüller, U. (1987). Estimation of heteroscedasticity in regression analysis. Ann. Statist. 15.
[5] Müller, H. G. and Stadtmüller, U. (1993). On variance function estimation with quadratic forms. J. Stat. Planning and Inf. 35.
[6] Gasser, T. and Müller, H. G. (1979). Kernel estimation of regression functions. In Smoothing Techniques for Curve Estimation. Berlin: Springer. (Lecture Notes in Mathematics No. 757).
[7] Gasser, T. and Müller, H. G. (1984). Estimating regression functions and their derivatives by the kernel method. Scand. J. Statist. 11.

[8] Gasser, T., Müller, H. G. and Mammitzsch, V. (1985). Kernels for nonparametric curve estimation. J. Roy. Statist. Soc. B 47.
[9] Hall, P., Kay, J. and Titterington, D. M. (1990). Asymptotically optimal difference-based estimation of variance in nonparametric regression. Biometrika 77.
[10] Karlin, S. and Studden, W. J. (1966). Tchebycheff Systems: With Applications in Analysis and Statistics. Interscience, New York.
[11] Le Cam, L. (1986). Asymptotic Methods in Statistical Decision Theory. Springer-Verlag, New York.
[12] Munk, A., Bissantz, N., Wagner, T. and Freitag, G. (2005). On difference based variance estimation in nonparametric regression when the covariate is high dimensional. J. Roy. Statist. Soc. B 67.
[13] Neumann, M. (1994). Fully data-driven nonparametric variance estimators. Statistics 25.
[14] Parzen, E. (1962). On estimation of a probability density function and mode. Ann. Math. Statist. 33.
[15] Rice, J. (1984). Bandwidth choice for nonparametric kernel regression. Ann. Statist. 12.
[16] Rosenblatt, M. (1956). Remarks on some nonparametric estimates of a density function. Ann. Math. Statist. 27.
[17] Ruppert, D., Wand, M. P., Holst, U. and Hössjer, O. (1997). Local polynomial variance function estimation. Technometrics 39.
[18] Stone, C. J. (1980). Optimal rates of convergence for nonparametric estimators. Ann. Statist. 8.
[19] von Neumann, J. (1941). Distribution of the ratio of the mean squared successive difference to the variance. Ann. Math. Statist. 12.
[20] von Neumann, J. (1942). A further remark concerning the distribution of the ratio of the mean squared successive difference to the variance. Ann. Math. Statist. 13.


More information

Nonparametric Econometrics

Nonparametric Econometrics Applied Microeconometrics with Stata Nonparametric Econometrics Spring Term 2011 1 / 37 Contents Introduction The histogram estimator The kernel density estimator Nonparametric regression estimators Semi-

More information

L. Brown. Statistics Department, Wharton School University of Pennsylvania

L. Brown. Statistics Department, Wharton School University of Pennsylvania Non-parametric Empirical Bayes and Compound Bayes Estimation of Independent Normal Means Joint work with E. Greenshtein L. Brown Statistics Department, Wharton School University of Pennsylvania lbrown@wharton.upenn.edu

More information

460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses

460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses Statistica Sinica 5(1995), 459-473 OPTIMAL DESIGNS FOR POLYNOMIAL REGRESSION WHEN THE DEGREE IS NOT KNOWN Holger Dette and William J Studden Technische Universitat Dresden and Purdue University Abstract:

More information

Plugin Confidence Intervals in Discrete Distributions

Plugin Confidence Intervals in Discrete Distributions Plugin Confidence Intervals in Discrete Distributions T. Tony Cai Department of Statistics The Wharton School University of Pennsylvania Philadelphia, PA 19104 Abstract The standard Wald interval is widely

More information

Lawrence D. Brown, T. Tony Cai and Anirban DasGupta

Lawrence D. Brown, T. Tony Cai and Anirban DasGupta Statistical Science 2005, Vol. 20, No. 4, 375 379 DOI 10.1214/088342305000000395 Institute of Mathematical Statistics, 2005 Comment: Fuzzy and Randomized Confidence Intervals and P -Values Lawrence D.

More information

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy Computation Of Asymptotic Distribution For Semiparametric GMM Estimators Hidehiko Ichimura Graduate School of Public Policy and Graduate School of Economics University of Tokyo A Conference in honor of

More information

Nonparametric regression with martingale increment errors

Nonparametric regression with martingale increment errors S. Gaïffas (LSTA - Paris 6) joint work with S. Delattre (LPMA - Paris 7) work in progress Motivations Some facts: Theoretical study of statistical algorithms requires stationary and ergodicity. Concentration

More information

Bias Correction and Higher Order Kernel Functions

Bias Correction and Higher Order Kernel Functions Bias Correction and Higher Order Kernel Functions Tien-Chung Hu 1 Department of Mathematics National Tsing-Hua University Hsinchu, Taiwan Jianqing Fan Department of Statistics University of North Carolina

More information

University of California, Berkeley

University of California, Berkeley University of California, Berkeley U.C. Berkeley Division of Biostatistics Working Paper Series Year 24 Paper 153 A Note on Empirical Likelihood Inference of Residual Life Regression Ying Qing Chen Yichuan

More information

Mark Gordon Low

Mark Gordon Low Mark Gordon Low lowm@wharton.upenn.edu Address Department of Statistics, The Wharton School University of Pennsylvania 3730 Walnut Street Philadelphia, PA 19104-6340 lowm@wharton.upenn.edu Education Ph.D.

More information

A nonparametric method of multi-step ahead forecasting in diffusion processes

A nonparametric method of multi-step ahead forecasting in diffusion processes A nonparametric method of multi-step ahead forecasting in diffusion processes Mariko Yamamura a, Isao Shoji b a School of Pharmacy, Kitasato University, Minato-ku, Tokyo, 108-8641, Japan. b Graduate School

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

Local linear multiple regression with variable. bandwidth in the presence of heteroscedasticity

Local linear multiple regression with variable. bandwidth in the presence of heteroscedasticity Local linear multiple regression with variable bandwidth in the presence of heteroscedasticity Azhong Ye 1 Rob J Hyndman 2 Zinai Li 3 23 January 2006 Abstract: We present local linear estimator with variable

More information

University of Mannheim, West Germany

University of Mannheim, West Germany - - XI,..., - - XI,..., ARTICLES CONVERGENCE OF BAYES AND CREDIBILITY PREMIUMS BY KLAUS D. SCHMIDT University of Mannheim, West Germany ABSTRACT For a risk whose annual claim amounts are conditionally

More information

Time Series and Forecasting Lecture 4 NonLinear Time Series

Time Series and Forecasting Lecture 4 NonLinear Time Series Time Series and Forecasting Lecture 4 NonLinear Time Series Bruce E. Hansen Summer School in Economics and Econometrics University of Crete July 23-27, 2012 Bruce Hansen (University of Wisconsin) Foundations

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

Nonparametric Regression

Nonparametric Regression Adaptive Variance Function Estimation in Heteroscedastic Nonparametric Regression T. Tony Cai and Lie Wang Abstract We consider a wavelet thresholding approach to adaptive variance function estimation

More information

Discussion of the paper Inference for Semiparametric Models: Some Questions and an Answer by Bickel and Kwon

Discussion of the paper Inference for Semiparametric Models: Some Questions and an Answer by Bickel and Kwon Discussion of the paper Inference for Semiparametric Models: Some Questions and an Answer by Bickel and Kwon Jianqing Fan Department of Statistics Chinese University of Hong Kong AND Department of Statistics

More information

Statistical Inference Using Maximum Likelihood Estimation and the Generalized Likelihood Ratio

Statistical Inference Using Maximum Likelihood Estimation and the Generalized Likelihood Ratio \"( Statistical Inference Using Maximum Likelihood Estimation and the Generalized Likelihood Ratio When the True Parameter Is on the Boundary of the Parameter Space by Ziding Feng 1 and Charles E. McCulloch2

More information

A Novel Nonparametric Density Estimator

A Novel Nonparametric Density Estimator A Novel Nonparametric Density Estimator Z. I. Botev The University of Queensland Australia Abstract We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with

More information

Information Measure Estimation and Applications: Boosting the Effective Sample Size from n to n ln n

Information Measure Estimation and Applications: Boosting the Effective Sample Size from n to n ln n Information Measure Estimation and Applications: Boosting the Effective Sample Size from n to n ln n Jiantao Jiao (Stanford EE) Joint work with: Kartik Venkat Yanjun Han Tsachy Weissman Stanford EE Tsinghua

More information

Input-Dependent Estimation of Generalization Error under Covariate Shift

Input-Dependent Estimation of Generalization Error under Covariate Shift Statistics & Decisions, vol.23, no.4, pp.249 279, 25. 1 Input-Dependent Estimation of Generalization Error under Covariate Shift Masashi Sugiyama (sugi@cs.titech.ac.jp) Department of Computer Science,

More information

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals Acta Applicandae Mathematicae 78: 145 154, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 145 Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals M.

More information

Introduction to Regression

Introduction to Regression Introduction to Regression p. 1/97 Introduction to Regression Chad Schafer cschafer@stat.cmu.edu Carnegie Mellon University Introduction to Regression p. 1/97 Acknowledgement Larry Wasserman, All of Nonparametric

More information

Chapter 3: Unbiased Estimation Lecture 22: UMVUE and the method of using a sufficient and complete statistic

Chapter 3: Unbiased Estimation Lecture 22: UMVUE and the method of using a sufficient and complete statistic Chapter 3: Unbiased Estimation Lecture 22: UMVUE and the method of using a sufficient and complete statistic Unbiased estimation Unbiased or asymptotically unbiased estimation plays an important role in

More information

University, Tempe, Arizona, USA b Department of Mathematics and Statistics, University of New. Mexico, Albuquerque, New Mexico, USA

University, Tempe, Arizona, USA b Department of Mathematics and Statistics, University of New. Mexico, Albuquerque, New Mexico, USA This article was downloaded by: [University of New Mexico] On: 27 September 2012, At: 22:13 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered

More information

O Combining cross-validation and plug-in methods - for kernel density bandwidth selection O

O Combining cross-validation and plug-in methods - for kernel density bandwidth selection O O Combining cross-validation and plug-in methods - for kernel density selection O Carlos Tenreiro CMUC and DMUC, University of Coimbra PhD Program UC UP February 18, 2011 1 Overview The nonparametric problem

More information

Wallace R. Blischke, Research Associate Biometrics Unit New York State College of Agriculture Cornell University Ithaca, New York

Wallace R. Blischke, Research Associate Biometrics Unit New York State College of Agriculture Cornell University Ithaca, New York LEAST SQUARES ESTIMATORS OF TWO INTERSECTING LINES Technical Report No. 7 Department of Navy Office of Naval Research Contract No. Nonr-401(39) Project No. (NR 042-212) Wallace R. Blischke, Research Associate

More information

A COUNTEREXAMPLE TO AN ENDPOINT BILINEAR STRICHARTZ INEQUALITY TERENCE TAO. t L x (R R2 ) f L 2 x (R2 )

A COUNTEREXAMPLE TO AN ENDPOINT BILINEAR STRICHARTZ INEQUALITY TERENCE TAO. t L x (R R2 ) f L 2 x (R2 ) Electronic Journal of Differential Equations, Vol. 2006(2006), No. 5, pp. 6. ISSN: 072-669. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu (login: ftp) A COUNTEREXAMPLE

More information

DESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA

DESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA Statistica Sinica 18(2008), 515-534 DESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA Kani Chen 1, Jianqing Fan 2 and Zhezhen Jin 3 1 Hong Kong University of Science and Technology,

More information

Lecture 35: December The fundamental statistical distances

Lecture 35: December The fundamental statistical distances 36-705: Intermediate Statistics Fall 207 Lecturer: Siva Balakrishnan Lecture 35: December 4 Today we will discuss distances and metrics between distributions that are useful in statistics. I will be lose

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

Wavelet Shrinkage for Nonequispaced Samples

Wavelet Shrinkage for Nonequispaced Samples University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 1998 Wavelet Shrinkage for Nonequispaced Samples T. Tony Cai University of Pennsylvania Lawrence D. Brown University

More information

Local Whittle Likelihood Estimators and Tests for non-gaussian Linear Processes

Local Whittle Likelihood Estimators and Tests for non-gaussian Linear Processes Local Whittle Likelihood Estimators and Tests for non-gaussian Linear Processes By Tomohito NAITO, Kohei ASAI and Masanobu TANIGUCHI Department of Mathematical Sciences, School of Science and Engineering,

More information

ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS. Abstract. We introduce a wide class of asymmetric loss functions and show how to obtain

ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS. Abstract. We introduce a wide class of asymmetric loss functions and show how to obtain ON STATISTICAL INFERENCE UNDER ASYMMETRIC LOSS FUNCTIONS Michael Baron Received: Abstract We introduce a wide class of asymmetric loss functions and show how to obtain asymmetric-type optimal decision

More information

Independence of some multiple Poisson stochastic integrals with variable-sign kernels

Independence of some multiple Poisson stochastic integrals with variable-sign kernels Independence of some multiple Poisson stochastic integrals with variable-sign kernels Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological

More information

A New Method for Varying Adaptive Bandwidth Selection

A New Method for Varying Adaptive Bandwidth Selection IEEE TRASACTIOS O SIGAL PROCESSIG, VOL. 47, O. 9, SEPTEMBER 1999 2567 TABLE I SQUARE ROOT MEA SQUARED ERRORS (SRMSE) OF ESTIMATIO USIG THE LPA AD VARIOUS WAVELET METHODS A ew Method for Varying Adaptive

More information

Support Vector Method for Multivariate Density Estimation

Support Vector Method for Multivariate Density Estimation Support Vector Method for Multivariate Density Estimation Vladimir N. Vapnik Royal Halloway College and AT &T Labs, 100 Schultz Dr. Red Bank, NJ 07701 vlad@research.att.com Sayan Mukherjee CBCL, MIT E25-201

More information

A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS

A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS Statistica Sinica 20 2010, 365-378 A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS Liang Peng Georgia Institute of Technology Abstract: Estimating tail dependence functions is important for applications

More information

IEOR 165 Lecture 7 1 Bias-Variance Tradeoff

IEOR 165 Lecture 7 1 Bias-Variance Tradeoff IEOR 165 Lecture 7 Bias-Variance Tradeoff 1 Bias-Variance Tradeoff Consider the case of parametric regression with β R, and suppose we would like to analyze the error of the estimate ˆβ in comparison to

More information

ON DIVISION ALGEBRAS*

ON DIVISION ALGEBRAS* ON DIVISION ALGEBRAS* BY J. H. M. WEDDERBURN 1. The object of this paper is to develop some of the simpler properties of division algebras, that is to say, linear associative algebras in which division

More information

THE CLASSIFICATION OF PLANAR MONOMIALS OVER FIELDS OF PRIME SQUARE ORDER

THE CLASSIFICATION OF PLANAR MONOMIALS OVER FIELDS OF PRIME SQUARE ORDER THE CLASSIFICATION OF PLANAR MONOMIALS OVER FIELDS OF PRIME SQUARE ORDER ROBERT S COULTER Abstract Planar functions were introduced by Dembowski and Ostrom in [3] to describe affine planes possessing collineation

More information

A combinatorial problem related to Mahler s measure

A combinatorial problem related to Mahler s measure A combinatorial problem related to Mahler s measure W. Duke ABSTRACT. We give a generalization of a result of Myerson on the asymptotic behavior of norms of certain Gaussian periods. The proof exploits

More information

EXISTENCE AND UNIQUENESS OF SOLUTIONS FOR THIRD ORDER NONLINEAR BOUNDARY VALUE PROBLEMS

EXISTENCE AND UNIQUENESS OF SOLUTIONS FOR THIRD ORDER NONLINEAR BOUNDARY VALUE PROBLEMS Tόhoku Math. J. 44 (1992), 545-555 EXISTENCE AND UNIQUENESS OF SOLUTIONS FOR THIRD ORDER NONLINEAR BOUNDARY VALUE PROBLEMS ZHAO WEILI (Received October 31, 1991, revised March 25, 1992) Abstract. In this

More information

A simple bootstrap method for constructing nonparametric confidence bands for functions

A simple bootstrap method for constructing nonparametric confidence bands for functions A simple bootstrap method for constructing nonparametric confidence bands for functions Peter Hall Joel Horowitz The Institute for Fiscal Studies Department of Economics, UCL cemmap working paper CWP4/2

More information

SMOOTHNESS PROPERTIES OF SOLUTIONS OF CAPUTO- TYPE FRACTIONAL DIFFERENTIAL EQUATIONS. Kai Diethelm. Abstract

SMOOTHNESS PROPERTIES OF SOLUTIONS OF CAPUTO- TYPE FRACTIONAL DIFFERENTIAL EQUATIONS. Kai Diethelm. Abstract SMOOTHNESS PROPERTIES OF SOLUTIONS OF CAPUTO- TYPE FRACTIONAL DIFFERENTIAL EQUATIONS Kai Diethelm Abstract Dedicated to Prof. Michele Caputo on the occasion of his 8th birthday We consider ordinary fractional

More information

Integrated Likelihood Estimation in Semiparametric Regression Models. Thomas A. Severini Department of Statistics Northwestern University

Integrated Likelihood Estimation in Semiparametric Regression Models. Thomas A. Severini Department of Statistics Northwestern University Integrated Likelihood Estimation in Semiparametric Regression Models Thomas A. Severini Department of Statistics Northwestern University Joint work with Heping He, University of York Introduction Let Y

More information

Introduction to Regression

Introduction to Regression Introduction to Regression David E Jones (slides mostly by Chad M Schafer) June 1, 2016 1 / 102 Outline General Concepts of Regression, Bias-Variance Tradeoff Linear Regression Nonparametric Procedures

More information

WEIGHTED QUANTILE REGRESSION THEORY AND ITS APPLICATION. Abstract

WEIGHTED QUANTILE REGRESSION THEORY AND ITS APPLICATION. Abstract Journal of Data Science,17(1). P. 145-160,2019 DOI:10.6339/JDS.201901_17(1).0007 WEIGHTED QUANTILE REGRESSION THEORY AND ITS APPLICATION Wei Xiong *, Maozai Tian 2 1 School of Statistics, University of

More information

STEIN-TYPE IMPROVEMENTS OF CONFIDENCE INTERVALS FOR THE GENERALIZED VARIANCE

STEIN-TYPE IMPROVEMENTS OF CONFIDENCE INTERVALS FOR THE GENERALIZED VARIANCE Ann. Inst. Statist. Math. Vol. 43, No. 2, 369-375 (1991) STEIN-TYPE IMPROVEMENTS OF CONFIDENCE INTERVALS FOR THE GENERALIZED VARIANCE SANAT K. SARKAR Department of Statistics, Temple University, Philadelphia,

More information

Smooth simultaneous confidence bands for cumulative distribution functions

Smooth simultaneous confidence bands for cumulative distribution functions Journal of Nonparametric Statistics, 2013 Vol. 25, No. 2, 395 407, http://dx.doi.org/10.1080/10485252.2012.759219 Smooth simultaneous confidence bands for cumulative distribution functions Jiangyan Wang

More information

MMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only

MMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only MMSE Dimension Yihong Wu Department of Electrical Engineering Princeton University Princeton, NJ 08544, USA Email: yihongwu@princeton.edu Sergio Verdú Department of Electrical Engineering Princeton University

More information

THE REGULARITY OF MAPPINGS WITH A CONVEX POTENTIAL LUIS A. CAFFARELLI

THE REGULARITY OF MAPPINGS WITH A CONVEX POTENTIAL LUIS A. CAFFARELLI JOURNAL OF THE AMERICAN MATHEMATICAL SOCIETY Volume 5, Number I, Jaouary 1992 THE REGULARITY OF MAPPINGS WITH A CONVEX POTENTIAL LUIS A. CAFFARELLI In this work, we apply the techniques developed in [Cl]

More information

Can we do statistical inference in a non-asymptotic way? 1

Can we do statistical inference in a non-asymptotic way? 1 Can we do statistical inference in a non-asymptotic way? 1 Guang Cheng 2 Statistics@Purdue www.science.purdue.edu/bigdata/ ONR Review Meeting@Duke Oct 11, 2017 1 Acknowledge NSF, ONR and Simons Foundation.

More information

ST5215: Advanced Statistical Theory

ST5215: Advanced Statistical Theory Department of Statistics & Applied Probability Wednesday, October 19, 2011 Lecture 17: UMVUE and the first method of derivation Estimable parameters Let ϑ be a parameter in the family P. If there exists

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Modelling Non-linear and Non-stationary Time Series

Modelling Non-linear and Non-stationary Time Series Modelling Non-linear and Non-stationary Time Series Chapter 2: Non-parametric methods Henrik Madsen Advanced Time Series Analysis September 206 Henrik Madsen (02427 Adv. TS Analysis) Lecture Notes September

More information

Optimal rates of convergence for estimating Toeplitz covariance matrices

Optimal rates of convergence for estimating Toeplitz covariance matrices Probab. Theory Relat. Fields 2013) 156:101 143 DOI 10.1007/s00440-012-0422-7 Optimal rates of convergence for estimating Toeplitz covariance matrices T. Tony Cai Zhao Ren Harrison H. Zhou Received: 17

More information

VARIANCE ESTIMATION AND KRIGING PREDICTION FOR A CLASS OF NON-STATIONARY SPATIAL MODELS

VARIANCE ESTIMATION AND KRIGING PREDICTION FOR A CLASS OF NON-STATIONARY SPATIAL MODELS Statistica Sinica 25 (2015), 135-149 doi:http://dx.doi.org/10.5705/ss.2013.205w VARIANCE ESTIMATION AND KRIGING PREDICTION FOR A CLASS OF NON-STATIONARY SPATIAL MODELS Shu Yang and Zhengyuan Zhu Iowa State

More information

Smooth nonparametric estimation of a quantile function under right censoring using beta kernels

Smooth nonparametric estimation of a quantile function under right censoring using beta kernels Smooth nonparametric estimation of a quantile function under right censoring using beta kernels Chanseok Park 1 Department of Mathematical Sciences, Clemson University, Clemson, SC 29634 Short Title: Smooth

More information

Single Index Quantile Regression for Heteroscedastic Data

Single Index Quantile Regression for Heteroscedastic Data Single Index Quantile Regression for Heteroscedastic Data E. Christou M. G. Akritas Department of Statistics The Pennsylvania State University SMAC, November 6, 2015 E. Christou, M. G. Akritas (PSU) SIQR

More information

A Bootstrap Test for Conditional Symmetry

A Bootstrap Test for Conditional Symmetry ANNALS OF ECONOMICS AND FINANCE 6, 51 61 005) A Bootstrap Test for Conditional Symmetry Liangjun Su Guanghua School of Management, Peking University E-mail: lsu@gsm.pku.edu.cn and Sainan Jin Guanghua School

More information

Nonparametric Methods

Nonparametric Methods Nonparametric Methods Michael R. Roberts Department of Finance The Wharton School University of Pennsylvania July 28, 2009 Michael R. Roberts Nonparametric Methods 1/42 Overview Great for data analysis

More information

Some Theories about Backfitting Algorithm for Varying Coefficient Partially Linear Model

Some Theories about Backfitting Algorithm for Varying Coefficient Partially Linear Model Some Theories about Backfitting Algorithm for Varying Coefficient Partially Linear Model 1. Introduction Varying-coefficient partially linear model (Zhang, Lee, and Song, 2002; Xia, Zhang, and Tong, 2004;

More information

On a Nonparametric Notion of Residual and its Applications

On a Nonparametric Notion of Residual and its Applications On a Nonparametric Notion of Residual and its Applications Bodhisattva Sen and Gábor Székely arxiv:1409.3886v1 [stat.me] 12 Sep 2014 Columbia University and National Science Foundation September 16, 2014

More information

Fast learning rates for plug-in classifiers under the margin condition

Fast learning rates for plug-in classifiers under the margin condition Fast learning rates for plug-in classifiers under the margin condition Jean-Yves Audibert 1 Alexandre B. Tsybakov 2 1 Certis ParisTech - Ecole des Ponts, France 2 LPMA Université Pierre et Marie Curie,

More information