On variable bandwidth kernel density estimation


JSM 2014 - Section on Nonparametric Statistics

Janet Nakarmi and Hailin Sang
Department of Mathematics, University of Mississippi, University, MS 38677, USA

Abstract

In this paper we study the ideal variable bandwidth kernel estimator introduced by McKay [7, 8] and the plug-in practical version of the variable bandwidth kernel estimator with two sequences of bandwidths as in Giné and Sang [4]. The dominating terms of the variance of the true estimator are separated from the other terms in the variance decomposition. Based on the exact formula for the bias and these dominating terms, we develop the optimal bandwidth selection for this variable kernel estimator.

Key Words: bandwidth selection, variable kernel density estimation

1. Introduction

Suppose that $X_i$, $i \in \mathbb{N}$, are independent identically distributed (i.i.d.) observations with density function $f(t)$, $t \in \mathbb{R}$. Let $K$ be a symmetric probability kernel satisfying some differentiability properties. The classical kernel density estimator is
\[
\hat f(t; h_n) = \frac{1}{n h_n} \sum_{i=1}^n K\Big(\frac{t - X_i}{h_n}\Big), \tag{1}
\]
where $h_n$ is the bandwidth sequence with $h_n \to 0$ and $n h_n \to \infty$, and its properties have been well studied in the literature. The variance of (1) has order $O(1/(n h_n))$ and the bias has order $h_n^2$ if $f$ has a bounded second order derivative. In this paper, we study the following variable bandwidth kernel density estimator proposed by McKay [7, 8]:
\[
\tilde f(t; h_n) = \frac{1}{n h_n} \sum_{i=1}^n \alpha(f(X_i))\, K\Big(\frac{\alpha(f(X_i))\,(t - X_i)}{h_n}\Big), \tag{2}
\]
where $\alpha(s)$ is a smooth function of the form
\[
\alpha(s) := c\, p(\sqrt{s}/c). \tag{3}
\]
The function $p$ is at least four times differentiable and satisfies $p(x) \ge 1$ for all $x$ and $p(x) = x$ for all $x \ge t_0$ for some $t_0 < \infty$, and $0 < c < \infty$ is a fixed number. (2) is a variable bandwidth kernel density estimator since the bandwidth has the form $h_n/\alpha(f(X_i))$ if we rewrite (2) in the form of the classical estimator (1).
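To make the form of (2) and (3) concrete, the following is a minimal numerical sketch of the ideal (clairvoyant) estimator, assuming the true density $f$ is known. The Gaussian kernel, the value of $c$, the simple clipping choice $p(x) = \max(x, 1)$ (which is not smooth, unlike the $p$ required above) and the function names are illustrative assumptions, not specifications from the paper.

```python
import numpy as np

def alpha(s, c=0.5):
    """Clipping function alpha(s) = c * p(sqrt(s)/c) with the illustrative choice p(x) = max(x, 1).

    The paper requires a smooth nondecreasing p with p(x) >= 1 and p(x) = x for large x;
    max(x, 1) only mimics that shape and is not differentiable at x = 1.
    """
    return c * np.maximum(np.sqrt(s) / c, 1.0)

def ideal_vb_kde(t, data, f_true, h, c=0.5):
    """Ideal (clairvoyant) variable bandwidth estimator (2); it needs the true density f."""
    t = np.atleast_1d(t)[:, None]                    # evaluation points, shape (m, 1)
    a = alpha(f_true(data), c)                       # alpha(f(X_i)), shape (n,)
    u = a * (t - data) / h                           # scaled kernel arguments, shape (m, n)
    kernel = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)  # Gaussian kernel (illustrative choice)
    return (a * kernel).mean(axis=1) / h

# usage: standard normal data, estimate at a few points
rng = np.random.default_rng(0)
X = rng.standard_normal(500)
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
h = len(X) ** (-1.0 / 9)                             # bias of order h^4 suggests h of order n^(-1/9)
print(ideal_vb_kde(np.array([-1.0, 0.0, 1.0]), X, f, h))
```

With this choice, $\alpha(f(X_i)) = \max(\sqrt{f(X_i)}, c)$, so observations in low density regions have their bandwidth capped at $h/c$, which is the clipping effect discussed below.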

The study of variable bandwidth kernel density estimation goes back to Abramson [1]. He proposed the following estimator
\[
f_A(t; h_n) = \frac{1}{n} \sum_{i=1}^n \frac{\gamma(t, X_i)}{h_n}\, K\Big(\frac{\gamma(t, X_i)\,(t - X_i)}{h_n}\Big), \tag{4}
\]
where $\gamma(t, s) = \big(f(s) \vee f(t)/10\big)^{1/2}$. The bandwidth $h_n/\gamma(t, X_i)$ at each observation $X_i$ is inversely proportional to $f^{1/2}(X_i)$ if $f(X_i) \ge f(t)/10$. Notice that (2) also has the square root law, since $\alpha(f(X_i)) = f^{1/2}(X_i)$ if $f(X_i) \ge t_0^2 c^2$, by the definition of the function $p$. The estimators (2) and (4) have a clipping procedure built into $\alpha(s)$ and $\gamma(t, s)$, since these functions keep the true bandwidths bounded: $h_n/\alpha(f(X_i)) \le h_n/c$ and $h_n/\gamma(t, X_i) \le 10^{1/2} h_n/f^{1/2}(t)$. The clipping procedure prevents an observation $X_i$ that is too far away from $t$ from contributing too much to the density estimation at $t$. Abramson showed that this square root law improves the bias from the order $h_n^2$ to the order $h_n^4$ for the estimator (4), while at the same time keeping the variance at the order $1/(n h_n)$, provided $f(t) \neq 0$ and $f$ has a fourth order continuous derivative at $t$. However, the variable bandwidth estimator (4) is not the density function of a true probability measure, since the integral of $f_A(t; h_n)$ over $t$ is not 1 (it would be if $\gamma$ depended only on $s$). Terrell and Scott [11] and McKay [8] showed that the following modification of the Abramson estimator without the clipping filter $f(t)/10$ on $f^{1/2}(X_i)$, studied in Hall and Marron [5],
\[
f_{HM}(t; h_n) = \frac{1}{n} \sum_{i=1}^n \frac{f^{1/2}(X_i)}{h_n}\, K\Big(\frac{f^{1/2}(X_i)\,(t - X_i)}{h_n}\Big), \tag{5}
\]
which has integral 1 and hence is a true probability density, may have bias of order much larger than $h_n^4$. Therefore the clipping is necessary for such bias reduction. Hall, Hu and Marron [6] then proposed the estimator
\[
f_{HHM}(t; h_n) = \frac{1}{n h_n} \sum_{i=1}^n f^{1/2}(X_i)\, K\Big(\frac{f^{1/2}(X_i)\,(t - X_i)}{h_n}\Big)\, I\big(|t - X_i| < h_n B\big), \tag{6}
\]
where $B$ is a fixed constant; see also Novak [10] for a similar estimator. This estimator is nonnegative and achieves the desired bias reduction but, like Abramson's, it does not integrate to 1. In conclusion, it seems that the estimator (2) has all the advantages: it is a true density function with the square root law and a smooth clipping procedure.

But notice that this estimator, like all the other variable bandwidth kernel density estimators above, cannot be applied in practice, since they all involve the density function $f$ being estimated. Therefore they are called ideal estimators in the literature. Hall and Marron [5] studied a true density estimator
\[
\hat f_{HM}(t; h_{1,n}, h_{2,n}) = \frac{1}{n} \sum_{i=1}^n \frac{\hat f^{1/2}(X_i; h_{2,n})}{h_{1,n}}\, K\Big(\frac{\hat f^{1/2}(X_i; h_{2,n})\,(t - X_i)}{h_{1,n}}\Big)
\]
obtained by plugging in the classical estimator (1) as the pilot estimator. Here the bandwidth sequence $h_{1,n}$ plays the role of $h_n$ in (5), and the bandwidth sequence $h_{2,n}$ is applied in the classical kernel density estimator (1). They took a Taylor expansion of $K\big(\frac{t - X_i}{h_{1,n}} \hat f^{1/2}(X_i; h_{2,n})\big)$ at $K\big(\frac{t - X_i}{h_{1,n}} f^{1/2}(X_i)\big)$ and studied the asymptotics of the true estimator. By applying this Taylor decomposition, McKay [8] studied convergence of the plug-in true estimator of (2) in probability and pointwise. Giné and Sang [3, 4] studied plug-in true estimators of (6) and (2) for one-dimensional and $d$-dimensional observations. They proved that the discrepancy between the true estimator and the true value converges uniformly over a data adaptive region at the rate $O\big((\log n/n)^{4/(8+d)}\big)$ by applying empirical process techniques. The true estimator in Giné and Sang [4] has the form
\[
\hat f(t; h_{1,n}, h_{2,n}) = \frac{1}{n h_{1,n}} \sum_{i=1}^n \alpha\big(\hat f(X_i; h_{2,n})\big)\, K\Big(\frac{t - X_i}{h_{1,n}}\,\alpha\big(\hat f(X_i; h_{2,n})\big)\Big), \tag{7}
\]
and they also studied the uniform convergence, in the almost sure sense, of true estimators with bias of order $h_n^6$.
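For reference, here is a hedged sketch of the plug-in estimator (7): a classical pilot estimate $\hat f(\cdot; h_{2,n})$ replaces $f$ inside $\alpha$, so the procedure uses only the data. The Gaussian kernel, the clipping constant, the simple $p(x)=\max(x,1)$ and the helper names `pilot_kde`/`plugin_vb_kde` are my own illustrative choices, not functions or settings from the paper; the bandwidth orders are plausible choices given the fourth order bias of (2) and the second order bias of the pilot.

```python
import numpy as np

def gauss(u):
    """Gaussian kernel (the theorems later assume a compactly supported K; this is illustrative)."""
    return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)

def pilot_kde(x, data, h2):
    """Classical estimator (1), used as the pilot f-hat(.; h_{2,n})."""
    x = np.atleast_1d(x)[:, None]
    return gauss((x - data) / h2).mean(axis=1) / h2

def plugin_vb_kde(t, data, h1, h2, c=0.5):
    """Plug-in variable bandwidth estimator (7) with two bandwidth sequences."""
    a = np.maximum(np.sqrt(pilot_kde(data, data, h2)), c)  # alpha(f-hat(X_i; h2)) with p(x)=max(x,1)
    t = np.atleast_1d(t)[:, None]
    return (a * gauss(a * (t - data) / h1)).mean(axis=1) / h1

# usage
rng = np.random.default_rng(1)
X = rng.standard_normal(1000)
n = len(X)
h1, h2 = n ** (-1 / 9), n ** (-1 / 5)
print(plugin_vb_kde(np.linspace(-2.0, 2.0, 5), X, h1, h2))
```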

In Section 2, we provide the decomposition of some important terms which we use in later sections. In Section 3, we study the decomposition of the variance of the true estimator (7) and find the exact formula of the dominating terms in the variance decomposition. Moreover, we provide a theoretical formula for the optimal bandwidth in Section 4.

2. Preliminary decomposition

For convenience, we adopt the notation of Giné and Sang [4] for the Taylor series expansion of $K\big(\frac{t - X_i}{h_{1,n}}\alpha(\hat f(X_i; h_{2,n}))\big)$ at $K\big(\frac{t - X_i}{h_{1,n}}\alpha(f(X_i))\big)$. We give the statements without detailed explanation; for details, readers are referred to Giné and Sang [4]. $P_C$ will denote the set of all probability densities on $\mathbb{R}$ that are uniformly continuous and bounded by $C < \infty$, and $P_{C,k}$ will denote the set of densities on $\mathbb{R}$ which, together with their derivatives of order $k$ or lower, are bounded by $C < \infty$ and uniformly continuous.

Define $\delta(t) = \delta(t, h_{2,n})$ by the equation
\[
\delta(t) = \frac{\alpha(\hat f(t; h_{2,n})) - \alpha(f(t))}{\alpha(f(t))}.
\]
Then
\[
\alpha(\hat f(t; h_{2,n})) = \alpha(f(t))\,\big(1 + \delta(t)\big) \tag{8}
\]
and
\[
|\delta(t)| \le \frac{B}{c^2}\,\big|\hat f(t; h_{2,n}) - f(t)\big|
\]
for a constant $B$ that depends only on $p$. Although we study the asymptotics of the true estimator pointwise, the uniform asymptotic behavior of the quantity $\delta$ is needed in the later analysis. Define $D(t; h_{2,n}) = \hat f(t; h_{2,n}) - E\hat f(t; h_{2,n})$ and $b(t; h_{2,n}) = E\hat f(t; h_{2,n}) - f(t)$. Note that, for $f \in P_{C,2}$,
\[
\sup_{t\in\mathbb{R}}|b(t; h_{2,n})| = O(h_{2,n}^2), \tag{9}
\]
and, by Giné and Guillou [2], for $f \in P_C$,
\[
\sup_{t\in\mathbb{R}}|D(t; h_{2,n})| = O\Big(\sqrt{\frac{|\log h_{2,n}|}{n h_{2,n}}}\Big) \quad a.s.
\]
Denote
\[
U := \sqrt{\frac{|\log h_{2,n}|}{n h_{2,n}}} + h_{2,n}^2.
\]
Then, for $f \in P_{C,2}$, we have
\[
\sup_{t\in\mathbb{R}}\big|\hat f(t; h_{2,n}) - f(t)\big| \le \sup_{t\in\mathbb{R}}|D(t; h_{2,n})| + \sup_{t\in\mathbb{R}}|b(t; h_{2,n})| = O(U). \tag{10}
\]

Consequently, for $f \in P_{C,2}$,
\[
\sup_{t\in\mathbb{R}}|\delta(t)| = O(U) \quad a.s. \tag{11}
\]
By the definition of $\delta(t)$, we also have
\[
\delta(t) = \frac{\alpha'(f(t))\,\big[\hat f(t; h_{2,n}) - f(t)\big]}{\alpha(f(t))} + \frac{\alpha''(\eta)\,\big[\hat f(t; h_{2,n}) - f(t)\big]^2}{2\,\alpha(f(t))}, \tag{12}
\]
where $\eta = \eta(t) \ge 0$ is between $\hat f(t; h_{2,n})$ and $f(t)$. Notice that $|\alpha''(\eta(t))| \le c^{-3} A$ for some constant $A$ which depends only on the clipping function $p$.

Set $L_1(t) = t K'(t)$ and $L(t) = K(t) + t K'(t)$, $t \in \mathbb{R}$. We then have the following Taylor series expansion:
\[
K\Big(\frac{t - X_i}{h_{1,n}}\,\alpha(\hat f(X_i; h_{2,n}))\Big) = K\Big(\frac{t - X_i}{h_{1,n}}\,\alpha(f(X_i))\Big) + L_1\Big(\frac{t - X_i}{h_{1,n}}\,\alpha(f(X_i))\Big)\,\delta(X_i) + \delta_2(t, X_i), \tag{13}
\]
where
\[
\delta_2(t, X_i) = \frac{1}{2}\,K''(\xi)\,\Big(\frac{t - X_i}{h_{1,n}}\Big)^2\,\alpha^2(f(X_i))\,\delta^2(X_i), \tag{14}
\]
$\xi$ being a random number between $\frac{t - X_i}{h_{1,n}}\alpha(f(X_i))$ and $\frac{t - X_i}{h_{1,n}}\alpha(f(X_i)) + \frac{t - X_i}{h_{1,n}}\alpha(f(X_i))\,\delta(X_i)$. By the analysis in Giné and Sang [4],
\[
\sup_{t, x\in\mathbb{R}}|\delta_2(t, x)| = O\big(\|\hat f(\cdot\,; h_{2,n}) - f\|_\infty^2\big) = O(U^2) \quad \text{if } f \in P_{C,2}. \tag{15}
\]
Therefore,
\begin{align*}
\hat f(t; h_{1,n}, h_{2,n}) = \tilde f(t; h_{1,n}) &+ \frac{1}{n h_{1,n}}\sum_{i=1}^n L\Big(\frac{t - X_i}{h_{1,n}}\alpha(f(X_i))\Big)\,\alpha(f(X_i))\,\delta(X_i) + \frac{1}{n h_{1,n}}\sum_{i=1}^n \alpha(f(X_i))\,\delta_2(t, X_i)\\
&+ \frac{1}{n h_{1,n}}\sum_{i=1}^n L_1\Big(\frac{t - X_i}{h_{1,n}}\alpha(f(X_i))\Big)\,\delta^2(X_i)\,\alpha(f(X_i)) + \frac{1}{n h_{1,n}}\sum_{i=1}^n \alpha(f(X_i))\,\delta(X_i)\,\delta_2(t, X_i).
\end{align*}

3. Variance of the true estimator

To ease the notation, we denote
\[
K_t(x) = K\Big(\frac{t - x}{h_{1,n}}\alpha(f(x))\Big), \quad L_t(x) = L\Big(\frac{t - x}{h_{1,n}}\alpha(f(x))\Big), \quad L_{1,t}(x) = L_1\Big(\frac{t - x}{h_{1,n}}\alpha(f(x))\Big),
\]
\[
M(x) = L_t(x)\,\alpha'(f(x)) = L\Big(\frac{t - x}{h_{1,n}}\alpha(f(x))\Big)\,\alpha'(f(x)), \quad\text{and}\quad M_i = M(X_i).
\]

Theorem 1. Let $X_1, \ldots, X_n$ be a random sample of size $n$ with density function $f(t)$, $t \in \mathbb{R}$, and let $\hat f(t; h_{1,n}, h_{2,n})$, defined as in (7), be an estimator of $f(t)$. Assume that the kernel $K$ on $\mathbb{R}$ is a non-negative symmetric function with support contained in $[-T, T]$, $T < \infty$, which integrates to 1 and has bounded fourth order derivatives. The function $\alpha$ in the estimator $\hat f(t; h_{1,n}, h_{2,n})$ is defined in (3) for a nondecreasing clipping function $p$ ($p(s) \ge 1$ for all $s$ and $p(s) = s$ for all $s \ge t_0$) with five bounded and uniformly continuous derivatives, and constant $c > 0$. Then
\[
\mathrm{Var}\big(\hat f(t; h_{1,n}, h_{2,n})\big) = \frac{V_1 + V_2 + V_3}{n h_{1,n}} + o\Big(\frac{1}{n h_{1,n}}\Big),
\]
where
\[
V_1 = \frac{1}{h_{1,n}}E\big[K_t^2(X_1)\,\alpha^2(f(X_1))\big] = \alpha(f(t))\,f(t)\int K^2(y)\,dy + o(1),
\]
\[
V_2 = \frac{2}{h_{1,n} h_{2,n}}E\Big[K_t(X_1)\,\alpha(f(X_1))\,M(X_2)\,K\Big(\frac{X_1 - X_2}{h_{2,n}}\Big)\Big],
\]
and
\[
V_3 = \frac{1}{h_{1,n} h_{2,n}^2}E\Big[M(X_2)\,M(X_3)\,K\Big(\frac{X_1 - X_2}{h_{2,n}}\Big)\,K\Big(\frac{X_1 - X_3}{h_{2,n}}\Big)\Big].
\]

Proof. Note that
\[
\mathrm{Var}\big(\hat f(t; h_{1,n}, h_{2,n})\big) = E\hat f^2(t; h_{1,n}, h_{2,n}) - \big(E\hat f(t; h_{1,n}, h_{2,n})\big)^2.
\]
To compute the variance of the true estimator (7), we start by calculating $E\hat f^2(t; h_{1,n}, h_{2,n})$. It is obvious that
\begin{align*}
E\hat f^2(t; h_{1,n}, h_{2,n}) ={}& \frac{1}{n h_{1,n}^2}E\Big[K^2\Big(\frac{t - X_1}{h_{1,n}}\alpha(\hat f(X_1; h_{2,n}))\Big)\,\alpha^2(\hat f(X_1; h_{2,n}))\Big] \tag{17}\\
&+ \frac{n-1}{n h_{1,n}^2}E\Big[K\Big(\frac{t - X_1}{h_{1,n}}\alpha(\hat f(X_1; h_{2,n}))\Big)\,\alpha(\hat f(X_1; h_{2,n}))\,K\Big(\frac{t - X_2}{h_{1,n}}\alpha(\hat f(X_2; h_{2,n}))\Big)\,\alpha(\hat f(X_2; h_{2,n}))\Big]. \tag{18}
\end{align*}

The decompositions of $\alpha(\hat f(X_1; h_{2,n}))$ in (8) and of $K\big(\frac{t - X_1}{h_{1,n}}\alpha(\hat f(X_1; h_{2,n}))\big)$ in (13) then give
\[
(17) = \frac{1}{n h_{1,n}^2}E\big[K_t^2(X_1)\,\alpha^2(f(X_1))\,(1 + \delta(X_1))^2\big] \tag{19}
\]
plus further terms, numbered (20)-(22), each of which contains at least one factor $\delta(X_1)$, $\delta^2(X_1)$ or $\delta_2(t, X_1)$ together with bounded factors $K_t(X_1)$, $L_{1,t}(X_1)$ and $\alpha(f(X_1))$. By a proposition of Nakarmi and Sang [9], we have
\[
(19) = \frac{V_1}{n h_{1,n}} + o\Big(\frac{1}{n h_{1,n}}\Big) = \frac{\alpha(f(t))\,f(t)\int K^2(y)\,dy}{n h_{1,n}} + o\Big(\frac{1}{n h_{1,n}}\Big).
\]
Since each term in (20)-(22) has a factor $\delta$, $\delta^2$ or $\delta_2$, they all have order $o\big(\frac{1}{n h_{1,n}}\big)$ if we take $h_{2,n} = O(n^{-1/5})$ and $h_{1,n} = O(n^{-1/9})$, by the boundedness of $K$, $L_1$ and $\alpha(f)$ (due to the boundedness of $f$) and by (11) and (15). For example,
\[
\frac{1}{n h_{1,n}^2}\,U^2 = o\Big(\frac{1}{n h_{1,n}}\Big), \quad\text{since } U^2/h_{1,n} \to 0.
\]
Again, by applying the decompositions of $\alpha(\hat f(X_i; h_{2,n}))$ in (8) and of $K\big(\frac{t - X_i}{h_{1,n}}\alpha(\hat f(X_i; h_{2,n}))\big)$ in (13), and noting that the expectation terms with coefficient $\frac{n-1}{n h_{1,n}^2}$ containing the quantities $\delta\delta_2$, $\delta^2\delta_2$ or $\delta_2\delta_2$ are negligible compared with $O\big(\frac{1}{n h_{1,n}}\big)$ by the boundedness of $K$, $L$, $\alpha(f)$ and by (11) and (15), we get
\begin{align*}
(18) ={}& \frac{n-1}{n h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\,K_t(X_2)\,\alpha(f(X_2))\big] \tag{23}\\
&+ \frac{n-1}{n h_{1,n}^2}E\big[L_t(X_1)\,L_t(X_2)\,\delta(X_1)\,\delta(X_2)\,\alpha(f(X_1))\,\alpha(f(X_2))\big] \tag{24}\\
&+ \frac{2(n-1)}{n h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\,L_t(X_2)\,\delta(X_2)\,\alpha(f(X_2))\big] \tag{25}\\
&+ \frac{2(n-1)}{n h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\,L_{1,t}(X_2)\,\delta^2(X_2)\,\alpha(f(X_2))\big] \tag{26}\\
&+ \frac{2(n-1)}{n h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\,\delta_2(t, X_2)\,\alpha(f(X_2))\big] + o\Big(\frac{1}{n h_{1,n}}\Big). \tag{27}
\end{align*}
On the other hand, by the boundedness of $K$, $L$, $\alpha(f)$ and by (11) and (15), the terms of the form $E[\,\cdot\,]\,E[\,\cdot\,]$ have order $o\big(\frac{1}{n h_{1,n}}\big)$ if they include the quantities $\delta\delta_2$, $\delta^2\delta_2$ or $\delta_2\delta_2$. Therefore,
\begin{align*}
\big(E\hat f(t; h_{1,n}, h_{2,n})\big)^2 ={}& \frac{1}{h_{1,n}^2}\big\{E\big[K_t(X_1)\,\alpha(f(X_1))\big]\big\}^2 \tag{28}\\
&+ \frac{1}{h_{1,n}^2}\big\{E\big[L_t(X_1)\,\delta(X_1)\,\alpha(f(X_1))\big]\big\}^2 \tag{29}\\
&+ \frac{2}{h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E\big[L_t(X_1)\,\delta(X_1)\,\alpha(f(X_1))\big] \tag{30}\\
&+ \frac{2}{h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E\big[L_{1,t}(X_1)\,\delta^2(X_1)\,\alpha(f(X_1))\big] \tag{31}\\
&+ \frac{2}{h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E\big[\delta_2(t, X_1)\,\alpha(f(X_1))\big] + o\Big(\frac{1}{n h_{1,n}}\Big). \tag{32}
\end{align*}

By the above analysis, we shall study the differences between the terms (19), (23)-(27) and (28)-(32).

The difference between (23) and (28). The difference between (23) and (28) is
\[
(23) - (28) = -\frac{1}{n h_{1,n}^2}\big\{E\big[K_t(X_1)\,\alpha(f(X_1))\big]\big\}^2,
\]
which has order $O\big(\frac{1}{n}\big)$ by the bias formula of the ideal estimator (2).

The difference between (24) and (29). Since $L$ has bounded support and $\alpha(f(x))$ is bounded and bounded away from zero,
\[
E|M(X_1)| = \int \Big|L\Big(\frac{t - x}{h_{1,n}}\alpha(f(x))\Big)\Big|\,|\alpha'(f(x))|\,f(x)\,dx = O(h_{1,n}). \tag{33}
\]
Recall the decomposition of $\delta(t)$ in (12). Then (9), (10) and (33) give
\[
\frac{1}{h_{1,n}}E\big[|M(X_1)|\,|b(X_1; h_{2,n})|\big] = O(h_{2,n}^2), \qquad \frac{1}{h_{1,n}}E\big[|M(X_1)|\,|D(X_1; h_{2,n})|\big] = O(U),
\]
and
\[
\frac{1}{h_{1,n}}E\big[|M(X_1)|\,|\alpha''(\eta)|\,\big(\hat f(X_1; h_{2,n}) - f(X_1)\big)^2\big] = O(U^2).
\]
We first apply the decomposition of $\delta(t)$ and use the above analysis. Then, by applying the results from Section 3 of Giné and Sang [4], we get
\[
(29) = \frac{1}{h_{1,n}^2}\big\{E\big[M(X_1)\,\big(D(X_1; h_{2,n}) + b(X_1; h_{2,n})\big)\big]\big\}^2 + O(U^3) = \frac{1}{h_{1,n}^2}\big\{E\big[M(X_1)\,b(X_1; h_{2,n})\big]\big\}^2 + o\Big(\frac{1}{n h_{1,n}}\Big). \tag{34}
\]
On the other hand, by a similar analysis,
\[
(24) = \frac{n-1}{n h_{1,n}^2}E\big[M(X_1)\,M(X_2)\,\big(D(X_1; h_{2,n}) + b(X_1; h_{2,n})\big)\,\big(D(X_2; h_{2,n}) + b(X_2; h_{2,n})\big)\big] + O(U^3) := Q_1 + Q_2 + Q_3 + O(U^3),
\]
where
\begin{align*}
Q_1 &= \frac{n-1}{n h_{1,n}^2}E\big[M(X_1)\,M(X_2)\,D(X_1; h_{2,n})\,D(X_2; h_{2,n})\big],\\
Q_2 &= \frac{2(n-1)}{n h_{1,n}^2}E\big[M(X_1)\,M(X_2)\,D(X_1; h_{2,n})\,b(X_2; h_{2,n})\big],\\
Q_3 &= \frac{n-1}{n h_{1,n}^2}E\big[M(X_1)\,M(X_2)\,b(X_1; h_{2,n})\,b(X_2; h_{2,n})\big] = \frac{n-1}{n h_{1,n}^2}\big\{E\big[M(X_1)\,b(X_1; h_{2,n})\big]\big\}^2.
\end{align*}

We also denote
\[
N_i = \int K\Big(\frac{X_i - u}{h_{2,n}}\Big)\,f(u)\,du, \qquad i = 1, 2.
\]
Writing $D(X_i; h_{2,n}) = \frac{1}{n h_{2,n}}\sum_{j=1}^n K\big(\frac{X_i - X_j}{h_{2,n}}\big) - \frac{1}{h_{2,n}}N_i$ and expanding the product $D(X_1; h_{2,n})\,D(X_2; h_{2,n})$, $Q_1$ splits into the terms (35)-(41): the diagonal terms in which the inner sums pick $X_1$ or $X_2$ themselves (so that $K(0)$ appears), the terms in which the two inner sums pick two different new observations, and the term in which both inner sums pick the same new observation $X_3$. It is easy to see that (35), (36) and (40) have order $o\big(\frac{1}{n h_{1,n}}\big)$ by applying (33). Since
\[
E\Big[K\Big(\frac{X_1 - X_3}{h_{2,n}}\Big)\,\Big|\,X_1\Big] = \int K\Big(\frac{X_1 - u}{h_{2,n}}\Big)\,f(u)\,du = N_1, \tag{42}
\]
(37) = 0, and similarly (39) = (41) = 0. From (3.37) of Giné and Sang [4], we have $V_3 = O(1)$. Also, it is easy to see that $E[M_1 N_1] = O(h_{1,n} h_{2,n})$. Hence the remaining term, in which both inner sums pick the same new observation $X_3$, is
\[
(38) = \frac{(n-1)(n-2)}{n^3 h_{1,n}^2 h_{2,n}^2}\Big(E\Big[M_1\,M_2\,K\Big(\frac{X_3 - X_1}{h_{2,n}}\Big)\,K\Big(\frac{X_3 - X_2}{h_{2,n}}\Big)\Big] - \big(E[M_1 N_1]\big)^2\Big) = \frac{V_3}{n h_{1,n}} + O\Big(\frac{1}{n}\Big).
\]

Hence $Q_1 = \frac{V_3}{n h_{1,n}} + O\big(\frac{1}{n}\big)$. Next,
\[
Q_2 = \frac{2(n-1)}{n h_{1,n}^2}E\big[M_1\,M_2\,D(X_1; h_{2,n})\,b(X_2; h_{2,n})\big],
\]
and expanding $D(X_1; h_{2,n})$ as above splits $Q_2$ into the three terms (43), (44) and (45), containing respectively $K\big(\frac{X_1 - X_2}{h_{2,n}}\big) - N_1$, $K\big(\frac{X_1 - X_3}{h_{2,n}}\big) - N_1$ and $K(0) - N_1$, each multiplied by
\[
b(X_2; h_{2,n}) = \frac{1}{h_{2,n}}\int K\Big(\frac{X_2 - v}{h_{2,n}}\Big)\,\big(f(v) - f(X_2)\big)\,dv.
\]
Under the condition that $f$ has a bounded second order continuous derivative,
\[
\int K\Big(\frac{X_2 - v}{h_{2,n}}\Big)\,\big(f(v) - f(X_2)\big)\,dv = O(h_{2,n}^3).
\]
Thus, by the boundedness of $K$ and $N_1$ and by (33), (43) and (45) have order $o\big(\frac{1}{n h_{1,n}}\big)$, and (44) = 0 by the argument (42). Therefore $Q_2 = o\big(\frac{1}{n h_{1,n}}\big)$. Notice that the first quantity in (34) is the same as $Q_3$. Hence
\[
(24) - (29) = \frac{V_3}{n h_{1,n}} + o\Big(\frac{1}{n h_{1,n}}\Big).
\]

The difference between (26) and (31). Next we denote $A(x) = K\big(\frac{t - x}{h_{1,n}}\alpha(f(x))\big)\,\alpha(f(x))$ and $\bar L_{1,t}(x) = L_{1,t}(x)\,\frac{(\alpha'(f(x)))^2}{\alpha(f(x))}$. Then (10) and the decomposition of $\delta(t)$ in (12) give
\begin{align*}
(26) ={}& \frac{2(n-1)}{n h_{1,n}^2}E\big[A(X_1)\,\bar L_{1,t}(X_2)\,\big(D(X_2; h_{2,n}) + b(X_2; h_{2,n})\big)^2\big] + O(U^3)\\
={}& \frac{2(n-1)}{n h_{1,n}^2}E\big[A(X_1)\,\bar L_{1,t}(X_2)\,D^2(X_2; h_{2,n})\big] \tag{46}\\
&+ \frac{4(n-1)}{n h_{1,n}^2}E\big[A(X_1)\,\bar L_{1,t}(X_2)\,D(X_2; h_{2,n})\,b(X_2; h_{2,n})\big] \tag{47}\\
&+ \frac{2(n-1)}{n h_{1,n}^2}E\big[A(X_1)\,\bar L_{1,t}(X_2)\,b^2(X_2; h_{2,n})\big] + O(U^3). \tag{48}
\end{align*}
Expanding $D^2(X_2; h_{2,n})$ as in the analysis of $Q_1$ splits (46) and (47) into the terms (49)-(53).

By independence and a similar argument as in (33), the nonzero terms among (49)-(53) have order $o\big(\frac{1}{n h_{1,n}}\big)$, and the last two terms, (52) and (53), are zero by the argument (42).

Now we decompose the term (31). Again, by (10) and the decomposition of $\delta(t)$ in (12),
\begin{align*}
(31) ={}& \frac{2}{h_{1,n}^2}E\big[A(X_1)\big]\,E\big[\bar L_{1,t}(X_2)\,\big(D(X_2; h_{2,n}) + b(X_2; h_{2,n})\big)^2\big] + O(U^3)\\
={}& \frac{2}{h_{1,n}^2}E\big[A(X_1)\big]\,E\big[\bar L_{1,t}(X_2)\,D^2(X_2; h_{2,n})\big] \tag{54}\\
&+ \frac{4}{h_{1,n}^2}E\big[A(X_1)\big]\,E\big[\bar L_{1,t}(X_2)\,D(X_2; h_{2,n})\,b(X_2; h_{2,n})\big] \tag{55}\\
&+ \frac{2}{h_{1,n}^2}E\big[A(X_1)\big]\,E\big[\bar L_{1,t}(X_2)\,b^2(X_2; h_{2,n})\big] + O(U^3). \tag{56}
\end{align*}
Expanding $D^2(X_2; h_{2,n})$ again splits (54) and (55) into the terms (57)-(61).

By a similar argument as in (33), the nonzero terms among (57)-(61) have order $o\big(\frac{1}{n h_{1,n}}\big)$, and the last two terms, (60) and (61), are zero by the argument (42). By a similar explanation as for $Q_2$, (55) = $o\big(\frac{1}{n h_{1,n}}\big)$ = (47). Also notice that (48) = (56) and (51) = (59). Therefore we get
\[
(26) - (31) = o\Big(\frac{1}{n h_{1,n}}\Big).
\]

The difference between (27) and (32). To study the difference between (27) and (32), note that, by a proposition of Giné and Sang [4], for the ideal estimator (2) we have
\[
\sup_{t\in\mathbb{R}}\big|\tilde f(t; h_{1,n}) - E\tilde f(t; h_{1,n})\big| = O\Big(\sqrt{\frac{\log n}{n h_{1,n}}}\Big) \quad a.s.
\]

Hence, conditioning on $X_2$ and applying this almost sure bound together with the bound (15) on $\sup_{t,x}|\delta_2(t, x)|$, we obtain for the difference between (27) and (32)
\[
(27) - (32) = \frac{2(n-1)}{n h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\,\alpha(f(X_2))\,\delta_2(t, X_2)\big] - \frac{2}{h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E\big[\alpha(f(X_1))\,\delta_2(t, X_1)\big] = o\Big(\frac{1}{n h_{1,n}}\Big).
\]

The difference between (25) and (30). We shall apply the decomposition of $\delta(t)$ as in (12). The contribution to the difference between (25) and (30) coming from the second part of (12) has order $o\big(\frac{1}{n h_{1,n}}\big)$, as in the analysis of the difference between (27) and (32). Therefore
\begin{align*}
(25) - (30) ={}& \frac{2}{h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\,M(X_2)\,D(X_2; h_{2,n})\big] - \frac{2}{h_{1,n}^2}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E\big[M(X_2)\,D(X_2; h_{2,n})\big] + o\Big(\frac{1}{n h_{1,n}}\Big)\\
={}& \frac{2}{n h_{1,n}^2 h_{2,n}}E\Big[K_t(X_1)\,\alpha(f(X_1))\,M(X_2)\,\Big(K\Big(\frac{X_2 - X_1}{h_{2,n}}\Big) - N_2\Big)\Big] \tag{62}\\
&- \frac{2}{n h_{1,n}^2 h_{2,n}}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E\Big[M(X_2)\,\Big(K\Big(\frac{X_2 - X_1}{h_{2,n}}\Big) - N_2\Big)\Big] + o\Big(\frac{1}{n h_{1,n}}\Big) \tag{63}\\
={}& \frac{2}{n h_{1,n}^2 h_{2,n}}E\Big[K_t(X_1)\,\alpha(f(X_1))\,M(X_2)\,K\Big(\frac{X_2 - X_1}{h_{2,n}}\Big)\Big] \tag{64}\\
&- \frac{2}{n h_{1,n}^2 h_{2,n}}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E\big[M(X_2)\,N_2\big] + o\Big(\frac{1}{n h_{1,n}}\Big). \tag{65}
\end{align*}
(The $b$-parts cancel exactly between the two expectations by the independence of $X_1$ and $X_2$, the terms in which $D(X_2; h_{2,n})$ picks an observation other than $X_1$ or $X_2$ vanish after conditioning, and the $K(0)$ terms are of smaller order.) The equality giving (62) and (63) holds by independence, and the term (63) is zero. Since $K$, $L$, $\alpha'(f)$ and $f$ are bounded functions and $K$ has bounded support, we have
\begin{align*}
V_2 &= \frac{2}{h_{1,n} h_{2,n}}E\Big[K_t(X_1)\,\alpha(f(X_1))\int L_t(u)\,\alpha'(f(u))\,K\Big(\frac{X_1 - u}{h_{2,n}}\Big)\,f(u)\,du\Big]\\
&= \frac{2}{h_{1,n}}E\Big[K_t(X_1)\,\alpha(f(X_1))\int L_t(X_1 - h_{2,n} v)\,\alpha'(f(X_1 - h_{2,n} v))\,K(v)\,f(X_1 - h_{2,n} v)\,dv\Big] \le \frac{C}{h_{1,n}}E\big[K_t(X_1)\,\alpha(f(X_1))\big] = O(1), \tag{66}
\end{align*}
and hence $(64) = \frac{V_2}{n h_{1,n}} = O\big(\frac{1}{n h_{1,n}}\big)$. On the other hand, $E\big[K_t(X_1)\,\alpha(f(X_1))\big] = O(h_{1,n})$ and
\[
E[M_1 N_1] = \iint L_t(u)\,\alpha'(f(u))\,f(u)\,K\Big(\frac{u - v}{h_{2,n}}\Big)\,f(v)\,dv\,du = h_{2,n}\iint L_t(u)\,\alpha'(f(u))\,f(u)\,K(w)\,f(u - h_{2,n} w)\,dw\,du = O(h_{1,n} h_{2,n}),
\]
since the function $\alpha(f)$ in $L_t$ is bounded below and above. Therefore, in (65),
\[
\frac{2}{n h_{1,n}^2 h_{2,n}}E\big[K_t(X_1)\,\alpha(f(X_1))\big]\,E[M_1 N_1] = O\Big(\frac{1}{n}\Big). \tag{67}
\]
Hence (66) and (67) give us
\[
(25) - (30) = \frac{V_2}{n h_{1,n}} + O\Big(\frac{1}{n}\Big).
\]
Thus,
\[
\mathrm{Var}\big(\hat f(t; h_{1,n}, h_{2,n})\big) = \frac{V_1 + V_2 + V_3}{n h_{1,n}} + o\Big(\frac{1}{n h_{1,n}}\Big).
\]
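The variance formula can also be checked empirically. Below is a small Monte Carlo sketch of such a check (my own illustration, using a Gaussian kernel and the simple clipping $\alpha(s) = \max(\sqrt{s}, c)$ from the earlier sketches, which are assumptions rather than the paper's choices): by Theorem 1, the product $n h_{1,n}\,\mathrm{Var}\big(\hat f(t; h_{1,n}, h_{2,n})\big)$ should stabilize as $n$ grows.

```python
import numpy as np

def gauss(u):
    return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)

def plugin_vb_kde_at(t, data, h1, h2, c=0.5):
    """Plug-in estimator (7) at a single point t (illustrative kernel and clipping)."""
    pilot = gauss((data[:, None] - data) / h2).mean(axis=1) / h2   # f-hat(X_i; h2)
    a = np.maximum(np.sqrt(pilot), c)                              # alpha(f-hat(X_i; h2))
    return (a * gauss(a * (t - data) / h1)).mean() / h1

rng = np.random.default_rng(2)
t0, reps = 0.0, 300
for n in (200, 400, 800):
    h1, h2 = n ** (-1 / 9), n ** (-1 / 5)
    estimates = [plugin_vb_kde_at(t0, rng.standard_normal(n), h1, h2) for _ in range(reps)]
    v = np.var(estimates, ddof=1)
    # Theorem 1 suggests Var ~ (V1 + V2 + V3) / (n * h1), so n * h1 * Var should be roughly stable
    print(f"n={n:>4}  Var={v:.2e}  n*h1*Var={n * h1 * v:.3f}")
```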

4. Optimal bandwidth selection

Denote $z_n = \big(w + \frac{h_{2,n}}{h_{1,n}}\,v\big)\,\alpha\big(f(t - h_{1,n} w - h_{2,n} v)\big)$. Then $\lim_{n\to\infty} z_n = w\,\alpha(f(t))$, since $h_{2,n}/h_{1,n} \to 0$. In the following we change variables twice and then take the limit by applying the compact support of $K$ and the boundedness below and above of the function $\alpha$:
\begin{align*}
V_2 &= \frac{2}{h_{1,n} h_{2,n}}E\Big[K_t(X_1)\,\alpha(f(X_1))\int M(u)\,K\Big(\frac{X_1 - u}{h_{2,n}}\Big)\,f(u)\,du\Big]\\
&= 2\iint K\big(w\,\alpha(f(t - h_{1,n} w))\big)\,\alpha(f(t - h_{1,n} w))\,f(t - h_{1,n} w)\,L(z_n)\,\alpha'\big(f(t - h_{1,n} w - h_{2,n} v)\big)\,f(t - h_{1,n} w - h_{2,n} v)\,K(v)\,dv\,dw\\
&\to 2\,\alpha(f(t))\,\alpha'(f(t))\,f^2(t)\int K\big(w\,\alpha(f(t))\big)\,L\big(w\,\alpha(f(t))\big)\,dw\\
&= 2\,\alpha'(f(t))\,f^2(t)\int K(y)\,L(y)\,dy = 2\,\alpha'(f(t))\,f^2(t)\,\mu_0 := \bar V_2,
\end{align*}
where $\mu_0 := \int K(y)\,L(y)\,dy$.

For the quantity $V_3$, let $x_3 = x_1 - h_{2,n} s$, $x_1 = x_2 + h_{2,n} w$ and $x_2 = t - h_{1,n} z$ be the change of variables. Again, since $h_{2,n}/h_{1,n} \to 0$, $L$ has compact support and the function $\alpha$ is bounded below and above,
\begin{align*}
V_3 &= \frac{1}{h_{1,n} h_{2,n}^2}\iiint L\Big(\frac{t - x_2}{h_{1,n}}\alpha(f(x_2))\Big)\,\alpha'(f(x_2))\,f(x_2)\;L\Big(\frac{t - x_3}{h_{1,n}}\alpha(f(x_3))\Big)\,\alpha'(f(x_3))\,f(x_3)\;K\Big(\frac{x_1 - x_2}{h_{2,n}}\Big)\,K\Big(\frac{x_1 - x_3}{h_{2,n}}\Big)\,f(x_1)\,dx_3\,dx_1\,dx_2\\
&= \iiint L\big(z\,\alpha(f(x_2))\big)\,\alpha'(f(x_2))\,f(x_2)\;L\Big(\Big(z - \frac{h_{2,n}}{h_{1,n}}(w - s)\Big)\,\alpha(f(x_3))\Big)\,\alpha'(f(x_3))\,f(x_3)\,f(x_1)\;K(w)\,K(s)\,dw\,dz\,ds\\
&\to f^3(t)\,\big(\alpha'(f(t))\big)^2\int L^2\big(z\,\alpha(f(t))\big)\,dz = \frac{f^3(t)\,\big(\alpha'(f(t))\big)^2}{\alpha(f(t))}\int L^2(y)\,dy := \bar V_3.
\end{align*}

Theorem 2. Suppose $f$ is a density in $C^4(\mathbb{R})$ and $p$ is a clipping function in $C^5(\mathbb{R})$. Then the integrated mean squared error on $D_r$ is
\[
R(h_{1,n}, D_r) = \frac{\tau_4^2\,h_{1,n}^8}{576}\int_{D_r}\Big(D^4\Big(\frac{1}{f}\Big)(t)\Big)^2\,dt + \frac{1}{n h_{1,n}}\int_{D_r}\big(V_1 + V_2 + V_3\big)\,dt + o\Big(h_{1,n}^8 + \frac{|D_r|}{n h_{1,n}}\Big), \tag{68}
\]
where $\tau_4 = \int y^4 K(y)\,dy$. Furthermore, the optimal bandwidth is given by
\[
h_{1,n}^{opt} = \Bigg[\frac{576\int_{D_r}\big(V_1 + V_2 + V_3\big)\,dt}{8\,n\,\tau_4^2\int_{D_r}\big(D^4\big(\tfrac{1}{f}\big)(t)\big)^2\,dt}\Bigg]^{1/9}.
\]

Proof. The bias term in the integrated MSE (68) follows from the corollary in Giné and Sang [4]. The variance is dominated by $\frac{V_1 + V_2 + V_3}{n h_{1,n}}$; these terms come from the analysis of (19) and of the differences between (23)-(27) and (28)-(32), respectively.
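As a quick numerical illustration of the trade-off behind Theorem 2 (a sketch under assumed constants, not the paper's procedure): with squared bias of order $h^8$ and variance of order $1/(nh)$, minimizing $A h^8 + B/(nh)$ gives $h = (B/(8An))^{1/9}$ and a risk of order $n^{-8/9}$. The constants `A` and `B` below are placeholders standing in for the integrated squared-bias factor $\frac{\tau_4^2}{576}\int_{D_r}(D^4(1/f))^2\,dt$ and the integrated variance factor $\int_{D_r}(V_1+V_2+V_3)\,dt$, which in practice would themselves have to be estimated.

```python
import numpy as np

def optimal_h(A, B, n):
    """Minimize A*h**8 + B/(n*h): the first-order condition gives h**9 = B / (8*A*n)."""
    return (B / (8.0 * A * n)) ** (1.0 / 9)

# Placeholder constants for the bias and variance factors in (68).
A, B = 1.0, 1.0
for n in (10**3, 10**4, 10**5):
    h = optimal_h(A, B, n)
    mise = A * h**8 + B / (n * h)
    # h shrinks like n**(-1/9) and the minimized risk like n**(-8/9)
    print(f"n={n:>6}  h_opt={h:.4f}  n**(-1/9)={n**(-1/9):.4f}  MISE~{mise:.2e}")
```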

References

[1] Abramson, I. (1982). On bandwidth variation in kernel estimates - a square-root law. Ann. Statist.

[2] Giné, E. and Guillou, A. (2002). Rates of strong uniform consistency for multivariate kernel density estimators. Ann. Inst. H. Poincaré Probab. Statist.

[3] Giné, E. and Sang, H. (2010). Uniform asymptotics for kernel density estimators with variable bandwidths. J. Nonparametric Statistics.

[4] Giné, E. and Sang, H. (2013). On the estimation of smooth densities by strict probability densities at optimal rates in sup-norm. IMS Collections, From Probability to Statistics and Back: High-Dimensional Models and Processes.

[5] Hall, P. and Marron, J. S. (1988). Variable window width kernel estimates of probability densities. Probab. Theory Related Fields. Erratum: Probab. Theory Related Fields.

[6] Hall, P., Hu, T. and Marron, J. S. (1995). Improved variable window kernel estimates of probability densities. Ann. Statist.

[7] McKay, I. J. (1993). A note on bias reduction in variable kernel density estimates. Canad. J. Statist.

[8] McKay, I. J. (1993). Variable kernel methods in density estimation. Ph.D. dissertation, Queen's University.

[9] Nakarmi, J. and Sang, H. (2014). Central limit theorem for the variable bandwidth kernel estimator. Submitted.

[10] Novak, S. Yu. (1999). A generalized kernel density estimator. Teor. Veroyatnost. i Primenen. (Russian); translation in Theory Probab. Appl.

[11] Terrell, G. R. and Scott, D. W. (1992). Variable kernel density estimation. Ann. Statist.
