Unified estimation of densities on bounded and unbounded domains


Unified estimation of densities on bounded and unbounded domains

Kairat Mynbaev, International School of Economics, Kazakh-British Technical University, Tolebi 59, Almaty, Kazakhstan

and

Carlos Martins-Filho, Department of Economics, University of Colorado, Boulder, CO, USA & IFPRI, 2033 K Street NW, Washington, DC, USA. carlos.martins@colorado.edu, c.martins-filho@cgiar.org

March, 2017

Abstract. Kernel density estimation in domains with boundaries is known to suffer from undesirable boundary effects. We show that in the case of smooth densities, a general and elegant approach is to extend the density and estimate its extension. The resulting estimators in domains with boundaries have biases and variances expressed in terms of density extensions. The result is that they have the same rates at boundary and interior points of the domain. Contrary to the extant literature, our estimators require no kernel modification near the boundary, and kernels commonly used for estimation on the real line can be applied. Densities defined on the half-axis and in a unit interval are considered. The results are applied to estimation of densities that are discontinuous or have discontinuous derivatives, where they yield the same rates of convergence as for smooth densities on $\mathbb{R}$.

Keywords and phrases. Nonparametric estimation, Hestenes extension, estimation in bounded domains, estimation of discontinuous densities. AMS-MS Classification. 62F12, 62G07, 62G20.

1 Introduction

Kernel estimation of densities on the real line is a well-developed area. The core of the theory is a series of results covering smooth densities that do not exhibit extreme curvature. Let K denote a kernel, an integrable function on $\mathbb{R}$ which satisfies $\int K(t)\,dt = 1$, let $h > 0$ be a bandwidth and f be a density on $\mathbb{R}$. Assuming that $\{X_i\}_{i=1}^n$ is an independent and identically distributed (IID) sample from f, the traditional Rosenblatt-Parzen kernel estimator of $f(x)$ is defined by

$\hat f_h(x) = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{x - X_i}{h}\right).$

This estimator has three desirable characteristics: 1) there exists a great profusion of kernels that can be used to construct the estimator (usually they are from the Gaussian or Epanechnikov families); they are usually symmetric and do not depend on the point (x) of estimation, or on the class of densities being estimated; 2) there is a simple link between the degree of smoothness of the density and the order of the estimator's bias: if $f \in C_b^s(\Omega)$ and the kernel is of order s, then $E\hat f_h(x) - f(x) = O(h^s)$. The use of higher order kernels in the case of smooth densities is also a standard feature; 3) the optimal bandwidth is of order $n^{-1/(2s+1)}$ for all estimation points, unless there are areas of extreme curvature or discontinuities.

In cases where the domain of f has a boundary, the main problem is bad estimator behavior in the vicinity of the boundary. This problem called into being a range of estimation methods. Among the widely used ones are the reflection method, the boundary kernel method, the transformation method and the local linear method (see, inter alia, Schuster (1985), Karunamuni and Alberts (2005), Malec and Schienle (2014), Wen and Wu (2015) and their references). Other methods have proposed the use of asymmetric kernels and kernel adjustments near the boundary. Such techniques necessarily require variable bandwidths, separation of densities into subclasses that vanish or not at the boundary, densities that have derivatives of a certain sign at the boundary, etc.
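The Rosenblatt-Parzen estimator above can be sketched in a few lines. This is a minimal illustration, not the authors' simulation code; the Gaussian kernel and the bandwidth value are arbitrary choices of ours.

```python
import numpy as np

def rosenblatt_parzen(x, sample, h, kernel=None):
    """Classical estimator f_hat_h(x) = (1/(n h)) * sum_i K((x - X_i)/h)."""
    if kernel is None:
        # Gaussian kernel, a standard second-order choice
        kernel = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)
    x = np.asarray(x, dtype=float)
    u = (x[..., None] - sample[None, ...]) / h   # broadcast over evaluation points
    return kernel(u).mean(axis=-1) / h

# usage: estimate a standard normal density from a simulated sample
rng = np.random.default_rng(0)
sample = rng.standard_normal(500)
grid = np.linspace(-3, 3, 61)
fhat = rosenblatt_parzen(grid, sample, h=0.3)
```

With a second-order kernel and a smooth density, the bias is $O(h^2)$ at every interior point, which is the benchmark the paper's boundary estimators are designed to match.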
The difficulties of estimation near the boundary precluded researchers from identifying a core class for which analogs of the standard results mentioned above would be true. In particular, we have not seen in the literature results that would guarantee a better bias rate for densities of higher smoothness. In this paper we propose estimation procedures that permit a unified theoretical study of their properties

Footnote: Let $s \in \mathbb{N}$ and $\Omega \subseteq \mathbb{R}$. The class of functions $f: \Omega \to \mathbb{R}$ which are s-times differentiable with $|f^{(s)}(x)| \le C$ for some $0 < C < \infty$ is denoted by $C_b^s(\Omega)$. We say that the kernel K is of order $s \ge 2$ if $\int t^j K(t)\,dt = 0$ for $j = 1, \dots, s-1$ and $\int t^s K(t)\,dt \ne 0$.

under bounded and unbounded domains. We show that smoothness is all one needs to have a good bias rate, and for smooth densities the behavior at the boundary is irrelevant (derivatives at endpoints are one-sided derivatives). For densities on the half-axis $[0,\infty)$ and on the unit interval $(0,1)$ we introduce new estimators for which all standard facts hold. Usual symmetric kernels and constant bandwidths can be used across the domain, and for $f \in C_b^s(\Omega)$ the biases of our estimators are of order $O(h^s)$. The bandwidth depends on the sample size in the same way as in the case of estimation on the whole line. In the case of estimation of piece-wise continuous densities, with known discontinuity points, our estimators supply the required jumps at those points. As on $\mathbb{R}$, the estimator for densities in classes where $s > 2$ is not necessarily nonnegative, because the estimation involves higher-order kernels. Our method does not cover densities with poles at endpoints.

Our estimation method is based on the Hestenes extension (Hestenes (1941)). Let $D_f$ be the domain of the density f and denote by g its Hestenes extension (the definitions for the half-axis and interval are given below in the respective sections). The key observation is that g can be viewed as a linear combination of densities. The sample generated from f is used to estimate each of these densities, and the linear combination of the estimators estimates g. The restriction of the estimator of g to $D_f$ estimates f. We show that the theory of estimation on a domain with boundaries for smooth densities in effect becomes a chapter in estimation on the whole line. The essential link between the proposed estimator $\hat f(x)$ of $f(x)$ and the properties of g is of the type

$E\hat f(x) - f(x) = \int K(t)\,(g(x - th) - g(x))\,dt, \quad x \in D_f.$

This representation has eluded previous work, and can be used for evaluating bias. Our estimation procedure does not require knowledge of g. There seems to be a slight loss in the speed of convergence as compared to convergence on the line, because the same data is exploited more than once to estimate different parts of g.
However, this loss does not affect the rate in $E\hat f(x) - f(x) = c h^s + o(h^s)$; it affects only the constant c, in comparison with the classical estimator for densities on the line. In section 2, we start with estimation of a density on $[0,\infty)$. Section 3 treats densities on a bounded interval. In section 4, the approach is extended to estimation of discontinuous densities. Section 5 provides two methods to satisfy zero boundary conditions.

2 Estimation of densities defined on $[0,\infty)$

Let $w_1, \dots, w_s$ be pairwise different positive numbers, $s = 1, 2, \dots$. Of special interest are the decreasing sequence $w_i = 1/i$, $i = 1, \dots, s$ (used by Hestenes (1941)) and the increasing sequence $w_i = i$. Let the numbers $k_1, \dots, k_s$ be defined from the following system:

$\sum_{i=1}^s (-w_i)^j k_i = 1, \quad j = 0, \dots, s-1.$  (2.1)

Since this system has the Vandermonde determinant

$\det\begin{pmatrix} 1 & 1 & \dots & 1 \\ -w_1 & -w_2 & \dots & -w_s \\ \vdots & & & \vdots \\ (-w_1)^{s-1} & (-w_2)^{s-1} & \dots & (-w_s)^{s-1} \end{pmatrix} \neq 0,$

$k_1, \dots, k_s$ are uniquely defined. If f has $[0,\infty)$ as its domain, its Hestenes extension for $x < 0$ is defined by

$\varphi_s(x) = \sum_{j=1}^s k_j f(-w_j x), \quad x < 0.$  (2.2)

$\varphi_s$ is not a density, but it is a linear combination of densities $w_j f(-w_j x)$ with coefficients $k_j / w_j$. Assuming that f has s right-hand derivatives $f^{(0)}(0), \dots, f^{(s-1)}(0)$ ($s = 1$ means continuity), we see that the following sewing conditions at zero are satisfied due to (2.1):

$\varphi_s^{(m)}(0) = \sum_{j=1}^s (-w_j)^m k_j f^{(m)}(0) = f^{(m)}(0), \quad m = 0, \dots, s-1.$

Now define g on $\mathbb{R}$ by

$g(x) = \begin{cases} f(x), & x \ge 0, \\ \varphi_s(x), & x < 0, \end{cases}$  (2.3)

with g being $s-1$ times differentiable. Moreover, if, for example, f belongs to the Sobolev space $W_p^s([0,\infty))$, then g belongs to $W_p^s(\mathbb{R})$, where $1 \le p < \infty$ (see Burenkov (1998)). Suppose f is s times differentiable, $m = 0, 1, \dots, s$, the kernel K is m times differentiable, and let $\{X_i\}_{i=1}^n$ be an IID sample from f. The estimator of $f^{(m)}(x)$, for $x > 0$, is defined by

$\hat f_h^{(m)}(x) = \frac{1}{n h^{m+1}} \sum_{i=1}^n \left[ K^{(m)}\!\left(\frac{x - X_i}{h}\right) + \sum_{j=1}^s \frac{k_j}{w_j} K^{(m)}\!\left(\frac{x + X_i/w_j}{h}\right) \right].$  (2.4)
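Concretely, the coefficients $k_j$ in (2.1) solve a small Vandermonde system, and the extension (2.2)-(2.3) is a plain linear combination of scaled copies of f. A sketch (function names are ours, not from the paper):

```python
import numpy as np

def hestenes_coefficients(w):
    """Solve sum_i (-w_i)^j k_i = 1 for j = 0, ..., s-1 (a Vandermonde system, eq. (2.1))."""
    w = np.asarray(w, dtype=float)
    s = len(w)
    A = np.vander(-w, N=s, increasing=True).T   # A[j, i] = (-w_i)^j
    return np.linalg.solve(A, np.ones(s))

def hestenes_extend(f, w, k):
    """Extension g of f (defined on [0, inf)) to the whole line:
    g(x) = f(x) for x >= 0 and g(x) = sum_j k_j f(-w_j x) for x < 0, eqs. (2.2)-(2.3)."""
    def g(x):
        x = np.asarray(x, dtype=float)
        neg = sum(kj * f(np.maximum(-wj * x, 0.0)) for kj, wj in zip(k, w))
        return np.where(x >= 0, f(np.maximum(x, 0.0)), neg)
    return g

# sewing check for f(x) = exp(-x) with s = 2, w = (1, 2): the system gives k = (3, -2),
# so the extension 3 e^x - 2 e^{2x} matches f and f' at zero
w = np.array([1.0, 2.0])
k = hestenes_coefficients(w)
g = hestenes_extend(lambda x: np.exp(-x), w, k)
```

The `np.maximum` guards simply avoid evaluating f at negative arguments; for x where the other branch of `np.where` is selected, the guarded value is discarded.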

When the kernel K is an even function, $m = 0$, $s = 1$ and $w_1 = 1$, $\hat f_h^{(0)}(x) \equiv \hat f_S(x)$ is the reflection estimator from Schuster (1985), i.e.,

$\hat f_S(x) = \frac{1}{nh} \sum_{i=1}^n \left[ K\!\left(\frac{x - X_i}{h}\right) + K\!\left(\frac{x + X_i}{h}\right) \right].$

The next assumption is used only for $m \ge 1$, when integration by parts is needed.

Assumption 2.1. (a) K is even and m times differentiable, $\max_{0 \le j \le m} |f^{(j)}(x)| = O(1)$ as $x \to \infty$, and $\max_{0 \le j \le m} |K^{(j)}(t)\,t| = o(1)$ as $|t| \to \infty$.

(b) The estimator in (2.4) can be constructed using kernels in the class $\{M_k(x)\}_{k \in \mathbb{N}}$ proposed by Mynbaev and Martins-Filho (2010), where

$M_k(x) = -\frac{1}{C_{2k}^k} \sum_{0 < |l| \le k} (-1)^l C_{2k}^{k+l} \frac{1}{|l|} K\!\left(\frac{x}{l}\right)$

with $C_{2k}^l = \frac{(2k)!}{(2k-l)!\,l!}$ for $l = 0, \dots, 2k$. In this context, K is called the seed of $M_k$. These kernels are used together with an order 2k finite difference

$\Delta_h^{2k} g(x) = \sum_{l=-k}^{k} (-1)^{l+k} C_{2k}^{l+k}\, g(x + lh)$

when Besov type norms are employed to measure smoothness (see Mynbaev and Martins-Filho (2010)). We let $\hat f_{k,h}^{(m)}(x)$ denote the estimator defined in (2.4) with K replaced by $M_k$. Note that if K is even, then $M_1(x) = K(x)$ and $\hat f_{1,h}^{(m)}(x) = \hat f_h^{(m)}(x)$.

Theorem 2.1. Suppose f is s-times differentiable on $D_f = [0,\infty)$ and the kernel K is m times differentiable on $\mathbb{R}$ with $m = 0, 1, \dots, s$. In case $m \ge 1$ suppose that Assumption 2.1 holds. Then,

1. The bias of $\hat f_h^{(m)}(x)$ has the representation

$E \hat f_h^{(m)}(x) - f^{(m)}(x) = \int K(t)\left[ g^{(m)}(x - th) - g^{(m)}(x) \right] dt, \quad x \in D_f.$  (2.5)

2. If in (2.4) a kernel $M_k$ with seed K is used, then

$E \hat f_{k,h}^{(m)}(x) - f^{(m)}(x) = \frac{(-1)^{k+1}}{C_{2k}^k} \int K(t)\, \Delta_{th}^{2k} g^{(m)}(x)\, dt, \quad x \in D_f.$  (2.6)
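The class $\{M_k\}$ can be coded directly from a seed kernel. Since the signs in the source display are garbled, the convention below is fixed by requiring $\int M_k(x)\,dx = 1$; treat this as a sketch of the construction, not necessarily the authors' exact normalization.

```python
import numpy as np
from math import comb

def mk_kernel(seed, k):
    """Higher-order kernel M_k built from a seed kernel K:
    M_k(x) = -(1/C(2k,k)) * sum_{0<|l|<=k} (-1)^l C(2k,k+l) K(x/l)/|l|.
    The front sign makes M_k integrate to one when the seed does."""
    c = comb(2 * k, k)
    def M(x):
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for l in range(-k, k + 1):
            if l == 0:
                continue
            total += (-1) ** l * comb(2 * k, k + l) * seed(x / l) / abs(l)
        return -total / c
    return M

gauss = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)
M2 = mk_kernel(gauss, k=2)   # for k = 2 this reduces to (8 K(x) - K(x/2)) / 6
```

For an even seed, $M_1 = K$, and for $k \ge 2$ the low-order moments of $M_k$ vanish, which is the mechanism behind the bias representation (2.6).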

Proof. 1. By the IID assumption,

$E \hat f_h^{(m)}(x) = \frac{1}{h^{m+1}} E\, K^{(m)}\!\left(\frac{x - X_1}{h}\right) + \frac{1}{h^{m+1}} \sum_{j=1}^s \frac{k_j}{w_j} E\, K^{(m)}\!\left(\frac{x + X_1/w_j}{h}\right)$
$= \frac{1}{h^{m+1}} \left[ \int_0^\infty K^{(m)}\!\left(\frac{x - t}{h}\right) f(t)\, dt + \sum_{j=1}^s \frac{k_j}{w_j} \int_0^\infty K^{(m)}\!\left(\frac{x + t/w_j}{h}\right) f(t)\, dt \right].$  (2.7)

In the first integral let $u = \frac{x-t}{h}$, in the others $u = \frac{x + t/w_j}{h}$. Then

$E \hat f_h^{(m)}(x) = \frac{1}{h^m} \left[ \int_{-\infty}^{x/h} K^{(m)}(u) f(x - hu)\, du + \sum_{j=1}^s k_j \int_{x/h}^{\infty} K^{(m)}(u) f(-w_j (x - hu))\, du \right].$

In the first integral we have $x - hu > 0$ and $f(x - hu) = g(x - hu)$; in the second one $x - hu < 0$, so $\sum_{j=1}^s k_j f(-w_j(x - hu)) = g(x - hu)$. Hence,

$E \hat f_h^{(m)}(x) = \frac{1}{h^m} \int K^{(m)}(u)\, g(x - hu)\, du.$  (2.8)

By Assumption 2.1, $K^{(j)}(u)\, g^{(m-1-j)}(x - hu) = o(1)$ as $|u| \to \infty$ for $j = 0, \dots, m-1$, $h > 0$. Therefore the boundary terms vanish, and m integrations by parts give the following expression for (2.8):

$E \hat f_h^{(m)}(x) = \int K(u)\, g^{(m)}(x - hu)\, du.$  (2.9)

Since $\int K(t)\,dt = 1$, this implies (2.5).

2. Plug the definition of $M_k$ into (2.7) to get

$E \hat f_{k,h}^{(m)}(x) = -\frac{1}{C_{2k}^k} \sum_{0 < |l| \le k} (-1)^l C_{2k}^{k+l} \frac{1}{|l|\, l^m\, h^{m+1}} \left[ \int_0^\infty K^{(m)}\!\left(\frac{x-t}{lh}\right) f(t)\, dt + \sum_{j=1}^s \frac{k_j}{w_j} \int_0^\infty K^{(m)}\!\left(\frac{x + t/w_j}{lh}\right) f(t)\, dt \right].$  (2.10)

For $l < 0$, replacing $\frac{x-t}{lh} = u$ and $\frac{x + t/w_j}{lh} = u$ and arguing as in part 1, we obtain

$\int_0^\infty K^{(m)}\!\left(\frac{x-t}{lh}\right) f(t)\, dt + \sum_{j=1}^s \frac{k_j}{w_j} \int_0^\infty K^{(m)}\!\left(\frac{x + t/w_j}{lh}\right) f(t)\, dt = |l|\, h \int K^{(m)}(u)\, g(x - lhu)\, du,$

and similarly for $l > 0$. Therefore, (2.10) gives

$E \hat f_{k,h}^{(m)}(x) = -\frac{1}{C_{2k}^k} \sum_{0<|l|\le k} (-1)^l C_{2k}^{k+l} \frac{1}{(lh)^m} \int K^{(m)}(u)\, g(x - lhu)\, du$

(integrating by parts as above)

$= -\frac{1}{C_{2k}^k} \sum_{0<|l|\le k} (-1)^l C_{2k}^{k+l} \int K(u)\, g^{(m)}(x - lhu)\, du.$

Finally, since $\sum_{0<|l|\le k} (-1)^l C_{2k}^{k+l} = -C_{2k}^k$,

$E \hat f_{k,h}^{(m)}(x) - f^{(m)}(x) = -\frac{1}{C_{2k}^k} \sum_{l=-k}^{k} (-1)^l C_{2k}^{k+l} \int K(u)\, g^{(m)}(x - lhu)\, du = \frac{(-1)^{k+1}}{C_{2k}^k} \int K(u)\, \Delta_{hu}^{2k} g^{(m)}(x)\, du,$

which is (2.6).

The integral representations for biases obtained in Theorem 2.1 depend on the extension g, not the density f. Consequently, existing results for smooth functions (not densities) on $\mathbb{R}$ allow us to easily obtain bias estimates. If classical smoothness characteristics in terms of derivatives and Taylor expansions are used, then part 1 of Theorem 2.1 is relevant. This approach can be used for derivatives of orders $m < s$, when the bias order is $O(h^{s-m})$ and guaranteed to tend to zero as $h \to 0$. If, on the other hand, smoothness is characterized in terms of finite differences and Besov spaces, then the second representation should be

applied. It is appropriate for $m = s - 1$ or $m = s$, when the derivative of order s may have a residual fractional smoothness of order $0 < r < 1$. For $1 \le p, q \le \infty$ and $\Omega$ an open subset of $\mathbb{R}$, put $\Delta_{h,\Omega}^{2k} f(x) = \Delta_h^{2k} f(x)$ if $[x - kh, x + kh] \subseteq \Omega$ and $\Delta_{h,\Omega}^{2k} f(x) = 0$ otherwise, and let

$\|f\|_{b_{p,q}^r(\Omega)} = \left( \int_0^\infty \left( h^{-r} \left( \int_\Omega |\Delta_{h,\Omega}^{2k} f(x)|^p\, dx \right)^{1/p} \right)^{q} \frac{dh}{h} \right)^{1/q},$

where k is any integer satisfying $2k > r$, and in case $p = \infty$ and/or $q = \infty$ the integral(s) is (are) replaced by sup. Further, $\|f\|_{B_{p,q}^r(\Omega)} = \|f\|_{b_{p,q}^r(\Omega)} + \|f\|_{L_p(\Omega)}$. The Hestenes extension is known to be bounded from $B_{p,q}^r(\Omega)$ to $B_{p,q}^r(\mathbb{R})$.

Assumption 2.2. For $m \le s$, $\|f^{(m)}\|_{b_{\infty,q}^r((0,\infty))} < \infty$ with some $r > 0$ and $1 \le q \le \infty$, and $\int |K(t)|^{q'} |t|^{(r + 1/q) q'}\, dt < \infty$, where $1/q + 1/q' = 1$.

Theorem 2.2. 1. Let Assumption 2.1 hold and assume that $\int K(t)\, t^j\, dt = 0$ for $j = 1, \dots, s-m-1$, $\int |K(t)\, t^{s-m}|\, dt < \infty$ and $|f^{(s)}(x)| \le C$ for all $x > 0$. Then

$E \hat f_h^{(m)}(x) - f^{(m)}(x) = O(h^{s-m}) \quad \text{for all } x \in D_f.$  (2.11)

2. Let f and K satisfy Assumption 2.2. Then

$E \hat f_{k,h}^{(m)}(x) - f^{(m)}(x) = O(h^r) \quad \text{for all } x \in D_f.$  (2.12)

Proof. For part 1, we note that since $\int K(t)\, dt = 1$, we have from Theorem 2.1

$E \hat f_h^{(m)}(x) - f^{(m)}(x) = \int K(t)\, (g^{(m)}(x - th) - g^{(m)}(x))\, dt$
$= \int K(t) \left[ -g^{(m+1)}(x)\, th + \frac{1}{2!} g^{(m+2)}(x)(th)^2 - \dots + \frac{(-1)^{s-m}}{(s-m)!} g^{(s)}(x - th\tau)(th)^{s-m} \right] dt$

for some $\tau \in (0,1)$. If $\int K(t)\, t^j\, dt = 0$ for $j = 1, \dots, s-m-1$, then

$\left| E \hat f_h^{(m)}(x) - f^{(m)}(x) \right| \le \frac{h^{s-m}}{(s-m)!} \int |t^{s-m} K(t)|\, |g^{(s)}(x - th\tau)|\, dt \le C_1 h^{s-m},$

where the last inequality follows from the assumptions that $\int |K(t)\, t^{s-m}|\, dt < \infty$, $|f^{(s)}(x)| \le C$ for all $x > 0$ and the structure of $g^{(s)}$. For part 2, using (2.6) and Hölder's inequality we have

$\left| E \hat f_{k,h}^{(m)}(x) - f^{(m)}(x) \right| = c \left| \int K(t)\, |t|^{r + 1/q}\, \frac{\Delta_{th}^{2k} g^{(m)}(x)}{|t|^{r + 1/q}}\, dt \right|$
$\le c \left( \int |K(t)|^{q'} |t|^{(r+1/q) q'}\, dt \right)^{1/q'} \left( \int \sup_x \left| \frac{\Delta_{th}^{2k} g^{(m)}(x)}{|t|^{r}} \right|^q \frac{dt}{|t|} \right)^{1/q}$

(changing variables in the second integral)

$\le c\, h^r\, \|g^{(m)}\|_{b_{\infty,q}^r(\mathbb{R})} \left( \int |K(t)|^{q'} |t|^{(r+1/q) q'}\, dt \right)^{1/q'} = O(h^r).$

In the last line we used the bound $\|g^{(m)}\|_{B_{\infty,q}^r(\mathbb{R})} \le c\, \|f^{(m)}\|_{B_{\infty,q}^r((0,\infty))}$. From now on we give just expressions for bias and variance, leaving consequences of the type (2.11) and (2.12) to the reader.

Theorem 2.3. Suppose that f is continuous, $\sup_x f(x) < \infty$, $\sup_x |K^{(m)}(x)| < \infty$, $\int (K^{(m)}(t))^2\, dt < \infty$ and $\int |K^{(m)}(t)|\, dt < \infty$. In case $m > 0$ also let Assumption 2.1 hold. Denote $F(t) = M_k^{(m)}(t)$. Then

$V\left( \hat f_{k,h}^{(m)}(x) \right) = \frac{1}{n h^{2m+1}} \left\{ f(x) \int F^2(t)\, dt + o(1) \right\}, \quad x > 0.$  (2.13)

The estimator $\hat f_h^{(m)}(x)$ has a similar property with $F(t) = K^{(m)}(t)$ in place of $M_k^{(m)}(t)$.

Proof. Denoting $u_i = \frac{1}{h^{m+1}} \left[ F\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} F\left( \frac{x + X_i/w_j}{h} \right) \right]$, we have $\hat f_{k,h}^{(m)}(x) = \frac{1}{n} \sum_{i=1}^n u_i$ and, given the IID assumption,

$V\left( \hat f_{k,h}^{(m)}(x) \right) = \frac{1}{n} \left[ E u_1^2 - (E u_1)^2 \right].$  (2.14)

Set $w_0 = -1$, $k_0 = -1$, so that $k_0 / w_0 = 1$ and the first term in $u_i$ takes the same form as the others (only the fact that $w_0$ differs from $w_1, \dots, w_s$ matters below). Then

$E u_1^2 = \frac{1}{h^{2m+2}} \sum_{i,j=0}^s \frac{k_i}{w_i} \frac{k_j}{w_j} \int F\left( \frac{x + t/w_i}{h} \right) F\left( \frac{x + t/w_j}{h} \right) f(t)\, dt.$  (2.15)

Denote by $B(x, r) = \{ t : |x - t| \le r \}$ the ball centered at x with radius r. For any positive $\varepsilon$ there is an $a > 0$ such that $\int_{|u| > a} |F(u)|\, du < \varepsilon$. Note that

$B_i \equiv \left\{ t : \left| \frac{x + t/w_i}{h} \right| \le a \right\} = \{ t : |w_i x + t| \le |w_i| a h \} = B(-w_i x, |w_i| a h).$

Since all $w_i$ are different and $x > 0$, the balls $B_i$ do not overlap for small h. Let h be that small and denote $F_i(t) = F\left( \frac{x + t/w_i}{h} \right)$. Then, using $\mathbb{R} = B_i \cup B_i^c$ (here c stands for the complement), we have

$\int |F_i(t) F_j(t)|\, f(t)\, dt \le \sup f \int |F_i(t) F_j(t)|\, dt \le \sup f \left[ \int_{B_i \cap B_j^c} |F_i F_j|\, dt + \int_{B_i^c} |F_i F_j|\, dt \right] \le c_2 \left( \int_{B_j^c} |F_j(t)|\, dt + \int_{B_i^c} |F_i(t)|\, dt \right)$

(using $B_i \cap B_j = \emptyset$ for $i \ne j$). Further,

$\int_{B_i^c} |F_i(t)|\, dt = \int_{|w_i x + t| > |w_i| a h} \left| F\left( \frac{x + t/w_i}{h} \right) \right| dt = |w_i|\, h \int_{|u| > a} |F(u)|\, du = O(\varepsilon h).$

It follows that $\int F_i(t) F_j(t) f(t)\, dt = o(h)$ for $i \ne j$, and from (2.15)

$E u_1^2 = \frac{1}{h^{2m+2}} \left[ \sum_{i=0}^s \left( \frac{k_i}{w_i} \right)^2 \int F_i^2(t)\, f(t)\, dt + o(h) \right]$
$= \frac{1}{h^{2m+2}} \left[ \int_0^\infty F^2\!\left( \frac{x - t}{h} \right) f(t)\, dt + \sum_{i=1}^s \left( \frac{k_i}{w_i} \right)^2 \int_0^\infty F^2\!\left( \frac{x + t/w_i}{h} \right) f(t)\, dt + o(h) \right]$
$= \frac{1}{h^{2m+1}} \left[ \int_{-\infty}^{x/h} F^2(u)\, f(x - hu)\, du + \sum_{i=1}^s \frac{k_i^2}{w_i} \int_{x/h}^\infty F^2(u)\, f(-w_i(x - hu))\, du + o(1) \right].$

Defining

$g_1(x) = \begin{cases} f(x), & x > 0, \\ \sum_{i=1}^s \frac{k_i^2}{w_i} f(-w_i x), & x < 0, \end{cases}$  (2.16)

we can write

$E u_1^2 = \frac{1}{h^{2m+1}} \left[ \int F^2(u)\, g_1(x - hu)\, du + o(1) \right].$  (2.17)

$g_1$ is not a smooth extension of f, but it is integrable on $\mathbb{R}$ and continuous at $x > 0$. Therefore,

$\int F^2(u)\, g_1(x - hu)\, du \to g_1(x) \int F^2(u)\, du = f(x) \int F^2(u)\, du.$

Theorem 2.2, (2.14) and (2.17) finish the proof.
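To make the construction concrete, here is a sketch of estimator (2.4) with m = 0 applied to Exp(1) data, where the boundary bias of the plain kernel estimator at x = 0 is largest. The function name and parameter choices are ours; k = (3, -2) are the coefficients solving (2.1) for w = (1, 2).

```python
import numpy as np

def boundary_density_estimate(x, sample, h, w, k, kernel):
    """Estimator (2.4) with m = 0: the usual kernel average plus the Hestenes terms
    (k_j / w_j) K((x + X_i / w_j) / h), which supply the mass of the extension phi_s."""
    x = np.asarray(x, dtype=float)[..., None]
    out = kernel((x - sample) / h).mean(axis=-1)
    for kj, wj in zip(k, w):
        out += (kj / wj) * kernel((x + sample / wj) / h).mean(axis=-1)
    return out / h

gauss = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)
rng = np.random.default_rng(1)
sample = rng.exponential(1.0, size=5000)
# f(0) = 1 for Exp(1); the plain Rosenblatt-Parzen estimator loses roughly half the mass at 0
fhat0 = boundary_density_estimate(np.array([0.0]), sample, 0.15, (1.0, 2.0), (3.0, -2.0), gauss)
```

Far from the boundary the reflection terms are numerically negligible, so the estimator coincides with the classical one there, as the theory predicts.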

3 Estimation on a bounded interval

Let f be defined on $D_f = (0,1)$ and let the vectors w, k be as before. We would like to extend f to the left of zero using (2.2). To obtain a common domain for the components of $\varphi_1$, we put $a = \min_i (1/w_i)$ and let

$\varphi_1(x) = \sum_{j=1}^s k_j f(-w_j x), \quad -a < x < 0.$

The sewing conditions at 0 are satisfied as before. Put

$\varphi_2(x) = \sum_{j=1}^s k_j f(1 - w_j (x - 1)), \quad 1 < x < 1 + a,$

and define the extension by

$g(x) = \begin{cases} \varphi_1(x), & -a < x < 0, \\ f(x), & 0 < x < 1, \\ \varphi_2(x), & 1 < x < 1 + a. \end{cases}$  (3.1)

The sewing condition holds at $x = 1$:

$\varphi_2^{(j)}(1) = \sum_{m=1}^s (-w_m)^j k_m f^{(j)}(1) = f^{(j)}(1), \quad j = 0, \dots, s-1.$

Suppose f is s times differentiable, m is an integer, $0 \le m \le s$, the kernel K is m times differentiable, and let $X_1, \dots, X_n$ be an IID sample from f. The estimator of $f^{(m)}(x)$, $x \in (0,1)$, is defined by

$\hat f_h^{(m)}(x) = \frac{1}{n h^{m+1}} \sum_{i=1}^n \left[ K^{(m)}\!\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} \left( I_{\{X_i < a w_j\}} K^{(m)}\!\left( \frac{x + X_i/w_j}{h} \right) + I_{\{X_i > 1 - a w_j\}} K^{(m)}\!\left( \frac{x - 1 + (X_i - 1)/w_j}{h} \right) \right) \right].$  (3.2)

Theorem 3.1. Let f be s times differentiable on $D_f = (0,1)$ and let K be an m times differentiable kernel with finite support, $0 \le m \le s$. Let $h > 0$ be small (specifically, it should satisfy the condition $\mathrm{supp}\, K \subseteq (-a/h, a/h)$). Then, the following statements hold:

1. For a classical kernel K, (2.5) is true.

2. If in (3.2) K is replaced by $M_k$, then (2.6) is true.

Proof. 1. Let $I_A$ denote the indicator of a set A. Then, for an arbitrary function g, $\sum_{X_i < a w_j} g(X_i) =$

12 n i= I {X i<aw j}g(x i. Using indicators in (3.2 and te fact tat {X i } n i= is IID, we ave E ˆf (m (x = m Canging variables using x t E ˆf (m (x = m (x / aw j K (m xt/wj = u, (x / x/ (x a/ Applying (3., we ave E ˆf (m (x = m x/ (x / ( x t K (m s f(tdt ( x (t /wj = u, x (t /wj k j w j= j f(tdt [ awj ]} = u, we ave [ s (xa/ K (m (u f(x udu k j j= K (m (u f( w j (x u du (x / (x a/ K (m (u f(x udu ]}. (xa/ x/ ( x K (m t/wj f(tdt. (3.3 x/ s K (m (u k j f( w j (x u du j= K (m (u f( w j (x udu s K (m (u k j f( w j (x udu = (xa/ m K (m (u g(x udu. (3.4 (x a/ egardless of x (,, te interval ((x a/, (x a/ contains ( a/, a/ wic contains suppk j= for all small. Terefore, E ˆf (m (x = m K (m (u g(x udu. For tis to old formally, g sould be extended outside ( a, a smootly; te manner of extension does not affect te above integral. Finally, integration by parts and te condition K(tdt = prove te statement. 2 Since K is assumed to ave finite support, we do not need Assumption 2.. Calculations done in te proof of Teorem 2.2 afterequation (2. include cange of variables and integration by parts and can be easily repeated ere. emark. Instead of requiring K to ave compact support one can define te extension so tat it is sufficiently smoot and as compact support. Take a smoot function suc tat (x = on ( a/2, a/2 and (x = for x outside ( a, a. Instead of (3. consider te extension g (x = (xg(x and cange

(3.2) accordingly. Then the statement of Theorem 3.1 will be true for $g_\eta$ without the assumption that K has compact support. When $m = 0$ and integration by parts is not necessary, the function $\eta$ does not have to be smooth: g can be extended by zero outside $(-a, 1+a)$ or, equivalently, one can take $\eta(x) = 1$ on $(-a, 1+a)$ and $\eta(x) = 0$ for x outside $(-a, 1+a)$.

Theorem 3.2. Under the conditions of Theorem 3.1, the following is true.

1. For a classical kernel K,

$V\left( \hat f_h^{(m)}(x) \right) = \frac{1}{n h^{2m+1}} \left\{ f(x) \int F^2(t)\, dt + O(h) \right\}, \quad x \in D_f,$

where $F(x) = K^{(m)}(x)$.

2. If $M_k$ is used in place of K, then the same asymptotic expression is true with $F(x) = M_k^{(m)}(x)$.

Proof. Define

$u_i = \frac{1}{h^{m+1}} \left[ K^{(m)}\!\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} \left( I_{\{X_i < a w_j\}} K^{(m)}\!\left( \frac{x + X_i/w_j}{h} \right) + I_{\{X_i > 1 - a w_j\}} K^{(m)}\!\left( \frac{x - 1 + (X_i - 1)/w_j}{h} \right) \right) \right].$

Then we can use equations similar to (2.14) and (2.15). We want to show that in the analog of (2.15) all cross-products vanish if h is small. Suppose $\mathrm{supp}\, K \subseteq (-b, b)$ for some $b > 0$. In the cross-products we have integrands that include products of the functions

$f_0(t) = K^{(m)}\!\left( \frac{x - t}{h} \right), \quad f_j(t) = K^{(m)}\!\left( \frac{x + t/w_j}{h} \right), \quad g_j(t) = K^{(m)}\!\left( \frac{x - 1 + (t - 1)/w_j}{h} \right).$

It is easy to see that

$\mathrm{supp}\, f_0 \subseteq B(x, bh), \quad \mathrm{supp}\, f_j \subseteq B(-w_j x, w_j b h), \quad \mathrm{supp}\, g_j \subseteq B(1 + w_j(1 - x), w_j b h).$

Since the radii of the balls here tend to zero, for small h they do not overlap if their centers are different. It is easy to show that all the centers are indeed different using the conditions that $0 < x < 1$, $w_j > 0$ for all j and all $w_j$ are different.

Thus, for small h,

$E u_1^2 = \frac{1}{h^{2m+2}} \left\{ \int_0^1 F^2\!\left( \frac{x - t}{h} \right) f(t)\, dt + \sum_{j=1}^s \left( \frac{k_j}{w_j} \right)^2 \left[ \int_0^{a w_j} F^2\!\left( \frac{x + t/w_j}{h} \right) f(t)\, dt + \int_{1 - a w_j}^1 F^2\!\left( \frac{x - 1 + (t-1)/w_j}{h} \right) f(t)\, dt \right] \right\}.$

Define

$g_1(x) = \begin{cases} \sum_{i=1}^s \frac{k_i^2}{w_i} f(-w_i x), & -a < x < 0, \\ f(x), & x \in (0,1), \\ \sum_{i=1}^s \frac{k_i^2}{w_i} f(1 - w_i(x-1)), & 1 < x < 1 + a. \end{cases}$  (3.5)

Repeating the changes of variables that led from (3.3) to (3.4) and recalling that $\mathrm{supp}\, K \subseteq (-a/h, a/h)$, we obtain

$E u_1^2 = \frac{1}{h^{2m+1}} \int_{(x-1-a)/h}^{(x+a)/h} F^2(u)\, g_1(x - hu)\, du = \frac{1}{h^{2m+1}} \int F^2(u)\, g_1(x - hu)\, du.$

Overall, using Theorem 3.1,

$V\left( \hat f_h^{(m)}(x) \right) = \frac{1}{n} \left[ \frac{1}{h^{2m+1}} \int F^2(u)\, g_1(x - hu)\, du - \left( \frac{1}{h^m} \int K^{(m)}(u)\, g(x - hu)\, du \right)^2 \right] = \frac{1}{n h^{2m+1}} \left( f(x) \int F^2(u)\, du + O(h) \right).$

The proof for $M_k$ is the same.

4 Estimation of smooth pieces of densities

Ideas developed in the previous sections can be applied to estimation of densities with discontinuities or with discontinuous derivatives. Here we provide two results. Cline and Hart (1991) used Schuster's symmetrization device to improve bias around a discontinuity point. The first result in this section applies, for example, to the Laplace distribution, which is continuous everywhere but has a discontinuous derivative at zero. The usual kernel density estimator on the whole line will inevitably have a large bias at zero. The suggestion is to estimate its smooth restrictions $f_+$ and $f_-$ on the right half-axis $(0, \infty)$ and left half-axis $(-\infty, 0)$. As a second example, consider a piece-wise constant density on the interval $(0,1)$. The restriction of the density to each interval where it is constant is smooth

and can be estimated using our approach. Obviously, the jumps of the estimators will estimate the jumps of the density.

$f_+$ and $f_-$ do not need to have the same degree of smoothness. Suppose that the right part $f_+$ is s times differentiable and $0 \le m \le s$. The estimator of $f_+^{(m)}(x)$, $x > 0$, is defined by

$\hat f_{+,h}^{(m)}(x) = \frac{1}{n h^{m+1}} \sum_{X_i > 0} \left[ K^{(m)}\!\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} K^{(m)}\!\left( \frac{x + X_i/w_j}{h} \right) \right].$

Theorem 4.1. In Theorem 2.1 and in definition (2.3) let $f = f_+$ and $D_f = (0, \infty)$. If the conditions of Theorem 2.1 are satisfied for $f_+$ and K, then 1) (2.5) and (2.6) are true and 2) for the variance of $\hat f_{+,h}^{(m)}(x)$ one has (2.13).

Proof. 1. Instead of (2.7) we have

$E \hat f_{+,h}^{(m)}(x) = \frac{1}{h^{m+1}} E\, I_{\{X_1 > 0\}} K^{(m)}\!\left( \frac{x - X_1}{h} \right) + \frac{1}{h^{m+1}} \sum_{j=1}^s \frac{k_j}{w_j} E\, I_{\{X_1 > 0\}} K^{(m)}\!\left( \frac{x + X_1/w_j}{h} \right)$
$= \frac{1}{h^{m+1}} \left[ \int_0^\infty K^{(m)}\!\left( \frac{x-t}{h} \right) f_+(t)\, dt + \sum_{j=1}^s \frac{k_j}{w_j} \int_0^\infty K^{(m)}\!\left( \frac{x + t/w_j}{h} \right) f_+(t)\, dt \right].$

Repeating the calculations that led from (2.7) to (2.9), we get $E \hat f_{+,h}^{(m)}(x) = \int K(t)\, g_+^{(m)}(x - th)\, dt$ (those calculations did not use the fact that f was a density).

2. Similarly, replacing f by $f_+$ and repeating the argument of Theorem 2.3, we see that

$V\left( \hat f_{+,h}^{(m)}(x) \right) = \frac{1}{n h^{2m+1}} \left\{ f_+(x) \int F^2(t)\, dt + o(1) \right\}, \quad x > 0,$

where $F(t) = K^{(m)}(t)$.

Now suppose that the domain $D_f$ of a density f contains a finite segment $(c,d)$ such that the restriction $f_r$ of f onto $(c,d)$ is smooth. Denote by

$\varphi_1(x) = \sum_{j=1}^s k_j f_r(c - w_j(x - c)), \quad c - a_1 < x < c,$

the extension of $f_r$ to the left of c, and by

$\varphi_2(x) = \sum_{j=1}^s k_j f_r(d - w_j(x - d)), \quad d < x < d + a_1,$

the extension of $f_r$ to the right of d. Here we choose $a_1 = a(d - c)$, to make sure that $c - w_j(x - c)$ and $d - w_j(x - d)$ belong to $(c,d)$. The extended restriction then is defined by

$g_r(x) = \begin{cases} \varphi_1(x), & c - a_1 < x < c, \\ f_r(x), & c < x < d, \\ \varphi_2(x), & d < x < d + a_1. \end{cases}$  (4.1)

Definition (3.2) guides us to define

$\hat f_h^{(m)}(x) = \frac{1}{n h^{m+1}} \left\{ \sum_{c < X_i < d} K^{(m)}\!\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} \left[ \sum_{c < X_i < c + a_1 w_j} K^{(m)}\!\left( \frac{x - c + (X_i - c)/w_j}{h} \right) + \sum_{d - a_1 w_j < X_i < d} K^{(m)}\!\left( \frac{x - d + (X_i - d)/w_j}{h} \right) \right] \right\}, \quad x \in (c,d).$

Theorem 4.2. Let $f_r$ be s times differentiable, $0 \le m \le s$, and let K have compact support. For sufficiently small h (such that $\mathrm{supp}\, K \subseteq (-a_1/h, a_1/h)$) we have

$E \hat f_h^{(m)}(x) - f_r^{(m)}(x) = \int K(u) \left[ g_r^{(m)}(x - uh) - g_r^{(m)}(x) \right] du, \quad c < x < d.$  (4.2)

Further,

$V\left( \hat f_h^{(m)}(x) \right) = \frac{1}{n h^{2m+1}} \left\{ f(x) \int F^2(t)\, dt + O(h) \right\}, \quad c < x < d,$

where $F(t) = K^{(m)}(t)$ or $F(t) = M_k^{(m)}(t)$, depending on which kernel is used in the definition of $\hat f_h^{(m)}(x)$.

Proof. By the IID assumption,

$E \hat f_h^{(m)}(x) = \frac{1}{h^{m+1}} \left\{ \int_c^d K^{(m)}\!\left( \frac{x-t}{h} \right) f_r(t)\, dt + \sum_{j=1}^s \frac{k_j}{w_j} \left[ \int_c^{c + a_1 w_j} K^{(m)}\!\left( \frac{x - c + (t - c)/w_j}{h} \right) f_r(t)\, dt + \int_{d - a_1 w_j}^d K^{(m)}\!\left( \frac{x - d + (t - d)/w_j}{h} \right) f_r(t)\, dt \right] \right\}.$

The obvious changes of variables are $\frac{x-t}{h} = u$, $\frac{w_j(x - c) + (t - c)}{w_j h} = u$ and $\frac{w_j(x - d) + (t - d)}{w_j h} = u$. The mean value becomes

$E \hat f_h^{(m)}(x) = \frac{1}{h^m} \left\{ \int_{(x-d)/h}^{(x-c)/h} K^{(m)}(u)\, f_r(x - hu)\, du + \sum_{j=1}^s k_j \left[ \int_{(x-c)/h}^{(x-c+a_1)/h} K^{(m)}(u)\, f_r(c - w_j(x - c) + w_j h u)\, du + \int_{(x-d-a_1)/h}^{(x-d)/h} K^{(m)}(u)\, f_r(d - w_j(x - d) + w_j h u)\, du \right] \right\}.$

Applying (4.1) we see that this is the same as

$E \hat f_h^{(m)}(x) = \frac{1}{h^m} \int_{(x-d-a_1)/h}^{(x-c+a_1)/h} K^{(m)}(u)\, g_r(x - hu)\, du.$

Regardless of $x \in (c,d)$, the interval $((x-d-a_1)/h, (x-c+a_1)/h)$ contains $(-a_1/h, a_1/h)$, which contains $\mathrm{supp}\, K$ for all small h. Therefore, also integrating by parts,

$E \hat f_h^{(m)}(x) = \frac{1}{h^m} \int K^{(m)}(u)\, g_r(x - hu)\, du = \int K(u)\, g_r^{(m)}(x - hu)\, du.$

The derivation of the expression for the variance largely repeats that from Theorem 3.2. Define

$u_i = \frac{1}{h^{m+1}} \left\{ I_{\{c < X_i < d\}} K^{(m)}\!\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} \left[ I_{\{c < X_i < c + a_1 w_j\}} K^{(m)}\!\left( \frac{x - c + (X_i - c)/w_j}{h} \right) + I_{\{d - a_1 w_j < X_i < d\}} K^{(m)}\!\left( \frac{x - d + (X_i - d)/w_j}{h} \right) \right] \right\}.$

Then we can use equations similar to (2.14) and (2.15).

We want to show that in the analog of (2.15) all cross-products vanish if h is small. Suppose $\mathrm{supp}\, K \subseteq (-b, b)$ for some $b > 0$. In the cross-products we have integrands that include products of the functions

$f_0(t) = K^{(m)}\!\left( \frac{x - t}{h} \right), \quad f_j(t) = K^{(m)}\!\left( \frac{x - c + (t - c)/w_j}{h} \right), \quad g_j(t) = K^{(m)}\!\left( \frac{x - d + (t - d)/w_j}{h} \right).$

It is easy to see that

$\mathrm{supp}\, f_0 \subseteq B(x, bh), \quad \mathrm{supp}\, f_j \subseteq B(c - w_j(x - c), w_j b h), \quad \mathrm{supp}\, g_j \subseteq B(d - w_j(x - d), w_j b h).$

Since the radii of the balls here tend to zero, for small h they do not overlap if their centers are different. It is easy to show that all the centers are indeed different using the conditions that $c < x < d$, $w_j > 0$ for all j and all $w_j$ are different. Thus, for small h,

$E u_1^2 = \frac{1}{h^{2m+2}} \left\{ \int_c^d F^2\!\left( \frac{x-t}{h} \right) f_r(t)\, dt + \sum_{j=1}^s \left( \frac{k_j}{w_j} \right)^2 \left[ \int_c^{c + a_1 w_j} F^2\!\left( \frac{x - c + (t-c)/w_j}{h} \right) f_r(t)\, dt + \int_{d - a_1 w_j}^d F^2\!\left( \frac{x - d + (t-d)/w_j}{h} \right) f_r(t)\, dt \right] \right\}.$

Define

$g_r^*(x) = \begin{cases} \sum_{i=1}^s \frac{k_i^2}{w_i} f_r(c - w_i(x - c)), & c - a_1 < x < c, \\ f_r(x), & x \in (c,d), \\ \sum_{i=1}^s \frac{k_i^2}{w_i} f_r(d - w_i(x - d)), & d < x < d + a_1. \end{cases}$  (4.3)

Repeating the changes of variables used for the mean and recalling that $\mathrm{supp}\, K \subseteq (-a_1/h, a_1/h)$, we obtain

$E u_1^2 = \frac{1}{h^{2m+1}} \int_{(x-d-a_1)/h}^{(x-c+a_1)/h} F^2(u)\, g_r^*(x - hu)\, du = \frac{1}{h^{2m+1}} \int F^2(u)\, g_r^*(x - hu)\, du.$

Overall, combining this with the bias representation (4.2), we obtain

$V\left( \hat f_h^{(m)}(x) \right) = \frac{1}{n} \left[ \frac{1}{h^{2m+1}} \int F^2(u)\, g_r^*(x - hu)\, du - \left( \frac{1}{h^m} \int K^{(m)}(u)\, g_r(x - hu)\, du \right)^2 \right] = \frac{1}{n h^{2m+1}} \left( f_r(x) \int F^2(u)\, du + O(h) \right).$
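Section 4's device is easy to exercise numerically: estimate the one-sided pieces $f_+$ and $f_-$ of a density with a known discontinuity at 0 and difference the two estimates there. The sketch below is our construction following Theorem 4.1 (not the paper's simulation code); the test density is a two-sided exponential with $f(0^-) = 2/3$ and $f(0^+) = 1/3$, so the jump is 1/3.

```python
import numpy as np

def one_sided_estimate(x, sample_part, n_total, h, w, k, kernel, side=1):
    """Estimate the smooth piece f_+ (side=+1) or f_- (side=-1) of a density with a
    known discontinuity at 0: only observations on that side enter, and the Hestenes
    terms extend the piece across 0 (estimator of Section 4 with m = 0)."""
    x = np.asarray(x, dtype=float)[..., None]
    t = side * sample_part                  # map the chosen side onto [0, inf)
    out = kernel((side * x - t) / h).sum(axis=-1)
    for kj, wj in zip(k, w):
        out += (kj / wj) * kernel((side * x + t / wj) / h).sum(axis=-1)
    return out / (n_total * h)

gauss = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)
rng = np.random.default_rng(3)
n = 4000
signs = rng.random(n) < 1.0 / 3.0           # P(X > 0) = 1/3
mags = rng.exponential(1.0, size=n)
sample = np.where(signs, mags, -mags)       # f(x) = (1/3) e^{-x} (x>0), (2/3) e^{x} (x<0)
w, k = (1.0, 2.0), (3.0, -2.0)              # coefficients solving (2.1) for these w
f_plus0 = one_sided_estimate(np.array([0.0]), sample[sample > 0], n, 0.2, w, k, gauss)
f_minus0 = one_sided_estimate(np.array([0.0]), sample[sample < 0], n, 0.2, w, k, gauss, side=-1)
jump = f_minus0[0] - f_plus0[0]
```

Because each piece is estimated with the boundary-corrected estimator, the two one-sided limits, and hence the jump, are recovered without the smoothing-over that the whole-line estimator produces at a discontinuity.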

5 Estimators satisfying zero boundary conditions

For simplicity we consider only densities on $D_f = (0, \infty)$. For estimator (2.4) we provide two modifications designed to satisfy zero boundary conditions for the estimator itself and/or its derivatives. In both cases the bias rate is retained. The first estimator also reduces the variance near zero. The main difference between the estimators is in the number of derivatives that are guaranteed to vanish. Everywhere it is assumed that f is s times differentiable, $0 \le m \le s$, and the purpose is to estimate $f^{(m)}(x)$.

Let l be an integer between m and s and let $\psi$ be a function on $D_f$ with the properties

$\psi(0) = \dots = \psi^{(l-m-1)}(0) = 0, \quad \psi^{(l-m)}(0) \ne 0, \quad \psi(x) = 1 \text{ for } x \ge 1, \quad 0 \le \psi(x) \le 1 \text{ everywhere.}$  (5.1)

If

$f^{(m)}(0) = \dots = f^{(s-1)}(0) = 0,$  (5.2)

put $\alpha = 1$. Otherwise, let $k_1$ be the least integer such that $m \le k_1 \le s-1$, $f^{(k_1)}(0) \ne 0$, and put $\alpha = \frac{s-m}{k_1 - m}$. For any estimator $\hat f_h^{(m)}(x)$ of $f^{(m)}(x)$ define another estimator

$\tilde f_h^{(m)}(x) = \psi(x / h^\alpha)\, \hat f_h^{(m)}(x).$

Theorem 5.1. Let the estimator $\hat f_h^{(m)}(x)$ of $f^{(m)}(x)$ satisfy (2.11). Then $\tilde f_h^{(m)}(x)$ satisfies

$\left( \frac{d}{dx} \right)^j \tilde f_h^{(m)}(x) \Big|_{x=0} = 0, \quad j = 0, \dots, l-m-1,$  (5.3)

$E \tilde f_h^{(m)}(x) - f^{(m)}(x) = O(h^{s-m}) \quad \text{for all } x \in D_f,$  (5.4)

and

$V\left( \tilde f_h^{(m)}(x) \right) \le V\left( \hat f_h^{(m)}(x) \right), \ x \ge h^\alpha; \qquad V\left( \tilde f_h^{(m)}(x) \right) = V\left( \hat f_h^{(m)}(x) \right) O\left( \left( \frac{x}{h^\alpha} \right)^{2(l-m)} \right), \ x < h^\alpha.$  (5.5)

Proof. (5.1) implies

$\left( \frac{d}{dx} \right)^j \tilde f_h^{(m)}(x) \Big|_{x=0} = \sum_{i=0}^j C_j^i \left[ \left( \frac{d}{dx} \right)^i \psi(x/h^\alpha) \right] \left[ \left( \frac{d}{dx} \right)^{j-i} \hat f_h^{(m)}(x) \right] \Big|_{x=0} = 0$

for $j = 0, \dots, l-m-1$, so (5.3) is satisfied. To prove (5.4), consider two cases.

Case $x \ge h^\alpha$. (5.4) follows trivially from (2.11) because $\tilde f_h^{(m)}(x) = \hat f_h^{(m)}(x)$.

Case $x < h^\alpha$. Obviously, in the equation

$E \tilde f_h^{(m)}(x) - f^{(m)}(x) = \psi(x/h^\alpha) \left[ E \hat f_h^{(m)}(x) - f^{(m)}(x) \right] + \left[ \psi(x/h^\alpha) - 1 \right] f^{(m)}(x)$

the first term on the right is $O(h^{s-m})$, and it remains to prove that $[\psi(x/h^\alpha) - 1] f^{(m)}(x) = O(h^{s-m})$. Suppose (5.2) is true, so that $\alpha = 1$. Then

$f^{(m)}(x) = f^{(m)}(0) + \dots + f^{(s-1)}(0) \frac{x^{s-1-m}}{(s-1-m)!} + O(x^{s-m}) = O\left( (h^\alpha)^{s-m} \right) = O(h^{s-m}).$  (5.6)

Suppose (5.2) is wrong. Then $\alpha = \frac{s-m}{k_1-m}$ and

$f^{(m)}(x) = f^{(m)}(0) + \dots + f^{(k_1)}(0) \frac{x^{k_1-m}}{(k_1-m)!} + o(x^{k_1-m}) = O(x^{k_1-m}) = O(h^{\alpha(k_1-m)}) = O(h^{s-m}).$  (5.7)

(5.6) and (5.7) prove what we need. To prove (5.5), consider two cases.

Case $x \ge h^\alpha$. The first part of (5.5) is obvious because $\tilde f_h^{(m)}(x) = \hat f_h^{(m)}(x)$.

Case $x < h^\alpha$. From (5.1) it follows that

$\psi(x/h^\alpha) = \psi(0) + \dots + \psi^{(l-m)}(0) \frac{(x/h^\alpha)^{l-m}}{(l-m)!} + o\left( (x/h^\alpha)^{l-m} \right) = O\left( \left( \frac{x}{h^\alpha} \right)^{l-m} \right),$

which proves the second part of (5.5).

Let $\psi_1$ be a function with the properties: $\psi_1$ is m times differentiable on $D_f$, $\psi_1(x) = 1$ for $x \in (0, 2]$, $\psi_1(x) = 0$ for $x \ge 3$, $0 \le \psi_1(x) \le 1$ everywhere. Define

$\hat f_h^{(m)}(x) = \frac{1}{n h^{m+1}} \sum_{i=1}^n \psi_1(X_i / x) \left[ K^{(m)}\!\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} K^{(m)}\!\left( \frac{x + X_i/w_j}{h} \right) \right].$

Theorem 5.2. All derivatives of $\hat f_h^{(m)}(x)$ that exist vanish at zero, and

$E \hat f_h^{(m)}(x) - f^{(m)}(x) = \int K(t) \left[ g_x^{(m)}(x - th) - g_x^{(m)}(x) \right] dt, \quad x \in D_f,$  (5.8)

where $g_x$ is the Hestenes extension of $f_x(t) = f(t)\, \psi_1(t/x)$. Besides, $V\left( \hat f_h^{(m)}(x) \right)$ satisfies (2.13).
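Both modifications are cheap to implement. For example, Theorem 5.1's construction is a single multiplication by a taper; the specific $\psi$ below (a truncated power) is our illustrative choice satisfying (5.1) for $m = 0$, not one prescribed by the paper.

```python
import numpy as np

def taper(t, p):
    """A simple psi satisfying (5.1) with l - m = p: psi(t) = t**p for 0 <= t < 1, and 1 for t >= 1."""
    t = np.asarray(t, dtype=float)
    return np.where(t < 1.0, np.clip(t, 0.0, None) ** p, 1.0)

def zero_boundary_estimate(x, fhat_vals, h, alpha, p):
    """Theorem 5.1's modification: multiply any estimate f_hat(x) by psi(x / h**alpha).
    The result vanishes at 0 and agrees with f_hat for x >= h**alpha."""
    return taper(np.asarray(x, dtype=float) / h**alpha, p) * fhat_vals

# stand-in estimate values (e.g. from estimator (2.4)) on a small grid
grid = np.array([0.0, 0.05, 0.1, 0.5])
vals = zero_boundary_estimate(grid, np.ones(4), h=0.1, alpha=1.0, p=2)
# -> [0.0, 0.25, 1.0, 1.0]: zero at the boundary, untouched for x >= h**alpha
```

The bias rate is unchanged because the taper differs from 1 only on $(0, h^\alpha)$, where $f^{(m)}$ itself is $O(h^{s-m})$ by the Taylor arguments (5.6)-(5.7).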

Proof. For almost all samples $\min_i X_i > 0$, and for $0 < x < \frac{1}{3} \min_i X_i$ one has $\psi_1(X_i / x) = 0$, $i = 1, \dots, n$. Hence $\hat f_h^{(m)}(x)$ vanishes, together with all its derivatives, in a neighborhood of zero for almost all samples. Following (2.7) we see that the mean is

$E \hat f_h^{(m)}(x) = \frac{1}{h^{m+1}} \int_0^\infty \left[ K^{(m)}\!\left( \frac{x-t}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} K^{(m)}\!\left( \frac{x + t/w_j}{h} \right) \right] \psi_1\!\left( \frac{t}{x} \right) f(t)\, dt.$

Here the function $f_x(t) = f(t)\, \psi_1(t/x)$ has support $\mathrm{supp}\, f_x \subseteq [0, 3x]$. Implementing the changes applied after (2.8), including integration by parts, we obtain an analog of (2.9) with $g_x$ instead of g. $g_x$ is obtained by replacing f in (2.2)-(2.3) by $f_x$. The rest is familiar. Repeating the calculations from the proof of Theorem 2.3 with $f_x$ in place of f and using $f_x(x) = f(x)$, we obtain (2.13).

6 Simulations

We conducted a series of simulations to provide some evidence on the finite sample performance of our estimators and to contrast them with that of some of the most commonly used estimators for densities with supports that are subsets of $\mathbb{R}$. We focus on two broad cases: first, we consider densities that are defined on $[0, \infty)$; second, we consider the case of a density with a discontinuity at $x = 0$. In the first case we considered random variables with the following densities:

1. Normal density left-truncated at $x = 0$: $f_{TN}(x) = \frac{2}{\sqrt{2\pi}} \exp\left( -\frac{1}{2} x^2 \right)$,
2. Gamma density: $f_G(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-\beta x)$,
3. Chi-squared density: $f_\chi(x) = \frac{1}{2^{v/2} \Gamma(v/2)} x^{v/2-1} \exp\left( -\frac{x}{2} \right)$ with $v = 5$,
4. Exponential density: $f_E(x) = \lambda \exp(-\lambda x)$ with $\lambda = 1$.

For each density we generated samples of size $n = 250, 500$ and calculated the following estimators: $\hat f$, $\hat f_S$ and $\hat f_k$ for $k = 1, 2, 3$, $w_i = 1/i, i$ and $s = 1, 2$. In each case we used a Gaussian kernel and a Gaussian seed kernel, as necessary. We also calculated the Gamma kernel estimator of Chen (2000), which we denote by $\hat f_C$. For each estimator we selected an optimal bandwidth by minimizing the integrated squared error over

a fixed grid on the interval $(0, 4)$ with step $10^{-3}$. Figure 1 gives a set of estimates for one of the generated samples of size $n = 250$ associated with the Gamma density. For each sample, and each estimator, a root average squared error (RASE) over the grid is calculated. The averages of RASE across all generated samples are reported in Table 1. As expected, the RASE, for each estimator and across all densities, decreases as n increases from 250 to 500. Except for data generated from the $f_\chi$ density, all estimators that are based on the Hestenes extension and constructed using the $M_k$ kernels (including the case where $k = 1$ and $M_1 = K$) have smaller RASE when $w_i = 1/i$. Also, for estimators $\hat f_2$ and $\hat f_3$, choosing $s = 1$ reduces RASE (compared with $s = 2$) for all densities, except $f_G$ and $f_E$ when $n = 500$. Except for the case of the truncated normal density $f_{TN}$, there is always a Hestenes based estimator that outperforms the estimator $\hat f_S$ proposed by Schuster (1985). In fact, if we choose $w_i = 1/i$, all Hestenes based estimators have smaller RASE than that of $\hat f_S$. For all densities, there is always a Hestenes based estimator that outperforms the estimator $\hat f_C$ proposed by Chen (2000). In addition, as in the comparison with $\hat f_S$, if we choose $w_i = 1/i$, all Hestenes based estimators have smaller RASE than that of $\hat f_C$ for all densities. Lastly, as expected, the traditional Rosenblatt-Parzen estimator has the poorest performance across all densities and all estimators. The choice of kernel, or seed-kernel, does not qualitatively impact the relative performance described above. This preliminary experimental evidence seems to support the use of $w_i = 1/i$ and the choice of $s = 1$ relative to $s = 2$. Results for $s = 3$ and $k > 3$ (not reported) suggest rapid deterioration of performance. The evidence also suggests that Hestenes-based estimators outperform the well known estimators proposed by Schuster (1985) and Chen (2000).

In the second broad case, we consider a density that has a point of discontinuity at $x = 0$.
Specifically,

$f(x) = \begin{cases} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{x^2}{2\sigma^2} \right) & \text{if } x < 0, \\ \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{1}{2} x^2 \right) & \text{if } x \ge 0, \end{cases}$  (6.1)

where $\sigma^2$ controls the size of the jump. If $\sigma^2 = 1$ the density is continuous everywhere, and for $0 < \sigma^2 < 1$ the jump at $x = 0$ is given by $J_f(0) = f(0^-) - f(0^+) = \frac{1}{\sqrt{2\pi}} \left( \frac{1}{\sigma} - 1 \right)$. The left and right panels of Figure 2 provide graphs of this density for $\sigma^2 = 0.5$ and $\sigma^2 = 0.25$. With knowledge of the point of discontinuity ($x = 0$) we evaluate two estimators for f. The first, $\hat f_H$,

based on the Hestenes extension, is composed of the estimator

$\hat f_+(x) = \frac{1}{nh} \sum_{X_i > 0} \left[ K\!\left( \frac{x - X_i}{h} \right) + \sum_{j=1}^s \frac{k_j}{w_j} K\!\left( \frac{x + X_i/w_j}{h} \right) \right]$

for the part $f_+$ of f defined on $[0, \infty)$, and $\hat f_-$, the analog estimator for the part $f_-$ of f defined on $(-\infty, 0]$. The jump $J_f(0)$ is estimated by $\hat J_f(0) = \hat f_-(0) - \hat f_+(0)$. The second estimator is composed of a Rosenblatt-Parzen estimator

$\hat f_{+,RP}(x) = \frac{1}{nh} \sum_{X_i > 0} K\!\left( \frac{x - X_i}{h} \right)$

for $f_+$ and $\hat f_{-,RP}$, the analog estimator for $f_-$; $J_f(0)$ is then estimated by $\hat J_{f,RP}(0) = \hat f_{-,RP}(0) - \hat f_{+,RP}(0)$.

For each density ($\sigma^2 = 0.25, 0.5$) we generated samples of size $n = 250, 500$ and calculated the Rosenblatt-Parzen and Hestenes estimators for $w_i = 1/i, i, i/(s+1)$ and $s = 1, 2$ using a Gaussian kernel. For each estimator we selected an optimal bandwidth by minimizing the integrated squared error over a fixed grid on the interval $(-2.5, 3)$ with step $5 \times 10^{-3}$ for the case where $\sigma^2 = 0.5$, and $(-1.5, 3)$ with step $5 \times 10^{-3}$ for the case where $\sigma^2 = 0.25$. For each sample, and each estimator, a root average squared error (RASE) over the grid is calculated. The averages of RASE across all generated samples are reported in Tables 2 and 3. Average estimated jumps (across all generated samples), as well as their deviations from the true jump, are also provided for both estimators.

We observe the following general regularities. First, all estimators have smaller average RASE when $n = 500$ compared to $n = 250$, for both $\sigma^2 = 0.5$ and $\sigma^2 = 0.25$. All versions of $\hat f_H$ have smaller average RASE than $\hat f$, and $\hat f_H$ performs better when $s = 1$ and $w_i = 1/i$, for both $\sigma^2 = 0.5$ and $\sigma^2 = 0.25$. For each choice of s, the estimator $\hat f_H$ performs similarly when $w_i = i$ or $w_i = i/(s+1)$. The estimated jump is much closer to the true jump under $\hat f_H$ than under $\hat f$: $\hat J_{f,RP}$ severely underestimates the true jump for both $\sigma^2 = 0.5$ and $\sigma^2 = 0.25$, while $\hat J_{f,H}$, calculated under all versions of the estimators, is much closer to the true value of the jump, although for both $\sigma^2 = 0.5$ and $\sigma^2 = 0.25$ there seems to be some evidence of a slight overestimation of the jump. Except for the case where $s = 1$ and $w_i = 1/i$, jump estimators become more accurate when $n = 500$ compared to $n = 250$.
However, results are mixed regarding the impact of s and w_i on the accuracy of jump estimates. For an arbitrary sample of size n = 500 and σ² = 0.25, Figure 3 provides a graph of ˆf and ˆf_H for s = 2 and w_i = i.

7 Summary and conclusions

We provided a set of easily implementable kernel estimators for densities defined on subsets of ℝ that have boundaries. The use of Hestenes extensions allows us to obtain theoretical representations for the bias and variance of our proposed estimators that preserve the orders of traditional kernel estimators for densities defined on ℝ. In effect, the insights gained from using Hestenes extensions make the study of suitably defined kernel estimators on sets that have boundaries a special case of the theory developed for densities defined on ℝ. Preliminary simulations reveal very good finite-sample performance relative to a number of commonly used alternative estimators. Further work should investigate the possible existence of optimal choices for s and w_1, ..., w_s under a suitably defined criterion. If possible, this would produce a best estimator in the class we have defined.

Figure 1: Density f_G(x) (blue), ˆf (yellow), ˆf_C (green), ˆf_S (red) and ˆf_k (black) for k = 2, s = 1, w_i = i

Figure 2: Density f(x) with a discontinuity at x = 0. Left panel is for σ² = 0.5, right panel is for σ² = 0.25

Figure 3: True density f(x) (blue), ˆf (red) and ˆf_H for s = 2 and w_i = i (black), for σ² = 0.25

27 Table. Average ASE 2 ˆf and ˆf constructed using K(x = 2π e 2 x2 ˆf ˆfC ˆfS ˆf ˆf2 ˆf3 s wi i i i i i i i i i i i i n = 25 ft N fg fχ fe n = 5 ft N fg fχ fe

Table 2. Average RASE and jump estimates for ˆf and ˆf_H when σ² = 0.5 (true jump J = 0.1652), across experiments (s, w_i, n) with s = 1, 2 and n = 250, 500; columns report the average RASE of each estimator, the average estimated jumps J_{ˆf}(0) and J_{ˆf_H}(0), and their deviations from J.

Table 3. Average RASE and jump estimates for ˆf and ˆf_H when σ² = 0.25 (true jump J = 0.3989), with the same experimental designs and columns as Table 2.
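The true-jump values quoted in Tables 2 and 3 follow directly from the definition of f above; a quick numerical check (a sketch, with function names of my own choosing):

```python
import math

def f(x, sigma2):
    """Density with a jump at x = 0: N(0, sigma2) scale on the left
    half-line, standard normal scale on the right half-line."""
    if x < 0:
        return math.exp(-x * x / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def jump(sigma2):
    """J_f(0) = f(0-) - f(0+) = (2*pi)**(-1/2) * (1/sigma - 1)."""
    return (1.0 / math.sqrt(sigma2) - 1.0) / math.sqrt(2.0 * math.pi)

# The two simulation designs:
print(round(jump(0.5), 4))   # 0.1652, as in Table 2
print(round(jump(0.25), 4))  # 0.3989, as in Table 3
```

Each half of the density integrates to 1/2, so f is a proper density for every σ² > 0, and the jump vanishes exactly when σ² = 1.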

References

Burenkov, V. I., 1998. Sobolev spaces on domains. B. G. Teubner, Leipzig.

Chen, S. X., 2000. Probability density function estimation using gamma kernels. Annals of the Institute of Statistical Mathematics 52.

Cline, D. B. H., Hart, J. D., 1991. Kernel estimation of densities with discontinuities or discontinuous derivatives. Statistics: A Journal of Theoretical and Applied Statistics 22.

Hestenes, M. R., 1941. Extension of the range of a differentiable function. Duke Mathematical Journal 8.

Karunamuni, R. J., Alberts, T., 2005. A generalized reflection method of boundary correction in kernel density estimation. The Canadian Journal of Statistics 33.

Malec, P., Schienle, M., 2014. Nonparametric kernel density estimation near the boundary. Computational Statistics and Data Analysis 72.

Mynbaev, K., Martins-Filho, C., 2010. Bias reduction in kernel density estimation via Lipschitz condition. Journal of Nonparametric Statistics 22.

Schuster, E. F., 1985. Incorporating support constraints into nonparametric estimators of densities. Communications in Statistics - Theory and Methods 14.

Wen, K., Wu, X., 2015. An improved transformation-based kernel density estimator of densities on the unit interval. Journal of the American Statistical Association 110.


INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION. 1. Introduction

INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION. 1. Introduction INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION PETER G. HALL AND JEFFREY S. RACINE Abstract. Many practical problems require nonparametric estimates of regression functions, and local polynomial

More information

MANY scientific and engineering problems can be

MANY scientific and engineering problems can be A Domain Decomposition Metod using Elliptical Arc Artificial Boundary for Exterior Problems Yajun Cen, and Qikui Du Abstract In tis paper, a Diriclet-Neumann alternating metod using elliptical arc artificial

More information

Polynomial Interpolation

Polynomial Interpolation Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximating a function f(x, wose values at a set of distinct points x, x, x 2,,x n are known, by a polynomial P (x

More information

Calculus I Practice Exam 1A

Calculus I Practice Exam 1A Calculus I Practice Exam A Calculus I Practice Exam A Tis practice exam empasizes conceptual connections and understanding to a greater degree tan te exams tat are usually administered in introductory

More information

The Derivative as a Function

The Derivative as a Function Section 2.2 Te Derivative as a Function 200 Kiryl Tsiscanka Te Derivative as a Function DEFINITION: Te derivative of a function f at a number a, denoted by f (a), is if tis limit exists. f (a) f(a + )

More information

= 0 and states ''hence there is a stationary point'' All aspects of the proof dx must be correct (c)

= 0 and states ''hence there is a stationary point'' All aspects of the proof dx must be correct (c) Paper 1: Pure Matematics 1 Mark Sceme 1(a) (i) (ii) d d y 3 1x 4x x M1 A1 d y dx 1.1b 1.1b 36x 48x A1ft 1.1b Substitutes x = into teir dx (3) 3 1 4 Sows d y 0 and states ''ence tere is a stationary point''

More information

Section 2.1 The Definition of the Derivative. We are interested in finding the slope of the tangent line at a specific point.

Section 2.1 The Definition of the Derivative. We are interested in finding the slope of the tangent line at a specific point. Popper 6: Review of skills: Find tis difference quotient. f ( x ) f ( x) if f ( x) x Answer coices given in audio on te video. Section.1 Te Definition of te Derivative We are interested in finding te slope

More information

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Statistica Sinica 24 2014, 395-414 doi:ttp://dx.doi.org/10.5705/ss.2012.064 EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Jun Sao 1,2 and Seng Wang 3 1 East Cina Normal University,

More information

Estimation of boundary and discontinuity points in deconvolution problems

Estimation of boundary and discontinuity points in deconvolution problems Estimation of boundary and discontinuity points in deconvolution problems A. Delaigle 1, and I. Gijbels 2, 1 Department of Matematics, University of California, San Diego, CA 92122 USA 2 Universitair Centrum

More information

Finding and Using Derivative The shortcuts

Finding and Using Derivative The shortcuts Calculus 1 Lia Vas Finding and Using Derivative Te sortcuts We ave seen tat te formula f f(x+) f(x) (x) = lim 0 is manageable for relatively simple functions like a linear or quadratic. For more complex

More information

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LAURA EVANS.. Introduction Not all differential equations can be explicitly solved for y. Tis can be problematic if we need to know te value of y

More information

7.1 Using Antiderivatives to find Area

7.1 Using Antiderivatives to find Area 7.1 Using Antiderivatives to find Area Introduction finding te area under te grap of a nonnegative, continuous function f In tis section a formula is obtained for finding te area of te region bounded between

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information