A Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation


A Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation

R.J. Karunamuni^a and T. Alberts^b

^a Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada, T6G 2G1; ^b Courant Institute of Mathematical Sciences, New York University, New York, NY, USA

Abstract: Kernel smoothing methods are widely used in many research areas in statistics. However, kernel estimators suffer from boundary effects when the support of the function to be estimated has finite endpoints. Boundary effects seriously affect the overall performance of the estimator. In this article, we propose a new method of boundary correction for univariate kernel density estimation. Our technique is based on a data transformation that depends on the point of estimation. The proposed method possesses desirable properties such as local adaptivity and non-negativity. Furthermore, unlike many other transformation methods available, the proposed estimator is easy to implement. In a Monte Carlo study, the accuracy of the proposed estimator is numerically analyzed and compared with existing methods of boundary correction. We find that it performs well for most shapes of densities. The theory behind the new methodology, along with the bias and variance of the proposed estimator, is presented. Results of a data analysis are also given.

Keywords: density estimation; mean squared error; kernel estimation; transformation.
Short Title: Kernel Density Estimation
AMS Subject Classifications: Primary: 62G07; Secondary: 62G20
Corresponding Author: R.J.Karunamuni@ualberta.ca

1 Introduction

Nonparametric kernel density estimation is now popular and in wide use with great success in statistical applications. Kernel density estimates are commonly used to display the shape of a data set without relying on a parametric model, not to mention the exposition of skewness, multimodality, dispersion, and more. Early results on kernel density estimation are due to Rosenblatt (1956) and Parzen (1962). Since then, much research has been done in the area; see, e.g., the monographs of Silverman (1986) and Wand and Jones (1995).

Let f denote a probability density function with support [0, ∞) and consider nonparametric estimation of f based on a random sample X_1, ..., X_n from f. Then the traditional kernel estimator of f is given by

    f_n(x) = (1/(nh)) Σ_{i=1}^n K((x - X_i)/h),    (1.1)

where K is some chosen unimodal density function, symmetric about zero with support traditionally on [-1, 1], and h is the bandwidth (h → 0 as n → ∞). The basic properties of f_n are well known, and under some smoothness assumptions these include, for x ≥ 0,

    E f_n(x) = f(x) ∫_{-1}^{c} K(t)dt - h f^{(1)}(x) ∫_{-1}^{c} tK(t)dt + (h²/2) f^{(2)}(x) ∫_{-1}^{c} t²K(t)dt + o(h²),
    Var f_n(x) = (f(x)/(nh)) ∫_{-1}^{c} K²(t)dt + o(1/(nh)),

where c = min(x/h, 1). For x ≥ h, the so-called interior points, the bias of f_n(x) is of order O(h²), whereas at boundary points, i.e. for x ∈ [0, h), f_n is not even consistent. In nonparametric curve estimation problems this phenomenon is referred to as the boundary effect of (1.1).
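To make the boundary effect concrete, here is a minimal numerical sketch of (1.1); the Epanechnikov kernel is the one used in Section 3, while the function names and the test density are illustrative choices of ours, not part of the paper:

```python
# Traditional kernel estimator (1.1) and its failure at the left endpoint.
import numpy as np

def epanechnikov(t):
    """K(t) = (3/4)(1 - t^2) on [-1, 1], zero elsewhere."""
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t * t), 0.0)

def kde(x, data, h):
    """f_n(x) = (1/(n h)) sum_i K((x - X_i)/h)."""
    u = (np.atleast_1d(x)[:, None] - data[None, :]) / h
    return epanechnikov(u).mean(axis=1) / h

# Near x = 0 the kernel window sticks out past the support, so for a density
# with f(0) > 0 the estimate at 0 is roughly f(0)/2: the boundary effect.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=0.5, size=200)   # f(x) = 2 exp(-2x), f(0) = 2
print(kde(np.array([0.0]), sample, h=0.3))      # typically near 1, not 2
```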

To remove these boundary effects a variety of methods have been developed in the literature. Some well-known methods are summarized below:

The reflection method (Cline and Hart, 1991; Schuster, 1985; Silverman, 1986).

The boundary kernel method (Gasser and Müller, 1979; Gasser et al., 1985; Jones, 1993; Müller, 1991; Zhang and Karunamuni, 2000).

The transformation method (Marron and Ruppert, 1994; Ruppert and Cline, 1994; Wand et al., 1991).

The pseudo-data method (Cowling and Hall, 1996).

The local linear method (Cheng et al., 1997; Cheng, 1997; Zhang and Karunamuni, 1998).

Other methods (Zhang et al., 1999; Hall and Park, 2002).

The reflection method is specifically designed for the case f^{(1)}(0) = 0, where f^{(1)} denotes the first derivative of f. The boundary kernel method is more general than the reflection method in the sense that it can adapt to any shape of density. However, a drawback of this method is that the estimates might be negative near the endpoints, especially when f(0) ≈ 0. To correct this deficiency of boundary kernel methods some remedies have been proposed; see Jones (1993), Jones and Foster (1996), Gasser and Müller (1979) and Zhang and Karunamuni (1998). The local linear method is a special case of the boundary kernel method that is thought of by some as a simple, hard-to-beat default approach, partly because of the optimal theoretical properties (Cheng et al., 1997) of the boundary kernel implicit in local linear fitting. The pseudo-data method of Cowling and Hall (1996) generates some extra data X_{(-i)} using what they call the three-point rule; these are then combined with the original data X_i to form a kernel-type estimator. Marron and Ruppert's (1994) transformation method consists of a three-step process. First, a transformation g is selected from a parametric family so that the density of Y_i = g(X_i) has a first derivative that is approximately equal

to 0 at the boundaries of its support. Next, a kernel estimator with reflection is applied to the Y_i. Finally, this estimator is converted by the change-of-variables formula to obtain an estimate of f. Among other methods, two very promising recent ones are due to Zhang et al. (1999) and Hall and Park (2002). The former method is a combination of the methods of pseudo-data, transformation and reflection, whereas the latter method is based on what they call an empirical translation correction.

The boundary kernel and related methods usually have low bias, but the price for that is an increase in variance. It has been observed that approaches involving only kernel modifications, without regard to f or the data, such as the boundary kernel method, are always associated with larger variance. Furthermore, the corresponding estimates tend to take negative values near the boundary points. On the other hand, transformation-based boundary correction estimates are non-negative and generally have low variance, possibly due to non-negativity. It has gradually become clear to researchers that this non-negativity property is important in practical applications and that approaches producing non-negative estimators are well worth exploring.

In this article, we develop a new transformation approach for boundary correction. Our method consists of a straightforward transformation of the data. It is easy to implement in practice compared to other existing transformation methods (compare with Marron and Ruppert, 1994). A distinct feature of our method is that it is locally adaptive; that is, the transformation depends on the point of estimation. Other desirable properties of the proposed estimator are: (i) it is non-negative everywhere, and (ii) it reduces to the traditional kernel estimator (1.1) at the interior points. In simulations, we compared the proposed estimator with other existing methods of boundary correction and observed that it performs quite well for most shapes of densities.

Section 2 contains the methodology and the main results. Sections 3 and 4 present simulation studies and a data analysis, respectively. Final comments are given in Section 5.

2 Locally Adaptive Transformation Estimator

2.1 Methodology

For convenience, we shall assume that the unknown probability density function f has support [0, ∞), and consider estimation of f based on a random sample X_1, ..., X_n from f. Our transformation idea is based on transforming the original data X_1, ..., X_n to g(X_1), ..., g(X_n), where g is a non-negative, continuous and monotonically increasing function from [0, ∞) to [0, ∞). Based on the transformed data, we now define, for x = ch, c ≥ 0,

    f̃_n(x) = (1/(nh)) Σ_{i=1}^n K((x - g(X_i))/h) / ∫_{-1}^{c} K(t)dt,    (2.1)

where h is the bandwidth (again h → 0 as n → ∞) and K is a non-negative symmetric kernel function with support on [-1, 1], satisfying

    ∫_{-1}^{1} K(t)dt = 1,  ∫_{-1}^{1} tK(t)dt = 0,  and  0 < ∫_{-1}^{1} t²K(t)dt < ∞.

We can now state the following lemma, which exhibits the bias and variance of (2.1) under certain conditions on g and f. The proof is given in the Appendix.

Lemma 2.1. Let f̃_n be defined by (2.1). Assume that f^{(2)} and g^{(2)} exist and are continuous on [0, ∞), where f^{(i)} and g^{(i)} denote the i-th derivatives of f and g, respectively, with f^{(0)} = f and g^{(0)} = g. Further assume that g(0) = 0 and g^{(1)}(0) = 1. Then for x = ch, 0 ≤ c ≤ 1, we

have

    E f̃_n(x) - f(x) = -h { f(0)g^{(2)}(0) ∫_{-1}^{c} (c - t)K(t)dt + f^{(1)}(0) ∫_{-1}^{c} tK(t)dt } / ∫_{-1}^{c} K(t)dt
        + (h²/2) { [f^{(2)}(0) - f(0)g^{(3)}(0) - 3g^{(2)}(0)(f^{(1)}(0) - f(0)g^{(2)}(0))] ∫_{-1}^{c} (t - c)²K(t)dt
        - c² f^{(2)}(0) ∫_{-1}^{c} K(t)dt } / ∫_{-1}^{c} K(t)dt + o(h²),    (2.2)

and

    Var f̃_n(x) = [f(0) / (nh (∫_{-1}^{c} K(t)dt)²)] ∫_{-1}^{c} K²(t)dt + o(1/(nh))
               = [f(x) / (nh (∫_{-1}^{c} K(t)dt)²)] ∫_{-1}^{c} K²(t)dt + o(1/(nh)).    (2.3)

The last equality follows since f(0) = f(x) - ch f^{(1)}(x) + (ch)² f^{(2)}(x)/2 + o(h²) for x = ch (see (A.3)). Note that the leading term of the variance of f̃_n is not affected by the transformation g. When c = 1, Var f̃_n(x) = f(x)(nh)^{-1} ∫_{-1}^{1} K²(t)dt + o((nh)^{-1}), which is exactly the expansion of the interior variance of the traditional estimator (1.1).

We shall choose our transformation g so that the first-order term in the bias expansion (2.2) is zero. Assuming that f(0) > 0, it is enough to let

    g^{(2)}(0) = -f^{(1)}(0) ∫_{-1}^{c} tK(t)dt / [f(0) ∫_{-1}^{c} (c - t)K(t)dt].    (2.4)

Note that the right-hand side of (2.4) depends on c; that is, the transformation g depends

on the point of estimation inside the boundary region [0, h). We call this property local adaptivity. In order to display this local dependence we shall denote g by g_c, 0 ≤ c ≤ 1, in what follows. Combining (2.4) with the additional assumptions in Lemma 2.1, g_c should now satisfy the following three conditions for each c, 0 ≤ c ≤ 1:

    (i) g_c: [0, ∞) → [0, ∞), g_c is continuous and monotonically increasing, and g_c^{(i)} exists, i = 1, 2, 3;
    (ii) g_c(0) = 0, g_c^{(1)}(0) = 1;
    (iii) g_c^{(2)}(0) = -f^{(1)}(0) ∫_{-1}^{c} tK(t)dt / [f(0) ∫_{-1}^{c} (c - t)K(t)dt].    (2.5)

Functions satisfying conditions (2.5) can easily be constructed, with the simplest candidates being polynomials. In order to satisfy property (iii) of (2.5) and maintain g_c as an increasing function we require at least a cubic (for the case f^{(1)}(0) < 0). So for 0 ≤ c ≤ 1 we use the transformation

    g_c(y) = y + (d/2) l_c y² + (d l_c)² y³,    (2.6)

where

    l_c = -∫_{-1}^{c} tK(t)dt / ∫_{-1}^{c} (c - t)K(t)dt    (2.7)

and

    d = f^{(1)}(0) / f(0).    (2.8)
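For concreteness, a small sketch of how (2.6)-(2.8) can be computed; the quadrature routine and the function names are our illustrative choices, not part of the paper:

```python
# The local constant l_c of (2.7) and the cubic transformation g_c of (2.6).
import numpy as np
from scipy.integrate import quad

def K(t):
    return 0.75 * (1.0 - t * t) if abs(t) <= 1.0 else 0.0

def l_c(c):
    """l_c = -int_{-1}^{c} t K(t) dt / int_{-1}^{c} (c - t) K(t) dt."""
    num = quad(lambda t: t * K(t), -1.0, c)[0]
    den = quad(lambda t: (c - t) * K(t), -1.0, c)[0]
    return -num / den

def g_c(y, c, d):
    """g_c(y) = y + (d/2) l_c y^2 + (d l_c)^2 y^3, so g_c''(0) = d l_c."""
    a = d * l_c(c)
    return y + 0.5 * a * y * y + (a * y) ** 2 * y

# At an interior point (c = 1) the symmetric kernel gives l_1 = 0, so g_1 is
# the identity; inside the boundary region l_c is strictly positive.
print(l_c(1.0), l_c(0.0))          # ~0.0 and 1.0 for the Epanechnikov kernel
print(g_c(0.2, 0.0, -2.0))         # a transformed data value near the origin
```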

When c = 1, g_c(y) = y. This means that f̃_n(x) defined by (2.1) with g_c of (2.6) reduces to the usual kernel estimator f_n(x) defined by (1.1) at interior points; i.e., for x ≥ h, f̃_n(x) coincides with f_n(x). Higher-order polynomials could be used to obtain the same properties, but they are increasingly more complicated to deal with and offer very little marginal advantage. For g_c defined by (2.6), the bias term of (2.2) takes the form

    E f̃_n(x) - f(x) = (h²/2) { f^{(2)}(0) ∫_{-1}^{c} (t² - 2tc)K(t)dt
        - 3 (f^{(1)}(0))²/f(0) (l_c² + l_c) ∫_{-1}^{c} (t - c)²K(t)dt } / ∫_{-1}^{c} K(t)dt + o(h²),    (2.9)

for x = ch, 0 ≤ c ≤ 1. Note that when c = 1, E f̃_n(x) - f(x) = (h²/2) f^{(2)}(x) ∫_{-1}^{1} t²K(t)dt + o(h²), which is exactly the expression for the interior bias of the traditional kernel estimator (1.1), since f^{(2)}(x) = f^{(2)}(0) + o(1) for x = ch (see (A.5)).

2.2 Estimation of g_c

In order to implement the transformation (2.6) in practice, one must replace d = f^{(1)}(0)/f(0) with a pilot estimate. This requires the use of another density estimation method, such as a semiparametric, kernel or nearest-neighbour method. Our experience is that the proposed estimator of f given below is insensitive to the fine details of the pilot estimate of d, and therefore any convenient estimate can be employed. A natural estimate would be a fixed kernel estimate with bandwidth chosen optimally at x = 0, as in Zhang et al. (1999). Other possible estimators of d are proposed in Choi and Hall (1999) and Park et al. (2003). Following Zhang et al. (1999), in this paper we use the kernel-type pilot estimate of d given

by

    d̂ = [log f*_n(h₁) - log f*_n(0)] / h₁,    (2.10)

where

    f*_n(h₁) = (1/(nh₁)) Σ_{i=1}^n K((h₁ - X_i)/h₁) + n^{-2}    (2.11)

and

    f*_n(0) = max{ (1/(nh₀)) Σ_{i=1}^n K_{(0)}(-X_i/h₀), n^{-2} },    (2.12)

where h₁ = o(h) with h as in (2.1), K is a usual symmetric kernel function as in (1.1), K_{(0)} is a so-called endpoint order-two kernel satisfying

    ∫ K_{(0)}(t)dt = 1,  ∫ tK_{(0)}(t)dt = 0,  and  0 < ∫ t²K_{(0)}(t)dt < ∞,

and h₀ = b(0)h₁ with b(0) given by

    b(0) = { (∫_{-1}^{1} t²K(t)dt)² (∫_{-1}^{0} K²_{(0)}(t)dt) / [(∫_{-1}^{0} t²K_{(0)}(t)dt)² (∫_{-1}^{1} K²(t)dt)] }^{1/5}.    (2.13)

We now define

    ĝ_c(y) = y + (d̂/2) l_c y² + (d̂ l_c)² y³,    (2.14)

for 0 ≤ c ≤ 1, as our estimator of g_c(y) defined by (2.6), where d̂ is given by (2.10). Note that ĝ_c and d̂ depend on n, but this dependence is suppressed for notational convenience.
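A sketch of the pilot estimate (2.10)-(2.12); we assume, for illustration only, that the endpoint kernel K_{(0)} is the boundary kernel (3.5) evaluated at c = 0, and we fix h₀ by hand instead of through (2.13):

```python
# Pilot estimate d_hat of d = f'(0)/f(0), following (2.10)-(2.12).
import numpy as np

def K(t):
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t * t), 0.0)

def K0(t):
    """An endpoint order-two kernel on [-1, 0]: 12(1+t)(t+1/2) there, else 0."""
    return np.where((t >= -1.0) & (t <= 0.0), 12.0 * (1.0 + t) * (t + 0.5), 0.0)

def d_hat(data, h1, h0):
    n = data.size
    f_h1 = K((h1 - data) / h1).sum() / (n * h1) + n ** -2.0   # (2.11)
    f_0 = max(K0(-data / h0).sum() / (n * h0), n ** -2.0)     # (2.12)
    return (np.log(f_h1) - np.log(f_0)) / h1                  # (2.10)

rng = np.random.default_rng(2)
x = rng.exponential(scale=0.5, size=200)     # f(x) = 2 exp(-2x), so d = -2
print(d_hat(x, h1=200.0 ** -0.25, h0=0.3))   # rough, but near -2 on average
```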

It has been argued in the literature (see, e.g., Park et al., 2003) that one may use a bandwidth h₁ different from h for the pilot estimator of d, and that h₁ should be of the same order as, or converge faster than, h in order to inherit the full advantages of the proposed density estimator. This adjustment also improves the asymptotic theoretical results of our proposed density estimator f̂_n(x) given below at (2.15), and should act as the basis for the choice of h₁ in (2.10) and (2.11).

2.3 The Proposed Estimator

Our proposed estimator of f(x) is defined, for x = ch, c ≥ 0, as

    f̂_n(x) = (1/(nh)) Σ_{i=1}^n K((x - ĝ_c(X_i))/h) / ∫_{-1}^{c} K(t)dt,    (2.15)

where ĝ_c is given by (2.14), K is as given in (2.1), and h is the bandwidth used in (2.1). For x ≥ h, f̂_n(x) reduces to the traditional kernel estimator f_n(x) given by (1.1). Thus f̂_n(x) is a natural boundary continuation of the usual kernel estimator (1.1). An important adjustment in the estimator (2.15) is that it is based on the locally adaptive transformation ĝ_c given by (2.14), since ĝ_c transforms the data depending on the point of estimation. Furthermore, it is important to remark here that the adaptive kernel estimator (2.15) is non-negative (provided K is non-negative), a property shared with reflection-based estimators (see Jones and Foster, 1996; Zhang et al., 1999) and transformation-based estimators (see Marron and Ruppert, 1994; Zhang et al., 1999; Hall and Park, 2002), but not with most boundary kernel approaches.
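Putting the pieces together, a minimal sketch of (2.15) at a single point, with the true d supplied in place of the pilot estimate d̂ (swap in the pilot sketch above for a fully data-driven version); all names are illustrative:

```python
# Locally adaptive transformation estimator (2.15) at one point x >= 0.
import numpy as np
from scipy.integrate import quad

def f_hat(x, data, h, d):
    epan = lambda t: np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t * t), 0.0)
    c = min(x / h, 1.0)
    num = quad(lambda t: t * 0.75 * (1 - t * t), -1.0, c)[0]
    den = quad(lambda t: (c - t) * 0.75 * (1 - t * t), -1.0, c)[0]
    a = d * (-num / den)                          # d * l_c, from (2.7)-(2.8)
    g = data + 0.5 * a * data**2 + a**2 * data**3 # cubic transformation (2.14)
    norm = quad(lambda t: 0.75 * (1 - t * t), -1.0, c)[0]
    return epan((x - g) / h).mean() / (h * norm)

rng = np.random.default_rng(3)
x = rng.exponential(scale=0.5, size=200)          # d = f'(0)/f(0) = -2
print(f_hat(0.0, x, h=0.3, d=-2.0), "vs f(0) = 2")
```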

The next theorem establishes the bias and variance of (2.15).

Theorem 2.1. Let f̂_n(x) be defined by (2.15) with a kernel function K as given in (2.1) and a bandwidth h = O(n^{-1/5}). Suppose h₁ in (2.10) is of the form h₁ = O(n^{-1/4}). Further assume that K^{(1)} exists and is continuous on [-1, 1], and that f(0) > 0 and f^{(2)} exists and is continuous in a neighbourhood of 0. Then for x = ch, 0 ≤ c ≤ 1, we have

    E f̂_n(x) - f(x) = (h²/2) { f^{(2)}(0) ∫_{-1}^{c} (t² - 2tc)K(t)dt
        - 3 (f^{(1)}(0))²/f(0) (l_c² + l_c) ∫_{-1}^{c} (t - c)²K(t)dt } / ∫_{-1}^{c} K(t)dt + o(h²),    (2.16)

where l_c is given by (2.7), and

    Var f̂_n(x) = [f(0) / (nh (∫_{-1}^{c} K(t)dt)²)] ∫_{-1}^{c} K²(t)dt + o(1/(nh)).    (2.17)

Remark 2.1. As one of the referees correctly pointed out, the choice of bandwidth is very important for the good performance of any kernel estimator. The optimal bandwidth for (2.15) can be derived as follows. From (2.16) and (2.17), the leading terms of the asymptotic MSE of f̂_n(x) at x = ch, 0 ≤ c ≤ 1, are

    (h⁴/4) (∫_{-1}^{c} K(t)dt)^{-2} { f^{(2)}(0) ∫_{-1}^{c} (t² - 2tc)K(t)dt - 3(f(0))^{-1}(f^{(1)}(0))² (l_c² + l_c) ∫_{-1}^{c} (t - c)²K(t)dt }²
    + (nh)^{-1} f(0) (∫_{-1}^{c} K(t)dt)^{-2} ∫_{-1}^{c} K²(t)dt.
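The minimization behind (2.18) below is the usual bias-variance trade-off. In generic form, writing A for the squared-bias factor and B for the variance factor read off (2.16)-(2.17), a sketch of the calculus is:

```latex
% Generic form of the trade-off: abbreviate the leading MSE terms as
\[
\mathrm{MSE}(h) \approx A h^4 + \frac{B}{nh}, \qquad
\frac{d}{dh}\,\mathrm{MSE}(h) = 4 A h^3 - \frac{B}{n h^2} = 0
\quad\Longrightarrow\quad
h_{\mathrm{opt}} = \Bigl(\frac{B}{4 A n}\Bigr)^{1/5},
\]
% so the optimal bandwidth is of the exact order n^{-1/5} assumed in
% Theorem 2.1; substituting the A and B of the display above at x = ch
% yields the local bandwidth (2.18).
```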

Minimizing this MSE expression with respect to h yields

    h_c = n^{-1/5} [ f(0) ∫_{-1}^{c} K²(t)dt ]^{1/5}
          × [ f^{(2)}(0) ∫_{-1}^{c} (t² - 2tc)K(t)dt - 3(f(0))^{-1}(f^{(1)}(0))² (l_c² + l_c) ∫_{-1}^{c} (t - c)²K(t)dt ]^{-2/5},    (2.18)

which is the local optimal bandwidth at x = ch, 0 ≤ c ≤ 1, for the transformation estimator (2.15). Expression (2.18) is quite similar to that of the adaptive variable location kernel estimator given by Park et al. (2003). In order to see this more clearly, we find from their paper that the leading terms of the asymptotic MSE of their adaptive location kernel estimator at x = ch, 0 ≤ c ≤ 1, are

    (h⁴/4) (∫_{-1}^{c} K(t)dt)^{-2} { f^{(2)}(x) [ ∫_{-1}^{c} t²K(t)dt + 2ρ(c) ∫_{-1}^{c} tK^{(1)}(t)dt ]
        + (f(x))^{-1}(f^{(1)}(x))² ρ(c)² ∫_{-1}^{c} K^{(2)}(t)dt }²
    + (nh)^{-1} f(x) (∫_{-1}^{c} K(t)dt)^{-2} ∫_{-1}^{c} K²(t)dt,

where ρ(c) = (K(c))^{-1} ∫_{c}^{1} tK(t)dt. Minimizing this expression with respect to h yields

    h_c = n^{-1/5} [ f(x) ∫_{-1}^{c} K²(t)dt ]^{1/5}
          × [ f^{(2)}(x) ( ∫_{-1}^{c} t²K(t)dt + 2ρ(c) ∫_{-1}^{c} tK^{(1)}(t)dt ) + (f(x))^{-1}(f^{(1)}(x))² ρ(c)² ∫_{-1}^{c} K^{(2)}(t)dt ]^{-2/5},    (2.19)

which is the local optimal bandwidth at x = ch, 0 ≤ c ≤ 1, for Park et al.'s (2003) adaptive estimator. It is easy to see the similarity between (2.18) and (2.19). In applications neither expression can be employed directly, however, due to their dependence on the unknown

functional quantities f(x), f^{(1)}(x) and f^{(2)}(x). An alternative approach is to apply the bandwidth variation function technique that is frequently implemented in boundary kernel methods.

Adaptive bandwidth, or variable window width, kernel density estimators are generally of the form

    f_n(x) = (1/n) Σ_{i=1}^n [1/α(X_i)] K((x - X_i)/α(X_i)),    (2.20)

where α is some non-negative function; see, e.g., Abramson (1982), Hall et al. (1995) and Terrell and Scott (1992), among others. It is easy to show that the estimator (2.20) also exhibits boundary effects when the unknown density f is supported on [0, ∞) or [0, 1] and a kernel K with support on [-1, 1] is used. To the best of our knowledge there are no boundary-adjusted adaptive bandwidth or variable window width kernel estimators in the literature. A transformation technique based on such an estimator would be

    f_n(x) = [1 / (n ∫_{-1}^{c} K(t)dt)] Σ_{i=1}^n [1/α(g(X_i))] K((x - g(X_i))/α(g(X_i))),    (2.21)

where g is a suitable transformation, like (2.6), that removes the boundary effects. However, the details of estimators of the form (2.21) are beyond the scope of the present paper.
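As an illustration of the general form (2.20), a sketch using Abramson's (1982) square-root law for α, which is one common choice; the geometric-mean normalization of the pilot is a standard convention of that literature, not something fixed by (2.20):

```python
# Variable window width estimator (2.20) with alpha(X_i) ~ h / sqrt(f(X_i)).
import numpy as np

def epan(t):
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t * t), 0.0)

def adaptive_kde(x, data, h):
    # Fixed-bandwidth pilot evaluated at the data points themselves.
    pilot = epan((data[:, None] - data[None, :]) / h).mean(axis=1) / h
    pilot = np.maximum(pilot, 1e-12)
    # Square-root law, normalized by the geometric mean of the pilot.
    alpha = h / np.sqrt(pilot / np.exp(np.mean(np.log(pilot))))
    u = (np.atleast_1d(x)[:, None] - data[None, :]) / alpha[None, :]
    return (epan(u) / alpha[None, :]).mean(axis=1)
```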

Remark 2.2. Smoothing methods based on local linear and local polynomial fitting are now popular. In the density estimation context the local polynomial method has been implemented in Cheng et al. (1997), Cheng (1997) and Zhang and Karunamuni (1998), among others. For example, Zhang and Karunamuni (1998) have studied the following local polynomial density estimator (a very similar estimator is given in Cheng (1997) and Cheng et al. (1997)):

    f*_n(x) = (1/(nh)) Σ_{i=1}^n K_o((x - X_i)/h),    (2.22)

where K_o(t) = eᵀ S^{-1} (1, t, ..., t^p)ᵀ K(t), S = (s_{j,l}) is a (p+1) × (p+1) matrix with s_{j,l} = ∫ t^{j+l} K(t)dt, 0 ≤ j, l ≤ p, and e = (0, 1, ..., 0)ᵀ is a (p+1) × 1 vector whose second element is 1 and the others are zero. Then the bias and variance of (2.22) at x = ch, 0 ≤ c ≤ 1, are

    Bias(f*_n(x)) = [1/(p+1)!] (h b(c))^{p+1} f^{(p+1)}(x) ∫_{-1}^{c/b(c)} t^{p+1} K_{o,c/b(c)}(t)dt

and

    Var(f*_n(x)) = [1/(nh b(c))] f(x) ∫_{-1}^{c/b(c)} ( K_{o,c/b(c)}(t) )² dt,

where b: [0, 1] → R⁺ with b(1) = 1 is a bandwidth variation function and K_{o,c} is called an equivalent boundary kernel; see Zhang and Karunamuni (1998) for more details. The asymptotic MSE of f*_n(x) is typically less than that of f̂_n(x) given by (2.15) in a weak minimax sense; see Cheng et al. (1997). However, local polynomial estimators are based on the so-called equivalent boundary kernels K_{o,c}, which are not always positive. As a result, local polynomial estimators may take negative values, especially where the unknown density is very small. At the crucial point x = 0, if f(0) ≈ 0 then f*_n(0) may be negative; see, e.g., Figure 1. This unattractive feature of local polynomial density estimators has been criticized by many authors; see, e.g., Zhang et al. (1999) and Hall and Park (2002). On the other hand, the proposed estimator f̂_n(x) given by (2.15) is always non-negative whenever the kernel K is non-negative. Thus, as far as applications are concerned, we recommend the use of f̂_n(x) over f*_n(x), even though the latter possesses some good theoretical properties.

3 Simulations and Discussion

To test the effectiveness of our estimator we simulated its performance against that of other well-known methods. Among these were some relatively new techniques, such as a recent estimator due to Hall and Park (2002) which is very similar to our own, and the transformation- and reflection-based method given by Zhang et al. (1999). We also competed against some more classical estimators, namely the boundary kernel and its close counterpart the local linear fitting method, Jones and Foster's (1996) nonnegative adaptation estimator, and Cowling and Hall's (1996) pseudo-data method.

The first of the competing estimators is due to Hall and Park (2002) (hereafter H&P). It is defined, for x = ch, 0 ≤ c ≤ 1, as

    f̂_HP(x) = (1/(nh)) Σ_{i=1}^n K((x - X_i + α̂(x))/h) / ∫_{-1}^{c} K(t)dt,    (3.1)

where α̂(x) is a correction term given by

    α̂(x) = h² [f̄^{(1)}(x) / f̄(x)] ρ(x/h),

and

    f̄(x) = (1/(nh)) Σ_{i=1}^n K((x - X_i)/h) / ∫_{-1}^{c} K(t)dt,    (3.2)

which is referred to as the cut-and-normalized kernel estimator; here f̄^{(1)}(x) is an estimate of f^{(1)}(x), and ρ(u) = (K(u))^{-1} ∫_{v ≥ u} vK(v)dv.
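A sketch of the cut-and-normalized estimator (3.2), the building block shared by the H&P and J&F estimators below; names and the quadrature are illustrative:

```python
# Cut-and-normalized kernel estimator f_bar of (3.2) at a single point.
import numpy as np
from scipy.integrate import quad

def cut_and_normalized(x, data, h):
    c = min(x / h, 1.0)
    norm = quad(lambda t: 0.75 * (1 - t * t), -1.0, c)[0]   # int_{-1}^{c} K
    u = (x - data) / h
    kvals = np.where(np.abs(u) <= 1, 0.75 * (1 - u * u), 0.0)
    return kvals.mean() / (h * norm)

# Dividing by int_{-1}^{c} K(t) dt restores unit local mass near 0, but a
# first-order bias term of order h remains; (3.1) translates the data by
# alpha_hat(x) to remove it.
```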

To estimate f^{(1)}(x), we used the endpoint kernel of order (1,2) (see Zhang and Karunamuni, 1998) given by

    K_{1,c}(t) = [12 / (c + 1)⁴] (2c(1 + t) - 3t² - 4t - 1) I_{[-1,c]}(t),

and the corresponding kernel estimator

    f̄^{(1)}(x) = (1/(n h_c²)) Σ_{i=1}^n K_{1,c/b(c)}((x - X_i)/h_c),

where h_c = b(c)h with the bandwidth variation function b(c) = 2^{1/5}(1 - c) + c for 0 ≤ c ≤ 1. Note that Hall and Park's estimator is a special case of the estimator (2.1), but with ĝ_c(y) = y - α̂(ch), where x = ch for 0 ≤ c ≤ 1. This ĝ, however, does not necessarily satisfy the conditions of (2.5).

The method of Zhang et al. (1999) (hereafter Z,K&J) applies a transformation to the data and then reflects it across the left endpoint of the support of the density. The resulting kernel estimator is of the form

    f̂_ZKJ(x) = (1/(nh)) Σ_{i=1}^n { K((x - g_n(X_i))/h) + K((x + g_n(X_i))/h) },    (3.3)

where g_n(x) = x + (1/2) d_n x² + A d_n² x³. Note this g_n is very similar to our own, the main difference being that it does not depend on the point of estimation. The only requirement on A is that 3A > 1, and in practice we used the recommended value of A = 0.55. Here d_n is again an estimate of d = f^{(1)}(0)/f(0), and the methodology used is the same as that given by (2.10) to (2.13).
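A sketch of the Z,K&J estimator (3.3), assuming the recommended A = 0.55 and, for simplicity, a known d in place of the pilot estimate d_n:

```python
# Transform-and-reflect estimator (3.3) of Zhang, Karunamuni and Jones (1999).
import numpy as np

def f_zkj(x, data, h, d, A=0.55):
    k = lambda t: np.where(np.abs(t) <= 1, 0.75 * (1 - t * t), 0.0)
    g = data + 0.5 * d * data**2 + A * d**2 * data**3   # global g_n, not g_c
    return (k((x - g) / h) + k((x + g) / h)).mean() / h  # data + reflection
```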

The boundary kernel estimator with bandwidth variation is defined (see Zhang and Karunamuni, 1998) as

    f̂_B(x) = (1/(n h_c)) Σ_{i=1}^n K_{(c/b(c))}((x - X_i)/h_c),    (3.4)

with c = min(x/h, 1). On the boundary the bandwidth variation function h_c = b(c)h is employed, here with b(c) = 2 - c. We used the boundary kernel

    K_{(c)}(t) = [12 / (1 + c)⁴] (1 + t) [ (1 - 2c)t + (3c² - 2c + 1)/2 ] I_{[-1,c]}(t).    (3.5)

Zhang and Karunamuni have shown that this kernel is optimal in the sense of minimizing the MSE in the class of all kernels of order (0,2) with exactly one change of sign in their support.

A simple modification of the boundary kernel gives us the local linear fitting method (LL). We used the kernel

    K_{(c)}(t) = [12(1 - t²) / ((1 + c)⁴(3c² - 18c + 19))] { 8 - 16c + 24c² - 12c³ + t(15 - 30c + 15c²) } I_{[-1,c]}(t)    (3.6)

in (3.4) and call the resulting estimator f̂_LL(x). In this case the bandwidth variation function is b(c) = 2 - c.

The Jones and Foster (1996) nonnegative adaptation estimator (hereafter J&F) is defined as

    f̂_JF(x) = f̄(x) exp{ f̂(x)/f̄(x) - 1 },    (3.7)

where f̄(x) is the cut-and-normalized kernel estimator of (3.2). This version of the J&F estimator takes f̂ to be some boundary kernel estimate of f; here we use f̂(x) = f̂_B(x) as defined by (3.4) with the kernel (3.5).
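A quick numeric check that the boundary kernel (3.5) satisfies the order-(0,2) moment conditions on [-1, c] (unit mass, vanishing first moment); the quadrature harness is our illustration:

```python
# Moment check for the boundary kernel (3.5).
from scipy.integrate import quad

def K_bdry(t, c):
    """12 (1+t) [ (1-2c) t + (3c^2 - 2c + 1)/2 ] / (1+c)^4 on [-1, c]."""
    if t < -1.0 or t > c:
        return 0.0
    return (12.0 * (1.0 + t)
            * ((1.0 - 2.0 * c) * t + (3.0 * c * c - 2.0 * c + 1.0) / 2.0)
            / (1.0 + c) ** 4)

for c in (0.0, 0.5, 1.0):
    m0 = quad(lambda t: K_bdry(t, c), -1.0, c)[0]       # should be 1
    m1 = quad(lambda t: t * K_bdry(t, c), -1.0, c)[0]   # should be 0
    print(c, round(m0, 6), round(m1, 6))
# At c = 1 the formula collapses to the Epanechnikov kernel (3/4)(1 - t^2).
```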

Cowling and Hall's (1996) pseudo-data estimator generates data beyond the left endpoint of the support of the density. Their estimator (hereafter C&H) is defined as

    f̂_CH(x) = (1/(nh)) [ Σ_{i=1}^n K((x - X_i)/h) + Σ_{i=1}^m K((x - X_{(-i)})/h) ],    (3.8)

where m is an integer tending to infinity of smaller order than n; we chose m = n^{9/10}. Further, X_{(-i)} is given by their three-point rule

    X_{(-i)} = 15X_{(i/3)} - 24X_{(2i/3)} + 10X_{(i)},

where, for real t > 0, X_{(t)} linearly interpolates among the values 0, X_{(1)}, ..., X_{(n)}, and X_{(i)} denotes the i-th order statistic.

Throughout our simulations, we used the Epanechnikov kernel K(t) = (3/4)(1 - t²) I_{[-1,1]}(t) whenever a kernel of order two was required. Note that outside of the boundary, the majority of the estimators presented reduce to the regular kernel method given by (1.1) with the Epanechnikov kernel. The only exception is Cowling and Hall's, which is a close approximation to the kernel estimator away from the origin.

Each estimator was tested over various shapes of density curves, but we have collapsed our results into four specific densities which are representative of the different possible combinations of the behaviour of f at 0. Density 1 handles the case f(0) = 0, and Densities 2, 3 and 4 satisfy f(0) > 0 but f^{(1)}(0) = 0, f^{(1)}(0) > 0 and f^{(1)}(0) < 0, respectively. In all simulations a sample size of n = 200 was used. The bandwidth was taken to be the optimal global bandwidth of the regular kernel estimator (1.1), given by the formula

    h = { ∫_{-1}^{1} K(t)²dt / ( [∫_{-1}^{1} t²K(t)dt]² ∫ [f^{(2)}(x)]²dx ) }^{1/5} n^{-1/5}    (3.9)

(see Silverman, 1986, Ch. 3).
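As a worked instance of (3.9): for Density 4, f(x) = 2e^{-2x}, one has f^{(2)}(x) = 8e^{-2x} and hence ∫(f^{(2)})² = 16, while the Epanechnikov kernel gives ∫K² = 3/5 and ∫t²K = 1/5. Whether this matches the bandwidth actually used for Table 1 is our assumption (the table's values were not recoverable); the arithmetic itself is just (3.9):

```python
# Plug-in evaluation of the global bandwidth (3.9) for Density 4.
R_K = 3.0 / 5.0    # int_{-1}^{1} K(t)^2 dt, Epanechnikov kernel
mu2 = 1.0 / 5.0    # int_{-1}^{1} t^2 K(t) dt
R_f2 = 16.0        # int_0^inf (f''(x))^2 dx with f''(x) = 8 exp(-2x)
n = 200
h = (R_K / (mu2 ** 2 * R_f2)) ** 0.2 * n ** -0.2
print(h)           # about 0.342
```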

This particular choice was made so that it is possible to compare the different estimators without regard to bandwidth effects. In our estimation of d we used the bandwidth h₁ = n^{-5/20} = n^{-1/4}. For each density we have calculated the bias, variance and MSE of the estimators at zero, along with the MISE over the region [0, h). The results are all averaged over 1000 trials and are presented in Table 1. We have also graphed ten typical realizations of each estimator, along with the regular kernel estimator, over the boundary region and part of the interior. These are presented in Figures 1 through 4.

Table 1 and Figures 1-4 about here.

In Density 1, with f(0) = 0, our estimator behaves quite well at zero. It is only slightly behind Hall and Park's in terms of MSE at zero. In terms of MISE the boundary kernel puts in the best showing, followed closely by the LL method and J&F. It should be noted, though, that the boundary kernel and LL method are usually negative near zero. Despite its good showing at zero, the H&P method comes in fourth in MISE as it slightly underestimates the density over [0, h). In fifth and sixth place are the Z,K&J and C&H methods. In last place in the MISE column is our estimator, which, from Figure 1, is evidently the result of severely underestimating f to the right of zero. We believe that error in the estimate d̂ of d is the reason behind this. When f(0) = 0, d̂ may be inaccurate and introduce error into our method. Future work may be done on creating better estimates of d in this situation.

For Density 2, with f^{(1)}(0) = 0 and f(0) > 0, our estimator is the clear winner in terms of MSE at zero and MISE over the boundary. This is also obvious from Figure 2. It is somewhat surprising that Hall and Park's estimator is much weaker for this density, especially given that it and our estimator are both special cases of the same general form (2.1).

Density 3 proves to be difficult for all of the estimators. This is most evident from Figure 3, in which we see a large amount of variability. In terms of MISE all the estimators perform similarly except for H&P, which does slightly worse than the rest, and C&H, which does poorly over the whole boundary. At zero, our estimator is the best overall, but only by a slim margin this time. In terms of MISE it is the LL and J&F methods that are the winners, but again only by a small amount.

Our estimator has its poorest showing on the exponential shape of Density 4. At zero our estimator produces an extremely high MSE value relative to the others, and even in terms of MISE it is handily beaten (except by C&H). From Figure 4 we see the underlying reason is the inability of our estimator to accurately capture the steep growth of the exponential density. The Z,K&J estimator is quite accurate at zero and over the entire boundary, and the boundary kernel, LL method, J&F and H&P estimators do not lag far behind. Even Cowling and Hall's estimator does well at zero, but pays the price for it in MISE over the rest of the boundary region.

4 Data Analysis

We tested our proposed estimator on two datasets. The first consists of 35 measurements of the average December precipitation in Des Moines, Iowa from 1961 to 1995. The bandwidth h was produced by cross-validation (see Bowman and Azzalini, 1997). Figure 5 shows our proposed estimator (solid line) along with Hall and Park's (dashed line). Over most of the boundary region they are similar, but around zero they disagree on the shape of the density. Hall and Park's estimator suggests a local minimum to the right of zero, but the histogram and the rug show otherwise. A larger value of h would probably diminish this behaviour. We have observed that the boundary kernel, LL method and J&F

estimators tend to approximate the density linearly over the boundary region, with little regard for curvature.

Figure 5 about here.

The second dataset is comprised of 365 wind speed measurements taken daily in 1993 at 2 AM in Eindhoven, the Netherlands (source: Royal Netherlands Meteorological Institute). Intuition would suggest that the underlying density f should be similar to Density 3, with f(0) > 0 and f^{(1)}(0) > 0, so that there is substantial mass at zero but the mode is slightly to the right. We used the bandwidth h = 2.5, which we chose subjectively. Values suggested by cross-validation typically seem to undersmooth this data, hence the subjective choice. We have observed that for this dataset the shape of the estimators is fairly insensitive to the choice of bandwidth, provided that it is large enough, and so we believe our choice is justified. Figure 6 shows our proposed estimator (solid line) and the LL method (dashed line) applied to the wind data. Both estimators capture the hypothesized shape of the density and are similar over the boundary region. However, our estimator clearly produces a smoother curve.

Figure 6 about here.

5 Concluding Remarks

Marron and Ruppert's (1994) excellent work on kernel density estimation was among the first to introduce a transformation technique for boundary correction (also see Wand et al., 1991). Their sophisticated transformation methodology, however, has been labelled complicated by some researchers; see, e.g., Jones and Foster (1996). In this paper, we presented an easy-to-implement, general transformation method given by (2.1). This class of estimators possesses certain desirable properties, such as being everywhere non-negative and having

unit integral asymptotically, and can be tailored specifically to combat the boundary effects found in the traditional kernel estimator. The proposed estimator defined by (2.15) and the boundary-corrected estimator of Hall and Park (2002) are special cases of the preceding general technique. Beyond this, both estimators are locally adaptive in the sense that the amount of transformation depends upon the point of estimation, and, even further, both transformations are estimated from the data itself. We believe the latter point is especially important, as it allows for all-important reductions in bias while, at the same time, maintaining low variance. The transformation that we have chosen at (2.14) is a convenient and computationally easy one, but it is by no means the result of an exhaustive search of functions satisfying the conditions (2.5). In fact, we conjecture that it may be possible to construct different transformations satisfying these conditions but with much improved performance, in the sense that they will be more robust to the shape of the density.

Acknowledgements: This research was supported by a grant from the Natural Sciences and Engineering Research Council of Canada. We wish to thank the Editor, the Associate Editors and the referees for very constructive and useful comments that led to a much improved presentation. We also thank Drs. B.U. Park and S. Zhang for some personal correspondence.

A Appendix: Proofs

Proof of Lemma 2.1: For x = ch, 0 ≤ c ≤ 1, we have from (2.1)

    E f̃_n(x) ∫_{-1}^{c} K(t)dt = (1/h) E K((x - g(X_i))/h) = (1/h) ∫_{0}^{∞} K((x - g(y))/h) f(y)dy
        = ∫_{-1}^{c} K(t) [ f(g^{-1}((c - t)h)) / g^{(1)}(g^{-1}((c - t)h)) ] dt.    (A.1)

Letting q(t) = g^{-1}((c - t)h), we have

    q^{(1)}(t) = -h / g^{(1)}(q(t))  and  q^{(2)}(t) = -h² g^{(2)}(q(t)) / [g^{(1)}(q(t))]³,

so a Taylor expansion of order 2 of the function f(q(·))/g^{(1)}(q(·)) at t = c gives

    f(q(t)) / g^{(1)}(q(t)) = f(0) - (t - c)h [f^{(1)}(0) - f(0)g^{(2)}(0)]
        + ((t - c)²h²/2) [f^{(2)}(0) - f(0)g^{(3)}(0) - 3g^{(2)}(0)(f^{(1)}(0) - f(0)g^{(2)}(0))] + o(h²),    (A.2)

with the coefficients evaluated using q(c) = g^{-1}(0) = 0 and g^{(1)}(q(c)) = g^{(1)}(0) = 1. Also, by the existence and continuity of f^{(2)}(·) near 0, for x = ch we have

    f(0) = f(x) - ch f^{(1)}(x) + ((ch)²/2) f^{(2)}(x) + o(h²),    (A.3)
    f^{(1)}(x) = f^{(1)}(0) + ch f^{(2)}(0) + o(h),    (A.4)

and

    f^{(2)}(x) = f^{(2)}(0) + o(1).    (A.5)

Now from (A.1) to (A.5), we obtain for x = ch, 0 ≤ c ≤ 1,

    E f̃_n(x) ∫_{-1}^{c} K(t)dt = [f(x) - ch f^{(1)}(x) + ((ch)²/2) f^{(2)}(x) + o(h²)] ∫_{-1}^{c} K(t)dt
        - h (f^{(1)}(0) - f(0)g^{(2)}(0)) ∫_{-1}^{c} (t - c)K(t)dt
        + (h²/2) [f^{(2)}(0) - f(0)g^{(3)}(0) - 3g^{(2)}(0)(f^{(1)}(0) - f(0)g^{(2)}(0))] ∫_{-1}^{c} (t - c)²K(t)dt + o(h²)
    = f(x) ∫_{-1}^{c} K(t)dt - h { f^{(1)}(0) ∫_{-1}^{c} tK(t)dt + f(0)g^{(2)}(0) ∫_{-1}^{c} (c - t)K(t)dt }
        + (h²/2) { [f^{(2)}(0) - f(0)g^{(3)}(0) - 3g^{(2)}(0)(f^{(1)}(0) - f(0)g^{(2)}(0))] ∫_{-1}^{c} (t - c)²K(t)dt
        - c² f^{(2)}(0) ∫_{-1}^{c} K(t)dt } + o(h²).    (A.6)

The proof of (2.2) is now completed by dividing both sides of (A.6) by ∫_{-1}^{c} K(t)dt. In order to prove (2.3) we first note that from (2.1),

    Var f̃_n(x) = (∫_{-1}^{c} K(t)dt)^{-2} (1/(nh²)) Var K((x - g(X_i))/h)
        = (∫_{-1}^{c} K(t)dt)^{-2} (1/(nh²)) { E K²((x - g(X_i))/h) - ( E K((x - g(X_i))/h) )² }
        = (∫_{-1}^{c} K(t)dt)^{-2} (I₁ + I₂),    (A.7)

where

    I₁ = (1/(nh²)) E K²((x - g(X_i))/h) = (1/(nh)) ∫_{-1}^{c} K²(t) [ f(g^{-1}((c - t)h)) / g^{(1)}(g^{-1}((c - t)h)) ] dt
       = (f(0)/(nh)) ∫_{-1}^{c} K²(t)dt + o(1/(nh)),    (A.8)

from (A.2). Similarly,

    I₂ = -(1/(nh²)) ( E K((x - g(X_i))/h) )² = -(1/n) ( (1/h) E K((x - g(X_i))/h) )² = o(1/(nh)),    (A.9)

again from (A.2). By combining (A.7) to (A.9) we complete the proof of (2.3).

Lemma A.1. Let d̂ be defined by (2.10) with h = O(n^{-1/5}) and h₁ = O(n^{-1/4}). Suppose

that f(x) > 0 for x = 0, and that f^{(2)} is continuous in a neighbourhood of 0. Then

    E[ |d̂ - d|³ | X_i = x_i, X_j = x_j ] = O(h₁³)

for any integers i and j, 1 ≤ i, j ≤ n, where d is given by (2.8).

Proof. Similar to the proof of Lemma A.2 of Zhang, Karunamuni and Jones (1999).

Proof of Theorem 2.1: For x = ch, 0 ≤ c ≤ 1, we write

    E f̂_n(x) - f(x) = I₃ + I₄,    (A.10)

where I₃ = E f̂_n(x) - E f̃_n(x) and I₄ = E f̃_n(x) - f(x), with f̃_n(x) given by (2.1). From Lemma 2.1, we obtain I₄ = o(h²) when g is chosen to satisfy (2.4). By an application of Taylor's expansion of order 1 on K, we see that I₃ satisfies

    I₃ = [ (nh) ∫_{-1}^{c} K(t)dt ]^{-1} Σ_{i=1}^n E[ K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) ]
       = [ (nh²) ∫_{-1}^{c} K(t)dt ]^{-1} Σ_{i=1}^n E[ (g_c(X_i) - ĝ_c(X_i)) K^{(1)}((x - g_c(X_i) + ε(g_c(X_i) - ĝ_c(X_i)))/h) ],    (A.11)

where 0 < ε < 1 is a constant. Note that for any constants d and l_c we have, for any y ≥ 0,

    g_c(y) = y + (d/2) l_c y² + (d l_c)² y³ = y [ (d l_c y + 1/4)² + 15/16 ].

Thus g_c(y) ≥ (15/16)y for y ≥ 0. Therefore, εĝ_c(X_i) + (1 - ε)g_c(X_i) ≥ (15/16)X_i. Since K

vanishes outside [-1, 1], from (A.11) we have

    |I₃| ≤ [ (nh²) ∫_{-1}^{c} K(t)dt ]^{-1} Σ_{i=1}^n E{ |g_c(X_i) - ĝ_c(X_i)| |K^{(1)}((x - εĝ_c(X_i) - (1 - ε)g_c(X_i))/h)| I[0 ≤ X_i ≤ 2ρh] }
        ≤ (C/h²) E |ĝ_c(X_i) - g_c(X_i)| I[0 ≤ X_i ≤ 2ρh],    (A.12)

where ρ = 16/15 and C = sup_{|t| ≤ 1} |K^{(1)}(t)| / ∫_{-1}^{c} K(t)dt. Now by the definitions of g_c and ĝ_c (see (2.6) and (2.14)) we obtain

    E |ĝ_c(X_i) - g_c(X_i)| I[0 ≤ X_i ≤ 2ρh] = E | (l_c/2)(d̂ - d)X_i² + l_c²(d̂² - d²)X_i³ | I[0 ≤ X_i ≤ 2ρh]
        ≤ (l_c/2)(2ρh)² E |d̂ - d| I[0 ≤ X_i ≤ 2ρh] + l_c²(2ρh)³ E |d̂² - d²| I[0 ≤ X_i ≤ 2ρh].    (A.13)

The Cauchy-Schwarz inequality and Lemma A.1 yield that E[ |d̂ - d|^k | X_i = x_i ] = O(h₁^k) for 1 ≤ k ≤ 3. From this we have

    E |d̂ - d| I[0 ≤ X_i ≤ 2ρh] = E{ I[0 ≤ X_i ≤ 2ρh] E[ |d̂ - d| | X_i ] }
        ≤ O(h₁) E I[0 ≤ X_i ≤ 2ρh] = o(h²),    (A.14)

where the last equality follows from the fact that lim_{n→∞} (2ρh)^{-1} E I[0 ≤ X_i ≤ 2ρh] = f(0) for

each 0 ≤ c ≤ 1. From the same expression we also obtain that

    E |d̂² - d²| I[0 ≤ X_i ≤ 2ρh] = E |d̂ - d| |d̂ + d| I[0 ≤ X_i ≤ 2ρh]
        = E |d̂ - d| |d̂ - d + 2d| I[0 ≤ X_i ≤ 2ρh]
        ≤ E |d̂ - d|² I[0 ≤ X_i ≤ 2ρh] + 2|d| E |d̂ - d| I[0 ≤ X_i ≤ 2ρh] = o(h²).    (A.15)

By combining (A.12) to (A.15), we now have I₃ = o(h²). The proof of (2.16) is completed by the preceding result, the fact that I₄ = o(h²), and (A.10).

We now prove (2.17). First write

    (∫_{-1}^{c} K(t)dt)² Var f̂_n(x) = (nh)^{-2} Var{ Σ_{i=1}^n K((x - ĝ_c(X_i))/h) }
        = (nh)^{-2} Var{ Σ_{i=1}^n [ K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) ] + Σ_{i=1}^n K((x - g_c(X_i))/h) }
        = I₅ + I₆ + I₇,    (A.16)

where g_c is given by (2.6),

    I₅ = (nh)^{-2} Var{ Σ_{i=1}^n [ K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) ] },    (A.17)

    I₆ = (nh)^{-2} Var{ Σ_{i=1}^n K((x - g_c(X_i))/h) },    (A.18)

and

    I₇ = 2(nh)^{-2} Cov( Σ_{i=1}^n [ K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) ], Σ_{j=1}^n K((x - g_c(X_j))/h) ).    (A.19)

From Lemma 2.1, we have

    I₆ = (f(0)/(nh)) ∫_{-1}^{c} K²(t)dt + o(1/(nh)).    (A.20)

Now consider I₅. Note that

    I₅ ≤ (nh)^{-2} E{ Σ_{i=1}^n [ K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) ] }²
       = (nh)^{-2} Σ_{i=1}^n E[ K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) ]²
         + (nh)^{-2} Σ_{i ≠ j} E| K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) | | K((x - ĝ_c(X_j))/h) - K((x - g_c(X_j))/h) |
       = I₈ + I₉.    (A.21)

By an application of Taylor's expansion of order 1 on K, we obtain

    I₈ = (nh)^{-2} Σ_{i=1}^n E[ K((x - ĝ_c(X_i))/h) - K((x - g_c(X_i))/h) ]²
       = (n h⁴)^{-1} E[ (g_c(X_i) - ĝ_c(X_i)) K^{(1)}((x - g_c(X_i) + ε(g_c(X_i) - ĝ_c(X_i)))/h) ]²
       ≤ (C/(n h⁴)) E (ĝ_c(X_i) - g_c(X_i))² I[0 ≤ X_i ≤ 2ρh],    (A.22)

using an argument similar to (A.12) above, where 0 < ε < 1, ρ = 16/15 and C > 0 are all constants independent of n. Similar to (A.13) we can write

    E (ĝ_c(X_i) - g_c(X_i))² I[0 ≤ X_i ≤ 2ρh] = E [ (l_c/2)(d̂ - d)X_i² + l_c²(d̂² - d²)X_i³ ]² I[0 ≤ X_i ≤ 2ρh]
        ≤ (l_c²/2)(2ρh)⁴ E (d̂ - d)² I[0 ≤ X_i ≤ 2ρh] + 2 l_c⁴ (2ρh)⁶ E (d̂² - d²)² I[0 ≤ X_i ≤ 2ρh]
        = O(h⁴ h₁² h) + O(h⁶ h₁² h) = o(h⁷).    (A.23)

Now combining (A.22) and (A.23), we obtain I₈ = o(n^{-1} h³) = o((nh)^{-1}). An argument similar to (A.22) yields that

    I₉ ≤ (C/(n² h⁴)) Σ_{i ≠ j} E |ĝ_c(X_i) - g_c(X_i)| |ĝ_c(X_j) - g_c(X_j)| I[0 ≤ X_i ≤ 2ρh, 0 ≤ X_j ≤ 2ρh],    (A.24)

where ρ = 16/15 and C > 0 is a constant independent of n. As in (A.23), using

Lemma A.1, we obtain

    E |ĝ_c(X_i) - g_c(X_i)| |ĝ_c(X_j) - g_c(X_j)| I[0 ≤ X_i ≤ 2ρh, 0 ≤ X_j ≤ 2ρh]
        = E | (l_c/2)(d̂ - d)X_i² + l_c²(d̂² - d²)X_i³ | | (l_c/2)(d̂ - d)X_j² + l_c²(d̂² - d²)X_j³ | I[0 ≤ X_i ≤ 2ρh, 0 ≤ X_j ≤ 2ρh]
        ≤ C₁ E { h²|d̂ - d| + h³|d̂² - d²| }² I[0 ≤ X_i ≤ 2ρh, 0 ≤ X_j ≤ 2ρh]
        ≤ 2C₁ { h⁴ E |d̂ - d|² I[0 ≤ X_i ≤ 2ρh, 0 ≤ X_j ≤ 2ρh] + h⁶ E |d̂² - d²|² I[0 ≤ X_i ≤ 2ρh, 0 ≤ X_j ≤ 2ρh] }
        ≤ C₂ h⁴ { h₁² E I[0 ≤ X_i ≤ 2ρh, 0 ≤ X_j ≤ 2ρh] } = O(h⁴ h₁² h²) = o(h⁸),    (A.25)

where C₁ and C₂ are positive constants independent of n. Now combine (A.24) and (A.25) to conclude that I₉ = o(h⁴) = o((nh)^{-1}). Similarly, it is easy to show that I₇ = o((nh)^{-1}) using the covariance inequality. This completes the proof of (2.17).

References

[1] Abramson, I. (1982). On bandwidth variation in kernel estimates - a square root law. The Annals of Statistics, 10.

[2] Bowman, A.W. and Azzalini, A. (1997). Applied Smoothing Techniques for Data Analysis: The Kernel Approach with S-Plus Illustrations. Oxford University Press.

[3] Cheng, M.Y. (1997). Boundary-aware estimators of integrated density derivative. Journal of the Royal Statistical Society Ser. B, 59.

[4] Cheng, M.Y., Fan, J. and Marron, J.S. (1997). On automatic boundary corrections. The Annals of Statistics, 25.

[5] Cline, D.B.H. and Hart, J.D. (1991). Kernel estimation of densities of discontinuous derivatives. Statistics, 22.

[6] Choi, E. and Hall, P. (1999). Data sharpening as a prelude to density estimation. Biometrika, 86.

[7] Cowling, A. and Hall, P. (1996). On pseudodata methods for removing boundary effects in kernel density estimation. Journal of the Royal Statistical Society Ser. B, 58.

[8] Gasser, T. and Müller, H.G. (1979). Kernel estimation of regression functions. In Smoothing Techniques for Curve Estimation, Lecture Notes in Mathematics 757, eds. T. Gasser and M. Rosenblatt. Heidelberg: Springer-Verlag.

[9] Gasser, T., Müller, H.G. and Mammitzsch, V. (1985). Kernels for nonparametric curve estimation. Journal of the Royal Statistical Society Ser. B, 47.

[10] Hall, P., Hu, T.C. and Marron, J.S. (1995). Improved variable window kernel estimates of probability densities. The Annals of Statistics, 23.

[11] Hall, P. and Park, B.U. (2002). New methods for bias correction at endpoints and boundaries. The Annals of Statistics, 30.

[12] Jones, M.C. (1993). Simple boundary correction for kernel density estimation. Statistics and Computing, 3.

[13] Jones, M.C. and Foster, P.J. (1996). A simple nonnegative boundary correction method for kernel density estimation. Statistica Sinica, 6.

[14] Marron, J.S. and Ruppert, D. (1994). Transformations to reduce boundary bias in kernel density estimation. Journal of the Royal Statistical Society Ser. B, 56.

[15] Müller, H.G. (1991). Smooth optimum kernel estimators near endpoints. Biometrika, 78.

[16] Parzen, E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33.

[17] Park, B.U., Jeong, S.O., Jones, M.C. and Kang, K.H. (2003). Adaptive variable location kernel density estimators with good performance at boundaries. Nonparametric Statistics, 15.

[18] Rosenblatt, M. (1956). Remarks on some nonparametric estimates of a density function. Annals of Mathematical Statistics, 27.

[19] Royal Netherlands Meteorological Institute (2001). Available wind speed data. http:// data/time series.htm, s370.asc, June 23, 2001.

[20] Schuster, E.F. (1985). Incorporating support constraints into nonparametric estimators of densities. Communications in Statistics, Part A - Theory and Methods, 14.

[21] Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis. London: Chapman and Hall.

[22] Terrell, G.R. and Scott, D.W. (1992). Variable kernel density estimation. The Annals of Statistics, 20.

[23] Wand, M.P. and Jones, M.C. (1995). Kernel Smoothing. London: Chapman and Hall.

[24] Wand, M.P., Marron, J.S. and Ruppert, D. (1991). Transformations in density estimation (with discussion). Journal of the American Statistical Association, 86.

[25] Zhang, S. and Karunamuni, R.J. (1998). On kernel density estimation near endpoints. Journal of Statistical Planning and Inference, 70.

[26] Zhang, S. and Karunamuni, R.J. (2000). On nonparametric density estimation at the boundary. Nonparametric Statistics, 12.

[27] Zhang, S., Karunamuni, R.J. and Jones, M.C. (1999). An improved estimator of the density function at the boundary. Journal of the American Statistical Association, 94(448).

Table 1: Bias, variance and MSE of the indicated estimators at 0, and their MISE over [0, h).

Density 1: f(x) = (x²/2)e^{-x}, x ≥ 0, with n = 200.
Density 2: f(x) = √(2/π) e^{-x²/2}, x ≥ 0, with n = 200.
Density 3: f(x) = (x² + 2x + 1/2)e^{-2x}, x ≥ 0, with n = 200.
Density 4: f(x) = 2e^{-2x}, x ≥ 0, with n = 200.

For each density the table reports Bias, Var and MSE at 0, and the MISE, for the estimators: New Method, H&P method, Z,K&J method, Boundary Kernel, LL method, J&F method and C&H method. [The numerical entries and the bandwidth values were not recoverable from this transcription.]

Figure 1: Ten typical estimates of density 1, f(x) = (x²/2)e^{-x}, x ≥ 0, with the optimal global bandwidth (panels: New Method, Kernel, H&P, Z,K&J, Boundary, LL, J&F, C&H).

Figure 2: Ten typical estimates of density 2, f(x) = √(2/π) e^{-x²/2}, x ≥ 0, with the optimal global bandwidth (panels: New Method, Kernel, H&P, Z,K&J, Boundary, LL, J&F, C&H).

Figure 3: Ten typical estimates of density 3, f(x) = (1/2)(2x² + 4x + 1)e^{-2x}, x ≥ 0, with the optimal global bandwidth (panels: New Method, Kernel, H&P, Z,K&J, Boundary, LL, J&F, C&H).

Figure 4: Ten typical estimates of density 4, f(x) = 2e^{-2x}, x ≥ 0, with the optimal global bandwidth (panels: New Method, Kernel, H&P, Z,K&J, Boundary, LL, J&F, C&H).

Figure 5: Density estimates for 35 measurements of average December precipitation in Des Moines, Iowa from 1961 to 1995, shown on the rug, with the bandwidth h chosen by cross-validation. The solid line is our proposed estimator (without BVF) and the dashed line is Hall and Park's.

Figure 6: Density estimates for 365 measurements of wind speed at Eindhoven, the Netherlands, taken at 2 AM every day during 1993. The values are plotted on the bottom. The solid line is our proposed estimator (without BVF) and the dashed line is the LL estimator, both with bandwidth h = 2.5.


More information

Generic maximum nullity of a graph

Generic maximum nullity of a graph Generic maximum nullity of a grap Leslie Hogben Bryan Sader Marc 5, 2008 Abstract For a grap G of order n, te maximum nullity of G is defined to be te largest possible nullity over all real symmetric n

More information

Journal of Computational and Applied Mathematics

Journal of Computational and Applied Mathematics Journal of Computational and Applied Matematics 94 (6) 75 96 Contents lists available at ScienceDirect Journal of Computational and Applied Matematics journal omepage: www.elsevier.com/locate/cam Smootness-Increasing

More information

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 Mat 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 f(x+) f(x) 10 1. For f(x) = x 2 + 2x 5, find ))))))))) and simplify completely. NOTE: **f(x+) is NOT f(x)+! f(x+) f(x) (x+) 2 + 2(x+) 5 ( x 2

More information

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

Section 2.7 Derivatives and Rates of Change Part II Section 2.8 The Derivative as a Function. at the point a, to be. = at time t = a is

Section 2.7 Derivatives and Rates of Change Part II Section 2.8 The Derivative as a Function. at the point a, to be. = at time t = a is Mat 180 www.timetodare.com Section.7 Derivatives and Rates of Cange Part II Section.8 Te Derivative as a Function Derivatives ( ) In te previous section we defined te slope of te tangent to a curve wit

More information

Click here to see an animation of the derivative

Click here to see an animation of the derivative Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,

More information

Differential Calculus (The basics) Prepared by Mr. C. Hull

Differential Calculus (The basics) Prepared by Mr. C. Hull Differential Calculus Te basics) A : Limits In tis work on limits, we will deal only wit functions i.e. tose relationsips in wic an input variable ) defines a unique output variable y). Wen we work wit

More information

On Boundary Correction in Kernel Estimation of ROC Curves

On Boundary Correction in Kernel Estimation of ROC Curves AUSTRIAN JOURNAL OF STATISTICS Volume 38 29, Number 1, 17 32 On Boundary Correction in Kernel Estimation of ROC Curves Jan Koláček 1 and Roana J. Karunamuni 2 1 Dept. of Matematics and Statistics, Brno

More information

WEIGHTED KERNEL ESTIMATORS IN NONPARAMETRIC BINOMIAL REGRESSION

WEIGHTED KERNEL ESTIMATORS IN NONPARAMETRIC BINOMIAL REGRESSION WEIGHTED KEREL ESTIMATORS I OPARAMETRIC BIOMIAL REGRESSIO Hidenori OKUMURA and Kanta AITO Department of Business management and information science, Cugoku Junior College, Okayama, 70-097 Japan Department

More information

Bootstrap prediction intervals for Markov processes

Bootstrap prediction intervals for Markov processes arxiv: arxiv:0000.0000 Bootstrap prediction intervals for Markov processes Li Pan and Dimitris N. Politis Li Pan Department of Matematics University of California San Diego La Jolla, CA 92093-0112, USA

More information

Robust Average Derivative Estimation. February 2007 (Preliminary and Incomplete Do not quote without permission)

Robust Average Derivative Estimation. February 2007 (Preliminary and Incomplete Do not quote without permission) Robust Average Derivative Estimation Marcia M.A. Scafgans Victoria inde-wals y February 007 (Preliminary and Incomplete Do not quote witout permission) Abstract. Many important models, suc as index models

More information

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for

More information

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER*

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER* EO BOUNDS FO THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BADLEY J. LUCIE* Abstract. Te expected error in L ) attimet for Glimm s sceme wen applied to a scalar conservation law is bounded by + 2 ) ) /2 T

More information

Time (hours) Morphine sulfate (mg)

Time (hours) Morphine sulfate (mg) Mat Xa Fall 2002 Review Notes Limits and Definition of Derivative Important Information: 1 According to te most recent information from te Registrar, te Xa final exam will be eld from 9:15 am to 12:15

More information

estimate results from a recursive sceme tat generalizes te algoritms of Efron (967), Turnbull (976) and Li et al (997) by kernel smooting te data at e

estimate results from a recursive sceme tat generalizes te algoritms of Efron (967), Turnbull (976) and Li et al (997) by kernel smooting te data at e A kernel density estimate for interval censored data Tierry Ducesne and James E Staord y Abstract In tis paper we propose a kernel density estimate for interval-censored data It retains te simplicity andintuitive

More information

Chapter 5 FINITE DIFFERENCE METHOD (FDM)

Chapter 5 FINITE DIFFERENCE METHOD (FDM) MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential

More information

1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist

1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist Mat 1120 Calculus Test 2. October 18, 2001 Your name Te multiple coice problems count 4 points eac. In te multiple coice section, circle te correct coice (or coices). You must sow your work on te oter

More information

A MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES

A MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES A MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES Ronald Ainswort Hart Scientific, American Fork UT, USA ABSTRACT Reports of calibration typically provide total combined uncertainties

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

Section 3.1: Derivatives of Polynomials and Exponential Functions

Section 3.1: Derivatives of Polynomials and Exponential Functions Section 3.1: Derivatives of Polynomials and Exponential Functions In previous sections we developed te concept of te derivative and derivative function. Te only issue wit our definition owever is tat it

More information

7.1 Using Antiderivatives to find Area

7.1 Using Antiderivatives to find Area 7.1 Using Antiderivatives to find Area Introduction finding te area under te grap of a nonnegative, continuous function f In tis section a formula is obtained for finding te area of te region bounded between

More information

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here!

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here! Precalculus Test 2 Practice Questions Page Note: You can expect oter types of questions on te test tan te ones presented ere! Questions Example. Find te vertex of te quadratic f(x) = 4x 2 x. Example 2.

More information

Mass Lumping for Constant Density Acoustics

Mass Lumping for Constant Density Acoustics Lumping 1 Mass Lumping for Constant Density Acoustics William W. Symes ABSTRACT Mass lumping provides an avenue for efficient time-stepping of time-dependent problems wit conforming finite element spatial

More information

LECTURE 14 NUMERICAL INTEGRATION. Find

LECTURE 14 NUMERICAL INTEGRATION. Find LECTURE 14 NUMERCAL NTEGRATON Find b a fxdx or b a vx ux fx ydy dx Often integration is required. However te form of fx may be suc tat analytical integration would be very difficult or impossible. Use

More information

One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes

One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes DOI 10.1007/s10915-014-9946-6 One-Sided Position-Dependent Smootness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meses JenniferK.Ryan Xiaozou Li Robert M. Kirby Kees Vuik

More information

More on generalized inverses of partitioned matrices with Banachiewicz-Schur forms

More on generalized inverses of partitioned matrices with Banachiewicz-Schur forms More on generalized inverses of partitioned matrices wit anaciewicz-scur forms Yongge Tian a,, Yosio Takane b a Cina Economics and Management cademy, Central University of Finance and Economics, eijing,

More information

A Reconsideration of Matter Waves

A Reconsideration of Matter Waves A Reconsideration of Matter Waves by Roger Ellman Abstract Matter waves were discovered in te early 20t century from teir wavelengt, predicted by DeBroglie, Planck's constant divided by te particle's momentum,

More information

New Distribution Theory for the Estimation of Structural Break Point in Mean

New Distribution Theory for the Estimation of Structural Break Point in Mean New Distribution Teory for te Estimation of Structural Break Point in Mean Liang Jiang Singapore Management University Xiaou Wang Te Cinese University of Hong Kong Jun Yu Singapore Management University

More information

New families of estimators and test statistics in log-linear models

New families of estimators and test statistics in log-linear models Journal of Multivariate Analysis 99 008 1590 1609 www.elsevier.com/locate/jmva ew families of estimators and test statistics in log-linear models irian Martín a,, Leandro Pardo b a Department of Statistics

More information

Bandwidth Selection in Nonparametric Kernel Testing

Bandwidth Selection in Nonparametric Kernel Testing Te University of Adelaide Scool of Economics Researc Paper No. 2009-0 January 2009 Bandwidt Selection in Nonparametric ernel Testing Jiti Gao and Irene Gijbels Bandwidt Selection in Nonparametric ernel

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

Fast optimal bandwidth selection for kernel density estimation

Fast optimal bandwidth selection for kernel density estimation Fast optimal bandwidt selection for kernel density estimation Vikas Candrakant Raykar and Ramani Duraiswami Dept of computer science and UMIACS, University of Maryland, CollegePark {vikas,ramani}@csumdedu

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

CHAPTER (A) When x = 2, y = 6, so f( 2) = 6. (B) When y = 4, x can equal 6, 2, or 4.

CHAPTER (A) When x = 2, y = 6, so f( 2) = 6. (B) When y = 4, x can equal 6, 2, or 4. SECTION 3-1 101 CHAPTER 3 Section 3-1 1. No. A correspondence between two sets is a function only if eactly one element of te second set corresponds to eac element of te first set. 3. Te domain of a function

More information

Kernel Smoothing and Tolerance Intervals for Hierarchical Data

Kernel Smoothing and Tolerance Intervals for Hierarchical Data Clemson University TigerPrints All Dissertations Dissertations 12-2016 Kernel Smooting and Tolerance Intervals for Hierarcical Data Cristoper Wilson Clemson University, cwilso6@clemson.edu Follow tis and

More information

2.3 Algebraic approach to limits

2.3 Algebraic approach to limits CHAPTER 2. LIMITS 32 2.3 Algebraic approac to its Now we start to learn ow to find its algebraically. Tis starts wit te simplest possible its, and ten builds tese up to more complicated examples. Fact.

More information

0.1 Differentiation Rules

0.1 Differentiation Rules 0.1 Differentiation Rules From our previous work we ve seen tat it can be quite a task to calculate te erivative of an arbitrary function. Just working wit a secon-orer polynomial tings get pretty complicate

More information

Chapter 4: Numerical Methods for Common Mathematical Problems

Chapter 4: Numerical Methods for Common Mathematical Problems 1 Capter 4: Numerical Metods for Common Matematical Problems Interpolation Problem: Suppose we ave data defined at a discrete set of points (x i, y i ), i = 0, 1,..., N. Often it is useful to ave a smoot

More information

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES (Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,

More information

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x

More information

OSCILLATION OF SOLUTIONS TO NON-LINEAR DIFFERENCE EQUATIONS WITH SEVERAL ADVANCED ARGUMENTS. Sandra Pinelas and Julio G. Dix

OSCILLATION OF SOLUTIONS TO NON-LINEAR DIFFERENCE EQUATIONS WITH SEVERAL ADVANCED ARGUMENTS. Sandra Pinelas and Julio G. Dix Opuscula Mat. 37, no. 6 (2017), 887 898 ttp://dx.doi.org/10.7494/opmat.2017.37.6.887 Opuscula Matematica OSCILLATION OF SOLUTIONS TO NON-LINEAR DIFFERENCE EQUATIONS WITH SEVERAL ADVANCED ARGUMENTS Sandra

More information

Chapter 2 Limits and Continuity

Chapter 2 Limits and Continuity 4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(

More information

Nonparametric regression on functional data: inference and practical aspects

Nonparametric regression on functional data: inference and practical aspects arxiv:mat/0603084v1 [mat.st] 3 Mar 2006 Nonparametric regression on functional data: inference and practical aspects Frédéric Ferraty, André Mas, and Pilippe Vieu August 17, 2016 Abstract We consider te

More information

Analytic Functions. Differentiable Functions of a Complex Variable

Analytic Functions. Differentiable Functions of a Complex Variable Analytic Functions Differentiable Functions of a Complex Variable In tis capter, we sall generalize te ideas for polynomials power series of a complex variable we developed in te previous capter to general

More information

CHOOSING A KERNEL FOR CROSS-VALIDATION. A Dissertation OLGA SAVCHUK

CHOOSING A KERNEL FOR CROSS-VALIDATION. A Dissertation OLGA SAVCHUK CHOOSING A KERNEL FOR CROSS-VALIDATION A Dissertation by OLGA SAVCHUK Submitted to te Office of Graduate Studies of Texas A&M University in partial fulfillment of te requirements for te degree of DOCTOR

More information

arxiv: v1 [stat.me] 27 Jul 2016

arxiv: v1 [stat.me] 27 Jul 2016 Bandwidt Selection for Kernel Density Estimation wit a Markov Cain Monte Carlo Sample arxiv:1607.08274v1 [stat.me] 27 Jul 2016 Hang J. Kim Department of Matematical Sciences, University of Cincinnati,

More information

Data-Based Optimal Bandwidth for Kernel Density Estimation of Statistical Samples

Data-Based Optimal Bandwidth for Kernel Density Estimation of Statistical Samples Commun. Teor. Pys. 70 (208) 728 734 Vol. 70 No. 6 December 208 Data-Based Optimal Bandwidt for Kernel Density Estimation of Statistical Samples Zen-Wei Li ( 李振伟 ) 2 and Ping He ( 何平 ) 3 Center for Teoretical

More information