A Jump-Preserving Curve Fitting Procedure Based On Local Piecewise-Linear Kernel Estimation


Peihua Qiu
School of Statistics, University of Minnesota
313 Ford Hall, 224 Church St SE, Minneapolis, MN

Abstract

It is known that the fitted regression function based on conventional local smoothing procedures is not statistically consistent at jump positions of the true regression function. In this article, a curve-fitting procedure based on local piecewise-linear kernel estimation is suggested. In a neighborhood of a given point, a piecewise-linear function with a possible jump at the given point is fitted by the weighted least squares procedure with the weights determined by a kernel function. The fitted value of the regression function at this point is then defined by one of the two estimators provided by the two fitted lines (the left and right lines), namely the one with the smaller value of the weighted residual sum of squares. It is proved that the fitted curve by this procedure is consistent in the entire design space; in other words, this procedure is jump-preserving. Several numerical examples are presented to evaluate its performance in small-to-moderate sample size cases.

Key Words: Jump-preserving curve fitting; Local piecewise-linear kernel estimation; Local smoothing; Nonparametric regression; Strong consistency.

1 Introduction

Regression analysis provides a tool to build functional relationships between dependent and independent variables. In some applications, regression models with jumps in the regression functions appear to be more appropriate for describing the data. For example, it was confirmed by several statisticians that the annual volume of the Nile river had a jump around year 1899 (Cobb, 1978).

The December sea-level pressure in Bombay, India was found to have a jump discontinuity around year 1960 (Shea et al. 1994). Some physiological parameters can likewise jump after physical or chemical shocks. As an example, the percentage of time a rat is in rapid-eye-movement state in each five-minute interval will most probably have an abrupt change after the lighting condition is suddenly changed (Qiu et al. 1999). The objective of this article is to provide a methodology to fit regression curves with jumps preserved.

Suppose that the regression model concerned is

  y_i = f(x_i) + ε_i,  for i = 1, 2, ..., n,    (1.1)

where 0 < x_1 < x_2 < ... < x_n < 1 are design points and the ε_i are iid random errors with mean 0 and variance σ². The regression function f(·) is continuous in [0, 1] except at positions 0 < s_1 < s_2 < ... < s_m < 1, where f(·) has jumps with magnitudes d_j ≠ 0 for j = 1, 2, ..., m. Figure 1.1 below presents a case with m = 2.

It is known that the curve fitted by conventional local smoothing procedures is not statistically consistent at positions where f(·) has jumps. For example, the local linear kernel smoother is based on the following minimization procedure (cf. Fan and Gijbels 1996):

  min_{a_0, a_1} Σ_{i=1}^n {y_i − [a_0 + a_1(x_i − x)]}² K((x_i − x)/h_n),    (1.2)

where K(·) is a kernel function with support [−1/2, 1/2] and h_n is a bandwidth parameter. The solution of (1.2) for a_0 is then defined as the local linear kernel estimator of f(x).
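As a point of reference, the conventional estimator (1.2) can be sketched in a few lines of Python. This is illustrative code written for this exposition, not the author's software; the function names (epanechnikov, local_linear) are our own, and the Epanechnikov kernel is the one the paper itself adopts in Section 3.

    import numpy as np

    def epanechnikov(u):
        """Kernel with support [-1/2, 1/2]: K(u) = 1.5*(1 - 4u^2) on its support."""
        return np.where(np.abs(u) <= 0.5, 1.5 * (1.0 - 4.0 * u**2), 0.0)

    def local_linear(x0, x, y, h):
        """Solve (1.2): weighted least squares line fit at x0; return a0-hat."""
        u = (x - x0) / h
        w = epanechnikov(u)
        sw = np.sqrt(w)                                  # sqrt weights give WLS via lstsq
        X = np.column_stack([np.ones_like(x), x - x0])   # columns: 1, (x_i - x0)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        return beta[0]                                   # a0-hat estimates f(x0)

When x0 sits at a discontinuity, this fit averages observations from both sides of the jump, which is exactly the blurring visible around x = 0.3 and x = 0.7 in Figure 1.1.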

In Figure 1.1, the solid curve denotes the true regression function. It has two jumps, at x = 0.3 and x = 0.7. The dashed curve denotes the conventional fit by the local linear kernel smoothing procedure. It can be seen that blurring is present in the fitted curve around the two jumps. As a comparison, the curve fitted by the procedure suggested in this paper is represented by the dotted curve; the two jumps are preserved well by our procedure. More explanation of this plot is given in Section 4.

Figure 1.1: Small dots denote noisy data. The solid curve represents the true regression model. The dashed and dotted curves denote the conventional fit by the local linear kernel smoothing procedure and the fit by the procedure suggested in this paper.

A major reason why the local linear kernel smoothing procedure (1.2) does not preserve jumps is that it uses a local continuous function (a linear function) to approximate the true regression function in a neighborhood of a given point x even if there is a jump at x. A natural idea to overcome this limitation is to fit a local piecewise-linear function at x as follows:

  min_{a_{l,0}, a_{l,1}; a_{r,0}, a_{r,1}} Σ_{i=1}^n {y_i − [a_{l,0} + a_{l,1}(x_i − x)] − [(a_{r,0} − a_{l,0})I(x_i − x) + (a_{r,1} − a_{l,1})(x_i − x)I(x_i − x)]}² K((x_i − x)/h_n),    (1.3)

where I(·) is an indicator function defined by I(a) = 1 if a ≥ 0 and I(a) = 0 otherwise. The minimization procedure (1.3) fits a piecewise-linear function

  a_{l,0} + a_{l,1}(u − x) + (a_{r,0} − a_{l,0})I(u − x) + (a_{r,1} − a_{l,1})(u − x)I(u − x)

in u ∈ [x − h_n/2, x + h_n/2] with a possible jump at x. This is equivalent to fitting two different lines, a_{l,0} + a_{l,1}(u − x) and a_{r,0} + a_{r,1}(u − x), in [x − h_n/2, x) and [x, x + h_n/2], respectively.
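Numerically, (1.3) is a four-parameter weighted least squares problem with an indicator-augmented design matrix. A minimal sketch, reusing epanechnikov from above (the function name is again our own):

    def piecewise_linear_fit(x0, x, y, h):
        """Solve (1.3): WLS fit of two lines with a possible jump at x0.
        Assumes several observations on each side of x0.
        Returns (a_l0, a_l1, a_r0, a_r1)."""
        u = (x - x0) / h
        w = epanechnikov(u)
        ind = (x >= x0).astype(float)          # I(x_i - x0)
        # columns correspond to a_l0, a_l1, (a_r0 - a_l0), (a_r1 - a_l1)
        X = np.column_stack([np.ones_like(x), x - x0, ind, (x - x0) * ind])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        a_l0, a_l1, dj0, dj1 = beta
        return a_l0, a_l1, a_l0 + dj0, a_l1 + dj1

Because the first two columns are supported on [x0 − h/2, x0) and the jump terms only affect [x0, x0 + h/2], the normal equations decouple into two one-sided line fits; Section 2 exploits exactly this to write the solutions in closed form.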

Let {â_{l,j}(x), â_{r,j}(x), j = 0, 1} denote the solution of (1.3). Then â_{l,0}(x) and â_{r,0}(x) are estimated from the observations in [x − h_n/2, x) and [x, x + h_n/2], respectively. Thus they are good estimators of f_−(x) and f_+(x), the left and right limits of f(·) at x, in the case when x is a jump point. When there is no jump in [x − h_n/2, x + h_n/2], both of them estimate f(x) well. In the case when x itself is not a jump point but a jump point exists in its neighborhood [x − h_n/2, x + h_n/2], only one of â_{l,0}(x) and â_{r,0}(x) provides a good estimator of f(x). Therefore we need to choose one of them as an estimator of f(x) in such a case. By combining all these considerations, we define

  f̂(x) = â_{l,0}(x) I*(RSS_r(x) − RSS_l(x)) + â_{r,0}(x) I*(RSS_l(x) − RSS_r(x))    (1.4)

as an estimator of f(x) for x ∈ [h_n/2, 1 − h_n/2], where I*(a) is defined by I*(a) = 1 if a > 0, 1/2 if a = 0, and 0 if a < 0; RSS_l(x) and RSS_r(x) are the weighted residual sums of squares (RSS) with respect to the observations in [x − h_n/2, x) and [x, x + h_n/2], respectively. That is,

  RSS_l(x) = Σ_{x_i < x} {y_i − â_{l,0}(x) − â_{l,1}(x)(x_i − x)}² K((x_i − x)/h_n);
  RSS_r(x) = Σ_{x_i ≥ x} {y_i − â_{r,0}(x) − â_{r,1}(x)(x_i − x)}² K((x_i − x)/h_n).

Basically, f̂(x) is defined by whichever of â_{l,0}(x) and â_{r,0}(x) has the smaller RSS value.
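The selection rule (1.4) is then a few lines on top of piecewise_linear_fit; again an illustrative sketch with our own naming:

    def fhat(x0, x, y, h):
        """Jump-preserving estimate (1.4): pick the side with the smaller weighted RSS."""
        a_l0, a_l1, a_r0, a_r1 = piecewise_linear_fit(x0, x, y, h)
        u = (x - x0) / h
        w = epanechnikov(u)
        left, right = x < x0, x >= x0
        rss_l = np.sum(w[left] * (y[left] - a_l0 - a_l1 * (x[left] - x0))**2)
        rss_r = np.sum(w[right] * (y[right] - a_r0 - a_r1 * (x[right] - x0))**2)
        if rss_l == rss_r:                      # I*(0) = 1/2: average the two sides
            return 0.5 * (a_l0 + a_r0)
        return a_l0 if rss_l < rss_r else a_r0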

In the literature, there are several existing procedures for fitting regression curves with jumps preserved. McDonald and Owen (1986) proposed an algorithm based on three local ordinary least squares estimates of the regression function, corresponding to the observations on the right, left and both sides of a given point, respectively. They then constructed their "split linear fit" as a weighted average of these three estimates, with weights determined by the goodness-of-fit values of the estimates. Hall and Titterington (1992) suggested an alternative but simpler method by establishing some relations among three local linear smoothers and using them to detect the jumps; the regression curve was then fitted as usual in regions separated by the detected jumps. Our procedure differs from these two procedures in that we put the problem of fitting regression curves with jumps preserved in the same framework as that of local linear kernel estimation, except that a local piecewise-linear function is fitted at a given point, which makes the curve estimator (1.4) easier to use.

Most other jump-preserving curve fitting procedures in the literature consist of two steps: (i) detecting possible jumps under the assumption that the number of jumps is known (it is often assumed to be 1), and (ii) fitting the regression curve as usual in the design subintervals separated by the detected jump points. Various jump detectors are based on one-sided constant kernel smoothing (Müller 1992, Qiu et al. 1991, Wu and Chu 1993), one-sided linear kernel smoothing (Loader 1996), local least squares estimation (Qiu and Yandell 1998), wavelet transformation (Wang 1995), semiparametric modeling (Eubank and Speckman 1994) and smoothing spline modeling (Koo 1997, Shiau et al. 1986). The case when the number of jumps is unknown is considered by several authors, including Qiu (1994) and Wu and Chu (1993), who first estimated the number of jumps and the jump positions by performing a series of hypothesis tests and then fitted the regression curve in the subintervals separated by the detected jump points. Compared with the above-mentioned methods, the method presented in this paper automatically accommodates the jumps in fitting the regression curve, without knowing the number of jumps and without performing any hypothesis tests.

This paper is organized as follows. In the next section, we discuss the jump-preserving curve fitting procedure (1.4) in some detail. Properties of the fitted curve are discussed in Section 3. In Section 4, we present some numerical examples concerning goodness-of-fit and bandwidth selection. The procedure is applied to a real-life dataset in Section 5. Section 6 contains some concluding remarks.

2 The Jump-Preserving Curve Fitting Procedure

First we notice that the minimization procedure (1.3) is equivalent to the combination of

  min_{a_{l,0}, a_{l,1}} Σ_{i=1}^n {y_i − a_{l,0} − a_{l,1}(x_i − x)}² K_l((x_i − x)/h_n)    (2.1)

and

  min_{a_{r,0}, a_{r,1}} Σ_{i=1}^n {y_i − a_{r,0}I(x_i − x) − a_{r,1}(x_i − x)I(x_i − x)}² K_r((x_i − x)/h_n),    (2.2)

where K_l(·) is defined by K_l(x) = K(x) if x ∈ [−1/2, 0) and 0 otherwise, and K_r(·) is defined by K_r(x) = K(x) if x ∈ [0, 1/2] and 0 otherwise. Clearly, (2.1) is equivalent to the local linear kernel smoothing procedure fitting f_−(x) from the observations in [x − h_n/2, x), the left half of [x − h_n/2, x + h_n/2], and (2.2) is equivalent to the local linear kernel smoothing procedure fitting f_+(x) from the observations in [x, x + h_n/2], the right half of [x − h_n/2, x + h_n/2]. The subscripts l and r in the notations {a_{l,j}, a_{r,j}, j = 0, 1}, K_l(·) and K_r(·) represent "left" and "right", respectively; they are also used in other notation defined below. The solutions of (2.1) and (2.2) can be written as

  â_{l,0}(x) = Σ_{i=1}^n y_i K_l((x_i − x)/h_n) [w_{l,2} − w_{l,1}(x_i − x)] / (w_{l,0}w_{l,2} − w_{l,1}²),
  â_{l,1}(x) = Σ_{i=1}^n y_i K_l((x_i − x)/h_n) [w_{l,0}(x_i − x) − w_{l,1}] / (w_{l,0}w_{l,2} − w_{l,1}²),
  â_{r,0}(x) = Σ_{i=1}^n y_i K_r((x_i − x)/h_n) [w_{r,2} − w_{r,1}(x_i − x)] / (w_{r,0}w_{r,2} − w_{r,1}²),
  â_{r,1}(x) = Σ_{i=1}^n y_i K_r((x_i − x)/h_n) [w_{r,0}(x_i − x) − w_{r,1}] / (w_{r,0}w_{r,2} − w_{r,1}²),

where w_{l,j} = Σ_{i=1}^n K_l((x_i − x)/h_n)(x_i − x)^j and w_{r,j} = Σ_{i=1}^n K_r((x_i − x)/h_n)(x_i − x)^j for j = 0, 1, 2.
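The closed-form expressions translate directly into vectorized code. A sketch of one of the four (again our naming, not the paper's):

    def one_sided_intercept(x0, x, y, h, side="left"):
        """Closed-form a-hat_{l,0}(x0) or a-hat_{r,0}(x0) via the moments w_{.,j}."""
        u = (x - x0) / h
        K = epanechnikov(u)
        K = np.where(u < 0, K, 0.0) if side == "left" else np.where(u >= 0, K, 0.0)
        d = x - x0
        w0, w1, w2 = K.sum(), (K * d).sum(), (K * d**2).sum()
        return np.sum(y * K * (w2 - w1 * d)) / (w0 * w2 - w1**2)

Up to floating-point error, this agrees with the intercepts returned by piecewise_linear_fit above, since (1.3) decouples into (2.1) and (2.2).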

Figure 2.1 presents â_{l,0}(·), â_{r,0}(·) and f̂(·) by the dotted, dashed and solid curves in the case of Figure 1.1, except that the noise in the data has been removed by setting σ = 0. It can be seen that blurring occurs in [x_0, x_0 + h_n/2] if â_{l,0}(·) is used to fit f(·) and the point x_0 is a jump point. Similarly, blurring occurs in [x_0 − h_n/2, x_0) if â_{r,0}(·) is used to fit f(·) and x_0 is a jump point. Our estimator f̂(·), however, can preserve the jumps well, because f̂(·) equals â_{l,0}(·) in [x_0 − h_n/2, x_0) and â_{r,0}(·) in [x_0, x_0 + h_n/2] when x_0 is a jump point.

Figure 2.1: The dotted, dashed and solid curves denote â_{l,0}(·), â_{r,0}(·) and f̂(·) in the case of Figure 1.1, except that the noise in the data has been removed by setting σ = 0.

When x is in the boundary regions [0, h_n/2) or (1 − h_n/2, 1], the estimator of f(x) is not defined by (1.4). In such a case there are several possible approaches to estimating f(x) if no jumps exist in [0, h_n) and (1 − h_n, 1]. For example, f̂(x) could be defined by the conventional local linear kernel estimator constructed from the observations in [0, x + h_n/2] or [x − h_n/2, 1], depending on whether x ∈ [0, h_n/2) or x ∈ (1 − h_n/2, 1]. In the following sections, we define f̂(x) = â_{r,0}(x) when x ∈ [0, h_n/2) and f̂(x) = â_{l,0}(x) when x ∈ (1 − h_n/2, 1], for simplicity. If there are jump points in [0, h_n) (or (1 − h_n, 1]), however, estimation of f(x) in the boundary region [0, h_n/2) (or (1 − h_n/2, 1]) is still an open problem.

In the literature, there are several existing data-driven bandwidth selection procedures, such as the plug-in procedures, the cross-validation procedure, Mallows' C_p criterion and Akaike's information criterion (cf., e.g., Chu and Marron 1991; Loader 1999). Since exact expressions for the mean and variance of the jump-preserving estimator f̂(·) in (1.4) are not available at this moment, the plug-in procedures are not considered here. In the numerical examples presented in Sections 4 and 5, we determine the bandwidth by the cross-validation procedure. That is, the optimal h_n is chosen by minimizing the following cross-validation criterion:

  CV(h_n) = (1/n) Σ_{i=1}^n ( y_i − f̂_{−i}(x_i) )²,    (2.3)

where f̂_{−i}(x) is the leave-one-out estimator of f(x) with bandwidth h_n; namely, the observation (x_i, y_i) is left out in constructing f̂_{−i}(x), for i = 1, 2, ..., n. A numerical example in Section 4 shows that the bandwidth chosen based upon (2.3) performs well.
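A minimal leave-one-out implementation of (2.3), assuming the fhat sketch above; a plain O(n²) loop is adequate at the sample sizes used in Section 4:

    def cv_score(h, x, y):
        """Cross-validation criterion (2.3) for bandwidth h."""
        n = len(x)
        resid2 = np.empty(n)
        for i in range(n):
            mask = np.arange(n) != i                   # leave observation i out
            resid2[i] = (y[i] - fhat(x[i], x[mask], y[mask], h))**2
        return resid2.mean()

    # Choose h over a grid, e.g. h = k/n for odd k as in Section 4:
    # h_grid = [k / n for k in range(11, 61, 2)]
    # h_cv = min(h_grid, key=lambda h: cv_score(h, x, y))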

3 Strong Consistency

The conventional local smoothing estimators of f(·), such as the one from (1.2), are not statistically consistent at jump positions. In this section we establish the almost sure consistency of the jump-preserving estimator f̂(·), which says that f̂(·) converges almost surely to the true regression function in the entire design space [0, 1] under some regularity conditions; that is, f̂(·) is jump-preserving. First we have the following result for â_{l,0}(·) and â_{r,0}(·).

Theorem 3.1 Suppose that f(·) has a continuous second-order derivative in [0, 1]; max_{1≤i≤n+1}(x_i − x_{i−1}) = O(1/n), where x_0 = 0 and x_{n+1} = 1; the kernel function K(·) is Lipschitz(1) continuous; and the bandwidth h_n = O(n^{−1/5}). Then

  (n^{2/5} / (log n · log log n)) ‖â_{l,0} − f‖_{[h_n/2, 1]} = o(1), a.s.,    (3.1)
  (n^{2/5} / (log n · log log n)) ‖â_{r,0} − f‖_{[0, 1 − h_n/2]} = o(1), a.s.,    (3.2)

where ‖g‖_{[a,b]} denotes max_{a≤x≤b} |g(x)|.

Theorem 3.1 establishes the almost sure uniform consistency of â_{l,0}(·) and â_{r,0}(·) when f(·) is continuous in the design space [0, 1]. Its proof is given in Appendix A. When f(·) has jumps in [0, 1] as specified by model (1.1), Theorem 3.1 also gives the almost sure consistency of f̂(·) in the continuity regions D_1 := [0, 1] \ ∪_{j=1}^m (s_j − h_n/2, s_j + h_n/2), since ‖f̂ − f‖_{D_1} ≤ max(‖â_{l,0} − f‖_{D_1}, ‖â_{r,0} − f‖_{D_1}) by (1.4). In the neighborhoods of the jump points, D_2 := ∪_{j=1}^m (s_j − h_n/2, s_j + h_n/2), we have the following result.

Theorem 3.2 Suppose that x is a given point in (0, 1); max_{1≤i≤n+1}(x_i − x_{i−1}) = O(1/n), where x_0 = 0 and x_{n+1} = 1; the kernel function K(·) is Lipschitz(1) continuous; lim_{n→∞} h_n = 0; and lim_{n→∞} n h_n = ∞. If f(·) has a continuous first-order derivative in [x, x + h_n/2], then

  RSS_r(x) = v_{r,0} σ² n h_n + o(n h_n), a.s.    (3.3)

If f(·) has a jump in [x, x + h_n/2] at x* := x + τh_n with magnitude d, where 0 ≤ τ ≤ 1/2, and f(·) has a continuous first-order derivative in [x, x + h_n/2] except at x*, at which f(·) has a right (when τ = 0), left (when τ = 1/2) or both (when 0 < τ < 1/2) first-order derivatives f'_+(x*) and f'_−(x*), then

  RSS_r(x) = (v_{r,0} σ² + d² C_2(τ)) n h_n + o(n h_n), a.s.,    (3.4)

where

  C_2(τ) = (v_{r,0}v_{r,2} − v_{r,1}²)^{−2} ∫_0^τ [ ∫_τ^{1/2} (v_{r,2} − v_{r,1}x) K_r(x) dx + u ∫_τ^{1/2} (v_{r,0}x − v_{r,1}) K_r(x) dx ]² K_r(u) du
         + (v_{r,0}v_{r,2} − v_{r,1}²)^{−2} ∫_τ^{1/2} [ ∫_0^τ (v_{r,2} − v_{r,1}x) K_r(x) dx + u ∫_0^τ (v_{r,0}x − v_{r,1}) K_r(x) dx ]² K_r(u) du

and v_{r,j} = ∫_0^{1/2} x^j K_r(x) dx for j = 0, 1, 2.

Similar results can be derived for RSS_l(x). It can be checked that C_2(τ) is positive when τ ∈ (0, 1/2) and 0 when τ = 0 or 1/2. If the kernel function K(·) is chosen to be the Epanechnikov function, defined by K(x) = 1.5(1 − 4x²) when x ∈ [−1/2, 1/2] and 0 otherwise (cf. Section 3.2.6, Fan and Gijbels 1996), then C_2 as a function of τ is displayed in Figure 3.1.

Figure 3.1: C_2 as a function of τ when K(·) is chosen to be the Epanechnikov function.
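The two-piece integral defining C_2(τ) is straightforward to evaluate numerically. A sketch using scipy.integrate.quad with the Epanechnikov kernel (illustrative code written for this exposition, not from the paper):

    from scipy.integrate import quad

    def K_r(x):
        """Right half of the Epanechnikov kernel: support [0, 1/2]."""
        return 1.5 * (1.0 - 4.0 * x**2) if 0.0 <= x <= 0.5 else 0.0

    v = [quad(lambda x, j=j: x**j * K_r(x), 0.0, 0.5)[0] for j in range(3)]
    D = v[0] * v[2] - v[1]**2           # v_{r,0} v_{r,2} - v_{r,1}^2

    def C2(tau):
        """C_2(tau) from Theorem 3.2; C2(0) = C2(0.5) = 0."""
        A = quad(lambda x: (v[2] - v[1] * x) * K_r(x), tau, 0.5)[0]
        B = quad(lambda x: (v[0] * x - v[1]) * K_r(x), tau, 0.5)[0]
        # the integrals over [0, tau] equal D - A and -B respectively
        left = quad(lambda u: (A + u * B)**2 * K_r(u), 0.0, tau)[0]
        right = quad(lambda u: ((D - A) - u * B)**2 * K_r(u), tau, 0.5)[0]
        return (left + right) / D**2

Evaluating C2 on a grid of τ in [0, 1/2] reproduces the qualitative shape in Figure 3.1: zero at the endpoints and positive in between.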

By (3.3) and (3.4), if there is a jump in [x − h_n/2, x + h_n/2], a neighborhood of a given point x, and this jump point is located on the right side of x, then RSS_l(x) < RSS_r(x), a.s., when n is large enough. Consequently, f̂(x) = â_{l,0}(x), a.s., when n is large enough. On the other hand, if the jump point is located on the left side of x, then RSS_l(x) > RSS_r(x), a.s., and f̂(x) = â_{r,0}(x), a.s., when n is large enough. By combining this fact and (3.1)-(3.2) in Theorem 3.1, we have the following results.

Theorem 3.3 Suppose that f(·) has a continuous second-order derivative in [0, 1] except at the jump positions {s_j, j = 1, 2, ..., m}, where f(·) has left and right second-order derivatives; max_{1≤i≤n+1}(x_i − x_{i−1}) = O(1/n), where x_0 = 0 and x_{n+1} = 1; the kernel function K(·) is Lipschitz(1) continuous; and the bandwidth h_n = O(n^{−1/5}). Then

(i) (n^{2/5}/(log n · log log n)) ‖f̂ − f‖_{D_1} = o(1), a.s.;

(ii) for each x ∈ D_2, (n^{2/5}/(log n · log log n)) (f̂(x) − f(x)) = o(1), a.s.;

(iii) for any small number 0 < δ < 1/4, (n^{2/5}/(log n · log log n)) ‖f̂ − f‖_{D_{2,δ}} = o(1), a.s., where D_{2,δ} = ∪_{j=1}^m {[s_j − (1/2 − δ)h_n, s_j − δh_n] ∪ [s_j + δh_n, s_j + (1/2 − δ)h_n]}.

Theorem 3.3 says that f̂(·) is uniformly consistent in the continuity regions D_1 with rate o(n^{−2/5} log n · log log n). In the neighborhoods of the jump points, it is consistent pointwise with the same rate. Because C_2(τ) has a positive lower bound when τ ∈ [δ, 1/2 − δ] for any given number 0 < δ < 1/4, f̂(·) is also uniformly consistent with rate o(n^{−2/5} log n · log log n) in D_{2,δ}, which equals D_2 \ D_δ, where

  D_δ = ∪_{j=1}^m [(s_j − h_n/2, s_j − (1/2 − δ)h_n) ∪ (s_j − δh_n, s_j + δh_n) ∪ (s_j + (1/2 − δ)h_n, s_j + h_n/2)].

4 Simulation Study

In this section we present some simulation results regarding bandwidth selection and the numerical performance of the jump-preserving curve fitting procedure (1.4). Let us revisit the example of Figure 1.1 first. The true regression function in this example is f(x) = 3x + 2 when x ∈ [0, 0.3); f(x) = 3x + 3 − sin((x − 0.3)π/0.2) when x ∈ [0.3, 0.7); and f(x) = 0.5x + 1.55 when x ∈ [0.7, 1]. It has two jump points, at x = 0.3 and x = 0.7. Both jump magnitudes are equal to 1. Observations are generated from model (1.1) with ε_i ~ N(0, σ²) for i = 1, 2, ..., n. The bandwidth used in procedure (1.4) is assumed to have the form h_n = k/n, where k is an odd integer, for convenience. Without confusion, k is sometimes called the bandwidth in this section.

Figure 4.1 presents the MSE values of the curve fitted by the jump-preserving procedure (1.4) for several k values when n = 200 and σ = 0.2. To remove some randomness in the results, all MSE values presented in this section are actually averages over 1000 replications. It can be seen from the plot that the MSE value first decreases and then increases as k increases. The bandwidth k works as a tuning parameter balancing underfitting and overfitting, as in the conventional local smoothing procedures. The best bandwidth in this case is k = 29, at which the MSE reaches its minimum. The dotted curve in Figure 1.1 shows one realization of the fitted curve with the best bandwidth k = 29; the dashed curve shows the conventional local linear kernel estimator with the same bandwidth.

Figure 4.1: MSE values of the curve fitted by the jump-preserving procedure (1.4) with several k values when n = 200 and σ = 0.2.

We then perform simulations with several different n and σ values. The optimal bandwidths and the corresponding MSE values are presented in Figures 4.2(a) and 4.2(c), respectively. From the plots, it can be seen that (1) the optimal k increases when the sample size n increases or σ increases, and (2) the corresponding MSE value decreases when n increases or σ decreases. The first finding suggests that the bandwidth should be chosen larger when the sample size is larger or the data are noisier, which is intuitively reasonable. The second finding might reflect the consistency of the fitted curve.
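A compact sketch of one replication of this experiment, assuming the fhat routine above; the test function follows the definition at the start of this section, and the equally spaced design and the seed are our own choices:

    rng = np.random.default_rng(0)

    def f_true(x):
        """Test regression function with jumps at x = 0.3 and x = 0.7."""
        return np.where(x < 0.3, 3 * x + 2,
               np.where(x < 0.7, 3 * x + 3 - np.sin((x - 0.3) * np.pi / 0.2),
                        0.5 * x + 1.55))

    n, sigma, k = 200, 0.2, 29
    x = (np.arange(1, n + 1) - 0.5) / n            # equally spaced design points
    y = f_true(x) + rng.normal(0, sigma, n)        # data from model (1.1)
    h = k / n
    fitted = np.array([fhat(x0, x, y, h) for x0 in x])
    mse = np.mean((fitted - f_true(x))**2)         # the paper averages this over 1000 replications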

Figure 4.2: (a) The optimal bandwidths by the MSE criterion; (b) the optimal bandwidths by the CV criterion; (c) the corresponding MSE values when the bandwidths in plot (a) are used; (d) the corresponding CV values when the bandwidths in plot (b) are used. In each panel, the values are plotted against the sample size for σ = 0.05, 0.1, 0.2 and 0.5.

As a comparison, the optimal bandwidths by the cross-validation procedure are presented in Figure 4.2(b), and the corresponding CV values (defined by equation (2.3)) are shown in Figure 4.2(d). By comparing Figures 4.2(a) and 4.2(b), it can be seen that the bandwidths selected by the cross-validation procedure are close to the optimal bandwidths based on the MSE criterion.

From Figure 1.1, it can be seen that blurring occurs around the jump points if f(·) is estimated by the conventional local linear kernel estimator. The jump-preserving estimator (1.4) preserves the jumps quite well, which is further confirmed by Figure 4.3.

In Figure 4.3(a), the solid curve denotes the true regression model and the dotted curve denotes the averaged estimator by the jump-preserving procedure, calculated from 1000 replications. The lower and upper dashed curves represent the 2.5 and 97.5 percentiles of these 1000 replications. We can see that the two sharp jumps are preserved well by the procedure (1.4). As a comparison, the averaged estimator and the corresponding percentiles by the conventional local linear kernel smoothing procedure with the same bandwidth are presented in Figure 4.3(b). It can be seen that the two jumps are blurred.

Figure 4.3: The solid curve denotes the true regression model; the dotted curve denotes the averaged estimator, calculated from 1000 replications. The lower and upper dashed curves represent the 2.5 and 97.5 percentiles of these 1000 replications. (a) Results from the jump-preserving procedure (1.4); (b) results from the conventional local linear kernel smoothing procedure.

5 An Application

In this section, we apply the jump-preserving curve fitting procedure (1.4) to a sea-level pressure dataset. In Figure 5.1, small dots denote the December sea-level pressures observed by the Bombay weather station in India. Meteorologists (cf. Shea et al. 1994) noticed a jump around year 1960 in this dataset, and the existence of this jump was confirmed by Qiu and Yandell (1998) with their local polynomial jump detection algorithm. In Figure 5.1, the solid curve denotes the regression curve fitted by our jump-preserving curve fitting procedure (1.4). In the procedure, the bandwidth is chosen to be k = 25, as determined by the cross-validation procedure (2.3). As indicated by the plot, the jump around year 1960 is preserved well by our procedure.
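In code, the application amounts to running the cross-validation selector and then the fit. A hypothetical sketch, assuming the pressure series is available as NumPy arrays year and pressure (the variable names and the grid of k values are our own):

    # year, pressure: arrays holding the December sea-level pressure series
    x = (year - year.min()) / (year.max() - year.min())   # rescale design to [0, 1]
    n = len(x)
    h_grid = [k / n for k in range(11, 41, 2)]            # odd k, as in Section 4
    h_cv = min(h_grid, key=lambda h: cv_score(h, x, pressure))
    fit = np.array([fhat(x0, x, pressure, h_cv) for x0 in x])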

Figure 5.1: Small dots denote the December sea-level pressures observed by the Bombay weather station in India. The solid curve is the jump-preserving estimator by the procedure (1.4).

6 Concluding Remarks

We have presented a jump-preserving curve fitting procedure which automatically accommodates possible jumps of the regression curve without knowing the number of jumps. The fitted curve is proved to be statistically consistent in the entire design space, and numerical examples show that it works reasonably well in applications.

The following issues related to this topic need further investigation. First, the procedure (1.4) works well in the boundary regions [0, h_n/2) and (1 − h_n/2, 1] only under the condition that there are no jumps in [0, h_n) and (1 − h_n, 1]. This condition can always be satisfied when the sample size is large. When the sample size is small, however, this condition may not hold in some cases, and it is still an open problem to fit f(·) when jumps exist in the boundary regions. Second, the plug-in procedures for choosing the bandwidth of a local smoother are often based on the bias-variance trade-off of the fitted regression model. Exact expressions for the mean and variance of the jump-preserving procedure (1.4) are not available yet, which needs further research.

Acknowledgement: The author would like to thank Mr. Alexandre Lambert of the Institut de Statistique at Université catholique de Louvain in Belgium for pointing out a mistake in the expression of C_2 that appeared in (3.4).

Appendix

A Proof of Theorem 3.1

We only prove equation (3.1) here; equation (3.2) can be proved similarly. First of all,

  E(â_{l,0}(x)) = Σ_{i=1}^n f(x_i) K_l((x_i − x)/h_n) [w_{l,2} − w_{l,1}(x_i − x)] / (w_{l,0}w_{l,2} − w_{l,1}²).    (A.1)

We notice that the summation on the right-hand side of (A.1) is only over those x_i in [x − h_n/2, x). By Taylor's expansion,

  f(x_i) = f(x) + f'(x)(x_i − x) + (1/2) f''(x)(x_i − x)² + o(h_n²),    (A.2)

where x_i ∈ [x − h_n/2, x). By combining (A.1) and (A.2), we have

  E(â_{l,0}(x)) = f(x) + f''(x) (w_{l,2}² − w_{l,1}w_{l,3}) / (2(w_{l,0}w_{l,2} − w_{l,1}²)) + o(h_n²),    (A.3)

where w_{l,3} = Σ_{i=1}^n K_l((x_i − x)/h_n)(x_i − x)³. Furthermore, it can be checked that

  w_{l,0}/(n h_n) = v_{l,0} + o(1),   w_{l,1}/(n h_n²) = v_{l,1} + o(1),
  w_{l,2}/(n h_n³) = v_{l,2} + o(1),   w_{l,3}/(n h_n⁴) = v_{l,3} + o(1),    (A.4)

where v_{l,j} = ∫_{−1/2}^0 x^j K_l(x) dx for j = 0, 1, 2, 3. By combining (A.3) and (A.4), we have

  E(â_{l,0}(x)) = f(x) + f''(x) (v_{l,2}² − v_{l,1}v_{l,3}) / (2(v_{l,0}v_{l,2} − v_{l,1}²)) h_n² + o(h_n²).

Therefore

  E(â_{l,0}(x)) − f(x) = f''(x) (v_{l,2}² − v_{l,1}v_{l,3}) / (2(v_{l,0}v_{l,2} − v_{l,1}²)) h_n² + o(h_n²).    (A.5)

Now let

  ε̃_i = ε_i I(|ε_i| ≤ i^{1/2}), i = 1, 2, ..., n,
  g_n(x) = Σ_{i=1}^n K_l((x_i − x)/h_n) [w_{l,2} − w_{l,1}(x_i − x)] / (w_{l,0}w_{l,2} − w_{l,1}²) ε_i,
  g̃_n(x) = Σ_{i=1}^n K_l((x_i − x)/h_n) [w_{l,2} − w_{l,1}(x_i − x)] / (w_{l,0}w_{l,2} − w_{l,1}²) ε̃_i =: Σ_{i=1}^n g̃_n(i).

For any ε > 0,

  P( (n^{2/5}/(log n · log log n)) |g̃_n(x) − E(g̃_n(x))| > ε )
    ≤ exp( −ε log n (log log n)^{1/2} ) E( Π_{i=1}^n exp( n^{2/5}(log log n)^{−1/2} [g̃_n(i) − E(g̃_n(i))] ) )
    ≤ n^{−ε(log log n)^{1/2}} exp( C n^{4/5}(log log n)^{−1} Σ_{i=1}^n Var(g̃_n(i)) ),

by an application of Chebyshev's inequality of the exponential form. Now

  Σ_{i=1}^n Var(g̃_n(i)) ≤ σ² Σ_{i=1}^n K_l²((x_i − x)/h_n) [ (w_{l,2} − w_{l,1}(x_i − x)) / (w_{l,0}w_{l,2} − w_{l,1}²) ]²
    = (σ²/(n h_n)) ∫_{−1/2}^0 K_l²(x) ( (v_{l,2} − v_{l,1}x) / (v_{l,0}v_{l,2} − v_{l,1}²) )² dx (1 + o(1))
    = (σ²/(n h_n)) C_{l,1}(K) (1 + o(1)),

where C_{l,1}(K) is a constant. So for all x ∈ [h_n/2, 1],

  P( (n^{2/5}/(log n · log log n)) |g̃_n(x) − E(g̃_n(x))| > ε ) = O( n^{−ε(log log n)^{1/2}} ).    (A.6)

We now define D_n = {x : |x| ≤ n^{1/δ} + 1, x ∈ R} for some δ > 0. Let E_n be a set such that, for any x ∈ D_n, there exists some Z(x) ∈ E_n such that |x − Z(x)| < n^{−2}, and E_n has at most N_n = [2n²(n^{1/δ} + 1)] + 1 elements, where [x] denotes the integral part of x. Then

  (n^{2/5}/(log n · log log n)) ‖g̃_n − E(g̃_n)‖_{[h_n/2,1] ∩ D_n} ≤ S_{1n} + S_{2n} + S_{3n},

where

  S_{1n} = sup_{x ∈ [h_n/2,1] ∩ D_n} (n^{2/5}/(log n · log log n)) |g̃_n(x) − g̃_n(Z(x))|,
  S_{2n} = sup_{x ∈ [h_n/2,1] ∩ D_n} (n^{2/5}/(log n · log log n)) |g̃_n(Z(x)) − E(g̃_n(Z(x)))|,
  S_{3n} = sup_{x ∈ [h_n/2,1] ∩ D_n} (n^{2/5}/(log n · log log n)) |E(g̃_n(Z(x))) − E(g̃_n(x))|.

From (A.6), P(S_{2n} > ε) = O( N_n n^{−ε(log log n)^{1/2}} ). By the Borel-Cantelli Lemma,

  lim_{n→∞} S_{2n} = 0, a.s.    (A.7)

Now

  S_{1n} = sup_{x ∈ [h_n/2,1] ∩ D_n} (n^{2/5}/(log n · log log n)) | Σ_{i=1}^n [ K_l((x_i − x)/h_n)(w_{l,2} − w_{l,1}(x_i − x))/(w_{l,0}w_{l,2} − w_{l,1}²)
            − K_l((x_i − Z(x))/h_n)(w_{l,2} − w_{l,1}(x_i − Z(x)))/(w_{l,0}w_{l,2} − w_{l,1}²) ] ε̃_i |
    ≤ n^{1/2} sup_{x ∈ [h_n/2,1] ∩ D_n} (n^{2/5}/(log n · log log n)) (1/(n h_n)) Σ_{i=1}^n | K_l((x_i − x)/h_n)(v_{l,2} − v_{l,1}(x_i − x)/h_n)/(v_{l,0}v_{l,2} − v_{l,1}²)
            − K_l((x_i − Z(x))/h_n)(v_{l,2} − v_{l,1}(x_i − Z(x))/h_n)/(v_{l,0}v_{l,2} − v_{l,1}²) | (1 + o(1))
    ≤ C_{l,2}(K) (n^{2/5}/(log n · log log n)) n^{1/2} n^{−2} h_n^{−1},

where C_{l,2}(K) is a constant. In the last inequality above, we have used the Lipschitz(1) property of K_l(·). Therefore

  lim_{n→∞} S_{1n} = 0, a.s.    (A.8)

Similarly,

  lim_{n→∞} S_{3n} = 0.    (A.9)

By combining (A.7)-(A.9), we have

  (n^{2/5}/(log n · log log n)) ‖g̃_n − E(g̃_n)‖_{[h_n/2,1] ∩ D_n} = o(1), a.s.    (A.10)

Now,

  ‖g_n − E(g_n)‖_{[h_n/2,1]} ≤ ‖g_n − g̃_n‖_{[h_n/2,1]} + ‖g̃_n − E(g̃_n)‖_{[h_n/2,1]} + ‖E(g̃_n) − E(g_n)‖_{[h_n/2,1]}.

Since E(ε_1²) < ∞, there exists a full set Ω_0 such that for each ω ∈ Ω_0 there exists a finite positive integer N_ω with ε̃_n(ω) = ε_n(ω) for all n ≥ N_ω. So for all n ≥ N_ω,

  |g_n(x) − g̃_n(x)| ≤ (1/(n h_n)) | Σ_{i=1}^{N_ω} K_l((x_i − x)/h_n)(v_{l,2} − v_{l,1}(x_i − x)/h_n)/(v_{l,0}v_{l,2} − v_{l,1}²) (ε_i − ε̃_i) | (1 + o(1)) ≤ C(N_ω)/(n h_n),

where C(N_ω) is a constant. Therefore

  (n^{2/5}/(log n · log log n)) ‖g_n − g̃_n‖_{[h_n/2,1]} = o(1), a.s.    (A.11)

Similarly,

  (n^{2/5}/(log n · log log n)) ‖E(g̃_n) − E(g_n)‖_{[h_n/2,1]} = o(1).    (A.12)

By (A.10)-(A.12), we have

  (n^{2/5}/(log n · log log n)) ‖g_n − E(g_n)‖_{[h_n/2,1]} = o(1), a.s.    (A.13)

By (A.5) and (A.13), we get equation (3.1).

B Proof of Theorem 3.2

By the definition of RSS_r(x),

  RSS_r(x) = Σ_{i=1}^n [y_i − â_{r,0}(x) − â_{r,1}(x)(x_i − x)]² K_r((x_i − x)/h_n)
           = Σ_{i=1}^n [ε_i + f(x_i) − â_{r,0}(x) − â_{r,1}(x)(x_i − x)]² K_r((x_i − x)/h_n)
           = Σ_{i=1}^n ε_i² K_r((x_i − x)/h_n) + 2 Σ_{i=1}^n ε_i [f(x_i) − â_{r,0}(x) − â_{r,1}(x)(x_i − x)] K_r((x_i − x)/h_n)
             + Σ_{i=1}^n [f(x_i) − â_{r,0}(x) − â_{r,1}(x)(x_i − x)]² K_r((x_i − x)/h_n)
           =: I_1 + I_2 + I_3.

Let us first prove equation (3.3), under the condition that f(·) has a continuous first-order derivative in [x, x + h_n/2]. By arguments similar to those in Appendix A,

  I_1 = v_{r,0} σ² n h_n + o(n h_n), a.s.    (B.1)

Now

  I_2 = 2 Σ_{i=1}^n ε_i [f(x) + f'(x)(x_i − x) − â_{r,0}(x) − â_{r,1}(x)(x_i − x) + o(h_n)] K_r((x_i − x)/h_n)
      = 2(f(x) − â_{r,0}(x)) Σ_{i=1}^n ε_i K_r((x_i − x)/h_n) + 2(f'(x) − â_{r,1}(x)) Σ_{i=1}^n ε_i K_r((x_i − x)/h_n)(x_i − x) + o(n h_n)
      = o(n h_n) + o(1/h_n) O(n h_n) O(h_n) + o(n h_n)
      = o(n h_n), a.s.    (B.2)

In the third equation above, we have used the results that f(x) − â_{r,0}(x) = o(1), a.s., and f'(x) − â_{r,1}(x) = o(1/h_n), a.s., where the first result is from Theorem 3.1 and the second result can be derived by similar arguments to those in Appendix A.

It can be similarly checked that

  I_3 = o(n h_n), a.s.    (B.3)

By combining (B.1)-(B.3), we get equation (3.3).

Next we prove equation (3.4), under the condition that f(·) has a jump in [x, x + h_n/2] at x* = x + τh_n, where 0 ≤ τ ≤ 1/2 is a constant. First,

  â_{r,0}(x) = Σ_{i=1}^n y_i K_r((x_i − x)/h_n) [w_{r,2} − w_{r,1}(x_i − x)] / (w_{r,0}w_{r,2} − w_{r,1}²)
            = Σ_{x_i < x*} f(x_i) K_r((x_i − x)/h_n) [w_{r,2} − w_{r,1}(x_i − x)] / (w_{r,0}w_{r,2} − w_{r,1}²)
              + Σ_{x_i ≥ x*} f(x_i) K_r((x_i − x)/h_n) [w_{r,2} − w_{r,1}(x_i − x)] / (w_{r,0}w_{r,2} − w_{r,1}²)
              + Σ_{i=1}^n ε_i K_r((x_i − x)/h_n) [w_{r,2} − w_{r,1}(x_i − x)] / (w_{r,0}w_{r,2} − w_{r,1}²)
            = Σ_{x_i < x*} (f_−(x*) + o(1)) K_r((x_i − x)/h_n) [w_{r,2} − w_{r,1}(x_i − x)] / (w_{r,0}w_{r,2} − w_{r,1}²)
              + Σ_{x_i ≥ x*} (f_−(x*) + d + o(1)) K_r((x_i − x)/h_n) [w_{r,2} − w_{r,1}(x_i − x)] / (w_{r,0}w_{r,2} − w_{r,1}²) + o(1), a.s.
            = f_−(x*) + d ∫_τ^{1/2} K_r(x)(v_{r,2} − v_{r,1}x) dx / (v_{r,0}v_{r,2} − v_{r,1}²) + o(1), a.s.    (B.4)

In the last equation above, we have used (A.4). Similarly, we can check that

  â_{r,1}(x) = (d/h_n) ∫_τ^{1/2} K_r(x)(v_{r,0}x − v_{r,1}) dx / (v_{r,0}v_{r,2} − v_{r,1}²) + o(1/h_n), a.s.    (B.5)

Then

  I_2 = 2 Σ_{x_i < x*} ε_i [ f(x_i) − f_−(x*) − d ∫_τ^{1/2} K_r(x)(v_{r,2} − v_{r,1}x) dx / (v_{r,0}v_{r,2} − v_{r,1}²) ] K_r((x_i − x)/h_n)
        + 2 Σ_{x_i ≥ x*} ε_i [ f(x_i) − f_−(x*) − d ∫_τ^{1/2} K_r(x)(v_{r,2} − v_{r,1}x) dx / (v_{r,0}v_{r,2} − v_{r,1}²) ] K_r((x_i − x)/h_n)
        − 2 (d/h_n) ∫_τ^{1/2} K_r(x)(v_{r,0}x − v_{r,1}) dx / (v_{r,0}v_{r,2} − v_{r,1}²) Σ_{i=1}^n ε_i (x_i − x) K_r((x_i − x)/h_n) + o(n h_n)
      = o(n h_n), a.s.    (B.6)

and

  I_3 = Σ_{x_i < x*} [ f(x_i) − f_−(x*) − d ∫_τ^{1/2} K_r(x)(v_{r,2} − v_{r,1}x) dx / (v_{r,0}v_{r,2} − v_{r,1}²)
            − d ∫_τ^{1/2} K_r(x)(v_{r,0}x − v_{r,1}) dx / (v_{r,0}v_{r,2} − v_{r,1}²) · (x_i − x)/h_n ]² K_r((x_i − x)/h_n)
        + Σ_{x_i ≥ x*} [ f(x_i) − f_−(x*) − d ∫_τ^{1/2} K_r(x)(v_{r,2} − v_{r,1}x) dx / (v_{r,0}v_{r,2} − v_{r,1}²)
            − d ∫_τ^{1/2} K_r(x)(v_{r,0}x − v_{r,1}) dx / (v_{r,0}v_{r,2} − v_{r,1}²) · (x_i − x)/h_n ]² K_r((x_i − x)/h_n) + o(n h_n)
      = n h_n ∫_0^τ [ d ∫_τ^{1/2} K_r(x)(v_{r,2} − v_{r,1}x) dx / (v_{r,0}v_{r,2} − v_{r,1}²) + d u ∫_τ^{1/2} K_r(x)(v_{r,0}x − v_{r,1}) dx / (v_{r,0}v_{r,2} − v_{r,1}²) ]² K_r(u) du
        + n h_n ∫_τ^{1/2} [ d − d ∫_τ^{1/2} K_r(x)(v_{r,2} − v_{r,1}x) dx / (v_{r,0}v_{r,2} − v_{r,1}²) − d u ∫_τ^{1/2} K_r(x)(v_{r,0}x − v_{r,1}) dx / (v_{r,0}v_{r,2} − v_{r,1}²) ]² K_r(u) du + o(n h_n), a.s.
      = d² C_2(τ) n h_n + o(n h_n), a.s.    (B.7)

By combining (B.1), (B.6) and (B.7), we get equation (3.4).
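One identity used repeatedly in the calculations above, and the reason the second display in (B.7) matches the two-piece form of C_2(τ), follows directly from the definition v_{r,j} = ∫_0^{1/2} x^j K_r(x) dx; spelled out as a short worked check:

    \int_0^{1/2} (v_{r,2} - v_{r,1}x) K_r(x)\,dx
      = v_{r,2}\int_0^{1/2} K_r(x)\,dx - v_{r,1}\int_0^{1/2} x K_r(x)\,dx
      = v_{r,0}v_{r,2} - v_{r,1}^2,
    \qquad
    \int_0^{1/2} (v_{r,0}x - v_{r,1}) K_r(x)\,dx
      = v_{r,0}v_{r,1} - v_{r,1}v_{r,0} = 0.

Hence when τ = 0, the weight in (B.4) integrates to exactly 1, so â_{r,0}(x) absorbs the full jump d and the right-side residuals carry no extra term, consistent with C_2(0) = 0.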

References

Chu, C.K., and Marron, J.S. (1991), "Choosing a kernel regression estimator," Statistical Science 6.

Cobb, G.W. (1978), "The problem of the Nile: conditional solution to a changepoint problem," Biometrika 65.

Eubank, R.L., and Speckman, P.L. (1994), "Nonparametric estimation of functions with jump discontinuities," IMS Lecture Notes, vol. 23, Change-Point Problems (E. Carlstein, H.G. Müller and D. Siegmund, eds.).

Fan, J., and Gijbels, I. (1996), Local Polynomial Modelling and Its Applications, Chapman & Hall: London.

Hall, P., and Titterington, M. (1992), "Edge-preserving and peak-preserving smoothing," Technometrics 34.

Hastie, T., and Loader, C. (1993), "Local regression: automatic kernel carpentry," Statistical Science 8.

Koo, J.Y. (1997), "Spline estimation of discontinuous regression functions," Journal of Computational and Graphical Statistics 6.

Loader, C.R. (1996), "Change point estimation using nonparametric regression," The Annals of Statistics 24.

Loader, C.R. (1999), "Bandwidth selection: classical or plug-in?," The Annals of Statistics 27.

McDonald, J.A., and Owen, A.B. (1986), "Smoothing with split linear fits," Technometrics 28.

Müller, H.G. (1992), "Change-points in nonparametric regression analysis," The Annals of Statistics 20.

Qiu, P. (1994), "Estimation of the number of jumps of the jump regression functions," Communications in Statistics - Theory and Methods 23.

Qiu, P., Asano, Ch., and Li, X. (1991), "Estimation of jump regression functions," Bulletin of Informatics and Cybernetics 24.

Qiu, P., Chappell, R., Obermeyer, W., and Benca, R. (1999), "Modelling daily and subdaily cycles in rat sleep data," Biometrics 55.

Qiu, P., and Yandell, B. (1998), "A local polynomial jump detection algorithm in nonparametric regression," Technometrics 40.

Shea, D.J., Worley, S.J., Stern, I.R., and Hoar, T.J. (1994), "An introduction to atmospheric and oceanographic data," NCAR/TN-404+IA, Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, Colorado.

Shiau, J.H., Wahba, G., and Johnson, D.R. (1986), "Partial spline models for the inclusion of tropopause and frontal boundary information in otherwise smooth two- and three-dimensional objective analysis," Journal of Atmospheric and Oceanic Technology 3.

Wang, Y. (1995), "Jump and sharp cusp detection by wavelets," Biometrika 82.

Wu, J.S., and Chu, C.K. (1993), "Kernel type estimators of jump points and values of a regression function," The Annals of Statistics 21.


Topics in Generalized Differentiation

Topics in Generalized Differentiation Topics in Generalized Differentiation J. Marsall As Abstract Te course will be built around tree topics: ) Prove te almost everywere equivalence of te L p n-t symmetric quantum derivative and te L p Peano

More information

arxiv: v1 [math.pr] 28 Dec 2018

arxiv: v1 [math.pr] 28 Dec 2018 Approximating Sepp s constants for te Slepian process Jack Noonan a, Anatoly Zigljavsky a, a Scool of Matematics, Cardiff University, Cardiff, CF4 4AG, UK arxiv:8.0v [mat.pr] 8 Dec 08 Abstract Slepian

More information

Local Orthogonal Polynomial Expansion (LOrPE) for Density Estimation

Local Orthogonal Polynomial Expansion (LOrPE) for Density Estimation Local Ortogonal Polynomial Expansion (LOrPE) for Density Estimation Alex Trindade Dept. of Matematics & Statistics, Texas Tec University Igor Volobouev, Texas Tec University (Pysics Dept.) D.P. Amali Dassanayake,

More information

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 Mat 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 f(x+) f(x) 10 1. For f(x) = x 2 + 2x 5, find ))))))))) and simplify completely. NOTE: **f(x+) is NOT f(x)+! f(x+) f(x) (x+) 2 + 2(x+) 5 ( x 2

More information

2.3 Algebraic approach to limits

2.3 Algebraic approach to limits CHAPTER 2. LIMITS 32 2.3 Algebraic approac to its Now we start to learn ow to find its algebraically. Tis starts wit te simplest possible its, and ten builds tese up to more complicated examples. Fact.

More information

Online Learning: Bandit Setting

Online Learning: Bandit Setting Online Learning: Bandit Setting Daniel asabi Summer 04 Last Update: October 0, 06 Introduction [TODO Bandits. Stocastic setting Suppose tere exists unknown distributions ν,..., ν, suc tat te loss at eac

More information

A MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES

A MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES A MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES Ronald Ainswort Hart Scientific, American Fork UT, USA ABSTRACT Reports of calibration typically provide total combined uncertainties

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

Deconvolution problems in density estimation

Deconvolution problems in density estimation Deconvolution problems in density estimation Dissertation zur Erlangung des Doktorgrades Dr. rer. nat. der Fakultät für Matematik und Wirtscaftswissenscaften der Universität Ulm vorgelegt von Cristian

More information

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016.

Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals. Gary D. Simpson. rev 01 Aug 08, 2016. Quaternion Dynamics, Part 1 Functions, Derivatives, and Integrals Gary D. Simpson gsim1887@aol.com rev 1 Aug 8, 216 Summary Definitions are presented for "quaternion functions" of a quaternion. Polynomial

More information

Bootstrap prediction intervals for Markov processes

Bootstrap prediction intervals for Markov processes arxiv: arxiv:0000.0000 Bootstrap prediction intervals for Markov processes Li Pan and Dimitris N. Politis Li Pan Department of Matematics University of California San Diego La Jolla, CA 92093-0112, USA

More information

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t). . Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use

More information

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Statistica Sinica 24 2014, 395-414 doi:ttp://dx.doi.org/10.5705/ss.2012.064 EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Jun Sao 1,2 and Seng Wang 3 1 East Cina Normal University,

More information

Taylor Series and the Mean Value Theorem of Derivatives

Taylor Series and the Mean Value Theorem of Derivatives 1 - Taylor Series and te Mean Value Teorem o Derivatives Te numerical solution o engineering and scientiic problems described by matematical models oten requires solving dierential equations. Dierential

More information

arxiv: v1 [math.na] 28 Apr 2017

arxiv: v1 [math.na] 28 Apr 2017 THE SCOTT-VOGELIUS FINITE ELEMENTS REVISITED JOHNNY GUZMÁN AND L RIDGWAY SCOTT arxiv:170500020v1 [matna] 28 Apr 2017 Abstract We prove tat te Scott-Vogelius finite elements are inf-sup stable on sape-regular

More information

Applications of the van Trees inequality to non-parametric estimation.

Applications of the van Trees inequality to non-parametric estimation. Brno-06, Lecture 2, 16.05.06 D/Stat/Brno-06/2.tex www.mast.queensu.ca/ blevit/ Applications of te van Trees inequality to non-parametric estimation. Regular non-parametric problems. As an example of suc

More information