Uniform Convergence Rates for Nonparametric Estimation


Uniform Convergence Rates for Nonparametric Estimation

Bruce E. Hansen
University of Wisconsin

October 2004
Preliminary and Incomplete

Abstract

This paper presents a set of rate of uniform consistency results for kernel estimators of density functions and regression functions. We generalize the existing literature by allowing for stationary strong mixing multivariate data with infinite support and kernels with unbounded support.

Research supported by the National Science Foundation. Department of Economics, 1180 Observatory Drive, University of Wisconsin, Madison, WI.

1 Introduction

This paper presents a set of rate of uniform consistency results for kernel estimators of density functions and regression functions. We generalize the existing literature by allowing for stationary strong mixing multivariate data with infinite support and kernels with unbounded support.

The kernel estimators that we examine were first introduced by Rosenblatt (1956) for density estimation, and by Nadaraya (1964) and Watson (1964) for regression estimation. The local linear estimator was introduced by Stone (1977) and came into prominence through the work of Fan (1992, 1993). Andrews (1995) provides a comprehensive set of results concerning the uniform consistency of kernel estimators, but his rates are not sharp. Masry (1996) derived sharp rates for uniform almost sure convergence, but confined attention to the case of bounded regression support and placed overly restrictive conditions on the regression functions. Fan and Yao (2003) also have a set of results, but these are quite restrictive in application. In this paper we attempt to provide a general set of results with broad applicability.

Our main result is the uniform convergence of a sample average functional, which can easily be applied to density and regression estimation. The conditions imposed on the functional are quite general, allowing for kernels with unbounded support (such as the standard normal), so long as they satisfy a Lipschitz condition. The data are allowed to be generated either from a random sample or from a stationary strong mixing time series. The support for the data is allowed to be infinite, and our convergence results cover the entire sample space rather than being restricted to a compact subset. The rate of decay for the bandwidth is allowed considerable flexibility. Our proof method is a generalization of that in Fan and Yao (2003) and, like theirs, is based on an exponential inequality from Bosq (1998). We also borrow the trimming argument of Andrews (1995) to allow for unbounded regression support.

Section 2 of the paper presents the main results: a variance bound and the rate of convergence for the sample average functional. Section 3 provides an application to density estimation, and Section 4 an application to regression estimation. The proofs are in the Appendix.

2 Basic Results

Let {X_i, Y_i} ∈ R^d × R be a sequence of random vectors. We are interested in averages of the form

    Ĝ(x) = (1/n) Σ_{i=1}^n Y_i G((X_i − x)/h)    (1)

where G(x) : R^d → R and

    h = c n^{−γ}    (2)

is a bandwidth, where 0 < c < ∞ and 0 < γ < 1/d. For typical applications, the function G is either a kernel or the product of a kernel with a polynomial.

Assumption 1. For all x, x′ ∈ R^d, some Λ < ∞, and η > 1/γ,

    |G(x)| ≤ Λ for |x| ≤ 1,    |G(x)| ≤ Λ |x|^{−η} for |x| > 1,    (3)

    |G(x) − G(x′)| ≤ Λ |x − x′|.    (4)

Equation (3) imposes boundedness and a tail condition related to the bandwidth. Equation (4) is a Lipschitz condition.

We require the following moment and smoothness conditions on the observations. Let f_{Y|X}(y|x) and f_X(x) denote the conditional density of Y_i given X_i and the marginal density of X_i, respectively, and for any j ≥ 1 let f_j(x_0, x_j) denote the joint density of (X_0, X_j).

Assumption 2. {X_i, Y_i} is strictly stationary and ergodic, with strong mixing coefficients α_m ≤ a m^{−β} for some a < ∞ and

    β > (2s − 2)/(s − 2)    (5)

for some s > 2. Furthermore, E|Y_i|^s < ∞, and for all j ≥ 0 and all t ≤ s,

    sup_x E(|Y_{i+j}|^t | X_i = x) f_X(x) ≤ Ψ_1 < ∞;    (6)

for j ≥ d + 1,

    sup_{x_1, x_2} f_j(x_1, x_2) ≤ Ψ_2 < ∞;    (7)

and for |x| large and some 1 < μ ≤ ∞,

    E(|Y_i| | X_i = x) f_X(x) ≤ Ψ_3 |x|^{−μ}.    (8)

Note that equations (6) and (8) involve the product of the conditional mean and the marginal density, and thus are not very restrictive. For independent data, f_j(x_0, x_j) = f_X(x_0) f_X(x_j), so (7) holds when the density f_X(x) is bounded. If the support of X_i is bounded, then we can take μ = ∞ in (8).

We first describe a uniform bound on the variance of Ĝ(x).

Theorem 1. Under Assumptions 1 and 2, there is J < ∞ such that

    Var(Ĝ(x)) ≤ J h^d / n.    (9)

We now present our main result concerning the rate of convergence of Ĝ(x).

Theorem 2. Under Assumptions 1 and 2, if in addition

    β > 3/(s − 2) + [ (dμ/(μ − 1))(1 + γd) + (3 + 7γd)/2 ] / [ ((s − 2)/s)(1 − γd) ],    (10)

then

    sup_{x ∈ R^d} |Ĝ(x) − E Ĝ(x)| = O_p(h^d r_n),    (11)

where

    r_n = ( log n / (n h^d) )^{1/2}.

The rate of decay for the bandwidth is controlled by (10), which is satisfied for sufficiently large β. In particular, in the case of independent data, or of exponential decay for the mixing coefficients, we may take β = ∞, and (10) is automatically satisfied. The right side of equation (10) is increasing in γ, so the inequality is least restrictive when γ is selected close to zero (so that the bandwidth declines to zero slowly). The limiting case (γ = 0) is

    β > 3/(s − 2) + (s/(s − 2)) ( dμ/(μ − 1) + 3/2 ).

Furthermore, if Y_i has all moments finite and the support of X_i is bounded, then s = ∞ and μ = ∞, and this simplifies to β > d + 3/2. In particular, for d = 1 the inequality is β > 5/2, the restriction used by Fan and Yao (2003), Lemma 6.1. Thus Theorem 2 generalizes their result to the case of multivariate data with unbounded support.
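As a concrete illustration (not part of the original text), the sample average functional Ĝ(x) of equation (1) can be computed directly. The sketch below, with illustrative names and parameter values, uses a standard normal product kernel, which is bounded and Lipschitz and has tails decaying faster than any polynomial, so Assumption 1 holds for every η:

```python
import numpy as np

def g_hat(x, X, Y, h, G):
    """Sample average functional G_hat(x) = n^{-1} sum_i Y_i G((X_i - x)/h)."""
    u = (X - x) / h            # (n, d) array of scaled deviations
    return float(np.mean(Y * G(u)))

def gauss_kernel(u):
    # Standard normal product kernel on R^d: bounded, Lipschitz,
    # with tails decaying faster than any polynomial (Assumption 1).
    return np.prod(np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi), axis=-1)

rng = np.random.default_rng(0)
n, d = 500, 2
X = rng.normal(size=(n, d))          # stationary (here i.i.d.) regressors
Y = np.ones(n)                       # Y_i = 1 corresponds to density estimation
h = 1.0 * n ** (-1.0 / (d + 4))      # h = c n^{-gamma} with c = 1, gamma = 1/(d+4)
val = g_hat(np.zeros(d), X, Y, h, gauss_kernel)
# h^{-d} * val is then the kernel density estimate at x = 0
```

With Y_i = 1, h^{−d} Ĝ(x) is exactly the kernel density estimator of Section 3; Theorem 2 bounds the deviation of Ĝ(x) from its expectation uniformly in x at the rate h^d r_n.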

3 Density Estimation

Consider the estimation of f_X(x), the density of X_i. Let k(u) : R → R denote a kernel function and let

    K(x) = Π_{j=1}^d k(x_j)

be a product kernel, and let h be a bandwidth. The kernel density estimator of f_X(x) is

    f̂_X(x) = (1/(n h^d)) Σ_{i=1}^n K((X_i − x)/h).

It is asymptotically optimal to set h = c n^{−γ} with γ = 1/(d + 4), which we now assume.

Assumption 3. (a) ∫_R k(u) du = 1; (b) k(−u) = k(u); (c) k(u) satisfies (3) and (4) for some η > d + 4; (d) the bandwidth satisfies h = c n^{−1/(d+4)}.

We can use Theorem 2 to obtain the uniform rate of convergence of f̂_X(x).

Theorem 3. If Assumption 2 holds with Y_i = 1 and s = ∞, Assumption 3 holds, and

    β > (1/4) ( (dμ/(μ − 1))(2d + 4) + 6 + 5d ),    (12)

then

    sup_{x ∈ R^d} |f̂_X(x) − f_X(x)| = O_p(r_n),

where

    r_n = n^{−2/(d+4)} log^{1/2} n.

Alternative results on the uniform rate of convergence of kernel density estimates have been provided by Andrews (1995, Theorem 1) and Fan and Yao (2003, Theorem 5.3). Andrews' result is more general in allowing for near-epoch-dependent arrays, but obtains a slower rate of convergence. Fan and Yao obtained the same rate of convergence, but their result is restricted to univariate data, compact support for X_i, and kernels with compact support.
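For instance (an illustrative sketch, not from the paper), with d = 1 and a Gaussian kernel, f̂_X can be evaluated directly and compared to a known density; the bandwidth constant c below is the usual rule-of-thumb choice, and all names are hypothetical:

```python
import numpy as np

def kde(x, X, h):
    """Product Gaussian-kernel estimator f_hat(x) = (n h^d)^{-1} sum_i K((X_i - x)/h)."""
    n, d = X.shape
    u = (X - x) / h
    K = np.prod(np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi), axis=1)
    return float(K.sum() / (n * h**d))

rng = np.random.default_rng(1)
n, d = 2000, 1
X = rng.normal(size=(n, d))                  # sample from N(0, 1)
h = 1.06 * X.std() * n ** (-1.0 / (d + 4))   # h = c n^{-1/(d+4)}, rule-of-thumb c
fhat0 = kde(np.zeros(d), X, h)
# the true N(0, 1) density at x = 0 is 1/sqrt(2*pi), roughly 0.399
```

As n grows, Theorem 3 says the worst-case error over all of R (not just a compact set) shrinks at the rate n^{−2/(d+4)} log^{1/2} n.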

4 Nadaraya-Watson Estimator

Consider the estimation of the conditional mean m(x) = E(Y_i | X_i = x). Let the multivariate kernel K and bandwidth h be defined as in the previous section. The Nadaraya-Watson estimator of m(x) is

    m̂(x) = Σ_{i=1}^n Y_i K((X_i − x)/h) / Σ_{i=1}^n K((X_i − x)/h).

The local linear (LL) estimator of m(x) is obtained from a weighted regression of Y_i on X_i. Letting

    z_i(x) = (1, (X_i − x)′)′,

the LL estimator is

    m̃(x) = e_1′ ( Σ_{i=1}^n z_i(x) z_i(x)′ K((X_i − x)/h) )^{−1} Σ_{i=1}^n z_i(x) Y_i K((X_i − x)/h),

where e_1 = (1, 0, ..., 0)′.

We introduce the following smoothness condition.

Assumption 4. For some δ > 0,

    sup_{|x_1 − x_2| ≤ δ} |∇m(x_1)| f_X(x_2) < ∞,

    sup_{|x_1 − x_2| ≤ δ} |∇²m(x_1)| f_X(x_2) < ∞.

Observe that Assumption 4 does not require the regression function and its derivatives to be bounded. Rather, ∇m(x) and ∇²m(x) are required to not diverge faster than f_X(x) declines to zero in the tails.

For any positive sequence δ_n define

    A_n = { x ∈ R^d : f_X(x) ≥ δ_n }.

Theorem 4. Under Assumptions 2, 3, 4, and (10),

    sup_{x ∈ A_n} |m̂(x) − m(x)| = O_p(δ_n^{−2} r_n).
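Both estimators are straightforward to compute. The following sketch (illustrative, not from the paper; the simulated design and all names are hypothetical) implements the Nadaraya-Watson and local linear estimators with a Gaussian product kernel; the LL estimate is the intercept of a kernel-weighted regression of Y_i on (1, (X_i − x)′):

```python
import numpy as np

def gauss_K(u):
    # Gaussian product kernel on R^d
    return np.prod(np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi), axis=-1)

def nadaraya_watson(x, X, Y, h):
    """m_hat(x) = sum_i Y_i K((X_i - x)/h) / sum_i K((X_i - x)/h)."""
    w = gauss_K((X - x) / h)
    return float((w * Y).sum() / w.sum())

def local_linear(x, X, Y, h):
    """Weighted least squares of Y on z_i = (1, (X_i - x)')'; the intercept estimates m(x)."""
    w = gauss_K((X - x) / h)
    Z = np.column_stack([np.ones(len(X)), X - x])
    beta = np.linalg.solve(Z.T @ (Z * w[:, None]), Z.T @ (w * Y))
    return float(beta[0])                       # first element, e_1' beta

rng = np.random.default_rng(2)
n = 1000
X = rng.uniform(-2.0, 2.0, size=(n, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)  # m(x) = sin(x), light noise
h, x0 = 0.3, np.array([0.5])
m_nw = nadaraya_watson(x0, X, Y, h)
m_ll = local_linear(x0, X, Y, h)
# both should be close to m(0.5) = sin(0.5)
```

The trimming set A_n of Theorem 4 corresponds in practice to restricting evaluation points x to regions where the estimated density is not too small, so that the random denominator in m̂(x) is bounded away from zero.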

Alternative results on the uniform rate of convergence of the Nadaraya-Watson estimator have been provided by Andrews (1995, Theorem 1). His results allow for near-epoch-dependent arrays, but obtain a slower rate of convergence.

Theorem 5. Under [conditions],

    sup_{x ∈ A_n} |m̃(x) − m(x)| = O_p(δ_n^{−2} r_n).

The conditions and proof are incomplete. This result complements that of Masry (1996), who obtains almost sure convergence over compact sets, but imposes stronger conditions on the regression function.

References

[1] Andrews, Donald W.K. (1995): "Nonparametric kernel estimation for semiparametric models," Econometric Theory, 11.

[2] Bosq, D. (1998): Nonparametric Statistics for Stochastic Processes: Estimation and Prediction (2nd ed.), Lecture Notes in Statistics 110, Springer-Verlag.

[3] Fan, Jianqing (1992): "Design-adaptive nonparametric regression," Journal of the American Statistical Association, 87.

[4] Fan, Jianqing (1993): "Local linear regression smoothers and their minimax efficiencies," Annals of Statistics, 21.

[5] Fan, Jianqing and Qiwei Yao (2003): Nonlinear Time Series: Nonparametric and Parametric Methods, Springer-Verlag.

[6] Masry, Elias (1996): "Multivariate local polynomial regression for time series: Uniform strong consistency and rates," Journal of Time Series Analysis, 17.

[7] Nadaraya, E. A. (1964): "On estimating regression," Theory of Probability and Its Applications, 9.

[8] Rosenblatt, M. (1956): "Remarks on some non-parametric estimates of a density function," Annals of Mathematical Statistics, 27.

[9] Stone, C. J. (1977): "Consistent nonparametric regression," Annals of Statistics, 5.

[10] Watson, G. S. (1964): "Smooth regression analysis," Sankhyā, Series A, 26.

5 Appendix

Proof of Theorem 1. Without loss of generality assume Λ ≥ 1 and Ψ_1 ≥ 1. From (3) we observe that

    ∫_{R^d} |G(x)| dx ≤ Λ ∫_{|x| ≤ 1} dx + Λ ∫_{|x| > 1} |x|^{−η} dx ≤ Λ V_d η/(η − 1),

where V_d is the volume of the unit sphere in R^d. Since |G(x)| ≤ Λ, it follows that for any 1 ≤ t ≤ s,

    ∫_{R^d} |G(x)|^t dx ≤ Λ^{t−1} ∫_{R^d} |G(x)| dx ≤ Λ^s V_d η/(η − 1) ≡ Θ.    (13)

Let U_{ni}(x) = Y_i G((X_i − x)/h). By a change of variables, for any 1 ≤ t ≤ s, using (13) and (6),

    E|U_{ni}(x)|^t = ∫∫ |y|^t |G((u − x)/h)|^t f_{Y|X}(y|u) f_X(u) du dy
        = h^d ∫ |G(u)|^t ( ∫ |y|^t f_{Y|X}(y | x + hu) dy ) f_X(x + hu) du
        ≤ Θ Ψ_1 h^d.    (14)

We now develop several alternative bounds on the covariances Cov(U_{n0}(x), U_{nj}(x)). First, by the Cauchy-Schwarz inequality and (14) with t = 2,

    |Cov(U_{n0}(x), U_{nj}(x))| ≤ Var(U_{n0}(x)) ≤ 2 E U_{n0}(x)² ≤ 2ΘΨ_1 h^d.    (15)

Second, take j ≥ d + 1 and observe that

    |Cov(U_{n0}(x), U_{nj}(x))| ≤ E |(Y_0 Y_j − EY_0 EY_j) G((X_0 − x)/h) G((X_j − x)/h)|
        + (EY_0)² E |G((X_0 − x)/h) G((X_j − x)/h)| + (E|U_{n0}(x)|)².    (16)

We examine the terms on the right-hand side of (16). By a change of variables, (13), and (7),

    E |G((X_0 − x)/h) G((X_j − x)/h)| = ∫∫ |G((u_0 − x)/h) G((u_j − x)/h)| f_j(u_0, u_j) du_0 du_j
        = h^{2d} ∫∫ |G(u_0) G(u_j)| f_j(x + h u_0, x + h u_j) du_0 du_j
        ≤ h^{2d} Θ² sup_{x_1, x_2} f_j(x_1, x_2)
        ≤ h^{2d} Θ² Ψ_2.    (17)

Let g_j(y_0, y_j | x) denote the conditional density of (Y_0, Y_j) given X_0 = x. Using (3), a change of variables, conditioning, (13), Davydov's Lemma, (6), and the assumption α_j ≤ a j^{−β}, we find

    E |(Y_0 Y_j − EY_0 EY_j) G((X_0 − x)/h) G((X_j − x)/h)|
        ≤ Λ E |(Y_0 Y_j − EY_0 EY_j) G((X_0 − x)/h)|
        = Λ ∫∫∫ |y_0 y_j − EY_0 EY_j| |G((u − x)/h)| g_j(y_0, y_j | u) f_X(u) du dy_0 dy_j
        = h^d Λ ∫∫∫ |y_0 y_j − EY_0 EY_j| |G(u)| g_j(y_0, y_j | x + hu) f_X(x + hu) du dy_0 dy_j
        ≤ h^d Λ Θ sup_x E(|Y_0 Y_j − EY_0 EY_j| | X_0 = x) f_X(x)
        ≤ h^d Λ Θ α_j^{1−2/s} sup_x ( E(|Y_0|^s | X_0 = x) f_X(x) )^{1/s} ( E(|Y_j|^s | X_0 = x) f_X(x) )^{1/s}
        ≤ h^d a Λ Θ Ψ_1 j^{−β[1−2/s]}.    (18)

Equations (16)-(18) combine to show that for j ≥ d + 1,

    |Cov(U_{n0}(x), U_{nj}(x))| ≤ a Λ Θ Ψ_1 j^{−β[1−2/s]} h^d + ( (EY_0)² Ψ_2 + Ψ_1² ) Θ² h^{2d}.    (19)

Third, using Davydov's Lemma, (14) with t = s, and the assumption α_j ≤ a j^{−β},

    |Cov(U_{n0}(x), U_{nj}(x))| ≤ 16 α_j^{1−2/s} ( E|U_{ni}(x)|^s )^{2/s}
        ≤ 16 a j^{−β(1−2/s)} ( Θ Ψ_1 h^d )^{2/s}
        ≤ 16 a j^{−β(1−2/s)} Θ Ψ_1 h^{d[2−β(1−2/s)]},    (20)

where the final inequality holds since 2/s ≥ 2 − β(1 − 2/s) under (5). The bounds (15), (19), and (20) show that

    n Var(Ĝ(x)) = (1/n) E ( Σ_{i=1}^n (U_{ni}(x) − E U_{ni}(x)) )²
        ≤ Var(U_{n0}(x)) + 2 Σ_{j=1}^{d} |Cov(U_{n0}(x), U_{nj}(x))|
            + 2 Σ_{j=d+1}^{[h^{−d}]} |Cov(U_{n0}(x), U_{nj}(x))| + 2 Σ_{j=[h^{−d}]}^{∞} |Cov(U_{n0}(x), U_{nj}(x))|

        ≤ 2ΘΨ_1 (1 + 2d) h^d + 2 Σ_{j=d+1}^{[h^{−d}]} ( aΛΘΨ_1 j^{−β(1−2/s)} h^d + ((EY_0)² Ψ_2 + Ψ_1²) Θ² h^{2d} )
            + 2 Σ_{j=[h^{−d}]}^{∞} 16 a Θ Ψ_1 j^{−β(1−2/s)} h^{d[2−β(1−2/s)]}
        ≤ 2ΘΨ_1 (1 + 2d) h^d + ( 2aΛΘΨ_1 / (β(1−2/s) − 1) ) h^d + 2((EY_0)² Ψ_2 + Ψ_1²) Θ² h^d
            + ( 32aΘΨ_1 / (β(1−2/s) − 1) ) h^d
        ≤ J h^d,

which is (9) with

    J = 2ΘΨ_1 (1 + 2d) + 2aΛΘΨ_1 / (β(1−2/s) − 1) + 2((EY_0)² Ψ_2 + Ψ_1²) Θ² + 32aΘΨ_1 / (β(1−2/s) − 1).    (21)

For the final inequality we have used the fact that for δ > 0 and k ≥ 1,

    Σ_{j=k+1}^{∞} j^{−δ−1} ≤ ∫_{k}^{∞} x^{−δ−1} dx = 1/(δ k^δ).

This completes the proof. ∎

Proof of Theorem 2. We start by introducing some notation. First, define

    V_{ni}(x) = Y_i G((X_i − x)/h) 1( |Y_i| ≤ r_n^{−2/s}/2 )

and

    V̂(x) = (1/n) Σ_{i=1}^n V_{ni}(x).

V_{ni}(x) has been truncated so that |V_{ni}(x) − E V_{ni}(x)| ≤ r_n^{−2/s}. Second, for μ defined in Assumption 2, let

    τ_n = ( h^d r_n / 2 )^{−1/(μ−1)}

and set A = {x : |x| ≤ τ_n}. The region A can be covered with

    N ≤ C h^{−d(1 + dμ/2(μ−1))} ( n / log n )^{dμ/2(μ−1)}    (22)

hyperspheres of the form A_j = {x : |x − x_j| ≤ h^{1+d} r_n}, for some C < ∞, which are centered at points x_j and have radius h^{1+d} r_n. Let A^c = {x : |x| > τ_n}.

We show below that

    sup_{x} |Ĝ(x) − EĜ(x)| = sup_{x} |V̂(x) − EV̂(x)| + O_p(h^d r_n),    (23)

    sup_{x ∈ A} |V̂(x) − EV̂(x)| = max_{1≤j≤N} |V̂(x_j) − EV̂(x_j)| + O_p(h^d r_n),    (24)

    sup_{x ∈ A^c} |V̂(x) − EV̂(x)| = O_p(h^d r_n),    (25)

    max_{1≤j≤N} |V̂(x_j) − EV̂(x_j)| = O_p(h^d r_n).    (26)

Together, these establish sup_x |Ĝ(x) − EĜ(x)| = O_p(h^d r_n), as desired. It remains to show (23)-(26), which we take sequentially.

Proof of (23): First,

    P( (h^d r_n)^{−1} sup_x |Ĝ(x) − V̂(x)| > 0 ) ≤ P( max_{1≤i≤n} |Y_i| > r_n^{−2/s}/2 )
        ≤ n P( |Y_i|^s > 2^{−s} r_n^{−2} )
        ≤ 2^s r_n² E( |Y_i|^s 1(|Y_i|^s > 2^{−s} r_n^{−2}) )
        → 0,

using Markov's inequality and the fact that r_n → 0 as n → ∞. Hence

    sup_x |Ĝ(x) − V̂(x)| = o_p(h^d r_n).    (27)

Second, by a change of variables and using (13),

    sup_x E |Ĝ(x) − V̂(x)| ≤ sup_x ∫∫_{|y| ≥ r_n^{−2/s}/2} |y| |G((u − x)/h)| f_{Y|X}(y|u) f_X(u) du dy
        = h^d sup_x ∫ |G(u)| ( ∫_{|y| ≥ r_n^{−2/s}/2} |y| f_{Y|X}(y | x + hu) dy ) f_X(x + hu) du
        ≤ 2^{s−1} r_n^{2(s−1)/s} h^d Θ sup_x ( ∫ |y|^s f_{Y|X}(y|x) dy ) f_X(x)
        ≤ 2^{s−1} Θ Ψ_1 h^d r_n,    (28)

the final inequality using (6) and the fact that for s > 2 and n large, r_n^{2(s−1)/s} ≤ r_n. Equation (23) follows from (27) and (28).

Proof of (24): The Lipschitz condition (4) and the radius of A_j imply that for all x ∈ A_j,

    |G((X_i − x)/h) − G((X_i − x_j)/h)| ≤ Λ |x − x_j| / h ≤ Λ h^d r_n,

and thus for all x ∈ A_j,

    |V̂(x) − V̂(x_j)| ≤ (1/n) Σ_{i=1}^n |Y_i| |G((X_i − x)/h) − G((X_i − x_j)/h)| ≤ μ̂_n Λ h^d r_n,

where μ̂_n = (1/n) Σ_{i=1}^n |Y_i| = O_p(1). Similarly,

    |E V̂(x) − E V̂(x_j)| ≤ E |V̂(x) − V̂(x_j)| ≤ ( E|Y_i| ) Λ h^d r_n.

Therefore

    sup_{x ∈ A} |V̂(x) − EV̂(x)| = max_{1≤j≤N} sup_{x ∈ A_j} |V̂(x) − EV̂(x)|
        ≤ max_{1≤j≤N} |V̂(x_j) − EV̂(x_j)| + max_{1≤j≤N} sup_{x ∈ A_j} ( |V̂(x) − V̂(x_j)| + |EV̂(x_j) − EV̂(x)| )
        ≤ max_{1≤j≤N} |V̂(x_j) − EV̂(x_j)| + ( μ̂_n + E|Y_i| ) Λ h^d r_n,

which implies (24).

Proof of (25): For η defined in Assumption 1, let δ_n = h (h^d r_n)^{−1/η}. Observe that since η > 1/γ, (1 + γd)/2η < γ, and

    δ_n = O( n^{−γ + (1+γd)/2η} (log n)^{−1/2η} ) = o(1).    (29)

If |X_i| ≤ τ_n − δ_n and x ∈ A^c = {x : |x| > τ_n}, then |X_i − x| ≥ δ_n and

    |G((X_i − x)/h)| ≤ Λ (δ_n / h)^{−η} = Λ h^d r_n

by (3). Hence

    E( |Y_i| sup_{x ∈ A^c} |G((X_i − x)/h)| 1(|X_i| ≤ τ_n − δ_n) ) ≤ Λ h^d r_n E|Y_i|.

Furthermore, using (8),

    E( |Y_i| sup_{x ∈ A^c} |G((X_i − x)/h)| 1(|X_i| > τ_n − δ_n) )
        ≤ Λ E( |Y_i| 1(|X_i| > τ_n − δ_n) )
        = Λ ∫_{|x| > τ_n − δ_n} E(|Y_i| | X_i = x) f_X(x) dx
        ≤ 2 Λ Ψ_3 ∫_{|x| > τ_n − δ_n} |x|^{−μ} dx
        = O( (τ_n − δ_n)^{1−μ} )
        = O( τ_n^{1−μ} )
        = O( h^d r_n ),

the second-to-last equality using (29) and the final equality by the definition of τ_n. Thus

    E sup_{x ∈ A^c} |V̂(x) − EV̂(x)| ≤ 2 E( |Y_i| sup_{x ∈ A^c} |G((X_i − x)/h)| ) = O(h^d r_n),

and (25) follows by an application of Markov's inequality.

Proof of (26): Define

    σ²_m(x) = E ( Σ_{i=1}^m (V_{ni}(x) − E V_{ni}(x)) )².

By Theorem 1, for n sufficiently large, σ²_m(x) ≤ m J h^d (observe that (5) holds under (10)). Since |V_{ni}(x) − EV_{ni}(x)| ≤ r_n^{−2/s}, by Theorem 1.3 of Bosq (1998), for all x, q ∈ (0, 1], and ε > 0,

    P( |V̂(x) − EV̂(x)| > ε ) ≤ 4 exp( − ε² n / ( 32 q σ²_{[1/q]}(x) + 8 q^{−1} ε r_n^{−2/s} ) )
            + 11 n q ( 1 + 4 r_n^{−2/s} / ε )^{1/2} α_{[1/q]}
        ≤ 4 exp( − ε² n / ( 32 J h^d + 8 q^{−1} ε r_n^{−2/s} ) ) + 12 a n q^{1+β} ε^{−1/2} r_n^{−1/s},

where the second inequality holds for n sufficiently large, using the assumption on the mixing coefficients.

Set q = r_n^{1−2/s} and ε = M h^d r_n for M > J. This gives the bound

    P( |V̂(x) − EV̂(x)| > M h^d r_n ) ≤ 4 exp( − M² h^d (log n) / ( 32 J h^d + 8 M h^d ) )
            + 12 a M^{−1/2} n r_n^{(1+β)(1−2/s)−1/2−1/s} h^{−d/2}
        ≤ 4 n^{−M/40} + 12 a M^{−1/2} h^{−d(3/2+β(1−2/s)−3/s)/2} n^{(3/2−β(1−2/s)+3/s)/2} (log n)^λ,

where λ = ( β(1−2/s) + 1/2 − 3/s ) / 2. Hence

    P( max_{1≤j≤N} |V̂(x_j) − EV̂(x_j)| > M h^d r_n ) ≤ N P( |V̂(x_j) − EV̂(x_j)| > M h^d r_n )
        ≤ 4 N n^{−M/40} + 12 a M^{−1/2} N n^{(3/2−β(1−2/s)+3/s)/2} (log n)^λ h^{−d(3/2+β(1−2/s)−3/s)/2}.    (30)

We now show that the right-hand side of (30) is o(1) for M sufficiently large, which implies (26). Indeed, using definition (22),

    N n^{−M/40} ≤ C h^{−d(1+dμ/2(μ−1))} ( n / log n )^{dμ/2(μ−1)} n^{−M/40}
        = O( n^{γd(1+dμ/2(μ−1)) + dμ/2(μ−1) − M/40} ),

which is o(1) when M > 40 γd(1 + dμ/2(μ−1)) + 20 dμ/(μ−1). Furthermore,

    N n^{(3/2−β(1−2/s)+3/s)/2} (log n)^λ h^{−d(3/2+β(1−2/s)−3/s)/2}
        = O( ( n^{dμ/(μ−1)+3/2−β(1−2/s)+3/s} h^{−d(7/2+dμ/(μ−1)+β(1−2/s)−3/s)} )^{1/2} (log n)^{λ−dμ/2(μ−1)} )
        = O( ( n^{γd(7/2+dμ/(μ−1)+β(1−2/s)−3/s) + dμ/(μ−1) + 3/2 − β(1−2/s) + 3/s} )^{1/2} (log n)^{λ−dμ/2(μ−1)} )
        = o(1),

since

    γd( 7/2 + dμ/(μ−1) + β(1−2/s) − 3/s ) + dμ/(μ−1) + 3/2 − β(1−2/s) + 3/s < 0

under (10). Thus (30) is o(1), which establishes (26) as desired. We have shown (23)-(26), which completes the proof. ∎

Proof of Theorem 3. In the notation of Theorem 2, f̂_X(x) = h^{−d} Ĝ(x) with G(x) = K(x) and Y_i = 1. Since k(u) satisfies Assumption 1, so does the product kernel K(x). Equation (12) is (10) after substituting s = ∞ and γ = 1/(d+4). Then by Theorem 2,

    sup_{x ∈ R^d} |f̂_X(x) − E f̂_X(x)| = h^{−d} sup_{x ∈ R^d} |Ĝ(x) − EĜ(x)| = O_p(r_n),

where

    r_n = ( log n / (n h^d) )^{1/2} = O( log^{1/2} n · n^{−2/(d+4)} ).

It is well known that under these assumptions the bias satisfies

    E f̂_X(x) − f_X(x) = O(h²) = O( n^{−2/(d+4)} ) ≤ O(r_n).

Together we obtain sup_{x ∈ R^d} |f̂_X(x) − f_X(x)| = O_p(r_n). ∎

Proof of Theorem 4. Let e_i = Y_i − m(X_i). Then

    m̂(x) − m(x) = Σ_{i=1}^n (Y_i − m(x)) K((X_i − x)/h) / Σ_{i=1}^n K((X_i − x)/h)
        = Σ_{i=1}^n (e_i + m(X_i) − m(x)) K((X_i − x)/h) / Σ_{i=1}^n K((X_i − x)/h)
        = f̂_X(x)^{−1} ( (1/(n h^d)) Σ_{i=1}^n e_i K((X_i − x)/h)
            + (1/(n h^d)) Σ_{i=1}^n (m(X_i) − m(x)) K((X_i − x)/h) ).

Now consider the terms on the right-hand side. First, by a Taylor expansion, Theorem 3, and the definition of A_n,

    sup_{x ∈ A_n} | f̂_X(x)^{−1} − f_X(x)^{−1} | ≤ sup_{x ∈ A_n} f_X(x)^{−2} · sup_{x ∈ A_n} | f̂_X(x) − f_X(x) | ≤ δ_n^{−2} O_p(r_n).

Second, by Theorem 2 and the fact that E(e_i | X_i) = 0,

    sup_{x ∈ A_n} | (1/(n h^d)) Σ_{i=1}^n e_i K((X_i − x)/h) | = O_p(r_n).

Third, by Theorem 2,

    sup_{x ∈ A_n} | (1/(n h^d)) Σ_{i=1}^n (m(X_i) − m(x)) K((X_i − x)/h)
        − h^{−d} E( (m(X_i) − m(x)) K((X_i − x)/h) ) | = O_p(r_n).

Now, letting x* denote a point intermediate between x and x + hu, a change of variables and a second-order Taylor expansion give

    h^{−d} E( (m(X_i) − m(x)) K((X_i − x)/h) ) = h^{−d} ∫ (m(u) − m(x)) K((u − x)/h) f_X(u) du
        = ∫ (m(x + hu) − m(x)) K(u) f_X(x + hu) du
        = h² ∇m(x)′ ( ∫ u u′ K(u) ∇f_X(x*) du ) + (h²/2) ∫ tr( ∇²m(x*) u u′ ) K(u) f_X(x + hu) du,

where the first term uses the symmetry of K, so that ∫ u K(u) f_X(x) du = 0, together with an expansion of f_X(x + hu). Thus for h sufficiently small,

    | h^{−d} E( (m(X_i) − m(x)) K((X_i − x)/h) ) |
        ≤ h² ( ∫ |u|² K(u) du ) ( sup_{|x_1−x_2| ≤ δ} |∇m(x_1)| f_X(x_2) + sup_{|x_1−x_2| ≤ δ} |∇²m(x_1)| f_X(x_2) )
        = O(h²) = O(r_n).

Together, these bounds combine to give sup_{x ∈ A_n} |m̂(x) − m(x)| = O_p(δ_n^{−2} r_n), which completes the proof. ∎


More information

A note on testing the regression functions via nonparametric smoothing

A note on testing the regression functions via nonparametric smoothing 08 Te Canadian Journal of Statistics Vol. 39, No., 20, Pages 08 25 La revue canadienne de statistique A note on testing te regression functions via nonparametric smooting Weixing SONG* and Juan DU Department

More information

Bootstrap prediction intervals for Markov processes

Bootstrap prediction intervals for Markov processes arxiv: arxiv:0000.0000 Bootstrap prediction intervals for Markov processes Li Pan and Dimitris N. Politis Li Pan Department of Matematics University of California San Diego La Jolla, CA 92093-0112, USA

More information

Order of Accuracy. ũ h u Ch p, (1)

Order of Accuracy. ũ h u Ch p, (1) Order of Accuracy 1 Terminology We consider a numerical approximation of an exact value u. Te approximation depends on a small parameter, wic can be for instance te grid size or time step in a numerical

More information

Estimation of boundary and discontinuity points in deconvolution problems

Estimation of boundary and discontinuity points in deconvolution problems Estimation of boundary and discontinuity points in deconvolution problems A. Delaigle 1, and I. Gijbels 2, 1 Department of Matematics, University of California, San Diego, CA 92122 USA 2 Universitair Centrum

More information

Fast Exact Univariate Kernel Density Estimation

Fast Exact Univariate Kernel Density Estimation Fast Exact Univariate Kernel Density Estimation David P. Hofmeyr Department of Statistics and Actuarial Science, Stellenbosc University arxiv:1806.00690v2 [stat.co] 12 Jul 2018 July 13, 2018 Abstract Tis

More information

STAT Homework X - Solutions

STAT Homework X - Solutions STAT-36700 Homework X - Solutions Fall 201 November 12, 201 Tis contains solutions for Homework 4. Please note tat we ave included several additional comments and approaces to te problems to give you better

More information

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households Volume 29, Issue 3 Existence of competitive equilibrium in economies wit multi-member ouseolds Noriisa Sato Graduate Scool of Economics, Waseda University Abstract Tis paper focuses on te existence of

More information

MATH 173: Problem Set 5 Solutions

MATH 173: Problem Set 5 Solutions MATH 173: Problem Set 5 Solutions Problem 1. Let f L 1 and a. Te wole problem is a matter of cange of variables wit integrals. i Ff a ξ = e ix ξ f a xdx = e ix ξ fx adx = e ia+y ξ fydy = e ia ξ = e ia

More information

lecture 26: Richardson extrapolation

lecture 26: Richardson extrapolation 43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)

More information

arxiv: v1 [math.dg] 4 Feb 2015

arxiv: v1 [math.dg] 4 Feb 2015 CENTROID OF TRIANGLES ASSOCIATED WITH A CURVE arxiv:1502.01205v1 [mat.dg] 4 Feb 2015 Dong-Soo Kim and Dong Seo Kim Abstract. Arcimedes sowed tat te area between a parabola and any cord AB on te parabola

More information

AMS 147 Computational Methods and Applications Lecture 09 Copyright by Hongyun Wang, UCSC. Exact value. Effect of round-off error.

AMS 147 Computational Methods and Applications Lecture 09 Copyright by Hongyun Wang, UCSC. Exact value. Effect of round-off error. Lecture 09 Copyrigt by Hongyun Wang, UCSC Recap: Te total error in numerical differentiation fl( f ( x + fl( f ( x E T ( = f ( x Numerical result from a computer Exact value = e + f x+ Discretization error

More information

INTRODUCTION AND MATHEMATICAL CONCEPTS

INTRODUCTION AND MATHEMATICAL CONCEPTS Capter 1 INTRODUCTION ND MTHEMTICL CONCEPTS PREVIEW Tis capter introduces you to te basic matematical tools for doing pysics. You will study units and converting between units, te trigonometric relationsips

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT LIMITS AND DERIVATIVES Te limit of a function is defined as te value of y tat te curve approaces, as x approaces a particular value. Te limit of f (x) as x approaces a is written as f (x) approaces, as

More information

University Mathematics 2

University Mathematics 2 University Matematics 2 1 Differentiability In tis section, we discuss te differentiability of functions. Definition 1.1 Differentiable function). Let f) be a function. We say tat f is differentiable at

More information

Kernel estimates of nonparametric functional autoregression models and their bootstrap approximation

Kernel estimates of nonparametric functional autoregression models and their bootstrap approximation Electronic Journal of Statistics Vol. (217) ISSN: 1935-7524 Kernel estimates of nonparametric functional autoregression models and teir bootstrap approximation Tingyi Zu and Dimitris N. Politis Department

More information

Section 15.6 Directional Derivatives and the Gradient Vector

Section 15.6 Directional Derivatives and the Gradient Vector Section 15.6 Directional Derivatives and te Gradient Vector Finding rates of cange in different directions Recall tat wen we first started considering derivatives of functions of more tan one variable,

More information

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if Computational Aspects of its. Keeping te simple simple. Recall by elementary functions we mean :Polynomials (including linear and quadratic equations) Eponentials Logaritms Trig Functions Rational Functions

More information

HOMEWORK HELP 2 FOR MATH 151

HOMEWORK HELP 2 FOR MATH 151 HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

Symmetry Labeling of Molecular Energies

Symmetry Labeling of Molecular Energies Capter 7. Symmetry Labeling of Molecular Energies Notes: Most of te material presented in tis capter is taken from Bunker and Jensen 1998, Cap. 6, and Bunker and Jensen 2005, Cap. 7. 7.1 Hamiltonian Symmetry

More information

3.4 Worksheet: Proof of the Chain Rule NAME

3.4 Worksheet: Proof of the Chain Rule NAME Mat 1170 3.4 Workseet: Proof of te Cain Rule NAME Te Cain Rule So far we are able to differentiate all types of functions. For example: polynomials, rational, root, and trigonometric functions. We are

More information

MA119-A Applied Calculus for Business Fall Homework 4 Solutions Due 9/29/ :30AM

MA119-A Applied Calculus for Business Fall Homework 4 Solutions Due 9/29/ :30AM MA9-A Applied Calculus for Business 006 Fall Homework Solutions Due 9/9/006 0:0AM. #0 Find te it 5 0 + +.. #8 Find te it. #6 Find te it 5 0 + + = (0) 5 0 (0) + (0) + =.!! r + +. r s r + + = () + 0 () +

More information

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems 5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we

More information

Global Existence of Classical Solutions for a Class Nonlinear Parabolic Equations

Global Existence of Classical Solutions for a Class Nonlinear Parabolic Equations Global Journal of Science Frontier Researc Matematics and Decision Sciences Volume 12 Issue 8 Version 1.0 Type : Double Blind Peer Reviewed International Researc Journal Publiser: Global Journals Inc.

More information

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator. Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative

More information

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY APPLICATIONES MATHEMATICAE 36, (29), pp. 2 Zbigniew Ciesielski (Sopot) Ryszard Zieliński (Warszawa) POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY Abstract. Dvoretzky

More information

Exam 1 Solutions. x(x 2) (x + 1)(x 2) = x

Exam 1 Solutions. x(x 2) (x + 1)(x 2) = x Eam Solutions Question (0%) Consider f() = 2 2 2 2. (a) By calculating relevant its, determine te equations of all vertical asymptotes of te grap of f(). If tere are none, say so. f() = ( 2) ( + )( 2)

More information

Cointegration in functional autoregressive processes

Cointegration in functional autoregressive processes Dipartimento di Scienze Statistice Sezione di Statistica Economica ed Econometria Massimo Franci Paolo Paruolo Cointegration in functional autoregressive processes DSS Empirical Economics and Econometrics

More information

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x)

1 Calculus. 1.1 Gradients and the Derivative. Q f(x+h) f(x) Calculus. Gradients and te Derivative Q f(x+) δy P T δx R f(x) 0 x x+ Let P (x, f(x)) and Q(x+, f(x+)) denote two points on te curve of te function y = f(x) and let R denote te point of intersection of

More information

Higher Derivatives. Differentiable Functions

Higher Derivatives. Differentiable Functions Calculus 1 Lia Vas Higer Derivatives. Differentiable Functions Te second derivative. Te derivative itself can be considered as a function. Te instantaneous rate of cange of tis function is te second derivative.

More information

Financial Econometrics Prof. Massimo Guidolin

Financial Econometrics Prof. Massimo Guidolin CLEFIN A.A. 2010/2011 Financial Econometrics Prof. Massimo Guidolin A Quick Review of Basic Estimation Metods 1. Were te OLS World Ends... Consider two time series 1: = { 1 2 } and 1: = { 1 2 }. At tis

More information

Section 2.7 Derivatives and Rates of Change Part II Section 2.8 The Derivative as a Function. at the point a, to be. = at time t = a is

Section 2.7 Derivatives and Rates of Change Part II Section 2.8 The Derivative as a Function. at the point a, to be. = at time t = a is Mat 180 www.timetodare.com Section.7 Derivatives and Rates of Cange Part II Section.8 Te Derivative as a Function Derivatives ( ) In te previous section we defined te slope of te tangent to a curve wit

More information

ch (for some fixed positive number c) reaching c

ch (for some fixed positive number c) reaching c GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan

More information

= 0 and states ''hence there is a stationary point'' All aspects of the proof dx must be correct (c)

= 0 and states ''hence there is a stationary point'' All aspects of the proof dx must be correct (c) Paper 1: Pure Matematics 1 Mark Sceme 1(a) (i) (ii) d d y 3 1x 4x x M1 A1 d y dx 1.1b 1.1b 36x 48x A1ft 1.1b Substitutes x = into teir dx (3) 3 1 4 Sows d y 0 and states ''ence tere is a stationary point''

More information

LOCAL M-ESTIMATION FOR CONDITIONAL VARIANCE FUNCTION WITH DEPENDENT DATA

LOCAL M-ESTIMATION FOR CONDITIONAL VARIANCE FUNCTION WITH DEPENDENT DATA ROCKY MOUNTAIN JOURNAL OF MATHEMATICS Volume 46, Number, 206 LOCAL M-ESTIMATION FOR CONDITIONAL VARIANCE FUNCTION WITH DEPENDENT DATA YUNYAN WANG AND MINGTIAN TANG ABSTRACT. In tis paper, a local M-estimation

More information

PUBLISHED VERSION. Copyright 2009 Cambridge University Press.

PUBLISHED VERSION. Copyright 2009 Cambridge University Press. PUBLISHED VERSION Gao, Jiti; King, Maxwell L.; Lu, Zudi; josteim, D.. Nonparametric specification testing for nonlinear time series wit nonstationarity, Econometric eory, 2009; 256 Suppl:1869-1892. Copyrigt

More information

MA455 Manifolds Solutions 1 May 2008

MA455 Manifolds Solutions 1 May 2008 MA455 Manifolds Solutions 1 May 2008 1. (i) Given real numbers a < b, find a diffeomorpism (a, b) R. Solution: For example first map (a, b) to (0, π/2) and ten map (0, π/2) diffeomorpically to R using

More information

How to Combine M-estimators to Estimate Quantiles and a Score Function

How to Combine M-estimators to Estimate Quantiles and a Score Function Sankyā : Te Indian Journal of Statistics Special Issue on Quantile Regression and Related Metods 5, Volume 67, Part, pp 77-94 c 5, Indian Statistical Institute How to Combine M-estimators to Estimate Quantiles

More information

Smoothed projections in finite element exterior calculus

Smoothed projections in finite element exterior calculus Smooted projections in finite element exterior calculus Ragnar Winter CMA, University of Oslo Norway based on joint work wit: Douglas N. Arnold, Minnesota, Ricard S. Falk, Rutgers, and Snorre H. Cristiansen,

More information

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to

More information

Math 312 Lecture Notes Modeling

Math 312 Lecture Notes Modeling Mat 3 Lecture Notes Modeling Warren Weckesser Department of Matematics Colgate University 5 7 January 006 Classifying Matematical Models An Example We consider te following scenario. During a storm, a

More information

REVIEW LAB ANSWER KEY

REVIEW LAB ANSWER KEY REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g

More information

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department

More information

MIXED DISCONTINUOUS GALERKIN APPROXIMATION OF THE MAXWELL OPERATOR. SIAM J. Numer. Anal., Vol. 42 (2004), pp

MIXED DISCONTINUOUS GALERKIN APPROXIMATION OF THE MAXWELL OPERATOR. SIAM J. Numer. Anal., Vol. 42 (2004), pp MIXED DISCONTINUOUS GALERIN APPROXIMATION OF THE MAXWELL OPERATOR PAUL HOUSTON, ILARIA PERUGIA, AND DOMINI SCHÖTZAU SIAM J. Numer. Anal., Vol. 4 (004), pp. 434 459 Abstract. We introduce and analyze a

More information

3.1. COMPLEX DERIVATIVES 89

3.1. COMPLEX DERIVATIVES 89 3.1. COMPLEX DERIVATIVES 89 Teorem 3.1.4 (Cain rule) Let f : D D C and g : D C be olomorpic functions on te domains D and D respectively. Suppose tat bot f and g are C 1 -smoot. 10 Ten (g f) (z) g (f(z))

More information

Estimation in threshold autoregressive models with a stationary and a unit root regime

Estimation in threshold autoregressive models with a stationary and a unit root regime ISSN 440-77X Australia Department of Econometrics and Business Statistics ttp://www.buseco.monas.edu.au/depts/ebs/pubs/wpapers/ Estimation in tresold autoregressive models wit a stationary and a unit root

More information

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES (Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,

More information

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006

Math 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 Mat 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 f(x+) f(x) 10 1. For f(x) = x 2 + 2x 5, find ))))))))) and simplify completely. NOTE: **f(x+) is NOT f(x)+! f(x+) f(x) (x+) 2 + 2(x+) 5 ( x 2

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

Deconvolution problems in density estimation

Deconvolution problems in density estimation Deconvolution problems in density estimation Dissertation zur Erlangung des Doktorgrades Dr. rer. nat. der Fakultät für Matematik und Wirtscaftswissenscaften der Universität Ulm vorgelegt von Cristian

More information

1. Introduction. Consider a semilinear parabolic equation in the form

1. Introduction. Consider a semilinear parabolic equation in the form A POSTERIORI ERROR ESTIMATION FOR PARABOLIC PROBLEMS USING ELLIPTIC RECONSTRUCTIONS. I: BACKWARD-EULER AND CRANK-NICOLSON METHODS NATALIA KOPTEVA AND TORSTEN LINSS Abstract. A semilinear second-order parabolic

More information

An approximation method using approximate approximations

An approximation method using approximate approximations Applicable Analysis: An International Journal Vol. 00, No. 00, September 2005, 1 13 An approximation metod using approximate approximations FRANK MÜLLER and WERNER VARNHORN, University of Kassel, Germany,

More information

Chapter 2 Limits and Continuity

Chapter 2 Limits and Continuity 4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(

More information