Nonparametric density estimation for linear processes with infinite variance
Ann Inst Stat Math (2009) 61

Nonparametric density estimation for linear processes with infinite variance

Toshio Honda

Received: 1 February 2006 / Revised: 9 February 2007 / Published online: 21 September 2007
The Institute of Statistical Mathematics, Tokyo 2007

Abstract  We consider nonparametric estimation of marginal density functions of linear processes by using kernel density estimators. We assume that the innovation processes are i.i.d. and have infinite variance. We present the asymptotic distributions of the kernel density estimators with the order of bandwidths fixed as h = c n^{-1/5}, where n is the sample size. The asymptotic distributions depend on both the coefficients of linear processes and the tail behavior of the innovations. In some cases, the kernel estimators have the same asymptotic distributions as for i.i.d. observations. In other cases, the normalized kernel density estimators converge in distribution to stable distributions. A simulation study is also carried out to examine small sample properties.

Keywords  Linear processes · Kernel density estimator · Domain of attraction · Stable distribution · Noncentral limit theorem · Martingale central limit theorem

1 Introduction

Let {X_i}_{i=1}^∞ be a linear process defined by

    X_i = Σ_{j=0}^∞ b_j ε_{i-j},  i = 1, 2, ...,   (1)

where {ε_i}_{i=-∞}^∞ is an i.i.d. process, b_0 = 1, b_j ∼ c_0 j^{-β}, j = 1, 2, ..., and c_0 is a positive constant. Here a_j ∼ a'_j means that a_j/a'_j → 1 as j → ∞. The marginal density

T. Honda (B)
Graduate School of Economics, Hitotsubashi University, 2-1 Naka, Kunitachi, Tokyo, Japan
e-mail: honda@econ.hit-u.ac.jp
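The process (1) can be simulated directly. The sketch below is mine, not code from the paper: it draws symmetric α-stable innovations S_α(1, 0, 0) by the standard Chambers–Mallows–Stuck method and truncates the coefficient sequence b_j = c_0 j^{-β} at a finite lag (the function names and the truncation point are my own choices).

```python
import numpy as np

def rstable_sym(alpha, size, rng):
    """Symmetric alpha-stable draws S_alpha(1,0,0) via Chambers-Mallows-Stuck."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def linear_process(n, alpha, beta, c0=1.0, lags=1000, seed=0):
    """X_i = sum_j b_j eps_{i-j} with b_0 = 1, b_j = c0 * j**(-beta), truncated at `lags`."""
    rng = np.random.default_rng(seed)
    b = np.concatenate(([1.0], c0 * np.arange(1, lags) ** (-beta)))
    eps = rstable_sym(alpha, n + lags - 1, rng)
    # each X_i is a moving average of the innovations; convolve computes all sums at once
    return np.convolve(eps, b, mode="valid")

x = linear_process(500, alpha=1.5, beta=1.2)
```

With 1/α < β the truncation error is small for moderate lag cutoffs; the heavy tails of the innovations are preserved exactly since a finite linear combination of stable variables is again stable.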
function of {X_i}_{i=1}^∞ is denoted by f(x). We will specify the conditions on {ε_i} and β later in this section. In this paper, we estimate the marginal density function f(x) by kernel density estimators and present the asymptotic properties when ε_1 has infinite variance. The asymptotic distributions depend on both the tail behavior of ε_1 and β.

A lot of authors have examined the asymptotic properties of kernel density estimators of marginal density functions of dependent observations. Until about the early 1990s, most of them considered density estimation for mixing processes, imposing assumptions on joint density functions and the order of mixing coefficients. See Fan and Yao (2003) for a review of the results for strongly mixing processes. However, it is difficult to ensure that the order of mixing coefficients satisfies the assumptions unless the coefficients b_j decay sufficiently fast. See Doukhan (1994) for sufficient conditions for linear processes to be strongly mixing. Therefore, since the late 1980s or the early 1990s, attention has been focused on the asymptotic properties of kernel density estimators for subordinated Gaussian processes {G(X_i)}, where {X_i} is a stationary Gaussian process, and for linear processes, especially those with long memory. Note that Hall and Hart (1990) pointed out that the asymptotic properties depend on the degree of long memory when we estimate the marginal density functions of linear processes with long memory. This is true of subordinated Gaussian processes with long memory, too.

As for subordinated Gaussian processes with long memory, there are, for example, Cheng and Robinson (1991), Csörgő and Mielniczuk (1995), and Ho (1996). See also the references therein. They examined the asymptotic properties of kernel density estimators by exploiting Hermite expansions. Ho (1996) proved that kernel density estimators behave asymptotically in the same way as for i.i.d.
observations when the degree of long memory does not exceed a certain level, or, we can say, when the long memory is weak. He proved it by evaluating the moments. Hidalgo (1997) studied the asymptotic properties of kernel density estimators for linear processes by appealing to Appell expansions. Except in Gaussian cases, very restrictive conditions are necessary to verify the validity of Appell expansions, and the paper does not mention those conditions at all. In Gaussian cases, Appell expansions coincide with Hermite expansions. See Sect. 6 of Giraitis and Surgailis (1986) and Surgailis (2004) for the conditions. Theoretical studies for linear processes with long memory have developed since the seminal papers Ho and Hsing (1996, 1997). They applied the martingale decomposition method to examine the properties of subordinated linear processes with long memory. See Koul and Surgailis (2002) for both Hermite expansions in Gaussian cases and the martingale approach for linear processes of Ho and Hsing (1996, 1997).

Recently, several authors have considered the asymptotic properties of kernel density estimators for linear processes with short memory or long memory by using the martingale approach initiated by Ho and Hsing (1996, 1997), for example, Honda (2000), Wu and Mielniczuk (2002), Bryk and Mielniczuk (2005), and Schick and Wefelmeyer (2006). See also the references therein. Wu and Mielniczuk (2002), in particular, fully examined the asymptotic properties of kernel density estimators for linear processes. However, all of them assumed that ε_1 has finite variance and that the distribution of ε_1 satisfies some restrictive assumptions, for example, the existence of a bounded and
Lipschitz continuous density function. Under those conditions, Wu and Mielniczuk (2002) proved that kernel density estimators behave asymptotically in the same way as for i.i.d. observations in the cases of short memory and weak long memory, and that they behave asymptotically in the same way as the sample mean when the degree of long memory exceeds a certain level. It is well known that standardized sample means of linear processes with long memory converge in distribution to the standard normal distribution when ε_1 has finite variance; the standardization is different from that for linear processes with short memory. See Taniguchi and Kakizawa (2000). Note that Hallin and Tran (1996) considered kernel density estimation for linear processes with short memory by appealing to truncation arguments. They assumed that ε_1 has finite variance and that β > 4.

From a theoretical point of view, the marginal density function f(x) of {X_i} exists without finite variance or a bounded density function of ε_1. It is strange that theoretical studies of kernel density estimators are limited to the cases where ε_1 has finite variance and a bounded density function. Besides, a lot of attention has recently been paid to heavy-tailed time series data. Therefore it is important to investigate the asymptotic properties of kernel density estimators in the cases where ε_1 does not have finite variance.

We examine kernel density estimators in the cases where ε_1 does not have finite variance by exploiting the results of Hsing (1999), Koul and Surgailis (2001), Surgailis (2002), and Pipiras and Taqqu (2003). We treat the asymptotic properties in a comprehensive way. We briefly mention the results of the above papers later in this section. In addition, C2 below also allows unbounded or discontinuous density functions of ε_1. When ε_1 does not have finite variance, the asymptotic distributions depend on both the tail behavior of ε_1 and β.
When the effect of the heavy tail of ε_1 and of the dependence among observations does not appear, the asymptotic distributions are the same as for i.i.d. observations. When it does appear, the asymptotic distributions are stable distributions. Hereafter we shall call this effect that of α and β. In order to see the differences between asymptotic properties and small sample properties, we carried out a simulation study; the results are given in Sect. 3.

We describe the conditions on {ε_i}. Let G(x) denote the distribution function of ε_1.

C1: Suppose that 0 < α < 2. Then

    lim_{x→∞} x^α G(−x) = lim_{x→∞} x^α (1 − G(x)) = c_1 > 0.

In addition, E{ε_1} = 0 when 1 < α < 2.

C2: Letting φ(θ) denote the characteristic function of ε_1, we have

    |φ(θ)| ≤ C (1 + |θ|)^{−δ}  for some positive δ.

C stands for generic positive constants whose values change from place to place and are independent of the sample size n. C1 implies

    E{|ε_1|^r} < ∞ for 0 < r < α,  and  E{|ε_1|^r} = ∞ for r ≥ α,   (2)

and that the distribution of ε_1 belongs to the domain of attraction of a symmetric α-stable distribution. The characteristic function of the α-stable distribution S_α(σ, η, μ) has the form of
    exp{ −σ^α |θ|^α (1 − iη sign(θ) tan(πα/2)) + iμθ },  α ≠ 1,
    exp{ −σ|θ| (1 + 2iη sign(θ) log|θ| / π) + iμθ },  α = 1,   (3)

where i stands for the imaginary unit. It is called symmetric if η = μ = 0. See Samorodnitsky and Taqqu (1994) for more details about stable distributions. C2 is necessary for the existence of, and for regularity conditions on, both f(x) and the joint density functions of some random variables, because C2 guarantees the desirable properties of the characteristic functions of those random variables. See P1–P3 in Sect. 2.

We fix x_0 and estimate f(x_0) by the kernel density estimator f̂(x_0) defined below:

    f̂(x_0) = (1/(nh)) Σ_{i=1}^n K((X_i − x_0)/h),   (4)

where h is a bandwidth and K(u) is a kernel function. We take h = c_2 n^{−1/5} for some positive c_2, partly for simplicity of presentation and partly because this is the optimal order when f(x) is twice differentiable at x_0 and the effect of α and β does not appear. A comment on the effect of bandwidths is given in Sect. 2. We assume that K(u) is a symmetric bounded density function with compact support.

We examine the asymptotic properties of f̂(x_0) − E{f̂(x_0)} in the following three cases:

Case 1: 1 < α < 2 and 1/α < β < 1.
Case 2: 1 < α < 2 and 1 < β < 2/α.
Case 3: 0 < α < 2 and 2/α < β.

When 1 < α < 2, we have by the von Bahr and Esseen inequality that

    E{|X_1|^r} ≤ C Σ_{j=1}^∞ j^{−βr} < ∞

for any r such that βr > 1 and 1 < r < α. Note that X_1 has infinite variance. X_1 is well defined in Case 3, too; see the proof of Theorem 2.2 of Pipiras and Taqqu (2003). Some authors say that {X_i} has long memory in Case 1.

Koul and Surgailis (2001) deals with Case 1; the weak convergence of empirical distribution functions is proved there, and the asymptotics of M-estimators of linear regression models are also examined. Surgailis (2002) deals with Case 2. The asymptotic properties of empirical distribution functions of X_i and of partial sums of H(X_i), where H(x) is any bounded function, are given there.
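Estimator (4) is straightforward to implement. The sketch below is mine (not code from the paper); it assumes the Epanechnikov kernel that the simulation study of Sect. 3 uses, and the bandwidth constant c_2 = 1 is an arbitrary illustration:

```python
import numpy as np

def epanechnikov(u):
    """Symmetric bounded density kernel with compact support [-1, 1]."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def kde_at(x0, x, h):
    """Kernel density estimate (1/(n h)) * sum_i K((X_i - x0)/h) at the point x0."""
    return epanechnikov((x - x0) / h).sum() / (len(x) * h)

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)      # illustration with i.i.d. normal data
n = len(x)
h = 1.0 * n ** (-0.2)              # bandwidth of order n^{-1/5}, with c_2 = 1
est = kde_at(0.0, x, h)            # should be near the N(0,1) density at 0, about 0.399
```

For i.i.d. data this recovers the familiar (nh)^{1/2}-consistent estimate; the point of the paper is what happens to the same statistic when x comes from (1) with stable innovations.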
Hsing (1999) and Pipiras and Taqqu (2003) examined the asymptotic properties of partial sums of H(X_i), where H(x) is any bounded function, in Case 3. Those papers are crucial to our results, and they are also based on the martingale decomposition method of Ho and Hsing (1996, 1997).

We would have to obtain further theoretical results to deal with the cases of infinite variance other than Cases 1–3 above, namely the cases where 0 < α < 1 and 1/α < β < 2/α. The approaches of Koul and Surgailis (2001) and Surgailis (2002)
crucially depend on the assumption α > 1. We need another method to deal with those cases, and they are a subject of future research.

Peng and Yao (2004) applied Hsing (1999), Koul and Surgailis (2001), and Surgailis (2002) to nonparametric estimation of trend functions, i.e., nonparametric regression with fixed design. There is a close similarity between nonparametric regression with random design and kernel density estimation. However, nonparametric estimation of trend functions and kernel density estimation are different problems. In Case 1, the effect of the heavy tail of ε_1 and of the dependence among observations, which we call the effect of α and β, appears when β < 1/α + 2/5; in Case 2, it appears when αβ < 5/3. We see no effect of α and β in Case 3. We repeat that the asymptotic distributions are the same as for i.i.d. observations when the effect of α and β does not appear. In Peng and Yao (2004), the effect of α and β always appears.

The paper is organized as follows. In Sect. 2, we decompose f̂(x_0) − E{f̂(x_0)} into two components and give a heuristic argument on the asymptotic properties of f̂(x_0) − E{f̂(x_0)}. Then the main theorems of this paper are presented. We state the results of a simulation study in Sect. 3. The main theorems are proved in Sect. 4. The proofs of technical lemmas are confined to Sect. 5.

2 The asymptotic distributions

We state the main results of this paper in Theorems 1–3. First we give definitions and notations. Then necessary properties of density functions are described. We present the asymptotic distributions of kernel density estimators after a heuristic argument. The proofs of the theorems are deferred to Sect. 4.

Let →_d and →_p stand for convergence in distribution and convergence in probability, respectively. We omit "n → ∞" and "a.s." for brevity. We rewrite X_i as

    X_i = X̃_{i,j} + X_{i,j},   (5)

where

    X̃_{i,j} = Σ_{l=0}^{j−1} b_l ε_{i−l}  and  X_{i,j} = Σ_{l=j}^∞ b_l ε_{i−l}.

We denote the distribution functions of X̃_{i,j} and X_{i,j} by F̃_j(x) and F_j(x), respectively. C2 and Lemma 1 of Giraitis et al.
(1996) imply the existence of the density functions, and we denote them by f̃_j(x) and f_j(x), respectively.

We state necessary properties of density functions, which can be verified by using C1 and C2. P1 and P2 are derived by following the proofs of Lemmas 1 and 2 of Giraitis et al. (1996). P3 is part of Lemma 4.2 of Koul and Surgailis (2001). There exists a positive integer s_1 for which P1, P2, and P3 hold.

P1: f̃_s(x) is twice continuously differentiable, and f̃_s(x) and all its derivatives up to the second order are uniformly bounded for s ≥ s_1. Note that we can take s = ∞.

P2: (X̃_{1,s_1}, X̃_{i,s_1+i−1}) has a bounded joint density function for any i ≥ 2.
P3: When 1 < α < 2, 1 < r < α, and rβ > 1, there exists a constant C such that

    |f(x) − f̃_s(x)| ≤ C s^{1/r − β}  uniformly in x for any s ≥ s_1.

Note that C depends on α, β, and r. When we use P3 in the proofs of Lemmas 2 and 4, r is specified and satisfies the conditions in P3.

Before we state Theorems 1–3, we give a heuristic argument on the asymptotics of f̂(x_0). We need to decompose f̂(x_0) − E{f̂(x_0)} into two components for the argument. Let s_2 be a large positive integer and put s_0 = s_1 + s_2. We will be more specific about s_2 in the proofs of Theorems 1–3. We write S_i for the σ-field generated by {ε_j | j ≤ i}. Then

    f̂(x_0) − E{f̂(x_0)} = S_a + S_b,   (6)

where

    S_a = (1/n) Σ_{i=1}^n [ h^{−1} K((X_i − x_0)/h) − E{ h^{−1} K((X_i − x_0)/h) | S_{i−s_0} } ]
        = (1/n) Σ_{i=1}^n [ h^{−1} K((X_i − x_0)/h) − ∫ K(ξ) f̃_{s_0}(x_0 + hξ − X_{i,s_0}) dξ ],   (7)

    S_b = (1/n) Σ_{i=1}^n [ E{ h^{−1} K((X_i − x_0)/h) | S_{i−s_0} } − E{ h^{−1} K((X_i − x_0)/h) } ]
        = (1/n) Σ_{i=1}^n [ ∫ K(ξ) f̃_{s_0}(x_0 + hξ − X_{i,s_0}) dξ − E{ h^{−1} K((X_i − x_0)/h) } ].   (8)

The domain of integration is (−∞, ∞) when it is omitted. Remember that X̃_{i,j} = Σ_{l=0}^{j−1} b_l ε_{i−l} and that f̃_j(x) is the density function of X̃_{i,j}. Similar expressions can be found in (3)–(5) of Wu and Mielniczuk (2002). In Wu and Mielniczuk (2002) and Bryk and Mielniczuk (2005), s_0 = 1 and a Lipschitz continuous density function of ε_1 is assumed; they applied the martingale central limit theorem to S_a. A technique is devised in this paper to avoid such assumptions on ε_1. The asymptotic properties of S_a are examined in Lemma 1 below. We investigate the asymptotic properties of S_b in Sect. 4 by using the results of Hsing (1999), Koul and Surgailis (2001), Surgailis (2002), Pipiras and Taqqu (2003), and Surgailis (2004).

The asymptotic distribution of (6) depends on which of S_a and S_b is stochastically larger. We put h = c_2 n^{−γ} (γ > 0) only in this heuristic argument. In either case, we have S_a = O_p((nh)^{−1/2}) and

    E{f̂(x_0)} − f(x_0) ∼ (h²/2) f''(x_0) ν,   (9)
where ν = ∫ u² K(u) du. The asymptotic properties of S_b are independent of h and depend only on α and β. In addition we have

    S_b = O_p(n^{1/α − β}) in Case 1,
    S_b = O_p(n^{−1 + 1/(αβ)}) in Case 2,
    S_b = O_p(n^{−1/2}) in Case 3.

The stochastic order is exact in all the above expressions. Then (9) and a simple calculation imply that we cannot improve the rate of convergence of f̂(x_0) by choosing γ other than 1/5. If there were three parameters, α, β, and γ, things would be notationally complicated and the paper would be longer. Thus we present the theorems with γ = 1/5. Note that there is no theoretical difficulty in dealing with three parameters α, β, and γ.

When S_b is stochastically larger than S_a, the effect of α and β appears in the asymptotic properties of f̂(x_0). Since the asymptotic properties of S_b are independent of h and depend only on α and β, there is then no optimal bandwidth, and we can choose larger bandwidths without affecting the asymptotic properties of f̂(x_0). When S_a and S_b have the same stochastic order, we can say that the effect of α and β still appears. However, we have no result on the joint distribution of S_a and S_b, and we do not refer to this case in this paper. When E{|ε_1|^{2+δ}} < ∞ for some positive δ and h = c_2 n^{−1/5}, the effect of dependence among observations does not appear in the case β > 9/10, in contrast to Case 2 below.

Here we state the main results of this paper.

Case 1  When β is smaller than 1 ∧ (1/α + 2/5), the effect of α and β appears in the asymptotic properties. When α is smaller than 5/3, the effect of long memory always appears.

Theorem 1  Suppose that C1 and C2 hold and that 1 < α < 2 and 1/α < β < 1. Then we have

    1/α − β < −2/5:  (nh)^{1/2} ( f̂(x_0) − E{f̂(x_0)} ) →_d N(0, κ f(x_0)),
    1/α − β > −2/5:  n^{β − 1/α} ( f̂(x_0) − E{f̂(x_0)} ) →_d −f'(x_0) c_A Z,

where κ = ∫ K²(u) du,

    c_A = c_0 ( (2 c_1 Γ(2 − α) cos(απ/2) / (1 − α)) ∫_ℝ ( ∫_0^1 (t − s)_+^{−β} dt )^α ds )^{1/α},

x_+ = x ∨ 0, and Z is a random variable whose distribution is S_α(1, 0, 0).
The asymptotic joint distributions of the kernel density estimators at different points are independent in the case 1/α − β < −2/5 and degenerate in the case 1/α − β > −2/5, respectively.

Case 2  When 1 < β < 5/(3α), the effect of α and β appears in the asymptotic properties. When α is larger than 5/3, the effect does not appear.
Theorem 2  Suppose that C1 and C2 hold and that 1 < α < 2 and 1 < β < 2/α. Then we have

    1/(αβ) < 3/5:  (nh)^{1/2} ( f̂(x_0) − E{f̂(x_0)} ) →_d N(0, κ f(x_0)),
    1/(αβ) > 3/5:  n^{1 − 1/(αβ)} ( f̂(x_0) − E{f̂(x_0)} ) →_d ( c_1 c_0^α σ_{αβ} / (β(αβ − 1)) )^{1/(αβ)} ( c_f^+ L^+ + c_f^− L^− ),

where L^+ and L^− are mutually independent random variables whose distributions are S_{αβ}(1, 1, 0),

    c_f^+ = ∫_0^∞ ( f(x_0 − t) − f(x_0) ) t^{−1−1/β} dt,
    c_f^− = ∫_0^∞ ( f(x_0 + t) − f(x_0) ) t^{−1−1/β} dt,
    σ_{αβ} = Γ(2 − αβ) |cos(παβ/2)| / (αβ − 1).

The asymptotic joint distributions of the kernel density estimators at different points are independent in the case 1/(αβ) < 3/5 and degenerate in the case 1/(αβ) > 3/5, respectively.

Case 3  In this case, we see no effect of α and β in the asymptotic properties.

Theorem 3  Suppose that C1 and C2 hold and that 0 < α < 2 and 2/α < β. Then we have

    (nh)^{1/2} ( f̂(x_0) − E{f̂(x_0)} ) →_d N(0, κ f(x_0)).

The asymptotic joint distributions of the kernel density estimators at different points are independent.

When the effect of α and β does not appear, we can define the asymptotically optimal bandwidth in the same way as for i.i.d. observations. By combining Theorems 1 and 2, we see that the effect of α and β appears in the following cases:

    1 < α < 5/3:  1/α < β < 1 in Case 1, and 1 < β < 5/(3α) in Case 2;
    5/3 < α < 2:  1/α < β < 1/α + 2/5 in Case 1.

Then we can only say that larger bandwidths will improve small sample properties, and it may be hard to conduct statistical inference. The same problem happens for linear processes with finite variance and long memory. However, it is important to know the statistical properties of such often-used estimators as kernel density estimators.

3 Simulation study

We carried out a simulation study to examine the small sample properties. The results are presented in Tables 1, 2, and 3 below. In this simulation study, ε_1 follows a standard symmetric α-stable distribution S_α(1, 0, 0). We estimate f(x_0) by using the Epanechnikov kernel.
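The case boundaries in Theorems 1–3 (with h of order n^{−1/5}) can be collected into a small helper. This sketch and its naming are mine; it only restates the conditions above:

```python
def regime(alpha, beta):
    """Classify (alpha, beta) per Theorems 1-3 (bandwidth of order n^(-1/5)).

    Returns (case, effect), where `case` is 1, 2, or 3 and `effect` says
    whether the effect of alpha and beta appears in the asymptotic distribution.
    """
    if 1 < alpha < 2 and 1 / alpha < beta < 1:
        # Case 1: effect iff 1/alpha - beta > -2/5, i.e. beta < 1/alpha + 2/5
        return 1, beta < 1 / alpha + 2 / 5
    if 1 < alpha < 2 and 1 < beta < 2 / alpha:
        # Case 2: effect iff 1/(alpha*beta) > 3/5, i.e. alpha*beta < 5/3
        return 2, alpha * beta < 5 / 3
    if 0 < alpha < 2 and beta > 2 / alpha:
        # Case 3: never any effect; (nh)^(1/2)-normal limit
        return 3, False
    raise ValueError("(alpha, beta) outside Cases 1-3")

# The four (alpha, beta) pairs for which the theorems predict an effect:
for a, b in [(1.2, 0.9), (1.5, 0.9), (1.8, 0.9), (1.2, 1.3)]:
    print(a, b, regime(a, b))
```

For the parameter grid of the simulation study below, the helper reproduces exactly the predictions drawn from Theorems 1 and 2 there.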
We took α = 1.2, 1.5, 1.8 and β = 0.9, 1.3, 1.7, 2.1, ∞, with

    b_j = c_0 (j + 1)^{−β},  0 ≤ j ≤ 999,  and  b_j = 0,  1000 ≤ j.

By β = ∞ we mean i.i.d. observations, and c_0 is chosen so that X_1 also follows S_α(1, 0, 0). We tried h = 0.2, 0.3, 0.4 to see the effect of bandwidths. We conducted the simulation study by using R 2.3.1 and the fBasics package. The sample size is 200, and each entry of Tables 1, 2, and 3 is based on 2,000 repetitions. In the tables, mean, var, and mse stand for the sample means, the sample variances, and the sample mean squared errors over the repetitions, respectively. The values of β are on the left margins of the tables. The true values of f(x_0) were computed at x_0 = 0.0, 0.75, 1.5 for each of α = 1.2, 1.5, 1.8 from the S_α(1, 0, 0) density.

Theorem 1 tells us that the effect of α and β appears in the asymptotic properties in the cases (α, β) = (1.2, 0.9), (1.5, 0.9), (1.8, 0.9). Theorem 2 tells us that the effect appears in the case (α, β) = (1.2, 1.3). We obtained the following implications from Tables 1, 2, and 3:

1. The variance is more serious than the bias for each pair (α, β). Thus larger bandwidths will be better.
2. The effect of α and β is seen in the cases (α, β) = (1.2, 0.9), (1.2, 1.3), (1.5, 0.9), (1.8, 0.9). This conforms with Theorems 1 and 2. The effect is especially remarkable in the case (α, β) = (1.2, 0.9). Even when the effect is seen, larger bandwidths seem to perform better, contrary to Theorems 1 and 2.
3. The effect of α and β rapidly disappears as β becomes larger.

4 Proofs of theorems

We prove Theorems 1–3 in this section. The proofs of all the lemmas are postponed to Sect. 5. We begin with Lemma 1, which deals with S_a in (6) and (7). We reproduce S_a here for reference:

    S_a = (1/n) Σ_{i=1}^n [ h^{−1} K((X_i − x_0)/h) − ∫ K(ξ) f̃_{s_0}(x_0 + hξ − X_{i,s_0}) dξ ].
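One cell of the study design above can be condensed as follows. This is a sketch of mine in Python rather than the paper's R/fBasics setup, with a reduced repetition count and invented helper names; the stable sampler and kernel estimator are re-declared so the block is self-contained:

```python
import numpy as np

def rstable_sym(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler for S_alpha(1, 0, 0)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def kde_at(x0, x, h):
    # Epanechnikov kernel density estimate at x0
    u = (x - x0) / h
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0).sum() / (len(x) * h)

def one_cell(alpha, beta, h, x0, n=200, reps=200, lags=1000, seed=0):
    """Sample mean and variance of f-hat(x0) over `reps` simulated series of length n."""
    rng = np.random.default_rng(seed)
    b = np.arange(1, lags + 1) ** (-beta)      # b_j = c0 * (j+1)^(-beta), truncated
    b /= (b ** alpha).sum() ** (1 / alpha)     # choose c0 so X_1 is again S_alpha(1,0,0)
    est = np.empty(reps)
    for r in range(reps):
        eps = rstable_sym(alpha, n + lags - 1, rng)
        x = np.convolve(eps, b, mode="valid")  # one series of length n from (1)
        est[r] = kde_at(x0, x, h)
    return est.mean(), est.var()

m, v = one_cell(alpha=1.5, beta=2.1, h=0.3, x0=0.0)
```

For (α, β) = (1.5, 2.1) the pair falls in Case 3, so the sample mean of the estimates should sit near the S_1.5(1,0,0) density at 0, which is Γ(1 + 1/α)/π ≈ 0.287.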
Table 1  α = 1.2 (sample mean, var, and mse of f̂(x_0) at x_0 = 0.0, 0.75, 1.5 for h = 0.2, 0.3, 0.4 and β = 0.9, 1.3, 1.7, 2.1, ∞; numerical entries omitted)

Table 2  α = 1.5 (same layout as Table 1; numerical entries omitted)
Table 3  α = 1.8 (same layout as Table 1; numerical entries omitted)

Remember that s_0 = s_1 + s_2 and that s_1 is fixed in P1–P3. In the proofs of the theorems, we take a large s_2 and temporarily fix it; then we let n tend to ∞. Thus we can take n = k s_0 for simplicity of presentation without affecting the asymptotic properties. Since the summands in S_a do not form martingale differences, we cannot apply the martingale central limit theorem directly, and we need Lemma 1. We further decompose S_a into four components:

    S_a = Σ_{l=1}^k N_{1l} + Σ_{l=1}^k N_{2l} + Σ_{l=1}^k N_{3l} + Σ_{l=1}^k N_{4l},   (10)

where

    N_{1l} = (1/n) Σ_{i=1+(l−1)s_0}^{s_1+(l−1)s_0} [ h^{−1} K((X_i − x_0)/h) − E{ h^{−1} K((X_i − x_0)/h) | S_{1+(l−1)s_0−s_1} } ],   (11)

    N_{2l} = (1/n) Σ_{i=1+(l−1)s_0}^{s_1+(l−1)s_0} [ E{ h^{−1} K((X_i − x_0)/h) | S_{1+(l−1)s_0−s_1} } − E{ h^{−1} K((X_i − x_0)/h) | S_{i−s_0} } ],   (12)
    N_{3l} = (1/n) Σ_{i=s_1+1+(l−1)s_0}^{l s_0} [ h^{−1} K((X_i − x_0)/h) − E{ h^{−1} K((X_i − x_0)/h) | S_{1+(l−1)s_0} } ],   (13)

    N_{4l} = (1/n) Σ_{i=s_1+1+(l−1)s_0}^{l s_0} [ E{ h^{−1} K((X_i − x_0)/h) | S_{1+(l−1)s_0} } − E{ h^{−1} K((X_i − x_0)/h) | S_{i−s_0} } ].   (14)

N_{1l} and N_{2l} consist of s_1 terms each, and N_{3l} and N_{4l} consist of s_2 terms each. N_{1l}, N_{2l}, N_{3l}, and N_{4l} are S_{s_1+(l−1)s_0}-, S_{1+(l−1)s_0}-, S_{1+ls_0}-, and S_{1−s_2+ls_0}-measurable, respectively. In addition,

    E{N_{1l} | S_{s_1+(l−2)s_0}} = E{N_{2l} | S_{1+(l−2)s_0}} = E{N_{3l} | S_{1+(l−1)s_0}} = E{N_{4l} | S_{1−s_2+(l−1)s_0}} = 0.   (15)

Lemma 1  Suppose that C1 and C2 hold. Then we have

    Var( Σ_{l=1}^k N_{2l} ) = O(1/n),  Var( Σ_{l=1}^k N_{4l} ) = O(1/n),   (16)

    (nh)^{1/2} Σ_{l=1}^k N_{1l} →_d N(0, (s_1/s_0) κ f(x_0)),  (nh)^{1/2} Σ_{l=1}^k N_{3l} →_d N(0, (s_2/s_0) κ f(x_0)).   (17)

Remark 1  Take an arbitrary positive integer m. Then the proof of Lemma 1 in Sect. 5 and standard arguments imply that the Σ_{l=1}^k N_{3l} for x_{01}, ..., x_{0m} are asymptotically mutually independent if x_{0k} ≠ x_{0l} (k ≠ l).

We go on to the proofs of Theorems 1–3. We investigate the asymptotic properties of S_b in (8) for Cases 1–3 in Propositions 1–3, respectively. Then, by combining Lemma 1 and Propositions 1–3, we derive the asymptotic distributions of f̂(x_0) − E{f̂(x_0)}.

Case 1  The proof of Proposition 1 below is based on the arguments in Koul and Surgailis (2001). In particular, the proof of Lemma 2 is a modified and simplified version of theirs.

Proposition 1  Suppose that C1 and C2 hold and that 1 < α < 2 and 1/α < β < 1. Then we have

    n^{β − 1/α} S_b →_d −f'(x_0) c_A Z.

Remark 2  The proof of Proposition 1, given after that of Theorem 1, implies that Z in Proposition 1 comes from the sample mean of X_1, ..., X_n. Therefore the two limits of n^{β−1/α} S_b for any pair (x_{01}, x_{02}) are asymptotically degenerate.
We prove Theorem 1.

Proof of Theorem 1  First let 1/α − β + 2/5 be smaller than 0. Then Proposition 1 implies that (nh)^{1/2} S_b = o_p(1). By taking a sufficiently large s_2 in Lemma 1, we can make s_2/s_0 and s_1/s_0 arbitrarily close to 1 and 0, respectively. These yield the convergence in distribution

    (nh)^{1/2} ( f̂(x_0) − E{f̂(x_0)} ) →_d N(0, κ f(x_0)).

Next let 1/α − β + 2/5 be larger than 0. By Lemma 1, we have n^{β−1/α} S_a = o_p(1). Thus the convergence in distribution of the latter case follows from Proposition 1. Hence the proof of the theorem is complete.

Proof of Proposition 1  We adopt the notation of Koul and Surgailis (2001) and prove the proposition by following the arguments there. Provided that

    n^{β−1/α} ( S_b + (1/n) ∫ K(ξ) f'(x_0 + hξ) dξ Σ_{j=−∞}^{n−s_0} Σ_{i=1∨(j+s_0)}^n b_{i−j} ε_j ) = o_p(1),   (18)

Proposition 1 follows from (1.9) of Koul and Surgailis (2001):

    n^{β−1/α} (1/n) Σ_{i=1}^n Σ_{j=s_0}^∞ b_j ε_{i−j} →_d c_A Z.

We will establish (18). As in Koul and Surgailis (2001), we represent S_b as

    S_b = (1/n) Σ_{i=1}^n Σ_{j=s_0}^∞ ( E{K_{h,i} | S_{i−j}} − E{K_{h,i} | S_{i−j−1}} )
        = (1/n) Σ_{j=−∞}^{n−s_0} Σ_{i=1∨(j+s_0)}^n ( E{K_{h,i} | S_j} − E{K_{h,i} | S_{j−1}} ),   (19)

where K_{h,i} = h^{−1} K((X_i − x_0)/h). Then the left-hand side of (18) is rewritten as

    n^{β−1/α} (1/n) Σ_{j=−∞}^{n−s_0} Σ_{i=1∨(j+s_0)}^n ∫ K(ξ) U_{i,i−j}(ξ) dξ,   (20)

where

    U_{i,j}(ξ) = f̃_j(x_0 + hξ − b_j ε_{i−j} − X_{i,j+1}) − ∫ f̃_j(x_0 + hξ − b_j u − X_{i,j+1}) dG(u) + f'(x_0 + hξ) b_j ε_{i−j}.   (21)
We define R_n(ξ) by

    R_n(ξ) = n^{β−1/α} (1/n) Σ_{j=−∞}^{n−s_0} Σ_{i=1∨(j+s_0)}^n U_{i,i−j}(ξ);   (22)

then, by Jensen's inequality, we obtain for any r ≥ 1,

    E{ | ∫ K(ξ) R_n(ξ) dξ |^r } ≤ ∫ K(ξ) E{ |R_n(ξ)|^r } dξ.   (23)

We evaluate E{|R_n(ξ)|^r} in Lemma 2 below.

Lemma 2  For any positive M and r such that 1 < r = α/(1+η) and 0 < η < (αβ − 1)/2, we have

    lim_{n→∞} sup_{|ξ| ≤ M} E{ |R_n(ξ)|^r } = 0.

(18) follows from (20), (22), (23), and Lemma 2. Hence the proof of the proposition is complete.

Case 2  Theorem 2 follows from Lemma 1 and Proposition 2 below. The proof of Proposition 2 is based on the arguments in Surgailis (2002), whose notation we adopt. In particular, the proof of Lemma 4 is a modified and simplified version of the arguments in Surgailis (2002). We applied the argument of Surgailis (2004), the proof of (3.6) there, to the proof of Lemma 3.

Proposition 2  Suppose that C1 and C2 hold and that 1 < α < 2 and 1 < β < 2/α. Then we have

    n^{1 − 1/(αβ)} S_b →_d ( c_1 c_0^α σ_{αβ} / (β(αβ − 1)) )^{1/(αβ)} ( c_f^+ L^+ + c_f^− L^− ).

Remark 3  The proof of Lemma 3.1 of Surgailis (2002) implies that L^+ and L^− come from

    n^{−1/(αβ)} Σ_{i=1}^n ( (ε_i ∨ 0)^{1/β} − E{(ε_i ∨ 0)^{1/β}} )  and  n^{−1/(αβ)} Σ_{i=1}^n ( |ε_i ∧ 0|^{1/β} − E{|ε_i ∧ 0|^{1/β}} ),

respectively. As in Theorem 2.1 of Surgailis (2002), L^+ and L^− are common to every x_0. It also follows from the proof of Lemma 3.1 of Surgailis (2002) that the result of Proposition 2 does not depend on s_0. The proof of Proposition 2 is given after Theorem 2 is proved.
Proof of Theorem 2  First let 1/(αβ) be smaller than 3/5. Then Proposition 2 implies that (nh)^{1/2} S_b = o_p(1). By taking a sufficiently large s_2 in Lemma 1, we can make s_2/s_0 and s_1/s_0 arbitrarily close to 1 and 0, respectively. These yield the convergence in distribution

    (nh)^{1/2} ( f̂(x_0) − E{f̂(x_0)} ) →_d N(0, κ f(x_0)).

Next let 1/(αβ) be larger than 3/5. By Lemma 1, we have n^{1−1/(αβ)} S_a = o_p(1). Thus the convergence in distribution of the latter case follows from Proposition 2. Hence the proof of the theorem is complete.

Proof of Proposition 2  We begin the proof with several definitions:

    H_n(h, t) = ∫ K(ξ) ( f(x_0 + hξ − t) − f(x_0 + hξ) ) dξ,

    T_n = Σ_{i=1}^n Σ_{j=s_0}^∞ ( H_n(h, b_j ε_{i−j}) − E{H_n(h, b_j ε_{i−j})} ),

    T̄_n = Σ_{i=1}^n Σ_{j=s_0}^∞ ( H_n(h, b_j ε_i) − E{H_n(h, b_j ε_i)} ).

We prove the following two lemmas in Sect. 5.

Lemma 3  T_n − T̄_n = o_p(n^{1/(αβ)}).

Lemma 4  n S_b − T_n = o_p(n^{1/(αβ)}).

We consider T̄_n, since Lemmas 3 and 4 imply that n S_b = T̄_n + o_p(n^{1/(αβ)}). By the Taylor series expansion,

    H_n(h, b_j ε_i) = f(x_0 − b_j ε_i) − f(x_0)
        + ∫ K(ξ) ( ∫_0^{hξ} (hξ − η) ( f''(x_0 + η − b_j ε_i) − f''(x_0 + η) ) dη ) dξ
        = f(x_0 − b_j ε_i) − f(x_0) + h² H̄_n(h, b_j ε_i),   (24)

where H̄_n(h, u) is clearly defined by the remainder term. H̄_n(h, u) is a continuously differentiable bounded function, and its derivative is also bounded. In addition,

    lim_{n→∞} H̄_n(h, u) = (1/2) ∫ ξ² K(ξ) dξ ( f''(x_0 − u) − f''(x_0) ).
By applying the arguments in Lemma 3.1 of Surgailis (2002) to H̄_n(h, u), we can show that for any r < αβ,

    E{ | Σ_{j=s_0}^∞ H̄_n(h, b_j ε_i) |^r } < C_r,

where C_r depends on r. This means that the terms h² H̄_n(h, b_j ε_i) in (24) are negligible in n^{−1/(αβ)} T̄_n, and the result of Proposition 2 follows from Lemma 3.1 of Surgailis (2002). Hence the proof of Proposition 2 is complete.

Case 3  The proof of Proposition 3 below is a modified and simplified version of part of the arguments in Hsing (1999) and Pipiras and Taqqu (2003), whose notation we adopt. Since S_b = O_p(n^{−1/2}) by Proposition 3 below, and S_a can be treated in the same way as in Theorems 1 and 2, it is easy to prove the result of Theorem 3; we omit the proof.

Proposition 3  Suppose that C1 and C2 hold and that 0 < α < 2 and 2/α < β. Then we have

    n^{1/2} S_b = O_p(1).

Proof  What we have to prove is

    n E{S_b²} = O(1).   (25)

Let ε̄ be a random variable which is distributed as ε_1 and independent of {ε_j}_{j=−∞}^∞. Then we have

    P( |b_j (ε_i − ε̄)| ≥ 1 ) ≤ C |b_j|^α  and  E{ |b_j (ε_i − ε̄)|² I(|b_j (ε_i − ε̄)| < 1) } ≤ C |b_j|^α.   (26)

See (3.35) and (3.36) of Pipiras and Taqqu (2003) about (26). S_b is written as

    S_b = (1/n) Σ_{i=1}^n H_n(X_{i,s_0}),   (27)

where

    H_n(ζ) = ∫ K(ξ) ( f̃_{s_0}(x_0 + hξ − ζ) − E{ f̃_{s_0}(x_0 + hξ − X_{1,s_0}) } ) dξ.

As in the proofs of Propositions 1 and 2, we define U_{i,j}(ξ). In Case 3, it is defined by

    U_{i,j}(ξ) = f̃_j(x_0 + hξ − b_j ε_{i−j} − X_{i,j+1}) − E{ f̃_j(x_0 + hξ − b_j ε_{i−j} − X_{i,j+1}) | S_{i−j−1} }
        = ∫ ( f̃_j(x_0 + hξ − b_j ε_{i−j} − X_{i,j+1}) − f̃_j(x_0 + hξ − b_j ε̄ − X_{i,j+1}) ) dG(ε̄).   (28)

As in the proofs of Propositions 1 and 2, U_{i,j}(ξ) is S_{i−j}-measurable, and we have E{U_{i,j}(ξ) | S_{i−j−1}} = 0. In addition, H_n(X_{i,s_0}) is written as

    H_n(X_{i,s_0}) = ∫ K(ξ) Σ_{j=s_0}^∞ U_{i,j}(ξ) dξ.   (29)

By (27) and (29), we have

    n S_b = Σ_{i=1}^n H_n(X_{i,s_0}) = Σ_{i=1}^n ∫ K(ξ) Σ_{j=s_0}^∞ U_{i,j}(ξ) dξ.   (30)

The properties of U_{i,j}(ξ), (30), Jensen's inequality, and the Cauchy–Schwarz inequality imply that

    E{ (n S_b)² } ≤ ∫ K(ξ) Σ_{i=1}^n Σ_{j=s_0}^∞ Σ_{j'=s_0}^∞ ( E{U_{i,j}²(ξ)} )^{1/2} ( E{U_{i',j'}²(ξ)} )^{1/2} dξ,   (31)

where i' = i − j + j'. Provided that, for any positive M,

    sup_{|ξ| ≤ M} E{ U_{i,j}²(ξ) } ≤ C |b_j|^α,   (32)

(25) follows from (31). The integrand in (28) is bounded by C (1 ∧ |(ε_{i−j} − ε̄) b_j|). Therefore, by (26), we get

    E{ U_{i,j}²(ξ) } ≤ C ( E{ |(ε_{i−j} − ε̄) b_j|² I(|(ε_{i−j} − ε̄) b_j| < 1) } + P( |(ε_{i−j} − ε̄) b_j| ≥ 1 ) ) ≤ C |b_j|^α.

Hence (32) is established, and the proof of the proposition is complete.

5 Proofs of technical lemmas

In this section we prove the technical lemmas. The proofs of Lemmas 2 and 4 are modified and simplified versions of the arguments in Koul and Surgailis (2001) and Surgailis (2002),
respectively. We applied the argument of Surgailis (2004), the proof of (3.6) there, to the proof of Lemma 3.

Proof of Lemma 1  We write K_i for K((X_i − x_0)/h) in this proof for notational simplicity. Note that for any s ≥ s_1,

    h^{−1} E{ K_i | S_{i−s} } = ∫ K(ξ) f̃_s(x_0 + hξ − X_{i,s}) dξ,   (33)

and that the above expression is a bounded continuous function of X_{i,s}. Hence (16) follows from (15) by appealing to the properties of martingale differences. In fact,

    Var( Σ_{l=1}^k N_{2l} ) = Σ_{l=1}^k E{N_{2l}²}  and  E{N_{2l}²} ≤ C_{K,s_1} s_1² / n²,

where C_{K,s_1} depends on (33). Note that s_2 is tentatively fixed for N_{4l}.

We prove the latter part of (17) by using (15) and applying the martingale central limit theorem (e.g., Chow and Teicher 1988). The former part can be treated in the same way, and its proof is omitted. Since |N_{3l}| ≤ C/(nh), we have only to show that (15) and P2 imply

    nh Σ_{l=1}^k E{ N_{3l}² | S_{1+(l−1)s_0} } →_p (s_2/s_0) κ f(x_0).   (34)

We expand the conditional second moments:

    nh Σ_{l=1}^k E{ N_{3l}² | S_{1+(l−1)s_0} }
      = (1/(nh)) Σ_{l=1}^k Σ_{i=s_1+1+(l−1)s_0}^{ls_0} E{ K_i² | S_{1+(l−1)s_0} }
        + (2/(nh)) Σ_{l=1}^k Σ_{i_1=s_1+1+(l−1)s_0}^{ls_0} Σ_{i_2=i_1+1}^{ls_0} E{ K_{i_1} K_{i_2} | S_{1+(l−1)s_0} }
        − (1/(nh)) Σ_{l=1}^k ( Σ_{i=s_1+1+(l−1)s_0}^{ls_0} E{ K_i | S_{1+(l−1)s_0} } )²
      = (1/(nh)) Σ_{l=1}^k Σ_{i=s_1+1+(l−1)s_0}^{ls_0} E{ K_i² | S_{1+(l−1)s_0} } + O_p(h),   (35)

since P2 implies that the conditional expectations of the cross products are of order h² and the conditional means are of order h.
By P1 and the ergodic theorem, we obtain

    (1/(nh)) Σ_{l=1}^k Σ_{i=s_1+1+(l−1)s_0}^{ls_0} E{ K_i² | S_{1+(l−1)s_0} }
      = (1/n) Σ_{l=1}^k Σ_{i=s_1+1+(l−1)s_0}^{ls_0} ∫ K²(ξ) f̃_{i−1−(l−1)s_0}(x_0 + hξ − X_{i,i−1−(l−1)s_0}) dξ + O_p(h)
      →_p (s_2/s_0) κ f(x_0),   (36)

since E{ f̃_s(x_0 − X_{i,s}) } = f(x_0). (34) follows from (35) and (36).

Proof of Lemma 2  We adopt the notation of Koul and Surgailis (2001), for example, U_{i,j}(k), M_j(k), D_j(k), χ_{i,j}(k), and W_{i,j}(k). Note that we can do without (2.1) there. We fix a positive γ such that

    (1/r − 1/α)/β < γ < ((α − r)/r) (1 − 1/(rβ)).

Then r(1+γ) < α. The {M_{j,n}(ξ)} defined in (37) form a martingale difference sequence with respect to {S_j}:

    M_{j,n}(ξ) = Σ_{i=1∨(j+s_0)}^n U_{i,i−j}(ξ).   (37)

Then, by the von Bahr and Esseen inequality, we have

    E{ |R_n(ξ)|^r } ≤ 2 n^{r(β − 1/α − 1)} Σ_{j=−∞}^{n−s_0} E{ |M_{j,n}(ξ)|^r }.   (38)

To evaluate E{|M_{j,n}(ξ)|^r}, we decompose U_{i,j}(ξ) into three components, U^{(1)}_{i,j}, U^{(2)}_{i,j}, and U^{(3)}_{i,j}. Hereafter we suppress the dependence on ξ for notational convenience:

    U^{(1)}_{i,j} = f̃_j(x_0 + hξ − b_j ε_{i−j} − X_{i,j+1}) − ∫ f̃_j(x_0 + hξ − b_j u − X_{i,j+1}) dG(u) + f̃_j'(x_0 + hξ − X_{i,j+1}) b_j ε_{i−j},   (39)

    U^{(2)}_{i,j} = b_j ε_{i−j} ( f'(x_0 + hξ) − f'(x_0 + hξ − X_{i,j+1}) ),   (40)

    U^{(3)}_{i,j} = b_j ε_{i−j} ( f'(x_0 + hξ − X_{i,j+1}) − f̃_j'(x_0 + hξ − X_{i,j+1}) ).   (41)

Then M_{j,n} = Σ_{k=1}^3 M^{(k)}_{j,n}, where M^{(k)}_{j,n} = Σ_{i=1∨(j+s_0)}^n U^{(k)}_{i,i−j}.
Provided that

E{|M_{j,n}(ξ)|^r} ≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β(1+γ)} )^r,   (42)

(38), (42) and some calculation as on p. 321 of Koul and Surgailis (2001) imply the result of Lemma 2. We will establish (42). We give some definitions before we consider M^(1)_{j,n} and U^(1):

χ^(1) = I(|b_j v| ≤ 1, |b_j ε_{i−j}| ≤ 1),  χ^(2) = I(|b_j v| > 1, |b_j ε_{i−j}| ≤ 1),  χ^(3) = I(|b_j ε_{i−j}| > 1).

By using them, we have

E{|M^(1)_{j,n}|^r} ≤ C Σ_{k=1}^{3} D^(k)_{j,n},   (43)

where

D^(k)_{j,n} = E| Σ_{i=1∨(j+s_0)}^{n} χ^(k)_{i,i−j} U^(1)_{i,i−j} |^r.

We show that all of the D^(k)_{j,n} are bounded by C( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β(1+γ)} )^r. We deal with D^(1)_{j,n} and D^(3)_{j,n}; D^(2)_{j,n} can be treated in the same way as D^(3)_{j,n}. We can represent U^(1) as

U^(1) = W^(1) − W^(2),   (44)

where

W^(1) = ∫ ( ∫_{b_j u}^{b_j ε_{i−j}} f_j′(x_0 + v + hξ − X̃_{i,i−j+1}) dv ) dG(u),
W^(2) = ∫ ( ∫_{b_j u}^{b_j ε_{i−j}} f_j′(x_0 + hξ − X̃_{i,i−j+1}) dv ) dG(u).

Since

| f_j′(x_0 + v + hξ − X̃_{i,i−j+1}) − f_j′(x_0 + hξ − X̃_{i,i−j+1}) | ≤ C|v|^γ for |v| ≤ 1,

we have

|U^(1)| χ^(1) ≤ C ∫ | ∫_{b_j u}^{b_j ε_{i−j}} |v|^γ dv | dG(u) ≤ C |b_j|^{1+γ} (1 + |ε_{i−j}|^{1+γ}).   (45)
By (44), (45) and Minkowski's inequality, we get

D^(1)_{j,n} ≤ C ( Σ_{i=1∨(j+s_0)}^{n} |b_{i−j}|^{1+γ} ( E{1 + |ε_j|^{r(1+γ)}} )^{1/r} )^r ≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β(1+γ)} )^r.   (46)

Next we deal with D^(3)_{j,n}. By exploiting (44), we have

D^(3)_{j,n} ≤ C Σ_{l=1}^{2} Σ_{k=1}^{2} E| Σ_{i=1∨(j+s_0)}^{n} |W^(l,k)_{i,i−j}| I(|b_{i−j} ε_j| > 1) |^r,   (47)

where

W^(1,1) = ∫ ( ∫_{b_j u}^{b_j ε_{i−j}} f_j′(x_0 + v + hξ − X̃_{i,i−j+1}) dv ) I(|b_j ε_{i−j}| > 1) I(|ε_{i−j}| > |u|) dG(u),
W^(1,2) = ∫ ( ∫_{b_j u}^{b_j ε_{i−j}} f_j′(x_0 + v + hξ − X̃_{i,i−j+1}) dv ) I(|b_j ε_{i−j}| > 1) I(|ε_{i−j}| ≤ |u|) dG(u),
W^(2,1) = ∫ ( ∫_{b_j u}^{b_j ε_{i−j}} f_j′(x_0 + hξ − X̃_{i,i−j+1}) dv ) I(|b_j ε_{i−j}| > 1) I(|ε_{i−j}| > |u|) dG(u),
W^(2,2) = ∫ ( ∫_{b_j u}^{b_j ε_{i−j}} f_j′(x_0 + hξ − X̃_{i,i−j+1}) dv ) I(|b_j ε_{i−j}| > 1) I(|ε_{i−j}| ≤ |u|) dG(u).

Noticing that

|W^(1,1)| ≤ C |b_j ε_{i−j}|^{1+γ} I(|b_j ε_{i−j}| > 1),
|W^(1,2)| ≤ C ∫ |b_j u|^{1+γ} I(|b_j ε_{i−j}| > 1) dG(u) ≤ C |b_j|^{1+γ},
|W^(2,1)| ≤ C |b_j ε_{i−j}|^{1+γ} I(|b_j ε_{i−j}| > 1),
|W^(2,2)| ≤ C ∫ |b_j u|^{1+γ} I(|b_j ε_{i−j}| > 1) dG(u) ≤ C |b_j|^{1+γ},
we get

D^(3)_{j,n} ≤ C ( Σ_{i=1∨(j+s_0)}^{n} |b_{i−j}|^{1+γ} )^r ≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β(1+γ)} )^r.   (48)

Combining (43), (46), and (48), we have

E{|M^(1)_{j,n}|^r} ≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β(1+γ)} )^r.   (49)

We deal with M^(2)_{j,n}:

E{|M^(2)_{j,n}|^r} ≤ E{|ε_j|^r} E| Σ_{i=1∨(j+s_0)}^{n} b_{i−j} ( f′(x_0 + hξ) − f′(x_0 + hξ − X̃_{i,i−j+1}) ) |^r
≤ C E| Σ_{i=1∨(j+s_0)}^{n} |b_{i−j}| |X̃_{i,i−j+1}| |^r
≤ C ( Σ_{i=1∨(j+s_0)}^{n} |b_{i−j}| ( E{|X̃_{i,i−j+1}|^r} )^{1/r} )^r
≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−2β+1/r} )^r ≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β(1+γ)} )^r.   (50)

See the definition of r and γ about the last line of (50). Finally, by using P3, we have

E{|M^(3)_{j,n}|^r} ≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β} (i−j)^{−β+1/r} )^r ≤ C ( Σ_{i=1∨(j+s_0)}^{n} (i−j)^{−β(1+γ)} )^r.   (51)

Hence (42) is proved for every k and the proof of the lemma is complete.
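As a concrete illustration of the objects these lemmas concern — a linear process with infinite-variance innovations and a kernel density estimator with bandwidth of order n^{-1/5} — here is a small numerical sketch. All numerical values (tail index, coefficient decay, truncation length, bandwidth constant) are illustrative assumptions, not taken from the paper, and symmetrized Pareto innovations stand in for a general distribution in the domain of attraction of a stable law.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: tail index alpha in (1, 2) gives
# infinite variance; beta controls the decay of the coefficients b_j.
alpha, beta, n, lags = 1.5, 1.2, 5000, 200

# Heavy-tailed i.i.d. innovations: symmetrized Pareto with tail index alpha.
eps = rng.pareto(alpha, size=n + lags) * rng.choice([-1.0, 1.0], size=n + lags)

# Coefficients: b_0 = 1 and b_j = c0 * j^{-beta} for j >= 1 (series truncated at `lags`).
b = np.ones(lags + 1)
b[1:] = 0.5 * np.arange(1, lags + 1) ** (-beta)

# Linear process X_i = sum_j b_j * eps_{i-j}, computed as a moving-average convolution.
X = np.convolve(eps, b, mode="valid")[:n]

def kde_at(x0, data, c=1.0):
    """Kernel density estimate at x0, Gaussian kernel, bandwidth h = c * n^{-1/5}."""
    h = c * len(data) ** (-0.2)
    u = (data - x0) / h
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)) / h

estimate = kde_at(0.0, X)
print(f"f_hat(0) = {estimate:.4f}")
```

The truncation at `lags` terms is only a computational device; the theory works with the full infinite-order moving average.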
Proof of Lemma 3 We prove Lemma 3 by using the results given in Surgailis (2002, 2004), especially the proof of (3.6) of the latter. However, we do not deal with the uniformity and we can do without (2.3) of Surgailis (2002) in the proofs of Lemmas 3 and 4 here. We write V_n for the difference of T_n and T̃_n and represent V_n as

V_n = T_n − T̃_n = Σ_{k=1}^{∞} A_n(k) + Σ_{k=1}^{∞} B_n(k),   (52)

where

A_n(k) = Σ_{j=k}^{k+n−1} ( H_{n,h}(b_j ε_{n+1−k}) − E{H_{n,h}(b_j ε_{n+1−k})} ),
B_n(k) = Σ_{j=k}^{k+n−1} ( H_{n,h}(b_j ε_{1−k}) − E{H_{n,h}(b_j ε_{1−k})} ).

We choose a positive number r such that 1 < rβ, 0 < 1 + r(1−β) < r/(αβ) and 1 < r < α. It is not difficult to check the existence of r. Then by the von Bahr and Esseen inequality, we obtain

E{|V_n|^r} ≤ 2 ( Σ_{k=1}^{∞} E{|A_n(k)|^r} + Σ_{k=1}^{∞} E{|B_n(k)|^r} ).   (53)

Provided that

Σ_{k=1}^{∞} E{|A_n(k)|^r} = o(n^{r/(αβ)})  and  Σ_{k=1}^{∞} E{|B_n(k)|^r} = o(n^{r/(αβ)}),   (54)

the result of the lemma follows from (53). We will prove (54). First we deal with A_n(k). Since

H_{n,h}(0) = 0, |H_{n,h}(x)| ≤ C, and |H′_{n,h}(x)| ≤ C,

we have

|H_{n,h}(b_j ε_{n+1−k})| ≤ C ( 1 ∧ |ε_{n+1−k}| j^{−β} ).   (55)

By exploiting (55), we obtain

E[ | H_{n,h}(b_j ε_1) − E{H_{n,h}(b_j ε_1)} |^r ] ≤ C j^{−βr}.   (56)

As in the argument on p. 337 of Surgailis (2004), we have

Σ_{k=1}^{∞} E{|A_n(k)|^r} ≤ C Σ_{k=1}^{∞} ( Σ_{j=k}^{k+n−1} j^{−β} )^r ≤ C n^{1+r(1−β)} = o(n^{r/(αβ)}).

Hence the former of (54) is established.
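The appearance of the factor n^{1+r(1−β)} can be motivated by an integral comparison (a heuristic sketch only, not the rigorous argument of Surgailis 2004): approximating the inner sum by an integral and substituting k = nu gives

```latex
\sum_{j=k}^{k+n-1} j^{-\beta}
  \approx \frac{\bigl|k^{1-\beta} - (k+n)^{1-\beta}\bigr|}{|1-\beta|},
\qquad
\sum_{k=1}^{\infty}\Bigl(\sum_{j=k}^{k+n-1} j^{-\beta}\Bigr)^{r}
  \approx C\, n^{1+r(1-\beta)} \int_{0}^{\infty}
    \bigl|u^{1-\beta} - (1+u)^{1-\beta}\bigr|^{r}\, \mathrm{d}u,
```

which is the same scaling, with the same integral, that appears in the bound for Σ_k E{|B_n(k)|^r}.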
Next we consider B_n(k). (56) and the argument for A_n(k) imply that

Σ_{k=1}^{∞} E{|B_n(k)|^r} ≤ C n^{1+r(1−β)} ∫_0^{∞} { u^{1−β} − (1+u)^{1−β} }^r du = o(n^{r/(αβ)}).

The latter of (54) is also established. Hence the proof is complete.

Proof of Lemma 4 We adopt the notations of Surgailis (2002), for example, U(k), M^(k)_{n,j}, and V_{nj}. We choose two positive numbers, λ and r, such that 2 1 2β 1 + 1/αβ) < r < α and αβ r αβ 2βr 1 r) < λ < 1 2 αβ αβ 3 2βr). The existence of λ and r is proved in Surgailis (2002). [a] stands for the largest integer which is less than or equal to a. We represent ns_b T_n as

ns_b T_n = V_{1n} + V_{2n},   (57)

where

V_{1n} = Σ_{i=1}^{n} Σ_{j=s_0}^{[n^λ]} ∫ K(ξ) U_{i,j}(ξ) dξ,
V_{2n} = Σ_{i=1}^{n} Σ_{j=[n^λ]+1}^{∞} ∫ K(ξ) U_{i,j}(ξ) dξ,

U_{i,j}(ξ) = f_j(x_0 + hξ − b_j ε_{i−j} − X̃_{i,j+1}) − f_{j+1}(x_0 + hξ − X̃_{i,j+1}) − f(x_0 + hξ − b_j ε_{i−j}) + E{ f(x_0 + hξ − b_j ε_{i−j}) }.

If

E{|V_{1n}|^2} = o(n^{2/(αβ)})  and  E{|V_{2n}|^r} = o(n^{r/(αβ)}),   (58)

the result of the lemma follows from (57). We will establish (58). As in the proof of Lemma 2, we rewrite V_{1n} and V_{2n} as

V_{1n} = Σ_{j=1−[n^λ]}^{n−s_0} ∫ K(ξ) M^(1)_{n,j}(ξ) dξ  and  V_{2n} = Σ_{j=−∞}^{n−1−[n^λ]} ∫ K(ξ) M^(2)_{n,j}(ξ) dξ,   (59)

where

M^(1)_{n,j}(ξ) = Σ_{i=1∨(j+s_0)}^{n} U_{i,i−j}(ξ) I(i−j ≤ [n^λ])
and

M^(2)_{n,j}(ξ) = Σ_{i=1∨(j+s_0)}^{n} U_{i,i−j}(ξ) I(i−j > [n^λ]).

{M^(l)_{n,j}(ξ)} form martingale difference sequences with respect to {S_j}. (58) follows from (59), Jensen's inequality and the von Bahr and Esseen inequality if we have shown that for any positive M, uniformly on {|ξ| ≤ M},

Σ_{j=1−[n^λ]}^{n−s_0} ( Σ_{i=1∨(j+s_0)}^{n} ( E{ U_{i,i−j}^2(ξ) I(i−j ≤ [n^λ]) } )^{1/2} )^2 = o(n^{2/(αβ)}),   (60)

Σ_{j=−∞}^{n−1−[n^λ]} ( Σ_{i=1∨(j+s_0)}^{n} ( E{ |U_{i,i−j}(ξ)|^r I(i−j > [n^λ]) } )^{1/r} )^r = o(n^{r/(αβ)}).   (61)

Provided that on {|ξ| ≤ M},

E{|U_{i,j}(ξ)|^r} ≤ C j^{1−2rβ}, j ≥ s_0,   (62)

some calculation as on pp. of Surgailis (2002) implies (60) and (61). In the calculation we use the fact that E{U^2(ξ)} ≤ C E{|U(ξ)|^r} since U(ξ) is uniformly bounded in h and ξ. We will establish (62). Hereafter we suppress the dependence on ξ. We decompose U into three components,

U = U^(1) + U^(2) + U^(3),   (63)

where

U^(1) = f_j(x_0 + hξ − b_j ε_{i−j} − X̃_{i,j+1}) − f_{j+1}(x_0 + hξ − X̃_{i,j+1}) − f_j(x_0 + hξ − b_j ε_{i−j}) + E{ f_j(x_0 + hξ − b_j ε_{i−j}) },
U^(2) = f_j(x_0 + hξ − b_j ε_{i−j}) − f_j(x_0 + hξ) − f(x_0 + hξ − b_j ε_{i−j}) + f(x_0 + hξ),
U^(3) = −E{U^(2)}.

First we evaluate U^(1), which is written as

U^(1) = ∫ ( ∫_{b_j u}^{b_j ε_{i−j}} ( f_j′(x_0 + hξ + z − X̃_{i,j+1}) − f_j′(x_0 + hξ + z) ) dz ) dG(u).
Since

| ∫_{b_j u}^{b_j ε_{i−j}} ( f_j′(x_0 + hξ + z − X̃_{i,j+1}) − f_j′(x_0 + hξ + z) ) dz | ≤ C ( |b_j u| + |b_j ε_{i−j}| ) |X̃_{i,j+1}|,

we have

E{|U^(1)|^r} ≤ C j^{1−2βr}.   (64)

Next we deal with U^(2), which is also written as

U^(2) = ∫_0^{b_j ε_{i−j}} ( f_j′(x_0 + hξ + z) − f′(x_0 + hξ + z) ) dz.

Using P3 and the above expression, we obtain

|U^(2)| ≤ C |b_j ε_{i−j}| j^{1/r−β}.   (65)

(65) implies that

E{|U^(2)|^r} ≤ C j^{1−2βr}.   (66)

Finally, by Jensen's inequality, we obtain

|U^(3)|^r ≤ C j^{1−2βr}.   (67)

(62) follows from (64), (66), and (67). Hence the proof of the lemma is complete.

Acknowledgments The author appreciates the helpful comments of the associate editor and the referee very much.

References

Bryk, A., Mielniczuk, J. (2005). Asymptotic properties of density estimates for linear processes: application of projection method. Journal of Nonparametric Statistics, 17,
Cheng, B., Robinson, P. M. (1991). Density estimation in strongly dependent non-linear time series. Statistica Sinica, 1,
Chow, Y. S., Teicher, H. (1988). Probability theory (2nd ed.). New York: Springer.
Csörgő, S., Mielniczuk, J. (1995). Density estimation under long-range dependence. The Annals of Statistics, 28,
Doukhan, P. (1994). Mixing: properties and examples. Lecture Notes in Statistics, Vol. 85. New York: Springer.
Fan, J., Yao, Q. (2003). Nonlinear time series. New York: Springer.
Giraitis, L., Koul, H. L., Surgailis, D. (1996). Asymptotic normality of regression estimators with long memory errors. Statistics & Probability Letters, 29,
Giraitis, L., Surgailis, D. (1986). Multivariate Appell polynomials and the central limit theorem. In E. Eberlein, M. S. Taqqu (Eds.), Dependence in probability and statistics. Boston: Birkhäuser.
Hall, P., Hart, J. D. (1990). Convergence rates in density estimation for data from infinite-order moving average process. Probability Theory and Related Fields, 87,
Hallin, M., Tran, L. T. (1996). Kernel density estimation for linear processes: asymptotic normality and optimal bandwidth derivation. The Annals of the Institute of Statistical Mathematics, 48,
Hidalgo, J. (1997). Non-parametric estimation with strongly dependent multivariate time series. Journal of Time Series Analysis, 18,
Ho, H.-C. (1996). On central and non-central limit theorems in density estimation for sequences of long-range dependence. Stochastic Processes and their Applications, 63,
Ho, H.-C., Hsing, T. (1996). On the asymptotic expansion of the empirical process of long-memory moving averages. The Annals of Statistics, 24,
Ho, H.-C., Hsing, T. (1997). Limit theorems for functionals of moving averages. The Annals of Probability, 25,
Honda, T. (2000). Nonparametric density estimation for a long-range dependent linear process. The Annals of the Institute of Statistical Mathematics, 52,
Hsing, T. (1999). On the asymptotic distribution of partial sum of functionals of infinite-variance moving averages. The Annals of Probability, 27,
Koul, H. L., Surgailis, D. (2001). Asymptotics of empirical processes of long memory moving averages with infinite variance. Stochastic Processes and their Applications, 91,
Koul, H. L., Surgailis, D. (2002). Asymptotic expansion of the empirical process of long memory moving averages. In H. Dehling, T. Mikosch, M. Sørensen (Eds.), Empirical process techniques for dependent data. Boston: Birkhäuser.
Peng, L., Yao, Q. (2004). Nonparametric regression under dependent errors with infinite variance. The Annals of the Institute of Statistical Mathematics, 56,
Pipiras, V., Taqqu, M. S. (2003). Central limit theorems for partial sums of bounded functionals of infinite-variance moving averages. Bernoulli, 9,
Samorodnitsky, G., Taqqu, M. S. (1994). Stable non-Gaussian processes: stochastic models with infinite variance. London: Chapman & Hall.
Schick, A., Wefelmeyer, W. (2006).
Pointwise convergence rates for kernel density estimators in linear processes. Statistics & Probability Letters, 76,
Surgailis, D. (2002). Stable limits of empirical processes of moving averages with infinite variance. Stochastic Processes and their Applications, 100,
Surgailis, D. (2004). Stable limits of sums of bounded functions of long-memory moving averages with finite variance. Bernoulli, 10,
Taniguchi, M., Kakizawa, Y. (2000). Asymptotic theory of statistical inference for time series. New York: Springer.
Wu, W. B., Mielniczuk, J. (2002). Kernel density estimation for linear processes. The Annals of Statistics, 30,
Wuertz, D. and many others and see the SOURCE file (2006). fBasics: Rmetrics - Markets and Basic Statistics. R package version. http://
Title: Nonparametric Density Estimation for Linear Processes with Infinite Variance
Author(s): Honda, Toshio
Issue Date: 2006-08
Type: Technical Report
URL: http://hdl.handle.net/10086/16959
Dynamics and Relativity Stepen Siklos Lent term 2011 Hand-outs and examples seets, wic I will give out in lectures, are available from my web site www.damtp.cam.ac.uk/user/stcs/dynamics.tml Lecture notes,
More informationMath Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim
Mat 311 - Spring 013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, 013 Question 1. [p 56, #10 (a)] 4z Use te teorem of Sec. 17 to sow tat z (z 1) = 4. We ave z 4z (z 1) = z 0 4 (1/z) (1/z
More informationPUBLISHED VERSION. Copyright 2009 Cambridge University Press.
PUBLISHED VERSION Gao, Jiti; King, Maxwell L.; Lu, Zudi; josteim, D.. Nonparametric specification testing for nonlinear time series wit nonstationarity, Econometric eory, 2009; 256 Suppl:1869-1892. Copyrigt
More informationOn convexity of polynomial paths and generalized majorizations
On convexity of polynomial pats and generalized majorizations Marija Dodig Centro de Estruturas Lineares e Combinatórias, CELC, Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa, Portugal
More informationPre-Calculus Review Preemptive Strike
Pre-Calculus Review Preemptive Strike Attaced are some notes and one assignment wit tree parts. Tese are due on te day tat we start te pre-calculus review. I strongly suggest reading troug te notes torougly
More information1 + t5 dt with respect to x. du = 2. dg du = f(u). du dx. dg dx = dg. du du. dg du. dx = 4x3. - page 1 -
Eercise. Find te derivative of g( 3 + t5 dt wit respect to. Solution: Te integrand is f(t + t 5. By FTC, f( + 5. Eercise. Find te derivative of e t2 dt wit respect to. Solution: Te integrand is f(t e t2.
More informationTime (hours) Morphine sulfate (mg)
Mat Xa Fall 2002 Review Notes Limits and Definition of Derivative Important Information: 1 According to te most recent information from te Registrar, te Xa final exam will be eld from 9:15 am to 12:15
More informationExam 1 Review Solutions
Exam Review Solutions Please also review te old quizzes, and be sure tat you understand te omework problems. General notes: () Always give an algebraic reason for your answer (graps are not sufficient),
More informationNumerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems
Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for
More informationQuantum Theory of the Atomic Nucleus
G. Gamow, ZP, 51, 204 1928 Quantum Teory of te tomic Nucleus G. Gamow (Received 1928) It as often been suggested tat non Coulomb attractive forces play a very important role inside atomic nuclei. We can
More informationSuperconvergence of energy-conserving discontinuous Galerkin methods for. linear hyperbolic equations. Abstract
Superconvergence of energy-conserving discontinuous Galerkin metods for linear yperbolic equations Yong Liu, Ci-Wang Su and Mengping Zang 3 Abstract In tis paper, we study superconvergence properties of
More informationMath 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0
3.4: Partial Derivatives Definition Mat 22-Lecture 9 For a single-variable function z = f(x), te derivative is f (x) = lim 0 f(x+) f(x). For a function z = f(x, y) of two variables, to define te derivatives,
More informationArtificial Neural Network Model Based Estimation of Finite Population Total
International Journal of Science and Researc (IJSR), India Online ISSN: 2319-7064 Artificial Neural Network Model Based Estimation of Finite Population Total Robert Kasisi 1, Romanus O. Odiambo 2, Antony
More informationCubic Functions: Local Analysis
Cubic function cubing coefficient Capter 13 Cubic Functions: Local Analysis Input-Output Pairs, 378 Normalized Input-Output Rule, 380 Local I-O Rule Near, 382 Local Grap Near, 384 Types of Local Graps
More informationMAT 145. Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points
MAT 15 Test #2 Name Solution Guide Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points Use te grap of a function sown ere as you respond to questions 1 to 8. 1. lim f (x) 0 2. lim
More informationModel Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1
Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 By Jiti Gao 2 and Maxwell King 3 Abstract We propose a simultaneous model specification procedure for te conditional
More informationUniversity Mathematics 2
University Matematics 2 1 Differentiability In tis section, we discuss te differentiability of functions. Definition 1.1 Differentiable function). Let f) be a function. We say tat f is differentiable at
More information