Structural Nonparametric Cointegrating Regression
Structural Nonparametric Cointegrating Regression

Qiying Wang
School of Mathematics and Statistics
The University of Sydney

Peter C. B. Phillips
Yale University, University of Auckland, University of York & Singapore Management University

August 25, 2008

Abstract

Nonparametric estimation of a structural cointegrating regression model is studied. As in the standard linear cointegrating regression model, the regressor and the dependent variable are jointly dependent and contemporaneously correlated. In nonparametric estimation problems, joint dependence is known to be a major complication that affects identification, induces bias in conventional kernel estimates, and frequently leads to ill-posed inverse problems. In functional cointegrating regressions where the regressor is an integrated or near-integrated time series, it is shown here that inverse and ill-posed inverse problems do not arise. Instead, simple nonparametric kernel estimation of a structural nonparametric cointegrating regression is consistent and the limit distribution theory is mixed normal, giving straightforward asymptotics useable in practical work. The results provide a convenient basis for inference in structural nonparametric regression with nonstationary time series when there is a single integrated or near-integrated regressor. The methods may be applied to a range of empirical models where functional estimation of cointegrating relations is required.

Key words and phrases: Brownian local time, Cointegration, Functional regression, Gaussian process, Integrated process, Kernel estimate, Near integration, Nonlinear functional, Nonparametric regression, Structural estimation, Unit root.

JEL Classification: C14, C22.

The authors thank the coeditor and two referees for helpful comments on the original version. Wang acknowledges partial research support from the Australian Research Council. Phillips acknowledges partial research support from a Kelly Fellowship and the NSF under Grant No. SES.
1 Introduction

A good deal of recent attention in econometrics has focused on functional estimation in structural econometric models and the inverse problems to which they frequently give rise. A leading example is a structural nonlinear regression where the functional form is the object of primary interest. In such systems, identification and estimation are typically much more challenging than in linear systems because they involve the inversion of integral operator equations which may be ill-posed in the sense that the solutions may not exist, may not be unique, and may not be continuous. Some recent contributions to this field include Newey, Powell and Vella (1999), Newey and Powell (2003), Ai and Chen (2003), Florens (2003), and Hall and Horowitz (2004). Overviews of the ill-posed inverse literature are given in Florens (2003) and Carrasco, Florens and Renault (2006). All of this literature has focused on microeconometric and stationary time series settings.

In linear structural systems, problems of inversion from the reduced form are much simpler, and conditions for identification and consistent estimation techniques have been extensively studied. Under linearity, it is also well known that the presence of nonstationary regressors can provide a simplification. In particular, for cointegrated systems involving time series with unit roots, structural relations are actually present in the reduced form (and therefore always identified) because of the unit roots in a subset of the determining equations. In fact, such models can always be written in error correction or reduced rank regression format where the structural relations are immediately evident.

The present paper shows that nonstationarity leads to major simplifications in the context of structural nonlinear functional regression. The primary simplification arises because in nonlinear models with endogenous nonstationary regressors there is no ill-posed inverse problem.
In fact, there is no inverse problem at all in the functional treatment of such systems. Furthermore, identification does not require the existence of instrumental variables that are orthogonal to the equation errors. Finally, and perhaps most importantly for practical work, consistent estimation may be accomplished using standard kernel regression techniques, and inference may be conducted in the usual way and is valid asymptotically under simple regularity conditions. These results for kernel regression in structural nonlinear models of cointegration open up many new possibilities for empirical research.

The reason why there is no inverse problem in structural nonlinear nonstationary systems can be explained heuristically as follows. In a nonparametric structural setting
it is conventional to impose on the disturbances a zero conditional mean condition given certain instruments, in order to assist in identifying an infinite dimensional function. Such conditions lead to an integral equation involving the conditional probability distribution of the regressors and the structural function integrated over the space of the regressor. This equation describes the relation between the structure and reduced form, and its solution, if it exists and is unique, delivers the unknown structural function. But when the endogenous regressor is nonstationary there is no invariant probability distribution of the regressor, only the local time density of the limiting stochastic process corresponding to a standardized version of the regressor as it sojourns in the neighborhood of a particular spatial value. Accordingly, there is no integral equation relating the structure to the reduced form. In fact, the structural equation itself is locally also a reduced form equation in the neighborhood of this spatial value. For when an endogenous regressor is in the locality of a specific value, the systematic part of the structural equation depends on that specific value and the equation is effectively a reduced form. What is required is that the nonstationary regressor spends enough time in the vicinity of a point in the space to ensure consistent estimation. This in turn requires recurrence, so that the local time of the limit process corresponding to the time series is positive. In addition, the random wandering nature of a stochastically nonstationary regressor such as a unit root process ensures that the regressor inevitably departs from any particular locality and thereby assists in tracing out (and identifying) the structural function over a wide domain. The process is similar to the manner in which instruments may shift the location in which a structural function is observed and in doing so assist in the process of identification when the data are stationary.
Linear cointegrating systems reveal a strong form of this property. As mentioned above, in linear cointegration the inverse problem disappears completely because the structural relations continue to be present in the reduced form. Indeed, they are the same as reduced form equations up to simple time shifts, which are of no importance in long run relations. In nonlinear structural cointegration, the same behavior applies locally in the vicinity of a particular spatial value, thereby giving local identification of the structural function and facilitating estimation. In linear cointegration, the signal strength of a nonstationary regressor ensures that least squares estimation is consistent, although the estimates are well known to have second-order bias (Phillips and Durlauf, 1986; Stock, 1987) and are therefore seldom used
in practical work. Much attention has therefore been given in the time series literature to the development of econometric estimation methods that remove the second-order bias and are asymptotically and semiparametrically efficient. In nonlinear structural functional estimation with a single nonstationary regressor, this paper shows that local kernel regression methods are consistent and that under some regularity conditions they are also asymptotically mixed normally distributed, so that conventional approaches to inference are possible. These results constitute a major simplification in the functional treatment of nonlinear cointegrated systems and they directly open up empirical applications with existing methods. In related recent work, Karlsen, Myklebust and Tjøstheim (2007) and Schienle (2008) used Markov chain methods to develop an asymptotic theory of kernel regression allowing for some forms of nonstationarity and endogeneity in the regressor. Schienle also considers additive nonparametric models with many nonstationary regressors and smooth backfitting methods of estimation. The results in the current paper are obtained using local time convergence techniques, extending those in Wang and Phillips (2008) to the endogenous regressor case and allowing for both integrated and near integrated regressors with general forms of serial dependence in the generating mechanism and equilibrium error. The validity of the limit theory in the case of near integrated regressors is important in practice because it is often convenient in empirical work not to insist on unit roots and to allow for roots near unity in the regressors. By contrast, conventional methods of estimation and inference in parametric models of linear cointegration are known to break down when the regressors have roots local to unity.

The paper is organized as follows. Section 2 introduces the model and assumptions.
Section 3 provides the main results on the consistency and limit distribution of the kernel estimator in a structural model of nonlinear cointegration and associated methods of inference. Section 4 reports a simulation experiment exploring the finite sample performance of the kernel estimator. Section 5 concludes and outlines ways in which the present paper may be extended. Proofs and various subsidiary technical results are given in Sections 6-9 as Appendices to the paper.
2 Model and Assumptions

We consider the following nonlinear structural model of cointegration

$$y_t = f(x_t) + u_t, \qquad t = 1, 2, \ldots, n, \eqno(2.1)$$

where u_t is a zero mean stationary equilibrium error, x_t is a jointly dependent nonstationary regressor, and f is an unknown function to be estimated with the observed data {y_t, x_t}_{t=1}^n. The conventional kernel estimate of f(x) in model (2.1) is given by

$$\hat f(x) = \frac{\sum_{t=1}^{n} y_t K_h(x_t - x)}{\sum_{t=1}^{n} K_h(x_t - x)}, \eqno(2.2)$$

where K_h(s) = (1/h) K(s/h), K(x) is a nonnegative real function, and the bandwidth parameter h ≡ h_n → 0 as n → ∞.

The limit behavior of f̂(x) has been investigated in past work in some special situations, notably where the error process u_t is a martingale difference sequence and there is no contemporaneous correlation between x_t and u_t. These are strong conditions, they are particularly restrictive in relation to the conventional linear cointegrating regression framework, and they are unlikely to be satisfied in econometric applications. However, they do facilitate the development of a limit theory by various methods. In particular, Karlsen, Myklebust and Tjøstheim (2007) investigated f̂(x) in the situation where x_t is a recurrent Markov chain, allowing for some dependence between x_t and u_t. Under similar conditions and using related Markov chain methods, Schienle (2008) investigated additive nonlinear versions of (2.1) and obtained a limit theory for nonparametric regressions under smooth backfitting. Wang and Phillips (2008, hereafter WP) considered an alternative treatment by making use of local time limit theory and, instead of recurrent Markov chains, worked with partial sum representations of the type x_t = Σ_{j=1}^t ξ_j, where ξ_j is a general linear process. These authors showed that the limit theory for f̂(x) has links to traditional nonparametric asymptotics for stationary models with exogenous regressors, even though the rates of convergence are different and typically slower when x_t is nonstationary, and the limit theory is mixed normal rather than normal.
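The estimator (2.2) is an ordinary Nadaraya-Watson smoother and can be computed directly. The following is a minimal sketch with a Gaussian kernel; the function name and the toy data are illustrative, not from the paper:

```python
import numpy as np

def kernel_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate of f(x0) as in (2.2):
    sum_t y_t K((x_t - x0)/h) / sum_t K((x_t - x0)/h), Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # kernel weights K((x_t - x0)/h)
    return np.sum(w * y) / np.sum(w)

# Toy usage: an integrated (random-walk) regressor with exogenous noise
rng = np.random.default_rng(0)
n = 2000
xt = np.cumsum(rng.standard_normal(n))          # integrated regressor
yt = np.sin(xt) + 0.1 * rng.standard_normal(n)  # y_t = f(x_t) + u_t with f = sin
print(kernel_estimate(0.0, xt, yt, h=n ** (-1 / 3)))  # estimate of f(0)
```

Since the factor 1/h in K_h appears in both the numerator and the denominator of (2.2), unnormalized kernel weights can be used in the ratio.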
In extending this work, it seems particularly important to relax conditions of independence and permit joint determination of x_t and y_t, and to allow for serial dependence in the equilibrium errors u_t and the innovations driving x_t, so that the system is a time series structural model. The goal of the present paper is to do so and to develop a limit theory for structural functional estimation in the context of nonstationary time series
that is more in line with the type of assumptions made for parametric linear cointegrated systems.

Throughout the paper we let {ε_t}_{t≥1} be a sequence of independent and identically distributed (iid) continuous random variables with Eε_1 = 0, Eε_1² = 1, and with the characteristic function φ(t) of ε_1 satisfying ∫|φ(t)| dt < ∞. The sequence {ε_t}_{t≥1} is assumed to be independent of another iid random sequence {λ_t}_{t≥1} that enters into the generating mechanism for the equilibrium errors. These two sequences comprise the innovations that drive the time series structure of the model. We use the following assumptions in the asymptotic development.

Assumption 1. x_t = ρ x_{t−1} + η_t, where x_0 = 0, ρ = 1 + κ/n with κ being a constant, and η_t = Σ_{k=0}^∞ φ_k ε_{t−k} with φ ≡ Σ_{k=0}^∞ φ_k ≠ 0 and Σ_{k=0}^∞ |φ_k| < ∞.

Assumption 2. u_t = u(ε_t, ε_{t−1}, ..., ε_{t−m_0+1}, λ_t, λ_{t−1}, ..., λ_{t−m_0+1}) satisfies Eu_t = 0 and Eu_t⁴ < ∞ for t ≥ m_0, where u(x_1, ..., x_{m_0}, y_1, ..., y_{m_0}) is a real measurable function on R^{2m_0}. We define u_t = 0 for 1 ≤ t ≤ m_0 − 1.

Assumption 3. K(x) is a nonnegative bounded continuous function satisfying ∫K(x)dx < ∞ and ∫|K̂(x)| dx < ∞, where K̂(x) = ∫ e^{ixt} K(t) dt.

Assumption 4. For given x, there exist a real function f_1(s, x) and a constant 0 < γ ≤ 1 such that, when h is sufficiently small, |f(hy + x) − f(x)| ≤ h^γ f_1(y, x) for all y ∈ R, and ∫K(s) f_1(s, x) ds < ∞.

Assumption 1 allows for both a unit root (κ = 0) and near unit root (κ ≠ 0) regressor by virtue of the localizing coefficient κ and is standard in the near integrated regression framework (Phillips, 1987, 1988; Chan and Wei, 1987). The regressor x_t is then a triangular array formed from a (weighted) partial sum of linear process innovations that satisfy a simple summability condition with long run moving average coefficient φ ≠ 0. We remark that in the cointegrating framework, it is conventional to set κ = 0 so that the regressor is integrated, and this turns out to be important in inference.
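For illustration, the regressor process of Assumption 1 can be simulated directly; the following sketch truncates the infinite-order linear process η_t to a finite-order moving average (the function name and the coefficient values are hypothetical choices, not from the paper):

```python
import numpy as np

def near_integrated_path(n, kappa=0.0, phi=(1.0, 0.5, 0.25), seed=0):
    """Generate x_t = rho * x_{t-1} + eta_t with rho = 1 + kappa/n, x_0 = 0,
    and eta_t a finite-order linear process sum_k phi_k eps_{t-k}, a truncated
    stand-in for the infinite-order process of Assumption 1."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + len(phi))        # iid innovations, mean 0, var 1
    eta = np.convolve(eps, phi, mode="valid")[:n]  # MA filter applied to eps
    rho = 1.0 + kappa / n
    x = np.zeros(n)                                # x[0] = 0 as in Assumption 1
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eta[t]
    return x

x_unit = near_integrated_path(500, kappa=0.0)    # exact unit root, rho = 1
x_near = near_integrated_path(500, kappa=-5.0)   # local-to-unity root, rho = 1 - 5/n
```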
Indeed, in linear parametric cointegration, it is well known (e.g., Elliott, 1998) that near integration (κ ≠ 0) leads to failure of standard cointegration estimation and test procedures. As shown here, no such failures occur under near integration in the nonparametric regression context.

Assumption 2 allows the equation error u_t to be serially dependent and cross correlated with x_s for |t − s| < m_0, thereby inducing endogeneity in the regressor. In the asymptotic development below, m_0 is assumed to be finite, but this could likely be relaxed under
some additional conditions and with greater complexity in the proofs, although that is not done here. It is not necessary for u_t to depend on λ_s, in which case there is only a single innovation sequence. However, in most practical cases involving cointegration between two variables, we can expect that there will be two innovation sequences. While u_t is stationary under Assumption 2, we later discuss some nonstationary cases where the conditional variance of u_t may depend on x_t. Note that Assumption 2 allows for a nonlinear generating mechanism for the equilibrium error u_t. This seems appropriate in a context where the regression function itself is allowed to take a general nonlinear form.

Assumption 3 places stronger conditions on the kernel function than is usual in kernel estimation, requiring that the Fourier transform of K(x) is integrable. This condition is needed for technical reasons in the proofs and is clearly satisfied for many commonly used kernels, like the normal kernel or kernels having compact support.

Assumption 4, which was used in WP, is quite weak and can be verified for various kernels K(x) and regression functions f(x). For instance, if K(x) is a standard normal kernel or has compact support, a wide range of regression functions f(x) are included. Thus, commonly occurring functions like f(x) = |x|^β and f(x) = 1/(1 + |x|^β) for some β > 0 satisfy Assumption 4 with γ = min{β, 1}. When γ = 1, stronger smoothness conditions on f(x) can be used to assist in developing analytic forms for the asymptotic bias function in kernel estimation.

3 Main result and outline of the proof

The limit theory for the conventional kernel regression estimate f̂(x) under random normalization turns out to be very simple and is given in the following theorem.

THEOREM 3.1. For any h satisfying nh² → ∞ and h → 0,

$$\hat f(x) \to_p f(x). \eqno(3.1)$$

Furthermore, for any h satisfying nh² → ∞ and nh^{2(1+2\gamma)} → 0,

$$\Big(\sum_{t=1}^{n} K\big((x_t - x)/h\big)\Big)^{1/2}\big(\hat f(x) - f(x)\big) \to_D N(0, \sigma^2), \eqno(3.2)$$

where σ² = E(u²_{m_0}) ∫K²(s)ds / ∫K(s)ds.
Remarks

(a) The result (3.1) implies that f̂(x) is a consistent estimate of f(x). Furthermore, as in WP, we may show that

$$\hat f(x) - f(x) = o_P\Big\{a_n\big[h^{\gamma} + (h\sqrt n)^{-1/2}\big]\Big\}, \eqno(3.3)$$

where γ is defined as in Assumption 4, and a_n diverges to infinity as slowly as required. This indicates that a possible optimal bandwidth, which yields the best rate in (3.3) or the minimal E(f̂(x) − f(x))², at least for general γ, satisfies

$$h_{opt} \asymp \mathop{\rm argmin}_{h}\big\{h^{\gamma} + (h\sqrt n)^{-1/2}\big\} = a\, n^{-1/[2(1+2\gamma)]},$$

where a is a positive constant. In the most common case that γ = 1, this result suggests a possible optimal bandwidth of order n^{−1/6}, so that h = o(n^{−1/6}) ensures undersmoothing. This is different from nonparametric regression with a stationary regressor, which typically requires h = o(n^{−1/5}) for undersmoothing. Under stronger smoothness conditions on f(x) it is possible to develop an explicit expression for the bias function, and the weaker condition h = o(n^{−1/10}) applies for undersmoothing. Some further discussion and results are given in Remark (c) and Section 9.

(b) To outline the essentials of the argument in the proof of Theorem 3.1, we split the error of estimation f̂(x) − f(x) as

$$\hat f(x) - f(x) = \frac{\sum_{t=1}^{n} u_t K[(x_t - x)/h]}{\sum_{t=1}^{n} K[(x_t - x)/h]} + \frac{\sum_{t=1}^{n} \big[f(x_t) - f(x)\big] K[(x_t - x)/h]}{\sum_{t=1}^{n} K[(x_t - x)/h]}.$$

The result (3.3), which implies (3.1) by letting a_n = min{h^{−γ}, (h√n)^{1/2}}, will follow if we prove

$$\Theta_{1n} := \sum_{t=1}^{n} u_t K[(x_t - x)/h] = O_P\big\{(h\sqrt n)^{1/2}\big\}, \eqno(3.4)$$

$$\Theta_{2n} := \sum_{t=1}^{n} \big[f(x_t) - f(x)\big] K[(x_t - x)/h] = O_P\big\{\sqrt n\, h^{1+\gamma}\big\}, \eqno(3.5)$$

and if, for any a_n diverging to infinity as slowly as required,

$$\Theta_{3n} := 1\Big/\sum_{t=1}^{n} K[(x_t - x)/h] = o_P\big\{a_n/(h\sqrt n)\big\}. \eqno(3.6)$$
On the other hand, it is readily seen that

$$\Big(\sum_{t=1}^{n} K[(x_t - x)/h]\Big)^{1/2}\big(\hat f(x) - f(x)\big) = \frac{\sum_{t=1}^{n} u_t K[(x_t - x)/h]}{\big(\sum_{t=1}^{n} K[(x_t - x)/h]\big)^{1/2}} + \Theta_{2n}\,\Theta_{3n}^{1/2}.$$

By virtue of (3.5) and (3.6) with a_n = (nh^{2+4γ})^{−1/8}, we obtain Θ_{2n} Θ_{3n}^{1/2} →_P 0, since nh^{2+4γ} → 0. The stated result (3.2) will then follow if we prove

$$\Big\{(n h^2)^{-1/4} \sum_{k=1}^{[nt]} u_k K[(x_k - x)/h],\; (n h^2)^{-1/2} \sum_{k=1}^{n} K[(x_k - x)/h]\Big\} \to_D \big\{d_0\, N\, L^{1/2}(t, 0),\; d_1 L(1, 0)\big\}, \eqno(3.7)$$

on D[0, 1]², where d_0² = φ^{−1} E(u²_{m_0}) ∫K²(s)ds, d_1 = φ^{−1} ∫K(s)ds, and L(t, 0) is the local time process at the origin of the Gaussian diffusion process {J_κ(t)}_{t≥0} defined by

$$J_{\kappa}(t) = W(t) + \kappa \int_0^t e^{(t-s)\kappa} W(s)\, ds, \eqno(3.8)$$

with {W(t)}_{t≥0} a standard Brownian motion, and where N is a standard normal variate independent of L(t, 0). The local time process L(t, a) is defined by

$$L(t, a) = \lim_{\epsilon \to 0} \frac{1}{2\epsilon} \int_0^t I\big\{|J_{\kappa}(r) - a| \le \epsilon\big\}\, dr. \eqno(3.9)$$

Indeed, since P(L(1, 0) > 0) = 1, the required result (3.2) follows from (3.7) and the continuous mapping theorem. It remains to prove (3.4)-(3.7), which are established in the Appendix. As for (3.7), it is clearly sufficient for the required result to show that the finite dimensional distributions converge in (3.7).

(c) Results (3.2) and (3.7) show that f̂(x) has an asymptotic distribution that is mixed normal and that this limit theory holds even in the presence of an endogenous regressor. The mixing variate in the limit distribution depends on the local time process L(1, 0), as follows from (3.7). Explicitly,

$$(n h^2)^{1/4}\big(\hat f(x) - f(x)\big) \to_D d_0\, d_1^{-1}\, N\, L^{-1/2}(1, 0), \eqno(3.10)$$

whenever nh² → ∞ and nh^{2(1+2γ)} → 0. Again, this is different from nonparametric regression with a stationary regressor. As noted in WP, in the nonstationary case, the amount of time spent by the process around any particular spatial
point is of order √n rather than n, so that the corresponding convergence rate in such regressions is now (√n h)^{1/2} = (nh²)^{1/4}, which requires that nh² → ∞. In effect, the local sample size is √n h in nonstationary regression involving integrated processes, rather than nh as in the case of stationary regression. The condition nh^{2(1+2γ)} → 0 is required to remove bias. This condition can be further relaxed if we add stronger smoothness conditions on f(x) and incorporate an explicit bias term in (3.10). A full development requires further conditions and a very detailed analysis, which we defer to later work. In the simplest case where κ = 0, u_t is a martingale difference sequence with E(u_t²) = σ_u², u_t is independent of x_t, K satisfies ∫K(y)dy = 1 and ∫yK(y)dy = 0 and has compact support, and f has continuous, bounded third derivatives, it is shown in the Appendix in Section 9 that

$$(n h^2)^{1/4}\Big[\hat f(x) - f(x) - \frac{h^2}{2} f''(x) \int y^2 K(y)\, dy\Big] \to_D N\Big(0, \sigma_u^2 \int K^2(s)\, ds\Big)\, L(1, 0)^{-1/2}, \eqno(3.11)$$

provided nh^{14} → 0 and nh² → ∞.

(d) As is clear from the second member of (3.7), the signal strength in the present kernel regression is O(Σ_{k=1}^n K[(x_k − x)/h]) = O(√n h), which gives the local sample size in this case, so that consistency requires that the bandwidth does not pass to zero too fast (viz., nh² → ∞). On the other hand, when h tends to zero slowly, estimation bias is manifest even in very large samples. Some illustrative simulations are reported in the next section.

(e) The limiting variance of the (randomly normalized) kernel estimator in (3.2) is simply a scalar multiple of the variance of the equilibrium error, viz., Eu²_{m_0}, rather than a conditional variance that depends on x_t ≈ x, as is commonly the case in kernel regression theory for stationary time series. This difference is explained by the fact that, under Assumption 2, u_t is stationary and, even though u_t is correlated with the shocks ε_t, ..., ε_{t−m_0+1} involved in generating the regressor x_t, the variation of u_t when x_t ≈ x is still measured by Eu²_{m_0} in the limit theory.
If Assumption 2 is relaxed to allow for some explicit nonstationarity in the conditional variance of u_t, then this may impact the limit theory. The manner in which the limit theory is affected depends on the form of the conditional variance function. For instance,
suppose the equilibrium error is u_t = g(x_t) u*_t, where u*_t satisfies Assumption 2 and is independent of x_t, and g is a positive continuous function, e.g. g(x) = 1/(1 + |x|^α) for some α > 0. In this case, under some additional regularity conditions, modifications to the arguments given in Proposition 7.2 show that the variance of the limit distribution is now given by

$$\sigma^2(x) = E(u^{*2}_{m_0})\, g(x)^2 \int K^2(s)\, ds \Big/ \int K(s)\, ds.$$

The limiting variance of the kernel estimator is then simply a scalar multiple of the variance of the equilibrium error, where the scalar depends on g(x).

(f) Theorem 3.1 gives a pointwise result at the value x, while the process x_t itself is recurrent and wanders over the whole real line. For fixed points x ≠ x′, the kernel cross product satisfies

$$\frac{1}{\sqrt n\, h} \sum_{t=1}^{n} K\Big(\frac{x_t - x}{h}\Big) K\Big(\frac{x_t - x'}{h}\Big) = o_p(1) \quad \text{for } x \ne x'. \eqno(3.12)$$

To show (3.12), note that if x_t/√t has a bounded density d_t(y), as in WP, we have

$$E\Big[K\Big(\frac{x_t - x}{h}\Big) K\Big(\frac{x_t - x'}{h}\Big)\Big] = \int K\Big(\frac{\sqrt t\, y - x}{h}\Big) K\Big(\frac{\sqrt t\, y - x'}{h}\Big) d_t(y)\, dy$$
$$= h\, t^{-1/2} \int K(y)\, K\big[y + (x - x')/h\big]\, d_t\big((hy + x)/\sqrt t\,\big)\, dy \le C\, h\, t^{-1/2} \int K(y)\, K\big[y + (x - x')/h\big]\, dy = o\big(h\, t^{-1/2}\big),$$

whenever x ≠ x′, h → 0 and t → ∞. Then

$$\frac{1}{\sqrt n\, h} \sum_{t=1}^{n} K\Big(\frac{x_t - x}{h}\Big) K\Big(\frac{x_t - x'}{h}\Big) = o_p\Big(\frac{1}{\sqrt n\, h}\, h \sum_{t=1}^{n} t^{-1/2}\Big) = o_p(1).$$

This result and Theorem 2.1 of WP give

$$\frac{1}{\sqrt n\, h}\begin{bmatrix} \sum_t K\big(\frac{x_t - x}{h}\big)^2 & \sum_t K\big(\frac{x_t - x}{h}\big) K\big(\frac{x_t - x'}{h}\big) \\ \sum_t K\big(\frac{x_t - x}{h}\big) K\big(\frac{x_t - x'}{h}\big) & \sum_t K\big(\frac{x_t - x'}{h}\big)^2 \end{bmatrix} \to_D L(1, 0)\begin{bmatrix} \int K(s)^2\, ds & 0 \\ 0 & \int K(s)^2\, ds \end{bmatrix}.$$
Following the same line of argument as in the proof of Theorem 3.2 of WP, it follows that in the special case where u_t is a martingale difference sequence independent of x_t, the regression ordinates (f̂(x), f̂(x′)) have a mixed normal limit distribution with diagonal covariance matrix. The ordinates are then asymptotically conditionally independent given the local time L(1, 0). Extension of this theory to the general case where u_t and x_t are dependent involves more complex limit theory and is left for later work.

(g) The error variance term Eu²_{m_0} in the limit distribution (3.2) may be estimated by a localized version of the usual residual based method. Indeed, letting

$$\hat\sigma_n^2 = \frac{\sum_{t=1}^{n} \big[y_t - \hat f(x)\big]^2 K_h(x_t - x)}{\sum_{t=1}^{n} K_h(x_t - x)},$$

we have the following theorem under minor additional conditions.

THEOREM 3.2. Suppose that, in addition to Assumptions 1-4, Eu⁸_{m_0} < ∞ and ∫K(s) f_1²(s, x) ds < ∞ for the given x. Then, for any h satisfying nh² → ∞ and h → 0,

$$\hat\sigma_n^2 \to_p E u^2_{m_0}. \eqno(3.13)$$

Furthermore, for any h satisfying nh² → ∞ and nh^{2(1+γ)} → 0,

$$(n h^2)^{1/4}\big(\hat\sigma_n^2 - E u^2_{m_0}\big) \to_D \sigma_1\, N\, L^{-1/2}(1, 0), \eqno(3.14)$$

where N and L(1, 0) are defined as in (3.7) and σ_1² = E(u²_{m_0} − Eu²_{m_0})² ∫K²(s)ds / ∫K(s)ds.

While the estimator σ̂_n² is constructed from the regression residuals y_t − f̂(x), it is also localized at x because of the action of the kernel function K_h(x_t − x). Note, however, that in the present case the limit theory for σ̂_n² is not localized at x. In particular, the limit of σ̂_n² is the unconditional variance Eu²_{m_0}, not a conditional variance, and the limit distribution of σ̂_n² given in (3.14) depends only on the local time L(1, 0) of the limit process at the origin, not on the precise value of x. The explanation is that conditioning on the neighborhood x_t ≈ x is equivalent to x_t/√n ≈ x/√n, or x_t/√n ≈ 0, which translates into the local time of the limit process of x_t at the origin, irrespective of the given value of x.
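A minimal sketch of the localized variance estimator of Theorem 3.2, assuming a Gaussian kernel (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def local_error_variance(x0, x, y, h):
    """Localized residual-based estimate of E u^2_{m0} at x0, as in
    Theorem 3.2: a kernel-weighted average of the squared residuals
    (y_t - f_hat(x0))^2."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    fhat = np.sum(w * y) / np.sum(w)         # kernel regression estimate at x0
    return np.sum(w * (y - fhat) ** 2) / np.sum(w)

# Toy check: with iid N(0, 0.01) errors the estimate should be near 0.01
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 5001)
y = rng.normal(0.0, 0.1, size=x.size)
print(local_error_variance(0.0, x, y, h=0.2))
```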
For the same reason, as discussed in Remark (e) above, the limit distribution of the kernel regression estimator given in (3.2) depends on the variance Eu²_{m_0}. However, in the more general context where there is
nonstationary conditional heterogeneity, the limit of σ̂_n² may be correspondingly affected. For instance, in the case considered there where u_t = g(x_t) u*_t, u*_t satisfies Assumption 2, and g is a positive continuous function, we find that σ̂_n² →_p Eu*²_{m_0} g(x)².

4 Simulations

This section reports the results of a simulation experiment investigating the finite sample performance of the kernel regression estimator. The generating mechanism follows (2.1) and has the explicit form

$$y_t = f(x_t) + u_t, \qquad x_t = x_{t-1} + \epsilon_t, \qquad u_t = (\lambda_t + \theta \epsilon_t)\big/(1 + \theta^2)^{1/2},$$

where (ε_t, λ_t) are iid N(0, σ² I_2). The following two regression functions were used in the simulations:

$$f_A(x) = \sum_{j=1}^{\infty} \frac{(-1)^{j+1} \sin(j\pi x)}{j^2}, \qquad f_B(x) = x^3.$$

The first function corresponds (up to a scale factor) to the function used in Hall and Horowitz (2005) and is truncated at j = 4 for computation. Figs. 1 and 2 graph these functions (the solid lines) and the mean simulated kernel estimates (broken lines) over the intervals [0, 1] and [−1, 1] for kernel estimates of f_A and f_B, respectively. Bias, variance and mean squared error for the estimates were computed on the grid of values {x = 0.01k : k = 0, 1, ..., 100} for [0, 1] and {x = −1 + 0.02k : k = 0, 1, ..., 100} for [−1, 1], based on 10,000 replications. Simulations were performed for θ = 1 (weak endogeneity) and θ = 100 (strong endogeneity), with σ = 0.1, and for the sample size n = 500. A Gaussian kernel was used with bandwidths h = n^{−10/18}, n^{−1/2}, n^{−1/3}, n^{−1/5}.

Table 1 shows the performance of the regression estimate f̂ computed over the various bandwidths, h, and endogeneity parameters, θ, for the two models. Since x_t is recurrent and wanders over the real line, some simulations are inevitably thin in subsets of the chosen domains, and this inevitably affects performance because the local sample size is small. In both models the degree of endogeneity (θ) in the regressor has a negligible effect on the properties of the kernel regression estimate when h is small.
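The generating mechanism above can be sketched as follows for a single replication, taking the regression function to be f_B (function names are illustrative, not from the paper):

```python
import numpy as np

def simulate(n=500, theta=1.0, sigma=0.1, seed=0):
    """One draw from the simulation design: y_t = f_B(x_t) + u_t with
    f_B(x) = x^3, x_t = x_{t-1} + eps_t, u_t = (lam_t + theta*eps_t)/sqrt(1+theta^2),
    where (eps_t, lam_t) are iid N(0, sigma^2 I_2); theta controls endogeneity."""
    rng = np.random.default_rng(seed)
    eps = sigma * rng.standard_normal(n)
    lam = sigma * rng.standard_normal(n)
    x = np.cumsum(eps)                              # random-walk regressor
    u = (lam + theta * eps) / np.sqrt(1.0 + theta ** 2)
    return x, x ** 3 + u

def nw(x0, x, y, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

x, y = simulate(n=500, theta=100.0)                 # strong endogeneity
print(nw(0.0, x, y, h=500 ** (-1 / 3)))             # estimate of f_B(0)
```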
It is also clear that estimation bias can be substantial, particularly for model A with bandwidth h = n^{−1/5},
corresponding to the conventional rate for stationary series. Bias is substantially reduced for the smaller bandwidths h = n^{−1/2}, n^{−1/3} at the cost of an increase in dispersion, and is further reduced when h = n^{−10/18}, although this choice and h = n^{−1/2} violate the condition nh² → ∞ of Theorem 3.1. The downward bias in the case of f̂_A over the domain [0, 1] appears to be due to the periodic nature of the function f_A and the effects of smoothing over x values for which the function is negative. The bias in f̂_B is similarly towards the origin over the whole domain [−1, 1]. The performance characteristics seem to be little affected by the magnitude of the endogeneity parameter θ. For model A, finite sample performance in terms of MSE seems to be optimized for h close to n^{−1/2}. For model B, h = n^{−1/5} delivers the best MSE performance, largely because of the substantial gains in variance reduction with the larger bandwidth that occur in this case. Thus, bias reduction through choice of a very small bandwidth may be important in overall finite sample performance for some regression functions but much less so for other functions. Of course, if h → 0 so fast that nh² → 0, then the signal Σ_{t=1}^n K((x_t − x)/h) fails to diverge and the kernel estimate is not consistent.

Figs. 1 and 2 show results for the Monte Carlo approximations to E(f̂_A(x)) and E(f̂_B(x)) corresponding to bandwidths h = n^{−1/2} (broken line), h = n^{−1/3} (dotted line), and h = n^{−1/5} (dashed and dotted line) for θ = 100. Figs. 3 and 4 show the Monte Carlo approximations to E(f̂_A(x)) and E(f̂_B(x)) together with a 95% pointwise estimation band. As in Hall and Horowitz (2005), these bands connect points f(x_j) ± δ_j, where each δ_j is chosen so that the interval [f(x_j) − δ_j, f(x_j) + δ_j] contains 95% of the 10,000 simulated values of f̂(x_j) for models A and B, respectively. Apparently, the bands are quite wide, reflecting the much slower rate of convergence of the kernel estimate f̂(x) in the nonstationary case.
In particular, since x_t spends only an order √n amount of its time in the neighborhood of any specific point, the effective sample size for pointwise estimation purposes is √n h. When h = n^{−1/3}, it follows from Theorem 3.1 that the convergence rate is (nh²)^{1/4} = n^{1/12}, which is far slower than the rate (nh)^{1/2} = n^{2/5} attained in conventional kernel regression with a stationary regressor and bandwidth h = n^{−1/5}.
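The pointwise interval construction implied by Theorems 3.1 and 3.2, which underlies the coverage experiments reported below, can be sketched as follows, assuming a Gaussian kernel K(s) = e^{−s²/2} so that μ_K = √(2π) and μ_{K²} = √π (the function name and toy data are illustrative):

```python
import numpy as np
from statistics import NormalDist

def pointwise_ci(x0, x, y, h, alpha=0.05):
    """Asymptotic 100(1-alpha)% pointwise interval for f(x0):
    f_hat(x0) +/- z_{alpha/2} * (sigma_hat^2 mu_{K^2}/mu_K / sum_t K((x_t-x0)/h))^{1/2}."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # K((x_t - x0)/h)
    fhat = np.sum(w * y) / np.sum(w)                 # f_hat(x0)
    sig2 = np.sum(w * (y - fhat) ** 2) / np.sum(w)   # localized sigma_hat_n^2
    mu_K, mu_K2 = np.sqrt(2.0 * np.pi), np.sqrt(np.pi)
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)      # z_{alpha/2}
    half = z * np.sqrt(sig2 * (mu_K2 / mu_K) / np.sum(w))
    return fhat - half, fhat + half
```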
Table 1
Model A: f_A(x) = Σ_{j=1}^4 (−1)^{j+1} sin(jπx)/j². Model B: f_B(x) = x³.
Bias, standard deviation (Std) and MSE of f̂ are reported for θ = 1, 100 and bandwidths h = n^{−10/18}, n^{−1/2}, n^{−1/3}, n^{−1/5}. [Numerical entries not recoverable from the source text.]

Using Theorems 3.1 and 3.2, an asymptotic 100(1 − α)% level confidence interval for f(x) is given by

$$\hat f(x) \pm z_{\alpha/2}\left(\frac{\hat\sigma_n^2\, \mu_{K^2}/\mu_K}{\sum_{t=1}^{n} K\big((x_t - x)/h\big)}\right)^{1/2},$$

where μ_{K²} = ∫K²(s)ds, μ_K = ∫K(s)ds, and z_{α/2} = Φ^{−1}(1 − α/2) with Φ the standard normal cdf. Figs. 5 and 6 show the empirical coverage probabilities of these pointwise asymptotic confidence intervals for f_A and f_B over 100 equispaced points on the domains [0, 1] and [−1, 1], using a standard normal kernel, various bandwidths as shown, and setting α = 0.05 and n = 500. For both functions the coverage rates are closer to the nominal level of 95% and more uniform over the respective domains for the smaller
bandwidth choices. For function f_A there is evidence of substantial undercoverage in the interior of the [0, 1] interval, where the nonparametric estimator was seen to be biased (Fig. 1) for larger bandwidths. For function f_B, the undercoverage is also substantial for the larger bandwidths, but in this case away from the origin, while at the origin there is some evidence of overcoverage for the larger bandwidths. For both functions, the smaller bandwidth choice seems to give more uniform performance; the coverage probability is around 90% and is close enough to the nominal level to be satisfactory.

5 Conclusion

The two main results in the present paper have important implications for applications. First, there is no inverse problem in structural models of nonlinear cointegration of the form (2.1) where the regressor is an endogenously generated integrated or near integrated process. This result reveals a major simplification of structural nonparametric regression in cointegrating models, avoiding the need for instrumentation and completely eliminating ill-posed functional equation inversions. Second, functional estimation of (2.1) is straightforward in practice and may be accomplished by standard kernel methods. These methods yield consistent estimates that have a mixed normal limit distribution, thereby validating conventional methods of inference in the nonstationary nonparametric setting. The results open up some interesting possibilities for functional regression in empirical research with integrated and near integrated processes.

In addition to many possible empirical applications of the methods, there are some interesting extensions of the ideas presented here to other useful models involving nonlinear functions of integrated processes. In particular, additive nonlinear cointegration models (c.f.
Schienle, 2008) and partial linear cointegration models may be treated in a similar way to (2.1), but multiple non-additive regression models present difficulties arising from the nonrecurrence of the limit processes in high dimensions (c.f. Park and Phillips, 2000). There are also issues of specification testing, functional form tests, and cointegration tests, which may now be addressed using these methods. It will also be of interest to consider the properties of instrumental variable procedures in the present nonstationary context. We plan to report on some of these extensions in later work.
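A numerical aside (our own illustration, not drawn from the paper) helps to show why there is signal, rather than an ill-posed inversion, at each spatial point: for an integrated regressor, the kernel weight sum at a fixed point grows at the local-time rate $\sqrt n\,h$, as formalized in Proposition 7.2 below. The Gaussian random-walk design, standard normal kernel, evaluation point $x=0$, and bandwidth $h=n^{-1/4}$ in this sketch are all assumptions for the illustration.

```python
import numpy as np

def norm_kernel_sum(n, h, rng):
    # computes (n h^2)^(-1/2) * sum_{k<=n} K((x_k - x)/h) at x = 0,
    # for a Gaussian random walk x_k and the standard normal kernel K
    x = np.cumsum(rng.standard_normal(n))
    k_weights = np.exp(-0.5 * (x / h) ** 2) / np.sqrt(2.0 * np.pi)
    return k_weights.sum() / np.sqrt(n * h ** 2)

rng = np.random.default_rng(7)
reps = 200
means = {n: float(np.mean([norm_kernel_sum(n, n ** -0.25, rng) for _ in range(reps)]))
         for n in (400, 1600)}
# the Monte Carlo means stabilize rather than diverging or vanishing as n grows
```

For this design the normalized sum has mean approximately $\sqrt{2/\pi}\approx 0.80$ at every $n$ (the mean of Brownian local time at the origin, times $\int K=1$), so the kernel signal at a fixed point neither vanishes nor explodes; this local-time factor plays the role that the stationary density plays in standard kernel asymptotics.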
6 Proof of Theorem 3.1

As shown in Remark (b), the proof of the theorem essentially amounts to proving (3.4)-(3.7). To do so, we will make use of various subsidiary results which are proved here and in the next section. First, it is convenient to introduce the following definitions and notation. If $\alpha_n^{(1)},\alpha_n^{(2)},\ldots,\alpha_n^{(k)}$ ($n\ge 1$) are random elements of $D[0,1]$, we will understand the condition
$$\big(\alpha_n^{(1)},\alpha_n^{(2)},\ldots,\alpha_n^{(k)}\big)\to_D\big(\alpha^{(1)},\alpha^{(2)},\ldots,\alpha^{(k)}\big)$$
to mean that, for all $\alpha^{(1)},\alpha^{(2)},\ldots,\alpha^{(k)}$-continuity sets $A_1,A_2,\ldots,A_k$,
$$P\big(\alpha_n^{(1)}\in A_1,\alpha_n^{(2)}\in A_2,\ldots,\alpha_n^{(k)}\in A_k\big)\to P\big(\alpha^{(1)}\in A_1,\alpha^{(2)}\in A_2,\ldots,\alpha^{(k)}\in A_k\big)$$
[see Billingsley (1968, Theorem 3.1) or Hall (1977)]. $D[0,1]^k$ will be used to denote $D[0,1]\times\cdots\times D[0,1]$, the $k$-times coordinate product space of $D[0,1]$. We still use $\Rightarrow$ to denote weak convergence on $D[0,1]$.

In order to prove (3.7), we use the following lemma.

LEMMA 6.1. Suppose that $\{\mathcal{F}_t\}_{t\ge 0}$ is an increasing sequence of $\sigma$-fields, $q(t)$ is a process that is $\mathcal{F}_t$-measurable for each $t$ and continuous with probability 1, $Eq^2(t)<\infty$ and $q(0)=0$. Let $\psi(t)$, $t\ge 0$, be a process that is nondecreasing and continuous with probability 1 and satisfies $\psi(0)=0$ and $E\psi^2(t)<\infty$. Let $\xi$ be a random variable which is $\mathcal{F}_t$-measurable for each $t\ge 0$. If, for any $\gamma_j\ge 0$, $j=1,2,\ldots,r$, and any $0\le s<t\le t_0<t_1<\cdots<t_r<\infty$,
$$E\Big(e^{-\sum_{j=1}^{r}\gamma_j[\psi(t_j)-\psi(t_{j-1})]}\,\big[q(t)-q(s)\big]\;\Big|\;\mathcal{F}_s\Big)=0,\quad a.s.,$$
$$E\Big(e^{-\sum_{j=1}^{r}\gamma_j[\psi(t_j)-\psi(t_{j-1})]}\,\big\{[q(t)-q(s)]^2-[\psi(t)-\psi(s)]\big\}\;\Big|\;\mathcal{F}_s\Big)=0,\quad a.s.,$$
then the finite-dimensional distributions of the process $(q(t),\xi)_{t\ge 0}$ coincide with those of the process $(W[\psi(t)],\xi)_{t\ge 0}$, where $W(s)$ is a standard Brownian motion with $EW^2(s)=s$, independent of $\psi(t)$.

Proof. This lemma is an extension of Theorem 3.1 of Borodin and Ibragimov (1995, page 14) and the proof follows the same lines as in their work.
Indeed, by using the fact that $\xi$ is $\mathcal{F}_t$-measurable for each $t\ge 0$, it follows from the same arguments as in the proof of Theorem 3.1 of Borodin and Ibragimov (1995) that, for any $t_0<t_1<\cdots<t_r<\infty$, $\alpha_j\in\mathbb{R}$
and $s\in\mathbb{R}$,
$$Ee^{i\sum_{j=1}^{r}\alpha_j[q(t_j)-q(t_{j-1})]+is\xi}
=E\Big[e^{i\sum_{j=1}^{r-1}\alpha_j[q(t_j)-q(t_{j-1})]+is\xi}\,E\big(e^{i\alpha_r[q(t_r)-q(t_{r-1})]}\,\big|\,\mathcal{F}_{t_{r-1}}\big)\Big]$$
$$=E\Big[e^{-\frac{\alpha_r^2}{2}[\psi(t_r)-\psi(t_{r-1})]}\,e^{i\sum_{j=1}^{r-1}\alpha_j[q(t_j)-q(t_{j-1})]+is\xi}\Big]
=\cdots=Ee^{-\sum_{j=1}^{r}\frac{\alpha_j^2}{2}[\psi(t_j)-\psi(t_{j-1})]+is\xi},$$
which yields the stated result.

By virtue of Lemma 6.1, we now obtain the proof of (3.7). Technical details of some subsidiary results that are used in this proof are given in the next section. Set
$$\zeta_n(t)=\frac{1}{\sqrt n}\sum_{k=1}^{[nt]}\epsilon_k,\qquad
\psi_n(t)=\frac{1}{d_0^2\,(nh^2)^{1/2}}\sum_{k=1}^{[nt]}u_k^2K^2[(x_k-x)/h],$$
$$S_n(t)=\frac{1}{d_0\,(nh^2)^{1/4}}\sum_{k=1}^{[nt]}u_kK[(x_k-x)/h],$$
for $0\le t\le 1$, where $d_0$ is defined as in (3.7). We will prove in Propositions 7.1 and 7.2 that $\zeta_n(t)\Rightarrow W(t)$ and $\psi_n(t)\Rightarrow\psi(t)$ on $D[0,1]$, where $\psi(t):=L(t,0)$. Furthermore, we will prove in Proposition 7.4 that $\{S_n(t)\}_{n\ge 1}$ is tight on $D[0,1]$. These facts imply that $\{S_n(t),\psi_n(t),\zeta_n(t)\}_{n\ge 1}$ is tight on $D[0,1]^3$. Hence, for each $\{n'\}\subseteq\{n\}$, there exists a subsequence $\{n''\}\subseteq\{n'\}$ such that
$$\big\{S_{n''}(t),\psi_{n''}(t),\zeta_{n''}(t)\big\}\to_d\big\{\eta(t),\psi(t),W(t)\big\}\qquad(6.1)$$
on $D[0,1]^3$, where $\eta(t)$ is a process continuous with probability one by noting (7.25) below.

Write $\mathcal{F}_s=\sigma\{W(t),0\le t\le 1;\ \eta(t),0\le t\le s\}$. It is readily seen that $\mathcal{F}_s$ is increasing in $s$ and $\eta(s)$ is $\mathcal{F}_s$-measurable for each $0\le s\le 1$. Also note that $\psi(t)$ (for any fixed $t\in[0,1]$) is $\mathcal{F}_s$-measurable for each $0\le s\le 1$. If we prove that, for any $0\le s<t\le 1$,
$$E\big(\big[\eta(t)-\eta(s)\big]\,\big|\,\mathcal{F}_s\big)=0,\quad a.s.,\qquad(6.2)$$
$$E\big(\big\{[\eta(t)-\eta(s)]^2-[\psi(t)-\psi(s)]\big\}\,\big|\,\mathcal{F}_s\big)=0,\quad a.s.,\qquad(6.3)$$
then it follows from Lemma 6.1 that the finite-dimensional distributions of $(\eta(t),\psi(1))$ coincide with those of $\{N\,L^{1/2}(t,0),\,L(1,0)\}$, where $N$ is a normal variate independent of
19 L(t, 0). Te result (3.7) terefore follows, since η(t) does not depend on te coice of te subsequence. Let 0 t 0 < t 2 <... < t r = 1, r be an arbitrary integer and G(...) be an arbitrary bounded measurable function. In order to prove (6.2) and (6.3), it suffices to sow tat E[η(t j ) η(t j 1 )] G[η(t 0 ),..., η(t j 1 ); W (t 0 ),..., W (t r )] = 0, (6.4) E { [η(t j ) η(t j 1 )] 2 [ψ(t j ) ψ(t j 1 )] } G[η(t 0 ),..., η(t j 1 ); W (t 0 ),..., W (t r )] = 0. (6.5) Recall (6.1). Witout loss of generality, we assume te sequence {n } is just {n} itself. Since S n (t), S 2 n(t) and ψ n (t) for eac 0 t 1 are uniformly integrable (see Proposition 7.3), te statements (6.4) and (6.5) will follow if prove E[S n (t j ) S n (t j 1 )] G[...] 0, (6.6) E { [S n (t j ) S n (t j 1 )] 2 [ψ n (t j ) ψ n (t j 1 )] } G[...] 0, (6.7) were G[...] = G[S n (t 0 ),..., S n (t j 1 ); ζ n (t 0 ),..., ζ n (t r )] (see, e.g., Teorem 5.4 of Billingsley, 1968). Furtermore, by using similar arguments to tose in te proofs of Lemma 5.4 and 5.5 in Borodin and Ibragimov (1995), we may coose G(y 0, y 1,..., y j 1 ; z 0, z 1,..., z r ) = exp { i ( j 1 λ k y k + Terefore, by independence of ɛ k, we only need to sow tat { [nt j ] E k=[nt j 1 ]+1 = o[(n 2 ) 1/4 ], { [nt [ j ] E k=[nt j 1 ]+1 = o[(n 2 ) 1/2 ], k=0 } u k K[(x k x)/]e iµ j [ζn(t j) ζ n(t j 1 )]+iχ(t j 1 ) u k K[(x k x)/] ] [nt j] 2 k=[nt j 1 ]+1 r )} µ k z k. k=0 (6.8) } u 2 k K 2 [(x k x)/] e iµ j [ζn(t j) ζ n(t j 1 )]+iχ(t j 1 ) were χ(s) = χ(x 1,..., x s, u 1,..., u s ), a functional of x 1,..., x s, u 1,..., u s, and µ j = r k=j µ k. Note tat χ(s) depends only on (..., ɛ s 1, ɛ s ) and λ 1,..., λ s, and we may write x t = t ρ t j η j = j=1 = ρ t s x s + t j=s+1 t j=1 ρ t j ρ t j j i= s i= ɛ i φ j i ɛ i φ j i + t j=s+1 ρ t j j i=s+1 ɛ i φ j i := x s,t + x s,t, (6.10) 19 (6.9)
where $x_{s,t}'$ depends only on $(\ldots,\epsilon_{s-1},\epsilon_s)$ and
$$x_{s,t}''=\sum_{j=1}^{t-s}\rho^{t-s-j}\sum_{i=1}^{j}\epsilon_{i+s}\phi_{j-i}=\sum_{i=s+1}^{t}\epsilon_i\sum_{j=0}^{t-i}\rho^{t-i-j}\phi_j.$$
Now, by independence of $\epsilon_k$ again and conditioning arguments, it suffices to show that, for any $\mu$,
$$\sup_{y,\,0\le s<m\le n}\Big|E\Big\{\sum_{k=s+1}^{m}u_kK[(y+x_{s,k}'')/h]\,e^{i\mu\sum_{i=1}^{m}\epsilon_i/\sqrt n}\Big\}\Big|=o[(nh^2)^{1/4}],\qquad(6.11)$$
$$\sup_{y,\,0\le s<m\le n}\Big|E\Big(\Big\{\Big[\sum_{k=s+1}^{m}u_kK[(y+x_{s,k}'')/h]\Big]^2-\sum_{k=s+1}^{m}u_k^2K^2[(y+x_{s,k}'')/h]\Big\}\,e^{i\mu\sum_{i=1}^{m}\epsilon_i/\sqrt n}\Big)\Big|=o[(nh^2)^{1/2}].\qquad(6.12)$$
This follows from Proposition 7.5. The proof of (3.7) is now complete.

We next prove (3.4)-(3.6). In fact, it follows from Proposition 7.3 that, uniformly in $n$, $E\Theta_{1n}^2/(nh^2)^{1/2}=d_0^2\,ES_n^2(1)\le C$. This yields (3.4) by Markov's inequality. It follows from Claim 1 in the proof of Proposition 7.2 that $x_t/(\sqrt n\,\phi)$ satisfies Assumption 2.3 of WP. The same argument as in the proof of (5.18) in WP yields (3.5). As for (3.6), it follows from Proposition 7.2, together with the fact that $P(L(t,0)>0)=1$. The proof of Theorem 3.1 is now complete.

7 Some Useful Subsidiary Propositions

In this section we will prove the following propositions required in the proof of Theorem 3.1. Notation will be the same as in the previous section except when explicitly mentioned.

PROPOSITION 7.1. We have
$$\zeta_n(t)\Rightarrow W(t)\quad\text{and}\quad\zeta_n'(t):=\frac{1}{\sqrt n\,\phi}\sum_{k=1}^{[nt]}\rho^{[nt]-k}\eta_k\Rightarrow J_\kappa(t)\quad\text{on }D[0,1],\qquad(7.1)$$
where $\{W(t),t\ge 0\}$ is a standard Brownian motion and $J_\kappa(t)$ is defined as in (3.8).

Proof. The first statement of (7.1) is well-known. In order to prove $\zeta_n'(t)\Rightarrow J_\kappa(t)$, for each fixed $l\ge 1$, put
$$Z_{1j}^{(l)}=\sum_{k=0}^{l}\phi_k\epsilon_{j-k}\quad\text{and}\quad Z_{2j}^{(l)}=\sum_{k=l+1}^{\infty}\phi_k\epsilon_{j-k}.$$
It is readily seen that, for any $m\ge 1$,
$$\sum_{j=1}^{m}\rho^{m-j}Z_{1j}^{(l)}=\sum_{j=1}^{m}\rho^{m-j}\sum_{k=0}^{l}\phi_k\epsilon_{j-k}=\Big(\sum_{k=0}^{l}\rho^{-k}\phi_k\Big)\sum_{j=1}^{m}\rho^{m-j}\epsilon_j+R(m,l),\quad\text{say},$$
where
$$R(m,l)=\sum_{s=1}^{l}\rho^{m+s-1}\epsilon_{1-s}\sum_{j=s}^{l}\rho^{-j}\phi_j-\sum_{s=0}^{l-1}\rho^{s}\epsilon_{m-s}\sum_{j=s+1}^{l}\rho^{-j}\phi_j.$$
Therefore, for fixed $l\ge 1$,
$$\zeta_n'(t)=\frac{1}{\phi}\Big(\sum_{k=0}^{l}\rho^{-k}\phi_k\Big)\frac{1}{\sqrt n}\sum_{j=1}^{[nt]}\rho^{[nt]-j}\epsilon_j+\frac{1}{\sqrt n\,\phi}R([nt],l)+\frac{1}{\sqrt n\,\phi}\sum_{j=1}^{[nt]}\rho^{[nt]-j}Z_{2j}^{(l)}.\qquad(7.2)$$
Note that $\frac{1}{\sqrt n}\sum_{j=1}^{[nt]}\rho^{[nt]-j}\epsilon_j\Rightarrow J_\kappa(t)$ [see Chan and Wei (1987) and Phillips (1987)] and $\sum_{k=0}^{l}\rho^{-k}\phi_k\to\phi$ as $n\to\infty$ first and then $l\to\infty$. By virtue of Theorem 4.1 of Billingsley (1968, page 25), to prove $\zeta_n'(t)\Rightarrow J_\kappa(t)$, it suffices to show that, for any $\delta>0$,
$$\limsup_{n\to\infty}P\Big\{\sup_{0\le t\le 1}|R([nt],l)|\ge\delta\sqrt n\Big\}=0,\qquad(7.3)$$
for fixed $l\ge 1$, and
$$\lim_{l\to\infty}\limsup_{n\to\infty}P\Big\{\sup_{0\le t\le 1}\Big|\sum_{j=1}^{[nt]}\rho^{[nt]-j}Z_{2j}^{(l)}\Big|\ge\delta\sqrt n\Big\}=0.\qquad(7.4)$$
Recall $\lim_{n\to\infty}\rho^n=e^\kappa$, which yields $e^{-|\kappa|}/2\le\rho^k\le 2e^{|\kappa|}$ for all $-n\le k\le n$ and $n$ sufficiently large. The result (7.3) holds since $\sum_{k=0}^{\infty}|\phi_k|<\infty$, and hence, as $n\to\infty$,
$$\frac{1}{\sqrt n}\sup_{0\le t\le 1}|R([nt],l)|\le\frac{C\,l}{\sqrt n}\max_{-l\le j\le n}|\epsilon_j|\sum_{j=0}^{l}|\phi_j|\to_P0.$$
We next prove (7.4). Noting
$$\sum_{j=1}^{m}\rho^{m-j}Z_{2j}^{(l)}=\sum_{k=l+1}^{\infty}\phi_k\sum_{j=1}^{m}\rho^{m-j}\epsilon_{j-k},$$
for any $m\ge 1$, by applying the Hölder inequality and the independence of $\epsilon_k$, we have
$$E\max_{1\le m\le n}\Big(\sum_{j=1}^{m}\rho^{m-j}Z_{2j}^{(l)}\Big)^2\le\Big(\sum_{k=l+1}^{\infty}|\phi_k|\Big)\sum_{k=l+1}^{\infty}|\phi_k|\,E\max_{1\le m\le n}\Big(\sum_{j=1}^{m}\rho^{m-j}\epsilon_{j-k}\Big)^2\le C\,n\Big(\sum_{k=l+1}^{\infty}|\phi_k|\Big)^2.$$
Result (7.4) now follows immediately from the Markov inequality and $\sum_{k=l+1}^{\infty}|\phi_k|\to 0$ as $l\to\infty$. The proof of Proposition 7.1 is complete.

PROPOSITION 7.2. For any $h$ satisfying $h\to 0$ and $nh^2\to\infty$, we have
$$\frac{1}{(nh^2)^{1/2}}\sum_{k=1}^{[nt]}K^i[(x_k-x)/h]\Rightarrow d_i\,L(t,0),\quad i=1,2,\qquad(7.5)$$
$$\frac{1}{(nh^2)^{1/2}}\sum_{k=1}^{[nt]}K^2[(x_k-x)/h]\,u_k^2\Rightarrow d_0^2\,L(t,0),\qquad(7.6)$$
on $D[0,1]$, where $d_i=\phi^{-1}\int K^i(s)\,ds$, $i=1,2$, $d_0^2=\phi^{-1}Eu_{m_0}^2\int K^2(s)\,ds$, and $L(t,s)$ is the local time process of the Gaussian diffusion process $\{J_\kappa(t),t\ge 0\}$ defined by (3.8), in which $\{W(t),t\ge 0\}$ is a standard Brownian motion.

PROPOSITION 7.3. For any fixed $0\le t\le 1$, $S_n(t)$, $S_n^2(t)$ and $\psi_n(t)$, $n\ge 1$, are uniformly integrable.

PROPOSITION 7.4. $\{S_n(t)\}_{n\ge 1}$ is tight on $D[0,1]$.

PROPOSITION 7.5. Results (6.11) and (6.12) hold true for any $u\in\mathbb{R}$.

In order to prove Propositions 7.2-7.5, we need some preliminaries. Let $r(x)$ and $r_1(x)$ be bounded functions such that $\int(|r(x)|+|r_1(x)|)\,dx<\infty$. We first calculate the values of $I_{k,l}^{(s)}$ and $II_k^{(s)}$ defined by
$$I_{k,l}^{(s)}=E\Big[r(x_{s,k}''/h)\,r_1(x_{s,l}''/h)\,g(u_k)\,g_1(u_l)\exp\Big\{i\mu\sum_{j=1}^{l}\epsilon_j/\sqrt n\Big\}\Big],$$
$$II_k^{(s)}=E\Big[r(x_{s,k}''/h)\,g(u_k)\exp\Big\{i\mu\sum_{j=1}^{k}\epsilon_j/\sqrt n\Big\}\Big],\qquad(7.7)$$
under different settings of $g(x)$ and $g_1(x)$, where $x_{s,k}''$ is defined as in (6.10). We have the following lemmas, which will play a core role in the proofs of the main results. We always assume $k<l$ and let $C$ denote a constant not depending on $k$, $l$ and $n$, which may be different from line to line.

LEMMA 7.1. Suppose $\int|\hat r(\lambda)|\,d\lambda<\infty$, where $\hat r(t)=\int e^{itx}r(x)\,dx$.

(a) If $E|g(u_k)|<\infty$, then, for all $k\ge s+1$,
$$|II_k^{(s)}|\le C\,h/\sqrt{k-s}.\qquad(7.8)$$
(b) If $Eg(u_k)=0$ and $Eg^2(u_k)<\infty$, then, for all $k\ge s+1$,
$$|II_k^{(s)}|\le C\big[(k-s)^{-2}+h/(k-s)\big].\qquad(7.9)$$

LEMMA 7.2. Suppose that $\int|\hat r(\lambda)|\,d\lambda<\infty$ and $\int|\hat r_1(\lambda)|\,d\lambda<\infty$, where $\hat r(t)=\int e^{itx}r(x)\,dx$ and $\hat r_1(t)=\int e^{itx}r_1(x)\,dx$. Suppose that $Eg(u_l)=Eg_1(u_k)=0$ and $Eg^2(u_{m_0})+Eg_1^2(u_{m_0})<\infty$. Then, for any $\epsilon>0$, there exists an $n_0>0$ such that, for all $n\ge n_0$, all $l-k\ge 1$ and all $k\ge s+1$,
$$|I_{k,l}^{(s)}|\le C\big[\epsilon(l-k)^{-2}+h(l-k)^{-1}\big]\big[(k-s)^{-2}+h/\sqrt{k-s}\big],\qquad(7.10)$$
where $t/2$ is interpreted as its integer part $\lfloor t/2\rfloor$ throughout.

We only prove Lemma 7.2 with $s=0$. The proofs of Lemma 7.1 and of Lemma 7.2 with $s\ne 0$ are the same and hence the details are omitted.

The proof of Lemma 7.2. Write $x_k''=x_{0,k}''$ and $I_{k,l}=I_{k,l}^{(0)}$. As $\int(|\hat r(t)|+|\hat r_1(t)|)\,dt<\infty$, we have
$$r(x)=\frac{1}{2\pi}\int e^{-ixt}\hat r(t)\,dt\quad\text{and}\quad r_1(x)=\frac{1}{2\pi}\int e^{-ixt}\hat r_1(t)\,dt.$$
This yields that
$$I_{k,l}=\frac{1}{(2\pi)^2}\int\!\!\int E\Big\{e^{-itx_k''/h}\,e^{-i\lambda x_l''/h}\,g(u_k)\,g_1(u_l)\,e^{i\mu\sum_{j=1}^{l}\epsilon_j/\sqrt n}\Big\}\,\hat r(t)\,\hat r_1(\lambda)\,dt\,d\lambda.$$
Define $\sum_{j=k}^{l}=0$ if $l<k$, and put $\theta(k)=\sum_{j=0}^{k}\rho^{-j}\phi_j$ and $a_{s,q}=\rho^{s-q}\theta(s-q)$. Since
$$x_l''=\sum_{q=1}^{l}\epsilon_q\sum_{j=0}^{l-q}\rho^{l-q-j}\phi_j=\Big(\sum_{q=1}^{k}+\sum_{q=k+1}^{l-m_0}+\sum_{q=l-m_0+1}^{l}\Big)\epsilon_q\,a_{l,q},$$
it follows from independence of the $\epsilon_k$'s that, for $l-k\ge m_0+1$,
$$|I_{k,l}|\le\int\big|Ee^{iz^{(2)}/h}\big|\,\big|E\big\{e^{iz^{(3)}/h}g_1(u_l)\big\}\big|\Big(\int\big|E\big\{e^{iz^{(1)}/h}g(u_k)\big\}\big|\,|\hat r(t)|\,dt\Big)|\hat r_1(\lambda)|\,d\lambda,\qquad(7.11)$$
where
$$z^{(1)}=\sum_{q=1}^{k}\epsilon_q\big(\lambda a_{l,q}-ta_{k,q}+uh/\sqrt n\big),\qquad
z^{(2)}=\sum_{q=k+1}^{l-m_0}\epsilon_q\big(\lambda a_{l,q}+uh/\sqrt n\big),$$
$$z^{(3)}=\sum_{q=l-m_0+1}^{l}\epsilon_q\big(\lambda a_{l,q}+uh/\sqrt n\big).$$
We may take $n$ sufficiently large so that $uh/\sqrt n$ is as small as required. Without loss of generality we assume $u=0$ in the following proof, for convenience of notation. We first show that, for all $k$ sufficiently large,
$$\Lambda(\lambda,k):=\int\big|E\big\{e^{iz^{(1)}/h}g(u_k)\big\}\big|\,|\hat r(t)|\,dt\le C\big(k^{-2}+h/\sqrt k\big).\qquad(7.12)$$
To estimate $\Lambda(\lambda,k)$, we need some preliminaries. Recall $\rho=1+\kappa/n$. For any given $s$, we have $\lim_{n\to\infty}\theta(s)=\sum_{j=0}^{s}\phi_j$. This fact implies that $k_0$ can be taken sufficiently large such that, whenever $n$ is sufficiently large,
$$\sum_{j=k_0/2+1}^{\infty}|\phi_j|\le e^{-|\kappa|}\,|\phi|/4,\qquad(7.13)$$
and hence, for all $k_0\le s\le n$ and $1\le q\le s/2$,
$$|a_{s,q}|\ge 2^{-1}e^{-|\kappa|}\Big(|\theta(k_0/2)|-2e^{|\kappa|}\sum_{j=k_0/2+1}^{\infty}|\phi_j|\Big)\ge e^{-|\kappa|}\,|\phi|/4,\qquad(7.14)$$
where we have used the well-known fact that $\lim_{n\to\infty}\rho^n=e^\kappa$, which yields $e^{-|\kappa|}/2\le\rho^k\le 2e^{|\kappa|}$ for all $-n\le k\le n$ and $n$ sufficiently large. Further, write $\Omega_1$ ($\Omega_2$, respectively) for the set of $1\le q\le k/2$ such that $|\lambda a_{l,q}-ta_{k,q}|\ge h$ ($|\lambda a_{l,q}-ta_{k,q}|<h$, respectively), and put
$$B_1=\sum_{q\in\Omega_2}a_{k,q}^2,\qquad B_2=\sum_{q\in\Omega_2}a_{l,q}a_{k,q},\qquad B_3=\sum_{q\in\Omega_2}a_{l,q}^2.$$
By virtue of (7.13), it is readily seen that $B_1\ge C\,k$ whenever $\#(\Omega_2)\ge k/4$, where $\#(A)$ denotes the number of elements in $A$.

We are now ready to prove (7.12). First notice that there exist constants $\gamma_1>0$ and $\gamma_2>0$ such that
$$\big|Ee^{i\epsilon_1t}\big|\le\begin{cases}e^{-\gamma_1}&\text{if }|t|\ge 1,\\ e^{-\gamma_2t^2}&\text{if }|t|\le 1,\end{cases}\qquad(7.15)$$
since $E\epsilon_1=0$, $E\epsilon_1^2=1$ and $\epsilon_1$ has a density. See, e.g., Chapter 1 of Petrov (1995). Also note that
$$\sum_{q\in\Omega_2}(\lambda a_{l,q}-ta_{k,q})^2=\lambda^2B_3-2\lambda tB_2+t^2B_1=B_1(t-\lambda B_2/B_1)^2+\lambda^2(B_3-B_2^2/B_1)\ge B_1(t-\lambda B_2/B_1)^2,$$
since $B_2^2\le B_1B_3$ by Hölder's inequality. It follows from the independence of $\epsilon_t$ that, for all $k\ge k_0$,
$$\big|Ee^{iW^{(1)}/h}\big|\le\exp\Big\{-\gamma_1\#(\Omega_1)-\frac{\gamma_2}{h^2}\sum_{q\in\Omega_2}(\lambda a_{l,q}-ta_{k,q})^2\Big\}\le\exp\Big\{-\gamma_1\#(\Omega_1)-\frac{\gamma_2B_1}{h^2}(t-\lambda B_2/B_1)^2\Big\},$$
where $W^{(1)}=\sum_{q=1}^{k/2}\epsilon_q(\lambda a_{l,q}-ta_{k,q})$. This, together with the facts that $z^{(1)}=W^{(1)}+\sum_{q=k/2+1}^{k}\epsilon_q(\lambda a_{l,q}-ta_{k,q})$ and $k/2\le k-m_0$ (which implies that $W^{(1)}$ is independent of $u_k$), yields that
$$\Lambda(\lambda,k)\le\int\big|E\big\{e^{iW^{(1)}/h}\big\}\big|\,E|g(u_k)|\,|\hat r(t)|\,dt$$
$$\le C\int_{\#(\Omega_1)\ge 2\gamma_1^{-1}\log k}e^{-\gamma_1\#(\Omega_1)}\,|\hat r(t)|\,dt+C\int_{\#(\Omega_1)<2\gamma_1^{-1}\log k}e^{-\frac{\gamma_2B_1}{h^2}(t-\lambda B_2/B_1)^2}\,dt$$
$$\le C\,k^{-2}\int|\hat r(t)|\,dt+\int e^{-\frac{\gamma_2B_1}{h^2}t^2}\,dt\le C\big(k^{-2}+h/\sqrt k\big).$$
This proves (7.12) for $k\ge k_0$.

We now turn back to the proof of (7.10). We will estimate $I_{k,l}$ in three separate settings: $l-k\ge 2k_0$ and $k\ge k_0$; $l-k\le 2k_0$ and $k\ge k_0$; $l>k$ and $k\le k_0$, where, without loss of generality, we assume $k_0\ge 2m_0$.

Case I. $l-k\ge 2k_0$ and $k\ge k_0$. We first notice that, for any $\delta>0$, there exist constants $\gamma_3>0$ and $\gamma_4>0$ such that, for all $s\ge k_0$ and $q\le s/2$,
$$\big|Ee^{i\epsilon_1\lambda a_{s,q}/h}\big|\le\begin{cases}e^{-\gamma_3}&\text{if }|\lambda|\ge\delta h,\\ e^{-\gamma_4\lambda^2/h^2}&\text{if }|\lambda|\le\delta h.\end{cases}\qquad(7.16)$$
This fact follows from (7.14) and (7.15) with a simple calculation. Hence it follows from the facts that $l-m_0\ge(l+k)/2$ and $l-q\ge k_0$ for all $k\le q\le(l+k)/2$ (since $l-k\ge 2k_0$ and $k_0\ge 2m_0$) that
$$\big|Ee^{iz^{(2)}/h}\big|\le\prod_{q=k+1}^{(l+k)/2}\big|Ee^{i\epsilon_q\lambda a_{l,q}/h}\big|\le\begin{cases}e^{-\gamma_3(l-k)}&\text{if }|\lambda|\ge\delta h,\\ e^{-\gamma_4(l-k)\lambda^2/h^2}&\text{if }|\lambda|\le\delta h.\end{cases}$$
On the other hand, since $Eg_1(u_l)=0$, we have
$$\big|E\big\{e^{iz^{(3)}/h}g_1(u_l)\big\}\big|=\big|E\big\{\big(e^{iz^{(3)}/h}-1\big)g_1(u_l)\big\}\big|\le h^{-1}E\big|z^{(3)}g_1(u_l)\big|\le C\,(E\epsilon_1^2)^{1/2}(Eg_1^2(u_l))^{1/2}\,|\lambda|\,h^{-1}.\qquad(7.17)$$
We also have
$$E\big\{e^{iz^{(3)}/h}g_1(u_l)\big\}\to 0,\quad\text{whenever }\lambda/h\to\infty,\qquad(7.18)$$
uniformly for all $l\ge m_0$. Indeed, supposing $\phi_0\ne 0$ (if $\phi_0=0$, we may use $\phi_1$ and so on), we have
$$E\big\{e^{iz^{(3)}/h}g_1(u_l)\big\}=E\big\{e^{i\epsilon_l\phi_0\lambda/h}\,g^*(\epsilon_l)\big\},\quad\text{where }g^*(\epsilon_l)=E\big[e^{i(z^{(3)}-\epsilon_l\phi_0\lambda)/h}g_1(u_l)\,\big|\,\epsilon_l\big].$$
By recalling that $\epsilon_l$ has a density $d(x)$, it is readily seen that
$$\sup_\lambda\int|g^*(x)|\,d(x)\,dx\le E|g_1(u_l)|<\infty,$$
uniformly for all $l$. The result (7.18) follows from the Riemann-Lebesgue theorem.

By virtue of (7.18), for any $\epsilon>0$, there exists an $n_0$ ($A_0$, respectively) such that, for all $n\ge n_0$ ($|\lambda|/h\ge A_0$, respectively), $|E\{e^{iz^{(3)}/h}g_1(u_l)\}|\le\epsilon$. This, together with (7.12) and (7.16) with $\delta=A_0$, yields that
$$I_{k,l}^{(2)}:=\int_{|\lambda|>A_0h}\big|Ee^{iz^{(2)}/h}\big|\,\big|E\big\{e^{iz^{(3)}/h}g_1(u_l)\big\}\big|\,\Lambda(\lambda,k)\,|\hat r_1(\lambda)|\,d\lambda$$
$$\le C\,\epsilon\,e^{-\gamma_3(l-k)}\big(k^{-2}+h/\sqrt k\big)\int_{|\lambda|>A_0h}|\hat r_1(\lambda)|\,d\lambda\le C\,\epsilon\,(l-k)^{-2}\big(k^{-2}+h/\sqrt k\big).$$
Similarly, it follows from (7.12), (7.16) with $\delta=A_0$ and (7.17) that
$$I_{k,l}^{(1)}:=\int_{|\lambda|\le A_0h}\big|Ee^{iz^{(2)}/h}\big|\,\big|E\big\{e^{iz^{(3)}/h}g_1(u_l)\big\}\big|\,\Lambda(\lambda,k)\,|\hat r_1(\lambda)|\,d\lambda$$
$$\le C\big(k^{-2}+h/\sqrt k\big)\int_{|\lambda|\le A_0h}|\lambda|\,h^{-1}\,e^{-\gamma_4(l-k)\lambda^2/h^2}\,d\lambda\le C\,h\,(l-k)^{-1}\big(k^{-2}+h/\sqrt k\big).$$
The result (7.10) in Case I now follows from
$$I_{k,l}\le I_{k,l}^{(1)}+I_{k,l}^{(2)}\le C\big[\epsilon(l-k)^{-2}+h(l-k)^{-1}\big]\big(k^{-2}+h/\sqrt k\big).$$

Case II. $l-k\le 2k_0$ and $k\ge k_0$. In this case, we only need to show that
$$I_{k,l}\le C\,(\epsilon+h)\big(k^{-2}+h/\sqrt k\big).\qquad(7.19)$$
In fact, as in (7.11), we have
$$I_{k,l}\le\int\!\!\int\big|Ee^{iz^{(4)}/h}\big|\,\big|E\big\{e^{iz^{(5)}/h}g(u_k)g_1(u_l)\big\}\big|\,|\hat r(t)|\,|\hat r_1(\lambda)|\,dt\,d\lambda,\qquad(7.20)$$
where
$$z^{(4)}=\sum_{q=1}^{k-m_0}\epsilon_q\big[\lambda a_{l,q}-ta_{k,q}\big],\qquad
z^{(5)}=\sum_{q=k-m_0+1}^{l}\epsilon_q\big(\lambda a_{l,q}+uh/\sqrt n\big)-t\sum_{q=k-m_0+1}^{k}\epsilon_q a_{k,q}.$$
Similar arguments to those in the proof of (7.12) give that, for all $\lambda$ and all $k\ge k_0$,
$$\Lambda_1(\lambda,k):=\int\big|E\big\{e^{iz^{(4)}/h}\big\}\big|\,|\hat r(t)|\,dt\le C\big(k^{-2}+h/\sqrt k\big).$$
Note that $E|g(u_k)g_1(u_l)|\le(Eg^2(u_k))^{1/2}(Eg_1^2(u_l))^{1/2}<\infty$. For any $\epsilon>0$, similar to the proof of (7.18), there exists an $n_0$ ($A_0$, respectively) such that, for all $n\ge n_0$ ($|\lambda|/h\ge A_0$, respectively), $|E\{e^{iz^{(5)}/h}g(u_k)g_1(u_l)\}|\le\epsilon$. By virtue of these facts, we have
$$I_{k,l}\le\Big(\int_{|\lambda|\le A_0h}+\int_{|\lambda|>A_0h}\Big)\big|E\big\{e^{iz^{(5)}/h}g(u_k)g_1(u_l)\big\}\big|\,|\hat r_1(\lambda)|\,\Lambda_1(\lambda,k)\,d\lambda$$
$$\le C\Big(\int_{|\lambda|\le A_0h}d\lambda+\epsilon\int_{|\lambda|>A_0h}|\hat r_1(\lambda)|\,d\lambda\Big)\big(k^{-2}+h/\sqrt k\big)\le C\,(\epsilon+h)\big(k^{-2}+h/\sqrt k\big).$$
This proves (7.19) and hence the result (7.10) in Case II.

Case III. $l>k$ and $k\le k_0$. In this case, we only need to prove
$$I_{k,l}\le C\big[\epsilon(l-k)^{-3/2}+h(l-k)^{-1}\big].\qquad(7.21)$$
In order to prove (7.21), split $l>k$ into $l-k\ge 2k_0$ and $l-k\le 2k_0$. The result then follows from the same arguments as in the proofs of Cases I and II, but replacing the estimate of $\Lambda(\lambda,k)$ in (7.12) by
$$\Lambda(\lambda,k)\le\int E|g(u_k)|\,|\hat r(t)|\,dt\le C.$$
We omit the details. The proof of Lemma 7.2 is now complete.

We are now ready to prove the propositions. We first mention that, under the conditions for $K(t)$, if we let $r(t)=K(y/h+t)$ or $r(t)=K^2(y/h+t)$, then $\int|r(x)|\,dx<\infty$ and $\int|\hat r(\lambda)|\,d\lambda\le\int|\hat K(\lambda)|\,d\lambda<\infty$, uniformly for all $y\in\mathbb{R}$.

Proof of Proposition 7.5. Let $r(t)=r_1(t)=K(y/h+t)$ and $g(x)=g_1(x)=x$. It follows from Lemma 7.2 that, for any $\epsilon>0$, there exists an $n_0$ such that, whenever $n\ge n_0$,
$$\sum_{1\le k<l\le n}|I_{k,l}|\le C\sum_{1\le k<l\le n}\big[\epsilon(l-k)^{-2}+h(l-k)^{-1}\big]\big(k^{-2}+h/\sqrt k\big)$$
$$\le C\Big(\epsilon+h\sum_{k=1}^{n}k^{-1}\Big)\sum_{k=1}^{n}\big(k^{-2}+h/\sqrt k\big)\le C\,(\epsilon+h\log n)\big(C+h\sqrt n\big).$$
More informationMath 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0
3.4: Partial Derivatives Definition Mat 22-Lecture 9 For a single-variable function z = f(x), te derivative is f (x) = lim 0 f(x+) f(x). For a function z = f(x, y) of two variables, to define te derivatives,
More informationBandwidth Selection in Nonparametric Kernel Testing
Te University of Adelaide Scool of Economics Researc Paper No. 2009-0 January 2009 Bandwidt Selection in Nonparametric ernel Testing Jiti Gao and Irene Gijbels Bandwidt Selection in Nonparametric ernel
More informationCopyright c 2008 Kevin Long
Lecture 4 Numerical solution of initial value problems Te metods you ve learned so far ave obtained closed-form solutions to initial value problems. A closedform solution is an explicit algebriac formula
More informationQuasiperiodic phenomena in the Van der Pol - Mathieu equation
Quasiperiodic penomena in te Van der Pol - Matieu equation F. Veerman and F. Verulst Department of Matematics, Utrect University P.O. Box 80.010, 3508 TA Utrect Te Neterlands April 8, 009 Abstract Te Van
More informationTHE DISCRETE PLATEAU PROBLEM: CONVERGENCE RESULTS
MATHEMATICS OF COMPUTATION Volume 00, Number 0, Pages 000 000 S 0025-5718XX0000-0 THE DISCRETE PLATEAU PROBLEM: CONVERGENCE RESULTS GERHARD DZIUK AND JOHN E. HUTCHINSON Abstract. We solve te problem of
More informationLIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION
LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LAURA EVANS.. Introduction Not all differential equations can be explicitly solved for y. Tis can be problematic if we need to know te value of y
More informationError estimates for a semi-implicit fully discrete finite element scheme for the mean curvature flow of graphs
Interfaces and Free Boundaries 2, 2000 34 359 Error estimates for a semi-implicit fully discrete finite element sceme for te mean curvature flow of graps KLAUS DECKELNICK Scool of Matematical Sciences,
More informationJournal of Computational and Applied Mathematics
Journal of Computational and Applied Matematics 94 (6) 75 96 Contents lists available at ScienceDirect Journal of Computational and Applied Matematics journal omepage: www.elsevier.com/locate/cam Smootness-Increasing
More informationMIXED DISCONTINUOUS GALERKIN APPROXIMATION OF THE MAXWELL OPERATOR. SIAM J. Numer. Anal., Vol. 42 (2004), pp
MIXED DISCONTINUOUS GALERIN APPROXIMATION OF THE MAXWELL OPERATOR PAUL HOUSTON, ILARIA PERUGIA, AND DOMINI SCHÖTZAU SIAM J. Numer. Anal., Vol. 4 (004), pp. 434 459 Abstract. We introduce and analyze a
More informationarxiv: v1 [math.dg] 4 Feb 2015
CENTROID OF TRIANGLES ASSOCIATED WITH A CURVE arxiv:1502.01205v1 [mat.dg] 4 Feb 2015 Dong-Soo Kim and Dong Seo Kim Abstract. Arcimedes sowed tat te area between a parabola and any cord AB on te parabola
More informationSubdifferentials of convex functions
Subdifferentials of convex functions Jordan Bell jordan.bell@gmail.com Department of Matematics, University of Toronto April 21, 2014 Wenever we speak about a vector space in tis note we mean a vector
More informationNonlinear elliptic-parabolic problems
Nonlinear elliptic-parabolic problems Inwon C. Kim and Norbert Požár Abstract We introduce a notion of viscosity solutions for a general class of elliptic-parabolic pase transition problems. Tese include
More informationMath Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim
Mat 311 - Spring 013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, 013 Question 1. [p 56, #10 (a)] 4z Use te teorem of Sec. 17 to sow tat z (z 1) = 4. We ave z 4z (z 1) = z 0 4 (1/z) (1/z
More informationIEOR 165 Lecture 10 Distribution Estimation
IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat
More informationA central limit theorem for triangular arrays of weakly dependent random variables, with applications in statistics
A central limit teorem for triangular arrays of weakly dependent random variables, wit applications in statistics Micael H. Neumann Friedric-Sciller-Universität Jena Institut für Stocastik Ernst-Abbe-Platz
More informationNonparametric density estimation for linear processes with infinite variance
Ann Inst Stat Mat 2009) 61:413 439 DOI 10.1007/s10463-007-0149-x Nonparametric density estimation for linear processes wit infinite variance Tosio Honda Received: 1 February 2006 / Revised: 9 February
More informationDeconvolution problems in density estimation
Deconvolution problems in density estimation Dissertation zur Erlangung des Doktorgrades Dr. rer. nat. der Fakultät für Matematik und Wirtscaftswissenscaften der Universität Ulm vorgelegt von Cristian
More information3.4 Worksheet: Proof of the Chain Rule NAME
Mat 1170 3.4 Workseet: Proof of te Cain Rule NAME Te Cain Rule So far we are able to differentiate all types of functions. For example: polynomials, rational, root, and trigonometric functions. We are
More informationUNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING
Statistica Sinica 15(2005), 73-98 UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Peter Hall 1 and Kee-Hoon Kang 1,2 1 Australian National University and 2 Hankuk University of Foreign Studies Abstract:
More information1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist
Mat 1120 Calculus Test 2. October 18, 2001 Your name Te multiple coice problems count 4 points eac. In te multiple coice section, circle te correct coice (or coices). You must sow your work on te oter
More informationMAT244 - Ordinary Di erential Equations - Summer 2016 Assignment 2 Due: July 20, 2016
MAT244 - Ordinary Di erential Equations - Summer 206 Assignment 2 Due: July 20, 206 Full Name: Student #: Last First Indicate wic Tutorial Section you attend by filling in te appropriate circle: Tut 0
More informationHazard Rate Function Estimation Using Erlang Kernel
Pure Matematical Sciences, Vol. 3, 04, no. 4, 4-5 HIKARI Ltd, www.m-ikari.com ttp://dx.doi.org/0.988/pms.04.466 Hazard Rate Function Estimation Using Erlang Kernel Raid B. Sala Department of Matematics
More informationSECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY
(Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative
More informationA Jump-Preserving Curve Fitting Procedure Based On Local Piecewise-Linear Kernel Estimation
A Jump-Preserving Curve Fitting Procedure Based On Local Piecewise-Linear Kernel Estimation Peiua Qiu Scool of Statistics University of Minnesota 313 Ford Hall 224 Curc St SE Minneapolis, MN 55455 Abstract
More informationarxiv: v1 [math.na] 28 Apr 2017
THE SCOTT-VOGELIUS FINITE ELEMENTS REVISITED JOHNNY GUZMÁN AND L RIDGWAY SCOTT arxiv:170500020v1 [matna] 28 Apr 2017 Abstract We prove tat te Scott-Vogelius finite elements are inf-sup stable on sape-regular
More informationLIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT
LIMITS AND DERIVATIVES Te limit of a function is defined as te value of y tat te curve approaces, as x approaces a particular value. Te limit of f (x) as x approaces a is written as f (x) approaces, as
More informationTopics in Generalized Differentiation
Topics in Generalized Differentiation J. Marsall As Abstract Te course will be built around tree topics: ) Prove te almost everywere equivalence of te L p n-t symmetric quantum derivative and te L p Peano
More informationThe derivative function
Roberto s Notes on Differential Calculus Capter : Definition of derivative Section Te derivative function Wat you need to know already: f is at a point on its grap and ow to compute it. Wat te derivative
More information158 Calculus and Structures
58 Calculus and Structures CHAPTER PROPERTIES OF DERIVATIVES AND DIFFERENTIATION BY THE EASY WAY. Calculus and Structures 59 Copyrigt Capter PROPERTIES OF DERIVATIVES. INTRODUCTION In te last capter you
More informationArtificial Neural Network Model Based Estimation of Finite Population Total
International Journal of Science and Researc (IJSR), India Online ISSN: 2319-7064 Artificial Neural Network Model Based Estimation of Finite Population Total Robert Kasisi 1, Romanus O. Odiambo 2, Antony
More informationSupplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION. September 2017
Supplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION By Degui Li, Peter C. B. Phillips, and Jiti Gao September 017 COWLES FOUNDATION DISCUSSION PAPER NO.
More informationMathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative
Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x
More informationarxiv: v1 [math.ap] 16 Nov 2018
Exit event from a metastable state and Eyring-Kramers law for te overdamped Langevin dynamics arxiv:1811.06786v1 [mat.ap] 16 Nov 2018 Tony Lelièvre 1, Dorian Le Peutrec 2, and Boris Nectoux 1 1 École des
More informationMA455 Manifolds Solutions 1 May 2008
MA455 Manifolds Solutions 1 May 2008 1. (i) Given real numbers a < b, find a diffeomorpism (a, b) R. Solution: For example first map (a, b) to (0, π/2) and ten map (0, π/2) diffeomorpically to R using
More informationArbitrary order exactly divergence-free central discontinuous Galerkin methods for ideal MHD equations
Arbitrary order exactly divergence-free central discontinuous Galerkin metods for ideal MHD equations Fengyan Li, Liwei Xu Department of Matematical Sciences, Rensselaer Polytecnic Institute, Troy, NY
More informationA Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation
A Locally Adaptive Transformation Metod of Boundary Correction in Kernel Density Estimation R.J. Karunamuni a and T. Alberts b a Department of Matematical and Statistical Sciences University of Alberta,
More informationOne-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes
DOI 10.1007/s10915-014-9946-6 One-Sided Position-Dependent Smootness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meses JenniferK.Ryan Xiaozou Li Robert M. Kirby Kees Vuik
More informationSolution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.
December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need
More information5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems
5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we
More informationarxiv:math/ v1 [math.ca] 1 Oct 2003
arxiv:mat/0310017v1 [mat.ca] 1 Oct 2003 Cange of Variable for Multi-dimensional Integral 4 Marc 2003 Isidore Fleiscer Abstract Te cange of variable teorem is proved under te sole ypotesis of differentiability
More information1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point
MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note
More informationNew families of estimators and test statistics in log-linear models
Journal of Multivariate Analysis 99 008 1590 1609 www.elsevier.com/locate/jmva ew families of estimators and test statistics in log-linear models irian Martín a,, Leandro Pardo b a Department of Statistics
More information