Bootstrap prediction intervals for Markov processes

Li Pan and Dimitris N. Politis

Li Pan, Department of Mathematics, University of California San Diego, La Jolla, CA, USA

Dimitris N. Politis, Department of Mathematics, University of California San Diego, La Jolla, CA, USA

Abstract: Given time series data X_1, ..., X_n, the problem of optimal prediction of X_{n+1} has been well-studied. The same is not true, however, as regards the problem of constructing a prediction interval with prespecified coverage probability for X_{n+1}, i.e., turning the point predictor into an interval predictor. In the past, prediction intervals have mainly been constructed for time series that obey an autoregressive model that is linear, nonlinear or nonparametric. In the paper at hand, the scope is expanded by assuming only that {X_t} is a Markov process of order p >= 1, without insisting that any specific autoregressive equation is satisfied. Several different approaches and methods are considered, namely both Forward and Backward approaches to prediction intervals as combined with three resampling methods: the bootstrap based on estimated transition densities, the Local Bootstrap for Markov processes, and the novel Model-Free bootstrap. In simulations, prediction intervals obtained from different methods are compared in terms of their coverage level and length of interval.

Keywords and phrases: Confidence intervals, Local Bootstrap, Model-Free Prediction.

Corresponding author: Dimitris N. Politis, Department of Mathematics, University of California San Diego, La Jolla, CA, USA; dpolitis@ucsd.edu. The website http:// politis/dpsoftware.html contains relevant software for the implementation of methods developed in this paper.

1. Introduction

Prediction is a key objective in time series analysis. The theory of optimal linear and nonlinear point predictors has been well developed. The same is not true, however, as regards the problem of constructing a prediction interval with prespecified coverage probability, i.e., turning the point predictor into an interval predictor. Even in the related problem of regression, the available literature on prediction intervals is not large; see e.g. Geisser (1993), Carroll and Ruppert (1991), Olive (2007), Olive (2015), Patel (1989), Schmoyer (1992), and Stine (1985). Recently, Politis (2013) has re-cast the prediction problem, including prediction intervals, in a Model-Free setting.

An autoregressive (AR) time series model, be it linear, nonlinear, or nonparametric, bears a formal resemblance to the analogous regression model. Indeed, AR models can typically be successfully fitted by the same methods used to estimate a regression, e.g., ordinary Least Squares (LS) regression methods for parametric models, and scatterplot smoothing for nonparametric ones. There are several papers on prediction intervals for AR models (typically linear) that represent a broad spectrum of methods; see e.g. Alonso et al. (2002), Box and Jenkins (1976), Breidt et al. (1995), Masarotto (1990), Pascual et al. (2004), Thombs and Schucany (1990), and Wolf and Wunderli (2015). Recently, Pan and Politis (2015) presented a unified approach towards prediction intervals when a time series {X_t} obeys an autoregressive model that is either linear, nonlinear or nonparametric.

We expand the scope by assuming only that {X_t} is a Markov process of order p >= 1, without insisting that any specific autoregressive equation is satisfied. Recall that Pan and Politis (2015) identified two different general approaches towards building bootstrap prediction intervals with conditional validity, namely the Forward and Backward recursive schemes. We will address both Forward and Backward approaches in the setting of Markovian data; see Section 2 for details. In terms of the actual resampling mechanism, we will consider the following three options:

1. The bootstrap based on kernel estimates of the transition density of the Markov process, as proposed by Rajarshi (1990); see Section 3.
2. The Local Bootstrap for Markov processes, as proposed by Paparoditis and Politis (1998) and Paparoditis and Politis (2002); see Section 4.
3. The Model-Free Bootstrap for Markov processes; this is a novel resampling scheme that stems from the Model-Free Prediction Principle of Politis (2013). To elaborate, the key idea is to transform a given complex dataset into one that is i.i.d. (independent, identically distributed); having done that, the prediction problem is greatly simplified, and that includes the construction of prediction intervals. In the case of a Markov process, this simplification can be accomplished using the Rosenblatt (1952) transformation; see Section 6.

In the case of time series that satisfy an autoregressive equation that is nonlinear and/or nonparametric, Pan and Politis (2015) noted that the Backward approach was not generally feasible. Recall that, under causality, AR models are special cases of Markov processes. Hence, in Section 5 we propose a hybrid approach for nonparametric autoregressions in which the forward step uses the autoregressive equation explicitly while the backward step uses one of the three aforementioned Markov bootstrap procedures.
In the following, Section 2 will describe the setting of the prediction problem under consideration, and the construction of bootstrap prediction intervals. All prediction intervals studied in the paper at hand are asymptotically valid under appropriate conditions. We will assess and compare the finite-sample performance of all the methods proposed via Monte Carlo simulations presented in Section 7. Appendix A is devoted to showing that a Markov process remains Markov after a time-reversal; this is needed to justify the use of all Backward bootstrap approaches. Finally, Appendix B discusses the problem of prediction intervals in r-step ahead prediction for r >= 1.

2. Prediction and Bootstrap for Markov Processes

2.1. Notation and Definitions

Here, and throughout the rest of the paper, we assume that X = {X_t, t = 1, 2, ...} is a real-valued, strictly stationary process that is Markov of order p. Letting Y_t = (X_t, X_{t-1}, ..., X_{t-p+1}), we define

F(y) = P[Y_p <= y],   F(x, y) = P[X_{p+1} <= x, Y_p <= y],   F(x|y) = P[X_{p+1} <= x | Y_p = y]   (2.1)

for x in R, y in R^p; in the above, we have used the short-hand {Y_p <= y} to denote the event {the i-th coordinate of Y_p is less than or equal to the i-th coordinate of y for all i = 1, ..., p}. Let f(y), f(x, y), f(x|y) be the corresponding densities of the distributions in eq. (2.1). We will assume throughout the paper that these densities are with respect to Lebesgue measure. However, our results in Sections 3 and 4, i.e., bootstrap based on estimated transition densities and Local Bootstrap, could be easily generalized to the case of densities taken with respect to counting measure, i.e., the case of discrete random variables. Remark 6.4 shows a modification that also renders the Model-Free bootstrap of Section 6 valid for discrete data.

Let X_1 = x_1, X_2 = x_2, ..., X_n = x_n denote the observed sample path from the Markov chain X, and let y_n = (x_n, ..., x_{n-p+1}). Denote by X-hat_{n+1} the chosen point predictor of X_{n+1} based on the data at hand. Because of the Markov structure, this predictor will be a functional of f-hat_n(.|y_n), which is our data-based estimator of the conditional density f(.|y_n). For example, the L_2-optimal predictor would be given by the mean of f-hat_n(.|y_n); similarly, the L_1-optimal predictor would be given by the median of f-hat_n(.|y_n). To fix ideas, in what follows we will focus on the L_2-optimal predictor, usually approximated by X-hat_{n+1} = Int x f-hat_n(x|y_n) dx, with the understanding that other functionals of f-hat_n(.|y_n) can be accommodated equally well.

Remark 2.1. An integral such as Int x f-hat_n(x|y_n) dx can be calculated by numerical integration, e.g. using the adaptive quadrature method; see the sketch below. However, the L_2-optimal predictor can be approximated in several different ways that are asymptotically equivalent. The most straightforward alternative is a kernel smoothed estimator of the autoregression scatterplot, i.e., estimator (5.3) defined in the sequel. Remark 6.2 lists some further alternative options.
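To make Remark 2.1 concrete, here is a minimal sketch (an illustration under stated assumptions, not the authors' software) of computing the L_2-optimal predictor by numerical quadrature for a Markov chain of order p = 1; the Gaussian product kernel and the integration range are illustrative choices.

```python
# Sketch of the L2-optimal predictor for p = 1: integrate x * f_hat_n(x | y_n)
# numerically, with f_hat_n the kernel conditional-density estimator.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def conditional_density(x, y, data, h1, h2):
    """Kernel estimate of f(x | y) built from the consecutive pairs (x_t, x_{t+1})."""
    past, future = data[:-1], data[1:]
    w = norm.pdf((y - past) / h2)                    # kernel weights in the y-direction
    joint = np.sum(w * norm.pdf((x - future) / h1))  # proportional to f_hat(x, y)
    return joint / (h1 * np.sum(w))                  # shared constants cancel in the ratio

def l2_predictor(data, h1, h2):
    """Approximate Int x * f_hat_n(x | y_n) dx by adaptive quadrature."""
    y_n = data[-1]
    lo, hi = data.min() - 5 * h1, data.max() + 5 * h1   # effective support of f_hat
    val, _ = quad(lambda x: x * conditional_density(x, y_n, data, h1, h2), lo, hi)
    return val
```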

Beyond the point predictor X-hat_{n+1}, we want to construct a prediction interval that will contain X_{n+1} with probability 1 - alpha asymptotically; the following definition is helpful.

Definition 2.1. Asymptotic validity of prediction intervals. Let L_n, U_n be functions of the data X_1, ..., X_n. The interval [L_n, U_n] will be called a (1 - alpha)100% asymptotically valid prediction interval for X_{n+1} given X_1, ..., X_n if

P(L_n <= X_{n+1} <= U_n) -> 1 - alpha as n -> infinity   (2.2)

for all (X_1, ..., X_n) in a set that has (unconditional) probability equal to one. The probability P in (2.2) should be interpreted as conditional probability given X_1, ..., X_n although it is not explicitly denoted; hence, Definition 2.1 indicates conditional validity of the prediction interval [L_n, U_n].

Remark 2.2. Asymptotic validity is a fundamental property but it does not tell the whole story; see Pan and Politis (2015) for a full discussion. To elaborate, one could simply let L_n and U_n be the alpha/2 and 1 - alpha/2 quantiles of the conditional density estimator f-hat_n(.|y_n) respectively. If f-hat_n(.|y_n) is consistent for f(.|y_n), then this interval would be asymptotically valid; nevertheless, it would be characterized by pronounced under-coverage in finite samples since the nontrivial variability in the estimate f-hat_n(.|y_n) is ignored.

In order to capture the finite-sample variability involved in model estimation, some kind of bootstrap algorithm is necessary. Thus, consider a bootstrap pseudo-series X*_1, ..., X*_n constructed according to one of the methods mentioned in the Introduction. Let f-hat*_n(.|y_n) be the corresponding estimator of f(.|y_n) as obtained from the bootstrap data X*_1, ..., X*_n. To achieve conditional validity, we will ensure that the last p values in the bootstrap world coincide with the last p values in the real world, i.e., that (X*_n, ..., X*_{n-p+1}) = y_n. Finally, we construct the predictor X-hat*_{n+1} using the same functional, i.e., mean, median, etc., as used in the construction of X-hat_{n+1} in the real world but, of course, this time the functional is applied to f-hat*_n(.|y_n). For example, the L_2-optimal predictor in the bootstrap world will be given by X-hat*_{n+1} = Int x f-hat*_n(x|y_n) dx.

Bootstrap probabilities and expectations are usually denoted by P* and E*, and they are understood to be conditional on the original data X_1 = x_1, ..., X_n = x_n. Since Definition 2.1 involves conditional validity, we will understand that P* and E* are also conditional on X*_{n-p+1} = x_{n-p+1}, ..., X*_n = x_n when they are applied to future events in the bootstrap world, i.e., events determined by {X*_s for s > n}; this is not restrictive since we will ensure that our bootstrap algorithms satisfy this requirement.

Definition 2.2. The predictive root is the error in prediction, i.e., X_{n+1} - X-hat_{n+1}. Similarly, the bootstrap predictive root is the error in prediction in the bootstrap world, i.e., X*_{n+1} - X-hat*_{n+1}.

Remark 2.3. Construction of prediction intervals in this paper will be carried out via approximating the quantiles of the predictive root with those of the bootstrap predictive root. To see why, suppose the (conditional) probability P(X_{n+1} - X-hat_{n+1} <= a) is a continuous function of a in the limit as n -> infinity. If one can show that

sup_a |P(X_{n+1} - X-hat_{n+1} <= a) - P*(X*_{n+1} - X-hat*_{n+1} <= a)| -> 0 in probability,

then standard results imply that the quantiles of P*(X*_{n+1} - X-hat*_{n+1} <= a) can be used to consistently estimate the quantiles of P(X_{n+1} - X-hat_{n+1} <= a), thus leading to asymptotically valid prediction intervals. Indeed, all prediction intervals that will be studied in this paper are asymptotically valid under appropriate conditions. However, as mentioned earlier, it is difficult to quantify asymptotically the extent to which a prediction interval is able to capture both sources of variation, i.e., the variance associated with the new observation X_{n+1} and the variability in estimating X-hat_{n+1}; hence, the prediction intervals in this paper will be compared via finite-sample simulations. Finally, note that Pan and Politis (2015) also defined prediction intervals based on studentized predictive roots. For concreteness, in this paper we will focus on the simple notion of Definition 2.2, but the generalization to studentized predictive roots is straightforward.
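All the algorithms that follow share the same final step: approximate the quantiles of the predictive root by the empirical quantiles of B bootstrap predictive roots. A minimal sketch of that step (the `roots` array is assumed to have been produced by one of the resampling schemes of Sections 3, 4 or 6):

```python
# Sketch of the shared interval-construction step: collect B bootstrap
# predictive roots x*_{n+1} - x_hat*_{n+1} and read off their empirical
# alpha/2 and 1 - alpha/2 quantiles.
import numpy as np

def equal_tailed_interval(x_hat, roots, alpha=0.05):
    """(1 - alpha)100% interval [x_hat + q(alpha/2), x_hat + q(1 - alpha/2)]."""
    q_lo, q_hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    return x_hat + q_lo, x_hat + q_hi

# usage: lo, hi = equal_tailed_interval(point_predictor, roots, alpha=0.05)
```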

2.2. Forward vs. Backward Bootstrap for Prediction Intervals

Consider the bootstrap sample X*_1, ..., X*_n. As mentioned in Section 2.1, in order to ensure conditional validity it would be helpful if the last p values in the bootstrap world coincided with the last p values in the real world, i.e., that (X*_n, ..., X*_{n-p+1}) = y_n = (x_n, ..., x_{n-p+1}). For the application to prediction intervals, note that the bootstrap also allows us to generate X*_{n+1} so that the statistical accuracy of the predictor X-hat_{n+1} can be gauged. However, under a usual Monte Carlo simulation, none of the simulated bootstrap series will have their last p values exactly equal to the original vector y_n as needed for prediction purposes. Herein lies the problem, since the behavior of the predictor X-hat_{n+1} needs to be captured conditionally on the original vector y_n.

To avoid this difficulty in the set-up of a linear AR(p) model, Thombs and Schucany (1990) proposed to generate the bootstrap data X*_1, ..., X*_n going backwards from the last p values that are fixed at (X*_n, ..., X*_{n-p+1}) = y_n; this is the backward bootstrap method that was revisited by Breidt et al. (1995), who gave the correct algorithm for finding the backward errors. Note that the generation of X*_{n+1} must still be done in a forward fashion using the fitted AR model conditionally on the value X*_n. Going beyond the linear AR(p) model, a backward bootstrap for Markov processes was proposed by Paparoditis and Politis (1998) via their notion of Local Bootstrap. We will elaborate on the backward Local Bootstrap and other backward bootstrap methods for Markov processes in the sequel. A key result here is that a Markov process remains Markov after a time-reversal; see our Appendix A.

Nevertheless, the natural way Markov processes evolve is forward in time, i.e., one generates X_t given X_{t-1}, X_{t-2}, ..., X_{t-p}. Thus, it is intuitive to construct bootstrap procedures that run forward in time, i.e., to generate X*_t given X*_{t-1}, X*_{t-2}, ..., X*_{t-p}. Indeed, most (if not all) of the literature on bootstrap confidence intervals for linear AR models uses the natural time order to generate bootstrap series. However, recall that predictive inference is to be conducted conditionally on the last p values given by y_n in order to be able to place prediction bounds around the point predictor X-hat_{n+1}. In order to maintain the natural time order, i.e., generate bootstrap series forward in time, but also ensure that X*_{n+1} is constructed correctly, i.e., conditionally on the original y_n, Pan and Politis (2015) introduced the forward bootstrap method for prediction intervals, which comprises the following two steps. In describing it, we will use the notion of fitting a Markov model by estimating the transition density f(x|y), as will be discussed in detail in Section 3; different notions of Markov bootstrap, e.g., the Local Bootstrap, work analogously.

A. Choose a starting vector (X*_{1-p}, X*_{2-p}, ..., X*_0) appropriately, e.g., choose it at random as one of the stretches (subseries) of length p found in the original data X_1, ..., X_n. Then, use the fitted Markov model, i.e., use the estimated transition density f-hat_n(x|y), in order to generate bootstrap data X*_t recursively for t = 1, ..., n. Now re-fit the Markov model using the bootstrap data X*_1, ..., X*_n, i.e., obtain f-hat*_n(x|y) as an estimate of the transition density.

B. Re-define the last p values in the bootstrap world, i.e., let (X*_n, ..., X*_{n-p+1}) = y_n, and generate the future bootstrap observation X*_{n+1} by a random draw from the density f-hat_n(.|y_n). Also compute the one-step ahead bootstrap point predictor X-hat*_{n+1} = Int x f-hat*_n(x|y_n) dx.

Note that the forward bootstrap idea has been previously used for prediction intervals in linear AR models by Masarotto (1990) and Pascual et al. (2004) but with some important differences; for example, Masarotto (1990) omits the important step B above.
Pan and Politis (2015) found that the forward bootstrap is the method that can be immediately generalized to apply to nonlinear and nonparametric autoregressions as well, thus forming a unifying principle for treating all AR models; indeed, for nonlinear and/or nonparametric autoregressions the backward bootstrap seems infeasible. Nevertheless, as will be shown in the next two sections, the backward bootstrap becomes feasible again under the more general setup of Markov process data. In Section 5 we will return briefly to the setup of a nonlinear and/or nonparametric autoregression and propose a hybrid approach in which the forward step uses the autoregressive equation explicitly while the backward step uses one of the three Markov bootstrap procedures mentioned in the Introduction.

3. Bootstrap Based on Estimates of Transition Density

Rajarshi (1990) introduced a bootstrap method that creates pseudo-sample paths of a Markov process based on an estimated transition density; this method can form the basis for a forward bootstrap procedure for prediction intervals. Since the time-reverse of a Markov chain is also a Markov chain (see Appendix A), it is possible to also define a backward bootstrap based on an estimated backward transition density.

3.1. Forward Bootstrap Based on Transition Density

Recall that x_1, x_2, ..., x_n is the observed sample path from the Markov chain X, and y_t = (x_t, ..., x_{t-p+1}). In what follows, the phrase "generate z ~ f(.)" will be used as short-hand for "generate z by a random draw from the probability density f(.)".

Algorithm 3.1. Forward Bootstrap

(1) Choose a probability density K on R^2 and positive bandwidths h_1, h_2 to construct the following kernel estimators:

f-hat_n(x, y) = (1/((n-p) h_1 h_2)) Sum_{i=p}^{n-1} K((x - x_{i+1})/h_1, ||y - y_i||/h_2)   (3.1)

f-hat_n(y) = Int f-hat_n(x, y) dx   (3.2)

f-hat_n(x|y) = f-hat_n(x, y) / f-hat_n(y)   (3.3)

for all x in R, y in R^p, and where ||.|| is a norm in R^p.

(2) Calculate the point predictor x-hat_{n+1} = Int x f-hat_n(x|y_n) dx.

(3) (a) Generate y*_p = (x*_p, ..., x*_1) with probability density function f-hat_n(.) given by (3.2). Alternatively, let y*_p be one of the stretches of p observations that are present as a subset of the original series x_1, ..., x_n; there are n - p + 1 such stretches, so choose one of them at random.
(b) Generate x*_{p+1} ~ f-hat_n(.|y*_p) given by (3.3).
(c) Repeat (b) to generate x*_{t+1} ~ f-hat_n(.|y*_t) for t = p, ..., n-1, where y*_t = (x*_t, ..., x*_{t-p+1}).
(d) Construct f-hat*_n(x|y) in a similar way as in (3.3), with the same kernel and bandwidths but based on the pseudo-data x*_1, x*_2, ..., x*_n instead of the original data.
(e) Calculate the bootstrap point predictor x-hat*_{n+1} = Int x f-hat*_n(x|y_n) dx.
(f) Generate the bootstrap future value x*_{n+1} ~ f-hat_n(.|y_n).
(g) Calculate the bootstrap root replicate as x*_{n+1} - x-hat*_{n+1}.

(4) Repeat (3) B times; the B bootstrap root replicates are collected in the form of an empirical distribution whose alpha-quantile is denoted q(alpha).

(5) The (1 - alpha)100% equal-tailed, bootstrap prediction interval for X_{n+1} is given by

[x-hat_{n+1} + q(alpha/2), x-hat_{n+1} + q(1 - alpha/2)].   (3.4)
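For concreteness, here is a minimal sketch of the generation steps (3a)-(3c) of Algorithm 3.1 for p = 1, assuming a Gaussian product kernel (an illustrative choice): under that kernel, f-hat_n(.|y) is a mixture of normals centered at the observed successors x_{i+1}, so it can be sampled exactly without numerical inversion.

```python
# Sketch of steps (3a)-(3c) of Algorithm 3.1 for p = 1 with a Gaussian
# product kernel; `data` holds the observed path x_1, ..., x_n.
import numpy as np
from scipy.stats import norm

def draw_from_transition(y, data, h1, h2, rng):
    """One draw from f_hat_n(. | y): pick a mixture component with weight
    proportional to K((y - x_i)/h2), then perturb its successor x_{i+1}."""
    past, future = data[:-1], data[1:]
    w = norm.pdf((y - past) / h2)
    i = rng.choice(len(future), p=w / w.sum())
    return future[i] + h1 * rng.standard_normal()

def forward_pseudo_series(data, h1, h2, rng):
    """Generate a bootstrap pseudo-series x*_1, ..., x*_n; the starting value
    is a randomly chosen stretch of length p = 1, as in step (3a)."""
    n = len(data)
    path = np.empty(n)
    path[0] = data[rng.integers(n)]
    for t in range(1, n):
        path[t] = draw_from_transition(path[t - 1], data, h1, h2, rng)
    return path
```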

3.2. Backward Bootstrap Based on Transition Density

Letting Y_t = (X_t, X_{t-1}, ..., X_{t-p+1}) and x in R, y in R^p as before, we can define the backwards transition distribution as F_b(x|y) = P[X_0 <= x | Y_p = y] with corresponding density f_b(x|y). Similarly, we define the backwards joint distribution as F_b(x, y) = P[X_0 <= x, Y_p <= y] with corresponding density f_b(x, y). Having observed a sample path x_1, x_2, ..., x_n of a Markov chain, Appendix A shows that the time-reversed sample path x_n, x_{n-1}, ..., x_1 can be considered as a sample path of another Markov chain with transition distribution and density given by F_b(x|y) = P[X_0 <= x | Y_p = y] and f_b(x|y) respectively. Note that the densities f_b(x, y) and f_b(x|y) admit kernel estimators as follows:

f-hat_{bn}(x, y) = (1/((n-p) h_1 h_2)) Sum_{i=p+1}^{n} K((x - x_{i-p})/h_1, ||y - y_i||/h_2)   (3.5)

f-hat_{bn}(x|y) = f-hat_{bn}(x, y) / f-hat_{bn}(y).

The above equation can be used to form an alternative estimator of the unconditional density f(y), namely f-hat_{bn}(y) = Int f-hat_{bn}(x, y) dx. The algorithm for the backward bootstrap based on transition density is very similar to that of the corresponding forward bootstrap. The only difference is in Step (3), where we generate the pseudo series (x*_1, ..., x*_n) in a time-reversed fashion. The backward bootstrap algorithm is described below, where the notation y*_t = (x*_t, ..., x*_{t-p+1}) is again used.

Algorithm 3.2. Backward Bootstrap

(1)-(2) Same as the steps in Algorithm 3.1.
(3) (a) Let y*_n = y_n.
(b) Generate

x*_{n-p} ~ f-hat_{bn}(. | y*_n = y_n).   (3.6)

(c) Repeat (b) going backwards in time to generate x*_t ~ f-hat_{bn}(.|y*_{t+p}) for t = n-p, n-p-1, ..., 1.
(d) Generate the bootstrap future value x*_{n+1} ~ f-hat_n(.|y_n). [Note: this is again going forward in time, using the forward transition density exactly as in the Forward Bootstrap Algorithm 3.1.]
(e) Construct f-hat*_n(x|y) in a similar way as in (3.3), with the same kernel and bandwidths but based on the pseudo-data x*_1, x*_2, ..., x*_n instead of the original data, and calculate the bootstrap point predictor x-hat*_{n+1} = Int x f-hat*_n(x|y_n) dx.
(f) Calculate the bootstrap root replicate as x*_{n+1} - x-hat*_{n+1}.
(4)-(5) Same as the steps in Algorithm 3.1.

3.3. Asymptotic Properties

For simplicity, in this section we focus on a Markov sequence X of order one, i.e., p = 1. The following technical assumptions are needed to ensure asymptotic validity of the prediction intervals of Sections 3.1 and 3.2.

(alpha_1) X = {X_1, X_2, ...} forms an aperiodic, strictly stationary, geometrically ergodic and phi-mixing Markov chain on (R, B), where B is the Borel sigma-algebra over R.

(alpha_2) F(y), F(x, y) and F(x|y) defined in eq. (2.1) with p = 1 are absolutely continuous, and have uniformly continuous and bounded densities f(y), f(x, y) and f(x|y) respectively.

(alpha_3) Assume a compact subset S of R exists such that f(y) >= delta > 0 for each y in S. Also assume X_t in S for all t >= 1. Remark 2.1 of Rajarshi (1990) provides a discussion of the (non)restrictiveness of assumption (alpha_3), which should have little effect in practice.

Let K(x, y) be an appropriately chosen probability density on R^2; also let h_1 = h_2 = h for simplicity. The required conditions on the kernel K and the bandwidth h are specified in assumptions (beta_1)-(beta_3).

(beta_1) K(x, y) is uniformly continuous in (x, y), and K(x, y) -> 0 as ||(x, y)|| -> infinity.
(beta_2) K(x, y) is of bounded variation on S x S.
(beta_3) As n -> infinity, we have h = h(n) -> 0 and nh -> infinity, and Sum_{m=1}^{infinity} m^{k+1} h(m)^{4(k+1)} < infinity for some k >= 3.

Under assumptions (alpha_1)-(alpha_3) and (beta_1)-(beta_3), the following results are proved by Rajarshi (1990):

sup_{x,y} |f-hat_n(x, y) - f(x, y)| -> 0 a.s.   (3.7)

sup_x |f-hat_n(x) - f(x)| -> 0 a.s.   (3.8)

sup_{x,y} |f-hat_n(x|y) - f(x|y)| -> 0 a.s.   (3.9)

Focusing on the forward bootstrap, the above three equations are enough to show that

sup_x |P(X_{n+1} <= x) - P*(X*_{n+1} <= x)| -> 0 a.s.   (3.10)

To argue in favor of asymptotic validity by appealing to Remark 2.3, we have to center the distributions appearing in eq. (3.10). Recall that the predictor of the future value is X-hat_{n+1} = Int x f-hat_n(x|y_n) dx, and the bootstrap predictor is X-hat*_{n+1} = Int x f-hat*_n(x|y_n) dx. Now it is not hard to show that X-hat_{n+1} -> Int x f(x|y_n) dx a.s. and X-hat*_{n+1} -> Int x f(x|y_n) dx as well; details can be found in Pan (2013). Therefore, it follows that X-hat*_{n+1} - X-hat_{n+1} -> 0 a.s., and appealing to Remark 2.3 we have the following.

Corollary 3.1. Under assumptions (alpha_1)-(alpha_3) and (beta_1)-(beta_3), the prediction interval constructed from the forward bootstrap of Algorithm 3.1 is asymptotically valid.

As Appendix A shows, the time-reverse of a Markov process is also a Markov process. Hence, arguments similar to those leading to Corollary 3.1 can be used to prove the following.

Corollary 3.2. Under assumptions (alpha_1)-(alpha_3) and (beta_1)-(beta_3), the prediction interval constructed from the backward bootstrap of Algorithm 3.2 is asymptotically valid.

Remark 3.1. [On bandwidth choice] Bandwidth choice is as difficult as it is important in practice. Rajarshi (1990) used the bandwidth choice h = 0.9 A n^{-1/6}, where A = min(sigma-hat, IQR/1.34), sigma-hat is the estimated standard deviation of the data, and IQR is the interquartile range. However, our simulations indicated that such a bandwidth choice typically gives prediction intervals that exhibit significant over-coverage. Note that the last requirement of assumption (beta_3) implies that n h^4 -> 0; this convergence can be very slow when k is large, but in any case the order of h should be at most O(n^{-1/4}). Therefore, the practical bandwidth choice was modified to h = 0.9 A n^{-1/4}. The cross-validation method for bandwidth selection is not recommended here as it results in an h of order n^{-1/5}.
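A minimal sketch of the modified rule-of-thumb of Remark 3.1, with the exponent left as a parameter so the original n^{-1/6} rate can be recovered:

```python
# Sketch of the bandwidth rule h = 0.9 * A * n^rate of Remark 3.1, where
# A = min(sigma_hat, IQR/1.34); rate = -1/4 is the paper's modified choice,
# rate = -1/6 recovers the choice attributed to Rajarshi (1990).
import numpy as np

def rule_of_thumb_bandwidth(x, rate=-1/4):
    x = np.asarray(x, dtype=float)
    iqr = np.quantile(x, 0.75) - np.quantile(x, 0.25)
    A = min(np.std(x, ddof=1), iqr / 1.34)
    return 0.9 * A * len(x) ** rate
```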

4. The Local Bootstrap for Markov Processes

The Local Bootstrap for Markov processes was proposed by Paparoditis and Politis (2002). Although it is still assumed that the random variables X_1, X_2, ... are continuous, possessing probability densities, the Local Bootstrap generates bootstrap sample paths based on an estimated transition distribution function that is a step function, as opposed to generating bootstrap sample paths based on an estimated transition density as in Section 3; in that sense, Rajarshi's (1990) method is to the Local Bootstrap what the smoothed bootstrap for i.i.d. data is to Efron's (1979) original bootstrap that resamples from the empirical distribution function.

4.1. Forward Local Bootstrap

As before, let x_1, ..., x_n be the observed sample path of a Markov chain of order p, and let Y_t = (X_t, X_{t-1}, ..., X_{t-p+1}) and y_t = (x_t, x_{t-1}, ..., x_{t-p+1}). Following Paparoditis and Politis (2002), the estimator of the one-step ahead transition distribution function will be given by the weighted empirical distribution

F-hat_n(x|y) = Sum_{j=p}^{n-1} 1_{(-inf,x]}(x_{j+1}) W_b(||y - y_j||) / Sum_{m=p}^{n-1} W_b(||y - y_m||)   (4.1)

where W_b(.) = (1/b) W(./b), with W(.) being a bounded, Lipschitz continuous and symmetric probability density kernel in R^p, and b > 0 is a bandwidth parameter tending to zero. The Local Bootstrap generation of pseudo-data will then be based on the estimated conditional distribution F-hat_n(x|y). However, since the latter is a step function, i.e., it is the distribution of a discrete random variable, in what follows it is easier to work with the probability mass function associated with this discrete random variable, as in the sketch below.
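Concretely, a draw from F-hat_n(.|y) amounts to selecting a random index J in {p, ..., n-1} with probability proportional to W_b(||y - y_J||) and returning the observed successor x_{J+1}. A minimal sketch for p = 1, taking W to be a Gaussian density (an illustrative assumption):

```python
# Sketch of one Local Bootstrap draw for p = 1: resample an observed
# successor x_{J+1}, with J drawn proportionally to W_b(y - y_s).
import numpy as np

def local_bootstrap_draw(y, data, b, rng):
    past, future = data[:-1], data[1:]
    w = np.exp(-0.5 * ((y - past) / b) ** 2)   # Gaussian resampling kernel W
    J = rng.choice(len(past), p=w / w.sum())
    return future[J]                           # an observed value, not a smoothed one

# usage: rng = np.random.default_rng(0); x_next = local_bootstrap_draw(x[-1], x, 0.3, rng)
```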

Algorithm 4.1. Forward Local Bootstrap

(1) Choose a resampling kernel W and bandwidth b; here b can be selected by cross-validation. Then calculate the predictor x-hat_{n+1} as

Sum_{j=p}^{n-1} W_b(||y_n - y_j||) x_{j+1} / Sum_{m=p}^{n-1} W_b(||y_n - y_m||).

(2) (a) Set the starting value y*_p to a subseries consisting of any consecutive p values from {x_1, ..., x_n}.
(b) Suppose x*_1, ..., x*_t for t >= p have already been generated. Let J be a discrete random variable taking its values in the set {p, ..., n-1} with probability mass function given by

P(J = s) = W_b(||y*_t - y_s||) / Sum_{m=p}^{n-1} W_b(||y*_t - y_m||).

Then let x*_{t+1} = x_{J+1} for t = p.
(c) Repeat (b) for t = p+1, p+2, ... to generate x*_{p+1}, ..., x*_n.
(d) Calculate the bootstrap predictor x-hat*_{n+1} as

Sum_{j=p}^{n-1} W_b(||y_n - y*_j||) x*_{j+1} / Sum_{m=p}^{n-1} W_b(||y_n - y*_m||),

where y*_t = (x*_t, ..., x*_{t-p+1}).
(e) Re-define y*_n = y_n, and then generate x*_{n+1} = x_{J+1} as in step (b), where J is a discrete random variable taking its values in the set {p, ..., n-1} with probability mass function given by

P(J = s) = W_b(||y_n - y_s||) / Sum_{m=p}^{n-1} W_b(||y_n - y_m||).

(f) Calculate the bootstrap prediction root replicate as x*_{n+1} - x-hat*_{n+1}.
(3) Repeat step (2) B times; the B bootstrap root replicates are collected in the form of an empirical distribution whose alpha-quantile is denoted q(alpha).
(4) The (1 - alpha)100% equal-tailed, forward Local Bootstrap prediction interval for X_{n+1} is given by

[x-hat_{n+1} + q(alpha/2), x-hat_{n+1} + q(1 - alpha/2)].

4.2. Backward Local Bootstrap

As shown in Appendix A, the time-reverse of a Markov process is also a Markov process. So in this section we will employ the Backward Local Bootstrap that was introduced by Paparoditis and Politis (1998) for the purpose of constructing prediction intervals; their example was a first-order autoregressive process with conditionally heteroscedastic errors, i.e., the model

X_t = phi X_{t-1} + epsilon_t sqrt(alpha_0 + alpha_1 X_{t-1}^2),

where the {epsilon_t} are i.i.d. with mean 0 and variance 1. We will now generalize this idea to the Markov(p) case; in what follows, the Backward Local Bootstrap employs an estimate of the backward conditional distribution given by

F-hat_{bn}(x|y) = Sum_{j=1}^{n-p} 1_{(-inf,x]}(x_j) W_{b'}(||y - y_{j+p}||) / Sum_{m=1}^{n-p} W_{b'}(||y - y_{m+p}||)   (4.2)

where b' is the backward bandwidth, which can be different from the forward bandwidth b.

Algorithm 4.2. Backward Local Bootstrap

(1) Same as the Forward Local Bootstrap in Algorithm 4.1.
(2) (a) Set the starting value y*_n = y_n.
(b) Suppose y*_{t+p} has already been generated, where 1 <= t <= n-p. Let J be a discrete random variable taking its values in the set {1, 2, ..., n-p} with probability mass function given by

P(J = s) = W_{b'}(||y*_{t+p} - y_{s+p}||) / Sum_{m=1}^{n-p} W_{b'}(||y*_{t+p} - y_{m+p}||).

Then let x*_t = x_J.
(c) Repeat (b) to generate x*_{n-p}, ..., x*_2, x*_1 backwards in time, i.e., first generate x*_{n-p}, then generate x*_{n-p-1}, etc.
(d) Let J be a discrete random variable taking its values in the set {p, p+1, ..., n-1} with probability mass function given by

P(J = s) = W_b(||y_n - y_s||) / Sum_{m=p}^{n-1} W_b(||y_n - y_m||).

Then let x*_{n+1} = x_{J+1}. [Note: this is again going forward in time exactly as in the Forward Local Bootstrap Algorithm 4.1.]

(e) Calculate the bootstrap predictor x-hat*_{n+1} by

Sum_{j=p}^{n-1} W_b(||y_n - y*_j||) x*_{j+1} / Sum_{m=p}^{n-1} W_b(||y_n - y*_m||).

(f) Calculate the bootstrap prediction root replicate as x*_{n+1} - x-hat*_{n+1}.
(3)-(4) Same as the steps of the Forward Local Bootstrap Algorithm 4.1.

4.3. Asymptotic Properties

Paparoditis and Politis (2002), under their assumptions (A1)-(A3) and (B1)-(B2), proved that

sup_{x,y} |F-hat_n(x|y) - F(x|y)| -> 0 a.s.   (4.3)

where the transition distribution estimator F-hat_n(x|y) was defined in (4.1); this is sufficient to show eq. (3.10) for the Forward Local Bootstrap. The argument regarding X-hat*_{n+1} - X-hat_{n+1} -> 0 is similar to that in Section 3.3, and hence asymptotic validity follows.

Corollary 4.1. Under assumptions (A1)-(A3) and (B1)-(B2) of Paparoditis and Politis (2002), the prediction interval constructed from the Forward Local Bootstrap of Algorithm 4.1 is asymptotically valid.

Due to Appendix A, it is easy to see that the same consistency can be shown for the backwards transition distribution estimator F-hat_{bn}(x|y) defined in (4.2), i.e., that

sup_{x,y} |F-hat_{bn}(x|y) - F_b(x|y)| -> 0 a.s.,

from which asymptotic validity of the Backward Local Bootstrap follows.

Corollary 4.2. Under assumptions (A1)-(A3) and (B1)-(B2) of Paparoditis and Politis (2002), the prediction interval constructed from the Backward Local Bootstrap of Algorithm 4.2 is asymptotically valid.

5. Hybrid Backward Markov Bootstrap for Nonparametric Autoregression

In this section only, we will consider the special case where our Markov(p) process is generated via a nonparametric autoregression model, i.e., one of the two models below:

AR with homoscedastic errors: X_t = m(X_{t-1}, ..., X_{t-p}) + epsilon_t with epsilon_t ~ i.i.d. (0, sigma^2)   (5.1)

AR with heteroscedastic errors: X_t = m(X_{t-1}, ..., X_{t-p}) + sigma(X_{t-1}, ..., X_{t-p}) epsilon_t with epsilon_t ~ i.i.d. (0, 1).   (5.2)

As before, we assume that {X_t} is strictly stationary; we further need to assume causality, i.e., that epsilon_t is independent of {X_{t-1}, X_{t-2}, ...} for all t. As usual, the recursions (5.1) and (5.2) are meant to run forward in time, i.e., X_{p+1} is generated given an initial assignment for X_1, ..., X_p; then, X_{p+2} is generated given its own p-past, etc.

Recently, Pan and Politis (2015) presented a unified approach for prediction intervals based on forward, model-based resampling under one of the above two models. It was noted that a backward model-based bootstrap is not feasible except in the special case where the conditional expectation function m(X_{t-1}, ..., X_{t-p}) is affine in its arguments, i.e., a linear AR model; see e.g. Breidt et al. (1995) and Thombs and Schucany (1990). Using the ideas presented in Sections 3 and 4, we can now propose a hybrid Backward Markov Bootstrap for nonparametric autoregression in which forward resampling is done using the model, i.e., eq. (5.1) or (5.2), whereas the backward resampling is performed using the Markov property only.

5.1. Hybrid Backward Markov Bootstrap Algorithms

Consider Markov processes generated from either the homoscedastic model (5.1) or the heteroscedastic model (5.2). Given a sample {x_1, x_2, ..., x_n}, the algorithms of the hybrid backward Markov bootstrap based on transition density, for the nonparametric model with i.i.d. errors and with heteroscedastic errors, are described in Algorithms 5.1 and 5.2 respectively; the corresponding algorithms based on the Local Bootstrap are described in Algorithms 5.3 and 5.4. (A short sketch of the kernel estimator (5.3) follows Algorithm 5.1 below.)

Algorithm 5.1. Hybrid Backward Markov bootstrap based on transition density for nonparametric model with homoscedastic errors

(1) Select a bandwidth h and construct the kernel estimates m-hat(y_i) for i = p, ..., n-1, where

m-hat(y_i) = Sum_{t=p}^{n-1} K(||y_i - y_t||/h) x_{t+1} / Sum_{t=p}^{n-1} K(||y_i - y_t||/h);   (5.3)

as before, y_t = (x_t, x_{t-1}, ..., x_{t-p+1}), and ||.|| is a norm in R^p.

(2) Compute the residuals: epsilon-hat_i = x_i - m-hat(y_{i-1}), for i = p+1, ..., n.

(3) Center the residuals: r-hat_i = epsilon-hat_i - (n-p)^{-1} Sum_{t=p+1}^{n} epsilon-hat_t, for i = p+1, ..., n; let the empirical distribution of the r-hat_t be denoted by F-hat_epsilon.

(a) Construct the backward transition density estimate f-hat_{bn} as in eq. (3.5).
(b) Let y*_n = y_n.
(c) Generate x*_{n-p} ~ f-hat_{bn}(. | y*_n = y_n). Repeat to generate x*_t ~ f-hat_{bn}(.|y*_{t+p}) for t = n-p, ..., 1 backwards in time, i.e., first for t = n-p, then for t = n-p-1, etc.
(d) Compute the future bootstrap observation X*_{n+1} by the AR formula X*_{n+1} = m-hat(y*_n) + epsilon*_{n+1} = m-hat(y_n) + epsilon*_{n+1}, where epsilon*_{n+1} is generated from F-hat_epsilon. Then re-estimate m(.) based on the pseudo data, i.e.,

m-hat*(y) = Sum_{i=p}^{n-1} K(||y - y*_i||/h) x*_{i+1} / Sum_{i=p}^{n-1} K(||y - y*_i||/h),

and let X-hat*_{n+1} = m-hat*(y*_n) = m-hat*(y_n).
(e) Calculate the bootstrap root replicate as X*_{n+1} - X-hat*_{n+1}.

(4) Steps (a)-(e) above are repeated B times, and the B bootstrap root replicates are collected in the form of an empirical distribution whose alpha-quantile is denoted q(alpha).

(5) Then, a (1 - alpha)100% equal-tailed prediction interval for X_{n+1} is given by

[m-hat(y_n) + q(alpha/2), m-hat(y_n) + q(1 - alpha/2)].
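A minimal sketch of the kernel autoregression estimator (5.3) for p = 1, i.e., a Nadaraya-Watson smoother of x_{t+1} on x_t, together with the centered residuals of steps (2)-(3); the Gaussian kernel K is an illustrative choice:

```python
# Sketch of eq. (5.3) for p = 1: kernel-weighted average of the observed
# successors x_{t+1}, with weights K(|y - x_t|/h).
import numpy as np

def m_hat(y, data, h):
    past, future = data[:-1], data[1:]
    w = np.exp(-0.5 * ((y - past) / h) ** 2)
    return np.sum(w * future) / np.sum(w)

def centered_residuals(data, h):
    """Steps (2)-(3) of Algorithm 5.1: fitted residuals, then centering."""
    eps = np.array([data[i] - m_hat(data[i - 1], data, h)
                    for i in range(1, len(data))])
    return eps - eps.mean()
```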

Algorithm 5.2. Hybrid Backward Markov bootstrap based on transition density for nonparametric model with heteroscedastic errors

(1) Select the bandwidth h and construct the estimates {m-hat(y_i), sigma-hat(y_i)} for i = p, ..., n-1, where

m-hat(y_i) = Sum_{t=p}^{n-1} K(||y_i - y_t||/h) x_{t+1} / Sum_{t=p}^{n-1} K(||y_i - y_t||/h),

sigma-hat^2(y_i) = Sum_{t=p}^{n-1} K(||y_i - y_t||/h) (x_{t+1} - m-hat(y_t))^2 / Sum_{t=p}^{n-1} K(||y_i - y_t||/h).

(2) Compute the residuals:

epsilon-hat_i = (x_i - m-hat(y_{i-1})) / sigma-hat(y_{i-1}), for i = p+1, ..., n.

(3) This step is similar to step (3) in Algorithm 5.1; the only difference is in step (d), where the future bootstrap observation X*_{n+1} is computed as follows:

X*_{n+1} = m-hat_g(y*_n) + sigma-hat_g(y*_n) epsilon*_{n+1} = m-hat_g(y_n) + sigma-hat_g(y_n) epsilon*_{n+1}.

Here m-hat_g and sigma-hat_g are over-smoothed estimates of m and sigma, computed in the same way as m-hat and sigma-hat but using a bandwidth g that is bigger than h; see Pan and Politis (2015) for a discussion.

(4)-(5) Same as those in Algorithm 5.1.

Algorithm 5.3. Hybrid backward Local Bootstrap for nonparametric model with homoscedastic errors

The algorithm is identical to Algorithm 5.1 with the exception of steps (3)(a) to (c), which have to be changed as follows.

(3) (a) Select a resampling bandwidth b and kernel W.
(b) Let y*_n = y_n. Suppose y*_{t+p} has already been generated for 1 <= t <= n-p. Let J be a discrete random variable taking its values in the set {1, 2, ..., n-p} with probability mass function given by

P(J = s) = W_b(||y*_{t+p} - y_{s+p}||) / Sum_{m=1}^{n-p} W_b(||y*_{t+p} - y_{m+p}||).

(c) Then x*_t = x_J. Repeat (b) to generate x*_{n-p}, ..., x*_2, x*_1 backwards in time.

Algorithm 5.4. Hybrid backward Local Bootstrap for nonparametric model with heteroscedastic errors

(1)-(2) Same as the corresponding steps in Algorithm 5.2.
(3) (a)-(c) Same as steps (a)-(c) in Algorithm 5.3.
(d)-(e) Same as the corresponding steps in Algorithm 5.2.
(4)-(5) Same as the corresponding steps in Algorithm 5.2.

Remark 5.1. The hybrid algorithms use model-based resampling based on fitted residuals. As discussed by Pan and Politis (2015), usage of predictive residuals may be preferable. According to the two models (5.1) or (5.2), the predictive residuals are respectively defined as

epsilon-hat^{(t)}_t = x_t - m-hat^{(t)}(y_{t-1})   or   epsilon-hat^{(t)}_t = (x_t - m-hat^{(t)}(y_{t-1})) / sigma-hat^{(t)}(y_{t-1}),

where m-hat^{(t)} and sigma-hat^{(t)} are smoothing estimators calculated from the original dataset having the t-th point deleted. Finally, to define hybrid backward bootstrap intervals based on predictive residuals we just need to replace the fitted residuals {epsilon-hat_i} in step (2) of the above Algorithms by the predictive residuals {epsilon-hat^{(t)}_t}.

6. Bootstrap Prediction Intervals for Markov Processes Based on the Model-Free Prediction Principle

We now return to the setup of data from a general Markov process that does not necessarily satisfy a model equation such as (5.1) or (5.2). In what follows, we will introduce the Model-Free Bootstrap for Markov processes; this is a novel approach that stems from the Model-Free Prediction Principle of Politis (2013). The key idea is to transform a given complex dataset into one that is i.i.d., and therefore easier to handle; having done that, the prediction problem is greatly simplified, and this includes the construction of prediction intervals. In the case of a Markov process, this simplification can be practically accomplished using the Rosenblatt (1952) transformation. Instead of generating one-step ahead pseudo data by some estimated conditional distribution, e.g. the transition density given in (3.3) or the transition distribution function given in (4.1), the Model-Free Bootstrap resamples the transformed i.i.d. data, and then transforms them back to obtain the desired one-step ahead prediction.

Note that the bootstrap based on kernel estimates of the transition density of Section 3, and the Local Bootstrap of Section 4, can also be considered model-free methods as they apply in the absence of a model equation such as (5.1) or (5.2). The term Model-Free Bootstrap specifically refers to the transformation approach stemming from the Model-Free Prediction Principle of Politis (2013).

6.1. Theoretical Transformation

Let X = {X_1, X_2, ...} be a stationary time series from a Markov process of order p, and Y_{t-1} = (X_{t-1}, ..., X_{t-p}). Given Y_{t-1} = y in R^p, we denote the conditional distribution of X_t as

D_y(x) = P(X_t <= x | Y_{t-1} = y).   (6.1)

This is the same distribution discussed in eq. (2.1); changing the notation will help us differentiate between the different methods. For a positive integer i < p, we also define the distributions with partial conditioning as follows:

D_{y,i}(x) = P(X_t <= x | Y^{(i)}_{t-1} = y)   (6.2)

where Y^{(i)}_{t-1} = (X_{t-1}, ..., X_{t-i}) and y in R^i. In this notation, we can denote the unconditional distribution as D_{y,0}(x) = P(X_t <= x), which does not depend on y. Throughout this section, we assume that, for any y and i, the function D_{y,i}(.) is continuous and invertible over its support.

A transformation from our Markov(p) dataset X_1, ..., X_n to an i.i.d. dataset eta_1, ..., eta_n can now be constructed as follows. Let

eta_1 = D_{y,0}(X_1); eta_2 = D_{Y^{(1)}_1,1}(X_2); eta_3 = D_{Y^{(2)}_2,2}(X_3); ...; eta_p = D_{Y^{(p-1)}_{p-1},p-1}(X_p)   (6.3)

and

eta_t = D_{Y_{t-1}}(X_t) for t = p+1, p+2, ..., n.   (6.4)

Note that the transformation from the vector (X_1, ..., X_m) to the vector (eta_1, ..., eta_m) is one-to-one and invertible for any natural number m by construction. Hence, the event {X_1 = x_1, ..., X_m = x_m} is identical to the event {eta_1 = zeta_1, ..., eta_m = zeta_m} when the construction of the zeta_t follows (6.3) and (6.4), i.e.,

zeta_1 = D_{y,0}(x_1); zeta_2 = D_{y^{(1)}_1,1}(x_2); zeta_3 = D_{y^{(2)}_2,2}(x_3); ...; zeta_p = D_{y^{(p-1)}_{p-1},p-1}(x_p)   (6.5)

and

zeta_t = D_{y_{t-1}}(x_t) for t = p+1, p+2, ..., n   (6.6)

where y_{t-1} = (x_{t-1}, ..., x_{t-p}) and y^{(i)}_{t-1} = (x_{t-1}, ..., x_{t-i}).

It is not difficult to see that the random variables eta_1, ..., eta_n are i.i.d. Uniform(0,1); in fact, this is an application of the Rosenblatt (1952) transformation in the case of Markov(p) sequences. For example, the fact that eta_1 is Uniform(0,1) is simply due to the probability integral transform. Now for t > p, we have

P(eta_t <= z | eta_{t-1} = zeta_{t-1}, ..., eta_1 = zeta_1) = P(eta_t <= z | X_{t-1} = x_{t-1}, ..., X_1 = x_1)
= P(eta_t <= z | Y_{t-1} = y_{t-1}, X_{t-p-1} = x_{t-p-1}, ..., X_1 = x_1)

by the discussion preceding eq. (6.5). Letting y be a short-hand for y_{t-1}, we have:

P(eta_t <= z | eta_{t-1} = zeta_{t-1}, ..., eta_1 = zeta_1)
= P(D_y(X_t) <= z | Y_{t-1} = y, X_{t-p-1} = x_{t-p-1}, ..., X_1 = x_1)
= P(X_t <= D_y^{-1}(z) | Y_{t-1} = y)   (by the Markov property)
= D_y(D_y^{-1}(z)) = z,

which does not depend on y. Hence, P(eta_t <= z | eta_{t-1} = zeta_{t-1}, ..., eta_1 = zeta_1) = z, i.e., for t > p, eta_t is a random variable that is independent of its own past and has a Uniform(0,1) distribution. The same is true for eta_t with 1 < t <= p; the argument is similar to the above but using the D_{y,t-1}(.) distribution instead of D_y(.). All in all, it should be clear that the random variables eta_1, ..., eta_n are i.i.d. Uniform(0,1).
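To illustrate the transformation in a case where D_y is available in closed form, consider a Gaussian AR(1) model, an assumption made purely for this toy example: there D_y(x) = Phi((x - phi*y)/sigma), so eq. (6.4) reduces to eta_t = Phi((X_t - phi*X_{t-1})/sigma), and the eta_t should behave like i.i.d. Uniform(0,1) draws.

```python
# Toy illustration of the Rosenblatt transformation of Section 6.1 for a
# Gaussian AR(1): eta_t = Phi((X_t - phi*X_{t-1})/sigma) should be
# approximately i.i.d. Uniform(0,1).  phi, sigma, n are illustrative values.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
phi, sigma, n = 0.6, 1.0, 5000
x = np.empty(n); x[0] = rng.standard_normal()
for t in range(1, n):                            # simulate the AR(1) path
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()

eta = norm.cdf((x[1:] - phi * x[:-1]) / sigma)   # eq. (6.4) with the true D_y
print(eta.mean(), eta.var())                     # approx. 1/2 and 1/12
```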

6.2. Estimating the Transformation from Data

To estimate the theoretical transformation from data, we would need to estimate the distributions D_{y,i}(.) for i = 0, 1, ..., p-1 and D_y(.). Note, however, that D_{y,i}(.) for i < p can in principle be computed from D_y(.), since the latter uniquely specifies the whole distribution of the stationary Markov process. Hence, it should be sufficient to just estimate D_y(.) from our data. Another way of seeing this is to note that the p variables in eq. (6.3) can be considered as edge effects or "initial conditions"; the crucial part of the transformation is given by eq. (6.4), i.e., the one based on D_y(.).

Given observations x_1, ..., x_n, we can estimate D_y(x) by local averaging methods such as the kernel estimator

D-hat_y(x) = Sum_{i=p+1}^{n} 1{x_i <= x} K(||y - y_{i-1}||/h) / Sum_{k=p+1}^{n} K(||y - y_{k-1}||/h).   (6.7)

Note that D-hat_y in (6.7) is a step function; we can use linear interpolation on this step function to produce an estimate D-bar_y that is piecewise linear and strictly increasing (and therefore invertible); see Politis (2010), Section 4.1, for details on this linear interpolation. Consequently, we define

u_t = D-bar_{y_{t-1}}(x_t), for t = p+1, ..., n.   (6.8)

The estimator D-hat_y is consistent for D_y under regularity conditions; see eq. (4.3) and the associated discussion. Furthermore, the consistency of D-bar_y follows from its close proximity to D-hat_y; see the related discussion in Politis (2010). It then follows that u_t is approximately equal to eta_t, where eta_t was defined in Section 6.1, and thus {u_t for t = p+1, ..., n} are approximately i.i.d. Uniform(0,1). Hence, the goal of transforming our data x_1, ..., x_n to a sequence of (approximately) i.i.d. random variables u_t has been achieved; note that the initial conditions u_1, ..., u_p were not explicitly generated in the above as they are not needed in the Model-Free bootstrap algorithms.
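A minimal sketch of eq. (6.7)-(6.8) for p = 1 (Gaussian K as an illustrative choice); for brevity the sketch evaluates the step function D-hat directly, whereas the paper's D-bar additionally interpolates it linearly between jumps:

```python
# Sketch of eq. (6.7)-(6.8) for p = 1: weighted-empirical conditional CDF
# D_hat_y(x), and the transformed data u_t = D_{y_{t-1}}(x_t).
import numpy as np

def D_hat(x, y, data, h):
    past, future = data[:-1], data[1:]
    w = np.exp(-0.5 * ((y - past) / h) ** 2)    # kernel weights K(|y - y_{i-1}|/h)
    return np.sum(w * (future <= x)) / np.sum(w)

def transform_to_uniform(data, h):
    """u_{p+1}, ..., u_n of eq. (6.8); approximately i.i.d. Uniform(0,1)."""
    return np.array([D_hat(data[t], data[t - 1], data, h)
                     for t in range(1, len(data))])
```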

6.3. Basic Algorithm for Model-Free Prediction Intervals

Given observations x_1, ..., x_n, the basic model-free (MF) algorithm for constructing the prediction interval can be described as follows. As before, y*_{t-1} = (x*_{t-1}, ..., x*_{t-p}) and y_{t-1} = (x_{t-1}, ..., x_{t-p}).

Algorithm 6.1. Model-Free (MF) Method

(1) Use eq. (6.8) to obtain the transformed data u_{p+1}, ..., u_n.
(2) Calculate x-hat_{n+1}, the predictor of x_{n+1}, by

x-hat_{n+1} = (n-p)^{-1} Sum_{t=p+1}^{n} D-bar^{-1}_{y_n}(u_t).   (6.9)

(3) (a) Resample randomly (with replacement) the transformed data u_{p+1}, ..., u_n to create the pseudo data u*_{-M}, u*_{-M+1}, ..., u*_0, u*_1, ..., u*_{n-1}, u*_n and u*_{n+1} for some large positive integer M.
(b) Draw x*_{-M}, ..., x*_{-M+p-1} randomly as any consecutive p values of the dataset (x_1, ..., x_n); let y*_{-M+p-1} = (x*_{-M+p-1}, ..., x*_{-M}).
(c) Generate x*_t = D-bar^{-1}_{y*_{t-1}}(u*_t) for t = -M+p, ..., n.
(d) Calculate the bootstrap future value x*_{n+1} = D-bar^{-1}_{y_n}(u*_{n+1}).
(e) Calculate the bootstrap predictor x-hat*_{n+1} = (n-p)^{-1} Sum_{t=p+1}^{n} D-bar*^{-1}_{y_n}(u*_t), where D-bar*_y is a linearly interpolated version of the step function given by

D-hat*_y(x) = Sum_{i=p+1}^{n} 1{x*_i <= x} K(||y - y*_{i-1}||/h) / Sum_{k=p+1}^{n} K(||y - y*_{k-1}||/h).

(f) Calculate the bootstrap root x*_{n+1} - x-hat*_{n+1}.
(4) Repeat step (3) B times; the B bootstrap root replicates are collected in the form of an empirical distribution whose alpha-quantile is denoted q(alpha).
(5) The (1 - alpha)100% equal-tailed prediction interval for X_{n+1} is given by

[x-hat_{n+1} + q(alpha/2), x-hat_{n+1} + q(1 - alpha/2)].

Some remarks are in order.

Remark 6.1. Algorithm 6.1 is in effect a Forward bootstrap algorithm for prediction intervals according to the discussion of Section 2. Constructing a backward bootstrap analog of Algorithm 6.1 is straightforward based on the Markov property of the reversed Markov process shown in Appendix A. One would then need a reverse construction of the theoretical transformation of Section 6.1. To elaborate on the latter, we would instead let eta_t = G_{Y_{t+p}}(X_t) for t = n-p, n-p-1, ..., 1, where G_y(x) = P(X_t <= x | Y_{t+p} = y) is the backwards analog of D_y(x); the eta_t for t = n, ..., n-p+1 can be generated using the backwards analogs of D_{y,i}(x). The details are straightforward and are omitted, especially since the finite-sample performance of the two approaches is practically indistinguishable.

Remark 6.2. As mentioned in Section 2, there exist different approximations to the conditional expectation which serves as the L_2-optimal predictor. The usual one is the kernel smoothed estimator (5.3), but eq. (6.9) gives an alternative approximation; we have used it in Algorithm 6.1 because it follows directly from the Model-Free Prediction Principle of Politis (2013). However, the two approximations are asymptotically equivalent, and thus can be used interchangeably. To see why, note that

(n-p)^{-1} Sum_{t=p+1}^{n} D-bar^{-1}_{y_n}(u_t) ~ Int_0^1 D-bar^{-1}_{y_n}(u) du ~ Int x f-hat_n(x|y_n) dx ~ m-hat(y_n),

and similarly Int_0^1 D-hat^{-1}_{y_n}(u) du ~ (n-p)^{-1} Sum_{t=p+1}^{n} D-hat^{-1}_{y_n}(u_t), where D-hat^{-1}_y(.) indicates the quantile inverse of the step function D-hat_y(.).

Remark 6.3. Recall that D-hat_y(x) is a local average estimator, i.e., averaging the indicator 1{x_i <= x} over data vectors Y_t that are close to y. If y is outside the range of the data vectors Y_t, then obviously the estimator D-hat_y(x) cannot be constructed, and the same is true for D-bar_y(x). Similarly, if y is at the edges of the range of the Y_t, e.g., within h of being outside the range, then D-hat_y(x) and D-bar_y(x) will not be very accurate. Step (1) of Algorithm 6.1 can then be modified to drop the u_i's that are obtained from an x_i whose y_{i-1} is within h of the boundary; see Politis (2013) for a related discussion.

Remark 6.4. [Discrete-valued Markov processes] Since the transformed data u_{p+1}, ..., u_n are approximately i.i.d. Uniform(0,1), the resampling in step (3)(a) of Algorithm 6.1 could alternatively be done using the Uniform distribution, i.e., generate u*_{-M}, u*_{-M+1}, ..., u*_0, u*_1, ..., u*_{n-1}, u*_n and u*_{n+1} as i.i.d. Uniform(0,1). Algorithm 6.1 still works fine with this choice but then it cannot obviously be extended to include the use of predictive residuals as Section 6.4 proposes; see also the discussion in the Rejoinder of Politis (2013). Interestingly, generating the u*_i as i.i.d. Uniform(0,1), and replacing all occurrences of D-bar^{-1}_y(.) by the quantile inverse D-hat^{-1}_y(.) in Algorithm 6.1, makes the algorithm valid for the situation where the time series X_t is discrete-valued, i.e., when the true D_y(x) is indeed a step function in x.

6.4. Better Model-Free Prediction Intervals: the Predictive Model-Free Method

In Pan and Politis (2015), model-based predictive residuals were used instead of the fitted residuals to improve the performance of the prediction interval. From eq. (6.7), we see that the conditional distribution of interest is D_{y_n}(x) = P(X_t <= x | Y_{t-1} = y_n), which is estimated by

D-hat_{y_n}(x) = Sum_{i=p+1}^{n} 1{x_i <= x} K(||y_n - y_{i-1}||/h) / Sum_{k=p+1}^{n} K(||y_n - y_{k-1}||/h).

Since x_{n+1} is not observed, the above estimated conditional distribution for x_{n+1} treats the pair (y_n, x_{n+1}) as an out-of-sample pair. To mimic this situation in the model-free set-up, we can use the trick of Pan and Politis (2015), i.e., to calculate an estimate of D_{y_n}(x) based on a dataset that excludes the pair (y_{t-1}, x_t) for t = p+1, ..., n. In other words, define the delete-one estimator

D-hat^{(t)}_{y_{t-1}}(x_t) = Sum_{i=p+1, i != t}^{n} 1{x_i <= x_t} K(||y_{t-1} - y_{i-1}||/h) / Sum_{k=p+1, k != t}^{n} K(||y_{t-1} - y_{k-1}||/h), for t = p+1, ..., n.

Linear interpolation on D-hat^{(t)}_y(x) gives D-bar^{(t)}_y(x), and we can then define

u^{(t)}_t = D-bar^{(t)}_{y_{t-1}}(x_t);   (6.10)

here, the u^{(t)}_t serve as the analogs of the predictive residuals studied in Pan and Politis (2015) in a nonparametric regression setup.

Algorithm 6.2. Predictive Model-Free (PMF) Method

The algorithm is identical to Algorithm 6.1 after substituting u^{(p+1)}_{p+1}, ..., u^{(n)}_n in place of u_{p+1}, ..., u_n.

Remark 6.5. If y_{t-1} is far from the other y_i's (for i = p+1, ..., n and i != t), then the denominator of D-hat^{(t)}_{y_{t-1}}(x_t) can be zero, which leads to an undefined value of u^{(t)}_t. We omit all these undefined u^{(t)}_t's in the practical application of the above algorithm.
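A minimal sketch of the delete-one construction (6.10) for p = 1, again with a Gaussian K as an illustrative choice; undefined values are dropped as prescribed in Remark 6.5:

```python
# Sketch of eq. (6.10) for p = 1: the pair (y_{t-1}, x_t) is excluded when
# estimating its own conditional distribution, mimicking the out-of-sample
# situation of predicting x_{n+1}.
import numpy as np

def u_predictive(data, h):
    past, future = data[:-1], data[1:]
    out = []
    for t in range(len(future)):
        keep = np.arange(len(future)) != t               # delete the t-th pair
        w = np.exp(-0.5 * ((past[t] - past[keep]) / h) ** 2)
        if w.sum() == 0:                                  # Remark 6.5: drop undefined values
            continue
        out.append(np.sum(w * (future[keep] <= future[t])) / w.sum())
    return np.array(out)
```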

6.5. Smoothed Model-Free Method

In Section 6.2, we estimated the transition distribution D_y(x) = P(X_t <= x | Y_{t-1} = y) by D-hat_y(x) as defined in (6.7). Noting that D_y(x) is, by assumption, continuous in x while D-hat_y(x) is not, the linearly interpolated, strictly increasing estimator D-bar_y(x) was used instead. However, D-bar_y(x) is piecewise linear, and therefore not smooth in the argument x. In this section, we employ an alternative estimator of the conditional transition distribution that is smooth in x.

Note that the function 1{x_i <= x} is the cumulative distribution function of a point mass distribution. To smooth this point mass distribution, we substitute 1{x_i <= x} in eq. (6.7) with Lambda((x - x_i)/h_0), where Lambda(.) is an absolutely continuous, strictly increasing cumulative distribution function, and h_0 is a positive bandwidth parameter. The new estimator D-tilde_y(x) is defined by

D-tilde_y(x) = Sum_{i=p+1}^{n} Lambda((x - x_i)/h_0) K(||y - y_{i-1}||/h) / Sum_{k=p+1}^{n} K(||y - y_{k-1}||/h),   (6.11)

and the transformed data {v_t for t = p+1, ..., n} can be calculated by

v_t = D-tilde_{y_{t-1}}(x_t).   (6.12)

Substituting D-tilde_y(x) for D-bar_y(x) and {v_t} for {u_t} in Algorithm 6.1, we have the Smoothed Model-Free method as follows.

Algorithm 6.3. Smoothed Model-Free (SMF) Method

(1) Use eq. (6.12) to obtain the transformed data v_{p+1}, ..., v_n.
(2) Calculate x-hat_{n+1}, the predictor of x_{n+1}, by

x-hat_{n+1} = (n-p)^{-1} Sum_{t=p+1}^{n} D-tilde^{-1}_{y_n}(v_t).

(3) (a) Resample randomly (with replacement) the transformed data v_{p+1}, ..., v_n to create the pseudo data v*_{-M}, v*_{-M+1}, ..., v*_0, v*_1, ..., v*_{n-1}, v*_n and v*_{n+1} for some large positive integer M.
(b) Draw x*_{-M}, ..., x*_{-M+p-1} as any consecutive p values of the dataset (x_1, ..., x_n); let y*_{-M+p-1} = (x*_{-M+p-1}, ..., x*_{-M}).
(c) Generate x*_t = D-tilde^{-1}_{y*_{t-1}}(v*_t) for t = -M+p, ..., n.
(d) Calculate the bootstrap future value x*_{n+1} = D-tilde^{-1}_{y_n}(v*_{n+1}).
(e) Calculate x-hat*_{n+1} = (n-p)^{-1} Sum_{t=p+1}^{n} D-tilde*^{-1}_{y_n}(v*_t), where

D-tilde*_y(x) = Sum_{i=p+1}^{n} Lambda((x - x*_i)/h_0) K(||y - y*_{i-1}||/h) / Sum_{k=p+1}^{n} K(||y - y*_{k-1}||/h).

(f) Calculate the bootstrap root x*_{n+1} - x-hat*_{n+1}.
(4) Repeat step (3) B times; the B bootstrap root replicates are collected in the form of an empirical distribution whose alpha-quantile is denoted q(alpha).
(5) The (1 - alpha)100% equal-tailed prediction interval for X_{n+1} is given by

[x-hat_{n+1} + q(alpha/2), x-hat_{n+1} + q(1 - alpha/2)].

Remark 6.6. As in Remark 6.3, Step (1) of Algorithm 6.3 can be modified to drop the v_i's that are obtained from an x_i whose y_{i-1} is within h of the boundary.

Remark 6.7. [On Bandwidth Choice] As suggested by Li and Racine (2007), Chapter 6.2, the optimal smoothing of D-tilde_y(x) with respect to Mean Squared Error (MSE) requires that h_0 = O_p(n^{-2/5}) and h = O_p(n^{-1/5}); hence, in the algorithm's implementation, we chose h through cross-validation, and then let h_0 = h^2.

As in Section 6.4, we can also use the delete-x_t estimator

D-tilde^{(t)}_{y_{t-1}}(x_t) = Sum_{i=p+1, i != t}^{n} Lambda((x_t - x_i)/h_0) K(||y_{t-1} - y_{i-1}||/h) / Sum_{k=p+1, k != t}^{n} K(||y_{t-1} - y_{k-1}||/h), for t = p+1, ..., n,

in order to construct the transformed data

v^{(t)}_t = D-tilde^{(t)}_{y_{t-1}}(x_t) for t = p+1, ..., n.

This leads to the Predictive Smoothed Model-Free algorithm.

Algorithm 6.4. Predictive Smoothed Model-Free (PSMF) Method

The algorithm is identical to Algorithm 6.3 after substituting v^{(p+1)}_{p+1}, ..., v^{(n)}_n in place of v_{p+1}, ..., v_n.
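A minimal sketch of the smoothed estimator (6.11)-(6.12) used by Algorithms 6.3 and 6.4, for p = 1, taking Lambda to be the standard normal CDF and h_0 = h^2 as in Remark 6.7 (both illustrative choices):

```python
# Sketch of eq. (6.11)-(6.12) for p = 1: like D_hat of eq. (6.7), but the
# indicator 1{x_i <= x} is replaced by the smooth CDF Lambda((x - x_i)/h0),
# making D_tilde_y(x) smooth and strictly increasing in x.
import numpy as np
from scipy.stats import norm

def D_tilde(x, y, data, h, h0):
    past, future = data[:-1], data[1:]
    w = np.exp(-0.5 * ((y - past) / h) ** 2)     # kernel K weights in y
    return np.sum(w * norm.cdf((x - future) / h0)) / np.sum(w)

def transform_v(data, h):
    """v_{p+1}, ..., v_n of eq. (6.12), with h0 = h^2 per Remark 6.7."""
    h0 = h ** 2
    return np.array([D_tilde(data[t], data[t - 1], data, h, h0)
                     for t in range(1, len(data))])
```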


More information

REVIEW LAB ANSWER KEY

REVIEW LAB ANSWER KEY REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g

More information

Chapter 4: Numerical Methods for Common Mathematical Problems

Chapter 4: Numerical Methods for Common Mathematical Problems 1 Capter 4: Numerical Metods for Common Matematical Problems Interpolation Problem: Suppose we ave data defined at a discrete set of points (x i, y i ), i = 0, 1,..., N. Often it is useful to ave a smoot

More information

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems 5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we

More information

Preface. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Preface. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. Preface Here are my online notes for my course tat I teac ere at Lamar University. Despite te fact tat tese are my class notes, tey sould be accessible to anyone wanting to learn or needing a refreser

More information

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY (Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative

More information

DEPARTMENT MATHEMATIK SCHWERPUNKT MATHEMATISCHE STATISTIK UND STOCHASTISCHE PROZESSE

DEPARTMENT MATHEMATIK SCHWERPUNKT MATHEMATISCHE STATISTIK UND STOCHASTISCHE PROZESSE U N I V E R S I T Ä T H A M B U R G A note on residual-based empirical likeliood kernel density estimation Birte Musal and Natalie Neumeyer Preprint No. 2010-05 May 2010 DEPARTMENT MATHEMATIK SCHWERPUNKT

More information

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department

More information

Applications of the van Trees inequality to non-parametric estimation.

Applications of the van Trees inequality to non-parametric estimation. Brno-06, Lecture 2, 16.05.06 D/Stat/Brno-06/2.tex www.mast.queensu.ca/ blevit/ Applications of te van Trees inequality to non-parametric estimation. Regular non-parametric problems. As an example of suc

More information

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds.

Flavius Guiaş. X(t + h) = X(t) + F (X(s)) ds. Numerical solvers for large systems of ordinary differential equations based on te stocastic direct simulation metod improved by te and Runge Kutta principles Flavius Guiaş Abstract We present a numerical

More information

Gradient Descent etc.

Gradient Descent etc. 1 Gradient Descent etc EE 13: Networked estimation and control Prof Kan) I DERIVATIVE Consider f : R R x fx) Te derivative is defined as d fx) = lim dx fx + ) fx) Te cain rule states tat if d d f gx) )

More information

Math 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0

Math 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0 3.4: Partial Derivatives Definition Mat 22-Lecture 9 For a single-variable function z = f(x), te derivative is f (x) = lim 0 f(x+) f(x). For a function z = f(x, y) of two variables, to define te derivatives,

More information

Kernel Density Based Linear Regression Estimate

Kernel Density Based Linear Regression Estimate Kernel Density Based Linear Regression Estimate Weixin Yao and Zibiao Zao Abstract For linear regression models wit non-normally distributed errors, te least squares estimate (LSE will lose some efficiency

More information

Kernel estimates of nonparametric functional autoregression models and their bootstrap approximation

Kernel estimates of nonparametric functional autoregression models and their bootstrap approximation Electronic Journal of Statistics Vol. (217) ISSN: 1935-7524 Kernel estimates of nonparametric functional autoregression models and teir bootstrap approximation Tingyi Zu and Dimitris N. Politis Department

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 By Jiti Gao 2 and Maxwell King 3 Abstract We propose a simultaneous model specification procedure for te conditional

More information

Introduction to Derivatives

Introduction to Derivatives Introduction to Derivatives 5-Minute Review: Instantaneous Rates and Tangent Slope Recall te analogy tat we developed earlier First we saw tat te secant slope of te line troug te two points (a, f (a))

More information

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES (Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,

More information

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY APPLICATIONES MATHEMATICAE 36, (29), pp. 2 Zbigniew Ciesielski (Sopot) Ryszard Zieliński (Warszawa) POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY Abstract. Dvoretzky

More information

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator. Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative

More information

lecture 26: Richardson extrapolation

lecture 26: Richardson extrapolation 43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)

More information

Model-free prediction intervals for regression and autoregression. Dimitris N. Politis University of California, San Diego

Model-free prediction intervals for regression and autoregression. Dimitris N. Politis University of California, San Diego Model-free prediction intervals for regression and autoregression Dimitris N. Politis University of California, San Diego To explain or to predict? Models are indispensable for exploring/utilizing relationships

More information

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225 THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Mat 225 As we ave seen, te definition of derivative for a Mat 111 function g : R R and for acurveγ : R E n are te same, except for interpretation:

More information

5.1 We will begin this section with the definition of a rational expression. We

5.1 We will begin this section with the definition of a rational expression. We Basic Properties and Reducing to Lowest Terms 5.1 We will begin tis section wit te definition of a rational epression. We will ten state te two basic properties associated wit rational epressions and go

More information

arxiv: v1 [math.pr] 28 Dec 2018

arxiv: v1 [math.pr] 28 Dec 2018 Approximating Sepp s constants for te Slepian process Jack Noonan a, Anatoly Zigljavsky a, a Scool of Matematics, Cardiff University, Cardiff, CF4 4AG, UK arxiv:8.0v [mat.pr] 8 Dec 08 Abstract Slepian

More information

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households Volume 29, Issue 3 Existence of competitive equilibrium in economies wit multi-member ouseolds Noriisa Sato Graduate Scool of Economics, Waseda University Abstract Tis paper focuses on te existence of

More information

CS522 - Partial Di erential Equations

CS522 - Partial Di erential Equations CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

A Reconsideration of Matter Waves

A Reconsideration of Matter Waves A Reconsideration of Matter Waves by Roger Ellman Abstract Matter waves were discovered in te early 20t century from teir wavelengt, predicted by DeBroglie, Planck's constant divided by te particle's momentum,

More information

Exam 1 Review Solutions

Exam 1 Review Solutions Exam Review Solutions Please also review te old quizzes, and be sure tat you understand te omework problems. General notes: () Always give an algebraic reason for your answer (graps are not sufficient),

More information

estimate results from a recursive sceme tat generalizes te algoritms of Efron (967), Turnbull (976) and Li et al (997) by kernel smooting te data at e

estimate results from a recursive sceme tat generalizes te algoritms of Efron (967), Turnbull (976) and Li et al (997) by kernel smooting te data at e A kernel density estimate for interval censored data Tierry Ducesne and James E Staord y Abstract In tis paper we propose a kernel density estimate for interval-censored data It retains te simplicity andintuitive

More information

MATH745 Fall MATH745 Fall

MATH745 Fall MATH745 Fall MATH745 Fall 5 MATH745 Fall 5 INTRODUCTION WELCOME TO MATH 745 TOPICS IN NUMERICAL ANALYSIS Instructor: Dr Bartosz Protas Department of Matematics & Statistics Email: bprotas@mcmasterca Office HH 36, Ext

More information

Te comparison of dierent models M i is based on teir relative probabilities, wic can be expressed, again using Bayes' teorem, in terms of prior probab

Te comparison of dierent models M i is based on teir relative probabilities, wic can be expressed, again using Bayes' teorem, in terms of prior probab To appear in: Advances in Neural Information Processing Systems 9, eds. M. C. Mozer, M. I. Jordan and T. Petsce. MIT Press, 997 Bayesian Model Comparison by Monte Carlo Caining David Barber D.Barber@aston.ac.uk

More information

ALGEBRA AND TRIGONOMETRY REVIEW by Dr TEBOU, FIU. A. Fundamental identities Throughout this section, a and b denotes arbitrary real numbers.

ALGEBRA AND TRIGONOMETRY REVIEW by Dr TEBOU, FIU. A. Fundamental identities Throughout this section, a and b denotes arbitrary real numbers. ALGEBRA AND TRIGONOMETRY REVIEW by Dr TEBOU, FIU A. Fundamental identities Trougout tis section, a and b denotes arbitrary real numbers. i) Square of a sum: (a+b) =a +ab+b ii) Square of a difference: (a-b)

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x

More information

2.11 That s So Derivative

2.11 That s So Derivative 2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point

More information

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Lawrence D. Brown, Pilip A. Ernst, Larry Sepp, and Robert Wolpert August 27, 2015 Abstract We consider te class,

More information

Bandwidth Selection in Nonparametric Kernel Testing

Bandwidth Selection in Nonparametric Kernel Testing Te University of Adelaide Scool of Economics Researc Paper No. 2009-0 January 2009 Bandwidt Selection in Nonparametric ernel Testing Jiti Gao and Irene Gijbels Bandwidt Selection in Nonparametric ernel

More information

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT LIMITS AND DERIVATIVES Te limit of a function is defined as te value of y tat te curve approaces, as x approaces a particular value. Te limit of f (x) as x approaces a is written as f (x) approaces, as

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

Homework 1 Due: Wednesday, September 28, 2016

Homework 1 Due: Wednesday, September 28, 2016 0-704 Information Processing and Learning Fall 06 Homework Due: Wednesday, September 8, 06 Notes: For positive integers k, [k] := {,..., k} denotes te set of te first k positive integers. Wen p and Y q

More information

Fast Exact Univariate Kernel Density Estimation

Fast Exact Univariate Kernel Density Estimation Fast Exact Univariate Kernel Density Estimation David P. Hofmeyr Department of Statistics and Actuarial Science, Stellenbosc University arxiv:1806.00690v2 [stat.co] 12 Jul 2018 July 13, 2018 Abstract Tis

More information

Section 3: The Derivative Definition of the Derivative

Section 3: The Derivative Definition of the Derivative Capter 2 Te Derivative Business Calculus 85 Section 3: Te Derivative Definition of te Derivative Returning to te tangent slope problem from te first section, let's look at te problem of finding te slope

More information

New Distribution Theory for the Estimation of Structural Break Point in Mean

New Distribution Theory for the Estimation of Structural Break Point in Mean New Distribution Teory for te Estimation of Structural Break Point in Mean Liang Jiang Singapore Management University Xiaou Wang Te Cinese University of Hong Kong Jun Yu Singapore Management University

More information

MA455 Manifolds Solutions 1 May 2008

MA455 Manifolds Solutions 1 May 2008 MA455 Manifolds Solutions 1 May 2008 1. (i) Given real numbers a < b, find a diffeomorpism (a, b) R. Solution: For example first map (a, b) to (0, π/2) and ten map (0, π/2) diffeomorpically to R using

More information

Differential equations. Differential equations

Differential equations. Differential equations Differential equations A differential equation (DE) describes ow a quantity canges (as a function of time, position, ) d - A ball dropped from a building: t gt () dt d S qx - Uniformly loaded beam: wx

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

Bob Brown Math 251 Calculus 1 Chapter 3, Section 1 Completed 1 CCBC Dundalk

Bob Brown Math 251 Calculus 1 Chapter 3, Section 1 Completed 1 CCBC Dundalk Bob Brown Mat 251 Calculus 1 Capter 3, Section 1 Completed 1 Te Tangent Line Problem Te idea of a tangent line first arises in geometry in te context of a circle. But before we jump into a discussion of

More information

Regularized Regression

Regularized Regression Regularized Regression David M. Blei Columbia University December 5, 205 Modern regression problems are ig dimensional, wic means tat te number of covariates p is large. In practice statisticians regularize

More information

Section 15.6 Directional Derivatives and the Gradient Vector

Section 15.6 Directional Derivatives and the Gradient Vector Section 15.6 Directional Derivatives and te Gradient Vector Finding rates of cange in different directions Recall tat wen we first started considering derivatives of functions of more tan one variable,

More information

Chapter 8. Numerical Solution of Ordinary Differential Equations. Module No. 2. Predictor-Corrector Methods

Chapter 8. Numerical Solution of Ordinary Differential Equations. Module No. 2. Predictor-Corrector Methods Numerical Analysis by Dr. Anita Pal Assistant Professor Department of Matematics National Institute of Tecnology Durgapur Durgapur-7109 email: anita.buie@gmail.com 1 . Capter 8 Numerical Solution of Ordinary

More information

Efficient algorithms for for clone items detection

Efficient algorithms for for clone items detection Efficient algoritms for for clone items detection Raoul Medina, Caroline Noyer, and Olivier Raynaud Raoul Medina, Caroline Noyer and Olivier Raynaud LIMOS - Université Blaise Pascal, Campus universitaire

More information

Artificial Neural Network Model Based Estimation of Finite Population Total

Artificial Neural Network Model Based Estimation of Finite Population Total International Journal of Science and Researc (IJSR), India Online ISSN: 2319-7064 Artificial Neural Network Model Based Estimation of Finite Population Total Robert Kasisi 1, Romanus O. Odiambo 2, Antony

More information

New Streamfunction Approach for Magnetohydrodynamics

New Streamfunction Approach for Magnetohydrodynamics New Streamfunction Approac for Magnetoydrodynamics Kab Seo Kang Brooaven National Laboratory, Computational Science Center, Building 63, Room, Upton NY 973, USA. sang@bnl.gov Summary. We apply te finite

More information

Derivation Of The Schwarzschild Radius Without General Relativity

Derivation Of The Schwarzschild Radius Without General Relativity Derivation Of Te Scwarzscild Radius Witout General Relativity In tis paper I present an alternative metod of deriving te Scwarzscild radius of a black ole. Te metod uses tree of te Planck units formulas:

More information

Derivatives. By: OpenStaxCollege

Derivatives. By: OpenStaxCollege By: OpenStaxCollege Te average teen in te United States opens a refrigerator door an estimated 25 times per day. Supposedly, tis average is up from 10 years ago wen te average teenager opened a refrigerator

More information

Deconvolution problems in density estimation

Deconvolution problems in density estimation Deconvolution problems in density estimation Dissertation zur Erlangung des Doktorgrades Dr. rer. nat. der Fakultät für Matematik und Wirtscaftswissenscaften der Universität Ulm vorgelegt von Cristian

More information

A New Diagnostic Test for Cross Section Independence in Nonparametric Panel Data Model

A New Diagnostic Test for Cross Section Independence in Nonparametric Panel Data Model e University of Adelaide Scool of Economics Researc Paper No. 2009-6 October 2009 A New Diagnostic est for Cross Section Independence in Nonparametric Panel Data Model Jia Cen, Jiti Gao and Degui Li e

More information

Chapter 2 Limits and Continuity

Chapter 2 Limits and Continuity 4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

ch (for some fixed positive number c) reaching c

ch (for some fixed positive number c) reaching c GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan

More information

f a h f a h h lim lim

f a h f a h h lim lim Te Derivative Te derivative of a function f at a (denoted f a) is f a if tis it exists. An alternative way of defining f a is f a x a fa fa fx fa x a Note tat te tangent line to te grap of f at te point

More information

The Verlet Algorithm for Molecular Dynamics Simulations

The Verlet Algorithm for Molecular Dynamics Simulations Cemistry 380.37 Fall 2015 Dr. Jean M. Standard November 9, 2015 Te Verlet Algoritm for Molecular Dynamics Simulations Equations of motion For a many-body system consisting of N particles, Newton's classical

More information

Adaptive Neural Filters with Fixed Weights

Adaptive Neural Filters with Fixed Weights Adaptive Neural Filters wit Fixed Weigts James T. Lo and Justin Nave Department of Matematics and Statistics University of Maryland Baltimore County Baltimore, MD 150, U.S.A. e-mail: jameslo@umbc.edu Abstract

More information

MAT244 - Ordinary Di erential Equations - Summer 2016 Assignment 2 Due: July 20, 2016

MAT244 - Ordinary Di erential Equations - Summer 2016 Assignment 2 Due: July 20, 2016 MAT244 - Ordinary Di erential Equations - Summer 206 Assignment 2 Due: July 20, 206 Full Name: Student #: Last First Indicate wic Tutorial Section you attend by filling in te appropriate circle: Tut 0

More information

Average Rate of Change

Average Rate of Change Te Derivative Tis can be tougt of as an attempt to draw a parallel (pysically and metaporically) between a line and a curve, applying te concept of slope to someting tat isn't actually straigt. Te slope

More information

Pre-Calculus Review Preemptive Strike

Pre-Calculus Review Preemptive Strike Pre-Calculus Review Preemptive Strike Attaced are some notes and one assignment wit tree parts. Tese are due on te day tat we start te pre-calculus review. I strongly suggest reading troug te notes torougly

More information

Time (hours) Morphine sulfate (mg)

Time (hours) Morphine sulfate (mg) Mat Xa Fall 2002 Review Notes Limits and Definition of Derivative Important Information: 1 According to te most recent information from te Registrar, te Xa final exam will be eld from 9:15 am to 12:15

More information

Kernel Smoothing and Tolerance Intervals for Hierarchical Data

Kernel Smoothing and Tolerance Intervals for Hierarchical Data Clemson University TigerPrints All Dissertations Dissertations 12-2016 Kernel Smooting and Tolerance Intervals for Hierarcical Data Cristoper Wilson Clemson University, cwilso6@clemson.edu Follow tis and

More information

INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION. 1. Introduction

INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION. 1. Introduction INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION PETER G. HALL AND JEFFREY S. RACINE Abstract. Many practical problems require nonparametric estimates of regression functions, and local polynomial

More information

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example, NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing

More information