Block Bootstrap Prediction Intervals for Autoregression

Department of Economics Working Paper

Block Bootstrap Prediction Intervals for Autoregression

Jing Li
Miami University

2013 Working Paper #

Block Bootstrap Prediction Intervals for Autoregression

Jing Li

Abstract

This paper provides evidence that the principle of parsimony can be extended to interval forecasts. We propose new prediction intervals based on parsimonious autoregressions. The serial correlation in the error term is accounted for by the block bootstrap. The proposed intervals generalize the i.i.d. bootstrap intervals by allowing for serially correlated errors. Simulations show the proposed intervals have superior performance when the serial correlation in the error term is strong and when the forecast horizon is short. By applying the proposed intervals to the U.S. inflation rate, we highlight a tradeoff between preserving correlation and adding variation.

Keywords: Forecast; Block Bootstrap; Stationary Bootstrap; Prediction Intervals; Principle of Parsimony

EconLit Subject Descriptors: C15; C22; C53

Jing Li, Department of Economics, Miami University, Oxford, OH 45056, USA. lij14@miamioh.edu.

1. Introduction

It is well known that a parsimonious model may produce out-of-sample point forecasts superior to those of a complicated model. The main contribution of this paper is to examine whether the principle of parsimony can be extended to interval forecasts. The proposed block bootstrap prediction intervals (BBPI) are based on a parsimonious first order autoregression. By contrast, the i.i.d. or standard bootstrap prediction intervals developed by Thombs and Schucany (1990) (called TS intervals hereafter) are based on the autoregression of order p, where p can be large. The TS intervals assume the error term is independent. Thus the BBPI generalize the TS intervals by allowing for serially correlated errors.

A largely overlooked fact is that the principle of parsimony is forgone by the classical Box-Jenkins prediction intervals and the TS intervals. Both require serially uncorrelated error terms, and therefore dynamically complete models 1. Usually a Breusch-Godfrey type test is conducted to ensure the adequacy of the model. On the other hand, when the goal is the point forecast, the models selected by criteria such as AIC and BIC are typically parsimonious, but not necessarily adequate; see Enders (2009).

The proposed intervals do not require dynamically complete models. Unlike the Box-Jenkins intervals, non-normality can be automatically accounted for. This is because the block bootstrap is employed to obtain the empirical conditional distribution, which can be non-normal, of the out-of-sample forecast. More explicitly, the block bootstrap redraws with replacement random blocks of consecutive residuals of a parsimonious (short) autoregression. The blocking is intended to preserve the time dependence structure in the error term. By contrast, the TS intervals entail the i.i.d. bootstrap of Efron (1979), which resamples individual residuals. The i.i.d. bootstrap works well in the independent setup. To satisfy the independence of the error term, a complicated (long) autoregression is needed for the TS intervals.
1 See the footnote page.

This paper also applies the bootstrap method to direct forecasts. There are two ways to obtain the h-step forecast. The first is to run just one autoregression, and then compute the h-step forecast based on the (h - i)-step forecasts. We call these iterated forecasts. The second way is to run a set of direct autoregressions. They all use y_t as the dependent variable, but the regressors are different. In the simplest case, the only regressor is y_{t-1} in the first direct regression, y_{t-2} in the second direct regression, and so on. We call these direct forecasts. This paper considers both forecasts.

There are three steps to construct the BBPI. In step one the AR(1) regression is estimated by ordinary least squares (OLS), and the residual is saved. In step two a backward AR(1) regression is fitted, and random blocks of residuals are used to generate the bootstrap replicate. In step three the bootstrap replicate is used to run the AR(1) regression again, and random blocks of the residuals from step one are used to compute the bootstrap out-of-sample forecast. After repeating steps two and three many times, the BBPI are determined by the percentiles of the empirical distribution of the bootstrap forecasts. We discuss technical issues such as correcting the bias of autoregressive coefficients, selecting the block size (length), choosing between overlapping and non-overlapping blocks, and using the stationary bootstrap that sets the block size as random.

The Monte Carlo experiment compares the average coverage rate of the BBPI to that of the TS intervals. There are two key findings. The first is that the BBPI dominate when the error term shows strong serial correlation after y_{t-1} is controlled for. The second is that the BBPI always outperform the TS intervals for the one-step forecast. For longer forecast horizons, it is possible that the TS intervals perform better. Our findings highlight a tradeoff between preserving correlation and adding variation. The block bootstrap achieves the former but sacrifices the latter.
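The contrast between iterated and direct forecasts described above can be sketched in a few lines; this is our own minimal illustration (the function names and the simulated series are not from the paper). For an AR(1) fit, iterating the one-step rule h times gives phi^h * y_n, while the h-th direct regression of y_t on y_{t-h} estimates the same quantity in one step.

```python
# Sketch: iterated vs direct h-step point forecasts from autoregressions.
# All names here (ols_slope, etc.) are ours, purely for illustration.
import numpy as np

def ols_slope(dep, reg):
    """OLS slope of a no-intercept regression of dep on reg."""
    return reg @ dep / (reg @ reg)

rng = np.random.default_rng(0)
y = np.zeros(400)
for t in range(1, 400):                 # simulate a stationary AR(1)
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

h = 3
phi = ols_slope(y[1:], y[:-1])          # one AR(1) regression
iterated = phi ** h * y[-1]             # iterate the 1-step rule h times
rho_h = ols_slope(y[h:], y[:-h])        # h-th direct regression: y_t on y_{t-h}
direct = rho_h * y[-1]
# for an AR(1) DGP both estimate phi^h * y_n, so the two forecasts are close
```

For this AR(1) data-generating process the two approaches target the same population forecast; they differ once the fitted short model is misspecified, which is exactly the situation the paper studies.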
This tradeoff is also illustrated when we apply the BBPI to the U.S. monthly inflation rate.

The literature on the bootstrap prediction intervals is growing fast. Important works include Thombs and Schucany (1990), Masarotto (1990), Grigoletto (1998), Clements and Taylor (2001), Kim (2001), and Kim (2002). The block bootstrap and stationary bootstrap are fully developed by Künsch (1989) and Politis and Romano (1994). See Stock and Watson (1999) and Stock and Watson (2007) for more discussion about forecasting the U.S. inflation rate.

The remainder of the paper is organized as follows. Section 2 specifies the BBPI. Section 3 conducts the Monte Carlo experiment. Section 4 provides an application, and Section 5 concludes.

2. Block Bootstrap Prediction Intervals

Iterated Block Bootstrap Prediction Intervals

Let {y_t, t ∈ Z} be a strictly stationary and weakly dependent time series with mean of zero. In practice y_t may represent the demeaned, differenced, detrended or deseasonalized series. The goal is to find the prediction intervals for the future values (y_{n+1}, y_{n+2}, ..., y_{n+h}), where h is the maximum forecast horizon, after observing Ω = (y_1, ..., y_n). This paper focuses on the bootstrap prediction intervals because (i) they do not assume that the distribution of y_{n+i} conditional on Ω is normal, and (ii) the bootstrap intervals can automatically take into account the sampling variability of the estimated coefficients.

The TS intervals of Thombs and Schucany (1990) are based on a long p-th order autoregression:

y_t = ψ_1 y_{t-1} + ψ_2 y_{t-2} + ... + ψ_p y_{t-p} + e_t. (1)

The TS intervals assume the error e_t is independent of (or at least uncorrelated with) e_{t-j}, j ≠ 0. This assumption requires that the model (1) be dynamically adequate. In other words, a sufficient number of lagged values should be included. It is not uncommon that in practice the final model can be complicated, which is inconsistent with the principle of parsimony. Actually the model (1) is just a finite-order approximation if the true process is an ARMA process that has an AR(∞) representation. In theory the error term e_t can be serially correlated no matter how large p is. This implies the independence assumption can be too restrictive.

This paper relaxes the assumption of independent errors, and proposes the block bootstrap prediction intervals (BBPI) based on a short autoregression. Consider the AR(1) model, the most parsimonious autoregression

y_t = φ_1 y_{t-1} + v_t. (2)

The error v_t is allowed to be serially correlated, so model (2) can be inadequate. Nevertheless, the serial correlation in v_t should be utilized to improve the forecast. Toward that end the block bootstrap will later be applied to the residual

v̂_t = y_t − φ̂_1 y_{t-1}, (3)

where φ̂_1 is the coefficient estimated by OLS. But first, any bootstrap prediction intervals should account for the sampling variability of φ̂_1. This is accomplished by repeatedly running the regression (2) using the bootstrap replicate, a pseudo time series. Following Thombs and Schucany (1990) we generate the bootstrap replicate using the backward representation of the AR(1) model

y_t = θ_1 y_{t+1} + u_t. (4)

Note that the regressor is a lead, not a lag. Denote the OLS estimate by θ̂_1, and the residual by û_t:

û_t = y_t − θ̂_1 y_{t+1}, (5)

then one series of the bootstrap replicate (y*_1, ..., y*_n) is computed in a backward fashion as (starting with the last observation, then moving backward)

y*_n = y_n, y*_t = θ̂_1 y*_{t+1} + û*_t, (t = n − 1, ..., 1). (6)

By using the backward representation we can ensure the conditionality of AR forecasts on the last observed value y_n. Put differently, all the bootstrap replicate series have the same last observation, y*_n = y_n. See Figure 1 of Thombs and Schucany (1990) for an illustration of this conditionality. In equation (6) the randomness of the bootstrap replicate comes from the pseudo error term û*_t, which is obtained by the block bootstrap as follows:

1. Save the residuals of the backward regression û_t given in (5).

2. Let b denote the block size (length). The first (random) block of residuals is

B_1 = (û_{i1}, û_{i1+1}, ..., û_{i1+b−1}), (7)

where the index number i1 is a random draw from the discrete uniform distribution between 1 and n − b + 1. For instance, let b = 3 and suppose a random draw yields i1 = 20; then B_1 = (û_20, û_21, û_22). In this example the first block contains three consecutive residuals starting from the 20th observation. By redrawing the index number with replacement we can obtain the second block B_2 = (û_{i2}, û_{i2+1}, ..., û_{i2+b−1}), the third block B_3 = (û_{i3}, û_{i3+1}, ..., û_{i3+b−1}), and so on. We stack up these blocks until the length of the stacked series becomes n. û*_t denotes the t-th observation of the stacked series.

Resampling blocks of residuals is intended to preserve the serial correlation of the error term in the parsimonious model. Generally speaking, the block bootstrap can be applied to any weakly dependent stationary series. Here it is applied to the residuals of the short autoregression. By contrast, the TS intervals resample the individual residuals of a long autoregression.

There are several issues. The first is how to determine the block size b. There is no definite answer due to a tradeoff between correlation and variation. We face the same tradeoff when choosing between the block bootstrap and the i.i.d. bootstrap. A longer block (bigger b) can capture more serial correlation. But a longer block also reduces the variation in the bootstrap replicate because (i) the total number of blocks falls and (ii) the chance of overlapping blocks rises. The general rule is that b should rise when the serial correlation gets stronger, or when the sample size grows. For our purpose this issue may not be as important as it appears, because simulations will show the superiority of the BBPI over the TS intervals may be insensitive to b.

Alternatively, one may resample blocks with random sizes, where the block size follows a geometric distribution. That is the main idea of the stationary bootstrap suggested by Politis and Romano (1994). More explicitly, the distribution of b is specified as

P(b = j) = p(1 − p)^j, (j = 0, 1, 2, ...). (8)

When p rises, the probability of generating small values of b also rises. Notice that using the stationary bootstrap does not solve the problem completely. Now the new problem is how to select the probability parameter p. The rule is we should let p fall when the serial correlation gets stronger. In the simulation we replace b = 0 with b = 1 if that happens.

The second issue is overlapping vs non-overlapping blocks. The algorithm above indicates the blocks are possibly overlapping. For example, B_1 partially overlaps with B_2 when i1 < i2 < i1 + b. To generate non-overlapping blocks, we need to randomly redraw with replacement the index numbers i1, i2, ... from the set below:

{1, 1 + b, 1 + 2b, ..., 1 + kb}, (9)

where k ≤ n/b − 1. Note there are gaps between the values in (9), which ensure the blocks are not overlapping. When b rises, using non-overlapping blocks generates less randomness than overlapping blocks, simply because k falls and there are fewer values left in (9). We will revisit this issue in the simulation. See Andrews (2004) for a discussion in the context of hypothesis testing.

Finally, it is not required that the modulo of dividing n by b be zero. Only a portion of the last block is included in the stacked series when the modulo is nonzero. For example, it is fine that the length of the stacked series is 100, b = 3, and 100 is not a multiple of 3. A total of 34 blocks are drawn, but only the first observation of the 34th block is included in the stacked series.

After generating the bootstrap replicate series using (6), next we refit the model (2) using the bootstrap replicate (y*_2, ..., y*_n). Denote the newly estimated coefficient (called the bootstrap coefficient) by φ̂*_1. Then we can compute the iterated block bootstrap l-step forecast ŷ*_{n+l} as

ŷ*_n = y_n, ŷ*_{n+l} = φ̂*_1 ŷ*_{n+l−1} + v̂*_l, (l = 1, ..., h) (10)

where the pseudo error v̂*_l is obtained by block bootstrapping the residuals (3). For example, let h = 8, b = 4. Then two blocks of the residuals (3) are randomly drawn, namely B_1 = (v̂_{i1}, v̂_{i1+1}, v̂_{i1+2}, v̂_{i1+3}) and B_2 = (v̂_{i2}, v̂_{i2+1}, v̂_{i2+2}, v̂_{i2+3}). Then v̂*_l in equation (10) is the l-th observation of the stacked series

{v̂*_l}_{l=1}^h = {v̂_{i1}, v̂_{i1+1}, v̂_{i1+2}, v̂_{i1+3}, v̂_{i2}, v̂_{i2+1}, v̂_{i2+2}, v̂_{i2+3}}. (11)
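The recursion in (6) together with the forecast step (10) can be sketched as follows. This is a minimal sketch under our own naming conventions, with overlapping blocks of fixed size and no bias correction; it is not the author's code.

```python
# Minimal sketch of the iterated block bootstrap prediction intervals:
# backward-generated replicates (6), refit of the AR(1) (2), iterated
# forecasts (10), and percentile endpoints (13). Names are ours.
import numpy as np

def block_resample(resid, length, b, rng):
    """Stack overlapping blocks of size b, drawn with replacement, until
    `length` observations are collected (a partial last block is allowed)."""
    n = len(resid)
    out = []
    while len(out) < length:
        i = rng.integers(0, n - b + 1)       # uniform block start
        out.extend(resid[i:i + b])
    return np.array(out[:length])

def iterated_bbpi(y, h, b=4, C=1000, alpha=0.90, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    phi = y[:-1] @ y[1:] / (y[:-1] @ y[:-1])     # forward AR(1), eq. (2)
    v = y[1:] - phi * y[:-1]                     # residuals (3)
    theta = y[1:] @ y[:-1] / (y[1:] @ y[1:])     # backward AR(1), eq. (4)
    u = y[:-1] - theta * y[1:]                   # residuals (5)
    fc = np.empty((C, h))
    for c in range(C):
        # backward recursion (6): every replicate shares y*_n = y_n
        u_star = block_resample(u, n - 1, b, rng)
        y_star = np.empty(n)
        y_star[-1] = y[-1]
        for t in range(n - 2, -1, -1):
            y_star[t] = theta * y_star[t + 1] + u_star[t]
        # refit (2) on the replicate, then iterate the forecast (10)
        phi_star = y_star[:-1] @ y_star[1:] / (y_star[:-1] @ y_star[:-1])
        v_star = block_resample(v, h, b, rng)
        f = y[-1]
        for l in range(h):
            f = phi_star * f + v_star[l]
            fc[c, l] = f
    lo, hi = (1 - alpha) / 2, (1 + alpha) / 2    # percentiles as in (13)
    return np.quantile(fc, lo, axis=0), np.quantile(fc, hi, axis=0)

rng = np.random.default_rng(1)
y = np.zeros(100)
for t in range(1, 100):                          # toy AR(1) data
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
lower, upper = iterated_bbpi(y, h=3, b=4, C=200)
```

The two calls to `block_resample` mirror the paper's two uses of the block bootstrap: once on the backward residuals to build the replicate, once on the forward residuals to randomize the forecast.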

The ordering of B_1 and B_2 in the stacked series (11) does not matter. It is the ordering of the observations within each block that matters. That within-block ordering preserves the temporal structure. Notice that the block bootstrap has been invoked twice: first it is applied to û_t given in (5), then applied to v̂_t given in (3). The first application is to add randomness to the bootstrap replicate y*_t, which is used to rerun the autoregression and simulate the sampling variability of the estimated coefficient. The second application is to randomize the predicted value ŷ*_{n+l}. The block size when resampling v̂_t is the same as for û_t since (4) and (2) share the same structure of serial correlation; see Box et al. (2008).

To get the BBPI, we need to generate C series of the bootstrap replicate using (6), fit the model (2) using the C bootstrap replicate series, and use (10) to obtain a series of the iterated block bootstrap l-step forecasts

{ŷ*_{n+l}(i)}_{i=1}^C (12)

where i is the index. The l-step iterated BBPI at the α nominal level are given by

l-step Iterated BBPI (IBBPI) = [ŷ*_{n+l}((1 − α)/2), ŷ*_{n+l}((1 + α)/2)] (13)

where ŷ*_{n+l}((1 − α)/2) and ŷ*_{n+l}((1 + α)/2) are the (1 − α)/2 × 100-th and (1 + α)/2 × 100-th percentiles of the empirical distribution of {ŷ*_{n+l}(i)}_{i=1}^C. Throughout this paper we let α = 0.90. To avoid the discreteness problem, one may let C = 999; see Booth and Hall (1994). In this paper we use C = 1000 and find no qualitative difference. Basically we apply the percentile method of Efron and Tibshirani (1993) to construct the BBPI. De Gooijer and Kumar (1992) emphasize that the percentile method performs well when the conditional distribution of the predicted values is unimodal. In a preliminary simulation we conduct the DIP test of Hartigan and Hartigan (1985) and find that the distribution is indeed unimodal. Hall (1988) discusses other percentile methods.

Bias-Corrected Prediction Intervals

It is well known that the autoregressive coefficient estimated by OLS can be biased; see Shaman and Stine (1988) for instance. That implies the BBPI (13) can be improved by correcting the bias. Following Kilian (1998) we generate D series of the bootstrap replicate using (6), then refit the backward regression (4) using these bootstrap replicate series and get a series of bootstrap backward coefficients {θ̂*_1(i)}_{i=1}^D. Next compute the bias-corrected coefficient, bias-corrected residual and bias-corrected bootstrap replicate as

θ̂^c_1 = 2θ̂_1 − D^{−1} Σ_{i=1}^D θ̂*_1(i), (14)

û^c_t = y_t − θ̂^c_1 y_{t+1}, (15)

y^c_n = y_n, y^c_t = θ̂^c_1 y^c_{t+1} + û^{c*}_t, (t = n − 1, ..., 1), (16)

where û^{c*}_t is obtained by block bootstrapping û^c_t. Finally, refit the model (2) using the bias-corrected bootstrap replicate y^c_t and compute the bias-corrected bootstrap forecast as

ŷ^c_n = y_n, ŷ^c_{n+l} = φ̂^{c*}_1 ŷ^c_{n+l−1} + v̂*_l, (l = 1, ..., h), (17)

where φ̂^{c*}_1 denotes the coefficient from that refit and v̂*_l is obtained by block bootstrapping the residuals v̂_t in (3). The bias-corrected BBPI are determined by the percentiles of the distribution of ŷ^c_{n+l}.

Actually we can go one step further by bias correcting the residuals v̂_t of (3). To do so, we need to use the D series of the bootstrap replicate to refit the forward regression (2). After obtaining a series of bootstrap forward coefficients {φ̂*_1(i)}_{i=1}^D, compute the bias-corrected forward coefficient and bias-corrected forward residual as

φ̂^c_1 = 2φ̂_1 − D^{−1} Σ_{i=1}^D φ̂*_1(i), (18)

v̂^c_t = y_t − φ̂^c_1 y_{t−1}. (19)

Then the twice bias-corrected bootstrap forecast is computed as

ŷ^{2c}_n = y_n, ŷ^{2c}_{n+l} = φ̂^{c*}_1 ŷ^{2c}_{n+l−1} + v̂^{c*}_l, (l = 1, ..., h), (20)

where v̂^{c*}_l is obtained by block bootstrapping v̂^c_t in (19). Equation (20) makes it clear that v̂^c_l has a direct effect on the bootstrap forecast, while û^c_t only has an indirect effect through φ̂^c_1. So we conjecture that bias correcting the forward residual v̂_t is more important than correcting the backward residual û_t. This conjecture will be verified by the simulation. In practice one may also apply the stationarity correction recommended by Kilian (1998) to ensure the series of the bootstrap replicate is stationary.

Direct Block Bootstrap Prediction Intervals

We call the BBPI (13) iterated because the forecast is computed in an iterative fashion: in (10) the previous-step forecast ŷ*_{n+l−1} is used to compute the next step ŷ*_{n+l}. Alternatively, we can use the bootstrap replicate (y*_1, ..., y*_n) to run a set of direct regressions using only one regressor. In total there are h direct regressions. More explicitly, the l-th direct regression uses y*_t as the dependent variable and y*_{t−l} as the independent variable. Denote the estimated direct coefficient by ρ̂*_l. The residual is computed as

η̂_{t,l} = y*_t − ρ̂*_l y*_{t−l}. (21)
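One bootstrap draw of the direct forecast in (21)-(22) can be sketched as below. This is our own illustration: for brevity the observed series stands in for a bootstrap replicate generated as in (6), and the function name is hypothetical.

```python
# Sketch of one bootstrap draw of the l-step direct forecast (21)-(22).
# `y_star` would normally be a bootstrap replicate from (6); here the
# observed series is used as a stand-in to keep the example self-contained.
import numpy as np

def direct_forecast_draw(y_star, y_n, l, rng):
    """Fit the l-th direct regression of y_t on y_{t-l} (no intercept)
    and return one bootstrap draw of the l-step-ahead forecast."""
    dep, reg = y_star[l:], y_star[:-l]
    rho_l = reg @ dep / (reg @ reg)            # direct coefficient
    eta = dep - rho_l * reg                    # direct residuals (21)
    eta_draw = rng.choice(eta)                 # i.i.d. residual draw, as in (22)
    return rho_l * y_n + eta_draw

rng = np.random.default_rng(2)
y = np.zeros(150)
for t in range(1, 150):                        # toy AR(1) data
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

draws = np.array([direct_forecast_draw(y, y[-1], l=2, rng=rng)
                  for _ in range(500)])
# percentiles of the draws give the 2-step direct interval, as in (23)
lower, upper = np.quantile(draws, [0.05, 0.95])
```

Repeating the draw C times and taking the (1 − α)/2 and (1 + α)/2 percentiles yields the direct interval of equation (23).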

Then the direct bootstrap forecast is computed as

ŷ^d_{n+l} = ρ̂*_l y_n + η̂*_l (22)

where η̂*_l is a random draw with replacement from the empirical distribution of η̂_{t,l}. The l-step direct BBPI at the α nominal level are given by

l-step Direct BBPI (DBBPI) = [ŷ^d_{n+l}((1 − α)/2), ŷ^d_{n+l}((1 + α)/2)] (23)

where ŷ^d_{n+l}((1 − α)/2) and ŷ^d_{n+l}((1 + α)/2) are the (1 − α)/2 × 100-th and (1 + α)/2 × 100-th percentiles of the empirical distribution of {ŷ^d_{n+l}(i)}_{i=1}^C.

There are other ways to obtain the direct prediction intervals. For example, the bootstrap replicate (y*_1, ..., y*_n) can be generated based on the backward form of the direct regression. It is also possible to bias correct the coefficient in the direct regression. Ing (2003) compares the mean-squared prediction errors of the iterated and direct point forecasts. In the next section we will compare the iterated and direct BBPIs.

3. Monte Carlo Experiment

Error Distributions

This section compares the performances of various bootstrap prediction intervals using the Monte Carlo experiment. First we investigate the distribution of the error term. Following Thombs and Schucany (1990), the data generating process (DGP) is a stationary AR(2) process:

y_t = φ_1 y_{t−1} + φ_2 y_{t−2} + u_t (24)

where φ_1 = 0.75, φ_2 = −0.5, t = 1, ..., 55. The error u_t follows an independently and identically distributed process. Three error distributions are considered: the standard normal distribution, the exponential distribution with mean of 0.5, and the mixed normal distribution 0.9N(−1, 1) + 0.1N(9, 1). The exponential distribution is skewed; the mixed normal distribution is bimodal and skewed. All distributions are centered to have zero mean.

We compare three bootstrap prediction intervals. The iterated block bootstrap prediction intervals (IBBPI) are based on the short AR(1) regression (2) and its backward form (4). The TS intervals of Thombs and Schucany (1990) are based on the long AR(2) regression (24) and its backward form. Because we know the true DGP is AR(2), the error terms in the AR(1) regression are serially correlated, while the errors are uncorrelated in the AR(2) regression. Finally the direct block bootstrap prediction intervals (DBBPI) are based on a series of first order direct autoregressions.

Each bootstrap prediction interval is based on the empirical distribution of 1000 bootstrap forecasts. That is, we let C = 1000 in (12) for the IBBPI, and so on. For the IBBPI and DBBPI the block size b is set to 4. The TS intervals use the i.i.d. bootstrap, so the block size is irrelevant. The first 50 observations (n = 50) are used to run the regression. Then we evaluate whether the last 5 observations are inside the corresponding prediction intervals. In other words, we focus on out-of-sample forecasting. The main criterion for comparing performance is the average coverage rate (ACR) given as:

ACR(h) = m^{−1} Σ_{i=1}^m 1(y_{n+h} ∈ Prediction Intervals) (25)

where 1(.) denotes the indicator function. We find no qualitative difference when the number of Monte Carlo iterations m is increased. The forecast horizon h ranges from 1 to 5. The nominal coverage α is 0.90. The intervals whose ACR is closest to 0.90 are deemed the best.

Figure 1 plots the ACR against h when the error distribution varies. The ACRs of the IBBPI, TS intervals and DBBPI are denoted by circle, square and star, respectively. In the leftmost graph the error follows the standard normal distribution. It is shown that the ACR of the IBBPI is closest to the nominal coverage 0.90, followed by the TS intervals; the DBBPI have the worst performance, including at h = 5. The ranking remains largely unchanged when the errors follow the exponential and mixed normal distributions, shown in the middle and rightmost graphs. Overall, Figure 1 indicates that (i) the IBBPI have the best performance, and (ii) the DBBPI have the worst performance. Finding (ii) is consistent with previous works such as Ing (2003), which shows the iterated point forecast outperforms the direct point forecast. Finding (i) is new, and may be explained by the fact that the IBBPI are based on the parsimonious model. By comparing the three graphs, we see no big change in the ACR when the error distribution varies. This is expected because all intervals are bootstrap intervals that do not assume normality.

Autoregressive Coefficients

Now we consider varying autoregressive coefficients in the DGP (24):

φ_1 = 0.75, φ_2 = −0.5 (stationary AR(2)) (26)
φ_1 = 1.0, φ_2 = −0.24 (stationary AR(2)) (27)
φ_1 = 1.2, φ_2 = −0.2 (non-stationary AR(2)) (28)

where t = 1, ..., 55 and u_t ~ iid N(0, 1). The leftmost graph in Figure 2 looks similar to that in Figure 1 since the same DGP is used. In the middle graph we see no difference in the ranking. The rightmost graph is interesting. In this case the sum of the autoregressive coefficients is 1.2 − 0.2 = 1. So the data are nonstationary (one characteristic root is 1), violating the assumption of stationarity. This violation leads to distortion in the coverage rate, particularly when h is big. For example, at h = 5, the ACR of the IBBPI stays near the nominal level when the data are stationary, but is close to 0.76 when the data are nonstationary. In light of this we recommend applying the prediction intervals to the differenced data if the data contain unit roots. We also see the direct intervals are the best when the data are nonstationary. This may be due to the fact that the direct intervals are based on the direct regression.

Sample Sizes

Figure 3 is concerned with the sample size. The DGP is (24) with φ_1 = 0.75, φ_2 = −0.5, u_t ~ iid N(0, 1). Three sample sizes are used for in-sample fitting: n = 30, n = 60 and n = 100. From Figure 3 we see that as the sample size rises, the ACR lines in most cases move closer to the nominal level 0.90. Therefore a rising sample size improves the performance of all intervals.

Block Sizes

Now we focus on the IBBPI, and investigate how the block size affects performance. The DGP is (24) with t = 1, ..., 55, u_t ~ iid N(0, 1). The coefficients are given in (26), (27) and (28). There are two findings from Figure 4. First, using the block size of one (i.e., the i.i.d. bootstrap, denoted by circle) leads to the worst performance. The second finding is that the difference between using the block sizes of three (denoted by square) and five (denoted by star) is marginal in most cases. This suggests that the performance of the BBPI may be insensitive to the block size. As long as blocks are used, the block size may be a secondary issue.

Block Bootstrap Intervals vs Stationary Bootstrap Intervals

Instead of using blocks of a fixed size, we can employ the stationary bootstrap that resamples blocks of random sizes. Using the same DGP as Figure 4, Figure 5 compares the (iterated) block bootstrap prediction intervals (BBPI) with b = 4 (denoted by circle) to the stationary bootstrap prediction intervals (SBPI) with a block size that follows the geometric distribution with parameter p = 0.3 (denoted by square). Figure 5 shows that in most cases the difference between the two intervals is not substantial. Neither one dominates the other. Overall, it is safe to say the stationary bootstrap, which depends on the choice of p, is an alternative to the block bootstrap, which depends on the choice of b.

Overlapping vs Non-overlapping Blocks

Using the same DGP as Figure 4, Figure 6 compares the (iterated) block bootstrap prediction intervals using overlapping (denoted by circle) and non-overlapping (denoted by square) blocks. The block size is 4. We see that using overlapping blocks yields better performance than non-overlapping blocks. The reason may be that overlapping blocks can produce more variation in the bootstrap replicate than non-overlapping blocks.

Bias Correction

Now we investigate whether correcting the bias of the autoregressive coefficient can improve the performance of the intervals. Figure 7 compares the iterated block bootstrap prediction intervals based on (10), (17) and (20). Basically (10) does not correct the bias; (17) only corrects the bias of θ̂_1, the coefficient in the backward regression; (20) corrects both θ̂_1 and φ̂_1, the latter being the coefficient in the forward regression. In Figure 7 these intervals are denoted by circle (no bias correction, or no BC), square (BC Once) and star (BC Twice), respectively. The DGP is the same as Figure 4. From Figure 7 it is clear that the BC Twice intervals have the best performance. It appears that correcting the coefficient in the backward regression alone results in almost no improvement. In light of this we conclude that bias-correcting the coefficient in the forward model (2) is far more important than bias-correcting the coefficient in the backward model (4). Another finding is that bias-correcting coefficients pushes the coverage rate up toward the nominal 0.90. This indicates that at least part of the downward bias in the coverage rate we see in all graphs is caused by the bias of the autoregressive coefficients.

Principle of Parsimony

So far the DGP has been the AR(2) model (24). Next we change the DGP to an ARMA(1,1) process

y_t = φ y_{t−1} + u_t + θ u_{t−1} (29)

where t = 1, ..., 55 and u_t ~ iid N(0, 1). In theory this DGP has an AR(∞) representation. Thus the AR(p) regression is a finite-order approximation. We verify the principle of parsimony in three ways.

Figure 8 compares the iterated block bootstrap prediction intervals based on the AR(1) regression to the TS intervals based on the AR(2) regression (TS2, denoted by diamond), the AR(3) regression (TS3, denoted by square) and the AR(4) regression (TS4, denoted by star). For the TS intervals we do not check whether the residual is serially correlated. That job is left to Figure 9. Figure 8 uses three sets of φ and θ. Regardless of the coefficients, the block bootstrap intervals have the best performance. When the autoregression becomes longer, the corresponding TS intervals show worse performance. This is the first evidence that the principle of parsimony may work for interval forecasts.

The second evidence is presented in Figure 9, where the TS intervals are based on the autoregression whose order is determined by the Breusch-Godfrey test. The Breusch-Godfrey test is appropriate since the regressors are lagged dependent variables. We start from the AR(1) regression. If the residual passes the Breusch-Godfrey test, then the AR(1) regression is chosen for constructing the TS intervals. Otherwise we change to the AR(2) regression, apply the Breusch-Godfrey test again, and so on. In the end, the TS intervals are based on an adequate autoregression with serially uncorrelated errors.
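The sequential lag-selection rule just described can be sketched as follows. This is our own minimal implementation of a Breusch-Godfrey LM test for first-order residual correlation (auxiliary regression, n·R² statistic against the χ²(1) critical value), not the paper's code.

```python
# Hedged sketch of the lag-selection rule used for the TS intervals:
# fit AR(p), run a Breusch-Godfrey LM test for first-order serial
# correlation in the residuals, and raise p until the test stops rejecting.
import numpy as np

CHI2_1_CRIT_05 = 3.841  # 5% critical value of chi-squared with 1 df

def ar_fit(y, p):
    """OLS fit of an AR(p) regression without intercept.
    Returns the regressor matrix and the residuals."""
    dep = y[p:]
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    beta = np.linalg.lstsq(X, dep, rcond=None)[0]
    return X, dep - X @ beta

def bg_lm(y, p):
    """Breusch-Godfrey LM statistic (n * R^2 of the auxiliary regression
    of the residual on the original regressors and its own first lag)."""
    X, e = ar_fit(y, p)
    g = e[1:]
    Z = np.column_stack([np.ones(len(g)), X[1:], e[:-1]])
    gamma = np.linalg.lstsq(Z, g, rcond=None)[0]
    resid = g - Z @ gamma
    r2 = 1.0 - resid @ resid / ((g - g.mean()) @ (g - g.mean()))
    return len(g) * r2

def select_order(y, p_max=10):
    """Smallest p whose AR(p) residuals pass the BG test at the 5% level."""
    for p in range(1, p_max + 1):
        if bg_lm(y, p) < CHI2_1_CRIT_05:
            return p
    return p_max

rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(2, 500):                  # AR(2) DGP as in (24), larger n
    y[t] = 0.75 * y[t - 1] - 0.5 * y[t - 2] + rng.standard_normal()
p_sel = select_order(y)   # an AR(1) fit leaves strong residual correlation
```

With an AR(2) data-generating process the AR(1) residuals are strongly autocorrelated, so the rule moves past p = 1 and typically settles at (or just above) p = 2.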

19 In te leftmost grap of Figure 9, ϕ = 0.4, θ = 0.2. We see te iterated block bootstrap intervals outperform te TS intervals wen equals 1 and 2. For greater teir ranking reverses. In te middle and rigtmost graps, more serial correlation is induced as θ rises from 0.2 to 0.6, and as ϕ rises from 0.4 to 0.9. In tose two graps te BBPI dominate te TS intervals. Te fact tat te ranking of te BBPI and TS intervals switces in te leftmost grap indicates a tradeoff between preserving serial correlation and adding variation. Remember tat te BBPI use te block bootstrap, so empasize preserving serial correlation. By contrast te TS intervals use te i.i.d bootstrap, wic can generate more variation in te bootstrap replicate tan te block bootstrap. Keeping tat in mind, ten te leftmost grap makes more sense. In tat grap θ is 0.2, close to zero. Tat means te ARMA(1,1) model is essentially an AR(1) model wit weakly correlated error. Terefore preserving correlation becomes less important tan adding variation, in particular for long-orizon forecasts. Figure 10 examines te principle of parsimony using te AR(2) model (24) as te DGP, but from te perspective of caracteristic roots of difference equations. Te relationsip between te autoregressive coefficients and caracteristic roots (λ 1, λ 2 ) is ϕ 1 = λ 1 + λ 2, ϕ 2 = λ 1 λ 2 (30) Note tat wen one of te caracteristic roots, say λ 2, is close to zero, ten ϕ 2 will be close to zero. In tat case λ 1 will dominate, and te second order difference equation beaves like a first order equation. In finite sample it is igly probable tat te Breusc-Godfrey will indicate te AR(1) regression is adequate, despite tat te true DGP is AR(2). Tat is te case in te leftmost grap of Figure 10, wen λ 2 = 0.1 and ϕ 2 = 0.05, bot being close to zero. We apply te Breusc-Godfrey test for model selection wen constructing te TS intervals. Most often te test picks te AR(1) regression. Te leftmost 18

The leftmost graph shows the BBPI outperform the TS intervals only by a small margin. In fact, there is a tendency that as h rises the ranking of the two intervals may reverse. So we conclude that in the presence of weak correlation the TS intervals may perform better, particularly when h is large, since the i.i.d bootstrap generates more variation. The BBPI may perform worse because the serial correlation should otherwise be downplayed. It is instructive to consider the limit, when the serial correlation becomes 0 and the data become independent. Then the block size should reduce to 1, and the block bootstrap should degenerate to the i.i.d bootstrap, which works best in the independent setting. The middle and rightmost graphs in Figure 10 increase λ1 and λ2, respectively. With rising serial correlation, now the BBPI outperform the TS intervals by a large margin.

Finally, from Figures 8, 9 and 10 we notice that when h = 1, the BBPI always outperform the TS intervals, whether the serial correlation is weak or strong. This fact adds value to the BBPI for short-horizon forecasts. The next section uses real data to illustrate this value.

4. Bootstrap Prediction Intervals for U.S. Monthly Inflation Rate

From the Federal Reserve Economic Data we download the monthly consumer price index (CPI) for all urban consumers (see footnote 2). The series is seasonally adjusted and begins in January 1947. Panel A of Figure 11 plots the CPI. The series is trending and smooth. We then compute the monthly inflation rate (INF) as inf_t = log(CPI_t) − log(CPI_{t−1}). The skewness and kurtosis of INF are 0.55 and 6.99, respectively. Moreover, the skewness and kurtosis test rejects normality at the 0.01 level. Given the non-normality, our goal is to obtain bootstrap prediction intervals. We consider the BBPI, which are based on the AR(1) regression, and the TS intervals, which are based on the autoregression selected by the Breusch-Godfrey test. Using the whole sample (794 observations), the model that passes the Breusch-Godfrey test is the AR(10) regression (see footnote 3):

inf_t = 0.43 inf_{t−1} + ... + û_t  (31)

where ** and * denote significance at the 0.01 and 0.05 levels, respectively. The p-value of the Breusch-Godfrey test applied to the residual û_t is 0.22, so the null hypothesis of no AR(1) serial correlation is not rejected. The AR(10) regression is obviously not parsimonious, and it is difficult to interpret its dynamics. Hence we try the AR(2) regression as an approximation to the AR(10) regression:

inf_t = 0.49 inf_{t−1} + 0.13 inf_{t−2} + û_t  (32)

The AR(2) model is not adequate; the p-value of the Breusch-Godfrey test is very small. However, the AR(2) model captures the essence of the dynamics. It shows that ϕ̂2 = 0.13 is close to zero. That means (i) the remaining correlation is weak (but significant) after inf_{t−1} is controlled for; and (ii) in a subsample with a small size the second lagged term inf_{t−2} may be ignored by the Breusch-Godfrey test. For example, if we use only the first 72 observations, the regression selected by the Breusch-Godfrey test is the AR(1) regression

inf_t = 0.44 inf_{t−1} + û_t  (33)

and the Breusch-Godfrey test does not reject the adequacy of this regression. After referring to the leftmost graph in Figure 9, we expect that in small subsamples the BBPI will outperform the TS intervals when h is small, and vice versa for long forecast horizons. We construct the BBPI and TS intervals based on rolling windows.

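To make the computation of INF and the parsimonious AR(1) fit concrete, the following Python sketch is our own illustration with made-up CPI numbers (not the FRED series): it forms the log-difference inflation rate and estimates the AR(1) slope by least squares on demeaned data, which is why no intercept appears in (33).

```python
import math

# Made-up CPI levels, for illustration only.
cpi = [229.6, 230.3, 231.4, 232.0, 231.8, 232.5]

# Monthly inflation rate: inf_t = log(CPI_t) - log(CPI_{t-1}).
inf = [math.log(b) - math.log(a) for a, b in zip(cpi, cpi[1:])]

# AR(1) slope by least squares, regressing inf_t on inf_{t-1}.
# Demeaning both sides substitutes for the intercept, as in the paper.
y, x = inf[1:], inf[:-1]
ybar, xbar = sum(y) / len(y), sum(x) / len(x)
yc = [v - ybar for v in y]
xc = [v - xbar for v in x]
phi_hat = sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)
```

With the real series, phi_hat would play the role of the 0.44 estimate in (33).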
The first window W1 contains the first n + h observations (i.e., W1 = {inf_1, ..., inf_{n+h}}); then we move one period ahead and define the second window as the next n + h observations (i.e., W2 = {inf_2, ..., inf_{n+h+1}}), and so on. We consider three subsample sizes, n = 24, 48, 72, and we let the forecast horizon be h = 1, ..., 5. In each window, the first n observations are used for in-sample fitting. Then we evaluate whether the last h observations are inside the prediction intervals. Figure 12 plots the average coverage rate (25) of the BBPI (with b = 4) and the TS intervals. It is shown that the BBPI outperform the TS intervals only when h = 1. This finding is consistent with our expectation. In this application the error term in the AR(1) regression is weakly correlated. Preserving correlation becomes secondary, so the BBPI do not dominate the TS intervals.

Figure 12 provides a macro picture. To get a micro view, we focus on n = 72 and the first-step forecast intervals. Panel B of Figure 11 highlights (using vertical grid lines) the observations where the inflation rate is uncovered by both the BBPI and the TS intervals. For example, the CPI fell in October 2008 and again in November 2008, producing two negative monthly inflation rates; both intervals fail to cover these inflation rates. Panel C of Figure 11 highlights the observations that are covered by the BBPI but not the TS intervals. Panel D does the opposite. In recent history, after year 2005, 6 observations are covered by the BBPI but not the TS intervals, while 3 observations are covered by the TS intervals but not the BBPI.

5. Conclusion

This paper proposes new prediction intervals by applying the block bootstrap to the first order autoregression. The AR(1) model is parsimonious, and its error term can be serially correlated. The block bootstrap is utilized to resample blocks of consecutive observations in order to maintain the time series structure of the error term. The forecasts can be obtained in an iterated manner, or by running direct regressions. The Monte Carlo experiment shows (1) there is evidence that the principle of parsimony can be extended to interval forecasts; (2) there is a tradeoff between preserving correlation and adding variation; (3) the proposed intervals have superior performance for one-step forecasts; (4) bias-correcting the coefficient, particularly in the forward regression, can improve the intervals; (5) the performance of the proposed intervals may be insensitive to the block size; and (6) in most cases, the direct forecast performs worse than the iterated forecast.

We apply the proposed intervals to the U.S. monthly inflation rate. In small subsamples it is shown that inflation may follow an AR(1) process with weakly correlated errors. In this application the proposed intervals outperform the i.i.d bootstrap prediction intervals for the one-step forecast. When the forecast horizon rises, adding variation outweighs preserving correlation, so the i.i.d bootstrap prediction intervals outperform the proposed intervals for long-horizon forecasts.
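The resampling step summarized above can be sketched in a few lines of Python. This is our own minimal illustration (names are ours, not the paper's code): it redraws, with replacement, overlapping blocks of b consecutive residuals, and with b = 1 it degenerates to the i.i.d bootstrap used by the TS intervals.

```python
import random

def block_resample(residuals, b, rng=random):
    """Redraw, with replacement, overlapping blocks of b consecutive
    residuals and concatenate them until the original length is reached."""
    n = len(residuals)
    blocks = [residuals[i:i + b] for i in range(n - b + 1)]  # overlapping blocks
    draw = []
    while len(draw) < n:
        draw.extend(rng.choice(blocks))  # pick a whole block at random
    return draw[:n]                      # trim to the original sample size

resids = [0.3, -0.1, 0.7, 0.2, -0.5, 0.4]
star = block_resample(resids, b=3)   # blocks preserve short-run dependence
iid = block_resample(resids, b=1)    # degenerates to the i.i.d bootstrap
```

Each drawn block is a run of consecutive residuals, which is what preserves the serial correlation that the i.i.d bootstrap destroys.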

References

Andrews D. The block-block bootstrap: Improved asymptotic refinements. Econometrica 72.

Booth J G and Hall P. Monte Carlo approximation and the iterated bootstrap. Biometrika 81.

Box G, Jenkins G M, and Reinsel G C. Time Series Analysis: Forecasting and Control. Wiley, Hoboken, New Jersey, 4th edition.

Clements M P and Taylor N. Bootstrapping prediction intervals for autoregressive models. International Journal of Forecasting 17.

De Gooijer J G and Kumar K. Some recent developments in non-linear time series modeling, testing, and forecasting. International Journal of Forecasting 8.

Efron B. Bootstrap methods: Another look at the jackknife. Annals of Statistics 7:1–26.

Efron B and Tibshirani R J. An Introduction to the Bootstrap. Chapman and Hall, London.

Enders W. Applied Econometric Time Series. Wiley, 3rd edition.

Grigoletto M. Bootstrap prediction intervals for autoregressions: some alternatives. International Journal of Forecasting 14.

Hall P. Theoretical comparison of bootstrap confidence intervals. Annals of Statistics 16.

Hartigan J A and Hartigan P M. The DIP test of unimodality. Annals of Statistics 13.

Ing C K. Multistep prediction in autoregressive processes. Econometric Theory 19.

Kilian L. Small-sample confidence intervals for impulse response functions. The Review of Economics and Statistics 80.

Kim J. Bootstrap-after-bootstrap prediction intervals for autoregressive models. Journal of Business & Economic Statistics 19.

Kim J. Bootstrap prediction intervals for autoregressive models of unknown or infinite lag order. Journal of Forecasting 21.

Künsch H R. The jackknife and the bootstrap for general stationary observations. Annals of Statistics 17.

Masarotto G. Bootstrap prediction intervals for autoregressions. International Journal of Forecasting 6.

Politis D N and Romano J P. The stationary bootstrap. Journal of the American Statistical Association 89.

Shaman P and Stine R A. The bias of autoregressive coefficient estimators. Journal of the American Statistical Association 83.

Stock J H and Watson M W. Forecasting inflation. Journal of Monetary Economics 44.

Stock J H and Watson M W. Why has U.S. inflation become harder to forecast? Journal of Money, Credit and Banking 39:3–33.

Thombs L A and Schucany W R. Bootstrap prediction intervals for autoregression. Journal of the American Statistical Association 85.

Figure 1: Error Distributions (normal, exponential, and mixed normal errors; TS vs DBBPI)

Figure 2: Autoregressive Coefficients (ϕ1 = 0.75, ϕ2 = −0.5; ϕ1 = 1.0, ϕ2 = −0.24; ϕ1 = 1.2, ϕ2 = −0.2; TS vs DBBPI)

Figure 3: Sample Sizes (n = 30, 60, 100; TS vs DBBPI)

Figure 4: Block Sizes (block size b = 1, 3, 5 for each pair of autoregressive coefficients)

Figure 5: Stationary Bootstrap vs Block Bootstrap (BBPI vs SBPI)

Figure 6: Overlapping vs Nonoverlapping Bootstrap (overlapping BBPI vs nonoverlapping BBPI)

Figure 7: Bias Correction (no bias correction; bias correction once; bias correction twice)

Figure 8: Parsimony I (ϕ = 0.4, θ = 0.2; ϕ = 0.4, θ = 0.6; ϕ = 0.9, θ = 0.6; TS2, TS3, TS4)

Figure 9: Parsimony II (ϕ = 0.4, θ = 0.2; ϕ = 0.4, θ = 0.6; ϕ = 0.9, θ = 0.6; BBPI vs TS)

Figure 10: Parsimony III (λ1 = 0.5, λ2 = 0.1; λ1 = 0.7, λ2 = 0.4; BBPI vs TS)

Figure 11: Time Series Plot of U.S. Monthly Consumer Price Index and Inflation Rate (Panel A: CPI; Panel B: observations of INF uncovered by both the BBPI and TS intervals; Panel C: observations of INF uncovered by TS only; Panel D: observations of INF uncovered by BBPI only)

Figure 12: Bootstrap Prediction Intervals for U.S. Monthly Inflation Rate (average coverage rates for n = 24, 48, 72; BBPI vs TS)

Footnotes

1. Consider the AR(1) model y_{t+j} = ϕ y_{t+j−1} + e_{t+j} with the MA representation y_{t+j} = e_{t+j} + ϕ e_{t+j−1} + ϕ² e_{t+j−2} + ... Only if e_t is serially uncorrelated does the variance of the forecast error have the simplified form used by the Box-Jenkins intervals:

E{[y_{t+j} − E(y_{t+j}|Ω_t)]² | Ω_t} = σ_e² Σ_{k=1}^{j} ϕ^{2(k−1)},

where Ω_t = (y_t, y_{t−1}, ...) is the information set at time t. Otherwise, the covariance between e_{t+j} and e_{t+j−i} is nonzero and should be included. The error term e_t is serially uncorrelated when the AR(1) model is dynamically complete, i.e., when E(y_t | y_{t−1}, y_{t−2}, ...) = E(y_t | y_{t−1}). When E(y_t | y_{t−1}, y_{t−2}, ...) ≠ E(y_t | y_{t−1}), the error term in the AR(1) model is generally correlated.

2. We do not use the annual data because we need sufficient observations to compute the average coverage rate of prediction intervals based on rolling windows.

3. The intercept term is not included because the demeaned inflation rate is used.
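The simplified variance in footnote 1 can be verified numerically: summing the squared MA weights 1, ϕ, ..., ϕ^(j−1) of the j-step forecast error reproduces σ_e² Σ_{k=1}^{j} ϕ^{2(k−1)}. The Python sketch below is our own check, not code from the paper.

```python
def forecast_error_variance(phi, sigma2_e, j):
    """Closed-form j-step forecast error variance from footnote 1:
    sigma_e^2 * sum_{k=1}^{j} phi^(2(k-1))."""
    return sigma2_e * sum(phi ** (2 * (k - 1)) for k in range(1, j + 1))

def variance_from_ma_weights(phi, sigma2_e, j):
    """Same variance built term by term from the MA weights 1, phi, ..., phi^(j-1),
    valid because e_t is assumed serially uncorrelated."""
    return sigma2_e * sum((phi ** m) ** 2 for m in range(j))

phi, sigma2_e = 0.8, 1.0
for j in (1, 2, 5):
    assert abs(forecast_error_variance(phi, sigma2_e, j)
               - variance_from_ma_weights(phi, sigma2_e, j)) < 1e-12
```

When e_t is serially correlated, the cross-covariance terms omitted by both functions are nonzero, which is exactly why the Box-Jenkins form fails for a parsimonious but dynamically incomplete AR(1).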


Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator by Emmanuel Flachaire Eurequa, University Paris I Panthéon-Sorbonne December 2001 Abstract Recent results of Cribari-Neto and Zarkos

More information

2.3 Algebraic approach to limits

2.3 Algebraic approach to limits CHAPTER 2. LIMITS 32 2.3 Algebraic approac to its Now we start to learn ow to find its algebraically. Tis starts wit te simplest possible its, and ten builds tese up to more complicated examples. Fact.

More information

The Priestley-Chao Estimator

The Priestley-Chao Estimator Te Priestley-Cao Estimator In tis section we will consider te Pristley-Cao estimator of te unknown regression function. It is assumed tat we ave a sample of observations (Y i, x i ), i = 1,..., n wic are

More information

Basic Nonparametric Estimation Spring 2002

Basic Nonparametric Estimation Spring 2002 Basic Nonparametric Estimation Spring 2002 Te following topics are covered today: Basic Nonparametric Regression. Tere are four books tat you can find reference: Silverman986, Wand and Jones995, Hardle990,

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT LIMITS AND DERIVATIVES Te limit of a function is defined as te value of y tat te curve approaces, as x approaces a particular value. Te limit of f (x) as x approaces a is written as f (x) approaces, as

More information

Derivation Of The Schwarzschild Radius Without General Relativity

Derivation Of The Schwarzschild Radius Without General Relativity Derivation Of Te Scwarzscild Radius Witout General Relativity In tis paper I present an alternative metod of deriving te Scwarzscild radius of a black ole. Te metod uses tree of te Planck units formulas:

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

Efficiency Tradeoffs in Estimating the Linear Trend Plus Noise Model. Abstract

Efficiency Tradeoffs in Estimating the Linear Trend Plus Noise Model. Abstract Efficiency radeoffs in Estimating the Linear rend Plus Noise Model Barry Falk Department of Economics, Iowa State University Anindya Roy University of Maryland Baltimore County Abstract his paper presents

More information

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES (Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,

More information

Notes on wavefunctions II: momentum wavefunctions

Notes on wavefunctions II: momentum wavefunctions Notes on wavefunctions II: momentum wavefunctions and uncertainty Te state of a particle at any time is described by a wavefunction ψ(x). Tese wavefunction must cange wit time, since we know tat particles

More information

Exercises for numerical differentiation. Øyvind Ryan

Exercises for numerical differentiation. Øyvind Ryan Exercises for numerical differentiation Øyvind Ryan February 25, 2013 1. Mark eac of te following statements as true or false. a. Wen we use te approximation f (a) (f (a +) f (a))/ on a computer, we can

More information

1 1. Rationalize the denominator and fully simplify the radical expression 3 3. Solution: = 1 = 3 3 = 2

1 1. Rationalize the denominator and fully simplify the radical expression 3 3. Solution: = 1 = 3 3 = 2 MTH - Spring 04 Exam Review (Solutions) Exam : February 5t 6:00-7:0 Tis exam review contains questions similar to tose you sould expect to see on Exam. Te questions included in tis review, owever, are

More information

Order of Accuracy. ũ h u Ch p, (1)

Order of Accuracy. ũ h u Ch p, (1) Order of Accuracy 1 Terminology We consider a numerical approximation of an exact value u. Te approximation depends on a small parameter, wic can be for instance te grid size or time step in a numerical

More information

Function Composition and Chain Rules

Function Composition and Chain Rules Function Composition and s James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 8, 2017 Outline 1 Function Composition and Continuity 2 Function

More information

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator. Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative

More information

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a?

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a? Solutions to Test 1 Fall 016 1pt 1. Te grap of a function f(x) is sown at rigt below. Part I. State te value of eac limit. If a limit is infinite, state weter it is or. If a limit does not exist (but is

More information

5.74 Introductory Quantum Mechanics II

5.74 Introductory Quantum Mechanics II MIT OpenCourseWare ttp://ocw.mit.edu 5.74 Introductory Quantum Mecanics II Spring 9 For information about citing tese materials or our Terms of Use, visit: ttp://ocw.mit.edu/terms. Andrei Tokmakoff, MIT

More information

Section 2.1 The Definition of the Derivative. We are interested in finding the slope of the tangent line at a specific point.

Section 2.1 The Definition of the Derivative. We are interested in finding the slope of the tangent line at a specific point. Popper 6: Review of skills: Find tis difference quotient. f ( x ) f ( x) if f ( x) x Answer coices given in audio on te video. Section.1 Te Definition of te Derivative We are interested in finding te slope

More information

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set

WYSE Academic Challenge 2004 Sectional Mathematics Solution Set WYSE Academic Callenge 00 Sectional Matematics Solution Set. Answer: B. Since te equation can be written in te form x + y, we ave a major 5 semi-axis of lengt 5 and minor semi-axis of lengt. Tis means

More information

Online Learning: Bandit Setting

Online Learning: Bandit Setting Online Learning: Bandit Setting Daniel asabi Summer 04 Last Update: October 0, 06 Introduction [TODO Bandits. Stocastic setting Suppose tere exists unknown distributions ν,..., ν, suc tat te loss at eac

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 By Jiti Gao 2 and Maxwell King 3 Abstract We propose a simultaneous model specification procedure for te conditional

More information

2.11 That s So Derivative

2.11 That s So Derivative 2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point

More information

MTH-112 Quiz 1 Name: # :

MTH-112 Quiz 1 Name: # : MTH- Quiz Name: # : Please write our name in te provided space. Simplif our answers. Sow our work.. Determine weter te given relation is a function. Give te domain and range of te relation.. Does te equation

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis Introduction to Time Series Analysis 1 Contents: I. Basics of Time Series Analysis... 4 I.1 Stationarity... 5 I.2 Autocorrelation Function... 9 I.3 Partial Autocorrelation Function (PACF)... 14 I.4 Transformation

More information

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Sílvia Gonçalves and Benoit Perron Département de sciences économiques,

More information