Quasi-Maximum Likelihood and The Kernel Block Bootstrap for Nonlinear Dynamic Models


Quasi-Maximum Likelihood and The Kernel Block Bootstrap for Nonlinear Dynamic Models

Paulo M.D.C. Parente
ISEG - Lisbon School of Economics & Management, Universidade de Lisboa

Richard J. Smith
cemmap, U.C.L. and I.F.S.
Faculty of Economics, University of Cambridge
Department of Economics, University of Melbourne
ONS Economic Statistics Centre of Excellence

This Draft: October 2018

Abstract

This paper applies a novel bootstrap method, the kernel block bootstrap (KBB), to quasi-maximum likelihood estimation of dynamic models with stationary strong mixing data. The method first kernel weights the components comprising the quasi-log likelihood function in an appropriate way and then samples the resultant transformed components using the standard m out of n bootstrap. We investigate the first order asymptotic properties of the KBB method for quasi-maximum likelihood demonstrating, in particular, its consistency and the first-order asymptotic validity of the bootstrap approximation to the distribution of the quasi-maximum likelihood estimator. A set of simulation experiments for the mean regression model illustrates the efficacy of the kernel block bootstrap for quasi-maximum likelihood estimation.

JEL Classification: C14, C15, C22

Keywords: Bootstrap; heteroskedasticity and autocorrelation consistent inference; quasi-maximum likelihood estimation.

Address for correspondence: Richard J. Smith, Faculty of Economics, University of Cambridge, Austin Robinson Building, Sidgwick Avenue, Cambridge CB3 9DD, UK

1 Introduction

This paper applies the kernel block bootstrap (KBB), proposed in Parente and Smith (2018), PS henceforth, to quasi-maximum likelihood estimation with stationary and weakly dependent data. The basic idea underpinning KBB arises from earlier papers, see, e.g., Kitamura and Stutzer (1997) and Smith (1997, 2011), which recognise that a suitable kernel function-based weighted transformation of the observational sample with weakly dependent data preserves the large sample efficiency for randomly sampled data of (generalised) empirical likelihood, (G)EL, methods. In particular, the mean of, and, moreover, the standard random sample variance formula applied to, the transformed sample are respectively consistent for the population mean [Smith (2011, Lemma A.1, p.1217)] and a heteroskedasticity and autocorrelation consistent (HAC) and automatically positive semidefinite estimator for the variance of the standardized mean of the original sample [Smith (2005, Section 2, and 2011, Lemma A.3, p.1219)]. In a similar spirit, KBB applies the standard m out of n nonparametric bootstrap, originally proposed in Bickel and Freedman (1981), to the transformed kernel-weighted data. PS demonstrate, under appropriate conditions, the large sample validity of the KBB estimator of the distribution of the sample mean [PS Theorem 3.1] and the higher order asymptotic bias and variance of the KBB variance estimator [PS Theorem 3.2]. Moreover [PS Corollaries 3.1 and 3.2], the KBB variance estimator possesses a favourable higher order bias property, a property noted elsewhere for consistent variance estimators using tapered data [Brillinger (1981, p.151)], and, for a particular choice of kernel function weighting and choice of bandwidth, is optimal, being asymptotically close to one based on the optimal quadratic spectral kernel [Andrews (1991, p.821)] or Bartlett-Priestley-Epanechnikov kernel [Priestley (1962; 1981), Epanechnikov (1969) and Sacks and Ylvisaker (1981)].
Here, though, rather than being applied to the original data as in PS, the KBB kernel function weighting is applied to the individual observational components of the quasi-log likelihood criterion function itself. Myriad variants for dependent data of the bootstrap method proposed in the landmark article Efron (1979) also make use of the standard m out of n nonparametric bootstrap but, in contrast to KBB, applied to blocks of the original data. See, inter alia, the moving blocks bootstrap (MBB) [Künsch (1989), Liu and Singh (1992)], the circular block bootstrap [Politis and Romano (1992a)], the stationary bootstrap [Politis and Romano (1994)], the external bootstrap for m-dependent data [Shi and Shao (1988)], the frequency domain bootstrap [Hurvich and Zeger (1987), see also Hidalgo (2003)], and

its generalization the transformation-based bootstrap [Lahiri (2003)], and the autoregressive sieve bootstrap [Bühlmann (1997)]; for further details on these methods, see, e.g., the monographs Shao and Tu (1995) and Lahiri (2003). Whereas the block length of these other methods is typically a declining fraction of sample size, the implicit KBB block length is dictated by the support of the kernel function and, thus, with unbounded support as in the optimal case, would be the sample size itself. KBB bears comparison with the tapered block bootstrap (TBB) of Paparoditis and Politis (2001); see also Paparoditis and Politis (2002). Indeed, KBB may be regarded as a generalisation and extension of TBB. TBB is also based on a reweighted sample of the observations but with a weight function with bounded support, so, whereas each KBB data point is in general a transformation of all original sample data, those of TBB use a fixed block size and, implicitly thereby, a fixed number of data points. More generally, then, the more restrictive TBB weight function class is a special case of that of KBB; a detailed comparison of KBB and TBB is provided in PS Section 4.1.

The paper is organized as follows. After outlining some preliminaries, Section 2 introduces KBB and reviews the results in PS. Section 3 demonstrates how KBB can be applied in the quasi-maximum likelihood framework and, in particular, details the consistency of the KBB estimator and its asymptotic validity for quasi-maximum likelihood. Section 4 reports a Monte Carlo study on the performance of KBB for the mean regression model. Finally, Section 5 concludes. Proofs of the results in the main text are provided in Appendix B with intermediate results required for their proofs given in Appendix A.
2 Kernel Block Bootstrap

To introduce the kernel block bootstrap (KBB) method, consider a sample of observations, $z_1, \ldots, z_T$, on the scalar strictly stationary real valued sequence $\{z_t, t \in \mathbb{Z}\}$ with unknown mean $\mu = E[z_t]$ and autocovariance sequence $R(s) = E[(z_t - \mu)(z_{t+s} - \mu)]$, $(s = 0, \pm 1, \ldots)$. Under suitable conditions, see Ibragimov and Linnik (1971, pp. 346, 347), the limiting distribution of the sample mean $\bar{z} = \sum_{t=1}^{T} z_t/T$ is described by $T^{1/2}(\bar{z} - \mu) \to_d N(0, \sigma^2)$, where $\sigma^2 = \lim_{T \to \infty} \mathrm{var}[T^{1/2}\bar{z}] = \sum_{s=-\infty}^{\infty} R(s)$. The KBB approximation to the distribution of the sample mean $\bar{z}$ randomly samples the kernel-weighted centred observations

$$z_t^T = \frac{1}{(\hat{k}_2 S_T)^{1/2}} \sum_{r=t-T}^{t-1} k\left(\frac{r}{S_T}\right)(z_{t-r} - \bar{z}), \quad t = 1, \ldots, T, \qquad (2.1)$$

where $S_T$ is a bandwidth parameter, $(T = 1, 2, \ldots)$, $k(\cdot)$ a kernel function and $\hat{k}_j = \sum_{s=1-T}^{T-1} k(s/S_T)^j/S_T$, $(j = 1, 2)$. Let $\bar{z}^T = \sum_{t=1}^{T} z_t^T/T$ denote the sample mean of $z_t^T$, $(t = 1, \ldots, T)$. Under appropriate conditions, $\bar{z}^T \to_p 0$ and $(T/S_T)^{1/2}\bar{z}^T/\sigma \to_d N(0, 1)$; see, e.g., Smith (2011, Lemmas A.1 and A.2). Moreover, the KBB variance estimator, defined in standard random sampling outer product form,

$$\hat{\sigma}^2_{kbb} = \frac{1}{T}\sum_{t=1}^{T}(z_t^T - \bar{z}^T)^2 \to_p \sigma^2, \qquad (2.2)$$

and is thus an automatically positive semidefinite heteroskedasticity and autocorrelation consistent (HAC) variance estimator; see Smith (2011, Lemma A.3, p.1219).

KBB applies the standard m out of n non-parametric bootstrap method to the index set $\mathcal{T} = \{1, \ldots, T\}$; see Bickel and Freedman (1981). That is, the indices $t_s^*$ and, thereby, $z_{t_s^*}^T$, $(s = 1, \ldots, m_T)$, are a random sample of size $m_T$ drawn from, respectively, $\mathcal{T}$ and $\{z_t^T\}_{t=1}^{T}$, where $m_T = [T/S_T]$, the integer part of $T/S_T$. The KBB sample mean $\bar{z}^*_{m_T} = \sum_{s=1}^{m_T} z_{t_s^*}^T/m_T$ may be regarded as that from a random sample of size $m_T$ taken from the blocks $B_t = \{k((t-r)/S_T)(z_r - \bar{z})/(\hat{k}_2 S_T)^{1/2}\}_{r=1}^{T}$, $(t = 1, \ldots, T)$; see PS Remark 2.2, p.3. Note that the blocks $\{B_t\}$ are overlapping and, if the kernel function $k(\cdot)$ has unbounded support, the block length is $T$.

Let $P^*_\omega$ denote the bootstrap probability measure conditional on $\{z_t^T\}_{t=1}^{T}$ (or, equivalently, the observational data $\{z_t\}_{t=1}^{T}$) with $E^*$ and $\mathrm{var}^*$ the corresponding conditional expectation and variance respectively. Under suitable regularity conditions, see the assumptions in PS, pp.3-4, the bootstrap distribution of the scaled and centred KBB sample mean $m_T^{1/2}(\bar{z}^*_{m_T} - \bar{z}^T)$ converges uniformly to that of $T^{1/2}(\bar{z} - \mu)$, i.e.,

$$\sup_{x \in \mathbb{R}} |P^*_\omega\{m_T^{1/2}(\bar{z}^*_{m_T} - \bar{z}^T) \le x\} - P\{T^{1/2}(\bar{z} - \mu) \le x\}| \to 0, \text{ prob-}P; \qquad (2.3)$$

see PS (Theorem 3.1, p.4). Given stricter requirements, PS Theorem 3.2, p.5, provides higher order results on moments of the KBB variance estimator $\hat{\sigma}^2_{kbb}$ (2.2).
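As an illustration only, the transformation (2.1), the variance estimator (2.2) and the m out of n resampling step can be sketched in a few lines. The code below is not part of the paper: the function names are hypothetical, and the truncated kernel used in the usage note is chosen purely for concreteness.

```python
import math
import random

def kbb_transform(z, S, kernel):
    """Kernel-weighted centred observations of eq. (2.1):
    z_t^T = (k2_hat * S)^(-1/2) * sum_{r=t-T}^{t-1} k(r/S) (z_{t-r} - zbar)."""
    T = len(z)
    zbar = sum(z) / T
    zc = [x - zbar for x in z]                                   # centred data
    k2_hat = sum(kernel(s / S) ** 2 for s in range(1 - T, T)) / S
    return [sum(kernel(r / S) * zc[t - 1 - r] for r in range(t - T, t))
            / math.sqrt(k2_hat * S) for t in range(1, T + 1)]

def kbb_mean_bootstrap(z, S, kernel, n_boot=1000, seed=0):
    """m-out-of-n bootstrap of the transformed sample: m_T = [T/S_T] draws
    with replacement; returns the draws of m^(1/2)(zbar*_m - zbar^T) and the
    KBB variance estimator (2.2)."""
    zt = kbb_transform(z, S, kernel)
    T = len(zt)
    m = max(int(T / S), 1)
    zbar_T = sum(zt) / T
    sigma2_kbb = sum((v - zbar_T) ** 2 for v in zt) / T          # eq. (2.2)
    rng = random.Random(seed)
    draws = [math.sqrt(m) * (sum(rng.choice(zt) for _ in range(m)) / m - zbar_T)
             for _ in range(n_boot)]
    return draws, sigma2_kbb
```

With the truncated kernel $k(u) = I(|u| \le 1)$ and $S_T = 1$ each $z_t^T$ reduces to a short moving average of three centred observations; the other kernels discussed below (Bartlett, the Bessel-type kernel (2.5)) plug in through the `kernel` argument.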
Let $k^{*(q)} = \lim_{y \to 0}\{1 - k^*(y)\}/|y|^q$, where the induced self-convolution kernel $k^*(y) = \int k(x - y)k(x)dx/k_2$, and $\mathrm{MSE}(T/S_T, \hat{\sigma}^2_{kbb}) = (T/S_T)E[(\hat{\sigma}^2_{kbb} - J_T)^2]$, where $J_T = \sum_{s=1-T}^{T-1}(1 - |s|/T)R(s)$.

Bias: $E[\hat{\sigma}^2_{kbb}] = J_T + S_T^{-2}(\Gamma_k + o(1)) + U_T$, where $\Gamma_k = k^{*(2)}\sum_{s=-\infty}^{\infty} s^2 R(s)$ and $U_T$ collects remainder terms of smaller order, $U_T = O((S_T/T)^{b-1/2}) + o(S_T^{-2}) + O(S_T/T) + O(S_T^2/T^2)$, with $b > 1$.

Variance: if $S_T^5/T \to \gamma \in (0, \infty]$, then $(T/S_T)\mathrm{var}[\hat{\sigma}^2_{kbb}] = \Delta_k + o(1)$, where $\Delta_k = 2\sigma^4 \int k^*(y)^2 dy$.

Mean squared error: if $S_T^5/T \to \gamma \in (0, \infty)$, then $\mathrm{MSE}(T/S_T, \hat{\sigma}^2_{kbb}) = \Delta_k + \Gamma_k^2/\gamma + o(1)$. The bias

and variance results are similar to Parzen (1957, Theorems 5A and 5B) and Andrews (1991, Proposition 1, p.825), when the Parzen exponent $q$ equals 2. The KBB bias, cf. the tapered block bootstrap (TBB), is $O(1/S_T^2)$, an improvement on the $O(1/S_T)$ of the moving blocks bootstrap (MBB). The expression $\mathrm{MSE}(T/S_T, \hat{\sigma}^2_{kbb}(S_T))$ is identical to that for the mean squared error of the Parzen (1957) estimator based on the induced self-convolution kernel $k^*(y)$. Optimality results for the estimation of $\sigma^2$ are an immediate consequence of PS Theorem 3.2, p.5, and the theoretical results of Andrews (1991) for the Parzen (1957) estimator. Smith (2011, Example 2.3, p.1204) shows that the induced self-convolution kernel $k^*(y) = k^*_{QS}(y)$, where the quadratic spectral (QS) kernel

$$k^*_{QS}(y) = \frac{3}{(ay)^2}\left(\frac{\sin ay}{ay} - \cos ay\right), \quad a = 6\pi/5, \qquad (2.4)$$

if

$$k(x) = \begin{cases} \left(\frac{5\pi}{8}\right)^{1/2}\frac{1}{x}J_1\left(\frac{6\pi x}{5}\right) & \text{if } x \ne 0, \\ \left(\frac{5\pi}{8}\right)^{1/2}\frac{3\pi}{5} & \text{if } x = 0; \end{cases} \qquad (2.5)$$

here $J_v(z) = \sum_{k=0}^{\infty}(-1)^k(z/2)^{2k+v}/\{\Gamma(k+1)\Gamma(k+v+1)\}$, a Bessel function of the first kind (Gradshteyn and Ryzhik, 1980, 8.402, p.951) with $\Gamma(\cdot)$ the gamma function. The QS kernel $k^*_{QS}(y)$ (2.4) is well known to possess optimality properties, e.g., for the estimation of spectral densities (Priestley, 1962; 1981) and probability densities (Epanechnikov, 1969; Sacks and Ylvisaker, 1981). PS Corollary 3.1, p.6, establishes a similar result for the KBB variance estimator $\tilde{\sigma}^2_{kbb}(S_T)$ computed with the kernel function (2.5). For sensible comparisons, the requisite bandwidth parameter is $\bar{S}_T^k = S_T/\int k^*(y)^2 dy$, see Andrews (1991, (4.1), p.829), if the respective asymptotic variances scaled by $T/S_T$ are to coincide; see Andrews (1991, p.829). Note that $\int k^*_{QS}(y)^2 dy = 1$. Then, for any bandwidth sequence $\{S_T\}$ such that $S_T \to \infty$ and $S_T^5/T \to \gamma \in (0, \infty)$, $\lim_{T \to \infty}[\mathrm{MSE}(T/S_T, \hat{\sigma}^2_{kbb}(\bar{S}_T^k)) - \mathrm{MSE}(T/S_T, \tilde{\sigma}^2_{kbb}(S_T))] \ge 0$ with strict inequality if $k^*(y) \ne k^*_{QS}(y)$ on a set of positive Lebesgue measure; see PS Corollary 3.1, p.6. Also, the bandwidth $S_T^* = (4\Gamma_k^2/\Delta_k)^{1/5}T^{1/5}$ is optimal in the following sense.
For any bandwidth sequence $\{S_T\}$ such that $S_T \to \infty$ and $S_T^5/T \to \gamma \in (0, \infty)$, $\lim_{T \to \infty}[\mathrm{MSE}(T^{4/5}, \hat{\sigma}^2_{kbb}(S_T)) - \mathrm{MSE}(T^{4/5}, \hat{\sigma}^2_{kbb}(S_T^*))] \ge 0$ with strict inequality unless $S_T = S_T^* + o(T^{1/5})$; see PS Corollary 3.2, p.6.

3 Quasi-Maximum Likelihood

This section applies the KBB method briefly outlined above to parameter estimation in the quasi-maximum likelihood (QML) setting. In particular, under the regularity conditions detailed below, KBB may be used to construct hypothesis tests and confidence intervals. The proofs of the results basically rely on verifying a number of the conditions required for several general lemmata established in Gonçalves and White (2004) on resampling methods for extremum estimators. Indeed, although the focus of Gonçalves and White (2004) is MBB, the results therein also apply to other block bootstrap schemes such as KBB.

To describe the set-up, let the $d_z$-vectors $z_t$, $(t = 1, \ldots, T)$, denote a realisation from the stationary and strong mixing stochastic process $\{z_t\}$. The $d_\theta$-vector $\theta$ of parameters is of interest, where $\theta \in \Theta$ with the compact parameter space $\Theta \subseteq \mathbb{R}^{d_\theta}$. Consider the log-density $L_t(\theta) = \log f(z_t; \theta)$ and its expectation $L(\theta) = E[L_t(\theta)]$. The true value $\theta_0$ of $\theta$ is defined by $\theta_0 = \arg\max_{\theta \in \Theta} L(\theta)$ with, correspondingly, the QML estimator $\hat{\theta}$ of $\theta$ given by $\hat{\theta} = \arg\max_{\theta \in \Theta} \bar{L}_T(\theta)$, where the sample mean $\bar{L}_T(\theta) = \sum_{t=1}^{T} L_t(\theta)/T$.

To describe the KBB method for QML, define the kernel smoothed log density function

$$L_t^T(\theta) = \frac{1}{(\hat{k}_2 S_T)^{1/2}}\sum_{r=t-T}^{t-1} k\left(\frac{r}{S_T}\right)L_{t-r}(\theta), \quad (t = 1, \ldots, T);$$

cf. (2.1). As in Section 2, the indices $t_s^*$ and the consequent bootstrap sample $L_{t_s^*}^T(\theta)$, $(s = 1, \ldots, m_T)$, denote random samples of size $m_T$ drawn with replacement from the index set $\mathcal{T} = \{1, \ldots, T\}$ and the bootstrap sample space $\{L_t^T(\theta)\}_{t=1}^{T}$, where $m_T = [T/S_T]$ is the integer part of $T/S_T$. The bootstrap QML estimator $\hat{\theta}^*$ is then defined by $\hat{\theta}^* = \arg\max_{\theta \in \Theta} \bar{L}^*_{m_T}(\theta)$, where the bootstrap sample mean $\bar{L}^*_{m_T}(\theta) = \sum_{s=1}^{m_T} L_{t_s^*}^T(\theta)/m_T$.

Remark 3.1. Note that, because $E[\partial L_t(\theta_0)/\partial\theta] = 0$, it is unnecessary to centre $L_t^T(\theta)$, $(t = 1, \ldots, T)$, at $\bar{L}_T(\theta)$; cf. (2.1).
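To fix ideas, the following toy implementation (not from the paper; the names and the Gaussian-location model are illustrative assumptions) applies the recipe above to the quasi-log likelihood $L_t(\theta) = -(z_t - \theta)^2/2$, for which the argmax of the resampled kernel-smoothed criterion is available in closed form as a ratio of weighted sums; a positive kernel such as the truncated or Bartlett kernel is assumed so that the resampled weight sums are positive.

```python
import random

def kbb_qml_location(z, S, kernel, n_boot=500, seed=0):
    """KBB for Gaussian-location QML: kernel-smooth the contributions
    L_t(theta) = -(z_t - theta)^2 / 2, then m-out-of-n resample the smoothed
    contributions.  Up to constants common to all t (which cancel in the
    argmax), L_t^T(theta) is the weighted quadratic
    -(1/2) sum_r k(r/S) (z_{t-r} - theta)^2, so each bootstrap QML estimate
    is a ratio of resampled weighted sums."""
    T = len(z)
    a, b = [], []                 # per-t kernel weight sum and weighted data sum
    for t in range(1, T + 1):
        w = [kernel(r / S) for r in range(t - T, t)]
        a.append(sum(w))
        b.append(sum(wi * z[t - 1 - r]
                     for wi, r in zip(w, range(t - T, t))))
    theta_hat = sum(z) / T        # full-sample QML estimator (the sample mean)
    m = max(int(T / S), 1)        # bootstrap sample size m_T = [T/S_T]
    rng = random.Random(seed)
    boot = []
    for _ in range(n_boot):
        idx = [rng.randrange(T) for _ in range(m)]
        boot.append(sum(b[i] for i in idx) / sum(a[i] for i in idx))
    return theta_hat, boot
```

With positive kernel weights every bootstrap estimate is a weighted average of the data, which makes the closed form easy to sanity-check; for a general model the inner argmax would instead be computed numerically.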

The following conditions are imposed to establish the consistency of the bootstrap estimator $\hat{\theta}^*$ for $\theta_0$. Let $f_t(\theta) = f(z_t; \theta)$, $(t = 1, 2, \ldots)$.

Assumption 3.1 (a) $(\Omega, \mathcal{F}, P)$ is a complete probability space; (b) the finite $d_z$-dimensional stochastic process $Z_t: \Omega \to \mathbb{R}^{d_z}$, $(t = 1, 2, \ldots)$, is stationary and strong mixing with mixing numbers of size $-v/(v-1)$ for some $v > 1$ and is measurable for all $t$, $(t = 1, 2, \ldots)$.

Assumption 3.2 (a) $f: \mathbb{R}^{d_z} \times \Theta \to \mathbb{R}_+$ is $\mathcal{F}$-measurable for each $\theta \in \Theta$, $\Theta$ a compact subset of $\mathbb{R}^{d_\theta}$; (b) $f_t(\cdot): \Theta \to \mathbb{R}_+$ is continuous on $\Theta$ a.s.-$P$; (c) $\theta_0 \in \Theta$ is the unique maximizer of $E[\log f_t(\theta)]$ and $E[\sup_{\theta \in \Theta}|\log f_t(\theta)|^\alpha] < \infty$ for some $\alpha > v$; (d) $\log f_t(\theta)$ is globally Lipschitz continuous on $\Theta$, i.e., for all $\theta, \theta^0 \in \Theta$, $|\log f_t(\theta) - \log f_t(\theta^0)| \le L_t\|\theta - \theta^0\|$ a.s.-$P$ with $\sup_T E[\sum_{t=1}^{T} L_t/T] < \infty$.

Let $I(\cdot)$ denote the indicator function, i.e., $I(A) = 1$ if $A$ is true and $0$ otherwise.

Assumption 3.3 (a) $S_T \to \infty$ and $S_T = o(T^{1/2})$; (b) $k(\cdot): \mathbb{R} \to [-k_{\max}, k_{\max}]$, $k_{\max} < \infty$, $k(0) \ne 0$, $k_1 \ne 0$, and $k(\cdot)$ is continuous at 0 and almost everywhere; (c) $\int \bar{k}(x)dx < \infty$, where $\bar{k}(x) = I(x \ge 0)\sup_{y \ge x}|k(y)| + I(x < 0)\sup_{y \le x}|k(y)|$; (d) $K(\lambda) \ge 0$ for all $\lambda \in \mathbb{R}$, where $K(\lambda) = (2\pi)^{-1}\int k(x)\exp(-ix\lambda)dx$.

Theorem 3.1. Let Assumptions 3.1-3.3 hold. Then, (a) $\hat{\theta} - \theta_0 \to 0$, prob-$P$; (b) $\hat{\theta}^* - \hat{\theta} \to 0$, prob-$P^*_\omega$, prob-$P$.

To prove consistency of the KBB distribution requires a strengthening of the above assumptions.

Assumption 3.4 (a) $(\Omega, \mathcal{F}, P)$ is a complete probability space; (b) the finite $d_z$-dimensional stochastic process $Z_t: \Omega \to \mathbb{R}^{d_z}$, $(t = 1, 2, \ldots)$, is stationary and strong mixing with mixing numbers of size $-3v/(v-1)$ for some $v > 1$ and is measurable for all $t$, $(t = 1, 2, \ldots)$.

Assumption 3.5 (a) $f: \mathbb{R}^{d_z} \times \Theta \to \mathbb{R}_+$ is $\mathcal{F}$-measurable for each $\theta \in \Theta$, $\Theta$ a compact subset of $\mathbb{R}^{d_\theta}$; (b) $f_t(\cdot): \Theta \to \mathbb{R}_+$ is continuously differentiable of order 2 on $\Theta$ a.s.-$P$, $(t = 1, 2, \ldots)$; (c) $\theta_0 \in \mathrm{int}(\Theta)$ is the unique maximizer of $E[\log f_t(\theta)]$.

Define $A(\theta) = E[\partial^2 L_t(\theta)/\partial\theta\partial\theta']$ and $B(\theta) = \lim_{T \to \infty}\mathrm{var}[T^{1/2}\partial\bar{L}_T(\theta)/\partial\theta]$.
Assumption 3.6 (a) $\partial^2 L_t(\theta)/\partial\theta\partial\theta'$ is globally Lipschitz continuous on $\Theta$; (b) $E[\sup_{\theta \in \Theta}\|\partial L_t(\theta)/\partial\theta\|^\alpha] < \infty$ for some $\alpha > \max[4v, 1/\eta]$ and $E[\sup_{\theta \in \Theta}\|\partial^2 L_t(\theta)/\partial\theta\partial\theta'\|^\beta] < \infty$ for some $\beta > 2v$; (c) $A_0 = A(\theta_0)$ is non-singular and $B_0 = \lim_{T \to \infty}\mathrm{var}[T^{1/2}\partial\bar{L}_T(\theta_0)/\partial\theta]$ is positive definite.

Under these regularity conditions, $B_0^{-1/2}A_0 T^{1/2}(\hat{\theta} - \theta_0) \to_d N(0, I_{d_\theta})$; see the Proof of Theorem 3.2.

Theorem 3.2. Suppose Assumptions 3.1-3.6 are satisfied. Then, if $S_T \to \infty$ and $S_T = O(T^{1/2-\eta})$ with $0 < \eta < 1/2$,

$$\sup_{x \in \mathbb{R}^{d_\theta}} |P^*_\omega\{m_T^{1/2}(\hat{\theta}^* - \hat{\theta})/k^{1/2} \le x\} - P\{T^{1/2}(\hat{\theta} - \theta_0) \le x\}| \to 0, \text{ prob-}P,$$

where $k = k_2/k_1^2$.

4 Simulation Results

In this section we report the results of a set of Monte Carlo experiments comparing the finite sample performance of different methods for the construction of confidence intervals for the parameters of the mean regression model when there is autocorrelation in the data. We investigate KBB, MBB and confidence intervals based on HAC covariance matrix estimators.

4.1 Design

We consider the same simulation design as that of Andrews (1991, Section 9) and Andrews and Monahan (1992, Section 3), i.e., linear regression with an intercept and four regressor variables. The model studied is

$$y_t = \beta_0 + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \beta_3 x_{3,t} + \beta_4 x_{4,t} + \sigma_t u_t, \qquad (4.1)$$

where $\sigma_t$ is a function of the regressors $x_{i,t}$, $(i = 1, \ldots, 4)$, to be specified below. The interest concerns 95% confidence interval estimators for the coefficient $\beta_1$ of the first non-constant regressor. The regressors and error term $u_t$ are generated as follows. First, $u_t = \rho u_{t-1} + \varepsilon_{0,t}$, with initial condition $u_{-49} = \varepsilon_{0,-49}$. Let $\tilde{x}_{i,t} = \rho\tilde{x}_{i,t-1} + \varepsilon_{i,t}$, $(i = 1, \ldots, 4)$,

with initial conditions $\tilde{x}_{i,-49} = \varepsilon_{i,-49}$, $(i = 1, \ldots, 4)$. As in Andrews (1991), the innovations $\varepsilon_{i,t}$, $(i = 0, \ldots, 4)$, $(t = -49, \ldots, T)$, are independent standard normal random variates. Define $\tilde{x}_t = (\tilde{x}_{1,t}, \ldots, \tilde{x}_{4,t})'$ and $\ddot{x}_t = \tilde{x}_t - \sum_{s=1}^{T}\tilde{x}_s/T$. The regressors $x_{i,t}$, $(i = 1, \ldots, 4)$, are then constructed as

$$x_t = (x_{1,t}, \ldots, x_{4,t})' = \left[\sum_{s=1}^{T}\ddot{x}_s\ddot{x}_s'/T\right]^{-1/2}\ddot{x}_t, \quad (t = 1, \ldots, T).$$

The observations on the dependent variable $y_t$ are obtained from the linear regression model (4.1) using the true parameter values $\beta_i = 0$, $(i = 0, \ldots, 4)$. The values of $\rho$ are 0, 0.2, 0.5, 0.7 and 0.9. Homoskedastic, $\sigma_t = 1$, and heteroskedastic, $\sigma_t = |x_{1,t}|$, regression errors are examined. Sample sizes $T = 64$, 128 and 256 are considered. The number of bootstrap replications for each experiment was 1000 with 5000 random samples generated. The bootstrap sample size or block size $m_T$ was defined as $\max\{[T/S_T], 1\}$.

4.2 Bootstrap Methods

Confidence intervals based on KBB are compared with those obtained for MBB [Fitzenberger (1997), Gonçalves and White (2004)] and TBB [Paparoditis and Politis (2002)]. Bootstrap confidence intervals are commonly computed using the standard percentile [Efron (1979)], the symmetric percentile and the equal tailed [Hall (1992, p.12)] methods.¹ For succinctness only the best results are reported for each of the bootstrap methods, i.e., the standard percentile KBB and MBB methods and the equal-tailed TBB method.

To describe the standard percentile KBB method, let $\hat{\beta}_1$ denote the LS estimator of $\beta_1$ and $\hat{\beta}_1^*$ its bootstrap counterpart. Because the asymptotic distribution of the LS estimator $\hat{\beta}_1$ is normal and hence symmetric about $\beta_1$, in large samples the distributions of $\hat{\beta}_1 - \beta_1$ and $\beta_1 - \hat{\beta}_1$ are the same. From the uniform consistency of the bootstrap, Theorem 3.2, the distribution of $\beta_1 - \hat{\beta}_1$ is well approximated by that of $(\hat{\beta}_1^* - \hat{\beta}_1)/\hat{k}^{1/2}$.
Therefore, the bootstrap percentile confidence interval for $\beta_1$ is given by

$$\left([1 - 1/\hat{k}^{1/2}]\hat{\beta}_1 + \hat{\beta}^*_{1,0.025}/\hat{k}^{1/2}, \; [1 - 1/\hat{k}^{1/2}]\hat{\beta}_1 + \hat{\beta}^*_{1,0.975}/\hat{k}^{1/2}\right),$$

¹ The standard percentile method is valid here as the asymptotic distribution of the least squares estimator is symmetric; see Politis (1998, p.45).

where $\hat{\beta}^*_{1,\alpha}$ is the $100\alpha$ percentile of the distribution of $\hat{\beta}_1^*$ and $\hat{k} = \hat{k}_2/\hat{k}_1^2$.² For MBB, $\hat{k} = 1$. TBB is applied to the sample components $[\sum_{t=1}^{T}(1, x_t')'(1, x_t')/T]^{-1}(1, x_t')'\hat{\varepsilon}_t$, $(t = 1, \ldots, T)$, of the LS influence function, where $\hat{\varepsilon}_t$ are the LS regression residuals; see Paparoditis and Politis (2002).³ The equal-tailed TBB confidence interval does not require symmetry of the distribution of $\hat{\beta}_1$. Thus, because the distribution of $\hat{\beta}_1 - \beta_1$ is uniformly close to that of $(\hat{\beta}_1^* - \hat{\beta}_1)/\hat{k}$ for sample sizes large enough, the equal-tailed TBB confidence interval is given by

$$\left([1 + 1/\hat{k}]\hat{\beta}_1 - \hat{\beta}^*_{1,0.975}/\hat{k}, \; [1 + 1/\hat{k}]\hat{\beta}_1 - \hat{\beta}^*_{1,0.025}/\hat{k}\right).$$

KBB confidence intervals are constructed with the following choices of kernel function: the truncated [tr], Bartlett [bt] and (2.5) [qs] kernel functions, the last with the optimal quadratic spectral kernel (2.4) as the associated convolution, and the kernel function based on the optimal trapezoidal taper of Paparoditis and Politis (2001) [pp]; see Paparoditis and Politis (2001, p.1111). The respective confidence interval estimators are denoted by KBB_j, where j = tr, bt, qs and pp. TBB confidence intervals are computed using the optimal Paparoditis and Politis (2001) trapezoidal taper. Standard t-statistic confidence intervals using heteroskedasticity and autocorrelation consistent (HAC) estimators for the asymptotic variance matrix are also considered based on the Bartlett, see Newey and West (1987), and quadratic spectral, see Andrews (1991), kernel functions. The respective HAC confidence intervals are denoted by B and QS.

² Alternatively, $\hat{k}_j$ can be replaced by $k_j = \int k(x)^j dx$, $(j = 1, 2)$.

³ TBB employs a non-negative taper $w(\cdot)$ with unit interval support which is strictly positive in a neighbourhood of, and symmetric about, 1/2 and is non-decreasing on the interval $[0, 1/2]$; see Paparoditis and Politis (2001, Assumptions 1 and 2, p.1107). Hence, $w(\cdot)$ is centred and unimodal at 1/2.
Given a positive integer bandwidth parameter $S_T$, the TBB sample space is $\{[\sum_{t=1}^{T}(1, x_t')'(1, x_t')/T]^{-1}S_T^{1/2}\sum_{j=1}^{S_T}w_{S_T}(j)(1, x_{t+j-1}')'\hat{\varepsilon}_{t+j-1}/\|w_{S_T}\|_2\}_{t=1}^{T-S_T+1}$, where $w_{S_T}(j) = w\{(j - 1/2)/S_T\}$ and $\|w_{S_T}\|_2 = (\sum_{j=1}^{S_T}w_{S_T}(j)^2)^{1/2}$; cf. Paparoditis and Politis (2001, (3), p.1106, and Step 2, p.1107). Because $\sum_{t=1}^{T}(1, x_t')'(1, x_t')/T$ is the identity matrix in the Andrews (1991) design adopted here, TBB draws a random sample of size $m_T = T/S_T$ with replacement from the TBB sample space $\{S_T^{1/2}\sum_{j=1}^{S_T}w_{S_T}(j)(1, x_{t+j-1}')'\hat{\varepsilon}_{t+j-1}/\|w_{S_T}\|_2\}_{t=1}^{T-S_T+1}$. Denote the TBB bootstrap sample mean $\bar{z}^* = \sum_{s=1}^{m_T}S_T^{1/2}\sum_{j=1}^{S_T}w_{S_T}(j)(1, x_{t_s^*+j-1}')'\hat{\varepsilon}_{t_s^*+j-1}/(\|w_{S_T}\|_2 m_T)$ and sample mean $\bar{z}^T = \sum_{t=1}^{T-S_T+1}S_T^{1/2}\sum_{j=1}^{S_T}w_{S_T}(j)(1, x_{t+j-1}')'\hat{\varepsilon}_{t+j-1}/(\|w_{S_T}\|_2(T - S_T + 1))$. Then, from Paparoditis and Politis (2002, Theorem 2.2, p.135), $\sup_{x \in \mathbb{R}^{d_\theta}}|P^*_\omega\{T^{1/2}(\bar{z}^* - \bar{z}^T) \le x\} - P\{T^{1/2}(\hat{\theta} - \theta_0) \le x\}| \to 0$, prob-$P$. See Parente and Smith (2018, Section 4.1, pp.6-8) for a detailed comparison of TBB and KBB.
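In code, the two interval constructions above amount to shifting and rescaling bootstrap percentiles. The helper below is a sketch only (hypothetical name; a crude empirical quantile is used rather than an interpolated one):

```python
import math

def kbb_percentile_ci(beta_hat, boot, k_hat, alpha=0.05, equal_tailed=False):
    """Standard percentile interval
      ( [1 - 1/k^(1/2)] b + q*_{a/2}/k^(1/2), [1 - 1/k^(1/2)] b + q*_{1-a/2}/k^(1/2) )
    and, with equal_tailed=True, the equal-tailed form
      ( [1 + 1/k] b - q*_{1-a/2}/k, [1 + 1/k] b - q*_{a/2}/k ),
    where q*_p is the 100p percentile of the bootstrap draws and k_hat = 1
    recovers the usual (MBB-style) percentile interval."""
    s = sorted(boot)
    q = lambda p: s[min(int(p * len(s)), len(s) - 1)]  # crude empirical quantile
    if equal_tailed:
        return ((1 + 1 / k_hat) * beta_hat - q(1 - alpha / 2) / k_hat,
                (1 + 1 / k_hat) * beta_hat - q(alpha / 2) / k_hat)
    rk = math.sqrt(k_hat)
    return ((1 - 1 / rk) * beta_hat + q(alpha / 2) / rk,
            (1 - 1 / rk) * beta_hat + q(1 - alpha / 2) / rk)
```

Setting `k_hat = 1` with `equal_tailed=False` gives the plain Efron percentile interval, i.e. the 2.5% and 97.5% points of the bootstrap draws themselves.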

4.3 Bandwidth Choice

The accuracy of the bootstrap approximation in practice is particularly sensitive to the choice of the bandwidth or block size. Gonçalves and White (2004) suggest basing the choice of MBB block size on the automatic bandwidth obtained in Andrews (1991) for the Bartlett kernel, noting that the MBB bootstrap variance estimator is asymptotically equivalent to the Bartlett kernel variance estimator. Smith (2011, Lemma A.3, p.1219) obtained a similar equivalence between the KBB variance estimator and the corresponding HAC estimator based on the implied kernel function $k^*(\cdot)$; see also Smith (2005, Lemma 2.1, p.164). We therefore adopt a similar approach to that of Gonçalves and White (2004) for the choice of the bandwidth for the KBB confidence interval estimators, in particular, the (integer part of the) automatic bandwidth of Andrews (1991) for the implied kernel function $k^*(\cdot)$. Despite lacking a theoretical justification, the results discussed below indicate that this procedure fares well for the simulation designs studied here. The optimal bandwidth for HAC variance matrix estimation based on the kernel $k^*(\cdot)$ is given by

$$S_T^* = \left(\frac{q k_q^2 \alpha(q) T}{\int k^*(x)^2 dx}\right)^{1/(2q+1)},$$

where $\alpha(q)$ is a function of the unknown spectral density matrix and $k_q = \lim_{x \to 0}[1 - k^*(x)]/|x|^q$, $q \in [0, \infty)$; see Andrews (1991, Section 5). Note that $q = 1$ for the Bartlett kernel and $q = 2$ for the Parzen and quadratic spectral kernels and the optimal Paparoditis and Politis (2001) taper. The optimal bandwidth $S_T^*$ requires the estimation of the parameters $\alpha(1)$ and $\alpha(2)$. We use the semi-parametric method recommended in Andrews (1991, (6.4), p.835) based on AR(1) approximations and using the same unit weighting scheme there. Let $z_{i,t} = x_{i,t}(y_t - x_t'\hat{\beta})$, $(i = 1, \ldots, 4)$.
The estimators of $\alpha(1)$ and $\alpha(2)$ are given by

$$\hat{\alpha}(1) = \frac{\sum_{i=1}^{4} 4\hat{\rho}_i^2\hat{\sigma}_i^4/[(1 - \hat{\rho}_i)^6(1 + \hat{\rho}_i)^2]}{\sum_{i=1}^{4}\hat{\sigma}_i^4/(1 - \hat{\rho}_i)^4}, \qquad \hat{\alpha}(2) = \frac{\sum_{i=1}^{4} 4\hat{\rho}_i^2\hat{\sigma}_i^4/(1 - \hat{\rho}_i)^8}{\sum_{i=1}^{4}\hat{\sigma}_i^4/(1 - \hat{\rho}_i)^4},$$

where $\hat{\rho}_i$ and $\hat{\sigma}_i^2$ are the estimators of the AR(1) coefficient and the innovation variance in a first order autoregression for $z_{i,t}$, $(i = 1, \ldots, 4)$. To avoid extremely large values of the bandwidth due to erroneously large values of $\hat{\rho}_i$, which tended to occur for large values of the autocorrelation coefficient $\rho$, we replaced $\hat{\rho}_i$ by the truncated version $\max[\min[\hat{\rho}_i, 0.97], -0.97]$.
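A minimal sketch of this AR(1) plug-in follows (hypothetical function names; least-squares AR(1) fits without intercept are assumed, and the Bartlett constants $q = 1$, $k_q = 1$, $\int k^2 = 2/3$ are used as defaults in the bandwidth helper):

```python
def ar1_plugin_alpha(series_list, trunc=0.97):
    """Andrews (1991, (6.4))-style plug-in: fit an AR(1) to each series,
    truncate rho_i to [-trunc, trunc] as in the text, and combine the
    (rho_i, sigma_i^2) pairs into alpha(1) and alpha(2) with unit weights."""
    num1 = num2 = den = 0.0
    for z in series_list:
        T = len(z)
        zbar = sum(z) / T
        zc = [v - zbar for v in z]
        rho = (sum(zc[t] * zc[t - 1] for t in range(1, T))
               / sum(v * v for v in zc[:-1]))          # LS AR(1) coefficient
        rho = max(min(rho, trunc), -trunc)             # truncation from the text
        s2 = sum((zc[t] - rho * zc[t - 1]) ** 2 for t in range(1, T)) / (T - 1)
        num1 += 4 * rho ** 2 * s2 ** 2 / ((1 - rho) ** 6 * (1 + rho) ** 2)
        num2 += 4 * rho ** 2 * s2 ** 2 / (1 - rho) ** 8
        den += s2 ** 2 / (1 - rho) ** 4
    return num1 / den, num2 / den

def optimal_bandwidth(alpha_q, q, T, kq=1.0, int_ksq=2.0 / 3.0):
    """S*_T = (q k_q^2 alpha(q) T / int k*(x)^2 dx)^(1/(2q+1)); the defaults
    kq = 1 and int k^2 = 2/3 correspond to the Bartlett kernel."""
    return (q * kq ** 2 * alpha_q * T / int_ksq) ** (1.0 / (2 * q + 1))
```

For the Bartlett kernel this reproduces the familiar $S_T^* = 1.1447(\alpha(1)T)^{1/3}$ rule; other kernels enter through `q`, `kq` and `int_ksq`.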

A non-parametric version of the Andrews (1991) bandwidth estimator based on the flat-top lag-window of Politis and Romano (1995) is also considered, given by

$$\hat{\alpha}(q) = \frac{\sum_{i=1}^{4}\left[\sum_{j=-M_i}^{M_i}|j|^q\lambda(j/M_i)\hat{R}_i(j)\right]^2}{\sum_{i=1}^{4}\left[\sum_{j=-M_i}^{M_i}\lambda(j/M_i)\hat{R}_i(j)\right]^2},$$

where $\lambda(t) = I(|t| \in [0, 1/2]) + 2(1 - |t|)I(|t| \in (1/2, 1])$, $\hat{R}_i(j)$ is the sample $j$th autocovariance estimator for $z_{i,t}$, $(i = 1, \ldots, 4)$, and $M_i$ is computed using the method described in Politis and White (2004, ftn. c, p.59). The MBB and TBB block sizes are given by $\min[\lceil\hat{S}_T^*\rceil, T]$, where $\lceil\cdot\rceil$ is the ceiling function and $\hat{S}_T^*$ the optimal bandwidth estimator for the Bartlett kernel for MBB and for the kernel $k^*(\cdot)$ induced by the optimal Paparoditis and Politis (2001) trapezoidal taper for TBB.

4.4 Results

Tables 1 and 2 provide the empirical coverage rates for 95% confidence interval estimates obtained using the methods described above for the homoskedastic and heteroskedastic cases respectively.

Tables 1 and 2 around here

Overall, to a lesser or greater degree, all confidence interval estimates display undercoverage for the true value $\beta_1 = 0$, especially for high values of $\rho$, a feature found in previous studies of MBB, see, e.g., Gonçalves and White (2004), and of confidence intervals based on t-statistics with HAC variance matrix estimators, see Andrews (1991). As should be expected from the theoretical results of Section 3, as $T$ increases, empirical coverage rates approach the nominal rate of 95%. A closer analysis of the results in Tables 1 and 2 reveals that the performance of the various methods depends critically on how the bandwidth or block size is computed. While, for low values of $\rho$, the methods of Andrews (1991) and Politis and Romano (1995) produce very similar results, the Andrews (1991) automatic bandwidth yields results closer to the nominal 95% coverage for higher values of $\rho$. However, this is not particularly surprising since the Andrews (1991) method is based on the correct model.
A comparison of the various KBB confidence interval estimates with those using MBB reveals that generally the coverage rates for MBB are closer to the nominal 95% than those of KBB_tr, although both are based on the truncated kernel. However, MBB

usually produces coverage rates lower than those of KBB_bt, KBB_qs and KBB_pp, especially for higher values of $\rho$, apart from the homoskedastic case with $T = 64$, see Table 1, when the coverage rates for MBB are very similar to those obtained for KBB_bt and KBB_qs. The results with homoskedastic innovations in Table 1 indicate that the TBB coverage is poorer than that for KBB and MBB. In contradistinction, for heteroskedastic innovations, Table 2 indicates that TBB displays reasonable coverage properties compared with KBB_bt, KBB_qs and KBB_pp for the larger sample sizes $T = 128$ and 256 for all values of $\rho$ except $\rho = 0.9$. All bootstrap confidence interval estimates outperform those based on HAC t-statistics for higher values of $\rho$, whereas for lower values both bootstrap and HAC t-statistic methods produce similarly satisfactory results. Generally, one of the KBB class of confidence interval estimates, KBB_bt, KBB_qs and KBB_pp, outperforms any of the other methods. With homoskedastic innovations, see Table 1, the coverage rates for KBB_pp confidence interval estimates are closest to the nominal 95%, no matter which optimal bandwidth parameter estimation method is used; a similar finding of the robustness of KBB_pp to bandwidth choice is illustrated in the simulation study of Parente and Smith (2018) for the case of the mean. The results in Table 2 for heteroskedastic innovations are far more varied. As noted above, for low values of $\rho$, the coverage rates of KBB, TBB and HAC t-statistic confidence interval estimates are broadly similar. Indeed, the best results are obtained by KBB_bt for moderate values of $\rho$ and by KBB_pp for high values of $\rho$, especially $\rho = 0.9$. Note, though, for $T = 64$, that with the Politis and Romano (1995) bandwidth estimate KBB_qs appears to be the best method for low values of $\rho$.

4.5 Summary

In general, confidence interval estimates based on KBB_pp provide the best coverage rates for all sample sizes and especially for larger values of the autoregression parameter $\rho$.
5 Conclusion

This paper applies the kernel block bootstrap method to quasi-maximum likelihood estimation of dynamic models under stationarity and weak dependence. The proposed bootstrap method is simple to implement by first kernel-weighting the components comprising the quasi-log likelihood function in an appropriate way and then sampling the

resultant transformed components using the standard m out of n bootstrap for independent and identically distributed observations. We investigate the first order asymptotic properties of the kernel block bootstrap for quasi-maximum likelihood demonstrating, in particular, its consistency and the first-order asymptotic validity of the bootstrap approximation to the distribution of the quasi-maximum likelihood estimator. A set of simulation experiments for the mean regression model illustrates the efficacy of the kernel block bootstrap for quasi-maximum likelihood estimation. Indeed, in these experiments, it outperforms other bootstrap methods for the sample sizes considered, especially if the kernel function is chosen as the optimal taper suggested by Paparoditis and Politis (2001).

Appendix

Throughout the Appendices, $C$ and $\Delta$ denote generic positive constants that may be different in different uses, with C, M, and T the Chebyshev, Markov, and triangle inequalities respectively. UWL is a uniform weak law of large numbers such as Newey and McFadden (1994, Lemma 2.4, p.2129) for stationary and mixing (and, thus, ergodic) processes. A similar notation is adopted to that in Gonçalves and White (2004). For any bootstrap statistic $T_T^*(\cdot, \omega)$, $T_T^*(\cdot, \omega) \to 0$, prob-$P^*_\omega$, prob-$P$ if, for any $\delta > 0$ and any $\xi > 0$, $\lim_{T \to \infty} P\{\omega: P^*_\omega\{\lambda: |T_T^*(\lambda, \omega)| > \delta\} > \xi\} = 0$. To simplify the analysis, the appendices consider the transformed observations $L_t^T(\theta) = (k_2 S_T)^{-1/2}\sum_{s=t-T}^{t-1}k(s/S_T)L_{t-s}(\theta)$ with $k_2$ substituting for $\hat{k}_2 = \sum_{t=1-T}^{T-1}k(t/S_T)^2/S_T$ in the main text since $\hat{k}_2 - k_2 = o(1)$, cf. PS Supplement Corollary K.2, p.S.21. For simplicity, where required, it is assumed that $T/S_T$ is an integer.

Appendix A: Preliminary Lemmas

Assumption A.1. (Bootstrap Pointwise WLLN.) For each $\theta \in \Theta \subseteq \mathbb{R}^{d_\theta}$, $\Theta$ a compact set, if $S_T \to \infty$ and $S_T = o(T^{1/2})$, then $(k_2/S_T)^{1/2}[\bar{L}^*_{m_T}(\theta) - \bar{L}^T(\theta)] \to 0$, prob-$P^*_\omega$, prob-$P$.

Remark A.1. See Lemma A.2 below.

Assumption A.2. (Uniform Convergence.) $\sup_{\theta \in \Theta}|(k_2/S_T)^{1/2}\bar{L}^T(\theta) - k_1 L(\theta)| \to 0$, prob-$P$.

Remark A.2. The hypotheses of the UWLs Smith (2011, Lemma A.1, p.1217) and Newey and McFadden (1994, Lemma 2.4, p.2129) for stationary and mixing (and, thus, ergodic) processes are satisfied under Assumptions 3.1-3.3. Hence, $\sup_{\theta \in \Theta}|(k_2/S_T)^{1/2}\bar{L}^T(\theta) - k_1\bar{L}_T(\theta)| \to 0$, prob-$P$, noting $\sup_{\theta \in \Theta}|\bar{L}_T(\theta) - L(\theta)| \to 0$, prob-$P$, where $L(\theta) = E[L_t(\theta)]$. Thus, Assumption A.2 follows by T and $k_1, k_2 = O(1)$.

Assumption A.3. (Global Lipschitz Continuity.) For all $\theta, \theta^0 \in \Theta$, $|L_t(\theta) - L_t(\theta^0)| \le L_t\|\theta - \theta^0\|$ a.s.-$P$, where $\sup_T E[\sum_{t=1}^{T}L_t/T] < \infty$.

Remark A.3. Assumption A.3 is Assumption 3.2(d).

Lemma A.1. (Bootstrap UWL.) Suppose Assumptions A.1-A.3 hold. Then, for $S_T \to \infty$ and $S_T = o(T^{1/2})$, for any $\delta > 0$ and $\xi > 0$,

$$\lim_{T \to \infty} P\{P^*_\omega\{\sup_{\theta \in \Theta}|(k_2/S_T)^{1/2}\bar{L}^*_{m_T}(\theta) - k_1 L(\theta)| > \delta\} > \xi\} = 0.$$

Proof. From Assumption A.2 the result is proven if

$$\lim_{T \to \infty} P\{P^*_\omega\{\sup_{\theta \in \Theta}(k_2/S_T)^{1/2}|\bar{L}^*_{m_T}(\theta) - \bar{L}^T(\theta)| > \delta\} > \xi\} = 0.$$

The following preliminary results are useful in the later analysis. By global Lipschitz continuity of $L_t(\cdot)$ and, for $T$ large enough,

$$(k_2/S_T)^{1/2}|\bar{L}^T(\theta) - \bar{L}^T(\theta^0)| \le \frac{1}{T}\sum_{t=1}^{T}\frac{1}{S_T}\sum_{s=t-T}^{t-1}\left|k\left(\frac{s}{S_T}\right)\right||L_{t-s}(\theta) - L_{t-s}(\theta^0)| \qquad (A.1)$$

$$= \frac{1}{T}\sum_{t=1}^{T}|L_t(\theta) - L_t(\theta^0)|\frac{1}{S_T}\sum_{s=t-T}^{t-1}\left|k\left(\frac{s}{S_T}\right)\right| \le C\|\theta - \theta^0\|\frac{1}{T}\sum_{t=1}^{T}L_t,$$

since, for some $0 < C < \infty$, $S_T^{-1}\sum_{s=t-T}^{t-1}|k(s/S_T)| \le O(1) < C$

uniformly in t for T large enough, see Smith (2011, eq. (A.5), p.1218). Next, for some 0 < C < ∞,

    (k₂/S_T)^{1/2} E*[|L̄*_{m_T}(θ) − L̄*_{m_T}(θ₀)|] ≤ m_T^{-1} Σ_{s=1}^{m_T} S_T^{-1} E*[Σ_{r=t_s−T}^{t_s−1} |k(r/S_T)| |L_{t_s−r}(θ) − L_{t_s−r}(θ₀)|]
        = T^{-1} Σ_{t=1}^T |L_t(θ) − L_t(θ₀)| S_T^{-1} Σ_{r=1−t}^{T−t} |k(r/S_T)|
        ≤ C ‖θ − θ₀‖ T^{-1} Σ_{t=1}^T L̇_t.

Hence, by M, for some 0 < C < ∞ uniformly in t for T large enough,

    P*_ω{(k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄*_{m_T}(θ₀)| > ε} ≤ (C/ε) ‖θ − θ₀‖ T^{-1} Σ_{t=1}^T L̇_t.     (A.2)

The remaining part of the proof is identical to Gonçalves and White (2000, Proof of Lemma A.2, pp.30-31) and is given here for completeness; cf. Hall and Horowitz (1996, Proof of Lemma 8, p.913). Given ε > 0, let {η(θ_i, ε), (i = 1, ..., I)} denote a finite subcover of Θ, where η(θ_i, ε) = {θ ∈ Θ : ‖θ − θ_i‖ < ε}, (i = 1, ..., I). Now

    sup_{θ∈Θ} (k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄_T(θ)| = max_{i=1,...,I} sup_{θ∈η(θ_i,ε)} (k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄_T(θ)|.

The argument ω ∈ Ω is omitted for brevity as in Gonçalves and White (2000). It then follows that, for any δ > 0 (and any fixed ω),

    P*_ω{sup_{θ∈Θ} (k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄_T(θ)| > δ} ≤ Σ_{i=1}^I P*_ω{sup_{θ∈η(θ_i,ε)} (k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄_T(θ)| > δ}.

For any θ ∈ η(θ_i, ε), by T and Assumption A.3,

    (k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄_T(θ)| ≤ (k₂/S_T)^{1/2} |L̄*_{m_T}(θ_i) − L̄_T(θ_i)| + (k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄*_{m_T}(θ_i)| + (k₂/S_T)^{1/2} |L̄_T(θ) − L̄_T(θ_i)|.

Hence, for any δ > 0 and ξ > 0,

    P{P*_ω{sup_{θ∈η(θ_i,ε)} (k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄_T(θ)| > δ} > ξ}
        ≤ P{P*_ω{(k₂/S_T)^{1/2} |L̄*_{m_T}(θ_i) − L̄_T(θ_i)| > δ/3} > ξ/3}
        + P{P*_ω{(k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄*_{m_T}(θ_i)| > δ/3} > ξ/3}
        + P{(k₂/S_T)^{1/2} |L̄_T(θ) − L̄_T(θ_i)| > δ/3}.     (A.3)

By Assumption A.1,

    P{P*_ω{(k₂/S_T)^{1/2} |L̄*_{m_T}(θ_i) − L̄_T(θ_i)| > δ/3} > ξ/3} < ξ/3

for T large enough. Also, by M (for fixed ω) and Assumption A.3, noting L̇_t ≥ 0, (t = 1, ..., T), from eq. (A.2),

    P*_ω{(k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄*_{m_T}(θ_i)| > δ/3} ≤ (3C/δ) ‖θ − θ_i‖ T^{-1} Σ_{t=1}^T L̇_t ≤ (3Cε/δ) T^{-1} Σ_{t=1}^T L̇_t

as T^{-1} Σ_{t=1}^T L̇_t satisfies a WLLN under the conditions of the theorem. As a consequence, for any δ > 0 and ξ > 0, for T sufficiently large,

    P{P*_ω{(k₂/S_T)^{1/2} |L̄*_{m_T}(θ) − L̄*_{m_T}(θ_i)| > δ/3} > ξ/3} ≤ P{(3Cε/δ) T^{-1} Σ_{t=1}^T L̇_t > ξ/3}
        = P{T^{-1} Σ_{t=1}^T L̇_t > ξδ/9Cε}
        ≤ (9Cε/ξδ) E[T^{-1} Σ_{t=1}^T L̇_t]
        ≤ 9CεΔ/ξδ < ξ/3

for the choice ε < ξ²δ/27CΔ, where, since, by hypothesis, E[L̇_t] = O(1), the second and third inequalities follow respectively from M and a sufficiently large but finite constant Δ such that sup_t E[L̇_t] ≤ Δ < ∞. Similarly, from eq. (A.1), for any δ > 0 and ξ > 0, by Assumption A.3, P{(k₂/S_T)^{1/2} |L̄_T(θ) − L̄_T(θ_i)| > δ/3} ≤ P{Cε T^{-1} Σ_{t=1}^T L̇_t > δ/3} ≤ 3CεΔ/δ < ξ/3 for T sufficiently large for the choice ε < ξδ/9CΔ. Therefore, from eq. (A.3), the conclusion of the Lemma follows for the choice

    ε = (ξδ/9) max{CΔ, 3CΔ/ξ}^{-1}.

Lemma A.2. (Bootstrap Pointwise WLLN.) Suppose Assumptions 3.1, 3.2(a) and 3.3 are satisfied. Then, if T^{1/α}/m_T → 0 and E[sup_{θ∈Θ} |log f_t(θ)|^α] < ∞ for some α > v, for each θ ∈ Θ ⊆ R^{d_θ}, Θ a compact set,

    (k₂/S_T)^{1/2} [L̄*_{m_T}(θ) − L̄_T(θ)] → 0, prob-P_ω, prob-P.

Proof: The argument θ is suppressed throughout for brevity. First, cf. Gonçalves and White (2004, Proof of Lemma A.5, p.215),

    (k₂/S_T)^{1/2} (L̄*_{m_T} − L̄_T) = (k₂/S_T)^{1/2} (L̄*_{m_T} − E*[L̄*_{m_T}]) − (k₂/S_T)^{1/2} (L̄_T − E*[L̄*_{m_T}]).

Since E*[L̄*_{m_T}] = L̄_T, cf. PS (Section 2.2, pp.2-3), the second term L̄_T − E*[L̄*_{m_T}] is zero. Hence, the result follows if, for any δ > 0 and ξ > 0 and T large enough,

    P{P*_ω{(k₂/S_T)^{1/2} |L̄*_{m_T} − E*[L̄*_{m_T}]| > δ} > ξ} < ξ.

Without loss of generality, set E*[L̄*_{m_T}] = 0. Write K_t = (k₂/S_T)^{1/2} L̄_t = S_T^{-1} Σ_{s=t−T}^{t−1} k(s/S_T) L_{t−s}, (t = 1, ..., T). First, note that

    E*[|K*_{t_s}|] = T^{-1} Σ_{t=1}^T |K_t| ≤ O(1) T^{-1} Σ_{t=1}^T |L_t| = O_p(1),

uniformly, (s = 1, ..., m_T), by a WLLN and E[sup_{θ∈Θ} |log f_t(θ)|^α] < ∞, α > 1. Also, for any δ > 0,

    T^{-1} Σ_{t=1}^T |K_t| − T^{-1} Σ_{t=1}^T |K_t| I(|K_t| < m_T δ) = T^{-1} Σ_{t=1}^T |K_t| I(|K_t| ≥ m_T δ) ≤ T^{-1} Σ_{t=1}^T |K_t| max_t I(|K_t| ≥ m_T δ).

Now, by M,

    max_t |K_t| = O(1) max_t |L_t| = O_p(T^{1/α});

cf. Newey and Smith (2004, Proof of Lemma A1, p.239). Hence, since, by hypothesis, T^{1/α}/m_T = o(1), max_t I(|K_t| ≥ m_T δ) = o_p(1) and Σ_{t=1}^T |K_t|/T = O_p(1),

    E*[|K*_{t_s}| I(|K*_{t_s}| ≥ m_T δ)] = T^{-1} Σ_{t=1}^T |K_t| I(|K_t| ≥ m_T δ) = o_p(1).     (A.4)

The remaining part of the proof is similar to that for Khinchine's WLLN given in Rao (1973). For each s define the pair of random variables

    V*_{t_s} = K*_{t_s} I(|K*_{t_s}| < m_T δ), W*_{t_s} = K*_{t_s} I(|K*_{t_s}| ≥ m_T δ),

yielding K*_{t_s} = V*_{t_s} + W*_{t_s}, (s = 1, ..., m_T). Now

    var*[V*_{t_s}] ≤ E*[V*_{t_s}²] ≤ m_T δ E*[|V*_{t_s}|].     (A.5)

Write V̄*_{m_T} = Σ_{s=1}^{m_T} V*_{t_s}/m_T. Thus, from eq. (A.5), using C,

    P*{|V̄*_{m_T} − E*[V*_{t_s}]| ≥ ε} ≤ var*[V*_{t_s}]/m_T ε² ≤ δ E*[|V*_{t_s}|]/ε².

Also |K̄_T − E*[V*_{t_s}]| = o_p(1), i.e., for any ε > 0 and T large enough, |K̄_T − E*[V*_{t_s}]| ≤ ε, since, by T, noting E*[V*_{t_s}] = Σ_{t=1}^T K_t I(|K_t| < m_T δ)/T,

    |K̄_T − E*[V*_{t_s}]| = |T^{-1} Σ_{t=1}^T K_t − T^{-1} Σ_{t=1}^T K_t I(|K_t| < m_T δ)| ≤ T^{-1} Σ_{t=1}^T |K_t| I(|K_t| ≥ m_T δ) = o_p(1)

from eq. (A.4). Hence, for T large enough,

    P*{|V̄*_{m_T} − K̄_T| ≥ 2ε} ≤ δ E*[|V*_{t_s}|]/ε².     (A.6)

By M,

    P*{W*_{t_s} ≠ 0} = P*{|K*_{t_s}| ≥ m_T δ} ≤ E*[|K*_{t_s}| I(|K*_{t_s}| ≥ m_T δ)]/m_T δ ≤ δ/m_T.     (A.7)

To see this, E*[|K*_{t_s}| I(|K*_{t_s}| ≥ m_T δ)] = o_p(1) from eq. (A.4). Thus, for T large enough, E*[|K*_{t_s}| I(|K*_{t_s}| ≥ m_T δ)] ≤ δ² w.p.a.1. Write W̄*_{m_T} = Σ_{s=1}^{m_T} W*_{t_s}/m_T. Thus, from eq. (A.7),

    P*{W̄*_{m_T} ≠ 0} ≤ Σ_{s=1}^{m_T} P*{W*_{t_s} ≠ 0} ≤ δ.     (A.8)

Therefore,

    P*{|K̄*_{m_T} − K̄_T| ≥ 4ε} ≤ P*{|V̄*_{m_T} − K̄_T| + |W̄*_{m_T}| ≥ 4ε}
        ≤ P*{|V̄*_{m_T} − K̄_T| ≥ 2ε} + P*{|W̄*_{m_T}| ≥ 2ε}
        ≤ δ E*[|V*_{t_s}|]/ε² + P*{W̄*_{m_T} ≠ 0}
        ≤ δ E*[|V*_{t_s}|]/ε² + δ,

where the first inequality follows from T, the third from eq. (A.6) and the final inequality from eq. (A.8). Since δ may be chosen arbitrarily small and E*[|V*_{t_s}|] ≤ E*[|K*_{t_s}|] = O_p(1), the result follows by M.

Lemma A.3. Let Assumptions 3.2(a)(b), 3.3, 3.4 and 3.6(b)(c) hold. Then, if S_T → ∞ and S_T = O(T^{1/2−η}) with 0 < η < 1/2,

    sup_{x∈R} |P*_ω{m_T^{1/2}(dL̄*_{m_T}(θ₀)/dθ − dL̄_T(θ₀)/dθ) ≤ x} − P{T^{1/2} dL̂_T(θ₀)/dθ ≤ x}| → 0, prob-P.

Proof. The result is proven in Steps 1-5 below; cf. Politis and Romano (1992, Proof of Theorem 2). To ease exposition, let m_T = T/S_T be an integer and d_θ = 1.

Step 1. dL̂_T(θ₀)/dθ → 0 prob-P. Follows by White (1984, Theorem 3.47, p.46) and E[dL_t(θ₀)/dθ] = 0.

Step 2. P{B₀^{-1/2} T^{1/2} dL̂_T(θ₀)/dθ ≤ x} → Φ(x), where Φ(·) is the standard normal distribution function. Follows by White (1984, Theorem 5.19, p.124).

Step 3. sup_x |P{B₀^{-1/2} T^{1/2} dL̂_T(θ₀)/dθ ≤ x} − Φ(x)| → 0. Follows by Pólya's Theorem (Serfling, 1980, Theorem 1.5.3, p.18) from Step 2 and the continuity of Φ(·).

Step 4. var*[m_T^{1/2} dL̄*_{m_T}(θ₀)/dθ] → B₀ prob-P. Note E*[dL̄*_{m_T}(θ₀)/dθ] = dL̄_T(θ₀)/dθ. Thus,

    var*[m_T^{1/2} dL̄*_{m_T}(θ₀)/dθ] = var*[dL̄*_{t_s}(θ₀)/dθ]
        = T^{-1} Σ_{t=1}^T (dL̄_t(θ₀)/dθ − dL̄_T(θ₀)/dθ)²
        = T^{-1} Σ_{t=1}^T (dL̄_t(θ₀)/dθ)² − (dL̄_T(θ₀)/dθ)².

The result follows since (dL̄_T(θ₀)/dθ)² = O_p(S_T/T) (Smith, 2011, Lemma A.2, p.1219), S_T = o(T^{1/2}) by hypothesis and T^{-1} Σ_{t=1}^T (dL̄_t(θ₀)/dθ)² → B₀ prob-P (Smith, 2011, Lemma A.3, p.1219).

Step 5. lim_{T→∞} P{sup_x |P*_ω{(dL̄*_{m_T}(θ₀)/dθ − E*[dL̄*_{m_T}(θ₀)/dθ])/var*[dL̄*_{m_T}(θ₀)/dθ]^{1/2} ≤ x} − Φ(x)| ≥ ε} = 0. Applying the Berry-Esséen inequality, Serfling (1980, Theorem 1.9.5, p.33), noting the bootstrap sample observations {dL̄*_{t_s}(θ₀)/dθ}_{s=1}^{m_T} are independent and identically distributed,

    sup_x |P*_ω{m_T^{1/2}(dL̄*_{m_T}(θ₀)/dθ − dL̄_T(θ₀)/dθ)/var*[m_T^{1/2} dL̄*_{m_T}(θ₀)/dθ]^{1/2} ≤ x} − Φ(x)|
        ≤ (C/m_T^{1/2}) var*[dL̄*_{t_s}(θ₀)/dθ]^{-3/2} E*[|dL̄*_{t_s}(θ₀)/dθ − dL̄_T(θ₀)/dθ|³].

Now var*[dL̄*_{t_s}(θ₀)/dθ] → B₀ > 0 prob-P; see the Proof of Step 4 above. Furthermore,

    E*[|dL̄*_{t_s}(θ₀)/dθ − dL̄_T(θ₀)/dθ|³] = T^{-1} Σ_{t=1}^T |dL̄_t(θ₀)/dθ − dL̄_T(θ₀)/dθ|³

and

    T^{-1} Σ_{t=1}^T |dL̄_t(θ₀)/dθ − dL̄_T(θ₀)/dθ|³ ≤ max_t |dL̄_t(θ₀)/dθ − dL̄_T(θ₀)/dθ| T^{-1} Σ_{t=1}^T (dL̄_t(θ₀)/dθ − dL̄_T(θ₀)/dθ)² = O_p(S_T^{1/2} T^{1/α}).

The equality follows since

    max_t |dL̄_t(θ₀)/dθ − dL̄_T(θ₀)/dθ| ≤ max_t |dL̄_t(θ₀)/dθ| + |dL̄_T(θ₀)/dθ| = O_p(S_T^{1/2} T^{1/α}) + O_p((S_T/T)^{1/2}) = O_p(S_T^{1/2} T^{1/α})

by M and Assumption 3.6(b), cf. Newey and Smith (2004, Proof of Lemma A1, p.239), and Σ_{t=1}^T (dL̄_t(θ₀)/dθ − dL̄_T(θ₀)/dθ)²/T = O_p(1), see the Proof of Step 4 above. Therefore

    sup_x |P*_ω{(T/S_T)^{1/2}(dL̄*_{m_T}(θ₀)/dθ − dL̄_T(θ₀)/dθ)/var*[(T/S_T)^{1/2} dL̄*_{m_T}(θ₀)/dθ]^{1/2} ≤ x} − Φ(x)|
        ≤ m_T^{-1/2} O_p(1) O_p(S_T^{1/2} T^{1/α}) = (S_T^{1/2}/m_T^{1/2}) O_p(T^{1/α}) = o_p(1)

by hypothesis, yielding the required conclusion.

Lemma A.4. Suppose that Assumptions 3.2(a)(b), 3.3, 3.4 and 3.6(b)(c) hold. Then, if S_T → ∞ and S_T = O(T^{1/2−η}) with 0 < η < 1/2,

    (k₂/S_T)^{1/2} L̄_T(θ₀) = k₁ L̂_T(θ₀) + o_p(T^{-1/2}).

Proof. Cf. Smith (2011, Proof of Lemma A.2, p.1219). Recall

    (k₂/S_T)^{1/2} L̄_T(θ₀) = S_T^{-1} Σ_{r=1−T}^{T−1} k(r/S_T) T^{-1} Σ_{t=max[1,1−r]}^{min[T,T−r]} L_t(θ₀).

The difference between Σ_{t=max[1,1−r]}^{min[T,T−r]} L_t(θ₀)/T and Σ_{t=1}^T L_t(θ₀)/T consists of |r| terms. By C, using White (1984, Lemma 6.19, p.153),

    P{|T^{-1} Σ_{t∈I_r} L_t(θ₀)| ≥ ε} ≤ (Tε)^{-2} E[(Σ_{t∈I_r} L_t(θ₀))²] = |r| O(T^{-2}),

where I_r denotes the set of the |r| indices involved and the O(T^{-2}) term is independent of r. Therefore, using Smith (2011, Lemma C.1,
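The convergence var*[m_T^{1/2} dL̄*_{m_T}(θ₀)/dθ] → B₀ in Step 4 reflects the HAC property of the kernel-weighted transform: the ordinary sample variance of the transformed data estimates the long-run variance and is nonnegative by construction. The Python sketch below illustrates this on AR(1) data; the Bartlett taper, the bandwidth value, and the helper name `kbb_var` are assumptions made for the illustration, not the authors' code.

```python
import numpy as np

def kbb_var(x, S):
    # Sample variance of the kernel-transformed, centred series. This equals the
    # bootstrap variance var*[m^{1/2} * (mean of m resampled y_t)] and serves as
    # an automatically nonnegative HAC-type estimate of the long-run variance.
    T = len(x)
    k = lambda u: np.maximum(0.0, 1.0 - np.abs(u))          # Bartlett taper (assumed)
    k2_hat = np.sum(k(np.arange(1 - T, T) / S) ** 2) / S
    t = np.arange(T)
    y = (k((t[:, None] - t[None, :]) / S) @ (x - x.mean())) / np.sqrt(k2_hat * S)
    return float(np.mean((y - y.mean()) ** 2))

# AR(1) data with rho = 0.5: the long-run variance, sigma^2/(1 - rho)^2 = 4,
# exceeds the marginal variance sigma^2/(1 - rho^2) ~= 1.33.
rng = np.random.default_rng(1)
rho, T = 0.5, 2000
e = rng.standard_normal(T)
x = np.empty(T)
x[0] = e[0]
for s in range(1, T):
    x[s] = rho * x[s - 1] + e[s]

v = kbb_var(x, S=15)   # should land well above the marginal variance
```

Because v is the sample variance of real numbers, it is nonnegative whatever taper is used, which is the automatic positive semidefiniteness noted for this class of estimators.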

p.1231),

    (k₂/S_T)^{1/2} L̄_T(θ₀) = S_T^{-1} Σ_{r=1−T}^{T−1} k(r/S_T) (L̂_T(θ₀) + |r| O_p(T^{-2}))
        = S_T^{-1} Σ_{s=1−T}^{T−1} k(s/S_T) L̂_T(θ₀) + O_p(T^{-1})
        = (k₁ + o(1)) L̂_T(θ₀) + O_p(T^{-1})
        = k₁ L̂_T(θ₀) + o_p(T^{-1/2}).

Appendix B: Proofs of Results

Proof of Theorem 3.1. Theorem 3.1 follows from a verification of the hypotheses of Gonçalves and White (2004, Lemma A.2, p.212). To do so, replace n by T, Q_n(·, θ) by L̂_T(θ) and Q̃_n(·, ω, θ) by L̄*_{m_T}(ω, θ). Conditions (a1)-(a3), which ensure θ̂_T − θ₀ → 0, prob-P, hold under Assumptions 3.1 and 3.2. To establish θ̂*_T − θ̂_T → 0, prob-P_ω, prob-P, Conditions (b1) and (b2) follow from Assumption 3.1 whereas Condition (b3) is the bootstrap UWL Lemma A.1 which requires Assumption 3.3.

Proof of Theorem 3.2. The structure of the proof is identical to that of Gonçalves and White (2004, Theorem 2.2) for the MBB, requiring the verification of the hypotheses of Gonçalves and White (2004, Lemma A.3, p.212), which together with Pólya's Theorem (Serfling, 1980, Theorem 1.5.3, p.18) and the continuity of Φ(·) gives the result. The maintained Assumptions ensure Theorem 3.1, i.e., θ̂*_T − θ̂_T → 0, prob-P_ω, prob-P, and θ̂_T − θ₀ → 0. The assumptions of the complete probability space (Ω, F, P) and compactness of Θ are stated in Assumptions 3.4(a) and 3.5(a). Conditions (a1) and (a2) follow from Assumptions 3.5(a)(b). Condition (a3), B₀^{-1/2} T^{1/2} ∂L̂_T(θ₀)/∂θ →d N(0, I_{d_θ}), is satisfied under Assumptions 3.4, 3.5(a)(b) and 3.6(b)(c) using the CLT White (1984, Theorem 5.19, p.124); cf. Step 4 in the Proof of Lemma A.3 above. The continuity of A(θ) and the UWL Condition (a4), sup_{θ∈Θ} |∂²L̂_T(θ)/∂θ∂θ′ − A(θ)| → 0, prob-P, follow since the hypotheses of the UWL Newey and McFadden (1994, Lemma 2.4, p.2129) for stationary and mixing (and, thus, ergodic) processes are satisfied under the stated Assumptions. Hence, invoking Assumption 3.6(c), from a mean value expansion of ∂L̂_T(θ̂_T)/∂θ = 0 around θ = θ₀ with θ₀ ∈ int(Θ) from Assumption 3.5(c), T^{1/2}(θ̂_T − θ₀) →d N(0, A₀^{-1} B₀ A₀^{-1}).

Conditions (b1) and (b2) are satisfied under Assumptions 3.5(a)(b) as above. To

verify Condition (b3), write

    m_T^{1/2} ∂L̄*_{m_T}(θ̂_T)/∂θ = m_T^{1/2}(∂L̄*_{m_T}(θ₀)/∂θ − ∂L̄_T(θ₀)/∂θ) + m_T^{1/2} ∂L̄_T(θ₀)/∂θ + m_T^{1/2}(∂L̄*_{m_T}(θ̂_T)/∂θ − ∂L̄*_{m_T}(θ₀)/∂θ).

With Lemma A.3 replacing Gonçalves and White (2002, Theorem 2.2(ii), p.1375), the first term converges in distribution to N(0, B₀), prob-P_ω, prob-P. The sum of the second and third terms converges to 0, prob-P_ω, prob-P. To see this, first, use the mean value theorem for the third term, i.e.,

    m_T^{1/2}(∂L̄*_{m_T}(θ̂_T)/∂θ − ∂L̄*_{m_T}(θ₀)/∂θ) = S_T^{-1/2} (∂²L̄*_{m_T}(θ̄_T)/∂θ∂θ′) T^{1/2}(θ̂_T − θ₀),

where θ̄_T lies on the line segment joining θ̂_T and θ₀. Secondly, (k₂/S_T)^{1/2} ∂²L̄*_{m_T}(θ̄_T)/∂θ∂θ′ → k₁ A₀, prob-P_ω, prob-P, using the bootstrap UWL sup_{θ∈Θ} (k₂/S_T)^{1/2} |∂²L̄*_{m_T}(θ)/∂θ∂θ′ − ∂²L̄_T(θ)/∂θ∂θ′| → 0, prob-P_ω, prob-P, cf. Lemma A.1, and the UWL sup_{θ∈Θ} |(k₂/S_T)^{1/2} ∂²L̄_T(θ)/∂θ∂θ′ − k₁ A(θ)| → 0, prob-P, cf. Remark A.2. Condition (b3) then follows since T^{1/2}(θ̂_T − θ₀) + A₀^{-1} T^{1/2} ∂L̂_T(θ₀)/∂θ → 0, prob-P, and m_T^{1/2} ∂L̄_T(θ₀)/∂θ − (k₁/k₂^{1/2}) T^{1/2} ∂L̂_T(θ₀)/∂θ → 0, prob-P, cf. Lemma A.4. Finally, Condition (b4), sup_{θ∈Θ} (k₂/S_T)^{1/2} |∂²L̄*_{m_T}(θ)/∂θ∂θ′ − ∂²L̄_T(θ)/∂θ∂θ′| → 0, prob-P_ω, prob-P, is the bootstrap UWL Lemma A.1 appropriately revised using Assumption 3.6.

Because θ̂*_T ∈ int(Θ) from Assumption 3.5(c), from a mean value expansion of the first order condition ∂L̄*_{m_T}(θ̂*_T)/∂θ = 0 around θ = θ̂_T,

    T^{1/2}(θ̂*_T − θ̂_T) = −[(k₂/S_T)^{1/2} ∂²L̄*_{m_T}(θ̄*_T)/∂θ∂θ′]^{-1} k₂^{1/2} m_T^{1/2} ∂L̄*_{m_T}(θ̂_T)/∂θ,

where θ̄*_T lies on the line segment joining θ̂*_T and θ̂_T. Noting θ̂*_T − θ̂_T → 0, prob-P_ω, prob-P, and θ̂_T − θ₀ → 0, prob-P, (k₂/S_T)^{1/2} ∂²L̄*_{m_T}(θ̄*_T)/∂θ∂θ′ → k₁ A₀, prob-P_ω, prob-P. Therefore, T^{1/2}(θ̂*_T − θ̂_T) converges in distribution to N(0, (k₂/k₁²) A₀^{-1} B₀ A₀^{-1}), prob-P_ω, prob-P.

References

Anatolyev, S. (2005): GMM, GEL, Serial Correlation, and Asymptotic Bias, Econometrica, 73.

Andrews, D.W.K. (1991): Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation, Econometrica, 59.

Andrews, D.W.K., and Monahan, J.C. (1992): An Improved Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator, Econometrica, 60.

Atkinson, K.E. (1989): An Introduction to Numerical Analysis (2nd ed.). New York: Wiley.

Beran, R. (1988): Prepivoting Test Statistics: a Bootstrap View of Asymptotic Refinements, Journal of the American Statistical Association, 83.

Bickel, P., and Freedman, D. (1981): Some Asymptotic Theory of the Bootstrap, Annals of Statistics, 9.

Brillinger, D.R. (1981): Time Series: Data Analysis and Theory. San Francisco: Holden Day.

Bühlmann, P. (1997): Sieve Bootstrap for Time Series, Bernoulli, 3.

Cattaneo, M.D., Crump, R.K., and Jansson, M. (2010): Bootstrapping Density-Weighted Average Derivatives, CREATES Research Paper, School of Economics and Management, University of Aarhus.

Cohn, D.L. (1980): Measure Theory. Boston: Birkhäuser.

Davidson, J.E.H. (1994): Stochastic Limit Theory. Oxford: Oxford University Press.

Davidson, J., and de Jong, R. (2000): Consistency of Kernel Estimators of Heteroscedastic and Autocorrelated Covariance Matrices, Econometrica, 68.

Doukhan, P. (1994): Mixing: Properties and Examples. Lecture Notes in Statistics 85. New York: Springer.

Efron, B. (1979): Bootstrap Methods: Another Look at the Jackknife, Annals of Statistics, 7.

Epanechnikov, V.A. (1969): Non-parametric Estimation of a Multivariate Probability Density, Theory of Probability and Its Applications, 14.

Fitzenberger, B. (1997): The Moving Blocks Bootstrap and Robust Inference for Linear Least Squares and Quantile Regressions, Journal of Econometrics, 82.

Gallant, R. (1987): Nonlinear Statistical Models. New York: Wiley.

Gonçalves, S., and White, H. (2000): Maximum Likelihood and the Bootstrap for Nonlinear Dynamic Models, Economics Working Paper Series, Department of Economics, University of California at San Diego.

Gonçalves, S., and White, H. (2001): The Bootstrap of the Mean for Dependent Heterogeneous Arrays, CIRANO working paper 2001s-19.

Gonçalves, S., and White, H. (2002): The Bootstrap of the Mean for Dependent Heterogeneous Arrays, Econometric Theory, 18.

Gonçalves, S., and White, H. (2004): Maximum Likelihood and the Bootstrap for Nonlinear Dynamic Models, Journal of Econometrics, 119.

Hahn, J. (1997): A Note on Bootstrapping Generalized Method of Moments Estimators, Econometric Theory, 12.

Hall, P. (1992): The Bootstrap and Edgeworth Expansion. New York: Springer.

Hall, P., and Horowitz, J. (1996): Bootstrap Critical Values for Tests Based on Generalized-Method-of-Moments Estimators, Econometrica, 64.

Hansen, B. (1992): Consistent Covariance Matrix Estimation for Dependent Heterogeneous Processes, Econometrica, 60.

Hidalgo, J. (2003): An Alternative Bootstrap to Moving Blocks for Time Series Regression Models, Journal of Econometrics, 117.

Horowitz, J. (2001): The Bootstrap. In Heckman, J.J., and Leamer, E. (Eds.), Handbook of Econometrics, Vol. 5, Amsterdam: Elsevier.

Hurvich, C., and Zeger, S. (1987): Frequency Domain Bootstrap Methods for Time Series, Statistics and Operations Research Working Paper, New York University, New York.

Kitamura, Y. (1997): Empirical Likelihood Methods with Weakly Dependent Processes, Annals of Statistics, 25.

Kitamura, Y., and Stutzer, M. (1997): An Information-Theoretic Alternative to Generalized Method of Moments Estimation, Econometrica, 65.

Künsch, H. (1989): The Jackknife and the Bootstrap for General Stationary Observations, Annals of Statistics, 17.

Lahiri, S.N. (2003): Resampling Methods for Dependent Data. New York: Springer.

Liu, R., and Singh, K. (1992): Moving Blocks Jackknife and Bootstrap Capture Weak Dependence. In LePage, R., and Billard, L. (Eds.), Exploring the Limits of Bootstrap, New York: Wiley.

Machado, J., and Parente, P. (2005): Bootstrap Estimation of Covariance Matrices via the Percentile Method, Econometrics Journal, 8.

Newey, W.K. (1991): Uniform Convergence in Probability and Stochastic Equicontinuity, Econometrica, 59.

Newey, W.K., and Smith, R.J. (2004): Higher Order Properties of GMM and Generalized Empirical Likelihood Estimators, Econometrica, 72.

Newey, W.K., and West, K. (1987): A Simple, Positive Semi-definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix, Econometrica, 55.

Owen, A. (1988): Empirical Likelihood Ratio Confidence Intervals for a Single Functional, Biometrika, 75.

Owen, A. (2001): Empirical Likelihood. New York: Chapman and Hall.

Paparoditis, E., and Politis, D.N. (2001): Tapered Block Bootstrap, Biometrika, 88.

Paparoditis, E., and Politis, D.N. (2002): The Tapered Block Bootstrap for General Statistics from Stationary Sequences, Econometrics Journal, 5.

Parente, P.M.D.C., and Smith, R.J. (2018): Kernel Block Bootstrap, CWP 48/18, Centre for Microdata Methods and Practice, U.C.L and I.F.S.

Politis, D.N. (1998): Computer-Intensive Methods in Statistical Analysis, IEEE Signal Processing Magazine, 15, No. 1, 39-55, January.

Politis, D.N., and Romano, J. (1992a): A Circular Block-Resampling Procedure for Stationary Data. In LePage, R., and Billard, L. (Eds.), Exploring the Limits of Bootstrap, New York: Wiley.

Politis, D.N., and Romano, J. (1992b): A General Resampling Scheme for Triangular Arrays of α-mixing Random Variables with Application to the Problem of Spectral Density Estimation, Annals of Statistics, 20.

Politis, D.N., and Romano, J. (1994): The Stationary Bootstrap, Journal of the American Statistical Association, 89.

Politis, D.N., Romano, J.P., and Wolf, M. (1997): Subsampling for Heteroskedastic Time Series, Journal of Econometrics, 81.

Politis, D.N., Romano, J.P., and Wolf, M. (1999): Subsampling. New York: Springer.

Priestley, M.B. (1962): Basic Considerations in the Estimation of Spectra, Technometrics, 4.

Priestley, M.B. (1981): Spectral Analysis and Time Series. New York: Academic Press.

Qin, J., and Lawless, J. (1994): Empirical Likelihood and General Estimating Equations, Annals of Statistics, 22.

Rao, C.R. (1973): Linear Statistical Inference and its Applications. Second Edition. New York: Wiley.

Sacks, J., and Ylvisacker, D. (1981): Asymptotically Optimum Kernels for Density Estimation at a Point, Annals of Statistics, 9.

Serfling, R. (2002): Approximation Theorems of Mathematical Statistics. New York: Wiley.

Shao, J., and Tu, D. (1995): The Jackknife and Bootstrap. New York: Springer.

Shi, X., and Shao, J. (1988): Resampling Estimation when the Observations are m-Dependent, Communications in Statistics, A, 17.

Singh, K.N. (1981): On the Asymptotic Accuracy of Efron's Bootstrap, Annals of Statistics, 9.

Singh, K.N., and Xie, M. (2010): Bootstrap - A Statistical Method. In International Encyclopedia of Education, Third Edition, Amsterdam: Elsevier.

Smith, R.J. (1997): Alternative Semi-parametric Likelihood Approaches to Generalised Method of Moments Estimation, Economic Journal, 107.

Smith, R.J. (2005): Automatic Positive Semidefinite HAC Covariance Matrix and GMM Estimation, Econometric Theory, 21.


More information

Optimal Jackknife for Unit Root Models

Optimal Jackknife for Unit Root Models Optimal Jackknife for Unit Root Models Ye Chen and Jun Yu Singapore Management University October 19, 2014 Abstract A new jackknife method is introduced to remove the first order bias in the discrete time

More information

Bootstrapping Long Memory Tests: Some Monte Carlo Results

Bootstrapping Long Memory Tests: Some Monte Carlo Results Bootstrapping Long Memory Tests: Some Monte Carlo Results Anthony Murphy and Marwan Izzeldin University College Dublin and Cass Business School. July 2004 - Preliminary Abstract We investigate the bootstrapped

More information

Bootstrapping Long Memory Tests: Some Monte Carlo Results

Bootstrapping Long Memory Tests: Some Monte Carlo Results Bootstrapping Long Memory Tests: Some Monte Carlo Results Anthony Murphy and Marwan Izzeldin Nu eld College, Oxford and Lancaster University. December 2005 - Preliminary Abstract We investigate the bootstrapped

More information

The generalized method of moments

The generalized method of moments Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna February 2008 Based on the book Generalized Method of Moments by Alastair R. Hall (2005), Oxford

More information

A comparison of four different block bootstrap methods

A comparison of four different block bootstrap methods Croatian Operational Research Review 189 CRORR 5(014), 189 0 A comparison of four different block bootstrap methods Boris Radovanov 1, and Aleksandra Marcikić 1 1 Faculty of Economics Subotica, University

More information

Finite Sample Properties of the Dependent Bootstrap for Conditional Moment Models

Finite Sample Properties of the Dependent Bootstrap for Conditional Moment Models 1 Finite Sample Properties of the Dependent Bootstrap for Conditional Moment Models Rachida Ouysse September 5, 27 Abstract This paper assesses the finite sample refinements of the block bootstrap and

More information

Generalized Neyman Pearson optimality of empirical likelihood for testing parameter hypotheses

Generalized Neyman Pearson optimality of empirical likelihood for testing parameter hypotheses Ann Inst Stat Math (2009) 61:773 787 DOI 10.1007/s10463-008-0172-6 Generalized Neyman Pearson optimality of empirical likelihood for testing parameter hypotheses Taisuke Otsu Received: 1 June 2007 / Revised:

More information

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference 1 / 171 Bootstrap inference Francisco Cribari-Neto Departamento de Estatística Universidade Federal de Pernambuco Recife / PE, Brazil email: cribari@gmail.com October 2013 2 / 171 Unpaid advertisement

More information

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference 1 / 172 Bootstrap inference Francisco Cribari-Neto Departamento de Estatística Universidade Federal de Pernambuco Recife / PE, Brazil email: cribari@gmail.com October 2014 2 / 172 Unpaid advertisement

More information

1. GENERAL DESCRIPTION

1. GENERAL DESCRIPTION Econometrics II SYLLABUS Dr. Seung Chan Ahn Sogang University Spring 2003 1. GENERAL DESCRIPTION This course presumes that students have completed Econometrics I or equivalent. This course is designed

More information

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University The Bootstrap: Theory and Applications Biing-Shen Kuo National Chengchi University Motivation: Poor Asymptotic Approximation Most of statistical inference relies on asymptotic theory. Motivation: Poor

More information

Robust Performance Hypothesis Testing with the Variance. Institute for Empirical Research in Economics University of Zurich

Robust Performance Hypothesis Testing with the Variance. Institute for Empirical Research in Economics University of Zurich Institute for Empirical Research in Economics University of Zurich Working Paper Series ISSN 1424-0459 Working Paper No. 516 Robust Performance Hypothesis Testing with the Variance Olivier Ledoit and Michael

More information

Asymptotic inference for a nonstationary double ar(1) model

Asymptotic inference for a nonstationary double ar(1) model Asymptotic inference for a nonstationary double ar() model By SHIQING LING and DONG LI Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong maling@ust.hk malidong@ust.hk

More information

A New Approach to Robust Inference in Cointegration

A New Approach to Robust Inference in Cointegration A New Approach to Robust Inference in Cointegration Sainan Jin Guanghua School of Management, Peking University Peter C. B. Phillips Cowles Foundation, Yale University, University of Auckland & University

More information

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS Olivier Scaillet a * This draft: July 2016. Abstract This note shows that adding monotonicity or convexity

More information

Inference in VARs with Conditional Heteroskedasticity of Unknown Form

Inference in VARs with Conditional Heteroskedasticity of Unknown Form Inference in VARs with Conditional Heteroskedasticity of Unknown Form Ralf Brüggemann a Carsten Jentsch b Carsten Trenkler c University of Konstanz University of Mannheim University of Mannheim IAB Nuremberg

More information

Does k-th Moment Exist?

Does k-th Moment Exist? Does k-th Moment Exist? Hitomi, K. 1 and Y. Nishiyama 2 1 Kyoto Institute of Technology, Japan 2 Institute of Economic Research, Kyoto University, Japan Email: hitomi@kit.ac.jp Keywords: Existence of moments,

More information

Atsushi Inoue and Mototsugu Shintani

Atsushi Inoue and Mototsugu Shintani BOOSRAPPING GMM ESIMAORS FOR IME SERIES by Atsushi Inoue and Mototsugu Shintani Working Paper No. 0-W29R December 200 Revised August 2003 DEPARMEN OF ECONOMICS VANDERBIL UNIVERSIY NASHVILLE, N 37235 www.vanderbilt.edu/econ

More information

Darmstadt Discussion Papers in Economics

Darmstadt Discussion Papers in Economics Darmstadt Discussion Papers in Economics The Effect of Linear Time Trends on Cointegration Testing in Single Equations Uwe Hassler Nr. 111 Arbeitspapiere des Instituts für Volkswirtschaftslehre Technische

More information

Heteroskedasticity-Robust Inference in Finite Samples

Heteroskedasticity-Robust Inference in Finite Samples Heteroskedasticity-Robust Inference in Finite Samples Jerry Hausman and Christopher Palmer Massachusetts Institute of Technology December 011 Abstract Since the advent of heteroskedasticity-robust standard

More information

A Bootstrap Test for Conditional Symmetry

A Bootstrap Test for Conditional Symmetry ANNALS OF ECONOMICS AND FINANCE 6, 51 61 005) A Bootstrap Test for Conditional Symmetry Liangjun Su Guanghua School of Management, Peking University E-mail: lsu@gsm.pku.edu.cn and Sainan Jin Guanghua School

More information

BOOTSTRAPPING DIFFERENCES-IN-DIFFERENCES ESTIMATES

BOOTSTRAPPING DIFFERENCES-IN-DIFFERENCES ESTIMATES BOOTSTRAPPING DIFFERENCES-IN-DIFFERENCES ESTIMATES Bertrand Hounkannounon Université de Montréal, CIREQ December 2011 Abstract This paper re-examines the analysis of differences-in-differences estimators

More information

Department of Economics, UCSD UC San Diego

Department of Economics, UCSD UC San Diego Department of Economics, UCSD UC San Diego itle: Spurious Regressions with Stationary Series Author: Granger, Clive W.J., University of California, San Diego Hyung, Namwon, University of Seoul Jeon, Yongil,

More information

TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST

TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST Econometrics Working Paper EWP0402 ISSN 1485-6441 Department of Economics TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST Lauren Bin Dong & David E. A. Giles Department

More information

Comparison of inferential methods in partially identified models in terms of error in coverage probability

Comparison of inferential methods in partially identified models in terms of error in coverage probability Comparison of inferential methods in partially identified models in terms of error in coverage probability Federico A. Bugni Department of Economics Duke University federico.bugni@duke.edu. September 22,

More information

Testing Overidentifying Restrictions with Many Instruments and Heteroskedasticity

Testing Overidentifying Restrictions with Many Instruments and Heteroskedasticity Testing Overidentifying Restrictions with Many Instruments and Heteroskedasticity John C. Chao, Department of Economics, University of Maryland, chao@econ.umd.edu. Jerry A. Hausman, Department of Economics,

More information

Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods

Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods Chapter 4 Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods 4.1 Introduction It is now explicable that ridge regression estimator (here we take ordinary ridge estimator (ORE)

More information

Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences

Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences Confidence Intervals for the Autocorrelations of the Squares of GARCH Sequences Piotr Kokoszka 1, Gilles Teyssière 2, and Aonan Zhang 3 1 Mathematics and Statistics, Utah State University, 3900 Old Main

More information

Estimation and Inference with Weak Identi cation

Estimation and Inference with Weak Identi cation Estimation and Inference with Weak Identi cation Donald W. K. Andrews Cowles Foundation Yale University Xu Cheng Department of Economics University of Pennsylvania First Draft: August, 2007 Revised: March

More information

The Role of "Leads" in the Dynamic Title of Cointegrating Regression Models. Author(s) Hayakawa, Kazuhiko; Kurozumi, Eiji

The Role of Leads in the Dynamic Title of Cointegrating Regression Models. Author(s) Hayakawa, Kazuhiko; Kurozumi, Eiji he Role of "Leads" in the Dynamic itle of Cointegrating Regression Models Author(s) Hayakawa, Kazuhiko; Kurozumi, Eiji Citation Issue 2006-12 Date ype echnical Report ext Version publisher URL http://hdl.handle.net/10086/13599

More information

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions Economics Division University of Southampton Southampton SO17 1BJ, UK Discussion Papers in Economics and Econometrics Title Overlapping Sub-sampling and invariance to initial conditions By Maria Kyriacou

More information

HAR Inference: Recommendations for Practice

HAR Inference: Recommendations for Practice HAR Inference: Recommendations for Practice Eben Lazarus, Harvard Daniel Lewis, Harvard Mark Watson, Princeton & NBER James H. Stock, Harvard & NBER JBES Invited Session Sunday, Jan. 7, 2018, 1-3 pm Marriott

More information

The Size-Power Tradeoff in HAR Inference

The Size-Power Tradeoff in HAR Inference he Size-Power radeoff in HAR Inference June 6, 07 Eben Lazarus Department of Economics, Harvard University Daniel J. Lewis Department of Economics, Harvard University and James H. Stock* Department of

More information

A Fixed-b Perspective on the Phillips-Perron Unit Root Tests

A Fixed-b Perspective on the Phillips-Perron Unit Root Tests A Fixed-b Perspective on the Phillips-Perron Unit Root Tests Timothy J. Vogelsang Department of Economics Michigan State University Martin Wagner Department of Economics and Finance Institute for Advanced

More information

RESIDUAL-BASED BLOCK BOOTSTRAP FOR UNIT ROOT TESTING. By Efstathios Paparoditis and Dimitris N. Politis 1

RESIDUAL-BASED BLOCK BOOTSTRAP FOR UNIT ROOT TESTING. By Efstathios Paparoditis and Dimitris N. Politis 1 Econometrica, Vol. 71, No. 3 May, 2003, 813 855 RESIDUAL-BASED BLOCK BOOTSTRAP FOR UNIT ROOT TESTING By Efstathios Paparoditis and Dimitris N. Politis 1 A nonparametric, residual-based block bootstrap

More information

FIXED-b ASYMPTOTICS FOR BLOCKWISE EMPIRICAL LIKELIHOOD

FIXED-b ASYMPTOTICS FOR BLOCKWISE EMPIRICAL LIKELIHOOD Statistica Sinica 24 (214), - doi:http://dx.doi.org/1.575/ss.212.321 FIXED-b ASYMPTOTICS FOR BLOCKWISE EMPIRICAL LIKELIHOOD Xianyang Zhang and Xiaofeng Shao University of Missouri-Columbia and University

More information

Bahadur representations for bootstrap quantiles 1

Bahadur representations for bootstrap quantiles 1 Bahadur representations for bootstrap quantiles 1 Yijun Zuo Department of Statistics and Probability, Michigan State University East Lansing, MI 48824, USA zuo@msu.edu 1 Research partially supported by

More information

Economics 583: Econometric Theory I A Primer on Asymptotics

Economics 583: Econometric Theory I A Primer on Asymptotics Economics 583: Econometric Theory I A Primer on Asymptotics Eric Zivot January 14, 2013 The two main concepts in asymptotic theory that we will use are Consistency Asymptotic Normality Intuition consistency:

More information

Follow links for Class Use and other Permissions. For more information send to:

Follow links for Class Use and other Permissions. For more information send  to: COPYRIGH NOICE: Kenneth J. Singleton: Empirical Dynamic Asset Pricing is published by Princeton University Press and copyrighted, 00, by Princeton University Press. All rights reserved. No part of this

More information

Understanding Regressions with Observations Collected at High Frequency over Long Span

Understanding Regressions with Observations Collected at High Frequency over Long Span Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University

More information

A note on the empirics of the neoclassical growth model

A note on the empirics of the neoclassical growth model Manuscript A note on the empirics of the neoclassical growth model Giovanni Caggiano University of Glasgow Leone Leonida Queen Mary, University of London Abstract This paper shows that the widely used

More information

Aggregation of Spectral Density Estimators

Aggregation of Spectral Density Estimators Aggregation of Spectral Density Estimators Christopher Chang Department of Mathematics, University of California, San Diego, La Jolla, CA 92093-0112, USA; chrchang@alumni.caltech.edu Dimitris Politis Department

More information

Stabilizing a GMM Bootstrap for Time Series: A Simulation Study

Stabilizing a GMM Bootstrap for Time Series: A Simulation Study 摂南経済研究第 1 巻第 1 2 号 (2011),19-37ページ Stabilizing a GMM Bootstrap for ime Series: A Simulation Study 研究論文 Stabilizing a GMM Bootstrap for ime Series: A Simulation Study Masayuki Hirukawa Abstract Inoue and

More information

Optimal Bandwidth Selection in Heteroskedasticity-Autocorrelation Robust Testing

Optimal Bandwidth Selection in Heteroskedasticity-Autocorrelation Robust Testing Optimal Bandwidth Selection in Heteroskedasticity-Autocorrelation Robust Testing Yixiao Sun Department of Economics University of California, San Diego Peter C. B. Phillips Cowles Foundation, Yale University,

More information

Uniform confidence bands for kernel regression estimates with dependent data IN MEMORY OF F. MARMOL

Uniform confidence bands for kernel regression estimates with dependent data IN MEMORY OF F. MARMOL . Uniform confidence bands for kernel regression estimates with dependent data J. Hidalgo London School of Economics IN MEMORY OF F. MARMOL AIMS AND ESTIMATION METHODOLOGY CONDITIONS RESULTS BOOTSTRAP

More information

University of California San Diego and Stanford University and

University of California San Diego and Stanford University and First International Workshop on Functional and Operatorial Statistics. Toulouse, June 19-21, 2008 K-sample Subsampling Dimitris N. olitis andjoseph.romano University of California San Diego and Stanford

More information

The wild tapered block bootstrap. Ulrich Hounyo. CREATES Research Paper

The wild tapered block bootstrap. Ulrich Hounyo. CREATES Research Paper The wild tapered block bootstrap Ulrich Hounyo CREATES Research Paper 04-3 Department of Economics and Business Aarhus University Fuglesangs Allé 4 DK-80 Aarhus V Denmark Email: oekonomi@au.dk Tel: +45

More information

Diagnostics for the Bootstrap and Fast Double Bootstrap

Diagnostics for the Bootstrap and Fast Double Bootstrap Diagnostics for the Bootstrap and Fast Double Bootstrap Department of Economics and CIREQ McGill University Montréal, Québec, Canada H3A 2T7 by Russell Davidson russell.davidson@mcgill.ca AMSE-GREQAM Centre

More information

Bootstrap. Director of Center for Astrostatistics. G. Jogesh Babu. Penn State University babu.

Bootstrap. Director of Center for Astrostatistics. G. Jogesh Babu. Penn State University  babu. Bootstrap G. Jogesh Babu Penn State University http://www.stat.psu.edu/ babu Director of Center for Astrostatistics http://astrostatistics.psu.edu Outline 1 Motivation 2 Simple statistical problem 3 Resampling

More information

Economic modelling and forecasting

Economic modelling and forecasting Economic modelling and forecasting 2-6 February 2015 Bank of England he generalised method of moments Ole Rummel Adviser, CCBS at the Bank of England ole.rummel@bankofengland.co.uk Outline Classical estimation

More information

A New Test in Parametric Linear Models with Nonparametric Autoregressive Errors

A New Test in Parametric Linear Models with Nonparametric Autoregressive Errors A New Test in Parametric Linear Models with Nonparametric Autoregressive Errors By Jiti Gao 1 and Maxwell King The University of Western Australia and Monash University Abstract: This paper considers a

More information

Simulating Properties of the Likelihood Ratio Test for a Unit Root in an Explosive Second Order Autoregression

Simulating Properties of the Likelihood Ratio Test for a Unit Root in an Explosive Second Order Autoregression Simulating Properties of the Likelihood Ratio est for a Unit Root in an Explosive Second Order Autoregression Bent Nielsen Nuffield College, University of Oxford J James Reade St Cross College, University

More information

Time Series and Forecasting Lecture 4 NonLinear Time Series

Time Series and Forecasting Lecture 4 NonLinear Time Series Time Series and Forecasting Lecture 4 NonLinear Time Series Bruce E. Hansen Summer School in Economics and Econometrics University of Crete July 23-27, 2012 Bruce Hansen (University of Wisconsin) Foundations

More information

Spring 2017 Econ 574 Roger Koenker. Lecture 14 GEE-GMM

Spring 2017 Econ 574 Roger Koenker. Lecture 14 GEE-GMM University of Illinois Department of Economics Spring 2017 Econ 574 Roger Koenker Lecture 14 GEE-GMM Throughout the course we have emphasized methods of estimation and inference based on the principle

More information

Estimation of Dynamic Regression Models

Estimation of Dynamic Regression Models University of Pavia 2007 Estimation of Dynamic Regression Models Eduardo Rossi University of Pavia Factorization of the density DGP: D t (x t χ t 1, d t ; Ψ) x t represent all the variables in the economy.

More information

Bootstrap Methods in Econometrics

Bootstrap Methods in Econometrics Bootstrap Methods in Econometrics Department of Economics McGill University Montreal, Quebec, Canada H3A 2T7 by Russell Davidson email: russell.davidson@mcgill.ca and James G. MacKinnon Department of Economics

More information

Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance

Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance Advances in Decision Sciences Volume, Article ID 893497, 6 pages doi:.55//893497 Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance Junichi Hirukawa and

More information

Adjusted Empirical Likelihood for Long-memory Time Series Models

Adjusted Empirical Likelihood for Long-memory Time Series Models Adjusted Empirical Likelihood for Long-memory Time Series Models arxiv:1604.06170v1 [stat.me] 21 Apr 2016 Ramadha D. Piyadi Gamage, Wei Ning and Arjun K. Gupta Department of Mathematics and Statistics

More information