Nonparametric regression with rescaled time series errors

José E. Figueroa-López, Michael Levine

Abstract. We consider a heteroscedastic nonparametric regression model with an autoregressive error process of finite known order p. The heteroscedasticity is incorporated using a scaling function defined at uniformly spaced design points on the interval [0,1]. We provide an innovative nonparametric estimator of the variance function and establish its consistency and asymptotic normality. We also propose a semiparametric estimator for the vector of autoregressive error process coefficients that is consistent and asymptotically normal as the sample size T → ∞. An explicit asymptotic variance-covariance matrix is obtained as well. Finally, the finite-sample performance of the proposed method is tested in simulations.

Keywords and phrases: autoregressive error process; heteroscedasticity; semiparametric estimators; difference-based estimation approach.

1 Introduction

In this manuscript, we consider the estimation of a time series process with a time-dependent conditional variance function and serially dependent errors. More precisely, we assume that there are observations {(x_t, y_t)}_{t ∈ {1,...,T}} available that have been generated by the following model:

    y_t = σ_t v_t,   σ_t = σ(x_t),                                   (1.1)
    v_t = Σ_{j=1}^{p} φ_j v_{t−j} + ε_t,                             (1.2)

where {ε_t : −∞ < t < ∞} are independent identically distributed with mean 0 and variance 1, while the autoregressive order is a fixed known integer p > 0. We also assume that the x_t's form an increasing, equally spaced sequence on the interval [0,1]. Then, the model (1.1)-(1.2) can be viewed as a nonparametric regression model with the mean function identically equal to zero and scaled autoregressive time series errors v_t. The simplest case p = 1 was treated earlier in Dahl and Levine (2006). Another interpretation of this model would be as a type of functional autoregressive (FAR) model; these were first introduced in Chen and Tsay (1993). Indeed, the process y_t can be re-expressed as

    y_t = Σ_{j=1}^{p} φ_j (σ_t/σ_{t−j}) y_{t−j} + σ_t ε_t,           (1.3)

with functional coefficients φ_j σ_t σ_{t−j}^{-1}, j = 1,...,p.

Nonparametric regression models with autoregressive time series errors have a long history. Hall and Van Keilegom (2003) and, more recently, Shao and Yang (2011) considered a general nonparametric model of the form

    y_t = μ_t + σ_t v_t,                                             (1.4)

with a regression mean process μ_t given by μ_t = g(x_t) for a smooth trend function g, an error process {v_t : −∞ < t < ∞} given as in (1.2), and an error scaling function σ_t given by σ_t ≡ σ, for a (possibly unknown) constant σ. Note that our model has a more general correlation structure since it rescales the AR(p) error process by the conditional variance of the unobserved process, σ(x_t). Phillips and Xu (2005) also considered the model (1.4) with a general scaling function σ_t = σ(x_t), but with the autoregressive mean process structure μ_t = θ_0 + θ_1 y_{t−1} + ... + θ_q y_{t−q} and with a zero-mean martingale difference sequence as the error process {v_t : −∞ < t < ∞}.

As mentioned above, the model considered in this manuscript is an extension of that studied in Dahl and Levine (2006), who treated the particular case p = 1. In the present manuscript, we provide an innovative nonparametric estimator of the variance function σ²(x) and establish its consistency and asymptotic normality. We also propose a semiparametric estimator for the autoregressive error process coefficients (φ_1,...,φ_p), which is consistent and asymptotically normal. The estimators of the error covariance structure (determined by σ(x_t) and (φ_1,...,φ_p)) are needed, for example, in order to estimate the variance of the regression mean function μ_t. Another possible application is in using bootstrap methods when constructing confidence bands for the regression mean. For some discussion on this subject see, for example, Hall and Van Keilegom (2003). Our results can also be used for testing the martingale difference sequence hypothesis H_0: φ_1 = ... = φ_p = 0, which is often uncritically adopted in financial time series, against a fairly general alternative.

Our estimation approach is based on the two-lag difference statistics (pseudoresiduals):

    η_t = y_t − y_{t−2}.                                             (1.5)

Both Hall and Van Keilegom (2003) and Dahl and Levine (2006) also use two-lag difference statistics, but the method in the former paper does not apply in the presence of a non-constant scaling function σ(x_t) (see Remark 3.5 below), while the method in the second paper relies heavily on the autocorrelation properties of an AR(1) process and, thus, cannot deal with higher-order autocorrelation models for {v_t : −∞ < t < ∞} (see the justification in the paragraph after Eq. (2.4)). For simplicity, we mostly focus on the case of a regression mean μ_t which is identically equal to zero, since our main goal is the estimation of the covariance structure of the process (1.1)-(1.2). Nevertheless, we also analyze the effect of a nonzero mean function and propose a natural correction method in this case (see the end of Section 3 for more details). We also show that the addition of a sufficiently smooth non-zero mean function has no effect on the asymptotic properties of φ̂. This fact is explained in more detail in Corollary 3.6; see also Hall and Van Keilegom (2003) for some additional heuristics on this subject.

The rest of the paper is organized as follows. In Section 2, we present our estimation approach. The consistency and central limit theorem for the estimators of the autoregressive coefficients φ_1 and φ_2 are given in Section 3. The analogous results for the estimator of the variance function σ²(x) are presented in Section 4. A Monte Carlo simulation study and a real data application of our estimators are then presented in Section 5. In Section 6, we conclude the paper with a discussion of extensions of our method and some interesting open problems. The proofs of the main results are given in the Appendix.
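Although the paper contains no code, the data-generating process (1.1)-(1.2) is straightforward to simulate, which is convenient for replicating the experiments of Section 5. The sketch below is only an illustration under stated assumptions: the innovations are taken Gaussian, a burn-in period is used to approximate stationarity of v_t, and the function names and the example scale function are ours, not the authors'.

```python
import numpy as np

def simulate_model(T, phi, sigma_fn, seed=0, burn=500):
    """Draw y_t = sigma(x_t) v_t on the fixed design x_t = t/T, where v_t is a causal
    AR(p) process driven by i.i.d. innovations (taken N(0,1) here for illustration)."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    eps = rng.standard_normal(T + burn)
    v = np.zeros(T + burn)
    for t in range(p, T + burn):
        v[t] = sum(phi[j] * v[t - 1 - j] for j in range(p)) + eps[t]
    v = v[burn:]                      # discard burn-in so v_t is close to stationary
    x = np.arange(1, T + 1) / T
    return x, sigma_fn(x) * v

# Example: an AR(2) error process with an illustrative smooth scale function on [0, 1].
x, y = simulate_model(1000, phi=[0.6, 0.3], sigma_fn=lambda x: np.sqrt(0.5 * x + 0.1))
```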
2 Estimation method based on two-lag differences

In this section, we present our estimation approach. For simplicity, we first illustrate our method for p = 2; however, it will work for any finite order p, as we explain at the end of the present section. As mentioned in the introduction, Dahl and Levine (2006) proposed an estimation method for the model (1.1)-(1.2) in the simple case p = 1 based on the two-lag difference statistics η_t defined in (1.5). In order to explain in simple terms the intuition behind our method, we recall the approach therein.

Suppose for now that the scaling function σ_t is constant; i.e. σ_t ≡ σ. We denote by γ_k the autocovariance of the error process v_t at lag k. Then, note that η_t² = σ²(v_t² + v_{t−2}² − 2 v_t v_{t−2}) and, therefore,

    E η_t² = 2σ²(γ_0 − γ_2) = 2σ²,                                   (2.1)

because, under the AR(1) specification, γ_1 = φ_1 γ_0, γ_2 = φ_1 γ_1 = φ_1² γ_0 and γ_0 = 1/(1 − φ_1²), so that γ_0 − γ_2 = 1. It is now intuitive that η_t² can be used to develop a consistent estimator for a non-constant function σ_t² as well. Indeed, in the case of non-constant σ_t and under a fixed design on the unit interval (i.e., x_t := t/T for t = 1,...,T), we have

    E η_t² = σ_t² γ_0 + σ_{t−2}² γ_0 − 2 σ_t σ_{t−2} γ_2.

Simple heuristics suggest that the above expression can be accurately approximated by 2σ_t² in large samples for sufficiently large T. This, in turn, suggests turning the original problem (1.1) into a non-parametric regression

    η_t² = 2σ²(x_t) + ε̃_t,                                          (2.2)

where {ε̃_t}_{t=3,...,T} are approximately centered random errors. Dahl and Levine (2006) used a local linear estimator σ̃_t² to estimate σ_t² := σ²(x_t), while the parameter φ_1 was estimated using a weighted least squares estimator (WLSE). More specifically, noting that (1.3) with φ_2 = ... = φ_p = 0 implies

    σ_t^{-1} y_t = φ_1 σ_{t−1}^{-1} y_{t−1} + ε_t,   t = 2,...,T,    (2.3)

it follows that a natural estimator for φ_1 is given by

    φ̂_1 := arg min_{φ_1 ∈ (−1,1)} Σ_{t=2}^{T} (σ̂_t^{-1} y_t − φ_1 σ̂_{t−1}^{-1} y_{t−1})²
         = ( Σ_{t=2}^{T} σ̂_{t−1}^{-2} y_{t−1}² )^{-1} ( Σ_{t=2}^{T} σ̂_t^{-1} σ̂_{t−1}^{-1} y_t y_{t−1} ),   (2.4)

where {σ̂_t²}_{t=1,...,T} is the local linear estimator of σ²(x_t) based on the non-parametric model (2.2).

The approach described in the previous paragraph will not work for any p > 1 because, in that case, γ_0 − γ_2 ≠ 1 and, therefore, even a constant variance function σ² cannot be estimated consistently. A first natural idea to extend the method in Dahl and Levine (2006) is to look for a linear statistic

    η_t := Σ_{i=0}^{m} a_i y_{t−i},                                  (2.5)

such that E η_t² ≈ 2σ_t² for a sufficiently large sample size. The following result shows that this is essentially impossible even for the simplest AR(1) case. The impossibility for a general AR(p) model immediately follows since the AR(1) model can obviously be seen as a degenerate case of the general AR(p) model. The proof of the following result is deferred to the appendix.

Proposition 2.1. Consider again the case where σ_t² ≡ σ² is constant and the error process is AR(1) (i.e., φ_2 = ... = φ_p = 0 in (1.1)-(1.2)). Then, if Eη_t² = 2σ² for any φ_1 ∈ (−1,1), there exists 0 ≤ k ≤ m−2 such that a_k = ±1, a_{k+2} = ∓1, and a_i = 0 for i ≠ k, k+2.

The previous result shows that the only linear statistic (2.5) with a_0 ≠ 0 that can result in Eη_t² being independent of φ_1 is the two-lag difference statistic η_t = y_t − y_{t−2}. Now we show that the expectation of η_t² is indeed independent of φ_1 and introduce our new method to estimate the model (1.1)-(1.2) with an AR(2) error process. Recall that (see, for example, Brockwell and Davis (1991)) for a stationary AR(2) process v_t, its variance and autocovariances are:

    γ_0 := Var(v_k) = (1 − φ_2) / [ (1 + φ_2)((1 − φ_2)² − φ_1²) ],  (2.6)
    γ_1 := Cov(v_k, v_{k+1}) = φ_1 γ_0 / (1 − φ_2),                  (2.7)
    γ_j := Cov(v_k, v_{k+j}) = φ_1 γ_{j−1} + φ_2 γ_{j−2},  for any k ∈ N, j ≥ 2.   (2.8)

Therefore, the mean of the squared pseudoresidual of the second order, η_t², can be simplified as follows under the assumption of a constant variance function σ_t ≡ σ:

    E η_t² = 2σ²(γ_0 − γ_2) = 2σ²/(1 + φ_2).

Indeed, using (2.6)-(2.7),

    γ_2 = φ_1 γ_1 + φ_2 γ_0 = ( φ_1²/(1 − φ_2) + φ_2 ) γ_0 = [ φ_1² + (1 − φ_2)φ_2 ]/(1 − φ_2) · γ_0,

    γ_0 − γ_2 = γ_0 ( 1 − [ φ_1² + (1 − φ_2)φ_2 ]/(1 − φ_2) ) = γ_0 [ (1 − φ_2)² − φ_1² ]/(1 − φ_2) = 1/(1 + φ_2).

As before, for a general smooth enough function σ_t² and under a fixed design on the unit interval (x_t = t/T, t = 1,...,T) with T large enough, we expect that

    E η_t² ≈ 2σ_t²/(1 + φ_2),

and, hence, we expect to estimate σ_t² correctly up to a constant. It turns out that this suffices to estimate the parameters φ_1 and φ_2 via weighted least squares (WLSE). Indeed, suppose for now that we know the variance function σ_t² and let ȳ_t := σ_t^{-1} y_t. In light of the relationship (1.3), it would then be possible to estimate (φ_1, φ_2) by the WLSE:

    (φ̃_1, φ̃_2) := arg min_{φ_1, φ_2} Σ_t (ȳ_t − φ_1 ȳ_{t−1} − φ_2 ȳ_{t−2})².

Basic differentiation leads to the following system of normal equations:

    −Σ ȳ_t ȳ_{t−1} + φ_1 Σ ȳ_{t−1}² + φ_2 Σ ȳ_{t−1} ȳ_{t−2} = 0,
    −Σ ȳ_t ȳ_{t−2} + φ_1 Σ ȳ_{t−1} ȳ_{t−2} + φ_2 Σ ȳ_{t−2}² = 0.

Ignoring negligible edge effects (so that Σ ȳ_t ȳ_{t−1} ≈ Σ ȳ_{t−1} ȳ_{t−2} and Σ ȳ_{t−1}² ≈ Σ ȳ_{t−2}² ≈ Σ ȳ_t²), we can write the above system as

    Ā φ_1 + B̄ φ_2 − B̄ = 0,
    B̄ φ_1 + Ā φ_2 − C̄ = 0,

with

    Ā := Σ ȳ_t²,   B̄ := Σ ȳ_t ȳ_{t−1},   C̄ := Σ ȳ_t ȳ_{t−2}.

We finally obtain

    φ̃_2 := (Ā² − B̄²)^{-1} (Ā C̄ − B̄²),   φ̃_1 = Ā^{-1} B̄ (1 − φ̃_2).   (2.9)

Obviously, these estimators are not feasible since σ_t is unknown. However, we note that these estimators will not change if, instead of σ_t in the definition of ȳ_t, we use cσ_t, where c is an arbitrary constant that is independent of t. This fact suggests the following two-step estimation method for the scaling function σ_t = σ(x_t) and the autocorrelation coefficients (φ_1, φ_2):

Algorithm 2.1 (AR(2) case).

1. First, estimate the function

    σ_t^{2,bias} := 2σ²(x_t)/(1 + φ_2),                              (2.10)

by a non-parametric smoothing method (e.g. local linear regression) applied to the squared two-lag difference statistics η_t² defined via (1.5). Let σ̃_t² be the resulting estimator.

2. Standardize the observations, ỹ_t := σ̃_t^{-1} y_t, and then estimate (φ_1, φ_2) via the WLSE:

    φ̂_2 := (A² − B²)^{-1} (AC − B²),   φ̂_1 = A^{-1} B (1 − φ̂_2),    (2.11)

with A := Σ ỹ_t², B := Σ ỹ_t ỹ_{t−1}, C := Σ ỹ_t ỹ_{t−2}.

3. Estimate σ_t² := σ²(x_t) by

    σ̂_t² := (1 + φ̂_2) σ̃_t² / 2.                                     (2.12)

The same method can easily be extended to the case of an arbitrary autoregressive error process AR(p) with p > 2. Indeed, let φ_1,...,φ_p be the coefficients of the AR(p) error process. It can be shown that, for large T, the expectation of the squared pseudoresidual of order 2, η_t², is approximately equal to a scaled value of σ_t²:

    E η_t² ≈ Ψ σ_t²,

where the scaling constant Ψ ≡ Ψ(φ_1,...,φ_p) is an explicit function of φ_1,...,φ_p. As in the case of p = 2, since the scaling constant does not depend on the variance function, the following natural extension of the procedure follows:

6 Algorithm. (General AR(p) case).. Obtain an estimate of the scaled variance function σ,bias t := σ,bias (x t ) := Ψ(φ,..., φ p )σ (x t ), by using a non-parametric smoothing method (e.g., local linear regression) applied to η t. As before, let σ t = σ (x t ) be the resulting estimator.. Standardize the observations ỹ t := σ t y t and then estimate (φ,..., φ p ) using the weighted least squares (WLSE): ( ˆφ,..., ˆφ p ) := arg min φ,...,φ p t=p+ (ỹ t φ ỹ t... φ p ỹ t p ). 3. Estimate σ t := σ (x t ) by ˆσ t := Ψ( ˆφ,..., ˆφ p ) σ t. (.3) In the next section, we will give a detailed analysis of the consistency and asymptotic properties of the proposed estimators. 3 Asymptotics Let us now consider the asymptotic properties of the estimation procedure described at the end of the previous section in the context of the general heteroscedastic process (.-.). We will use σ t to denote the inconsistent estimator of σt that is obtained by applying local linear regression to the squared-pseudoresiduals ηt. As explained above, such an estimator is inconsistent since, e.g., even for the homoscedastic model (i.e., σt σ ), Eηt = σ Ψ(φ,..., φ p ). However, note that it is expected to be a consistent estimator of the quantity σ,bias t = σt Ψ(φ,..., φ p ), as it will be formally proved in Section 4 below. hroughout, we denote σ = (σ,..., σ ), σ = ( σ,..., σ ), σ bias = (σ bias,..., σ bias ). (3.) As explained before, it seems reasonable to estimate the coefficients φ, φ,..., φ p using an inconsistent estimator σ t first and, then, correct it to obtain the asymptotically consistent estimator: ˆσ t = Ψ( ˆφ,..., ˆφ p ) σ t. he following detailed algorithm illustrates our approach to obtaining the asymptotic properties of the proposed estimators:. Using the functional autoregression form of the model (.), define the least squares estimator ˆφ := ( ˆφ,..., ˆφ p ) of φ := (φ,..., φ p ) and establish its consistency (i.e. ˆφ P φ as ), under additional conditions.. Define an asymptotically consistent estimator ˆσ t and establish its consistency and asymptotic normality. 6

For convenience, we recall here the functional autoregressive form of (1.1):

    σ_t^{-1} y_t = φ_1 σ_{t−1}^{-1} y_{t−1} + ... + φ_p σ_{t−p}^{-1} y_{t−p} + ε_t.

Next, for any ϑ = (ϑ_1,...,ϑ_T) and ϕ = (ϕ_1,...,ϕ_p), let

    m_k(ϑ; ϕ) := (1/T) Σ_{t=p+1}^{T} m_{k,t}(ϑ; ϕ),   (k = 1, 2,..., p),   (3.2)

where

    m_{k,t}(ϑ; ϕ) := ϑ_{t−k}^{-1} y_{t−k} [ ϑ_t^{-1} y_t − ϑ_{t−1}^{-1} ϕ_1 y_{t−1} − ... − ϑ_{t−p}^{-1} ϕ_p y_{t−p} ],   (3.3)

and (y_t) is generated by the model (1.1) with true parameters (σ_t, φ). Denote

    m_t(ϑ; ϕ) := (m_{1,t}(ϑ; ϕ); ... ; m_{p,t}(ϑ; ϕ)).                (3.4)

Note that

    m_{k,t}(σ; φ) = v_{t−k} ε_t,   (k = 1,..., p),                    (3.5)

and, therefore, none of them depends on {σ_t}. Then, the first order conditions that determine the least-squares estimator φ̂ of φ are given by

    m_k(σ̃; φ̂) = 0,   (k = 1,..., p).                                 (3.6)

A few preliminary results are needed to establish consistency for φ̂. Throughout, we assume that the data generating process (1.1)-(1.2) satisfies the following conditions:

1. {ε_t} are independent identically distributed (i.i.d.) errors with mean zero and variance 1.
2. E|ε_t|^{4+γ} < ∞ for some small γ > 0.
3. σ(·) ∈ F := C²[0,1], the class of continuous functions on [0,1] with continuous second derivatives in (0,1).

We denote by Θ_0 the set of φ = (φ_1,...,φ_p) such that the roots of the characteristic equation 1 − φ_1 z − ... − φ_p z^p = 0 are greater than 1 + δ in absolute value for some δ > 0. Such a condition on φ guarantees causality and stationarity of the AR(p) error process v_t defined in (1.2); moreover, it also implies that v_t can be represented as the MA(∞) process

    v_t = Σ_{i=0}^{∞} ψ_i ε_{t−i},                                    (3.7)

where all of the coefficients {ψ_i}_{i≥0} satisfy

    K_1 ρ^i ≤ |ψ_i| ≤ K_2 ρ^i,                                        (3.8)

for some finite ρ such that 1/(1+δ) < ρ < 1, and positive constants K_1 and K_2 (see, e.g., Brockwell and Davis (1991), Chapter 3). In particular, (3.8) implies that the series {ψ_i}_{i≥0} is absolutely convergent: Σ_{i=0}^{∞} |ψ_i| < ∞.
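For concreteness, the MA(∞) coefficients in (3.7) can be generated from φ by the standard recursion recalled in Remark 3.1 below; a minimal sketch (illustration only, not part of the estimation procedure):

```python
import numpy as np

def ma_coefficients(phi, n):
    """First n coefficients psi_0, ..., psi_{n-1} of the MA(infinity) representation of a
    causal AR(p): psi_0 = 1 and psi_j = sum_{0 < k <= j} phi_k psi_{j-k} (phi_k = 0 for k > p)."""
    p = len(phi)
    psi = np.zeros(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = sum(phi[k - 1] * psi[j - k] for k in range(1, min(j, p) + 1))
    return psi

# For phi = (0.6, 0.3) the coefficients decay geometrically, in line with (3.8).
print(ma_coefficients([0.6, 0.3], 10))
```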

Remark 3.1. The coefficients {ψ_i}_{i≥0} satisfy the recursive system of equations

    ψ_j − Σ_{0<k≤j} φ_k ψ_{j−k} = 0,                                  (3.9)

with ψ_0 = 1 and φ_k = 0 for k > p. Note that this implies (by induction) that ψ_j is a continuously differentiable function of φ for any j ≥ 0.

Our first task is to show the weak consistency of the least-squares estimator φ̂. As pointed out earlier in Dahl and Levine (2006), the estimator φ̂ is an example of a MINPIN semiparametric estimator (i.e., an estimator that MINimizes a criterion function that may depend on a Preliminary Infinite dimensional Nuisance parameter estimator). MINPIN estimators have been discussed in great generality in Andrews (1994). We first establish the following uniform weak law of large numbers (LLN) for m_t(σ; φ) (see Andrews (1987), Appendix A, for the definition).

Lemma 3.2. Suppose that Θ ⊂ Θ_0 is a compact set with non-empty interior. Then, as T → ∞,

    sup_{φ∈Θ} | (1/T) Σ_{t=p+1}^{T} m_t(σ; φ) | →^P 0.

We are now ready to show our main consistency result. The proof is presented in Appendix A.

Theorem 3.3. Let Θ be a compact subset of Θ_0 with non-empty interior. Then, the least squares estimator φ̂ converges to the true φ in probability.

Our next task is to establish asymptotic normality of φ̂.

Theorem 3.4. Let all of the assumptions of Theorem 3.3 hold and, in addition,

1. m_t(ϑ; φ) is twice continuously differentiable in φ for any fixed ϑ;

2. the matrix

    M ≡ lim_{T→∞} (1/T) Σ_{t=p+1}^{T} E ∂m_t(σ; φ)/∂φ

exists uniformly over F × Θ and is continuous at (σ^{bias}, φ) with respect to any pseudo-metric on F × Θ for which (σ̃, φ̂) → (σ^{bias}, φ). Furthermore, the matrix M is invertible.

Then, √T (φ̂ − φ) →^d N(0, V), where V := Γ^{-1} with

    Γ := [γ_{i−j}]_{i,j=1,...,p} =
        ( γ_0      γ_1      γ_2      ...  γ_{p−2}  γ_{p−1} )
        ( γ_1      γ_0      γ_1      ...  γ_{p−3}  γ_{p−2} )
        ( ...                                              )
        ( γ_{p−1}  γ_{p−2}  γ_{p−3}  ...  γ_1      γ_0     ).          (3.10)

For simplicity, the proof, which can be found in Appendix A, is only given for the case p = 2; in that case, the covariance matrix takes the form:

    V := ( V_11  V_12 ) = (  1 − φ_2²        −φ_1(1 + φ_2) )
         ( V_21  V_22 )   ( −φ_1(1 + φ_2)     1 − φ_2²     ).          (3.11)
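As a quick numerical sanity check of (3.11) (not part of the proof), one can form Γ from the AR(2) autocovariances (2.6)-(2.7) and verify that its inverse matches the closed form above; the parameter values below are arbitrary.

```python
import numpy as np

phi1, phi2 = 0.6, 0.3
gamma0 = (1 - phi2) / ((1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2))   # eq. (2.6)
gamma1 = phi1 * gamma0 / (1 - phi2)                                  # eq. (2.7)
Gamma = np.array([[gamma0, gamma1], [gamma1, gamma0]])
V_closed_form = np.array([[1 - phi2 ** 2, -phi1 * (1 + phi2)],
                          [-phi1 * (1 + phi2), 1 - phi2 ** 2]])      # eq. (3.11)
assert np.allclose(np.linalg.inv(Gamma), V_closed_form)
```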

Remark 3.5. The following observations are appropriate at this point:

1. The matrix Γ of (3.10) is non-singular if γ_0 > 0 and γ_l → 0 as l → ∞ (see, e.g., Brockwell and Davis (1991), Proposition 5.1.1). This means that it is non-singular for any causal AR(p) process.

2. In the more general model (1.4) with smooth mean function μ_t = g(x_t) and constant scaling function σ_t ≡ σ (possibly unknown), Hall and Van Keilegom (2003) and Shao and Yang (2011) proposed estimators for the autocorrelation coefficients that are √T-consistent with the same asymptotic variance-covariance matrix as in (3.10). Hence, despite the fact that the scaling factor σ(x_t) may be non-constant in our setting, our estimators of the autoregressive coefficients possess the parametric rate of convergence, as if the conditional variance were known.

3. It is also important to emphasize that the methods in Hall and Van Keilegom (2003) will not apply in the presence of a varying scaling function σ_t = σ(x_t). Indeed, the method therein first provides the following estimates for the autocovariances of the error process:

    γ̂_0 := (1/(m_2 − m_1 + 1)) Σ_{m=m_1}^{m_2} (1/(2(T − m))) Σ_{i=m+1}^{T} {(D_m y)_i}²,   (3.12)
    γ̂_j := γ̂_0 − (1/(2(T − j))) Σ_{i=j+1}^{T} {(D_j y)_i}²,                                  (3.13)

where (D_j y)_i := y_i − y_{i−j} and, as before, T is the number of observations. Here, m_1, m_2 are positive integer-valued sequences converging to ∞ in such a way that m_1 ≤ m_2, m_1/log T → ∞, and m_2 = o(T^{1/2}). Then, the method proceeds to estimate the coefficients φ := (φ_1,...,φ_p) using the Yule-Walker equations Γφ = γ (see Section 8.1 in Brockwell and Davis (1991)):

    (φ̂_1,...,φ̂_p)' := Γ̂^{-1} (γ̂_1,...,γ̂_p)',

where Γ̂ is the p × p matrix having γ̂_{i−j} as its (i,j)-th entry (i, j = 1,...,p). Such an approach clearly will not work if the scaling function σ_t is not constant.

4. Shao and Yang's (2011) approach consists of first estimating the unknown mean function g(x_t) using a B-spline smoother ĝ(x_t) and, then, constructing estimators of the coefficients (φ̂_1,...,φ̂_p) from the estimated residuals r_t := y_t − ĝ(x_t), via the Yule-Walker equations as in Hall and Van Keilegom (2003). The autocovariances γ_j are estimated using the sample autocovariances γ̂_k := (1/T) Σ_{t=1}^{T−k} r_t r_{t+k}. Again, the Yule-Walker estimation approach will not work if σ_t is not constant.

It is interesting to ask what happens if we consider the more general model (1.4) with a non-trivial mean function μ_t = g(x_t). In this case, in order to estimate φ, the following algorithm is an option:

Algorithm 3.1.

1. Estimate the mean function μ_t using, say, a local linear regression method applied to the observations {y_t}_{t∈{1,...,T}} (see Section 4 below for more details on this method; a minimal sketch of the smoothing step is also given after this algorithm). Let μ̂_t be the resulting estimator.

2. Compute the centered observations ŷ_t = y_t − μ̂_t and define the new pseudoresiduals η̂_t = ŷ_t − ŷ_{t−2}.

3. Again, obtain a biased estimator of σ_t² by applying local linear regression to η̂_t², and then follow (2.11) and (2.12).
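Step 1 of Algorithm 3.1, like the smoothing step of Algorithms 2.1-2.2 and the estimator of Section 4, relies on local linear regression. The following sketch shows one standard way to compute such a fit at a single point; the Epanechnikov kernel and the bandwidth handling are assumptions of the example rather than choices prescribed in the paper.

```python
import numpy as np

def local_linear_fit(x0, x, z, bandwidth):
    """Local linear estimate of E[z | x = x0] using an Epanechnikov kernel."""
    u = (x - x0) / bandwidth
    w = 0.75 * np.maximum(1.0 - u ** 2, 0.0)          # kernel weights (zero outside [-1, 1])
    X = np.column_stack([np.ones_like(x), x - x0])    # local intercept and slope
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ z)          # weighted least squares
    return beta[0]                                    # intercept = fitted value at x0

# e.g., smoothing squared pseudoresiduals on a grid (bandwidth chosen by the user):
# s2_tilde = np.array([local_linear_fit(x0, x[2:], eta2, 0.1) for x0 in x[2:]])
```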

It turns out that, if the mean function μ_t can be estimated consistently, φ can be estimated at the same rate of convergence as before. Note that the mean function μ_t = g(t/T) here depends on the number of observations T. This helps us separate the trend, or low-frequency component, represented by μ_t, from the zero-mean error process, which plays the role of the high-frequency component. A more precise formulation is given in the following result, whose proof is outlined in Appendix A.

Corollary 3.6. Let us assume that g''(·) exists and is continuous on (0,1). Also, let K be a kernel function with bounded support that integrates to 1 (i.e. it is a proper density) and whose first moment is equal to 0. Select a sequence h := h_T such that Th → ∞ and h → 0, and let μ̂_t be the corresponding sequence of local linear estimators that use the kernel K_h(x) := h^{-1} K(h^{-1} x). Note that it is hT that effectively plays the role of the bandwidth here (on the scale of the design index t), and not just h. Then, the estimator φ̂ obtained using Algorithm 3.1 still satisfies the two assertions of Theorems 3.3 and 3.4.

4 Variance function estimation

Estimating the variance function σ²(x) is very similar to how it was done in Dahl and Levine (2006). For simplicity, we will illustrate the idea only for p = 2. As a reminder, the first step of our proposed method is estimating not σ²(x) but rather

    σ^{2,bias}(x) = 2σ²(x)/(1 + φ_2),

by local linear regression applied to the squared pseudoresiduals η_t². As in Dahl and Levine (2006), we assume that the kernel K(u) is a two-sided proper-density second-order kernel on the interval [−1,1]; this means that

1. K(u) ≥ 0 and ∫ K(u) du = 1;
2. μ_1 = ∫ u K(u) du = 0 and μ_2 ≡ σ_K² = ∫ u² K(u) du ≠ 0.

We also denote R_K = ∫ K²(u) du. Then, the inconsistent estimator σ̃²(x) of σ²(x) is defined as the value â solving the local least squares problem

    (â, b̂) = arg min_{a,b} Σ_{t=3}^{T} (η_t² − a − b(x_t − x))² K_m(x_t − x),

where, as usual, K_m(x) := m^{-1} K(x/m). Since σ̃²(x) estimates σ^{2,bias}(x) consistently, at the next step we define a consistent estimator of σ²(x) as follows:

    σ̂²(x) = σ̃²(x)(1 + φ̂_2)/2,                                       (4.1)

where φ̂ = (φ̂_1, φ̂_2) is the least-squares estimator defined in (2.11). The following lemma can be proved by following almost verbatim Theorem 3 of Dahl and Levine (2006) and is omitted for brevity. Below, we will use Dσ²(x) and D²σ²(x) to denote the first- and second-order derivatives of the function σ²(x), respectively.

Lemma 4.1. Under the above assumptions (1)-(2) on the kernel and assumptions (1)-(3) on the data generating process (1.1) introduced in Section 3, σ̃²(x) is a consistent estimator of σ^{2,bias}(x). Moreover,

    [ σ̃²(x) − σ^{2,bias}(x) − B(φ, x) ] / V^{1/2}(φ, x)  →^d  N(0, 1),

where the bias B(φ, x) and variance V(φ, x) of σ̃²(x) are such that

    B(φ, x) = m² σ_K² [ D²σ²(x)/4 − γ_2 (Dσ²(x))²/σ²(x) ] + o(m²) + O(T^{-1}),
    V(φ, x) = R_K C(φ_1, φ_2) σ⁴(x)/(Tm) + o((Tm)^{-1}),

and the above constant C(φ_1, φ_2) depends only on φ_1 and φ_2.

Now we are ready to state the main result of this section.

Theorem 4.2. Under the same assumptions as in Lemma 4.1, the estimator σ̂²(x) introduced in (4.1) is an asymptotically consistent estimator of σ²(x) that is also asymptotically normal, with bias (1 + φ_2)B(φ, x)/2 and variance ((1 + φ_2)/2)² V(φ, x).

Proof. By Slutsky's theorem, we have

    [ σ̃²(x) − σ^{2,bias}(x) − B(φ, x) ] · (1 + φ̂_2)/2 / V^{1/2}(φ, x)  →^d  (1 + φ_2)/2 · ζ,   with ζ ~ N(0, 1).

This means that σ̂² is a consistent estimator of σ_t² with bias (1 + φ_2)B(φ, x)/2 and variance ((1 + φ_2)/2)² V(φ, x).

5 Numerical results

5.1 Simulation study

In this part we review the finite-sample performance of the proposed estimators. In order to do this, we consider three model specifications given in Table 1 of Appendix B. The variance function specifications are the same as those in Dahl and Levine (2006). The specification of σ_t² in Model 1 is a leading example in econometrics/statistics and can generate ARCH effects if x_t = y_{t−1}. Model 2 is adapted from Fan and Yao (1998); in particular, the choice of σ_t² is identical to the variance function in one of their examples. The variance function in Model 3 is from Härdle and Tsybakov (1997). We take a fixed design x_t = t/T for t = 0,...,T and compute the WLSE estimators φ̂_1 and φ̂_2 of (2.11) for the previously mentioned variance function specifications and three different sample sizes, T = 100, T = 1000, and T = 2000. In order to assess the performance of the estimator (4.1), we compute the Mean Squared Error (MSE) defined by

    MSE(σ̂²) := (1/(MT)) Σ_{i=1}^{M} Σ_{t=1}^{T} (σ̂²_{t,i} − σ_t²)²,

where σ̂²_{t,i} is the estimated variance function at x_t in the i-th simulation and M is the number of simulations (a small computational sketch is given below, after the list of parameter settings). We use a local linear estimator σ̃_t² for estimating the biased variance function σ_t^{2,bias} of (2.10), as in step 1 of Algorithm 2.1 in Section 2. The selection of the method's bandwidth was carried out by a 10-fold cross-validation method (see, e.g., Fan and Yao (2003) for further information). Table 2 in Appendix B provides the MSE for the above three model specifications and sample sizes, while Table 3 shows the sampling means and standard errors for the estimators φ̂_1 and φ̂_2. In Table 3 of Appendix B, Mn(Sd) stands for Mean(Standard Deviation). We also consider three different parameter settings:

(1) (φ_1, φ_2) = (0.6, 0.3),
(2) (φ_1, φ_2) = (0.6, −0.6),
(3) (φ_1, φ_2) = (0.98, −0.6).
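As a concrete illustration, the MSE criterion defined above can be computed as in the following minimal sketch, which assumes the M estimated variance paths are stacked row-wise in an array (the names are illustrative only):

```python
import numpy as np

def mse(sigma2_hat, sigma2_true):
    """sigma2_hat: array of shape (M, T), one estimated variance path per replication;
    sigma2_true: array of shape (T,), the true variance function on the design points."""
    return np.mean((sigma2_hat - sigma2_true[None, :]) ** 2)
```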

For the true parameter values (φ_1, φ_2) = (0.6, 0.3), the asymptotic standard deviation and covariance given in (3.10) take the values V_11^{1/2} = 0.953 and V_12 = −0.78. In particular, the above standard deviation should be compared with the asymptotic theoretical standard deviation V_11^{1/2}/√T from Theorem 3.4. For the sample sizes 100, 1000, and 2000, V_11^{1/2}/√T takes the values 0.095, 0.030, and 0.021, which match the sampling standard deviations of Table 3 in Appendix B. The results show a clear improvement for increasing sample sizes; Models 2 and 3 seem to be a little easier to estimate than Model 1. Finally, again for φ_1 = 0.6 and φ_2 = 0.3, Figure 1 below shows the sampling densities for φ̂_1 and φ̂_2 corresponding to each of the three models and three sample sizes. No severe small-sample biases seem to be present in any of the pictures.

[Figure 1: six panels of sampling densities (relative frequency against φ̂_1 and φ̂_2) for Models 1, 2 and 3, each panel showing the three sample sizes together with the standard normal reference curve.]

Figure 1: Sampling densities in comparison with the standard normal density under the three alternative variance function specifications of Table 1 of Appendix B. The true parameter values are φ_1 = 0.6 and φ_2 = 0.3. The number of Monte Carlo replications is 1000.

5.2 Real data application

We now apply our method to the first differences of annual global surface air temperatures in Celsius from 1880 through 1985.¹ This data set has been extensively analyzed in the literature using different versions of the general nonparametric regression model (1.4). For instance, Hall and Van Keilegom (2003) fixed σ_t and estimated the coefficients of both the AR(1) and AR(2) models for {v_t}_{t∈{1,...,T}} directly from the observations {y_t}_{t∈{1,...,T}}, before estimating the trend function μ_t = g(x_t) (see Remark 3.5 above for further information about their method). They reported point estimates of φ̂_1 = 0.44 and φ̂_2 = . Assuming again σ_t as in Hall and Van Keilegom (2003), Shao and Yang (2011) first used linear B-splines to estimate the trend function g, and then applied a Yule-Walker type estimation method to the residuals ŷ_t = y_t − μ̂_t, under an AR(1) model specification (see Remark 3.5). They found a point estimate of φ̂_1 = with a standard error of .

The left panel of Figure 2 shows the scatter plot of the temperature differences against time and the estimated mean μ̂_t = ĝ(x_t) using a simple local linear estimator applied to the observations (y_t). We then apply our estimation procedure to the differentials ŷ_t = y_t − μ̂_t, assuming an AR(2) model specification for the error process {v_t}_{t∈{1,...,T}}. The point estimates for φ_1 and φ_2 are respectively φ̂_1 = and φ̂_2 = , with a standard error of 0.08 (see (3.11)). The resulting estimated variance function is shown in the right panel of Figure 2. As observed there, σ̂_t² exhibits an interesting V-shaped pattern with a minimum around the year 1945. Also, the estimated values φ̂_1 and φ̂_2 are consistent with those reported by Hall and Van Keilegom (2003).

[Figure 2: left panel, temperature differences (in Celsius) and the fitted mean against time (in years); right panel, the estimated variance function against time (in years).]

Figure 2: Differences of annual global surface air temperatures in Celsius from 1880 through 1985.

6 Discussion

In this manuscript, we propose a method for estimation of the variance structure of the scaled autoregressive process (1.1)-(1.2): the unknown autoregressive coefficients and the scale variance function σ_t². This method is proposed as an extension of the earlier results of Dahl and Levine (2006), where the analogous problem for the specific case of an autoregressive error process of order 1 was solved. The direct generalization of the method of Dahl and Levine (2006) does not seem to be possible in the case of an autoregressive process of order greater than 1; thus, the method proposed in this manuscript represents a qualitatively new procedure.

¹This data set was obtained from the web site.

There are a number of interesting issues left unanswered here that we plan to address in future research on this subject. Although we only examined the model (1.1) with the conditional mean equal to zero, an important practical issue is often the robust estimation of a parametrically specified conditional mean when the error process is conditionally heteroscedastic. As an example, an autoregressive conditional mean of order l is commonly assumed. Although standard regression procedures can be made robust in this situation by using the heteroscedasticity-consistent (HC) covariance matrix estimates as suggested in Eicker (1963) and White (1980), there may be advantages in considering alternative methods that take the specific covariance structure into account. Such methods are likely to provide more efficient estimators. This has been done in, for example, Phillips and Xu (2005), who addressed that issue by conducting an asymptotic analysis of the least squares estimates of the conditional mean μ_t, t = 1,...,T (see (1.4)), under the assumption of a strongly mixing martingale difference error process and a non-constant variance function. Our setting is not a special case of Phillips and Xu (2005) since for our error process E(v_t | F_{t−1}) ≠ 0 (where F_t = σ(v_s, s ≤ t) is the natural filtration). This case, to the best of our knowledge, has not been considered in the literature before. We believe, therefore, that the asymptotic analysis of the least squares estimates of the coefficients of μ_t under the same assumptions on the error process as in (1.2) is an important topic for future research.

Also, being able to estimate the exact heteroscedasticity structure is important in econometrics in order to design unit-root tests that are robust to violations of the homoscedasticity assumption. The size and power properties of standard unit-root tests can be affected significantly depending on the pattern of the variance changes and when they occur in the sample; an extensive study of possible heteroscedasticity effects on unit-root tests can be found in Cavaliere (2004). We also intend to consider the design of robust unit-root tests under the serial innovations specification (1.2) as part of our future research.

Finally, yet another interesting topic of future research is a possible extension of these results to the case of a more general ARMA(p,q) error process. One of the possibilities may be using the difference-based pseudoresiduals again to construct an inconsistent estimator of the variance function σ_t² first. Indeed, since the scaling constant will only depend on the coefficients of the ARMA(p,q) error process, the MINPIN estimators of the coefficients of the error process based on such an inconsistent variance estimator will be unaffected. Therefore, estimation of the coefficients of the error process will proceed in the same way as for usual ARMA processes. The final correction of the nonparametric variance estimator also appears to be straightforward.

Acknowledgements

José E. Figueroa-López has been partially supported by the NSF grant DMS. Michael Levine has been partially supported by the NSF grant DMS. The authors gratefully acknowledge the constructive comments provided by two anonymous referees, which significantly contributed to improving the quality of the manuscript.

A Proofs

Proof of Proposition 2.1. It is easy to see that

    E η_t² = σ² ( γ_0 Σ_{j=0}^{m} a_j² + 2 Σ_{i=1}^{m} Σ_{j=0}^{m−i} a_j a_{j+i} γ_i ).

Recalling that, for an AR(1) time series,

    γ_0 = 1/(1 − φ_1²)  and  γ_i = φ_1^i γ_0,

it follows that

    Σ_{j=0}^{m} a_j² + 2 Σ_{i=1}^{m} φ_1^i Σ_{j=0}^{m−i} a_j a_{j+i} = 2(1 − φ_1²).

This can be written as the following polynomial equation in φ_1:

    ( Σ_{j=0}^{m} a_j² − 2 ) + 2φ_1 Σ_{j=0}^{m−1} a_j a_{j+1} + 2φ_1² ( 1 + Σ_{j=0}^{m−2} a_j a_{j+2} ) + 2 Σ_{i=3}^{m} φ_1^i Σ_{j=0}^{m−i} a_j a_{j+i} = 0.

Since this must hold for every φ_1 ∈ (−1,1), we get the following system of equations:

    (i)   Σ_{j=0}^{m} a_j² = 2,
    (ii)  1 + Σ_{j=0}^{m−2} a_j a_{j+2} = 0,
    (iii) Σ_{j=0}^{m−i} a_j a_{j+i} = 0,   i ∈ {1, 3,..., m}.

Suppose that a_0 ≠ 0. Then, equation (iii) for i = m implies that a_0 a_m = 0 and, hence, a_m = 0. Equation (iii) for i = m−1 yields a_0 a_{m−1} + a_1 a_m = 0 and thus a_{m−1} = 0. By induction, it follows that a_m = a_{m−1} = ... = a_3 = 0. Plugging this into (i)-(iii),

    a_0² + a_1² + a_2² = 2,   1 + a_0 a_2 = 0,   a_0 a_1 + a_1 a_2 = 0,

which admits as unique solution a_0 = ±1, a_1 = 0, a_2 = ∓1. If a_0 = 0 but a_1 ≠ 0, one can similarly prove that the only solution is a_1 = ±1, a_3 = ∓1, a_i = 0 otherwise. The statement of the proposition can be obtained by induction in k.

Proof of Lemma 3.2. This can be done by appealing to Theorem 1 in Andrews (1987). First, using the representations (3.5)-(3.7), we define

    W_t := (ε_t, ε_{t−1}, ...) ∈ R^N,   q_{t,k}(W_t, φ) := ε_t Σ_{i=0}^{∞} ψ_i ε_{t−k−i}.

It remains to verify the assumptions A1-A3 of Theorem 1 in Andrews (1987). As stated in Corollary 1 of Andrews (1987), one can check its condition A4 instead of condition A3, since A4 implies A3. We now state these three conditions:

A1. Θ is a compact set.

A2. Let B(φ_0; ρ) ⊂ Θ be an open ball around φ_0 of radius ρ and let

    m*_{k,t}(σ; ρ) = sup{ m_{k,t}(σ; φ) : φ ∈ B(φ_0; ρ) },            (A.1)
    m_{*k,t}(σ; ρ) = inf{ m_{k,t}(σ; φ) : φ ∈ B(φ_0; ρ) }.            (A.2)

Then, the following two conditions are satisfied: (a) all of m_{k,t}(σ; φ), m*_{k,t}(σ; ρ), and m_{*k,t}(σ; ρ) are random variables for any φ ∈ Θ, any t, and any sufficiently small ρ; (b) both m*_{k,t}(σ; ρ) and m_{*k,t}(σ; ρ) satisfy pointwise weak laws of large numbers for any sufficiently small ρ.

A4. For each φ ∈ Θ there is a constant τ > 0 such that d(φ̃, φ) ≤ τ implies

    | m_t(σ; φ̃) − m_t(σ; φ) | ≤ B_t h(d(φ̃, φ)),

where B_t is a nonnegative random variable (that may depend on φ) with limsup_T (1/T) Σ_{t=1}^{T} E B_t < ∞, while h: R_+ → R_+ is a nonrandom function such that h(y) → h(0) = 0 as y → 0. Above, |·| denotes the standard Euclidean norm in R^p.

Since condition A1 above is assumed as a hypothesis, we only need to work with conditions A2 and A4. The verification of these is done through the following two steps.

1. Let ρ > 0 be small enough that B(φ_0; ρ) ⊂ Θ. Then, recalling the representations (3.5) and (3.7),

    m_{1,t}(σ; φ) = ε_t Σ_{i=0}^{∞} ψ_i ε_{t−1−i},                    (A.3)

we find that the supremum of m_{1,t}(σ; φ), taken over φ ∈ B(φ_0; ρ), also exists and is a random variable. Indeed, each of the summands in (A.3) is a continuous function of φ, as we already established earlier; the convergence in mean square to m_{1,t}(σ; φ) is uniform in φ due to (3.8), and, therefore, m_{1,t}(σ; φ) is continuous in φ as well. That, in turn, implies the existence of m*_{1,t}(σ; ρ). The existence of m_{*1,t}(σ; ρ) is established in exactly the same way. Moreover, the pointwise WLLNs for both sup_{φ∈B} m_{k,t}(σ; φ) and inf_{φ∈B} m_{k,t}(σ; φ) are also clearly satisfied since E m_{k,t}(σ; φ) ≡ 0 for any φ ∈ B(φ_0; ρ) ⊂ Θ.

2. We now show condition A4 above for m_{1,t}(σ; φ) (m_{2,t}(σ; φ) can be treated analogously). Denote m̃_{1,t}(σ; φ̃) = ε_t Σ_{i=0}^{∞} ψ̃_i ε_{t−1−i}, where the ψ̃_i correspond to the MA(∞) representation of the AR(2) series with the parameter vector φ̃ = (φ̃_1, φ̃_2) ∈ Θ. For the sake of brevity we will write m and m̃ for m_{1,t}(σ; φ) and m̃_{1,t}(σ; φ̃), respectively. Note that

    | m − m̃ | = | ε_t Σ_{i=0}^{∞} (ψ_i − ψ̃_i) ε_{t−1−i} | ≤ ( Σ_{i=0}^{∞} |ψ_i ε_{t−1−i} ε_t| ) · sup_{i≥0} |ψ_i − ψ̃_i| / |ψ_i|.

Let

    B_t := Σ_{i=0}^{∞} |ψ_i ε_{t−1−i} ε_t|   and   d(φ̃, φ) := sup_{i≥0} |ψ_i − ψ̃_i| / |ψ_i|.

Then,

    sup_T (1/T) Σ_{t=1}^{T} E B_t ≤ sup_t E ( Σ_{i=0}^{∞} |ψ_i ε_{t−1−i} ε_t| ) = Σ_{i=0}^{∞} |ψ_i| < ∞.

Now we need to treat the second multiplicative term. First, recalling (3.9), it is easy to conclude, by induction, that the coefficients ψ_j are continuously differentiable functions of φ = (φ_1, φ_2) for any φ ∈ B(φ, ρ) ⊂ Θ. Therefore, using (3.8), one can easily establish that lim_{φ̃→φ} d(φ̃, φ) = 0. Note that the quantity sup_{i≥0} |ψ_i − ψ̃_i|/|ψ_i| is not a metric and, therefore, the verification of Assumption A4 seems in doubt at first sight. However (see Andrews (1992)), the fact that Assumption A4 implies Assumption A3 does not require the argument d(φ̃, φ) of the function h to be a proper metric, but only that d(φ̃, φ) → 0 as φ̃ → φ.

Proof of Theorem 3.3. Our proof of consistency relies on Theorem A of Andrews (1994) with W_t = (y_t, y_{t−1}, y_{t−2}). We will simply verify that the sufficient conditions of Theorem A hold. The first assumption, C(a), follows from Lemma 3.2, taking the limiting function m̄(σ; φ) ≡ 0. It remains to show the other conditions therein. The first part of Assumption C(b) of Theorem A is immediately satisfied because m̄(σ; φ) ≡ 0 for any φ and σ. Since σ̃_t is a local linear regression estimator, it is clear that it is twice continuously differentiable as long as the kernel function K(·) is twice continuously differentiable; thus, the second part of Assumption C(b) is also true. Assumption C(c) is true if the Euclidean norm of m̄(σ; φ) = lim_T (1/T) Σ_t E m_t(σ; φ) is finite. Hereafter, we denote the Euclidean norm by ||·||. In our case, since both martingale difference sequences v_{t−1}ε_t and v_{t−2}ε_t have mean zero, clearly sup_{Θ×F} ||m̄(σ; φ)|| = 0 < ∞ and Assumption C(c) is satisfied. Assumption C(d) is true because Θ is compact, the functional d_T = m_T'(σ; φ) m_T(σ; φ)/2 is continuous in φ, and its Hessian matrix with respect to φ is positive definite (as can be verified). All of the above allows us to conclude that the weak consistency holds: φ̂ →^P φ.

Proof of Theorem 3.4. Recall that σ̃_t² is the inconsistent estimator of σ_t² which, nevertheless, estimates the quantity σ_t^{2,bias} = 2σ_t²/(1 + φ_2) consistently (see Lemma 4.1 above); it is corrected to obtain σ̂_t² = σ̃_t²(1 + φ̂_2)/2. We also recall the notation (3.4) and define, for a generic argument ϑ = (ϑ_1,...,ϑ_T) and ϕ = (ϕ_1, ϕ_2),

    m_T(ϑ; ϕ) := (1/T) Σ_{t=3}^{T} m_t(ϑ; ϕ),

with m_t(ϑ; ϕ) given as in (3.4). Since the vector-valued function m_t(ϑ; ϕ) is twice continuously differentiable, we can use a Taylor expansion around (σ̃; φ) to get

    m_T(σ̃; φ̂) = m_T(σ̃; φ) + ∂m_T(σ̃; φ*)/∂φ · (φ̂ − φ),              (A.4)

for some φ* that lies on the straight line connecting φ̂ and φ. Note that, due to the first order conditions (3.6), the left-hand side of equation (A.4) vanishes. Hence, using the consistency of φ̂ for φ (Theorem 3.3) and the assumption in the statement of the theorem, we obtain that

    √T (φ̂ − φ) = −[ M + o_p(1) ]^{-1} √T m_T(σ̃; φ),

provided that the matrix M = lim_T (1/T) Σ_t E ∂m_t/∂φ (σ^{bias}; φ) exists and is invertible. In order to verify the latter condition, note that in our case,

    ∂m_t/∂φ (σ^{bias}; φ) = ( ∂m_{1,t}/∂φ_1  ∂m_{1,t}/∂φ_2 ; ∂m_{2,t}/∂φ_1  ∂m_{2,t}/∂φ_2 ) |_{(σ^{bias}, φ)}
                          = −(1 + φ_2)/2 · ( v_{t−1}²  v_{t−1}v_{t−2} ; v_{t−1}v_{t−2}  v_{t−2}² ).   (A.5)

In light of (2.6), it follows that

    M = lim_T (1/T) Σ_t E ∂m_t/∂φ (σ^{bias}; φ) = −(1 + φ_2)/2 · ( γ_0  γ_1 ; γ_1  γ_0 ),   (A.6)

which, in particular, is clearly invertible. Next, let m̄_T(ϑ; ϕ) = (1/T) Σ_t E m_t(ϑ; ϕ) and define the empirical process

    ν_T(ϑ) = √T [ m_T(ϑ; φ) − m̄_T(ϑ; φ) ].

Clearly,

    m_T(σ̃; φ) = m_T(σ^{bias}; φ) + T^{-1/2} [ ν_T(σ̃) − ν_T(σ^{bias}) ] + m̄_T(σ̃; φ) − m̄_T(σ^{bias}; φ).

Now, using (3.5),

    m_t(σ^{bias}; φ) = (1 + φ_2)/2 · ( v_{t−1} ε_t , v_{t−2} ε_t )'.

Hence, using the CLT for martingale difference sequences (see Billingsley (1961)), √T m_T(σ^{bias}; φ) is asymptotically normal N(0, ((1 + φ_2)/2)² S), where

    S := ( E v_{t−1}² ε_t²   E v_{t−1} v_{t−2} ε_t² ; E v_{t−1} v_{t−2} ε_t²   E v_{t−2}² ε_t² ) = ( γ_0  γ_1 ; γ_1  γ_0 ).

Note that M = −(1 + φ_2)/2 · S. Then, if we can show that

    ν_T(σ̃) − ν_T(σ^{bias}) →^P 0   and   √T m̄_T(σ̃; φ) →^P 0,        (A.7)

our task is over and we can say that √T (φ̂ − φ) →^d N(0, V) with

    V = M^{-1} ( ((1 + φ_2)/2)² S ) (M^{-1})' = ( −(1 + φ_2)/2 · S )^{-1} ( ((1 + φ_2)/2)² S ) ( −(1 + φ_2)/2 · S )^{-1} = S^{-1} = ( γ_0  γ_1 ; γ_1  γ_0 )^{-1}.

A simple computation using (2.6)-(2.7) leads to (3.10). We show (A.7) through the following two steps.

(1) First, denoting the indicator function of an event A by 1_A, we have, for any ε > 0,

    √T m̄_T(σ̃; φ) = √T m̄_T(σ̃; φ) 1_{ρ(σ̃, σ^{bias}) > ε} + √T m̄_T(σ̃; φ) 1_{ρ(σ̃, σ^{bias}) ≤ ε}.

The first term on the right is o_p(1) due to the consistency of σ̃ as an estimator of σ^{bias}. Similarly, the second term therein is o_p(1) due to the fact that E m_t(σ^{bias}; φ) = 0 and E m_t(ϑ; ϕ) is uniformly continuous in ϑ. We then conclude that √T m̄_T(σ̃; φ) →^P 0.

(2) We can show that ν_T(σ̃) − ν_T(σ^{bias}) →^P 0 using the argument of Andrews (1994), which requires, first, establishing stochastic equicontinuity of ν_T(ϑ) at σ^{bias} and then deducing the required convergence in probability. To do this, note first that

    T^{-1/2} Σ_t ( v_t² − E v_t² ) = O_p(1),   T^{-1/2} Σ_t v_{t−k} v_{t−k−2} = O_p(1),   (A.8)

for k = 0, 1. The empirical process ν_T(ϑ) has its values in R²; examining its first coordinate ν_T^{(1)}(ϑ), one easily obtains ν_T^{(1)}(ϑ) = T^{-1/2} Σ_t (S_t − E S_t)' τ_t(ϑ), where

    S_t = ( y_t y_{t−1}, −φ_1 y_{t−1}², −φ_2 y_{t−1} y_{t−2} )',   τ_t(ϑ) = ( ϑ_t^{-1} ϑ_{t−1}^{-1}, ϑ_{t−1}^{-2}, ϑ_{t−1}^{-1} ϑ_{t−2}^{-1} )'.

Then, denoting the standard Euclidean norm in R³ by ||·||, for η > 0 and δ > 0 we have

    limsup_T P( sup_{ρ(σ̃, σ^{bias}) < δ} | ν_T^{(1)}(σ̃) − ν_T^{(1)}(σ^{bias}) | > η )
        = limsup_T P( sup_{ρ(σ̃, σ^{bias}) < δ} | T^{-1/2} Σ_t (S_t − E S_t)'( τ_t(σ̃) − τ_t(σ^{bias}) ) | > η )
        ≤ limsup_T P( || T^{-1/2} Σ_t (S_t − E S_t) || > η/δ ) < ε,

for δ > 0 small enough, since T^{-1/2} Σ_t (S_t − E S_t) = O_p(1) due to (A.8). Using exactly the same argument, one easily obtains stochastic equicontinuity of the second coordinate ν_T^{(2)}(ϑ) at σ^{bias}; therefore, the needed convergence in probability for ν_T(σ̃) − ν_T(σ^{bias}) follows from the convergence of both coordinates separately and Slutsky's theorem.

Proof of Corollary 3.6. The following is a sketch of the proof for the case p = 2. The conditions given in the corollary are not the least restrictive; they guarantee consistency of the estimator μ̂_t of g(x_t) and a bias of order O(h²); see Fan and Yao (2003, Chapter 6) for details. Note that the model (1.4) can be represented in the functional autoregressive form

    σ_t^{-1} y_t = f_t(φ) + φ_1 σ_{t−1}^{-1} y_{t−1} + φ_2 σ_{t−2}^{-1} y_{t−2} + ε_t,

where the new mean function is f_t(φ) = σ_t^{-1} g(x_t) − φ_1 σ_{t−1}^{-1} g(x_{t−1}) − φ_2 σ_{t−2}^{-1} g(x_{t−2}). Recall that the functions m_{k,t}(φ) ≡ v_{t−k} ε_t, k = 1, 2, do not depend on σ_t; therefore, (3.5) is still true and Lemma 3.2 holds as well. This implies that Theorem 3.3 remains true. Finally, when trying to prove Theorem 3.4, one quickly discovers that the matrix ∂m_t/∂φ (φ) is now different; for example, the entry ∂m_{1,t}/∂φ_1 acquires an additional term of the form g(x_{t−1}) σ_{t−1}^{-1} v_{t−1}. Nevertheless, since this additional term has mean zero, its expectation is the same as before, and the same is true of the other elements of the matrix; thus, the matrix M stays the same. Finally, the vector S_t = ( (y_t − μ_t)(y_{t−1} − μ_{t−1}), −φ_1 (y_{t−1} − μ_{t−1})², −φ_2 (y_{t−1} − μ_{t−1})(y_{t−2} − μ_{t−2}) )' is now different; nevertheless, it is easy to verify that T^{-1/2} Σ_t (S_t − E S_t) = O_p(1) again; thus the final result is still valid.

References

Andrews, D.W.K. (1987). Consistency in nonlinear econometric models: a generic uniform law of large numbers. Econometrica, 55.
Andrews, D.W.K. (1994). Asymptotics for semiparametric econometric models via stochastic equicontinuity. Econometrica, 62.
Billingsley, P. (1961). The Lindeberg-Lévy theorem for martingales. Proceedings of the American Mathematical Society, 12.
Brockwell, P. and Davis, R. (1991). Time Series: Theory and Methods. Springer.
Cavaliere, G. (2004). Unit root tests under time-varying variance shifts. Econometric Reviews, 23.
Chen, R. and Tsay, R.S. (1993). Functional coefficient autoregressive models. Journal of the American Statistical Association, 88.
Dahl, C. and Levine, M. (2006). Nonparametric estimation of volatility models with serially dependent innovations. Statistics and Probability Letters, 76(18).
Eicker, F. (1963). Limit theorems for regression with unequal and dependent errors. Annals of Mathematical Statistics, 34.
Fan, J. and Yao, Q. (1998). Efficient estimation of conditional variance functions in stochastic regression. Biometrika, 85.
Fan, J. and Yao, Q. (2003). Nonlinear Time Series: Nonparametric and Parametric Methods. Springer.
Hall, P. and Van Keilegom, I. (2003). Using difference-based methods for inference in nonparametric regression with time series errors. Journal of the Royal Statistical Society, Series B, 65(2).
Härdle, W. and Tsybakov, A. (1997). Local polynomial estimators of the volatility function in nonparametric autoregression. Journal of Econometrics, 81, 223-242.
Phillips, P.C.B. and Xu, K.-L. (2005). Inference in autoregression under heteroskedasticity. Journal of Time Series Analysis, 27.
Shao, Q. and Yang, L. (2011). Autoregressive coefficient estimation in nonparametric analysis. Journal of Time Series Analysis, 32.
White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48.

B Tables

Table 1: Alternative data generating processes; ϕ(·) denotes the standard normal probability density.

    Model 1: σ_t² = 0.5 x_t + 0.
    Model 2: σ_t² = 0.4 exp(x_t)
    Model 3: σ_t² = ϕ(x_t + 1.2) + 1.5 ϕ(x_t − 1.2)

Table 2: Mean Squared Errors (MSE) of σ̂²(x) under the variance function specifications of Table 1, with 1000 Monte Carlo replications and a 10-fold cross-validation for bandwidth selection. Columns: Models 1-3 at T = 100, 1000, 2000, for each of the parameter settings (φ_1, φ_2) = (0.6, 0.3), (0.6, −0.6), (0.98, −0.6). [The numerical entries of Table 2 are not legible in this transcription.]

Table 3: Sampling means and standard deviations, Mn(Sd), of φ̂_1 and φ̂_2 under the variance function specifications of Table 1, with 1000 Monte Carlo replications, for Models 1-3 at T = 100, 1000, 2000 and the parameter settings (φ_1, φ_2) = (0.6, 0.3), (0.6, −0.6), (0.98, −0.6). [The individual Mn(Sd) entries of Table 3 are only partially legible in this transcription and are not reproduced here.]

Department of Statistics, Purdue University, 250 N. University St., West Lafayette, IN. E-mail: figueroa@purdue.edu

Department of Statistics, Purdue University, 250 N. University St., West Lafayette, IN. E-mail: mlevins@purdue.edu


More information

Closest Moment Estimation under General Conditions

Closest Moment Estimation under General Conditions Closest Moment Estimation under General Conditions Chirok Han and Robert de Jong January 28, 2002 Abstract This paper considers Closest Moment (CM) estimation with a general distance function, and avoids

More information

The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series

The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series The Restricted Likelihood Ratio Test at the Boundary in Autoregressive Series Willa W. Chen Rohit S. Deo July 6, 009 Abstract. The restricted likelihood ratio test, RLRT, for the autoregressive coefficient

More information

A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED

A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED by W. Robert Reed Department of Economics and Finance University of Canterbury, New Zealand Email: bob.reed@canterbury.ac.nz

More information

3. ARMA Modeling. Now: Important class of stationary processes

3. ARMA Modeling. Now: Important class of stationary processes 3. ARMA Modeling Now: Important class of stationary processes Definition 3.1: (ARMA(p, q) process) Let {ɛ t } t Z WN(0, σ 2 ) be a white noise process. The process {X t } t Z is called AutoRegressive-Moving-Average

More information

1 Procedures robust to weak instruments

1 Procedures robust to weak instruments Comment on Weak instrument robust tests in GMM and the new Keynesian Phillips curve By Anna Mikusheva We are witnessing a growing awareness among applied researchers about the possibility of having weak

More information

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Vadim Marmer University of British Columbia Artyom Shneyerov CIRANO, CIREQ, and Concordia University August 30, 2010 Abstract

More information

MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE)

MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE) MAT 3379 (Winter 2016) FINAL EXAM (PRACTICE) 15 April 2016 (180 minutes) Professor: R. Kulik Student Number: Name: This is closed book exam. You are allowed to use one double-sided A4 sheet of notes. Only

More information

1 Linear Difference Equations

1 Linear Difference Equations ARMA Handout Jialin Yu 1 Linear Difference Equations First order systems Let {ε t } t=1 denote an input sequence and {y t} t=1 sequence generated by denote an output y t = φy t 1 + ε t t = 1, 2,... with

More information

Estimation of the Conditional Variance in Paired Experiments

Estimation of the Conditional Variance in Paired Experiments Estimation of the Conditional Variance in Paired Experiments Alberto Abadie & Guido W. Imbens Harvard University and BER June 008 Abstract In paired randomized experiments units are grouped in pairs, often

More information

A time series is called strictly stationary if the joint distribution of every collection (Y t

A time series is called strictly stationary if the joint distribution of every collection (Y t 5 Time series A time series is a set of observations recorded over time. You can think for example at the GDP of a country over the years (or quarters) or the hourly measurements of temperature over a

More information

11. Further Issues in Using OLS with TS Data

11. Further Issues in Using OLS with TS Data 11. Further Issues in Using OLS with TS Data With TS, including lags of the dependent variable often allow us to fit much better the variation in y Exact distribution theory is rarely available in TS applications,

More information

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University The Bootstrap: Theory and Applications Biing-Shen Kuo National Chengchi University Motivation: Poor Asymptotic Approximation Most of statistical inference relies on asymptotic theory. Motivation: Poor

More information

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis

Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Sílvia Gonçalves and Benoit Perron Département de sciences économiques,

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data

High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data High Dimensional Empirical Likelihood for Generalized Estimating Equations with Dependent Data Song Xi CHEN Guanghua School of Management and Center for Statistical Science, Peking University Department

More information

Lecture 2: Univariate Time Series

Lecture 2: Univariate Time Series Lecture 2: Univariate Time Series Analysis: Conditional and Unconditional Densities, Stationarity, ARMA Processes Prof. Massimo Guidolin 20192 Financial Econometrics Spring/Winter 2017 Overview Motivation:

More information

Studies in Nonlinear Dynamics & Econometrics

Studies in Nonlinear Dynamics & Econometrics Studies in Nonlinear Dynamics & Econometrics Volume 9, Issue 2 2005 Article 4 A Note on the Hiemstra-Jones Test for Granger Non-causality Cees Diks Valentyn Panchenko University of Amsterdam, C.G.H.Diks@uva.nl

More information

Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance

Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance Advances in Decision Sciences Volume, Article ID 893497, 6 pages doi:.55//893497 Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance Junichi Hirukawa and

More information

Time Series 3. Robert Almgren. Sept. 28, 2009

Time Series 3. Robert Almgren. Sept. 28, 2009 Time Series 3 Robert Almgren Sept. 28, 2009 Last time we discussed two main categories of linear models, and their combination. Here w t denotes a white noise: a stationary process with E w t ) = 0, E

More information

The Estimation of Simultaneous Equation Models under Conditional Heteroscedasticity

The Estimation of Simultaneous Equation Models under Conditional Heteroscedasticity The Estimation of Simultaneous Equation Models under Conditional Heteroscedasticity E M. I and G D.A. P University of Alicante Cardiff University This version: April 2004. Preliminary and to be completed

More information

Ch. 15 Forecasting. 1.1 Forecasts Based on Conditional Expectations

Ch. 15 Forecasting. 1.1 Forecasts Based on Conditional Expectations Ch 15 Forecasting Having considered in Chapter 14 some of the properties of ARMA models, we now show how they may be used to forecast future values of an observed time series For the present we proceed

More information

Goodness-of-fit tests for the cure rate in a mixture cure model

Goodness-of-fit tests for the cure rate in a mixture cure model Biometrika (217), 13, 1, pp. 1 7 Printed in Great Britain Advance Access publication on 31 July 216 Goodness-of-fit tests for the cure rate in a mixture cure model BY U.U. MÜLLER Department of Statistics,

More information

Time Series Analysis. Asymptotic Results for Spatial ARMA Models

Time Series Analysis. Asymptotic Results for Spatial ARMA Models Communications in Statistics Theory Methods, 35: 67 688, 2006 Copyright Taylor & Francis Group, LLC ISSN: 036-0926 print/532-45x online DOI: 0.080/036092050049893 Time Series Analysis Asymptotic Results

More information

University of Pavia. M Estimators. Eduardo Rossi

University of Pavia. M Estimators. Eduardo Rossi University of Pavia M Estimators Eduardo Rossi Criterion Function A basic unifying notion is that most econometric estimators are defined as the minimizers of certain functions constructed from the sample

More information

TESTING SERIAL CORRELATION IN SEMIPARAMETRIC VARYING COEFFICIENT PARTIALLY LINEAR ERRORS-IN-VARIABLES MODEL

TESTING SERIAL CORRELATION IN SEMIPARAMETRIC VARYING COEFFICIENT PARTIALLY LINEAR ERRORS-IN-VARIABLES MODEL Jrl Syst Sci & Complexity (2009) 22: 483 494 TESTIG SERIAL CORRELATIO I SEMIPARAMETRIC VARYIG COEFFICIET PARTIALLY LIEAR ERRORS-I-VARIABLES MODEL Xuemei HU Feng LIU Zhizhong WAG Received: 19 September

More information

Economics 583: Econometric Theory I A Primer on Asymptotics

Economics 583: Econometric Theory I A Primer on Asymptotics Economics 583: Econometric Theory I A Primer on Asymptotics Eric Zivot January 14, 2013 The two main concepts in asymptotic theory that we will use are Consistency Asymptotic Normality Intuition consistency:

More information

ESTIMATION OF NONLINEAR BERKSON-TYPE MEASUREMENT ERROR MODELS

ESTIMATION OF NONLINEAR BERKSON-TYPE MEASUREMENT ERROR MODELS Statistica Sinica 13(2003), 1201-1210 ESTIMATION OF NONLINEAR BERKSON-TYPE MEASUREMENT ERROR MODELS Liqun Wang University of Manitoba Abstract: This paper studies a minimum distance moment estimator for

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Econometrics of financial markets, -solutions to seminar 1. Problem 1

Econometrics of financial markets, -solutions to seminar 1. Problem 1 Econometrics of financial markets, -solutions to seminar 1. Problem 1 a) Estimate with OLS. For any regression y i α + βx i + u i for OLS to be unbiased we need cov (u i,x j )0 i, j. For the autoregressive

More information

AR, MA and ARMA models

AR, MA and ARMA models AR, MA and AR by Hedibert Lopes P Based on Tsay s Analysis of Financial Time Series (3rd edition) P 1 Stationarity 2 3 4 5 6 7 P 8 9 10 11 Outline P Linear Time Series Analysis and Its Applications For

More information

Final Exam November 24, Problem-1: Consider random walk with drift plus a linear time trend: ( t

Final Exam November 24, Problem-1: Consider random walk with drift plus a linear time trend: ( t Problem-1: Consider random walk with drift plus a linear time trend: y t = c + y t 1 + δ t + ϵ t, (1) where {ϵ t } is white noise with E[ϵ 2 t ] = σ 2 >, and y is a non-stochastic initial value. (a) Show

More information

Understanding Regressions with Observations Collected at High Frequency over Long Span

Understanding Regressions with Observations Collected at High Frequency over Long Span Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University

More information

Time Series: Theory and Methods

Time Series: Theory and Methods Peter J. Brockwell Richard A. Davis Time Series: Theory and Methods Second Edition With 124 Illustrations Springer Contents Preface to the Second Edition Preface to the First Edition vn ix CHAPTER 1 Stationary

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis Introduction to Time Series Analysis 1 Contents: I. Basics of Time Series Analysis... 4 I.1 Stationarity... 5 I.2 Autocorrelation Function... 9 I.3 Partial Autocorrelation Function (PACF)... 14 I.4 Transformation

More information

Taking a New Contour: A Novel View on Unit Root Test 1

Taking a New Contour: A Novel View on Unit Root Test 1 Taking a New Contour: A Novel View on Unit Root Test 1 Yoosoon Chang Department of Economics Rice University and Joon Y. Park Department of Economics Rice University and Sungkyunkwan University Abstract

More information

Applied time-series analysis

Applied time-series analysis Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna October 18, 2011 Outline Introduction and overview Econometric Time-Series Analysis In principle,

More information

Local Polynomial Regression

Local Polynomial Regression VI Local Polynomial Regression (1) Global polynomial regression We observe random pairs (X 1, Y 1 ),, (X n, Y n ) where (X 1, Y 1 ),, (X n, Y n ) iid (X, Y ). We want to estimate m(x) = E(Y X = x) based

More information

Asymptotic distribution of GMM Estimator

Asymptotic distribution of GMM Estimator Asymptotic distribution of GMM Estimator Eduardo Rossi University of Pavia Econometria finanziaria 2010 Rossi (2010) GMM 2010 1 / 45 Outline 1 Asymptotic Normality of the GMM Estimator 2 Long Run Covariance

More information

Working Paper No Maximum score type estimators

Working Paper No Maximum score type estimators Warsaw School of Economics Institute of Econometrics Department of Applied Econometrics Department of Applied Econometrics Working Papers Warsaw School of Economics Al. iepodleglosci 64 02-554 Warszawa,

More information

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions Economics Division University of Southampton Southampton SO17 1BJ, UK Discussion Papers in Economics and Econometrics Title Overlapping Sub-sampling and invariance to initial conditions By Maria Kyriacou

More information

Proofs for Large Sample Properties of Generalized Method of Moments Estimators

Proofs for Large Sample Properties of Generalized Method of Moments Estimators Proofs for Large Sample Properties of Generalized Method of Moments Estimators Lars Peter Hansen University of Chicago March 8, 2012 1 Introduction Econometrica did not publish many of the proofs in my

More information

Parameter estimation: ACVF of AR processes

Parameter estimation: ACVF of AR processes Parameter estimation: ACVF of AR processes Yule-Walker s for AR processes: a method of moments, i.e. µ = x and choose parameters so that γ(h) = ˆγ(h) (for h small ). 12 novembre 2013 1 / 8 Parameter estimation:

More information

A strong consistency proof for heteroscedasticity and autocorrelation consistent covariance matrix estimators

A strong consistency proof for heteroscedasticity and autocorrelation consistent covariance matrix estimators A strong consistency proof for heteroscedasticity and autocorrelation consistent covariance matrix estimators Robert M. de Jong Department of Economics Michigan State University 215 Marshall Hall East

More information

The properties of L p -GMM estimators

The properties of L p -GMM estimators The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion

More information

Nonparametric Tests of Moment Condition Stability

Nonparametric Tests of Moment Condition Stability KU ScholarWorks http://kuscholarworks.ku.edu Please share your stories about how Open Access to this article benefits you. Nonparametric ests of Moment Condition Stability by ed Juhl and Zhijie Xiao 2013

More information

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s

Massachusetts Institute of Technology Department of Economics Time Series Lecture 6: Additional Results for VAR s Massachusetts Institute of Technology Department of Economics Time Series 14.384 Guido Kuersteiner Lecture 6: Additional Results for VAR s 6.1. Confidence Intervals for Impulse Response Functions There

More information

Optimal bandwidth selection for the fuzzy regression discontinuity estimator

Optimal bandwidth selection for the fuzzy regression discontinuity estimator Optimal bandwidth selection for the fuzzy regression discontinuity estimator Yoichi Arai Hidehiko Ichimura The Institute for Fiscal Studies Department of Economics, UCL cemmap working paper CWP49/5 Optimal

More information

Chapter 4: Models for Stationary Time Series

Chapter 4: Models for Stationary Time Series Chapter 4: Models for Stationary Time Series Now we will introduce some useful parametric models for time series that are stationary processes. We begin by defining the General Linear Process. Let {Y t

More information

Statistics: Learning models from data

Statistics: Learning models from data DS-GA 1002 Lecture notes 5 October 19, 2015 Statistics: Learning models from data Learning models from data that are assumed to be generated probabilistically from a certain unknown distribution is a crucial

More information

Midterm Suggested Solutions

Midterm Suggested Solutions CUHK Dept. of Economics Spring 2011 ECON 4120 Sung Y. Park Midterm Suggested Solutions Q1 (a) In time series, autocorrelation measures the correlation between y t and its lag y t τ. It is defined as. ρ(τ)

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics Jiti Gao Department of Statistics School of Mathematics and Statistics The University of Western Australia Crawley

More information

Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations

Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations Diagnostic Test for GARCH Models Based on Absolute Residual Autocorrelations Farhat Iqbal Department of Statistics, University of Balochistan Quetta-Pakistan farhatiqb@gmail.com Abstract In this paper

More information

Simultaneous Equations and Weak Instruments under Conditionally Heteroscedastic Disturbances

Simultaneous Equations and Weak Instruments under Conditionally Heteroscedastic Disturbances Simultaneous Equations and Weak Instruments under Conditionally Heteroscedastic Disturbances E M. I and G D.A. P Michigan State University Cardiff University This version: July 004. Preliminary version

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued

Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and

More information

Monte Carlo Studies. The response in a Monte Carlo study is a random variable.

Monte Carlo Studies. The response in a Monte Carlo study is a random variable. Monte Carlo Studies The response in a Monte Carlo study is a random variable. The response in a Monte Carlo study has a variance that comes from the variance of the stochastic elements in the data-generating

More information

Cointegrating Regressions with Messy Regressors: J. Isaac Miller

Cointegrating Regressions with Messy Regressors: J. Isaac Miller NASMES 2008 June 21, 2008 Carnegie Mellon U. Cointegrating Regressions with Messy Regressors: Missingness, Mixed Frequency, and Measurement Error J. Isaac Miller University of Missouri 1 Messy Data Example

More information

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider

More information

11. Bootstrap Methods

11. Bootstrap Methods 11. Bootstrap Methods c A. Colin Cameron & Pravin K. Trivedi 2006 These transparencies were prepared in 20043. They can be used as an adjunct to Chapter 11 of our subsequent book Microeconometrics: Methods

More information

Estimation of the functional Weibull-tail coefficient

Estimation of the functional Weibull-tail coefficient 1/ 29 Estimation of the functional Weibull-tail coefficient Stéphane Girard Inria Grenoble Rhône-Alpes & LJK, France http://mistis.inrialpes.fr/people/girard/ June 2016 joint work with Laurent Gardes,

More information

A Note on Bootstraps and Robustness. Tony Lancaster, Brown University, December 2003.

A Note on Bootstraps and Robustness. Tony Lancaster, Brown University, December 2003. A Note on Bootstraps and Robustness Tony Lancaster, Brown University, December 2003. In this note we consider several versions of the bootstrap and argue that it is helpful in explaining and thinking about

More information

Autoregressive Moving Average (ARMA) Models and their Practical Applications

Autoregressive Moving Average (ARMA) Models and their Practical Applications Autoregressive Moving Average (ARMA) Models and their Practical Applications Massimo Guidolin February 2018 1 Essential Concepts in Time Series Analysis 1.1 Time Series and Their Properties Time series:

More information

Introduction to ARMA and GARCH processes

Introduction to ARMA and GARCH processes Introduction to ARMA and GARCH processes Fulvio Corsi SNS Pisa 3 March 2010 Fulvio Corsi Introduction to ARMA () and GARCH processes SNS Pisa 3 March 2010 1 / 24 Stationarity Strict stationarity: (X 1,

More information