Recursive Estimation of Time-Average Variance Constants Through Prewhitening


Wei Zheng^a, Yong Jin^b, Guoyi Zhang^c

a Department of Mathematical Sciences, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, United States
b Department of Finance, Insurance and Real Estate, University of Florida, Gainesville, FL 32605, United States
c Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131, United States

Abstract

The time-average variance constant (TAVC) is an important component in the study of time series. Many real problems require a fast and recursive way of estimating the TAVC. In this paper we apply an AR(1) prewhitening filter to the recursive algorithm of Wu (2009b), so that the memory complexity of order O(1) is maintained while the accuracy of the estimate is improved. This is justified by both theoretical results and simulation studies.

Keywords: Central limit theorem, consistency, linear process, Markov chains, martingale, Monte Carlo, nonlinear time series, prewhitening, recursive estimation, spectral density.
2010 MSC: 60F05 (Primary), 60F17 (Secondary)

1. Introduction

The time-average variance constant (TAVC) plays an important role in many time series inference problems, such as unit root testing and statistical inference of the mean. Throughout the paper, we focus on a stationary time series {X_i}_{i∈Z} with mean µ = E(X_i) and finite variance. The TAVC typically has the representation σ^2 = Σ_{k∈Z} γ(k), where γ(k) = cov(X_0, X_k) is the covariance function at lag k. The TAVC is proportional to the spectral density function evaluated at the origin, and the estimation of the latter has been extensively studied in the literature. See for instance Stock (1994), Song and Schmeiser (1995), Phillips and Xiao (1998), Politis, Romano and Wolf (1999), Bühlmann (2002), Lahiri (2003), Alexopoulos and Goldsman (2004), Haldrup and Jansson (2005) and Jones et al. (2006).
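As background for the estimation problem above, a minimal (non-recursive) batch-means estimator of σ^2 can be sketched as follows; the function name and the simulation settings are illustrative assumptions of ours, not part of the paper.

```python
import numpy as np

def batch_means_tavc(x, batch_size):
    """Classical batch-means estimate of the TAVC sigma^2 = sum_k gamma(k):
    split x into non-overlapping batches and return
    batch_size * var(batch means).  This is the O(n)-memory baseline that
    recursive schemes aim to replace."""
    x = np.asarray(x, dtype=float)
    m = len(x) // batch_size                       # number of full batches
    means = x[: m * batch_size].reshape(m, batch_size).mean(axis=1)
    return batch_size * means.var(ddof=1)

# sanity check: for iid data the TAVC equals the marginal variance
rng = np.random.default_rng(1)
est = batch_means_tavc(rng.standard_normal(100_000), 100)
```

For dependent data the batch size must grow with n, which is exactly what the sequence a_k = ck^p controls in the recursive algorithms of Section 2.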
In many applications, one has to sequentially update the estimate of σ^2 while the observations are accumulated one by one. For example, in MCMC experiments, while the observations are sequentially
Corresponding address: Department of Mathematical Sciences, Indiana University-Purdue University Indianapolis, 402 N. Blackford, LD 270, Indianapolis, IN 46202, United States.
E-mail addresses: weizheng@iupui.edu (Wei Zheng), yong.jin@warrington.ufl.edu (Yong Jin), gzhang13@gmail.com (Guoyi Zhang)
Preprint submitted to Journal of LaTeX Templates. November 7, 2014

generated, one would like to construct the confidence interval of µ as X̄_n ± z_{α/2} ˆσ_n/√n to decide whether the sample size n is sufficient to make the interval small enough. Here z_{α/2} is the (1 − α/2)th quantile of the standard normal distribution, and X̄_n and ˆσ_n^2 are the sequentially updated estimates of µ and σ^2 based on the first n observations. For traditional methods of estimating σ^2, the computation time and the memory required increase with each update of ˆσ_n^2 as n grows. This is prohibitive, especially when multiple MCMC chains are run simultaneously. To address this issue, Wu (2009b) modified the batch-means approach into a recursive version without sacrificing the convergence rate of O(n^{-1/3}); meanwhile, the memory complexity of each update is reduced from the traditional O(n) to O(1). In this paper, we incorporate the prewhitening idea into Wu (2009b)'s approach in order to achieve a reduced mean squared error (MSE) for the estimation of σ^2. The idea of applying prewhitening (Press and Tukey, 1956) to a time series for TAVC (or spectral density) estimation has a long history. See for instance Andrews and Monahan (1992), Lee and Phillips (1994), Phillips and Sul (2003) and Sul, Phillips and Choi (2005). The benefit of prewhitening is that the resulting residuals are less dependent than the original data, so one can obtain a more accurate estimate of the TAVC of the residuals due to the larger effective degrees of freedom; from the latter, one can easily retrieve the estimate of σ^2. To be specific, suppose the residuals {e_i}_{i∈Z} are generated from {X_i}_{i∈Z} by the mechanism a(L)X_i = b(L)e_i, where L is the lag operator and a(·) and b(·) are polynomials whose roots do not fall in the unit circle. It is well known that σ^2 = (b(1)/a(1))^2 σ_e^2, where σ_e^2 is the TAVC of {e_i}_{i∈Z}. If {X_i}_{i∈Z} follows an ARMA model, {e_i}_{i∈Z} becomes a white noise sequence under the proper choice of a(·) and b(·).
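As a plain, non-recursive illustration of this mechanism, the sketch below fits an AR(1) prefilter to simulated AR(1) data and recovers the TAVC through σ^2 = σ_e^2/(1 − ρ)^2. The function name and simulation settings are our own illustrative choices, not the paper's algorithm.

```python
import numpy as np

def tavc_prewhitened(x):
    """Estimate the TAVC of x via an AR(1) prewhitening filter.

    Fits rho = gamma(1)/gamma(0), forms residuals e_i = x_i - rho*x_{i-1},
    and returns var(e)/(1-rho)^2.  For exactly AR(1) data the residuals
    are (nearly) white, so their TAVC is just their variance."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    rho = np.dot(x[1:], x[:-1]) / np.dot(x, x)   # lag-1 autocorrelation
    e = x[1:] - rho * x[:-1]                     # prewhitened residuals
    return np.var(e) / (1.0 - rho) ** 2

# AR(1) with rho = 0.5, unit innovations: true TAVC = 1/(1-0.5)^2 = 4
rng = np.random.default_rng(0)
n, rho = 200_000, 0.5
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for i in range(1, n):
    x[i] = rho * x[i - 1] + eps[i]
est = tavc_prewhitened(x)
```

For general weakly dependent data one would still apply a batch-style estimator to the residuals rather than their plain variance; that combination is exactly what the recursive algorithms of Section 2 implement.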
In the latter case, the estimation of σ_e^2 can be much more accurate than that for the original sequence. Of course, a(·) and b(·) are unknown even under the ARMA model, not to mention that the original sequence may follow a nonlinear model. However, by plugging in data-driven estimates of a(·) and b(·) according to some model fitting criterion, the transformed data {e_i}_{i∈Z} are typically less dependent than the original data, and hence the advantage of prewhitening still remains. For simplicity of implementation, we adopt the AR(1) prefilter in constructing the recursive estimate of the TAVC. By this small modification, we can achieve substantial improvements in estimation efficiency under some circumstances. Theoretical conditions are given for deciding whether the prewhitening is needed or not. The rest of the paper is organized as follows. Section 2 provides the algorithms for recursive estimation of the TAVC. In Section 3, we derive the asymptotic properties of the proposed estimators. In Section 4, we discuss efficiency comparisons between the proposed method and Wu's method in terms of both theoretical results and simulation studies. Section 5 concludes the paper. All proofs are included in the Appendix.

2. Proposed algorithms

Throughout the paper, we assume µ = 0 without loss of generality. We first propose an algorithm for estimating σ^2 through prewhitening when µ is known, and then generalize it to the unknown-µ case. Here are some frequently used notations: a_k = ck^p; t_i = Σ_{k∈N} a_k 1{a_k ≤ i < a_{k+1}}; ρ = γ(1)/γ(0); ˆγ_n(k) = n^{-1} Σ_{i=k+1}^n X_i X_{i-k}; γ̃_n(k) = n^{-1} Σ_{i=k+1}^n (X_i − X̄_n)(X_{i-k} − X̄_n); ˆρ_n = ˆγ_n(1)/ˆγ_n(0); ρ̃_n = γ̃_n(1)/γ̃_n(0); and v_n = Σ_{i=1}^n l_i, where l_i = i − t_i + 1.

2.1. When µ is known

To estimate σ^2, Wu (2009b) proposed the statistic v_n^{-1} Σ_{i=1}^n (Σ_{j=t_i}^i X_j)^2, which allows a recursive algorithm with memory complexity O(1). One way of improving the accuracy of the estimation is to prewhiten the original sequence so that the transformed sequence has weaker dependence and the corresponding estimation of the TAVC becomes more accurate. Here, we adopt the AR(1) model as the prefilter to produce a new sequence, i.e. e_i = X_i − ρX_{i-1} with ρ = γ(1)/γ(0). Due to the relationship σ^2 = σ_e^2/(1 − ρ)^2, it is reasonable to estimate σ^2 by V_n/(v_n(1 − ρ)^2) when ρ is known, where V_n = Σ_{i=1}^n W_i^2 and W_i = Σ_{j=t_i}^i (X_j − ρX_{j-1}). When ρ is unknown, one only needs to plug in the estimate of ρ, and hence σ^2 can be estimated by

ˆσ_n^2 = ˆV_n/(v_n(1 − ˆρ_n)^2),   (1)

where ˆV_n = Σ_{i=1}^n Ŵ_{i,n}^2 and Ŵ_{i,n} = Σ_{j=t_i}^i (X_j − ˆρ_n X_{j-1}). It can be shown that

ˆV_n = Σ_{i=1}^n (Σ_{j=t_i}^i X_j)^2 + ˆρ_n^2 Σ_{i=1}^n (Σ_{j=t_i}^i X_{j-1})^2 − 2ˆρ_n Σ_{i=1}^n (Σ_{j=t_i}^i X_j)(Σ_{j=t_i}^i X_{j-1}).

To simplify notations, let S_{n,0} = Σ_{i=1}^n X_i^2, S_{n,1} = Σ_{i=2}^n X_i X_{i-1}, W_{i,0} = Σ_{j=t_i}^i X_j, W_{i,1} = Σ_{j=t_i}^i X_{j-1}, V_{n,0} = Σ_{i=1}^n W_{i,0}^2, V_{n,1} = Σ_{i=1}^n W_{i,1}^2, and V_{n,2} = Σ_{i=1}^n W_{i,0}W_{i,1}. As a result, ˆρ_n = S_{n,1}/S_{n,0} and ˆV_n = V_{n,0} + ˆρ_n^2 V_{n,1} − 2ˆρ_n V_{n,2}. Now ˆσ_n^2 can be calculated recursively by the following algorithm.

Algorithm 1. At stage n, we store the vector (k_n, v_n, X_n, S_{n,0}, S_{n,1}, ˆρ_n, W_{n,0}, W_{n,1}, V_{n,0}, V_{n,1}, V_{n,2}). Note that t_n = a_{k_n}. When n = 1, the vector is (1, 1, X_1, X_1^2, 0, 0, X_1, 0, X_1^2, 0, 0).
At stage n + 1, we update the vector as follows.
1. If n + 1 = a_{1+k_n}, let W_{n+1,0} = X_{n+1}, W_{n+1,1} = X_n and k_{n+1} = 1 + k_n; otherwise, let W_{n+1,0} = W_{n,0} + X_{n+1}, W_{n+1,1} = W_{n,1} + X_n and k_{n+1} = k_n.
2. Let S_{n+1,0} = S_{n,0} + X_{n+1}^2, S_{n+1,1} = S_{n,1} + X_{n+1}X_n, V_{n+1,0} = V_{n,0} + W_{n+1,0}^2, V_{n+1,1} = V_{n,1} + W_{n+1,1}^2, V_{n+1,2} = V_{n,2} + W_{n+1,0}W_{n+1,1}, and v_{n+1} = v_n + (n + 2 − a_{k_{n+1}}).

3. Let ˆρ_{n+1} = S_{n+1,1}/S_{n+1,0}.
4. Calculate ˆV_{n+1} = V_{n+1,0} + ˆρ_{n+1}^2 V_{n+1,1} − 2ˆρ_{n+1} V_{n+1,2}.
Output: ˆσ_{n+1}^2 = ˆV_{n+1}/(v_{n+1}(1 − ˆρ_{n+1})^2).

2.2. When µ is unknown

In this case, we have to center the data by the sample average and then apply Algorithm 1 to the centered data. Hence one can estimate σ^2 by

σ̃_n^2 = Ṽ_n/(v_n(1 − ρ̃_n)^2),   (2)

where

Ṽ_n = Σ_{i=1}^n W̃_{i,n}^2,  W̃_{i,n} = Σ_{j=t_i}^i (X_j − X̄_n − ρ̃_n(X_{j-1} − X̄_n)).   (3)

Equation (3) can be rewritten as

Ṽ_n = Ṽ*_n + (1 − ρ̃_n)^2 X̄_n^2 d_n − 2(1 − ρ̃_n) X̄_n U_n,   (4)

where ρ̃_n = Σ_{i=2}^n (X_i − X̄_n)(X_{i-1} − X̄_n)/Σ_{i=1}^n (X_i − X̄_n)^2, Ṽ*_n = Σ_{i=1}^n (Σ_{j=t_i}^i (X_j − ρ̃_n X_{j-1}))^2, d_n = Σ_{i=1}^n l_i^2, and U_n = Σ_{i=1}^n l_i Σ_{j=t_i}^i (X_j − ρ̃_n X_{j-1}). Note that Ṽ*_n is similar to ˆV_n except that the ˆρ_n therein is replaced by ρ̃_n. Also note that X̄_{n+1} = (nX̄_n + X_{n+1})/(n + 1) and

ρ̃_n = (Σ_{i=2}^n X_i X_{i-1} + X̄_n(X_1 + X_n) − (n + 1)X̄_n^2)/(Σ_{i=1}^n X_i^2 − nX̄_n^2).

We can update σ̃_n^2 recursively by the following algorithm.

Algorithm 2. At stage n, we store the vector (k_n, v_n, d_n, X_1, X_n, X̄_n, S_{n,0}, S_{n,1}, ρ̃_n, W_{n,0}, W_{n,1}, V_{n,0}, V_{n,1}, V_{n,2}, U_n). Note that t_n = a_{k_n}. When n = 1, the vector is (1, 1, 1, X_1, X_1, X_1, X_1^2, 0, 0, X_1, 0, X_1^2, 0, 0, X_1).

At stage n + 1, we update the vector as follows.
1. If n + 1 = a_{1+k_n}, let W_{n+1,0} = X_{n+1}, W_{n+1,1} = X_n and k_{n+1} = 1 + k_n; otherwise, let W_{n+1,0} = W_{n,0} + X_{n+1}, W_{n+1,1} = W_{n,1} + X_n and k_{n+1} = k_n.
2. Let S_{n+1,0} = S_{n,0} + X_{n+1}^2, S_{n+1,1} = S_{n,1} + X_{n+1}X_n, and X̄_{n+1} = (nX̄_n + X_{n+1})/(n + 1).
3. Let ρ̃_{n+1} = (S_{n+1,1} + X̄_{n+1}(X_1 + X_{n+1}) − (n + 2)X̄_{n+1}^2)/(S_{n+1,0} − (n + 1)X̄_{n+1}^2).
4. Let V_{n+1,0} = V_{n,0} + W_{n+1,0}^2, V_{n+1,1} = V_{n,1} + W_{n+1,1}^2, V_{n+1,2} = V_{n,2} + W_{n+1,0}W_{n+1,1}, v_{n+1} = v_n + (n + 2 − a_{k_{n+1}}), and d_{n+1} = d_n + (n + 2 − a_{k_{n+1}})^2.
5. Let U_{n+1} = U_n + (n + 2 − a_{k_{n+1}})(W_{n+1,0} − ρ̃_{n+1}W_{n+1,1}) and calculate Ṽ*_{n+1} = V_{n+1,0} + ρ̃_{n+1}^2 V_{n+1,1} − 2ρ̃_{n+1} V_{n+1,2}.
6. Calculate Ṽ_{n+1} = Ṽ*_{n+1} + (1 − ρ̃_{n+1})^2 X̄_{n+1}^2 d_{n+1} − 2(1 − ρ̃_{n+1}) X̄_{n+1} U_{n+1}.
Output: σ̃_{n+1}^2 = Ṽ_{n+1}/(v_{n+1}(1 − ρ̃_{n+1})^2).
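A minimal Python sketch of Algorithm 1 (mean known and taken as 0) may clarify the bookkeeping. The class name, the choice c = 1, p = 3/2, and the rounding a_k = ceil(c k^p) are illustrative assumptions of ours; the paper leaves the integer rounding of a_k implicit, and the sketch requires a_1 = 1 (so c ≤ 1).

```python
import math

class RecursiveTAVC:
    """O(1)-memory recursive TAVC estimator with AR(1) prewhitening
    (sketch of Algorithm 1; the mean is assumed known and equal to 0)."""

    def __init__(self, c=1.0, p=1.5):
        self.c, self.p, self.n = c, p, 0

    def _a(self, k):
        # block boundaries a_k = ceil(c * k^p); rounding is our assumption
        return math.ceil(self.c * k ** self.p)

    def update(self, x):
        self.n += 1
        if self.n == 1:
            self.k = 1                    # k_n: index of the current block
            self.v = 1.0                  # v_n = sum of block lengths l_i
            self.xprev = x
            self.S0, self.S1 = x * x, 0.0
            self.W0, self.W1 = x, 0.0
            self.V0, self.V1, self.V2 = x * x, 0.0, 0.0
            return
        if self.n == self._a(self.k + 1):  # start a new block
            self.W0, self.W1 = x, self.xprev
            self.k += 1
        else:                              # extend the current block
            self.W0 += x
            self.W1 += self.xprev
        self.S0 += x * x
        self.S1 += x * self.xprev
        self.V0 += self.W0 ** 2
        self.V1 += self.W1 ** 2
        self.V2 += self.W0 * self.W1
        self.v += self.n + 1 - self._a(self.k)  # l_n = n - t_n + 1
        self.xprev = x

    def estimate(self):
        rho = self.S1 / self.S0
        V = self.V0 + rho ** 2 * self.V1 - 2 * rho * self.V2
        return V / (self.v * (1.0 - rho) ** 2)

# usage: feed observations one at a time; memory stays O(1)
est = RecursiveTAVC()
for x in [0.5, -1.2, 0.3, 2.0, -0.7, 1.1, 0.4, -0.9]:
    est.update(x)
```

Algorithm 2 adds only the running mean X̄_n and the scalars d_n and U_n to the stored vector, so the same structure carries over.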

3. Asymptotic properties of the estimators

In this section, we study the asymptotic properties of ˆσ_n^2 and σ̃_n^2 as defined by (1) and (2), respectively. Theorem 1 derives the asymptotic properties of V_n, which leads to Theorem 2 and Corollary 1 regarding the asymptotic properties of ˆσ_n^2 and σ̃_n^2. Define a ∧ b = min(a, b). A random variable ξ is said to be in L^p (p > 0) if ||ξ||_p := [E(|ξ|^p)]^{1/p} < ∞, and write ||ξ|| = ||ξ||_2. For two sequences {a_n} and {b_n}, write a_n ~ b_n if lim_{n→∞} a_n/b_n = 1; write a_n = o(b_n) if lim_{n→∞} a_n/b_n = 0; write a_n = O(b_n) if lim sup_{n→∞} |a_n/b_n| < ∞; write a_n = o_{a.s.}(b_n) if lim_{n→∞} a_n/b_n = 0 almost surely; and write a_n ≍ b_n if there exists a constant c > 0 such that 1/c ≤ a_n/b_n ≤ c for all large n. Throughout the paper, we assume that {X_i} is a stationary causal process of the form

X_i = g(F_i),  F_i = (..., ε_{i-1}, ε_i),

where {ε_i} are iid. Let X'_i = g(F'_i), where F'_i = (F_{-1}, ε'_0, ε_1, ..., ε_i) and ε'_0 is an iid copy of the ε_i's. The quantity δ_α(i) = ||X_i − X'_i||_α is called the physical dependence measure by Wu (2005). Recall e_i = X_i − ρX_{i-1} and define e'_i = X'_i − ρX'_{i-1}. Let θ_α(i) := ||e_i − e'_i||_α ≤ δ_α(i) + |ρ|δ_α(i − 1). It can be shown that the following two conditions are equivalent:

Σ_{i=0}^∞ δ_α(i) < ∞,   (5)
Σ_{i=0}^∞ θ_α(i) < ∞,   (6)

in view of X_i = Σ_{k≥0} ρ^k e_{i-k}. In particular, for the AR(1) model we have θ_α(i) = ||ε_0 − ε'_0||_α 1{i = 0}. Since e_i is adapted to F_i and E(e_i) = 0, under the mild condition

Σ_{i=0}^∞ ||P_0 e_i|| < ∞, where P_i· = E(·|F_i) − E(·|F_{i-1}),   (7)

we have the functional central limit theorem

{n^{-1/2} Σ_{i≤nt} e_i, 0 ≤ t ≤ 1} ⇒ {σ_e IB(t), 0 ≤ t ≤ 1},

where IB is the standard Brownian motion and σ_e^2 = ||Σ_{i=0}^∞ P_0 e_i||^2 = Σ_{k∈Z} E(e_0 e_k) = (1 − ρ)^2 σ^2. Note that (7) is weaker than (5). Thus σ_e^2 is the TAVC of the time series {e_i}, and Wu (2009b) proposed V_n/v_n to estimate σ_e^2. The properties of V_n are given in the following theorem, whose proof follows the same way as in Wu (2009b).

Theorem 1. Let a_k = ck^p, k ≥ 1, where c > 0 and p > 1 are constants.
(i) Assume that E(X_i) = 0, X_i ∈ L^α and (6) holds for some α ∈ (2, 4]. Then

||V_n − E(V_n)||_{α/2} = O(n^τ),  ||max_{n≤N} |V_n − E(V_n)|||_{α/2} = O(N^τ log N),

where τ = 2/α + 3/2 − 3/(2p).
(ii) Assume that E(X_i) = 0, X_i ∈ L^α and (6) holds for some α > 4. Then

||V_n − E(V_n)||_{α/2} = O(n^{2−3/(2p)}),  ||max_{n≤N} |V_n − E(V_n)|||_{α/2} = O(N^{2−3/(2p)}).

(iii) Assume that E(X_i) = 0 and, for some α > 4,

Σ_{i=0}^∞ ||P_0 e_i||_α < ∞.   (8)

Then

lim_{n→∞} ||V_n − E(V_n)||/n^{2−3/(2p)} = σ_e^2 p^2 c^{3/(2p)}/√(12p − 9).

(iv) Assume that E(X_i) = 0 and, for some q ∈ (0, 1], Σ_{k=0}^∞ ||P_0 X_k|| < ∞ and either (a)

Σ_{k=0}^∞ k^q ||P_0 e_k|| < ∞   (9)

or (b)

Σ_{k=0}^∞ k^q ||P_0(e_k − ρe_{k+1})|| < ∞   (10)

is satisfied. Then

E(V_n) − v_n(1 − ρ)^2σ^2 = O(n^{1+(1−q)(1−1/p)}).

Lemma 1 below indicates that the magnitudes of ˆV_n − V_n and Ṽ_n − V_n are both negligible compared to that of V_n − v_n(1 − ρ)^2σ^2. Hence the almost sure bounds of ˆσ_n^2 − σ^2 and σ̃_n^2 − σ^2 can be derived from Theorem 1.

Lemma 1. Assume p > 1, E(X_i) = 0, X_i ∈ L^α, and (5) holds with α ≥ 4.
(i) For 1 < β < α, we have

||max_{k≤n} |ˆV_k − V_k|||_{β/4} = O(n^{3/2−1/p+(1−1/p)(2/β−1)}).

(ii) If 4/3 ≤ β < α and the conditions of Theorem 1(iv) hold, we have

||ˆV_n/v_n − (1 − ρ)^2σ^2||_{β/4} = O(n^{−φ(p)}),   (11)

where φ(p) = (2p)^{−1} ∧ q(1 − 1/p).

As will be shown later, (11) plays the critical role in determining the deviation of ˆσ_n^2 and σ̃_n^2 from σ^2. It can be shown easily that φ(p) reaches its maximum q/(2q + 1) at p = 1 + (2q)^{−1}. By imposing the latter condition, we derive the following results regarding the asymptotic properties of ˆσ_n^2 and σ̃_n^2.

Theorem 2. Suppose E(X_i) = 0, (9) holds with q ∈ (0, 1], and p = 1 + (2q)^{−1}.
(i) Assume X_i ∈ L^α and (5) holds with α > 4. Then

ˆσ_n^2 − σ^2 = o_{a.s.}(n^{−q/(2q+1)} log n).   (12)

(ii) Assume X_i ∈ L^4 and (5) holds with α = 4. Then

ˆσ_n^2 − σ^2 = o_{a.s.}(n^{−q/(2q+1)} log^{2+ɛ} n)   (13)

for any small ɛ > 0.

Corollary 1. Suppose E(X_i) = 0, (9) holds with q ∈ (0, 1], and p = 1 + (2q)^{−1}.
(i) Assume X_i ∈ L^α and (5) holds with α > 4. Then

σ̃_n^2 − σ^2 = o_{a.s.}(n^{−q/(2q+1)} log n).   (14)

(ii) Assume X_i ∈ L^4 and (5) holds with α = 4. Then

σ̃_n^2 − σ^2 = o_{a.s.}(n^{−q/(2q+1)} log^{2+ɛ} n)   (15)

for any small ɛ > 0.
(iii) Assume X_i ∈ L^α and (5) holds with α ≥ 4. Then

||Ṽ_n/v_n − (1 − ρ)^2σ^2||_{β/4} = O(n^{−q/(2q+1)})   (16)

when 4/3 ≤ β < α.

4. The comparison of efficiencies

Assume that (8) holds with α > 8 and (9) holds with q = 1. Theorem 2 then suggests the optimal p = 3/2 in the sequence a_k = ck^p. Let S_n = Σ_{i=1}^n e_i, η = Σ_{k=1}^∞ kγ(k), and η_e = Σ_{k=1}^∞ kE(e_0 e_k). As l → ∞, we have lσ_e^2 − E(S_l^2) = 2η_e + o(1). Following the proof of Lemma 1 and the arguments in Section 4 of Wu (2009b), we have

||Ṽ_n/v_n − σ_e^2||^2 ~ ||V_n/v_n − σ_e^2||^2 ~ (16σ_e^4 c^{2/3}/9 + 256η_e^2/(81c^{4/3})) n^{−2/3}.

8 In minimizing the MSE, we choose c = 4 η e /(3σ e) so that By direct calculations we have Ṽn/v n σe 14/3 η/3 35/3 e σe 8/3 n /3. (17) σ σ = n (1 ρ n ), where n = Ṽn/v n σ e + ( ρ ρ n )( ρ n ρ)σ. By Proposition and (17) we have Let ˇσ be the Wu s estimator of σ. n 14/3 η/3 35/3 e σe 8/3 n /3. (18) Theorem 3. Suppose E(X i ) = 0, (9) holds with q (0, 1], and p = 1 + 1/q. (i) Assume X i L α, and (5) holds with α > 40. for 40 β < α. (ii) Assume X i L α, and (5) holds with α > 40. We have σ σ β/0 n β/0 (1 ρ) = o(n 1/3 ) (19) σ σ ˇσ σ ( 1 γ(1) ) /3 (1 ρ) := λ. (0) η To estimate c, we can refer to Algorithm 3 in Wu (009b) except that ˆγ(k) therein should be replaced by an estimate of E(e 0 e k ) = (1 + ρ )γ(k) ρ(γ(k + 1) + γ(k 1)). Therefore, we could substitute ρ and γ(k) by their estimates based on a small portion of the initial data. Since λ < 1 if and only if γ(1)/η > 0, the proposed algorithm here would outperform Wu s algorithm whenever γ(1) 0 and kγ(k)/γ(1) > 1. k 4.1. ARMA model Consider ARMA( p, q) model a(l)x i = b(l)ε i, (1) 160 where a( ) and b( ) are polynomials of orders p and q respectively. Suppose a(z) = p 0 (1 r iz) mi with p0 m i = p and r i < 1. In the sequel, we will discuss the calculation of λ for the case of p > q. No extra difficulty is needed to deal with the case of p q. From Brockwell and Davis (1991), the autocovariance function has the representation of p 0 γ(k) = m i 1 j=0 d ij k j r k i, k 0, () 8

Figure 1: λ for the ARMA model with a(z) = (1 − 0.8z)( z)(1 − 0.0z) and b(z) = z.

where the d_{ij} can be found by solving the system of linear equations (22) for k = 0, 1, ..., p̃ − 1. See Brockwell and Davis (1991) or Karanasos (1998) for details. Define Ξ_i(r) = Σ_{k=1}^∞ k^i r^k. Then Ξ_0(r) = r/(1 − r), and Ξ_i(r), i ≥ 1, can be derived recursively by the relation Ξ_{i+1}(r) = rΞ'_i(r), i ≥ 0. For example, we have Ξ_1(r) = r/(1 − r)^2 and Ξ_2(r) = (r^2 + r)/(1 − r)^3. For Model (21) we have

γ(1)/((1 − ρ)^2η) = Σ_{i=1}^{p_0} Σ_{j=0}^{m_i−1} d_{ij} r_i / ((1 − ρ)^2 Σ_{i=1}^{p_0} Σ_{j=0}^{m_i−1} d_{ij} Ξ_{j+1}(r_i))  and  ρ = Σ_{i=1}^{p_0} Σ_{j=0}^{m_i−1} d_{ij} r_i / Σ_{i=1}^{p_0} d_{i0}.

If all the r_i's are distinct, these two equations reduce to

γ(1)/((1 − ρ)^2η) = Σ_{i=1}^{p̃} d_i r_i / ((1 − ρ)^2 Σ_{i=1}^{p̃} d_i r_i/(1 − r_i)^2)  and  ρ = Σ_{i=1}^{p̃} d_i r_i / Σ_{i=1}^{p̃} d_i,   (23)

with the convention d_i = d_{i0}. Simulation studies are carried out for three different examples of Model (21), and the results are displayed in Figures 1-3. In all three examples we adopt σ_ε^2 = 1, and the value of λ is calculated by (23). The black line is associated with Wu's estimator and the red line with the proposed estimator. Figures 1 and 2 verify that when λ < 1 the proposed method outperforms Wu's; the main reason for the smaller MSE is the reduced bias due to the prewhitening. On the other hand, Figure 3 shows that when λ > 1 the prewhitening is not recommended.
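The closed forms of Ξ_i(r) quoted above can be verified numerically against the truncated series; the sketch below (function name and truncation level are ours) does exactly that.

```python
def xi(i, r, K=400):
    """Xi_i(r) = sum_{k>=1} k^i * r^k, truncated at K terms (needs |r| < 1)."""
    return sum(k ** i * r ** k for k in range(1, K))

r = 0.5
# closed forms: Xi_0 = r/(1-r), Xi_1 = r/(1-r)^2, Xi_2 = (r^2+r)/(1-r)^3
closed = [r / (1 - r), r / (1 - r) ** 2, (r ** 2 + r) / (1 - r) ** 3]
```

Each Ξ_{i+1} is obtained from Ξ_i by differentiating in r and multiplying by r, which is how the d_{ij}-weighted sums defining η are evaluated in practice.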

Figure 2: λ ≈ 0.20 for the ARMA model with a(z) = (1 + 0.9z)(1 + 0.8z)(1 + 0.5z)(1 − 0.2z) and b(z) = 1 + 0.9z + 0.8z^2 + 0.8z^3.

Figure 3: λ ≈ 2.9947 for the ARMA model with a(z) = (1 − 0.8z)( z)(1 − 0.2z) and b(z) = z + 0.8z^2.
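As a quick numerical sanity check of the efficiency constant λ from Section 4, the sketch below (helper name, guard, and truncation level are our own) evaluates λ for an AR(1) process, where γ(k) = γ(0)ρ^k gives γ(1)/((1 − ρ)^2 η) = 1 and hence λ = 0: the AR(1) prefilter removes the bias constant entirely.

```python
def lambda_constant(gamma, rho, K=5000):
    """lambda = (1 - gamma(1)/((1-rho)^2 * eta))^(2/3),
    where eta = sum_{k>=1} k*gamma(k), truncated at K terms."""
    eta = sum(k * gamma(k) for k in range(1, K))
    arg = 1.0 - gamma(1) / ((1.0 - rho) ** 2 * eta)
    # clip a possible tiny negative rounding error before the fractional power
    return max(arg, 0.0) ** (2.0 / 3.0)

# AR(1) with unit innovation variance: gamma(k) = rho^k / (1 - rho^2)
rho = 0.6
lam_ar1 = lambda_constant(lambda k: rho ** k / (1 - rho ** 2), rho)
```

For models that are not AR(1), supplying the appropriate autocovariance function yields the λ values used to decide whether prewhitening pays off.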

4.2. AR(1) model

In an AR(1) model, the autoregressive coefficient equals ρ = γ(1)/γ(0), and the model can be written as

X_i − ρX_{i−1} = ε_i, where ε_i ~ WN(0, σ_ε^2).   (24)

Since e_i = ε_i and σ_e^2 = (1 − ρ)^2σ^2, we have E(V_n) − v_n(1 − ρ)^2σ^2 = 0. Hence (11) holds with φ(p) = (2p)^{−1}, which implies that p should be chosen as small as possible. Since X_i = Σ_{j=0}^∞ ρ^jε_{i−j} under the model, the condition ε_i ∈ L^α, i ∈ Z, implies that X_i ∈ L^α and (5) holds. We now investigate the case p = 1 for Model (24).

Proposition 1. Suppose the stationary time series {X_i} is from Model (24). Let p = 1 and c ∈ N in the sequence a_k = ck.
(i) Assume ε_0 ∈ L^α with α ≥ 4. Then ||V_n − E(V_n)||_{α/2} = O(√n).
(ii) Assume ε_0 ∈ L^4, and let κ = ||ε_0||_4^4/σ_ε^4. Then

lim_{n→∞} ||V_n − E(V_n)||^2/n = φ_{c,κ}σ_ε^4, where φ_{c,κ} = (c + 1)(2c + 1)(κ − 1)/6 + c(c^2 − 1)/3.

(iii) Assume ε_0 ∈ L^α for some α > 4. For 4 ≤ β < α, we have

lim sup_{n→∞} n^{−1/2}||Ṽ*_n − V_n||_{β/4} ≤ 2σ_ρ||G||_{β/2} c^{−1} Σ_{i=1}^c ||S_i||_β ||R_i||_β,

where S_n = Σ_{i=1}^n X_i and R_n = Σ_{i=1}^n ε_i.
(iv) Assume ε_0 ∈ L^α for some α ≥ 2. Then for 0 < β < α we have

lim_{n→∞} ||√n(2 − ρ − ρ̃_n)(ρ̃_n − ρ)||_{β/4} = (2 − 2ρ)σ_ρ||G||_{β/4}.

By Proposition 1 and the fact v_n ~ n(c + 1)/2, we have ||Δ_n||_α = O(n^{−1/2}) for α ≥ 2 and

lim sup_{n→∞} √n||Δ_n|| ≤ (2/(c + 1))√(φ_{c,κ}) σ_ε^2 + 4·3^{1/4}σ_ρ (c(c + 1))^{−1} Σ_{i=1}^c ||S_i||_8 ||R_i||_8 + 2(1 − ρ)σ_ρσ^2.

Now let C_p = 18p^{3/2}(p − 1)^{−1/2}. By Burkholder's inequality and stationarity, we have ||R_i||_8 ≤ C_8√i||ε_0||_8, and by Theorem 1 of Wu (2007), ||S_i||_8 ≤ C_8√i(1 − |ρ|)^{−1}||ε_0||_8. Incorporating these bounds, we have Σ_{i=1}^c ||S_i||_8||R_i||_8 ≤ (1 − ρ^2)^{−1}||ε_0||_8^2 c(c + 1)C_8^2 B_c,

where B_c = Σ_{i=1}^c i·min(c, i)/(c(c + 1)). Note that B_c reaches its minimum 1/2 at two points, 1 and 2; since the latter is not applicable, we have B_c ≥ B_1 = 1/2 for c ∈ N. Note also that φ_{c,κ}/(c + 1)^2 is an increasing function of c, so it is reasonable to adopt c = 1, i.e. a_k = k, when implementing Algorithm 2, for which we have

lim sup_{n→∞} √n||Δ_n|| ≤ √(κ − 1) σ_ε^2 + 2·3^{1/4}σ_ρ(1 − ρ^2)^{−1}||ε_0||_8^2 + 2σ_ρ(1 − ρ)^{−1}σ_ε^2.

By the same argument as for (19), when ε_0 ∈ L^α with α > 40, we have ||σ̃_n^2 − σ^2|| = O(n^{−1/2}). This bound is better than the O(n^{−1/3}) obtained from the sequence a_k = ck^{3/2}. See Lee and Phillips (1994) for a similar result in the context of non-recursive estimation. To implement the algorithm in Wu (2009b), we adopt a_k = ck^{3/2}, where c is computed as c = 4√2ρ/(3(1 − ρ^2)), since η = ||ε_0||^2ρ/((1 − ρ)^2(1 − ρ^2)) and σ^2 = ||ε_0||^2/(1 − ρ)^2. Hence, if AR(1) is the true model and we adopt c = p = 1 for the prewhitening method, we have ||σ̃_n^2 − σ^2||/||σ̌_n^2 − σ^2|| = O(n^{−1/6}). The simulation study is carried out for three different AR(1) models, namely ρ = 0.9, −0.1 and −0.9. As shown by Figures 4-6, our method (red) performs significantly better than Wu's method (black) for the AR(1) model.

4.3. Nonlinear models

We start with a GARCH model: X_t = σ_tε_t, where σ_t^2 = α_0 + Σ_{i=1}^{p} α_i X_{t−i}^2 + Σ_{j=1}^{q} β_j σ_{t−j}^2. Since {X_t} are uncorrelated, there is no need to estimate the TAVC by the batch-means method. Meanwhile, the squared sequence {X_t^2} is of interest in some studies; see Bollerslev (1986), Ding and Granger (1996), and Karanasos (1999). Let

Z_t = X_t^2 − α_0/(1 − E(ε_0^2)Σ_{i=1}^{p} α_i − Σ_{j=1}^{q} β_j), t ∈ Z.

It is well known that {Z_t} follows an ARMA model:

Z_t − Σ_{i=1}^{max(p,q)} (E(ε_0^2)α_i + β_i)Z_{t−i} = u_t − Σ_{j=1}^{q} β_j u_{t−j}, t ∈ Z,   (25)

where {u_t} is a white noise process. Here α_i = 0 for i > p and β_j = 0 for j > q. By (25), the recursive estimation of the TAVC of the squared sequence reduces to that of a regular ARMA model as in Section 4.1.
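For the GARCH(1,1) case, the ARMA representation of the squared process described above has AR coefficient φ = E(ε_0^2)α_1 + β_1 and MA coefficient θ = −β_1, so the TAVC of {Z_t} is (b(1)/a(1))^2σ_u^2 = ((1 − β_1)/(1 − φ))^2σ_u^2. The sketch below (names and truncation level are ours) confirms this numerically by summing the standard ARMA(1,1) autocovariances.

```python
def arma11_tavc_by_series(phi, theta, var_u, K=4000):
    """TAVC of an ARMA(1,1): gamma(0) + 2*sum_{k>=1} gamma(k), using the
    standard formulas gamma(0) = var_u*(1+theta^2+2*phi*theta)/(1-phi^2),
    gamma(1) = phi*gamma(0) + theta*var_u, gamma(k) = phi*gamma(k-1), k >= 2."""
    g0 = var_u * (1 + theta ** 2 + 2 * phi * theta) / (1 - phi ** 2)
    total, gk = g0, phi * g0 + theta * var_u
    for _ in range(1, K):
        total += 2 * gk
        gk *= phi
    return total

# hypothetical GARCH(1,1) with E(eps^2) = 1, alpha_1 = 0.05, beta_1 = 0.8
phi, theta = 0.05 + 0.8, -0.8                   # AR and MA coefficients of Z_t
closed_form = ((1 + theta) / (1 - phi)) ** 2    # (b(1)/a(1))^2 with var_u = 1
```

The agreement of the summed series with the closed form is the identity σ^2 = (b(1)/a(1))^2σ_e^2 from the Introduction, specialized to ARMA(1,1).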

Figure 4: Comparison of the two methods under the model X_i − 0.9X_{i−1} = ε_i.

Figure 5: Comparison of the two methods under the model X_i + 0.1X_{i−1} = ε_i.

Figure 6: Comparison of the two methods under the model X_i + 0.9X_{i−1} = ε_i.

Further, a simulation study is carried out for the following ARMA-GARCH model:

X_t = 0.9X_{t−1} + Z_t + 0.2Z_{t−1},   (26)
Z_t = σ_tε_t,
σ_t^2 = 0.2Z_{t−1}^2 − 0.2Z_{t−2}^2 + 0.3σ_{t−1}^2 − 0.2σ_{t−2}^2.

Figure 7 shows that our method outperforms Wu's method.

5. Conclusions

This paper studies a prewhitened version of Wu's algorithm for estimating the TAVC in a recursive way. Asymptotic properties of the estimator are derived for a general causal stationary process, and simulation studies are carried out for both linear and nonlinear time series models. Here we adopt the simplest prefilter, the AR(1) model, which achieves significant improvements in estimation efficiency whenever γ(1) ≠ 0 and Σ_{k≥1} kγ(k)/γ(1) > 0. In practice, one may run two updating algorithms simultaneously in the initial stage and then choose between them according to the latter condition. The simple form of the AR(1) filter facilitates the construction and implementation of the recursive estimation of the TAVC with memory complexity O(1).

Figure 7: Comparison of the two methods under Model (26).

Appendix

Proof of Theorem 1. Observe that (i), (iii) and (iv)(a) are direct results of Corollary 3, Theorem 2(ii) and Theorem 2(iii) of Wu (2009b), with the X_i therein replaced by e_i. Also, (ii) can be proved by the same argument as (i) by using the inequality ||Σ_i P_{i−k}W_i^2||_{α/2}^2 ≤ C_α Σ_i ||P_{i−k}W_i^2||_{α/2}^2 instead of ||Σ_i P_{i−k}W_i^2||_{α/2}^{α/2} ≤ C_α Σ_i ||P_{i−k}W_i^2||_{α/2}^{α/2} as in Wu (2009b), since α > 4. Now we prove (iv)(b). Since E(Σ_{i=1}^n X_i)^2 = Σ_{|k|<n}(n − |k|)γ(k) and E[(Σ_{i=1}^n X_i)(Σ_{i=1}^n X_{i−1})] = Σ_{|k|<n}(n − |k|)γ(k + 1), we have

EW_i^2 = E[Σ_{j=t_i}^i (X_j − ρX_{j−1})]^2 = Σ_{|k|<l_i} (l_i − |k|)[(1 + ρ^2)γ(k) − 2ργ(k + 1)],

where l_i = i − t_i + 1. For q ∈ (0, 1] we have

l_i(1 − ρ)^2σ^2 − EW_i^2 = Σ_{k∈Z} min(|k|, l_i)[(1 + ρ^2)γ(k) − 2ργ(k + 1)]
= Σ_{k∈Z} min(|k|, l_i)[(1 + ρ^2)γ(k) − ργ(k + 1) − ργ(k − 1)]
= 2Σ_{k=1}^∞ min(k, l_i)E[X_0(e_k − ρe_{k+1})].   (27)

Since

E[X_0(e_k − ρe_{k+1})] = Σ_{i∈Z} E[(P_iX_0)(P_i(e_k − ρe_{k+1}))] ≤ Σ_{i∈Z} ||P_iX_0|| ||P_i(e_k − ρe_{k+1})||,

we have

|l_i(1 − ρ)^2σ^2 − EW_i^2| ≤ 2l_i^{1−q} Σ_{k=1}^∞ k^q Σ_{i∈Z} ||P_iX_0|| ||P_i(e_k − ρe_{k+1})|| = O(l_i^{1−q}).

Hence

|EV_n − v_n(1 − ρ)^2σ^2| ≤ Σ_{i=1}^n |l_i(1 − ρ)^2σ^2 − EW_i^2| = O(n max_{i≤n} l_i^{1−q}) = O(n^{1+(1−q)(1−1/p)}).

Proof of Lemma 1. By the Cauchy-Schwarz inequality, |Σ_{i=2}^n X_iX_{i−1}| ≤ (Σ_{i=2}^n X_i^2)^{1/2}(Σ_{i=1}^{n−1} X_i^2)^{1/2} ≤ Σ_{i=1}^n X_i^2, hence |ˆρ_n| ≤ 1 for any n ∈ N. By Theorem 1(iii) of Wu (2007) and Proposition 2(iii), we have

||Ŵ_{i,n}^2 − W_i^2||_{β/4} ≤ ||Ŵ_{i,n} − W_i||_{β/2} ||Ŵ_{i,n} + W_i||_{β/2}
= ||(ρ − ˆρ_n)Σ_{j=t_i}^i X_{j−1}||_{β/2} ||Σ_{j=t_i}^i (2X_j − (ˆρ_n + ρ)X_{j−1})||_{β/2}
≤ 4||ˆρ_n − ρ||_β ||Σ_{j=t_i}^i X_{j−1}||_β max(||Σ_{j=t_i}^i X_j||_β, ||Σ_{j=t_i}^i X_{j−1}||_β) = O(n^{−1/2} l_i^{2/β}).   (28)

Since max_{k≤n}|ˆV_k − V_k| ≤ Σ_{i=1}^n |Ŵ_{i,n}^2 − W_i^2|, we have

||max_{k≤n} |ˆV_k − V_k|||_{β/4} = O(n^{−1/2} v_n max_{i≤n} l_i^{2/β−1}) = O(n^{3/2−1/p} n^{(1−1/p)(2/β−1)}).

The application of (i) with β = 4/3 gives ||max_{k≤n}|ˆV_k − V_k|||_{β/4} = O(n^{2−3/(2p)}); hence (ii) follows from Theorem 1 in view of v_n ≍ n^{2−1/p}.

Proof of Theorem 2. We prove (ii) first and then prove (i). By Lemma 1 and Theorem 1(i), we have

||max_{n≤2^d} |ˆV_n − EV_n|||_{β/4} = O(2^{d(2−3/(2p))} d)

for any 2 < β < 4. By choosing 4 > β > 4/(1 + ɛ) we have

Σ_{d=1}^∞ ||max_{n≤2^d}|ˆV_n − EV_n|||_{β/4}^{β/4} / (2^{βd(2−3/(2p))/4} d^{(2+ɛ)β/4}) < ∞,   (29)

which by the Borel-Cantelli lemma implies that

ˆV_n − EV_n = o_{a.s.}(n^{2−3/(2p)} log^{2+ɛ} n).   (30)

Since v_n ≍ n^{2−1/p} and p = 1 + (2q)^{−1}, by (30) and Theorem 1(iv) we have

ˆV_n/v_n − (1 − ρ)^2σ^2 = o_{a.s.}(n^{−q/(2q+1)} log^{2+ɛ} n).   (31)

Now we have

ˆσ_n^2 − σ^2 = (ˆV_n/v_n − (1 − ρ)^2σ^2)/(1 − ˆρ_n)^2 + (2 − ρ − ˆρ_n)(ˆρ_n − ρ)σ^2/(1 − ˆρ_n)^2 = o_{a.s.}(n^{−q/(2q+1)} log^{2+ɛ} n)   (32)

in view of (31) and Proposition 3. Now (i) can be proved similarly by choosing β ∈ (4, α), for which the factor log^{2+ɛ} n in (29) and (30) can be replaced by log n.

Proof of Corollary 1. We prove the corollary based on the expression (4). By the same argument as for (30) we have

Ṽ*_n − EV_n = o_{a.s.}(n^{2−3/(2p)} log n)   (33)

under (i) and

Ṽ*_n − EV_n = o_{a.s.}(n^{2−3/(2p)} log^{2+ɛ} n)   (34)

under (ii). Let U*_n = Σ_{i=1}^n l_i Σ_{j=t_i}^i X_j. By the same argument as in Corollary 1 of Wu (2009b) we have, for α ∈ (2, 4] and integers r, m ∈ N, that ||U*_{2^r m} − U*_{2^r(m−1)}||_α = O(2^{r(5/2−2/p)} m^{2/p}). Then by Proposition 1(i) of Wu (2007) we have

||max_{1≤i≤2^d} |U*_i|||_α = O(2^{d(5/2−2/p)}).

Since Σ_{d=1}^∞ ||max_{1≤i≤2^d}|U*_i|||_α^α / (2^{αd(5/2−2/p)} d^{α/2}) < ∞, by the Borel-Cantelli lemma we have U*_n = o_{a.s.}(n^{5/2−2/p} log n). Since Theorem 1 of Wu (2007) implies ||Σ_{i=1}^n X_i||_α = O(√n), by the same argument as in Proposition 3 we have X̄_n = o_{a.s.}(n^{−1/2} log^2 n). Thus X̄_nU*_n = o_{a.s.}(n^{2−2/p} log^3 n) and X̄_n^2 d_n = o_{a.s.}(n^{2−2/p} log^4 n), in view of d_n ≍ n^{3−2/p}. Since |ρ̃_n| ≤ 1, we have

Ṽ_n − Ṽ*_n = o_{a.s.}(n^{2−2/p} log^4 n).   (35)

By the same argument as for (32), (i) and (ii) follow in view of (33), (34), and (35). By Remark 4 of Wu (2009b), we have ||Ṽ_n − Ṽ*_n||_{α/2} = O(n^{2−2/p}). Also, by the same argument as for Lemma 1(ii), we have ||Ṽ*_n/v_n − (1 − ρ)^2σ^2||_{β/4} = O(n^{−q/(2q+1)}) for 4/3 ≤ β < α. Hence (iii) follows.

Proof of Theorem 3. By Hölder's inequality, we have

||Δ_n/(1 − ρ̃_n)^2 − Δ_n/(1 − ρ)^2||_{β/20} = ||Δ_n(2 − ρ − ρ̃_n)(ρ̃_n − ρ)/((1 − ρ̃_n)^2(1 − ρ)^2)||_{β/20}
≤ 4||Δ_n||_{β/4} ||(ρ̃_n − ρ)/((1 − ρ)^2(1 − ρ̃_n)^2)||_{β/16}
= ||(ρ̃_n − ρ)/(1 − ρ̃_n)^2||_{β/16} O(n^{−1/3}).   (36)

By Lemma 3 of Kan and Wang (2010), we have ρ̃_n ≤ cos(π/(n + 1)); hence (1 − ρ̃_n)^{−2} = O(n^4). By Schwarz's inequality, we have |ρ̃_n| ≤ 1. Choose 0 < c_0 < 1 − ρ and β < β_0 < α. We have

||(ρ̃_n − ρ)/(1 − ρ̃_n)^2||_{β/16} ≤ ||(ρ̃_n − ρ)(1 − ρ̃_n)^{−2} 1{|ρ̃_n − ρ| > c_0}||_{β/16} + ||(ρ̃_n − ρ)(1 − ρ̃_n)^{−2} 1{|ρ̃_n − ρ| ≤ c_0}||_{β/16}
≤ P(|ρ̃_n − ρ| > c_0)^{16/β} O(n^4) + ||ρ̃_n − ρ||_{β/16} O(1)
= [||√n(ρ̃_n − ρ)||_{β_0/2}^{β_0/2} n^{−β_0/4}]^{16/β} O(n^4) + O(n^{−1/2})
= O(n^{4(1−β_0/β)}) + O(n^{−1/2}) = o(1).   (37)

Then (i) follows from (36) and (37). By direct calculation, we have η_e = (1 − ρ)^2η − γ(1). Then a direct application of (18), (19) with β = 40 here, and the corresponding result of Wu (2009b) gives

||σ̃_n^2 − σ^2||^2/||σ̌_n^2 − σ^2||^2 → η_e^{2/3}σ_e^{8/3}/((1 − ρ)^4η^{2/3}σ^{8/3}) = (η_e/((1 − ρ)^2η))^{2/3} = λ.   (38)

Proof of Proposition 1. Let m = ⌊n/c⌋ in the proof. For (i), let H_k = Σ_{i=(k−1)c+1}^{kc}(W_i^2 − EW_i^2); then V_{mc} − EV_{mc} = Σ_{k=1}^m H_k, the H_k are iid, and H_k ∈ L^{α/2}. By Burkholder's inequality, we have ||V_{mc} − EV_{mc}||_{α/2} ≤ C_{α/2}√m||H_1||_{α/2}, where C_p = 18p^{3/2}(p − 1)^{−1/2}. Then (i) follows in view of ||V_n − EV_n − (V_{mc} − EV_{mc})||_{α/2} = O(1). For (ii) we have

Σ_{i=1}^n (R_i^2 − ER_i^2) = Σ_{i=1}^n (n + 1 − i)(ε_i^2 − Eε_i^2) + 2Σ_{1≤i<j≤n}(n + 1 − j)ε_iε_j,

so that

||Σ_{i=1}^n (R_i^2 − ER_i^2)||^2 = Σ_{i=1}^n (n + 1 − i)^2||ε_0^2 − Eε_0^2||^2 + 4Σ_{1≤i<j≤n}(n + 1 − j)^2σ_ε^4
= (κ − 1)σ_ε^4 n(n + 1)(2n + 1)/6 + n^2(n^2 − 1)σ_ε^4/3 = nφ_{n,κ}σ_ε^4.

Since W_i =_d R_{l_i}, by independence we have ||V_n − EV_n||^2 ~ mcφ_{c,κ}σ_ε^4 ~ nφ_{c,κ}σ_ε^4.

To prove (iii), we adopt the notation in (4) and have

Ṽ*_n − V_n = −2(ρ̃_n − ρ)Σ_{i=1}^n (Σ_{j=t_i}^i X_{j−1})(Σ_{j=t_i}^i e_j) + (ρ̃_n − ρ)^2 Σ_{i=1}^n (Σ_{j=t_i}^i X_{j−1})^2.   (39)

We deal with the two parts on the right-hand side of (39) separately. First,

||2(ρ̃_n − ρ)Σ_{i=1}^n (Σ_{j=t_i}^i X_{j−1})(Σ_{j=t_i}^i e_j)||_{β/4} ≤ 2||ρ̃_n − ρ||_{β/2} Σ_{i=1}^n ||Σ_{j=t_i}^i X_{j−1}||_β ||Σ_{j=t_i}^i e_j||_β.

Hence we have

lim sup_{n→∞} n^{−1/2} ||2(ρ̃_n − ρ)Σ_{i=1}^n (Σ_{j=t_i}^i X_{j−1})(Σ_{j=t_i}^i e_j)||_{β/4} ≤ 2σ_ρ||G||_{β/2} c^{−1} Σ_{i=1}^c ||S_i||_β ||R_i||_β.   (40)

Further, an application of Hölder's inequality together with |ρ̃_n| ≤ 1 yields

||(ρ̃_n − ρ)^2 Σ_{i=1}^n (Σ_{j=t_i}^i X_{j−1})^2||_{β/4} = o(√n).   (41)

Then (iii) follows from (39), (40), and (41). For (iv), we have

(2 − ρ − ρ̃_n)(ρ̃_n − ρ) = (2 − 2ρ)(ρ̃_n − ρ) − (ρ̃_n − ρ)^2.

Since ||(ρ̃_n − ρ)^2||_{β/4} = O(n^{−1}) and lim_{n→∞} √n||ρ̃_n − ρ||_{β/4} = σ_ρ||G||_{β/4}, (iv) follows.

Proposition 2. (i) Assume E(X_i) = 0, X_i ∈ L^α and (5) holds with α > 2. For 0 < β < α, the following sequences are all uniformly integrable:

{|n^{1/2}(ˆγ_n(k) − γ(k))|^{β/2}, n ≥ 1},  {|n^{1/2}(ˆρ_n(k) − ρ(k))|^{β/2}, n ≥ 1},
{|n^{1/2}(γ̃_n(k) − γ(k))|^{β/2}, n ≥ 1},  {|n^{1/2}(ρ̃_n(k) − ρ(k))|^{β/2}, n ≥ 1}.

(ii) Assume E(X_i) = 0, X_i ∈ L^4 and (5) holds with α = 4. Then

√n(ˆρ_n − ρ) →_d N(0, σ_ρ^2) and √n(ρ̃_n − ρ) →_d N(0, σ_ρ^2),

where σ_ρ^2 = ||Σ_{i=0}^∞ P_0(X_i ẽ_i)||^2/γ(0)^2 and ẽ_i = X_{i+1} − ρX_i.
(iii) Assume E(X_i) = 0, X_i ∈ L^α and (5) holds with α ≥ 4. Then for 0 < β < α,

||√n(ˆρ_n − ρ)||_{β/2} → σ_ρ||G||_{β/2} and ||√n(ρ̃_n − ρ)||_{β/2} → σ_ρ||G||_{β/2},

where σ_ρ is as defined in (ii) and G stands for a standard normal random variable.

Proof: First of all, we have

||P_0(X_iX_{i+k})||_{α/2} ≤ ||X_iX_{i+k} − X'_iX'_{i+k}||_{α/2} ≤ ||X_i||_α ||X_{i+k} − X'_{i+k}||_α + ||X'_{i+k}||_α ||X_i − X'_i||_α = O(δ_α(i)).   (42)

Since X_iX_{i+k} − γ(k) = Σ_{l∈Z} P_l(X_iX_{i+k}), we have by Burkholder's inequality that

||Σ_{i=1}^n (X_iX_{i+k} − γ(k))||_{α/2} ≤ C_α (Σ_{l∈Z} ||Σ_{i=1}^n P_l(X_iX_{i+k})||_{α/2}^2)^{1/2} = (Σ_{l∈Z} (Σ_{i=1}^n O(δ_α(i − l)))^2)^{1/2} = O(√n).   (43)

Hence

||n^{1/2}(ˆγ_n(k) − γ(k))||_{α/2} ≤ n^{−1/2}||Σ_{i=k+1}^n (X_iX_{i−k} − γ(k))||_{α/2} + n^{−1/2}k|γ(k)| = O(1),   (44)

so {|n^{1/2}(ˆγ_n(k) − γ(k))|^{β/2}, n ≥ 1} is uniformly integrable. Note that

||nγ̃_n(k) − nˆγ_n(k)||_{α/2} = O(1)   (45)

in view of ||X̄_n||_α = O(n^{−1/2}) by Theorem 1 of Wu (2007). Then (44) and (45) imply ||n^{1/2}(γ̃_n(k) − γ(k))||_{α/2} = O(1), i.e. {|n^{1/2}(γ̃_n(k) − γ(k))|^{β/2}, n ≥ 1} is uniformly integrable. Define the (m + 1)-dependent sequence X_{i,m} = E(X_i | F^i_{i−m}), i ∈ Z, where F^i_{i−m} = (ε_{i−m}, ε_{i−m+1}, ..., ε_i). Also define γ_m(k) = E(X_{0,m}X_{k,m}), which can be estimated by ˆγ_{n,m}(k) = n^{−1}Σ_{i=k+1}^n X_{i,m}X_{i−k,m}. By the same arguments as in (42), (43), (44), and (45), we can prove that {|n^{1/2}(ˆγ_{n,m}(k) − γ_m(k))|^{β/2}, n ≥ 1} and {|n^{1/2}(γ̃_{n,m}(k) − γ_m(k))|^{β/2}, n ≥ 1} are both uniformly integrable for any m ∈ N, in view of ||X_{i,m}||_α ≤ ||X_i||_α < ∞ and Σ_{i=0}^∞ ||X_{i,m} − X'_{i,m}||_α = Σ_{i=0}^m ||X_{i,m} − X'_{i,m}||_α ≤ 2(m + 1)||X_i||_α < ∞. Then by the same argument as in Theorem 1 of Lee (1996), we can prove that {|n^{1/2}(ˆρ_n(k) − ρ(k))|^{β/2}, n ≥ 1} and {|n^{1/2}(ρ̃_n(k) − ρ(k))|^{β/2}, n ≥ 1} are both uniformly integrable. For (ii), the first convergence follows by applying the delta method to Theorem 1 of Wu (2009a), and the second follows similarly in view of (45). (iii) follows from (i) and (ii).

Proposition 3. If E(X_i) = 0, X_i ∈ L^4 and (5) holds with α = 4, we have

ˆγ_n(k) − γ(k) = o_{a.s.}(n^{−1/2} log^2 n),  ˆρ_n(k) − ρ(k) = o_{a.s.}(n^{−1/2} log^2 n),
γ̃_n(k) − γ(k) = o_{a.s.}(n^{−1/2} log^2 n),   (46)
ρ̃_n(k) − ρ(k) = o_{a.s.}(n^{−1/2} log^2 n).   (47)

Proof: By Theorem 1 of Wu (2009a), we have n^{−1/2}T_{n,k} ⇒ N(0, σ_k^2), where T_{n,k} = Σ_{i=1}^n (X_iX_{i−k} − γ(k)) and σ_k^2 = ||Σ_{i=0}^∞ P_0(X_iX_{i−k})||^2. In the following, we write T_{n,k} as T_n. Let 1 < q < 2; then (43) implies that {|n^{−1/2}T_n|^q, n ∈ N} is uniformly integrable. Hence we have

||T_n||_q = O(√n).   (48)

By Proposition 1(i) of Wu (2007) and (48), we have

||max_{n≤2^d} |T_n|||_q ≤ Σ_{r=0}^d [2^{d−r}||T_{2^r}||_q^q]^{1/q} = O(2^{d/2} d).

Hence

Σ_{d=1}^∞ ||max_{n≤2^d}|T_n|||_q^q / (2^{qd/2} d^{2q}) < ∞,

which implies T_n = o_{a.s.}(n^{1/2} log^2 n). Then we have

ˆγ_n(k) − γ(k) = n^{−1}(T_n − Σ_{i=1}^k X_iX_{i−k}) = o_{a.s.}(n^{−1/2} log^2 n),

and hence

ˆρ_n(k) − ρ(k) = (ˆγ_n(k) − γ(k))/ˆγ_n(0) − γ(k)(ˆγ_n(0) − γ(0))/(γ(0)ˆγ_n(0)) = o_{a.s.}(n^{−1/2} log^2 n).

By similar arguments, we can prove (46) and (47) in view of (45).

References

Alexopoulos, C. and Goldsman, D. (2004). To batch or not to batch? ACM Transactions on Modeling and Computer Simulation.
Andrews, D. W. K. and Monahan, J. C. (1992). An improved heteroskedasticity and autocorrelation consistent covariance matrix estimator. Econometrica.

Bühlmann, P. (2002). Bootstraps for time series. Statistical Science.
Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics.
Brockwell, P. J. and Davis, R. A. (1991). Time Series: Theory and Methods, 2nd ed. Springer, New York.
Ding, Z. and Granger, C. W. J. (1996). Modeling volatility persistence of speculative returns: a new approach. Journal of Econometrics.
Engle, R. F. (1982). Autoregressive conditional heteroscedasticity with estimates of the variance of U.K. inflation. Econometrica.
Haldrup, N. and Jansson, M. (2005). Improving size and power in unit root testing. Tech. rep., Department of Economics, University of Aarhus.
Jones, G. L., Haran, M., Caffo, B. S. and Neath, R. (2006). Fixed-width output analysis for Markov chain Monte Carlo. Journal of the American Statistical Association.
Kan, R. and Wang, X. (2010). On the distribution of the sample autocorrelation coefficients. Journal of Econometrics.
Karanasos, M. (1998). A new method for obtaining the autocovariance of an ARMA model: an exact form solution. Econometric Theory.
Karanasos, M. (1999). The second moment and the autocovariance function of the squared errors of the GARCH model. Journal of Econometrics.
Lahiri, S. N. (2003). Resampling Methods for Dependent Data. Springer, New York.
Lee, S. (1996). Sequential estimation for the autocorrelations of linear processes. Annals of Statistics.
Lee, C. C. and Phillips, P. C. B. (1994). An ARMA prewhitened long-run variance estimator. Manuscript, Yale University.
Phillips, P. C. B. and Sul, D. (2003). Dynamic panel estimation and homogeneity testing under cross section dependence. The Econometrics Journal.
Phillips, P. C. B. and Xiao, Z. (1998). A primer on unit root testing. Journal of Economic Surveys.
Politis, D. N., Romano, J. P. and Wolf, M. (1999). Subsampling. Springer, New York.
Press, H. and Tukey, J. W. (1956). Power spectral methods of analysis and their application to problems in airplane dynamics.
Flight Test Manual, NATO, Advisory Group for Aeronautical Research and Development Vol. IV-C Reprinted as Bell System Monograph No

Song, W. M. and Schmeiser, B. W. (1995). Optimal mean-squared-error batch sizes. Management Science.

Stock, J. H. (1994). Unit roots, structural breaks and trends. In Handbook of Econometrics, Vol. IV. North-Holland, Amsterdam.

Sul, D., Phillips, P. C. B. and Choi, C. Y. (2005). Prewhitening bias in HAC estimation. Oxford Bulletin of Economics and Statistics.

Wu, W. B. (2005). Nonlinear system theory: another look at dependence. Proceedings of the National Academy of Sciences USA.

Wu, W. B. (2007). Strong invariance principles for dependent random variables. Annals of Probability.

Wu, W. B. (2009a). An asymptotic theory for sample covariances of Bernoulli shifts. Stochastic Processes and their Applications.

Wu, W. B. (2009b). Recursive estimation of time-average variance constants. Annals of Applied Probability.
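Remark. The almost-sure rate for the sample autocovariance established in the proof of this section can be checked informally by simulation. The sketch below is purely illustrative and not part of the paper: the AR(1) model, the coefficient 0.5, the lag, and the sample size are arbitrary choices; it compares $\hat\gamma_n(k)$ with the exact autocovariance of the simulated process.

```python
import numpy as np

# Illustrative check (not from the paper): the proof yields
# gamma_hat_n(k) - gamma(k) = o_{a.s.}(n^{-1/2} log n).
# Simulate an AR(1) process and compare the sample autocovariance
# at lag k with its known theoretical value.

rng = np.random.default_rng(0)

def sample_autocov(x, k):
    """gamma_hat_n(k) = n^{-1} * sum_{i=k+1}^{n} x_i x_{i-k} (mean-zero series)."""
    n = len(x)
    return float(np.dot(x[k:], x[: n - k]) / n)

phi, n = 0.5, 100_000                 # AR(1) coefficient and sample size (arbitrary)
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + eps[i]    # X_i = phi * X_{i-1} + eps_i

k = 1
gamma_k = phi**k / (1 - phi**2)       # exact autocovariance of AR(1) at lag k
err = abs(sample_autocov(x, k) - gamma_k)
print(err, err * np.sqrt(n) / np.log(n))  # rescaled error should stay bounded
```

For this toy model the error is of the order of a few multiples of $n^{-1/2}$, consistent with the $o_{a.s.}(n^{-1/2}\log n)$ bound.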


Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Discussion of Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions, by Li Pan and Dimitris Politis Sílvia Gonçalves and Benoit Perron Département de sciences économiques,

More information

Econometrics of Panel Data

Econometrics of Panel Data Econometrics of Panel Data Jakub Mućk Meeting # 6 Jakub Mućk Econometrics of Panel Data Meeting # 6 1 / 36 Outline 1 The First-Difference (FD) estimator 2 Dynamic panel data models 3 The Anderson and Hsiao

More information

Nonlinear minimization estimators in the presence of cointegrating relations

Nonlinear minimization estimators in the presence of cointegrating relations Nonlinear minimization estimators in the presence of cointegrating relations Robert M. de Jong January 31, 2000 Abstract In this paper, we consider estimation of a long-run and a short-run parameter jointly

More information

The Slow Convergence of OLS Estimators of α, β and Portfolio. β and Portfolio Weights under Long Memory Stochastic Volatility

The Slow Convergence of OLS Estimators of α, β and Portfolio. β and Portfolio Weights under Long Memory Stochastic Volatility The Slow Convergence of OLS Estimators of α, β and Portfolio Weights under Long Memory Stochastic Volatility New York University Stern School of Business June 21, 2018 Introduction Bivariate long memory

More information

Multivariate Time Series: VAR(p) Processes and Models

Multivariate Time Series: VAR(p) Processes and Models Multivariate Time Series: VAR(p) Processes and Models A VAR(p) model, for p > 0 is X t = φ 0 + Φ 1 X t 1 + + Φ p X t p + A t, where X t, φ 0, and X t i are k-vectors, Φ 1,..., Φ p are k k matrices, with

More information

Introduction to Econometrics

Introduction to Econometrics Introduction to Econometrics T H I R D E D I T I O N Global Edition James H. Stock Harvard University Mark W. Watson Princeton University Boston Columbus Indianapolis New York San Francisco Upper Saddle

More information

CUSUM TEST FOR PARAMETER CHANGE IN TIME SERIES MODELS. Sangyeol Lee

CUSUM TEST FOR PARAMETER CHANGE IN TIME SERIES MODELS. Sangyeol Lee CUSUM TEST FOR PARAMETER CHANGE IN TIME SERIES MODELS Sangyeol Lee 1 Contents 1. Introduction of the CUSUM test 2. Test for variance change in AR(p) model 3. Test for Parameter Change in Regression Models

More information