COVARIANCES ESTIMATION FOR LONG-MEMORY PROCESSES
Adv. Appl. Prob. 42 (2010)
Printed in Northern Ireland
© Applied Probability Trust 2010

COVARIANCES ESTIMATION FOR LONG-MEMORY PROCESSES

WEI BIAO WU and YINXIAO HUANG, University of Chicago
WEI ZHENG, University of Illinois at Chicago

Abstract

For a time series, a plot of sample covariances is a popular way to assess its dependence properties. In this paper we give a systematic characterization of the asymptotic behavior of sample covariances of long-memory linear processes. Central and noncentral limit theorems are obtained for sample covariances with bounded as well as unbounded lags. It is shown that the limiting distribution depends in a very interesting way on the strength of dependence, the heavy-tailedness of the innovations, and the magnitude of the lags.

Keywords: Asymptotic normality; covariance; dichotomy; linear process; long-range dependence; Rosenblatt distribution

2000 Mathematics Subject Classification: Primary 60F05; 62M10. Secondary 60G10

1. Introduction

Auto-covariance functions play a fundamental role in time series analysis and they are used in various inference problems, including parameter estimation and hypothesis testing. They are naturally estimated by sample covariances. Hence, the convergence problem of sample covariances is of critical importance. There is a substantial literature on properties of sample covariance estimates; see, for example, Bartlett (1946), Hannan (1970), (1976), Anderson (1971), Hall and Heyde (1980), Porat (1987), Brockwell and Davis (1991), Phillips and Solo (1992), Berlinet and Francq (1999), and Wu and Min (2005), among others. However, many of the earlier results are for sample covariance estimates with bounded lags. The latter restriction is quite severe. To better understand the dependence structure of a time series, we would like to know the behavior of sample covariances at large lags, namely at lags which increase to infinity with the sample size.
This is especially so in the study of long-memory or long-range dependent processes, since for such processes we are particularly interested in covariances at large lags. The asymptotic problem of sample covariances at large lags is quite challenging. As mentioned in Harris et al. (2003), the primary reason for the difficulty is that the standard asymptotic results, such as the functional central limit theorem, stochastic integral convergence, and long-run variance estimation, are not directly applicable, since the lag k_n depends on the sample size n in such a way that k_n → ∞. Recently, researchers have made several important breakthroughs and derived central limit theorems for sample covariances at lags

Received 5 February 2009; revision received 18 November 2009.
Postal address: Department of Statistics, University of Chicago, Chicago, IL 60637, USA. Email address: wbwu@galton.uchicago.edu
Postal address: Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, 851 S. Morgan Street, Chicago, IL, USA.
k_n with k_n → ∞. Keenan (1997) obtained a central limit theorem for sample covariances at lags k_n with k_n → ∞ under the severe restriction k_n = o(log n). Harris et al. (2003) substantially extended the range of k_n for short-memory linear processes. Wu (2008) obtained a central limit theorem for sample covariances of nonlinear time series with a very wide range of k_n. However, all those results concern short-memory processes in which the covariances are absolutely summable. The techniques therein are not directly applicable to long-memory processes. For long-memory processes, Hosking (1996) obtained central and noncentral limit theorems for sample covariances with bounded lags. Here the terminology noncentral limit theorem refers to the result that the limiting distribution is not normal; instead, it is the Rosenblatt distribution (see Rosenblatt (1979)). In Hosking's result, the restriction that the lag k is bounded is quite severe, since in the study of long-memory processes we often want to study the behavior of sample covariances at large lags. Chung (2002) generalized Hosking's result to multivariate long-memory processes. Again, in Chung's setting the lags are bounded. A result for sample covariances of long-memory processes with unbounded lags is given in Dai (2004), who derived the uniform convergence of sample covariances. However, the latter paper does not provide an asymptotic distributional theory for sample covariances. For an inferential theory, we need to have a distributional theory. In this paper we shall consider the asymptotic behavior of sample covariances of long-memory linear processes with bounded as well as unbounded lags. Consider the linear process X_k = μ + Σ_{i=0}^∞ a_i ε_{k−i}, where the ε_i, i ∈ Z, are independent and identically distributed (i.i.d.)
innovations with mean 0 and finite variance, μ is the mean, and the a_i are real coefficients of the form

a_i = i^{−β} l(i), i ∈ N,

where 1/2 < β < 1 and l is a slowly varying function (see Bingham et al. (1989)). By the Karamata theorem in the latter book, we can show that the covariance function γ_k = cov(X_0, X_k) = E(ε_0²) Σ_{i=0}^∞ a_i a_{i+k} satisfies

γ_k ~ C_β E(ε_0²) l²(k) k^{1−2β}, where C_β = ∫_0^∞ (u + u²)^{−β} du, (1)

as k → ∞. Here, for two real sequences (b_k) and (c_k), we write b_k ~ c_k if lim_{k→∞} b_k/c_k = 1. Since 1/2 < β < 1, the γ_k are not summable, thus meaning long-range dependence or long memory. Given the sample (X_i)_{i=1}^n, if μ is known then we can naturally estimate γ_k by

γ̌_k = (1/n) Σ_{i=k+1}^n (X_i − μ)(X_{i−k} − μ), 0 ≤ k < n,

and let γ̌_{−k} = γ̌_k. If μ is unknown, we can estimate γ_k by the sample covariance

γ̂_k = (1/n) Σ_{i=k+1}^n (X_i − X̄_n)(X_{i−k} − X̄_n), 0 ≤ k < n, where X̄_n = (1/n) Σ_{i=1}^n X_i. (2)
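To make these definitions concrete, here is a short numerical sketch (not from the paper; the parameter values, the truncation point m of the infinite moving average, and l ≡ 1 are illustrative assumptions). It implements γ̌_k and γ̂_k as defined above, simulates a long-memory sample, and recovers β from a log-log regression of the covariances on the lag, using the slope 1 − 2β implied by (1):

```python
import numpy as np

def sample_acov(x, k, mu=None):
    """gamma_check_k when mu is known; gamma_hat_k of eq. (2) when mu is None."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if mu is None:
        mu = x.mean()                      # unknown-mean case: center at the sample mean
    d = x - mu
    # (1/n) * sum_{i=k+1}^n (X_i - mu)(X_{i-k} - mu)
    return np.dot(d[k:], d[:n - k]) / n

def estimate_beta(gammas, lags):
    """Fit log gamma_k ~ alpha0 + alpha1 log k; by (1), alpha1 = 1 - 2*beta."""
    a1, _ = np.polyfit(np.log(lags), np.log(gammas), 1)
    return (1.0 - a1) / 2.0

# Illustrative simulation (assumed parameters): a_i = i^{-beta}, truncated at m terms.
rng = np.random.default_rng(0)
beta, m, n = 0.7, 2000, 10000
a = np.arange(1, m + 1, dtype=float) ** (-beta)
X = np.convolve(rng.standard_normal(n + m), a, mode="valid")[:n]
acov = np.array([sample_acov(X, k) for k in (10, 20, 40, 80)])
```

On such a sample the plot of log γ̂_k against log k is roughly linear with slope 1 − 2β, although the truncation at m and the slow convergence in (1) both bias the fitted slope at moderate sample sizes.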
Estimation of γ_k allows us to assess the strength of dependence of the process by examining the auto-covariance function plot. Based on (1), we can estimate the long-memory parameter β by performing a linear regression for the model log γ̂_k ≈ α_0 + α_1 log k over k = l_n, l_n + 1, ..., u_n, where α_0 is the intercept, α_1 = 1 − 2β, l_n → ∞, and u_n/n → 0. Let (α̂_0, α̂_1) be the least squares estimate. Then β can be estimated by β̂ = (1 − α̂_1)/2, and its confidence interval can be constructed if an asymptotic distributional theory of (γ̂_{l_n}, ..., γ̂_{u_n}) is available. Long-memory processes have been studied for several decades. However, the asymptotic distributional problem for γ̂_{k_n} with large k_n has rarely been touched. Here we shall present a systematic asymptotic theory for γ̌_k and γ̂_k. It is shown that their asymptotic behavior depends in a very interesting way on the strength of dependence, the heavy-tailedness of the innovations, and the magnitude of the lags.

The rest of the paper is organized as follows. Our main results are stated in Section 2. Some of the proofs are given in Section 3. In our proofs we have extensively applied martingale approximation techniques, which in many situations lead to optimal or nearly optimal results.

2. Main results

Before presenting our main results, we shall first introduce some notation. For a random variable Z, write Z ∈ L^p, p > 0, if ‖Z‖_p := (E|Z|^p)^{1/p} < ∞ and, for p = 2, write ‖Z‖ = ‖Z‖_2. Denote by ⇒ the weak convergence and by ⊤ the matrix transpose. Let F_i = (..., ε_{i−1}, ε_i), i ∈ Z, and define the projection operator

P_i · = E(· | F_i) − E(· | F_{i−1}). (3)

In Theorems 1–6, below, we assume that μ = 0 and deal with Σ_{i=1}^n X_i X_{i−k}. As mentioned in Remark 1, below, they also hold for Σ_{i=1+k}^n X_i X_{i−k} = n γ̌_k.

Theorem 1. Let k be a fixed nonnegative integer, and let E(X_i) = 0; let

Y_i = (X_i, X_{i−1}, ..., X_{i−k})⊤ and Γ_k = (γ_0, γ_1, ..., γ_k)⊤.

Assume that ε_i ∈ L⁴ and that Σ_{i=1}^∞ i^{2−4β} l⁴(i) < ∞.
(4) Then

n^{−1/2} Σ_{i=1}^n (X_i Y_i − Γ_k) ⇒ N[0, E(D_0 D_0⊤)], (5)

where D_0 = Σ_{i=0}^∞ P_0(X_i Y_i) ∈ L² and P_0 is the projection operator (3).

Theorem 1 provides a central limit theorem for sample covariances when the dependence is relatively weak in the sense that (4) holds. Note that, by properties of slowly varying functions, (4) is satisfied if 3/4 < β < 1. In the boundary case, β = 3/4, condition (4) becomes Σ_{i=1}^∞ l⁴(i)/i < ∞, which is a sharp condition for a √n-central limit theorem. Indeed, as indicated by Theorem 3, below, if Σ_{i=1}^∞ l⁴(i)/i = ∞, then we no longer have a √n-central limit theorem, though the asymptotic normality still holds. Similar results have been obtained in Hosking (1996), Hall and Heyde (1980), and Wu and Min (2005), among others. However, the results therein are not as sharp and general as Theorem 1. For example, Hosking (1996) required that lim_{i→∞} l(i) exists, and Proposition 1 of Wu and Min (2005) required
that Σ_{i=1}^∞ i^{1/2} a_i² < ∞, or Σ_{i=1}^∞ l²(i)/i < ∞, which is stronger than (4) in the boundary case, β = 3/4.

Theorem 1 requires k to be bounded. It turns out that, interestingly, under the same condition (4), we can also have asymptotic normality under the natural and mild condition on k_n: k_n → ∞ and k_n/n → 0. More interestingly, in Theorem 2, below, the limiting distribution N(0, Σ_h) in (6) does not depend on the speed at which k_n grows to infinity. This interesting property was discovered in Theorem 2 of Wu (2009), which concerns short-range-dependent processes.

Theorem 2. Let W_i = (X_i, X_{i−1}, ..., X_{i−h+1})⊤, where h ∈ N is fixed. Let k_n → ∞, k_n/n → 0, E(ε_i) = 0, and ε_i ∈ L⁴, and assume that (4) holds. Then we have

n^{−1/2} Σ_{i=1}^n [X_i W_{i−k_n} − E(X_{k_n} W_0)] ⇒ N(0, Σ_h), (6)

where Σ_h is an h × h matrix with entries

σ_{ab} = Σ_{j∈Z} γ_{j+a} γ_{j+b} = Σ_{j∈Z} γ_j γ_{j+b−a} =: σ_{|a−b|}, 1 ≤ a, b ≤ h.

A key step in proving Theorems 1 and 2 is that we approximate Σ_{i=1}^n (X_i X_{i−k} − γ_k) by the martingale M_{n,k} = Σ_{l=1}^n D_{l,k}, where, assuming for simplicity that ‖ε_0‖ = 1 (as in Section 3),

D_{l,k} = ε_l Σ_{j=−∞}^{−1} (γ_{k+j} + γ_{k−j}) ε_{l+j} + γ_k (ε_l² − E ε_l²).

See (17) and Lemma 1, below, for more details. Note that D_{1,k}, D_{2,k}, ... are martingale differences. The above martingale approximation provides an interesting insight into the Bartlett formula for asymptotic distributions of sample covariance functions (see, for example, Brockwell and Davis (1991)) by noting that

E(D_{l,k} D_{l,k′}) = Σ_{j=−∞}^{−1} (γ_{k+j} + γ_{k−j})(γ_{k′+j} + γ_{k′−j}) + γ_k γ_{k′} κ_4, where κ_4 = ‖ε_0² − E ε_0²‖².

In other words, D_{l,k} provides a probabilistic representation for the Bartlett formula. Theorem 3, below, concerns the boundary case, β = 3/4, when (4) is violated. Together with Theorem 1, they give a complete characterization of the asymptotic behavior of γ̂_k with bounded k at the boundary β = 3/4. A special case of Theorem 3 gives Theorem 4(ii) of Hosking (1996), where in his setting the ε_i are i.i.d. Gaussian and a_i ~ c i^{−3/4} for some positive constant c. In the latter case lim_{i→∞} l(i) = c and l̃(n) := Σ_{i=1}^n l⁴(i)/i ~ c⁴ log n.
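The martingale-difference representation makes the Bartlett formula easy to evaluate numerically. The following sketch (not from the paper) assumes unit-variance innovations, as in Section 3, and uses the autocovariance of an MA(1) process purely for illustration; for Gaussian innovations κ_4 = E ε⁴ − 1 = 2:

```python
def bartlett_var(gamma, k, kappa4, J=100):
    """E(D_{l,k}^2) = sum_{j=-J}^{-1} (gamma(k+j) + gamma(k-j))^2 + gamma(k)^2 * kappa4,
    assuming Var(eps_0) = 1; gamma: Z -> R must have finite support inside [-J, J]."""
    tail = sum((gamma(k + j) + gamma(k - j)) ** 2 for j in range(-J, 0))
    return tail + gamma(k) ** 2 * kappa4

def gamma_ma1(theta):
    """Autocovariances of the MA(1) process X_t = eps_t + theta * eps_{t-1}."""
    return lambda j: (1 + theta ** 2) if j == 0 else (theta if abs(j) == 1 else 0.0)

g = gamma_ma1(0.5)
v0 = bartlett_var(g, 0, kappa4=2.0)   # asymptotic variance of sqrt(n) * gamma_check_0
v1 = bartlett_var(g, 1, kappa4=2.0)
```

For this example the value at lag 0 agrees with the classical Bartlett expression 2 Σ_{j∈Z} γ_j² for Gaussian innovations, which is exactly the consistency that the representation above encodes.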
In Theorem 3, we recall (1) for C_β and Theorem 1 for Y_i and Γ_k, k ≥ 0. Here C_{3/4} = Γ(1/4)Γ(1/2)/Γ(3/4) ≈ 5.244. For h ∈ N, let I_h = (1, ..., 1)⊤ be the column vector of h 1s.

Theorem 3. Assume that E(ε_i) = 0, ε_i ∈ L⁴, β = 3/4, and l̃(n) = Σ_{i=1}^n l⁴(i)/i → ∞. Let G be a standard normal random variable. Then, for fixed k ≥ 0, we have

[n l̃(n)]^{−1/2} Σ_{i=1}^n (X_i Y_i − Γ_k) ⇒ 2 C_{3/4} ‖ε_0‖² G I_{k+1}. (7)
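The constant C_β of (1) has a closed form: substituting the standard Beta integral ∫_0^∞ u^{a−1}(1+u)^{−a−b} du = B(a, b) with a = 1 − β and b = 2β − 1 gives C_β = Γ(1−β)Γ(2β−1)/Γ(β) for 1/2 < β < 1. A minimal sketch (the helper is not from the paper; stdlib only):

```python
import math

def C_beta(beta):
    """C_beta = int_0^inf (u + u^2)^(-beta) du = B(1 - beta, 2*beta - 1),
    which is finite exactly when 1/2 < beta < 1."""
    if not 0.5 < beta < 1.0:
        raise ValueError("the integral defining C_beta diverges unless 1/2 < beta < 1")
    return math.gamma(1 - beta) * math.gamma(2 * beta - 1) / math.gamma(beta)
```

In particular C_beta(0.75) evaluates Γ(1/4)Γ(1/2)/Γ(3/4) ≈ 5.244, the constant appearing in Theorem 3.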
In Theorem 3 it is assumed that k is bounded. It is unclear what the asymptotic distribution of Σ_{i=1}^n (X_i X_{i−k} − γ_k) is if k = k_n → ∞ with k_n = o(n). We conjecture that it is still asymptotically normal and pose it as an open problem. If the dependence is strong enough that 1/2 < β < 3/4, then we have a noncentral limit theorem in that the limiting distribution is the Rosenblatt distribution, which is non-Gaussian. Noncentral limit theorems have a long history; see Rosenblatt (1979), Taqqu (1979), Avram and Taqqu (1987), and Ho and Hsing (1997), among others. To define the Rosenblatt distribution, let B(s), s ∈ R, be a standard Brownian motion. For a ∈ R, let a_+ = max(a, 0) be the nonnegative part of a. For r ∈ N and β < 1/2 + 1/(2r), define the multiple Wiener-Itô (MWI) integral

R_{r,β} = c_{r,β} ∫_{S_r} { ∫_0^1 [ Π_{i=1}^r (v − u_i)_+^{−β} ] dv } dB(u_1) ··· dB(u_r),

where S_r = {(u_1, ..., u_r) : −∞ < u_1 < ··· < u_r < 1} is a simplex and c_{r,β} is a norming constant such that ‖R_{r,β}‖ = 1. For r = 2 and 1/2 < β < 3/4, we call R_{2,β} the Rosenblatt distribution. Note that R_{1,β} is Gaussian and, for all r > 1, R_{r,β} is non-Gaussian (see Taqqu (1979)). For a review of the MWI integral, see Giraitis and Taqqu (1999) and Major (1981). For r ∈ N with r(2β − 1) < 1, define

σ_{n,r}² = r! n^{2−r(2β−1)} l^{2r}(n) ‖ε_0‖^{2r} [∫_0^∞ (x + x²)^{−β} dx]^r / {[1 − r(β − 1/2)][1 − r(2β − 1)]}. (8)

Recall Theorem 2 for W_i.

Theorem 4. Assume that E(ε_i) = 0, ε_i ∈ L⁴, 1/2 < β < 3/4, |l(i+1)/l(i) − 1| = O(1/i), and k_n/n → 0. Then

σ_{n,2}^{−1} Σ_{i=1}^n [X_i W_{i−k_n} − E(X_{k_n} W_0)] ⇒ R_{2,β} I_h. (9)

Theorem 4 allows for a very wide range of k_n, which can be bounded as well as unbounded. An interesting feature of this theorem is that the limiting distribution R_{2,β} does not depend on k_n, regardless of whether it is bounded or not. Chung (2002) pointed out that, in the situation where the lag is bounded, the limiting distribution does not depend on the lag. The phenomenon in (9) is interestingly different from Theorems 1 and 2, the mild long-memory case.
The latter two theorems assert different limiting distributions in the sense that the asymptotic variances are different, depending on whether k_n is bounded or not. In Theorems 1–4, we assume that ε_i ∈ L⁴. If ε_i does not have a finite fourth moment then we may have weak convergence to stable distributions. Recently, Horváth and Kokoszka (2008) obtained various types of convergence rates and limiting distributions, depending on the heaviness of tails and the strength of dependence. In their treatment, however, they assumed that k is bounded. For Theorem 5, below, we assume that ε_i² − E ε_i² is in the domain of attraction of a stable distribution Z_α with index α ∈ (1, 2) (see Chow and Teicher (1988)), namely there exists a slowly varying function l_0(·) such that

[n^{1/α} l_0(n)]^{−1} Σ_{i=1}^n (ε_i² − E ε_i²) ⇒ Z_α. (10)

In this case the asymptotic behavior of γ̌_k depends in a very interesting way on the heavy tail index α, the long memory index β, and the lag index λ. Here we let the lag k_n be of the form n^λ l_1(n), where λ ∈ (0, 1) and l_1 is a slowly varying function.
Theorem 5. Assume that (10) holds with 1 < α < 2 and 3/4 < β < 1. Let k_n = n^λ l_1(n), where λ ∈ (0, 1) and l_1 is a slowly varying function.

(i) If λ > (α^{−1} − 1/2)/(2β − 1) then (6) holds.

(ii) If λ < (α^{−1} − 1/2)/(2β − 1) then

[γ_{k_n} n^{1/α} l_0(n)]^{−1} Σ_{i=1}^n [X_i W_{i−k_n} − E(X_{k_n} W_0)] ⇒ Z_α I_h. (11)

In Theorem 5, cases (i) and (ii) suggest the dichotomy phenomenon: for small λ, we have weak convergence to stable distributions, while, for large λ, we still have the conventional central limit theorem. A similar phenomenon was discovered in Csörgő and Mielniczuk (2000) for kernel estimation of long-memory processes. They showed that large and small bandwidths correspond to different asymptotic distributions of the kernel estimates. See also Surgailis (2004), Sly and Heyde (2008), Mikosch et al. (2002), and Hsieh et al. (2007) for similar observations under different settings. In Theorem 5, the lag parameter k_n plays a similar role. Theorem 5 does not cover the boundary case λ = (α^{−1} − 1/2)/(2β − 1). In this case the situation is more subtle, since the growth rates of the slowly varying functions l(·), l_0(·), and l_1(·) will be involved in the limiting distribution. We decided not to pursue the boundary case since the manipulations involved seem quite tedious. If the dependence of (X_i) is sufficiently strong that 1/2 < β < 3/4, then we have a different type of dichotomy. As asserted by Theorem 6, below, the limiting distributions for large and small lags are Rosenblatt and stable distributions, respectively.

Theorem 6. Assume that (10) holds with 1 < α < 2, 1/2 < β < 3/4, and |l(i+1)/l(i) − 1| = O(1/i). Let k_n = n^λ l_1(n), where λ ∈ (0, 1) and l_1 is a slowly varying function.

(i) If 2 − 2β > λ(1 − 2β) + α^{−1} then (9) holds.

(ii) If 2 − 2β < λ(1 − 2β) + α^{−1} then (11) holds.

Remark 1. It is easily seen that Theorems 1–6 are still valid if the sums Σ_{i=1}^n therein are replaced by Σ_{i=1+k_n}^n under the condition that k_n = o(n). For example, let us consider (9) of Theorem 4. Define ñ = n − k_n.
By (9) and stationarity,

σ_{ñ,2}^{−1} Σ_{i=1+k_n}^n (X_i X_{i−k_n} − γ_{k_n}) ⇒ R_{2,β}.

Since ñ/n → 1, we have ñ^{2−2β} l²(ñ)/[n^{2−2β} l²(n)] → 1 by properties of slowly varying functions and, hence,

[n γ̌_{k_n} − (n − k_n) γ_{k_n}]/σ_{n,2} = σ_{n,2}^{−1} Σ_{i=1+k_n}^n (X_i X_{i−k_n} − γ_{k_n}) ⇒ R_{2,β}. (12)

Similar claims can be made for the other theorems. Additionally, the term (n − k_n)γ_{k_n} in (12) can be replaced by nγ_{k_n}, since k_n γ_{k_n} = O[k_n^{2−2β} l²(k_n)] = o(√n) if 3/4 < β < 1, k_n γ_{k_n} = o[√n l²(n)] = o[(n l̃(n))^{1/2}] if β = 3/4, and k_n γ_{k_n} = o(σ_{n,2}) if 1/2 < β < 3/4.
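The dichotomies in Theorems 5 and 6 can be packaged as a small decision rule. A sketch (hypothetical helper, not from the paper; the thresholds are exactly those stated above, and boundary cases, which the authors do not pursue, return None):

```python
def limit_regime(alpha, beta, lam):
    """Limiting law of suitably normalized sample covariances at lag k_n ~ n^lam,
    per Theorem 5 (3/4 < beta < 1) and Theorem 6 (1/2 < beta < 3/4)."""
    if not (1.0 < alpha < 2.0 and 0.0 < lam < 1.0):
        raise ValueError("need 1 < alpha < 2 and 0 < lam < 1")
    if 0.75 < beta < 1.0:                                   # Theorem 5
        lam_star = (1.0 / alpha - 0.5) / (2.0 * beta - 1.0)
        if lam > lam_star:
            return "normal"                                 # CLT, (6)
        return "stable" if lam < lam_star else None         # (11); boundary left open
    if 0.5 < beta < 0.75:                                   # Theorem 6
        lhs = 2.0 - 2.0 * beta
        rhs = lam * (1.0 - 2.0 * beta) + 1.0 / alpha
        if lhs > rhs:
            return "rosenblatt"                             # noncentral limit, (9)
        return "stable" if lhs < rhs else None              # (11)
    raise ValueError("need 1/2 < beta < 1 with beta != 3/4")
```

For example, with α = 1.5 and β = 0.9 the threshold is λ* = (2/3 − 1/2)/0.8 ≈ 0.208, so lags growing faster than n^{0.208} give the normal limit and slower lags give the stable limit.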
Remark 2. Under the dependence condition (4), the sample covariance estimator (2) is asymptotically close to γ̌_k := n^{−1} Σ_{i=k+1}^n X_i X_{i−k}, since

n E|γ̂_k − γ̌_k| ≤ E|X̄_n Σ_{i=k+1}^n X_i| + E|X̄_n Σ_{i=k+1}^n X_{i−k}| + E|(n − k) X̄_n²| ≤ 2 ‖X̄_n‖ σ_{n−k,1} + n ‖X̄_n‖² = O[n^{2−2β} l²(n)],

and n^{2−2β} l²(n) = o(√n) if 3/4 < β < 1 and n^{2−2β} l²(n) = o([n l̃(n)]^{1/2}) if β = 3/4. With simple manipulations, we conclude that Theorems 1–3 and 5 continue to hold if X_i therein is replaced by X_i − X̄_n. If 1/2 < β < 3/4 then the difference between γ̂_k and γ̌_k is no longer negligible; see Hosking (1996), Dehling and Taqqu (1991), and Yajima (1991). Corollary 1, below, provides the asymptotic distribution of γ̂_k.

Corollary 1. Let 1/2 < β < 3/4. Then, under the conditions of Theorem 4 or Theorem 6(i), we have

σ_{n,2}^{−1} Σ_{i=1+k_n}^n [(X_i − X̄_n)(X_{i−k_n} − X̄_n) − γ_{k_n}] ⇒ R_{2,β} − [(3 − 4β)^{1/2}/((1 − β)^{1/2}(3 − 2β))] R_{1,β}². (13)

Under Theorem 6(ii), (11) still holds if X_i therein is replaced by X_i − X̄_n.

3. Proofs

This section provides proofs for the results in Section 2. Without loss of generality, we assume that ‖ε_0‖ = 1 throughout the proofs. Let κ_4 = ‖ε_i² − 1‖² if ε_i ∈ L⁴. Define a_i = 0 if i < 0, and let A_i = Σ_{j=i}^∞ a_j². By Karamata's theorem, A_n ~ l²(n) n^{1−2β}/(2β − 1) = O(n a_n²). Let γ̃_h = Σ_{i∈Z} |a_i a_{i−h}|. Then, again by Karamata's theorem, as in (1), both γ_h and γ̃_h ~ |h|^{1−2β} l²(|h|) C_β as |h| → ∞. Note that γ̃_h = γ_h if all a_i ≥ 0.

3.1. Proofs of Theorems 1 and 2

To prove Theorems 1 and 2, we need the following lemma. With this lemma, we shall first prove Theorem 2 and then prove Theorem 1.

Lemma 1. Let i, k ≥ 0. Assume that ε_i ∈ L⁴. Then

‖P_0(X_i X_{i−k})‖ ≤ a_i A_{i−k+1}^{1/2} + a_{i−k} A_{i+1}^{1/2} + a_i a_{i−k} ‖ε_0² − 1‖. (14)

Note that the above bound is a_i A_0^{1/2} if i < k. Additionally, under (4), we have

sup_{i_1, i_2, k} Σ_{i=i_1}^{i_2} ‖P_0(X_i X_{i−k})‖ = O(1) (15)
and

lim_{g→∞} sup_k Σ_{i=g}^∞ ‖P_0(X_i X_{i−k})‖ = 0. (16)

For l ∈ Z, let D_{l,k} = Σ_{i∈Z} P_l(X_i X_{i−k}). Then

D_{l,k} = ε_l Σ_{j=−∞}^{−1} (γ_{k+j} + γ_{k−j}) ε_{l+j} + γ_k (ε_l² − 1). (17)

Proof. Observe that P_0(ε_j ε_{j′}) = 0 if j j′ ≠ 0, and P_0 ε_0² = ε_0² − 1. Then

P_0(X_i X_{i−k}) = Σ_{j,j′∈Z} a_{i−j} a_{i−k−j′} P_0(ε_j ε_{j′}) = ε_0 Σ_{j=−∞}^{−1} a_i a_{i−k−j} ε_j + ε_0 Σ_{j=−∞}^{−1} a_{i−j} a_{i−k} ε_j + a_i a_{i−k} (ε_0² − 1), (18)

which implies (14). Since γ̃_h = Σ_{i∈Z} |a_i a_{i−h}| ~ h^{1−2β} l²(h) C_β as h → ∞, we have

Σ_{j=−∞}^{−1} (Σ_{i=i_1}^{i_2} |a_i a_{i−k−j}|)² ≤ Σ_{j=−∞}^{−1} γ̃_{k−j}² = O(1).

By (18), (15) follows from a similar argument applied to Σ_{i=i_1}^{i_2} |a_{i−j} a_{i−k}|. We now prove (16). By Schwarz's inequality, (Σ_{i=g}^∞ |a_i a_{i−k−j}|)² ≤ A_g A_0 → 0 as g → ∞. By Lebesgue's dominated convergence theorem, as g → ∞,

sup_k Σ_{j=−∞}^{−1} (Σ_{i=g}^∞ |a_i a_{i−k−j}|)² ≤ sup_k Σ_{j=−∞}^{−1} min(γ̃_{k−j}², A_g A_0) → 0.

With a similar treatment for Σ_{i=g}^∞ |a_{i−j} a_{i−k}|, we have (16), since (Σ_{i=g}^∞ |a_i a_{i−k}|)² ≤ A_g A_0. Since γ_h = Σ_{i∈Z} a_i a_{i+h}, (18) implies (17) with l = 0. The case in which l ≠ 0 follows similarly.

Proof of Theorem 2. Recall (17) of Lemma 1 for D_{l,k}. Let M_{n,k} = Σ_{l=1}^n D_{l,k} and S_{n,k} = Σ_{l=1}^n X_l X_{l−k} − n γ_k. Due to the orthogonality of the P_r, r ∈ Z, we have

‖S_{n,k_n} − M_{n,k_n}‖² = (Σ_{r=−∞}^0 + Σ_{r=1}^∞) ‖P_r(S_{n,k_n} − M_{n,k_n})‖². (19)

If r ≤ −3k_n and 1 ≤ i ≤ n, by (14) of Lemma 1 and since A_j = O(j a_j²) as j → ∞,

‖P_r(X_i X_{i−k_n})‖ = O((i − r − k_n)^{1/2} a_{i−r} a_{i−r−k_n}) = O(b_{i−r}), where b_j = j^{1/2−2β} l²(j), j ∈ N.

For r ≤ −3k_n, we have P_r M_{n,k_n} = 0 and

‖P_r(S_{n,k_n} − M_{n,k_n})‖ ≤ Σ_{i=1}^n ‖P_r(X_i X_{i−k_n})‖ = O(Σ_{i=1}^n b_{i−r}). (20)
Let p ∈ (1, (2β − 1)^{−1}) and q = p/(p − 1). By Hölder's inequality, if 3k_n ≤ r ≤ n, then Σ_{i=1}^n b_{i+r} ≤ (Σ_{i=1}^n b_{i+r}^p)^{1/p} n^{1/q}. By Karamata's theorem, Σ_{i=r}^∞ b_i^p = O(r b_r^p) since p(1/2 − 2β) < −1. Hence, since 2/p + 1 − 4β > −1, again by Karamata's theorem,

Σ_{r=3k_n}^n (Σ_{i=1}^n b_{i+r})² = Σ_{r=3k_n}^n O[(r^{1/p} b_r n^{1/q})²] = n O[(n^{1/p} b_n n^{1/q})²] = O[n^{4−4β} l⁴(n)] = o(n). (21)

If r > n then Σ_{i=1}^n b_{i+r} = O(n b_r). Since 1 − 4β < −1, by Karamata's theorem,

Σ_{r=1+n}^∞ (Σ_{i=1}^n b_{i+r})² = Σ_{r=1+n}^∞ O(n² b_r²) = n³ O(b_n²) = o(n). (22)

If 1 ≤ r ≤ n, then P_r(S_{n,k_n} − M_{n,k_n}) = −Σ_{i=n+1}^∞ P_r(X_i X_{i−k_n}). By stationarity and (16),

Σ_{r=1}^{n−3k_n} ‖P_r(S_{n,k_n} − M_{n,k_n})‖² ≤ Σ_{g=1+3k_n}^∞ (Σ_{i=g}^∞ ‖P_0(X_i X_{i−k_n})‖)² = o(n). (23)

The remaining r with n − 3k_n < r ≤ n each contribute O(1) by (15) of Lemma 1; since k_n = o(n), we have, by (19) and (20)–(23),

‖S_{n,k_n} − M_{n,k_n}‖² = o(n). (24)

It remains to show the central limit theorem for M_{n,k_n}. For a fixed m ∈ N, let M̃_{n,k} = Σ_{l=1}^n D̃_{l,k}, where D̃_{l,k} = ε_l Σ_{j=−k−m}^{−k+m} γ_{k+j} ε_{l+j}. Since the D_{l,k} − D̃_{l,k}, l = 1, 2, ..., are martingale differences,

n^{−1} ‖M_{n,k} − M̃_{n,k}‖² = ‖D_{0,k} − D̃_{0,k}‖² ≤ γ_k² ‖ε_0² − 1‖² + Σ_{j=−∞}^{−1} γ_{k−j}² + Σ_{j=−∞}^{−1} γ_{k+j}² 1_{|j+k|>m}.

Since γ_k → 0 as k → ∞ and Σ_{g∈Z} γ_g² < ∞, we have

lim_{m→∞} lim sup_{n→∞} n^{−1} ‖M_{n,k_n} − M̃_{n,k_n}‖² = 0. (25)

We shall now apply the martingale central limit theorem to M̃_{n,k_n}/√n. By the mean ergodic theorem, since m is fixed, we have

n^{−1} Σ_{l=1}^n E(D̃_{l,k}² | F_{l−1}) = n^{−1} Σ_{l=1}^n (Σ_{j=−k−m}^{−k+m} γ_{k+j} ε_{l+j})² → Σ_{j=−m}^m γ_j²
in probability. Let η = ε_0 Σ_{j=−m}^m γ_j ε_{j−m−1}. For any λ > 0, since

lim_{n→∞} n^{−1} Σ_{l=1}^n E(D̃_{l,k}² 1_{|D̃_{l,k}| ≥ λ√n}) = lim_{n→∞} E(η² 1_{|η| ≥ λ√n}) = 0,

the Lindeberg condition holds. Hence, M̃_{n,k}/√n ⇒ N(0, Σ_{j=−m}^m γ_j²), and the theorem follows from (24) and (25).

Proof of Theorem 1. A careful check of the proof of Theorem 2 reveals that, under (4), (24) still holds if k_n is bounded. Namely, for fixed k, we have ‖S_{n,k} − M_{n,k}‖² = o(n). Then we can just apply the classical martingale central limit theorem and obtain M_{n,k}/√n ⇒ N(0, ‖D_{0,k}‖²). Then (5) easily follows from the Cramér-Wold device.

3.2. Proof of Theorem 3

The treatment of the boundary case, β = 3/4, is very intricate. Here we will apply the martingale approximation technique (see Wu and Woodroofe (2004)). We first deal with the case in which k = 0. Let

V_j = X_j² − γ_0 − Σ_{l=0}^∞ a_l² (ε_{j−l}² − 1). (26)

We shall approximate Σ_{j=1}^n V_j by Σ_{j=1}^n D_{j,n}, where

D_{j,n} = 2 ε_j Σ_{h=1}^{n−1} c_{n,h} ε_{j−h}, where c_{n,h} = Σ_{i=0}^{n−1} a_i a_{i+h}. (27)

Note that D_{1,n}, D_{2,n}, ..., D_{n,n} are martingale differences. Let R_n = Σ_{j=1}^n (V_j − D_{j,n}). Next we shall control ‖R_n‖. Since the P_h, h ∈ Z, are orthogonal,

‖R_n‖² = (Σ_{h=−∞}^{−n} + Σ_{h=1−n}^0 + Σ_{h=1}^n) ‖P_h R_n‖². (28)

If h ≤ −n then P_h R_n = Σ_{i=1}^n P_h V_i and, by independence,

Σ_{h=−∞}^{−n} ‖P_h R_n‖² = 4 Σ_{h=−∞}^{−n} Σ_{j=1}^∞ (Σ_{i=1}^n a_{i−h} a_{i−h−j})² = Σ_{h=−∞}^{−n} O(n² |h| a_h⁴) = o[n l̃(n)]. (29)
In (29) we have applied Karamata's theorem, noting the fact that, if h ≤ −n and 1 ≤ i ≤ n, then a_{i−h} = O(a_{|h|}). By Lemma 4 of Wu and Min (2005), l⁴(n) = o(l̃(n)). Let δ ∈ (0, 1). For 1 + ⌊nδ⌋ ≤ h ≤ n and 1 ≤ j ≤ n, we have

Σ_{l=h}^∞ a_l a_{l+j} ≤ A_h = O(n a_n²) = O(n^{−1/2} l²(n)) = n^{−1/2} o[l̃(n)^{1/2}]. (30)

Therefore, since γ_j ~ C_β j^{−1/2} l²(j), we have Σ_{j=1}^n γ_j² ~ C_β² l̃(n). By (30),

lim sup_n [Σ_{h=1}^n Σ_{j=1}^n (Σ_{l=h}^∞ a_l a_{l+j})²]/[n l̃(n)] ≤ lim sup_n [Σ_{h=1}^{⌊nδ⌋} Σ_{j=1}^n (Σ_{l=h}^∞ a_l a_{l+j})²]/[n l̃(n)] + lim sup_n [Σ_{h=1+⌊nδ⌋}^n Σ_{j=1}^n (Σ_{l=h}^∞ a_l a_{l+j})²]/[n l̃(n)] ≤ lim sup_n [⌊nδ⌋ Σ_{j=1}^n γ_j²]/[n l̃(n)] = δ C_β². (31)

Since δ > 0 can be arbitrarily small, Σ_{h=1}^n Σ_{j=1}^n (Σ_{l=h}^∞ a_l a_{l+j})² = o[n l̃(n)]. Next,

[Σ_{h=1+n}^∞ Σ_{j=1}^n (Σ_{l=h}^∞ a_l a_{l+j})²]/[n l̃(n)] = O(l⁴(n))/l̃(n) = o(1). (32)

We now deal with the sums Σ_{h=1−n}^0 and Σ_{h=1}^n in (28). By (31) and (32),

Σ_{h=1−n}^0 ‖P_h R_n‖² = 4 Σ_{h=1−n}^0 Σ_{j=1}^∞ (Σ_{i=1}^n a_{i−h} a_{i−h+j})² = o[n l̃(n)]. (33)

For 1 ≤ h ≤ n, we have

P_h R_n = Σ_{i=h}^n P_h V_i − D_{h,n} = −Σ_{i=n+1}^{n+h−1} P_h V_i.
By stationarity, (31), and (32),

Σ_{h=1}^n ‖P_h R_n‖² = Σ_{h=1}^n ‖Σ_{l=n+1−h}^{n−1} P_0 V_l‖² ≤ 4 Σ_{h′=1}^{n−1} Σ_{j=1}^n (Σ_{l=h′}^∞ a_l a_{l+j})² = o[n l̃(n)]. (34)

Therefore, by (28) we have

‖R_n‖² = o[n l̃(n)]. (35)

We now further approximate Σ_{i=1}^n D_{i,n} by Σ_{i=1}^n H_i, where

H_i = H_{i,n} = 2 ε_i Σ_{j=1}^{n−1} γ_j ε_{i−j}. (36)

Note that ‖H_i‖² = 4 Σ_{j=1}^{n−1} γ_j² ~ 4 C_β² Σ_{j=1}^n l⁴(j)/j. Since l⁴(n) = o(l̃(n)), we have

‖Σ_{i=1}^n (H_i − D_{i,n})‖² = o[n l̃(n)] (37)

in view of

Σ_{j=1}^{n−1} (c_{n,j} − γ_j)² ≤ Σ_{j=1}^{n−1} (Σ_{i=n}^∞ a_i a_{i+j})² = n O(n² a_n⁴) = O(l⁴(n)) = o(l̃(n)) since β = 3/4.

It remains to show that

[n l̃(n)]^{−1/2} Σ_{i=1}^n H_i ⇒ N(0, 4C_β²). (38)

To this end, we shall apply the martingale central limit theorem. The Lindeberg condition trivially holds since

E H_i⁴ / l̃²(n) ≤ C (Σ_{j=1}^{n−1} γ_j²)² / l̃²(n) = O(1)

for some constant C > 0, in view of Rosenthal's inequality (see Hall and Heyde (1980)). It then suffices to verify the following convergence of conditional variances:

[n l̃(n)]^{−1} Σ_{i=1}^n E(H_i² | F_{i−1}) = 4 [n l̃(n)]^{−1} Σ_{i=1}^n (Σ_{j=1}^{n−1} γ_j ε_{i−j})² → 4 C_β² (39)
in probability. By the mean ergodic theorem, E|Σ_{j=1}^n (ε_j² − 1)| = o(n). Hence,

E|Σ_{i=1}^n Σ_{j=1}^{n−1} γ_j² (ε_{i−j}² − 1)| = Σ_{j=1}^{n−1} γ_j² o(n) = o[n l̃(n)].

Hence, for (39), it remains to deal with the cross product terms

Σ_{i=1}^n Σ_{1 ≤ j ≠ j′ ≤ n−1} γ_j γ_{j′} ε_{i−j} ε_{i−j′} = Σ_{l ≠ l′} ε_l ε_{l′} f_{l,l′}, where f_{l,l′} = Σ_{i=1+max(l,l′,0)}^{n+min(l,l′,0)} γ_{i−l} γ_{i−l′}.

Note that |f_{l,l′}| ≤ Σ_{i=1}^n γ_i γ_{i+|l−l′|} =: μ_{|l−l′|}. By independence,

‖Σ_{l ≠ l′} ε_l ε_{l′} f_{l,l′}‖² ≤ 2 Σ_{l ≠ l′} f_{l,l′}² ≤ 8n Σ_{i=0}^{2n} μ_i².

Let 0 < δ < 1 and l ≥ δn. Then μ_l ≤ Σ_{i=1}^n γ_i O(n^{−1/2} l²(n)) = O(l⁴(n)). So, since μ_0 ~ C_β² l̃(n),

lim sup_n [n (Σ_{i=0}^{⌊nδ⌋} + Σ_{i=1+⌊nδ⌋}^{2n}) μ_i²] / [n² l̃²(n)] ≤ lim sup_n [n ⌊nδ⌋ μ_0²] / [n² l̃²(n)] = C_β⁴ δ.

Let δ → 0. Then n Σ_{i=0}^{2n} μ_i² = o[n² l̃²(n)] and, hence, (39) follows. By the expression of V_j in (26), since ε_l² − 1 ∈ L², we have ‖Σ_{j=1}^n (ε_j² − 1)‖² ≤ κ_4 n and

‖Σ_{j=1}^n (X_j² − γ_0 − V_j)‖ = ‖Σ_{j=1}^n Σ_{l=0}^∞ a_l² (ε_{j−l}² − 1)‖ = O(√n). (40)

So, if k = 0, since l̃(n) → ∞, (7) with k = 0 follows from (35), (37), (38), and (40). For the general case with finite k > 0, we replace V_j in (26) by

V_{j,k} = X_j X_{j−k} − γ_k − Σ_{l=0}^∞ a_l a_{l+k} (ε_{j−k−l}² − 1).

If we replace c_{n,h} in (27) by Σ_{j=0}^{n−1} (a_{h+j} a_{j−k} + a_j a_{j+h−k}) and H_i in (36) by

H_i^{(k)} := ε_i Σ_{j=1}^{n−1} (γ_{j+k} + γ_{j−k}) ε_{i−j},
then, using the argument for k = 0, we similarly have

‖Σ_{j=1}^n (X_j X_{j−k} − γ_k) − Σ_{i=1}^n H_i^{(k)}‖² = o[n l̃(n)].

So (7) follows if ‖Σ_{i=1}^n (H_i − H_i^{(k)})‖² = o[n l̃(n)], which is equivalent to

‖H_0 − H_0^{(k)}‖² = Σ_{j=1}^{n−1} (2γ_j − γ_{j+k} − γ_{j−k})² = o[l̃(n)].

By (1), as j → ∞, γ_{j+k}/γ_j → 1. So the above relation holds since l̃(n) → ∞ and Σ_{j=1}^n [(γ_j − γ_{j+k})² + (γ_j − γ_{j−k})²] = Σ_{j=1}^n o(γ_j²) = o[l̃(n)].

3.3. Proof of Theorem 4

By Lemmas 2 and 3, we have

‖Σ_{i=1}^n (X_i² − X_i X_{i−k_n} − γ_0 + γ_{k_n})‖² = O(n k_n^{3−4β} l⁴(k_n)). (41)

By properties of slowly varying functions we have k_n^{3−4β} l⁴(k_n) = o(n^{3−4β} l⁴(n)) under k_n = o(n). It is well known (see, for example, Avram and Taqqu (1987)) that, for 1/2 < β < 3/4, we have

σ_{n,2}^{−1} Σ_{i=1}^n (X_i² − γ_0) ⇒ R_{2,β}.

Hence, Theorem 4 follows.

Lemma 2. Assume that ε_i ∈ L⁴, 1/2 < β < 3/4, and k_n/n → 0. Then

‖Σ_{i=1}^n [X_i X_{i−k_n} − E(X_i X_{i−k_n} | F_{i−k_n})]‖² = O(n k_n^{3−4β} l⁴(k_n)) (42)

and

‖Σ_{i=1}^n [X_i² − E(X_i² | F_{i−k_n})]‖² = O(n k_n^{3−4β} l⁴(k_n)). (43)

Proof. Let X̃_i = X_i − E(X_i | F_{i−k_n}). Since X_i X_{i−k_n} − E(X_i X_{i−k_n} | F_{i−k_n}) = X_{i−k_n} X̃_i,

Σ_{i=1}^n [X_i X_{i−k_n} − E(X_i X_{i−k_n} | F_{i−k_n})] = Σ_{i=1}^n X_{i−k_n} X̃_i = Σ_{j=1−k_n}^n ε_j Σ_{i=max(j,1)}^{min(n,j+k_n−1)} a_{i−j} X_{i−k_n}. (44)
Since the ε_j are i.i.d., (42) follows from (44) and the fact that, for 1 − k_n ≤ j ≤ n,

‖Σ_{i=max(j,1)}^{min(n,j+k_n−1)} a_{i−j} X_{i−k_n}‖² = Σ_{i,i′=max(j,1)}^{min(n,j+k_n−1)} a_{i−j} a_{i′−j} E(X_{i−k_n} X_{i′−k_n}) ≤ Σ_{m=1−k_n}^{k_n−1} |γ_m| γ̃_{|m|} = O(k_n^{3−4β} l⁴(k_n)). (45)

We now prove (43). Since X̃_i = Σ_{j=i−k_n+1}^i a_{i−j} ε_j, we have

X_i² − E(X_i² | F_{i−k_n}) = X̃_i² − E(X̃_i²) + 2 X̃_i Σ_{g=k_n}^∞ a_g ε_{i−g}.

Arguing similarly as in (44) and (45) for (42), we have

‖Σ_{i=1}^n X̃_i Σ_{g=k_n}^∞ a_g ε_{i−g}‖² = O(n k_n^{3−4β} l⁴(k_n)).

It therefore remains to verify that

‖Σ_{i=1}^n [X̃_i² − E(X̃_i²)]‖² = Σ_{h=1−k_n}^n ‖P_h Σ_{i=1}^n X̃_i²‖² = O(n k_n^{3−4β} l⁴(k_n)). (46)

To this end, uniformly over h = 1 − k_n, 2 − k_n, ..., n, we have

P_h (Σ_{i=1}^n X̃_i²) = Σ_{i=max(h,1)}^{min(n,h+k_n−1)} [a_{i−h}² (ε_h² − 1) + 2 a_{i−h} ε_h Σ_{j=max(h,1)−k_n+1}^{h−1} a_{i−j} ε_j],

whose squared norm is O(k_n^{3−4β} l⁴(k_n)) by calculations similar to those in (45). So (46) holds and the proof of Lemma 2 is now complete.

Lemma 3. Under the conditions of Theorem 4, we have

‖Σ_{i=1}^n [E(X_i² − X_i X_{i−k_n} | F_{i−k_n}) − γ_0 + γ_{k_n}]‖² = O(n k_n^{3−4β} l⁴(k_n)). (47)
Proof. Let d_i = a_i − a_{i−k_n}. For i ≥ 2k_n, we have Σ_{j=i}^∞ d_j² ≤ 4 A_{i−k_n}, and

‖P_0(X_i² − X_i X_{i−k_n})‖ ≤ a_i (Σ_{j=i+1}^∞ d_j²)^{1/2} + |d_i| A_{i+1}^{1/2} + a_i |d_i| ‖ε_0² − 1‖. (48)

If i ≥ 2k_n, since |l(i+1)/l(i) − 1| = O(1/i), we have |a_{i+1} − a_i| = O(a_i/i) and |d_i| = O(a_i k_n/i). By Karamata's theorem, since A_i = O(i a_i²), we have, by elementary calculations,

Σ_{i=2k_n}^∞ |d_i| A_{i+1}^{1/2} = Σ_{i=2k_n}^∞ O((a_i k_n/i) i^{1/2} a_i) = O(k_n^{3/2−2β} l²(k_n)), (49)

Σ_{i=2k_n}^∞ a_i (Σ_{j=i+1}^∞ d_j²)^{1/2} = O[Σ_{i=2k_n}^∞ a_i a_{i+1−k_n} k_n (i + 1 − k_n)^{−1/2}] = O(k_n^{3/2−2β} l²(k_n)), (50)

since, for i ≥ 2k_n, a_i a_{i+1−k_n} = O(a_i²), and

Σ_{i=2k_n}^∞ a_i |d_i| = Σ_{i=2k_n}^∞ O(a_i² k_n/i) = O(k_n^{1−2β} l²(k_n)). (51)

For k_n ≤ i < 2k_n, since a_i = O(a_{k_n}), we have

Σ_{i=k_n}^{2k_n−1} a_i A_{i+1−k_n}^{1/2} = O(a_{k_n}) Σ_{i=k_n}^{2k_n−1} A_{i+1−k_n}^{1/2} = O(k_n^{3/2−2β} l²(k_n)), (52)

and, since Σ_{i=k_n}^{2k_n−1} |d_i| ≤ 2 Σ_{i=0}^{2k_n−1} a_i = O(k_n a_{k_n}) and A_{i+1} = O(k_n a_{k_n}²),

Σ_{i=k_n}^{2k_n−1} |d_i| A_{i+1}^{1/2} = O(k_n^{1/2} a_{k_n}) Σ_{i=k_n}^{2k_n−1} |d_i| = O(k_n^{3/2−2β} l²(k_n)). (53)

By Theorem 1 of Wu (2007) we have

‖Σ_{i=1}^n [E(X_i² − X_i X_{i−k_n} | F_{i−k_n}) − γ_0 + γ_{k_n}]‖ ≤ √n Σ_{i=k_n}^∞ ‖P_0(X_i² − X_i X_{i−k_n})‖,

which, by inequalities (48)–(53), implies (47).
3.4. Proof of Theorem 5

As in (26), we define

V_{j,k} = X_j X_{j−k} − γ_k − Σ_{l=0}^∞ a_l a_{l+k} (ε_{j−k−l}² − 1). (54)

A careful check of the proof of Theorem 2 implies that (6) holds if X_i X_{i−k} − γ_k therein is replaced by V_{i,k}. Indeed, if X_i X_{i−k} in Lemma 1 is replaced by V_{i,k}, then (14) becomes ‖P_0 V_{i,k}‖ ≤ a_i A_{i−k+1}^{1/2} + a_{i−k} A_{i+1}^{1/2} under the condition ε_i ∈ L², and we do not need to impose ε_i ∈ L⁴. Also, (15) and (16) hold with X_i X_{i−k} therein replaced by V_{i,k}, and the approximating martingale differences D_{l,k} in (17) now become

D*_{l,k} = ε_l Σ_{j=−∞}^{−1} (γ_{k+j} + γ_{k−j}) ε_{l+j}.

The proof of Theorem 2 is still valid if we replace M_{n,k} by M*_{n,k} = Σ_{l=1}^n D*_{l,k}. Let p satisfy α > p > max(1, αλ) and (2β − 1)(1 − λ) + α^{−1} > p^{−1}. Since β > 1/2 and λ ∈ (0, 1), such a p always exists. Since ε_i² − 1 satisfies (10) and p < α, E|ε_i² − 1|^p < ∞.

(i) By the argument above, it suffices to show that

Q_n := Σ_{j=1}^n Σ_{l=0}^∞ a_l a_{l+k_n} (ε_{j−k_n−l}² − 1) = Σ_{g∈Z} (ε_{g−k_n}² − 1) Σ_{j=1}^n a_{j−g} a_{j−g+k_n}

satisfies ‖Q_n‖_p = o(√n). By Burkholder's and Minkowski's inequalities,

‖Q_n‖_p^p ≤ C_p ‖ε_0² − 1‖_p^p Σ_{g∈Z} (Σ_{j=1}^n a_{j−g} a_{j−g+k_n})^p = O(1) Σ_{g=−∞}^0 (Σ_{j=1}^n a_{j−g} a_{j−g+k_n})^p + O(n) (Σ_{j=0}^∞ a_j a_{j+k_n})^p.

Since λ > (α^{−1} − 1/2)/(2β − 1), we can choose a p < α such that p^{−1} + λ(1 − 2β) < 1/2. So

n (Σ_{j=0}^∞ a_j a_{j+k_n})^p = O(n γ̃_{k_n}^p) = O{[n^{1/p} k_n^{1−2β} l²(k_n)]^p} = o(n^{p/2}),

since k_n = n^λ l_1(n) and l_1 is a slowly varying function. Hence, similarly,

Σ_{g=1−n}^0 (Σ_{j=1}^n a_{j−g} a_{j−g+k_n})^p ≤ n (Σ_{j=0}^∞ a_j a_{j+k_n})^p = o(n^{p/2}).

If g ≤ −n, by properties of slowly varying functions, for 1 ≤ j ≤ n and k_n < n, a_{j−g} a_{j−g+k_n} = O(a_{|g|}²). Hence,

Σ_{g=−∞}^{−n} (Σ_{j=1}^n a_{j−g} a_{j−g+k_n})^p = Σ_{g=−∞}^{−n} O[(n a_{|g|}²)^p] = O(n^{p+1} a_n^{2p}) = o(n^{p/2})

in view of 1/p + 1/2 < 2β, since 1 < p < 2 and β > 3/4.
(ii) We first show that (11) holds with h = 1. Introduce

T_n = T_{n,k_n} = Σ_{j=1}^n [ Σ_{l=0}^∞ a_l a_{l+k_n} (ε_{j−k_n−l}² − 1) − γ_{k_n} (ε_{j−k_n}² − 1) ].

Under 1/2 < λ(1 − 2β) + α^{−1}, we have n^{1/2} = o[γ_{k_n} n^{1/α} l_0(n)]. Since (6) holds with X_i X_{i−k_n} − γ_{k_n} therein replaced by V_{i,k_n}, by (10), it suffices to show that

‖T_n‖_p = o[γ_{k_n} n^{1/α} l_0(n)], (55)

where

T*_n := Σ_{j=1}^n [ Σ_{l=0}^∞ a_l a_{l+k_n} (ε_{j−l}² − 1) − γ_{k_n} (ε_j² − 1) ]

has the same distribution as T_n. To this end, note that the P_l T*_n, l = ..., n − 1, n, are martingale differences; by Burkholder's and Minkowski's inequalities, we have

‖T*_n‖_p^p ≤ C_p (Σ_{l=−∞}^0 + Σ_{l=1}^n) ‖P_l T*_n‖_p^p ≤ C_p ‖ε_0² − 1‖_p^p [ Σ_{l=−∞}^0 (Σ_{j=1}^n a_{j−l} a_{j−l+k_n})^p + Σ_{l=1}^n (Σ_{j=l}^∞ a_j a_{j+k_n})^p ]. (56)

We shall apply the technique in (28)–(35). If j ≥ k_n then a_j a_{j+k_n} = O(a_j²). Hence,

Σ_{l=k_n}^n (Σ_{j=l}^∞ a_j a_{j+k_n})^p = Σ_{l=k_n}^n O[(Σ_{j=l}^∞ a_j²)^p] = Σ_{l=k_n}^n O[(l a_l²)^p] = Σ_{l=k_n}^n O[l^{p(1−2β)} l^{2p}(l)]. (57)

If p(1 − 2β) > −1 then, by Karamata's theorem, the above term is O[n^{1+p(1−2β)} l^{2p}(n)], which is o{[k_n^{1−2β} l²(k_n) n^{1/α} l_0(n)]^p} = o{[γ_{k_n} n^{1/α} l_0(n)]^p} since 1/p + 1 − 2β < λ(1 − 2β) + α^{−1}. If p(1 − 2β) ≤ −1, it is easily seen that the above term is o(n^{p/2}), which is o{[γ_{k_n} n^{1/α} l_0(n)]^p} since 1/2 < α^{−1} + λ(1 − 2β). Since λ < p/α, we have

Σ_{l=0}^{k_n−1} (Σ_{j=l}^∞ a_j a_{j+k_n})^p = O(k_n γ̃_{k_n}^p) = o{[γ_{k_n} n^{1/α} l_0(n)]^p}. (58)

If l ≥ n and 1 ≤ j ≤ n, then (Σ_{j=1}^n a_{j+l} a_{j+l+k_n})^p = O[(n a_l²)^p]. By Karamata's theorem,

Σ_{l=n}^∞ (Σ_{j=1}^n a_{j+l} a_{j+l+k_n})^p = Σ_{l=n}^∞ O[(n a_l²)^p] = O(n^{p+1} a_n^{2p}) = o{[γ_{k_n} n^{1/α} l_0(n)]^p}, (59)
since 1 + p(1 − 2β) < pλ(1 − 2β) + p/α. Hence, by (56)–(59) we have

‖T*_n‖_p^p ≤ C_p Σ_{l=0}^∞ (Σ_{j=1}^n a_{j+l} a_{j+l+k_n})^p + C_p Σ_{l=1}^n (Σ_{j=l}^∞ a_j a_{j+k_n})^p = o{[γ_{k_n} n^{1/α} l_0(n)]^p},

which implies (55) and, hence, case (ii) with h = 1. For the case with h > 1, let U_k = γ_k Σ_{j=1}^n (ε_{j−k}² − 1). By (1), γ_{k_n} − γ_{k_n+h} = o(γ_{k_n}). So (11) follows from (55) and

U_{k_n} − U_{k_n+h} = (γ_{k_n} − γ_{k_n+h}) Σ_{j=1}^n (ε_{j−k_n}² − 1) + γ_{k_n+h} o_P(n^{1/α} l_0(n)) = o_P(γ_{k_n} n^{1/α} l_0(n)).

3.5. Proof of Theorem 6

The argument in the proof of Theorem 5 can be easily modified to prove Theorem 6. For V_{j,k} defined in (54), under 1/2 < β < 3/4, we similarly have the noncentral limit theorem Σ_{j=1}^n V_{j,k}/σ_{n,2} ⇒ R_{2,β}. Then we need to compare the magnitudes of n^{2−2β} l²(n) and γ_{k_n} n^{1/α} l_0(n). Under (i), the former is larger, and we have the noncentral limit theorem (9); under (ii), we have the convergence to the stable distribution in (11). The details are omitted since no essential extra difficulties are involved.

3.6. Proof of Corollary 1

By Lemma 4, ‖Σ_{i=1}^m X_i‖ ~ σ_{m,1}. Since γ̌_k = n^{−1} Σ_{i=k+1}^n X_i X_{i−k}, by simple algebra,

E|n(γ̂_{k_n} − γ̌_{k_n} + X̄_n²)| ≤ E|X̄_n Σ_{i=n−k_n+1}^n X_i| + E|X̄_n Σ_{i=1}^{k_n} X_{i−k_n}| + k_n E X̄_n² ≤ 2 σ_{n,1} σ_{k_n,1}/n + k_n σ_{n,1}²/n² = o[n^{2−2β} l²(n)], (60)

in view of k_n = o(n). Let Y_{n,r} be as given in (61), below. Then Y_{n,1} = n X̄_n and

n γ̌_0 = Σ_{i=1}^n X_i² = 2 Y_{n,2} + Σ_{i=−∞}^n (Σ_{t=max(i,1)}^n a_{t−i}²) ε_i².

By Lemma 4, below, we have the joint convergence (Y_{n,1}/σ_{n,1}, Y_{n,2}/σ_{n,2}) ⇒ (R_{1,β}, R_{2,β}). Hence, by (60), we have (13) in view of (41) and, by elementary calculations,

σ_{n,1}²/(n σ_{n,2}) → (3 − 4β)^{1/2}/[(1 − β)^{1/2}(3 − 2β)].

Under (ii) of Theorem 6, since n^{2−2β} l²(n) = o[γ_{k_n} n^{1/α} l_0(n)], it is easily seen that (11) still holds if X_i therein is replaced by X_i − X̄_n.
Lemma 4. Assume that $E(\varepsilon_i) = 0$ and $\varepsilon_i \in L^2$. Recall (8) for $\sigma_{n,r}$. Let
$$Y_{n,r} = \sum_{t=1}^n \sum_{0 \le j_1 < \cdots < j_r} \prod_{s=1}^r a_{j_s} \varepsilon_{t-j_s}, \quad r \ge 1, \qquad Y_{n,0} = n. \tag{61}$$
For $r \in \mathbb{N}$ with $r(2\beta - 1) < 1$, we have $E(Y_{n,r}^2) \sim \sigma_{n,r}^2$ and the joint convergence
$$\Big(\frac{Y_{n,1}}{\sigma_{n,1}}, \ldots, \frac{Y_{n,r}}{\sigma_{n,r}}\Big) \Rightarrow (R_{1,\beta}, \ldots, R_{r,\beta}). \tag{62}$$

Lemma 4 can be proved by using the same argument as that of Lemma 5 in Surgailis (1982). A careful check of the proof of his Lemma 5 suggests that the moment condition $\varepsilon_i \in L^2$ suffices and that the joint convergence (62) holds. We omit the details of the derivation.

Acknowledgements

We are grateful to two anonymous referees for their helpful comments.

References

Anderson, T. W. (1971). The Statistical Analysis of Time Series. John Wiley, New York.
Anderson, T. W. and Walker, A. M. (1964). On the asymptotic distribution of the autocorrelations of a sample from a linear stochastic process. Ann. Math. Statist. 35.
Avram, F. and Taqqu, M. S. (1987). Noncentral limit theorems and Appell polynomials. Ann. Prob. 15.
Bartlett, M. S. (1946). On the theoretical specification and sampling properties of autocorrelated time-series. J. R. Statist. Soc. 8.
Berlinet, A. and Francq, C. (1999). Estimation of the asymptotic behavior of sample autocovariances and empirical autocorrelations of multivariate processes. Canad. J. Statist. 27.
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1989). Regular Variation. Cambridge University Press.
Brockwell, P. J. and Davis, R. A. (1991). Time Series: Theory and Methods, 2nd edn. Springer, New York.
Chow, Y. S. and Teicher, H. (1988). Probability Theory, 2nd edn. Springer, New York.
Chung, C. F. (2002). Sample means, sample autocovariances, and linear regression of stationary multivariate long memory processes. Econometric Theory 18.
Csörgő, S. and Mielniczuk, J. (2000). The smoothing dichotomy in random-design regression with long-memory errors based on moving averages. Statistica Sinica 10.
Dai, W. (2004). Asymptotics of the sample mean and sample covariance of long-range-dependent series. In Stochastic Methods and Their Applications (J. Appl. Prob. Spec. Vol. 41A), eds J. Gani and E. Seneta, Applied Probability Trust, Sheffield.
Dehling, H. and Taqqu, M. (1991). Bivariate symmetric statistics of long-range dependent observations. J. Statist. Planning Infer. 28.
Giraitis, L. and Taqqu, M. S. (1999). Convergence of normalized quadratic forms. J. Statist. Planning Infer. 80.
Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and Its Application. Academic Press, New York.
Hannan, E. J. (1970). Multiple Time Series. John Wiley, New York.
Hannan, E. J. (1976). The asymptotic distribution of serial covariances. Ann. Statist. 4.
Harris, D., McCabe, B. and Leybourne, S. (2003). Some limit theory for autocovariances whose order depends on sample size. Econometric Theory 19.
Ho, H.-C. and Hsing, T. (1997). Limit theorems for functionals of moving averages. Ann. Prob. 25.
Horváth, L. and Kokoszka, P. (2008). Sample autocovariances of long-memory time series. Bernoulli 14.
Hosking, J. R. M. (1996). Asymptotic distributions of the sample mean, autocovariances, and autocorrelations of long-memory time series. J. Econometrics 73.
Hsieh, M.-C., Hurvich, C. M. and Soulier, P. (2007). Asymptotics for duration-driven long range dependent processes. J. Econometrics 141.
Keenan, D. M. (1997). A central limit theorem for m(n) autocovariances. J. Time Ser. Anal. 18.
Major, P. (1981). Multiple Wiener-Itô Integrals (Lecture Notes Math. 849). Springer, New York.
Mikosch, T., Resnick, S., Rootzén, H. and Stegeman, A. (2002). Is network traffic approximated by stable Lévy motion or fractional Brownian motion? Ann. Appl. Prob. 12, 23-68.
Phillips, P. C. B. and Solo, V. (1992). Asymptotics for linear processes. Ann. Statist. 20.
Porat, B. (1987). Some asymptotic properties of the sample covariances of Gaussian autoregressive moving-average processes. J. Time Ser. Anal. 8.
Rosenblatt, M. (1979). Some limit theorems for partial sums of quadratic forms in stationary Gaussian variables. Z. Wahrscheinlichkeitsth. 49.
Sly, A. and Heyde, C. (2008). Nonstandard limit theorem for infinite variance functionals. Ann. Prob. 36.
Surgailis, D. (1982). Zones of attraction of self-similar multiple integrals. Lithuanian Math. J. 22.
Surgailis, D. (2004). Stable limits of sums of bounded functions of long-memory moving averages with finite variance. Bernoulli 10.
Taqqu, M. S. (1979). Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrscheinlichkeitsth. 50.
Wu, W. B. (2007). Strong invariance principles for dependent random variables. Ann. Prob. 35.
Wu, W. B. (2009). An asymptotic theory for sample covariances of Bernoulli shifts. Stoch. Process. Appl. 119.
Wu, W. B. and Min, W. (2005). On linear processes with dependent innovations. Stoch. Process. Appl. 115.
Wu, W. B. and Woodroofe, M. (2004). Martingale approximations for sums of stationary processes. Ann. Prob. 32.
Yajima, Y. (1993). Asymptotic properties of estimates in incorrect ARMA models for long-memory time series. In New Directions in Time Series Analysis, Part II (IMA Vol. Math. Appl. 46), eds D. Brillinger et al., Springer, New York.
More informationWLLN for arrays of nonnegative random variables
WLLN for arrays of nonnegative random variables Stefan Ankirchner Thomas Kruse Mikhail Urusov November 8, 26 We provide a weak law of large numbers for arrays of nonnegative and pairwise negatively associated
More information