Journal of Mathematical Research & Exposition Jul., 2011, Vol. 31, No. 4, pp. 687–697
DOI:10.3770/j.issn:1000-341X.2011.04.014
Http://jmre.dlut.edu.cn

Complete q-moment Convergence of Moving Average Processes under ϕ-mixing Assumption

Xing Cai ZHOU 1,2, Jin Guan LIN 1,*
1. Department of Mathematics, Southeast University, Nanjing, Jiangsu 210096, P. R. China;
2. Department of Mathematics and Computer Science, Tongling University, Tongling, Anhui 244000, P. R. China

Abstract  In this paper, we study the complete q-moment convergence of moving average processes under the ϕ-mixing assumption. The results obtained not only extend some previously known results but also improve them.

Keywords  moving average; ϕ-mixing; complete moment convergence.

Document code  A
MR(2010) Subject Classification  60G50; 60F15
Chinese Library Classification  O211.4

1. Introduction

We assume that $\{Y_i, -\infty < i < \infty\}$ is a doubly infinite sequence of identically distributed random variables. Let $\{a_i, -\infty < i < \infty\}$ be an absolutely summable sequence of real numbers, and set

$$X_n = \sum_{i=-\infty}^{\infty} a_{i+n} Y_i, \qquad n \ge 1. \tag{1.1}$$

Under independence assumptions, i.e., when $\{Y_i, -\infty < i < \infty\}$ is a sequence of independent random variables, many limiting results have been obtained for the moving average process $\{X_n, n \ge 1\}$. For example, Ibragimov [5] established the central limit theorem, Burton and Dehling [2] obtained a large deviation principle, and Li et al. [7] obtained the complete convergence. Note that even if $\{Y_i, -\infty < i < \infty\}$ is a sequence of i.i.d. random variables, the moving average random variables $\{X_n, n \ge 1\}$ are dependent; this kind of dependence is called weak dependence. Recently, some limiting results for moving average processes based on dependent sequences have been obtained. For example, Zhang [13] discussed the complete convergence of moving average processes under the ϕ-mixing assumption, Yu and Wang [12] and Baek et al.
[1] obtained the complete convergence of moving average processes under negative dependence assumptions, and Li and Zhang [9]

Received September 12, 2009; Accepted January 18, 2010
Supported by the National Natural Science Foundation of China (Grant Nos. 18711; 197197), the Anhui Provincial Natural Science Foundation of China (Grant No. 11466M4) and the Anhui Province College Excellent Young Talents Foundation of China (Grant No. 29SQRZ176ZD).
* Corresponding author
E-mail address: jglin@seu.edu.cn (J. G. LIN); xczhou@nuaa.edu.cn (X. C. ZHOU)
discussed the complete convergence of moving average processes under negatively associated random variables. The following theorem is due to [13].

Theorem 1.1 ([13])  Let $h$ be a function slowly varying at infinity, $1 \le p < 2$ and $r \ge 1$. Suppose that $\{X_n, n \ge 1\}$ is a moving average process based on a sequence $\{Y_i, -\infty < i < \infty\}$ of identically distributed ϕ-mixing random variables with $\sum_{m=1}^{\infty} \varphi^{1/2}(m) < \infty$. If $EY_1 = 0$ and $E|Y_1|^{rp} h(|Y_1|^p) < \infty$, then

$$\sum_{n=1}^{\infty} n^{r-2} h(n)\, P\Big(\Big|\sum_{k=1}^{n} X_k\Big| \ge \varepsilon n^{1/p}\Big) < \infty, \quad \text{for all } \varepsilon > 0.$$

Zhang [13] did not discuss the complete convergence for the maximums and supremums of the partial sums, as was done for ϕ-mixing random variables by Shao [11]. Chen et al. [3] gave the following two theorems on the complete convergence for the cases $r > 1$ and $r = 1$, respectively.

Theorem 1.2 ([3])  Let $h$ be a function slowly varying at infinity, $1 \le p < 2$ and $r > 1$. Suppose that $\{X_n, n \ge 1\}$ is a moving average process based on a sequence $\{Y_i, -\infty < i < \infty\}$ of identically distributed ϕ-mixing random variables. If $EY_1 = 0$ and $E|Y_1|^{rp} h(|Y_1|^p) < \infty$, then

$$\sum_{n=1}^{\infty} n^{r-2} h(n)\, P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| \ge \varepsilon n^{1/p}\Big) < \infty, \quad \text{for all } \varepsilon > 0,$$

and

$$\sum_{n=1}^{\infty} n^{r-2} h(n)\, P\Big(\sup_{k\ge n}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| \ge \varepsilon\Big) < \infty, \quad \text{for all } \varepsilon > 0.$$

Theorem 1.3 ([3])  Let $h$ be a function slowly varying at infinity and $1 \le p < 2$. Assume that $\sum_{i=-\infty}^{\infty} |a_i|^{\theta} < \infty$, where $\theta$ belongs to $(0, 1)$ if $p = 1$ and $\theta = 1$ if $1 < p < 2$. Suppose that $\{X_n, n \ge 1\}$ is a moving average process based on a sequence $\{Y_i, -\infty < i < \infty\}$ of identically distributed ϕ-mixing random variables with $\sum_{m=1}^{\infty} \varphi^{1/2}(m) < \infty$. If $EY_1 = 0$ and $E|Y_1|^{p} h(|Y_1|^p) < \infty$, then

$$\sum_{n=1}^{\infty} \frac{h(n)}{n}\, P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| \ge \varepsilon n^{1/p}\Big) < \infty, \quad \text{for all } \varepsilon > 0.$$

Kim and Ko [6] extended Theorem 1.1 to the following result on the complete moment convergence. Let $x_+ = \max(x, 0)$.

Theorem 1.4 ([6])  Let $h$ be a function slowly varying at infinity, $1 \le p < 2$ and $r > 1 + p/2$. Suppose that $\{X_k, k \ge 1\}$ is a moving average process based on a sequence $\{Y_i, -\infty < i < \infty\}$ of identically distributed ϕ-mixing random variables with $EY_1 = 0$, $EY_1^2 < \infty$ and $\sum_{m=1}^{\infty} \varphi^{1/2}(m) < \infty$. If $E|Y_1|^{rp} h(|Y_1|^p) < \infty$, then

$$\sum_{n=1}^{\infty} n^{r-2-1/p} h(n)\, E\Big(\Big|\sum_{k=1}^{n} X_k\Big| - \varepsilon n^{1/p}\Big)_+ < \infty, \quad \text{for all } \varepsilon > 0.$$
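All of the results above concern the process (1.1). As a concrete aside, the structure $X_n = \sum_i a_{i+n} Y_i$ is easy to experiment with numerically when only finitely many coefficients $a_i$ are nonzero. The sketch below is illustrative only; the helper name, the two-term coefficient window, and the deterministic "innovations" are mine, not the paper's.

```python
# Sketch: a moving average process X_n = sum_i a_{i+n} Y_i, n >= 1, for a
# coefficient sequence {a_i} that is zero outside a finite window.
# Names and example values are illustrative only.

def moving_average(a, y, n):
    """X_n = sum over i of a[i + n] * y[i], with a and y given as dicts
    mapping integer indices to reals (zero off their support)."""
    # reindex j = i + n: each coefficient a_j contributes a_j * Y_{j-n}
    return sum(coef * y.get(j - n, 0.0) for j, coef in a.items())

# two-term window a_0 = a_1 = 1 (all other a_i = 0), so X_n = Y_{-n} + Y_{1-n}
a = {0: 1.0, 1: 1.0}
y = {i: float(i) for i in range(-10, 11)}   # deterministic "innovations" Y_i = i

print([moving_average(a, y, n) for n in range(1, 4)])   # [-1.0, -3.0, -5.0]
```

With this window the process reduces to $X_n = Y_{-n} + Y_{1-n}$, which makes the dependence between successive $X_n$ (they share an innovation) easy to see even when the $Y_i$ are independent.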
Recently, Li and Spǎtaru [8] obtained the following refinement of complete convergence:

$$\sum_{n=1}^{\infty} n^{r-2} \int_{\varepsilon}^{\infty} P\big(|S_n - nb| > x^{1/q} n^{1/p}\big)\,\mathrm{d}x < \infty, \quad \text{for all } \varepsilon > 0,$$

under suitable moment conditions for a sequence of i.i.d. random variables $\{X_n, n \ge 1\}$ with partial sums $S_n = \sum_{i=1}^{n} X_i$, where $0 < p < 2$, $r \ge 1$, $q > 0$, and $b = EX_1$ if $rp \ge 1$ and $b = 0$ if $0 < rp < 1$. Chen and Wang [4] pointed out that the refinement of complete convergence and the complete q-moment convergence are equivalent: they showed that

$$\sum_{n=1}^{\infty} a_n \int_{\varepsilon}^{\infty} P\big(|Z_n| > x^{1/q} b_n\big)\,\mathrm{d}x < \infty \ \ \text{for all } \varepsilon > 0$$

and

$$\sum_{n=1}^{\infty} a_n\, E\big(b_n^{-1}|Z_n| - \varepsilon\big)_+^q < \infty \ \ \text{for all } \varepsilon > 0$$

are equivalent for any $a_n > 0$, $b_n > 0$, $q > 0$, and any sequence of random variables $\{Z_n, n \ge 1\}$. As is known, complete moment convergence describes the convergence rate of a sequence of random variables more precisely than complete convergence does. In this paper, we extend Theorems 1.2 and 1.3 on the complete convergence to the complete q-moment convergence; Theorem 1.4 is the special case $q = 1$ of our results. We use methods different from those in [6], and our results improve Theorem 1.4.

2. Main results and some lemmas

We suppose that $\{Y_i, -\infty < i < \infty\}$ is a sequence of identically distributed ϕ-mixing random variables, i.e.,

$$\varphi(m) = \sup_{k \ge 1}\ \sup\big\{\,|P(B \mid A) - P(B)| : A \in \mathcal{F}_{-\infty}^{k},\ P(A) > 0,\ B \in \mathcal{F}_{k+m}^{\infty}\,\big\} \to 0$$

as $m \to \infty$, where $\mathcal{F}_n^m = \sigma(Y_i,\ n \le i \le m)$ for $-\infty \le n \le m \le \infty$.

Recall that a real-valued function $h$, positive and measurable on $[0, \infty)$, is said to be slowly varying at infinity if for each $\lambda > 0$

$$\lim_{x \to \infty} \frac{h(\lambda x)}{h(x)} = 1.$$

We often use the following properties of slowly varying functions [3, 10]. If $h$ is a slowly varying function at infinity, then for any $a \le b$ and $t \ne -1$,

$$\int_{a}^{b} x^{t} h(x)\,\mathrm{d}x \le C\, x^{t+1} h(x)\Big|_{x=b} \ \ (t > -1), \qquad \int_{a}^{b} x^{t} h(x)\,\mathrm{d}x \le C\, x^{t+1} h(x)\Big|_{x=a} \ \ (t < -1),$$

where $C$ does not depend on $a$ and $b$; and for any $\lambda > 0$,

$$\max_{a \le x \le \lambda a} h(x) \le C(\lambda)\, h(\lambda a).$$

Of course, these inequalities hold only if the right-hand sides make sense.
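The defining limit of slow variation is easy to probe numerically. The concrete functions below, $h = \log$ (slowly varying) versus $h(x) = x^{0.1}$ (regularly varying of index $0.1$, hence not slowly varying), are my own illustration and are not taken from the paper.

```python
import math

# h is slowly varying at infinity if h(lam*x)/h(x) -> 1 for every fixed lam > 0.
def ratio(h, lam, x):
    return h(lam * x) / h(x)

log_h = math.log                    # slowly varying
pow_h = lambda x: x ** 0.1          # regularly varying, index 0.1: NOT slowly varying

for x in (1e3, 1e6, 1e12):
    print(ratio(log_h, 100.0, x))   # decreases toward 1 as x grows

print(ratio(pow_h, 100.0, 1e12))    # stays at 100**0.1 for every x
```

For $h = \log$ the ratio at $\lambda = 100$ is $(\log x + \log 100)/\log x$, visibly tending to $1$, while the power function's ratio is the constant $100^{0.1} \approx 1.585$, bounded away from $1$.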
Throughout the sequel, $C$ denotes a positive constant whose value may change at each occurrence, and $[x]$ denotes the largest integer not exceeding $x$. Now we state our main results and some lemmas. The proofs of the main results will be given in the next section.

Theorem 2.1  Let $h$ be a function slowly varying at infinity, $1 \le p < 2$, $r > 1$ and $q > 0$. Suppose that $\{X_n, n \ge 1\}$ is a moving average process based on a sequence $\{Y_i, -\infty < i < \infty\}$ of identically distributed ϕ-mixing random variables. If $EY_1 = 0$ and $E|Y_1|^{rp} h(|Y_1|^p) < \infty$, then

$$\sum_{n=1}^{\infty} n^{r-2-q/p} h(n)\, E\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| - \varepsilon n^{1/p}\Big)_+^q < \infty, \quad \text{for all } \varepsilon > 0, \tag{2.1}$$

and

$$\sum_{n=1}^{\infty} n^{r-2} h(n)\, E\Big(\sup_{k\ge n}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| - \varepsilon\Big)_+^q < \infty, \quad \text{for all } \varepsilon > 0. \tag{2.2}$$

Theorem 2.2  Let $h$ be a function slowly varying at infinity, $1 \le p < 2$ and $q > 0$. Assume that $\sum_{i=-\infty}^{\infty} |a_i|^{\theta} < \infty$, where $\theta$ belongs to $(0, 1)$ if $p = 1$ and $\theta = 1$ if $1 < p < 2$. Suppose that $\{X_n, n \ge 1\}$ is a moving average process based on a sequence $\{Y_i, -\infty < i < \infty\}$ of identically distributed ϕ-mixing random variables with $\sum_{m=1}^{\infty} \varphi^{1/2}(m) < \infty$. If $EY_1 = 0$ and $E|Y_1|^{p} h(|Y_1|^p) < \infty$, then

$$\sum_{n=1}^{\infty} n^{-1-q/p} h(n)\, E\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| - \varepsilon n^{1/p}\Big)_+^q < \infty, \quad \text{for all } \varepsilon > 0. \tag{2.3}$$

Remark  Theorems 2.1 and 2.2 give the complete q-moment convergence for the maximums and supremums of the partial sums, which is different from Theorem 1.4. Note that these results are difficult to obtain by the method of [6], even for $q = 1$. Theorem 2.1 provides the results without any mixing rate, and only $r > 1$ is required; its conditions are obviously weaker than those of Theorem 1.4. Theorem 2.2 gives the result for the case $r = 1$, which Kim and Ko [6] did not discuss.

The following lemmas from [11] will be useful. It is assumed in the lemmas that $\{Y_n, n \ge 1\}$ is a ϕ-mixing sequence and $S_k(n) = \sum_{i=k+1}^{k+n} Y_i$, $n \ge 1$, $k \ge 0$.

Lemma 2.1  Let $EY_i = 0$ and $EY_i^2 < \infty$ for all $i \ge 1$. Then for all $n \ge 1$ and $k \ge 0$ we have

$$E S_k^2(n) \le 8000\, n \exp\Big\{6 \sum_{i=1}^{[\log n]} \varphi^{1/2}(2^i)\Big\} \max_{k+1 \le i \le k+n} E Y_i^2.$$

Lemma 2.2  Suppose that there exists an array $\{C_{k,n},\ k \ge 0,\ n \ge 1\}$ of positive numbers such that $\max_{1\le i\le n} E S_k^2(i) \le C_{k,n}$ for every $k \ge 0$ and $n \ge 1$. Then for any $s \ge 2$, there exists $C = C(s, \varphi(\cdot))$ such that for any $k \ge 0$ and $n \ge 1$

$$E\Big(\max_{1\le i\le n} |S_k(i)|^{s}\Big) \le C\Big(C_{k,n}^{s/2} + E\big(\max_{k<i\le k+n} |Y_i|^{s}\big)\Big).$$
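The proofs in the next section repeatedly pass between a q-th truncated moment and a tail integral via the elementary identity $E\{(W-\varepsilon)_+\}^q = \int_0^\infty P(W > \varepsilon + x^{1/q})\,\mathrm{d}x$ for $W \ge 0$, the same device behind the Chen–Wang equivalence quoted in the introduction. The identity can be checked exactly for a discrete distribution; the uniform example below is mine, not the paper's.

```python
# Check E{(W - eps)_+}^q = integral_0^inf P(W > eps + x^(1/q)) dx
# for W uniform on {0, 1, 2, 3}.  Example values are illustrative only.
eps, q = 0.5, 2.0
atoms = [0.0, 1.0, 2.0, 3.0]
prob = 1.0 / len(atoms)

# left-hand side: the q-th moment of the positive part
lhs = sum(prob * max(w - eps, 0.0) ** q for w in atoms)

# right-hand side: P(W > eps + x^(1/q)) is a step function of x that drops
# at the breakpoints x = (w - eps)^q for atoms w > eps; integrate it exactly.
bps = sorted((w - eps) ** q for w in atoms if w > eps)
rhs, prev = 0.0, 0.0
for k, b in enumerate(bps):
    tail = (len(bps) - k) * prob      # P(W > eps + x^(1/q)) on (prev, b)
    rhs += (b - prev) * tail
    prev = b

print(lhs, rhs)   # both equal 2.1875
```

Both sides come out to $2.1875$ here, and the same bookkeeping (split the $x$-integral, bound the tail) is exactly what (3.2) below does with $W = \max_{1\le k\le n}|\sum_{j=1}^k X_j|$.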
3. Proof of main results

Proof of Theorem 2.1  First, we prove (2.1). For $x > 0$ put

$$Y_{xj} = Y_j I[|Y_j| \le x^{1/q}] - E Y_j I[|Y_j| \le x^{1/q}],$$

and let $l(n) = n^{r-2-q/p} h(n)$. Recall that

$$\sum_{k=1}^{n} X_k = \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_j.$$

Since $\sum_{i=-\infty}^{\infty} |a_i| < \infty$ and $EY_j = 0$, we have for $x \ge n^{q/p}$,

$$\begin{aligned}
x^{-1/q} \max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} E Y_j I[|Y_j| \le x^{1/q}]\Big|
&= x^{-1/q} \max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} E Y_j I[|Y_j| > x^{1/q}]\Big| \quad (EY_j = 0)\\
&\le x^{-1/q} \sum_{i=-\infty}^{\infty} |a_i| \sum_{j=i+1}^{i+n} E|Y_j|\, I[|Y_j| > x^{1/q}]\\
&\le C n x^{-1/q} E|Y_1|\, I[|Y_1| > x^{1/q}]\\
&\le C n x^{-p/q} E|Y_1|^p I[|Y_1| > x^{1/q}]\\
&\le C\, E|Y_1|^p I[|Y_1| > x^{1/q}] \to 0, \quad \text{as } x \to \infty,
\end{aligned}$$

since $n x^{-p/q} \le 1$ when $x \ge n^{q/p}$. Hence for all $x \ge n^{q/p}$ with $x$ large enough one gets

$$x^{-1/q} \max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} E Y_j I[|Y_j| \le x^{1/q}]\Big| < \frac{1}{4}. \tag{3.1}$$

Then

$$\begin{aligned}
\sum_{n=1}^{\infty} l(n)\, E\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| - \varepsilon n^{1/p}\Big)_+^q
&= \sum_{n=1}^{\infty} l(n) \int_{0}^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > \varepsilon n^{1/p} + x^{1/q}\Big)\,\mathrm{d}x\\
&= \sum_{n=1}^{\infty} l(n) \Big(\int_{0}^{n^{q/p}} + \int_{n^{q/p}}^{\infty}\Big) P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > \varepsilon n^{1/p} + x^{1/q}\Big)\,\mathrm{d}x\\
&\le \sum_{n=1}^{\infty} n^{q/p}\, l(n)\, P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > \varepsilon n^{1/p}\Big)\\
&\quad + \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > x^{1/q}\Big)\,\mathrm{d}x\\
&=: I_1 + I_2. \tag{3.2}
\end{aligned}$$

For $I_1$, by Theorem 1.2, we have

$$I_1 = \sum_{n=1}^{\infty} n^{r-2} h(n)\, P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > \varepsilon n^{1/p}\Big) < \infty. \tag{3.3}$$

For $I_2$, from (3.1), we have

$$\begin{aligned}
I_2 \le{}& C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j I[|Y_j| > x^{1/q}]\Big| > \frac{x^{1/q}}{2}\Big)\,\mathrm{d}x\\
&+ C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{xj}\Big| > \frac{x^{1/q}}{4}\Big)\,\mathrm{d}x\\
=:{}& I_{21} + I_{22}. \tag{3.4}
\end{aligned}$$

For $I_{21}$, by Markov's inequality and the mean-value theorem, we have

$$\begin{aligned}
I_{21} &\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-1/q}\, E\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j I[|Y_j| > x^{1/q}]\Big|\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n)\, n \int_{n^{q/p}}^{\infty} x^{-1/q} E|Y_1|\, I[|Y_1| > x^{1/q}]\,\mathrm{d}x\\
&= C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} \int_{m^{q/p}}^{(m+1)^{q/p}} x^{-1/q} E|Y_1|\, I[|Y_1| > x^{1/q}]\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} m^{q/p-1-1/p} E|Y_1|\, I[|Y_1| > m^{1/p}]\\
&= C \sum_{m=1}^{\infty} m^{q/p-1-1/p} E|Y_1|\, I[|Y_1| > m^{1/p}] \sum_{n=1}^{m} n^{r-1-q/p} h(n)\\
&\le C \sum_{m=1}^{\infty} m^{r-1-1/p} h(m)\, E|Y_1|\, I[|Y_1| > m^{1/p}]\\
&= C \sum_{m=1}^{\infty} m^{r-1-1/p} h(m) \sum_{k=m}^{\infty} E|Y_1|\, I[k < |Y_1|^p \le k+1]\\
&= C \sum_{k=1}^{\infty} E|Y_1|\, I[k < |Y_1|^p \le k+1] \sum_{m=1}^{k} m^{r-1-1/p} h(m)\\
&\le C \sum_{k=1}^{\infty} k^{r-1/p} h(k)\, E|Y_1|\, I[k < |Y_1|^p \le k+1]\\
&\le C\, E|Y_1|^{rp} h(|Y_1|^p) < \infty. \tag{3.5}
\end{aligned}$$

For $I_{22}$, by Markov's and Hölder's inequalities, the mean-value theorem, and Lemmas 2.1 and 2.2, one gets that for any $s \ge 2$,

$$\begin{aligned}
I_{22} &\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-s/q}\, E\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{xj}\Big|^{s}\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-s/q} \Big(\sum_{i=-\infty}^{\infty} |a_i|\Big)^{s-1} \sum_{i=-\infty}^{\infty} |a_i|\, E\max_{1\le k\le n}\Big|\sum_{j=i+1}^{i+k} Y_{xj}\Big|^{s}\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-s/q} \Big[\Big(n \exp\Big\{6\sum_{j=1}^{[\log n]} \varphi^{1/2}(2^j)\Big\} E Y_{x1}^2\Big)^{s/2} + n\, E|Y_{x1}|^{s}\Big]\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-s/q} \Big(n \exp\Big\{6\sum_{j=1}^{[\log n]} \varphi^{1/2}(2^j)\Big\} E Y_1^2 I[|Y_1| \le x^{1/q}]\Big)^{s/2}\,\mathrm{d}x\\
&\quad + C \sum_{n=1}^{\infty} l(n)\, n \int_{n^{q/p}}^{\infty} x^{-s/q} E|Y_1|^{s} I[|Y_1| \le x^{1/q}]\,\mathrm{d}x\\
&=: I_{221} + I_{222}.
\end{aligned}$$

Note that $\varphi(m) \to 0$ as $m \to \infty$, hence $\sum_{j=1}^{[\log n]} \varphi^{1/2}(2^j) = o(\log n)$. Therefore, for any $\lambda > 0$ and $t > 0$,

$$\exp\Big\{\lambda \sum_{j=1}^{[\log n]} \varphi^{1/2}(2^j)\Big\} = o(n^t). \tag{3.6}$$

For $I_{221}$, we consider the following two cases.

If $rp < 2$, take $s > 2$ and let $u = st/2$. Since $r > 1$ and $s > 2$, we have $r - (r-1)s/2 < 1$; then take $t > 0$ small enough that $r - (r-1)s/2 + u < 1$. By (3.6) and the mean-value theorem, we have

$$\begin{aligned}
I_{221} &\le C \sum_{n=1}^{\infty} n^{(1+t)s/2}\, l(n) \int_{n^{q/p}}^{\infty} x^{-s/q} \big(E Y_1^2 I[|Y_1| \le x^{1/q}]\big)^{s/2}\,\mathrm{d}x\\
&= C \sum_{n=1}^{\infty} n^{s/2+u}\, l(n) \sum_{m=n}^{\infty} \int_{m^{q/p}}^{(m+1)^{q/p}} x^{-s/q} \big(E Y_1^2 I[|Y_1| \le x^{1/q}]\big)^{s/2}\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} n^{s/2+u}\, l(n) \sum_{m=n}^{\infty} m^{q/p-1-s/p} \big(E Y_1^2 I[|Y_1| \le (m+1)^{1/p}]\big)^{s/2}\\
&= C \sum_{m=1}^{\infty} m^{q/p-1-s/p} \big(E Y_1^2 I[|Y_1| \le (m+1)^{1/p}]\big)^{s/2} \sum_{n=1}^{m} n^{r-2-q/p+s/2+u} h(n)\\
&\le C \sum_{m=1}^{\infty} m^{r-2+s/2+u-s/p} h(m) \big(E|Y_1|^{rp}\,|Y_1|^{2-rp} I[|Y_1| \le (m+1)^{1/p}]\big)^{s/2}\\
&\le C \sum_{m=1}^{\infty} m^{r-(r-1)s/2+u-2} h(m) \big(E|Y_1|^{rp} I[|Y_1| \le (m+1)^{1/p}]\big)^{s/2}\\
&\le C \sum_{m=1}^{\infty} m^{r-(r-1)s/2+u-2} h(m) < \infty. \tag{3.7}
\end{aligned}$$

If $rp \ge 2$, take $s > \max\{2p(r-1)/(2-p),\ q\}$ and let $u = st/2$. We have $r - s/p + s/2 < 1$; then take $t > 0$ small enough that $r - s/p + s/2 + u < 1$. In this case, we note that $E Y_1^2 < \infty$. Therefore, since $s > q$, one gets

$$\begin{aligned}
I_{221} &\le C \sum_{n=1}^{\infty} n^{(1+t)s/2}\, l(n) \int_{n^{q/p}}^{\infty} x^{-s/q} \big(E Y_1^2 I[|Y_1| \le x^{1/q}]\big)^{s/2}\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} n^{r-s/p+(1+t)s/2-2} h(n)\\
&= C \sum_{n=1}^{\infty} n^{r-s/p+s/2+u-2} h(n) < \infty. \tag{3.8}
\end{aligned}$$

So, by (3.7) and (3.8) we get

$$I_{221} < \infty. \tag{3.9}$$

For $I_{222}$, by the mean-value theorem, we have

$$\begin{aligned}
I_{222} &\le C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} \int_{m^{q/p}}^{(m+1)^{q/p}} x^{-s/q} E|Y_1|^{s} I[|Y_1| \le x^{1/q}]\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} m^{q/p-1-s/p} E|Y_1|^{s} I[|Y_1| \le (m+1)^{1/p}]\\
&= C \sum_{m=1}^{\infty} m^{q/p-1-s/p} E|Y_1|^{s} I[|Y_1| \le (m+1)^{1/p}] \sum_{n=1}^{m} n^{r-1-q/p} h(n)\\
&\le C \sum_{m=1}^{\infty} m^{r-1-s/p} h(m)\, E|Y_1|^{s} I[|Y_1| \le (m+1)^{1/p}]\\
&\le C\, E|Y_1|^{rp} h(|Y_1|^p) < \infty. \tag{3.10}
\end{aligned}$$

Thus, (2.1) follows from (3.2)–(3.6), (3.9) and (3.10).

Now, we prove (2.2). We have

$$\begin{aligned}
&\sum_{n=1}^{\infty} n^{r-2} h(n)\, E\Big(\sup_{k\ge n}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| - \varepsilon\Big)_+^q\\
&\quad= \sum_{n=1}^{\infty} n^{r-2} h(n) \int_{0}^{\infty} P\Big(\sup_{k\ge n}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| > \varepsilon + t^{1/q}\Big)\,\mathrm{d}t\\
&\quad= \sum_{i=1}^{\infty} \sum_{n=2^{i-1}}^{2^{i}-1} n^{r-2} h(n) \int_{0}^{\infty} P\Big(\sup_{k\ge n}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| > \varepsilon + t^{1/q}\Big)\,\mathrm{d}t\\
&\quad\le C \sum_{i=1}^{\infty} 2^{i(r-1)} h(2^i) \int_{0}^{\infty} P\Big(\sup_{k\ge 2^{i-1}}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| > \varepsilon + t^{1/q}\Big)\,\mathrm{d}t\\
&\quad\le C \sum_{i=1}^{\infty} 2^{i(r-1)} h(2^i) \sum_{l=i}^{\infty} \int_{0}^{\infty} P\Big(\max_{2^{l-1}\le k< 2^{l}}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| > \varepsilon + t^{1/q}\Big)\,\mathrm{d}t\\
&\quad= C \sum_{l=1}^{\infty} \int_{0}^{\infty} P\Big(\max_{2^{l-1}\le k< 2^{l}}\Big|k^{-1/p}\sum_{j=1}^{k} X_j\Big| > \varepsilon + t^{1/q}\Big)\,\mathrm{d}t\ \sum_{i=1}^{l} 2^{i(r-1)} h(2^i)\\
&\quad\le C \sum_{l=1}^{\infty} 2^{l(r-1)} h(2^l) \int_{0}^{\infty} P\Big(\max_{1\le k< 2^{l}}\Big|\sum_{j=1}^{k} X_j\Big| > (\varepsilon + t^{1/q})\, 2^{(l-1)/p}\Big)\,\mathrm{d}t \quad (\text{letting } y = 2^{(l-1)q/p}\, t)\\
&\quad= C \sum_{l=1}^{\infty} 2^{l(r-1)-(l-1)q/p} h(2^l) \int_{0}^{\infty} P\Big(\max_{1\le k< 2^{l}}\Big|\sum_{j=1}^{k} X_j\Big| > \varepsilon\, 2^{(l-1)/p} + y^{1/q}\Big)\,\mathrm{d}y\\
&\quad\le C \sum_{l=1}^{\infty} 2^{l(r-1-q/p)} h(2^l)\, E\Big(\max_{1\le k\le 2^{l}}\Big|\sum_{j=1}^{k} X_j\Big| - \varepsilon'\, 2^{l/p}\Big)_+^q \quad (\text{letting } \varepsilon' = 2^{-1/p}\varepsilon)\\
&\quad\le C \sum_{n=1}^{\infty} n^{r-2-q/p} h(n)\, E\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| - \varepsilon'' n^{1/p}\Big)_+^q < \infty
\end{aligned}$$

for some $\varepsilon'' > 0$, by (2.1). Thus, (2.2) holds.

Proof of Theorem 2.2  Here, let $l(n) = n^{-1-q/p} h(n)$. Similarly to the proof of (3.2), we have

$$\begin{aligned}
\sum_{n=1}^{\infty} l(n)\, E\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| - \varepsilon n^{1/p}\Big)_+^q
&\le \sum_{n=1}^{\infty} \frac{h(n)}{n}\, P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > \varepsilon n^{1/p}\Big)\\
&\quad+ \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > x^{1/q}\Big)\,\mathrm{d}x\\
&=: J_1 + J_2. \tag{3.11}
\end{aligned}$$

For $J_1$, by Theorem 1.3, we have

$$J_1 = \sum_{n=1}^{\infty} \frac{h(n)}{n}\, P\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k} X_j\Big| > \varepsilon n^{1/p}\Big) < \infty. \tag{3.12}$$

For $J_2$, similarly to the proof of (3.4), we have

$$\begin{aligned}
J_2 \le{}& C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j I[|Y_j| > x^{1/q}]\Big| > \frac{x^{1/q}}{2}\Big)\,\mathrm{d}x\\
&+ C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} P\Big(\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{xj}\Big| > \frac{x^{1/q}}{4}\Big)\,\mathrm{d}x\\
=:{}& J_{21} + J_{22}. \tag{3.13}
\end{aligned}$$

For $J_{21}$, by Markov's and the $C_r$ inequalities (recall that $\sum_{i=-\infty}^{\infty} |a_i|^{\theta} < \infty$ with $\theta \le 1$), we have

$$\begin{aligned}
J_{21} &\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-\theta/q}\, E\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_j I[|Y_j| > x^{1/q}]\Big|^{\theta}\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n)\, n \int_{n^{q/p}}^{\infty} x^{-\theta/q} E|Y_1|^{\theta} I[|Y_1| > x^{1/q}]\,\mathrm{d}x\\
&= C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} \int_{m^{q/p}}^{(m+1)^{q/p}} x^{-\theta/q} E|Y_1|^{\theta} I[|Y_1| > x^{1/q}]\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} m^{q/p-1-\theta/p} E|Y_1|^{\theta} I[|Y_1| > m^{1/p}]\\
&= C \sum_{m=1}^{\infty} m^{q/p-1-\theta/p} E|Y_1|^{\theta} I[|Y_1| > m^{1/p}] \sum_{n=1}^{m} n^{-q/p} h(n)\\
&\le C \sum_{m=1}^{\infty} m^{-\theta/p} h(m)\, E|Y_1|^{\theta} I[|Y_1| > m^{1/p}]\\
&\le C\, E|Y_1|^{p} h(|Y_1|^p) < \infty. \tag{3.14}
\end{aligned}$$

For $J_{22}$, take $s = 2$ in the estimate of $I_{22}$, and note that $\sum_{m=1}^{\infty} \varphi^{1/2}(m) < \infty$, so that $\exp\{6\sum_{j=1}^{[\log n]} \varphi^{1/2}(2^j)\} \le C$. We have

$$\begin{aligned}
J_{22} &\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-2/q}\, E\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+k} Y_{xj}\Big|^{2}\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n) \int_{n^{q/p}}^{\infty} x^{-2/q}\, n \exp\Big\{6\sum_{j=1}^{[\log n]} \varphi^{1/2}(2^j)\Big\} E Y_1^2 I[|Y_1| \le x^{1/q}]\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n)\, n \int_{n^{q/p}}^{\infty} x^{-2/q} E Y_1^2 I[|Y_1| \le x^{1/q}]\,\mathrm{d}x\\
&= C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} \int_{m^{q/p}}^{(m+1)^{q/p}} x^{-2/q} E Y_1^2 I[|Y_1| \le x^{1/q}]\,\mathrm{d}x\\
&\le C \sum_{n=1}^{\infty} l(n)\, n \sum_{m=n}^{\infty} m^{q/p-1-2/p} E Y_1^2 I[|Y_1| \le (m+1)^{1/p}]\\
&= C \sum_{m=1}^{\infty} m^{q/p-1-2/p} E Y_1^2 I[|Y_1| \le (m+1)^{1/p}] \sum_{n=1}^{m} n^{-q/p} h(n)\\
&\le C \sum_{m=1}^{\infty} m^{-2/p} h(m)\, E Y_1^2 I[|Y_1| \le (m+1)^{1/p}]\\
&\le C\, E|Y_1|^{p} h(|Y_1|^p) < \infty. \tag{3.15}
\end{aligned}$$

From (3.13)–(3.15), one gets

$$J_2 < \infty. \tag{3.16}$$

So, (2.3) follows from (3.11), (3.12) and (3.16). The proof of Theorem 2.2 is completed.

References

[1] BAEK J I, KIM T S, LIANG Hanying. On the convergence of moving average processes under dependent conditions [J]. Aust. N. Z. J. Stat., 2003, 45(3): 331–342.
[2] BURTON R M, DEHLING H. Large deviations for some weakly dependent random processes [J]. Statist. Probab. Lett., 1990, 9(5): 397–401.
[3] CHEN Pingyan, HU T C, VOLODIN A. Limiting behaviour of moving average processes under ϕ-mixing assumption [J]. Statist. Probab. Lett., 2009, 79(1): 105–111.
[4] CHEN Pingyan, WANG Dingcheng. Convergence rates for probabilities of moderate deviations for moving average processes [J]. Acta Math. Sin. (Engl. Ser.), 2008, 24(4): 611–622.
[5] IBRAGIMOV I A. Some limit theorems for stationary processes [J]. Teor. Verojatnost. i Primenen., 1962, 7: 361–392. (in Russian)
[6] KIM T S, KO M H. Complete moment convergence of moving average processes under dependence assumptions [J]. Statist. Probab. Lett., 2008, 78(7): 839–846.
[7] LI Deli, RAO M B, WANG Xiangchen. Complete convergence of moving average processes [J]. Statist. Probab. Lett., 1992, 14(2): 111–114.
[8] LI Deli, SPǍTARU A. Refinement of convergence rates for tail probabilities [J]. J. Theoret. Probab., 2005, 18(4): 933–947.
[9] LI Yun-xia, ZHANG Lixin. Complete moment convergence of moving-average processes under dependence assumptions [J]. Statist. Probab. Lett., 2004, 70(3): 191–197.
[10] SENETA E. Regularly Varying Functions [M]. Springer-Verlag, Berlin-New York, 1976.
[11] SHAO Qiman. A moment inequality and its applications [J]. Acta Math. Sinica, 1988, 31(6): 736–747.
[12] YU Deming, WANG Zhijiang. Complete convergence of moving average processes under negative dependence assumptions [J]. Math. Appl. (Wuhan), 2002, 15(1): 30–34.
[13] ZHANG Lixin. Complete convergence of moving average processes under dependence assumptions [J]. Statist. Probab. Lett., 1996, 30(2): 165–170.