Asymptotics of empirical processes of long memory moving averages with infinite variance
Stochastic Processes and their Applications 91 (2001)

Asymptotics of empirical processes of long memory moving averages with infinite variance

Hira L. Koul^a,*, Donatas Surgailis^b

^a Department of Statistics & Probability, Michigan State University, East Lansing, MI 48824, USA
^b Vilnius Institute of Mathematics & Informatics, Vilnius, Lithuania

Received 15 November 1999; received in revised form 5 June 2000; accepted 14 June 2000

Abstract

This paper obtains a uniform reduction principle for the empirical process of a stationary moving average time series {X_t} with long memory and independent and identically distributed innovations belonging to the domain of attraction of symmetric α-stable laws, 1 < α < 2. As a consequence, an appropriately standardized empirical process is shown to converge weakly in the uniform topology to a degenerate process of the form f Z, where Z is a standard symmetric α-stable random variable and f is the marginal density of the underlying process. A similar result is obtained for a class of weighted empirical processes. We also show, for a large class of bounded functions h, that the limit law of the (normalized) sums Σ_{s=1}^n h(X_s) is symmetric α-stable. An application of these results to linear regression models with moving average errors of the above type yields that a large class of M-estimators of regression parameters are asymptotically equivalent to the least-squares estimator and α-stable. This paper thus extends various well-known results of Dehling–Taqqu and Koul–Mukherjee from finite variance long memory models to infinite variance models of the above type. © 2001 Elsevier Science B.V. All rights reserved.

MSC: primary 62G05; secondary 62J05; 62E20

Keywords: Non-random designs; Unbounded spectral density; Uniform reduction principle; M-estimators

1. Introduction and summary

A strictly stationary second-order time series X_t, t ∈ Z := {0, ±1, ±2, ...}, is said to have long memory if its lag t covariances are not summable and decrease as t^{2d−1}, where 0 < d < 1/2.
The existence of long memory data has been manifested in numerous scientific areas ranging from climate warming to stock markets (Beran, 1992; Robinson, 1994b; Baillie, 1996). One of the most popular models of long memory processes is ARFIMA(p, d, q), defined by the autoregressive equation

Φ(L)(1 − L)^d X_t = Θ(L) ε_t,   (1.1)

* Corresponding author. E-mail address: koul@stt.msu.edu (H.L. Koul).
where ε_t, t ∈ Z, is an independent and identically distributed (i.i.d.) sequence, L is the backward shift operator, (1 − L)^d is the fractional differencing operator defined for −1/2 < d < 1/2 by the corresponding binomial expansion (see e.g. Granger and Joyeux (1980) or Hosking (1981)), and Φ(L), Θ(L) are polynomials in L of degree p, q, respectively, satisfying the usual root requirement for stationarity of the process. The stationary solution to (1.1) for d > 0 can be written as a causal infinite moving average process

X_t = Σ_{s≤t} b_{t−s} ε_s,   (1.2)

whose weights b_j satisfy the asymptotic relation

b_j ~ c_0 j^{−β}  (j → ∞),  β = 1 − d,   (1.3)

with the constant c_0 = c_0(Φ, Θ, d) = Θ(1)/(Φ(1)Γ(d)) (see Hosking, 1981). Thus, in the case of ε_0 having zero mean and finite variance, X_t of (1.1) is a well-defined strictly stationary process for all 0 < d < 1/2, and has long memory in the above sense provided 1/2 < β < 1.

An important problem in the context of long memory processes, from the inference point of view, is the investigation of the asymptotic behavior of a class of statistics of the type

S_{n,h} = Σ_{t=1}^n h(X_t),

where h(x), x ∈ R, is a real valued measurable function, usually assumed to have finite second moment Eh²(X_0). A special case (up to the factor n^{−1}) of utmost interest is the empirical distribution function

F̂_n(x) = n^{−1} Σ_{t=1}^n I(X_t ≤ x),  x ∈ R̄,

in which case the above question usually extends to the weak convergence of the corresponding random process indexed by x ∈ R̄, in the Skorokhod space D(R̄), R̄ = [−∞, ∞], with the sup-topology. For Gaussian long memory processes X_t, t ∈ Z (including ARFIMA(p, d, q) as a special case), the study of limit distributions of S_{n,h} has a long history, starting with Rosenblatt (1961) and culminating in the papers of Dobrushin and Major (1979) and Taqqu (1979). The weak convergence of the empirical process of Gaussian and their subordinated long memory sequences was obtained in Dehling and Taqqu (1989).
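For intuition about the model (1.2)–(1.3), the moving average can be simulated by truncating the infinite sum. The following is a minimal sketch, not part of the paper: the function names are ours, the weights b_j = (j+1)^{−β} merely mimic the asymptotics (1.3), and the heavy-tailed innovation law (a symmetric Pareto-type tail, which lies in the domain of attraction of a symmetric α-stable law) is an illustrative choice.

```python
import random

def sample_innovation(rng, alpha):
    """Heavy-tailed symmetric innovation with P(|eps| > x) = x**(-alpha), x >= 1.

    Inverse-transform sampling: if U is uniform on (0, 1], then U**(-1/alpha)
    has the Pareto tail above; a random sign makes the law symmetric.
    """
    magnitude = (1.0 - rng.random()) ** (-1.0 / alpha)
    return magnitude if rng.random() < 0.5 else -magnitude

def moving_average_path(n, alpha, beta, m=200, seed=0):
    """Approximate (1.2) by the truncation X_t = sum_{j=0}^{m} b_j eps_{t-j},
    with weights b_j = (j+1)**(-beta) mimicking (1.3)."""
    assert 1.0 / alpha < beta < 1.0, "condition (1.6) of the paper"
    rng = random.Random(seed)
    b = [(j + 1.0) ** (-beta) for j in range(m + 1)]
    eps = [sample_innovation(rng, alpha) for _ in range(n + m)]
    # X_t is built from the m+1 most recent innovations (shifted indexing).
    return [sum(b[j] * eps[m + t - j] for j in range(m + 1)) for t in range(n)]
```

Increasing the truncation level m improves the approximation of the infinite series; the slow decay of b_j under long memory is precisely why many lags contribute.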
For non-Gaussian linear processes (1.2) and (1.3) with finite variance, these problems were studied by Surgailis (1982), Giraitis and Surgailis (1989, 1999), Ho and Hsing (1996, 1997), and Koul and Surgailis (1997). It is well known that in the case the r.v.'s X_t are i.i.d. with a continuous distribution function (d.f.) F, the normalized process n^{1/2}(F̂_n − F) converges weakly in the space D(R̄) with the sup-topology, which we denote by ⇒_{D(R̄)} in the sequel, to a Gaussian process Z(x), x ∈ R̄, with zero mean and covariance E[Z(x)Z(y)] = F(x ∧ y) − F(x)F(y) (see e.g. Billingsley, 1968; Doukhan et al., 1995; Shao and Yu, 1996). In the case of long memory, the asymptotic behavior of F̂_n is very different. Assuming a moving average structure (1.2) for X_t and some additional regularity and moment
conditions on the distribution of ε_0 (which are satisfied of course in the case the latter are Gaussian), one has

n^{β−1/2}(F̂_n(x) − F(x)) ⇒_{D(R̄)} c f(x) Z,   (1.4)

where Z ~ N(0, 1) is the standard normal variable, f is the probability density of the marginal d.f. F of X_0, and c is some constant (see e.g., Dehling and Taqqu, 1989; Ho and Hsing, 1996; Giraitis and Surgailis, 1999). The difference between (1.4) and the classical Brownian bridge limit lies not only in the rate of convergence, which is much slower in (1.4) than the classical n^{1/2}, but, more importantly, in the asymptotic degeneracy of the limit process in (1.4), which shows that the increments of the standardized F̂_n over disjoint intervals, or disjoint observation sets, are asymptotically completely correlated. Similar asymptotic behavior is shared by weighted residual empirical processes, which arise in the study of multiple regression models with long memory errors (Koul and Mukherjee, 1993; Giraitis et al., 1996). These asymptotic degeneracy results provide the main basis of many surprising results about the large sample behavior of various inference procedures in the presence of long memory (Dehling and Taqqu, 1989; Beran, 1991; Koul, 1992a; Koul and Mukherjee, 1993; Robinson, 1994a,b; Csörgő and Mielniczuk, 1995; Giraitis et al., 1996; Ho and Hsing, 1996; Koul and Surgailis, 1997 and the references therein).

The aim of the present paper is to extend the functional limit result (1.4) and some of the above mentioned inference results to linear models (1.2), (1.3) with infinite variance, in particular, to ARFIMA(p, d, q) time series with i.i.d. innovations ε_t, t ∈ Z, belonging to the domain of attraction of a symmetric α-stable (SαS) law, 1 < α < 2. More precisely, we shall assume in the sequel that X_t, t ∈ Z, is a moving average process (1.2), where ε_j, j ∈ Z, are i.i.d. r.v.'s
with zero mean and satisfying the tail regularity condition

lim_{x→∞} x^α G(−x) = lim_{x→∞} x^α (1 − G(x)) = c_1   (1.5)

for some 1 < α < 2 and some constant 0 < c_1 < ∞, where G is the d.f. of ε_0. In addition, the weights b_j, j ≥ 0, satisfy the asymptotics (1.3), where c_0 ≠ 0 and

1/α < β < 1.   (1.6)

Without loss of generality, we assume c_1 = 1 in (1.5) in the sequel. Under these assumptions,

Σ_{j=0}^∞ |b_j| = ∞,  B := Σ_{j=0}^∞ |b_j|^α < ∞,   (1.7)

the linear process X_t of (1.2) is well defined in the sense of convergence in probability, and its marginal d.f. F satisfies

lim_{x→∞} x^α F(−x) = lim_{x→∞} x^α (1 − F(x)) = B.   (1.8)

Note that (1.8) implies E|X_0|^α = ∞ and E|X_0|^r < ∞ for each r < α; in particular, EX_0² = ∞ and E|X_0| < ∞. Because of these facts and (1.7), this process will be called a long memory moving average process with infinite variance in this paper. In the sequel,
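Condition (1.5) constrains only the tails of G. As a concrete check (an illustrative distribution of ours, not one the paper prescribes), the symmetric Pareto law with P(|ε| > x) = x^{−α} for x ≥ 1 satisfies (1.5) with c_1 = 1/2, since by symmetry 1 − G(x) = P(|ε| > x)/2:

```python
def pareto_tail(x, alpha):
    """P(|eps| > x) for the symmetric Pareto law: 1 for x < 1, x**(-alpha) otherwise."""
    return 1.0 if x < 1.0 else x ** (-alpha)

alpha = 1.5
# By symmetry 1 - G(x) = pareto_tail(x, alpha) / 2, so the scaled tail
# x**alpha * (1 - G(x)) equals the constant c_1 = 1/2 for every x >= 1,
# which is condition (1.5) holding exactly rather than only in the limit.
constants = [x ** alpha * pareto_tail(x, alpha) / 2.0 for x in (1.0, 10.0, 1000.0)]
```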
we refer to the above assumptions as the standard assumptions about the time series under consideration. The class of moving averages satisfying these assumptions includes ARFIMA(p, d, q) with SαS innovations, 0 < d < 1 − 1/α. See Kokoszka and Taqqu (1995) for a detailed discussion of the properties of stable ARFIMA series.

Astrauskas (1983), Avram and Taqqu (1986, 1992), and Kasahara and Maejima (1988) have shown that under the standard assumptions the sample mean X̄_n = n^{−1} Σ_{t=1}^n X_t is asymptotically α-stable:

n A_n^{−1} X̄_n ⇒ c Z,   (1.9)

where Z is a standard SαS r.v. with E[e^{iuZ}] = e^{−|u|^α}, u ∈ R, and

c = c_0 ( 2 c_1 Γ(2 − α) cos(απ/2) / (1 − α) )^{1/α} ( ∫_{−∞}^1 ( ∫_{0∨s}^1 (t − s)^{−β} dt )^α ds )^{1/α}.   (1.10)

The normalization

A_n = n^{1−β+1/α}   (1.11)

grows much faster than the usual normalization n^{1/α} for partial sums of independent random variables in the domain of attraction of an α-stable law. The latter fact is another indication that the series X_t, t ∈ Z, exhibits long memory. Clearly, the usual characterization of this property in terms of the covariance decay is not available in the infinite variance case. In the case when the i.i.d. innovations ε_j, j ∈ Z, are SαS, the moving average X_t, t ∈ Z, has SαS finite dimensional distributions, and the role of the covariance is played, to a certain extent, by (Lévy spectral measure) quantities such as the covariation and/or the codifference (see Samorodnitsky and Taqqu, 1994). A related characteristic of dependence in the infinite variance case is

Cov(e^{iu_1 X_0}, e^{iu_2 X_t}) = E[e^{i(u_1 X_0 + u_2 X_t)}] − E[e^{iu_1 X_0}] E[e^{iu_2 X_t}],  u_1, u_2 ∈ R.

In the particular case when X_t, t ∈ Z, is a fractional SαS noise, for any u_1, u_2,

Cov(e^{iu_1 X_0}, e^{iu_2 X_t}) ~ k_{u_1,u_2} E[e^{iu_1 X_0}] E[e^{iu_2 X_0}] t^{1−αβ},   (1.12)

as t → ∞, where k_{u_1,u_2} is a constant depending on u_1, u_2 (Astrauskas et al., 1991). In Section 6 we extend the asymptotics (1.12) to an arbitrary (not necessarily SαS) moving average X_t of (1.2) satisfying the standard assumptions above.
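The growth comparison around (1.11) is elementary arithmetic and can be checked directly: A_n / n^{1/α} = n^{1−β}, which diverges since β < 1. A small numeric sketch (parameter values are illustrative, chosen to satisfy 1/α < β < 1):

```python
def A(n, alpha, beta):
    """The long-memory normalization A_n = n**(1 - beta + 1/alpha) of (1.11)."""
    return n ** (1.0 - beta + 1.0 / alpha)

alpha, beta = 1.5, 0.8  # illustrative values with 1/alpha < beta < 1
# A_n / n^(1/alpha) = n^(1-beta) -> infinity: sums of the long memory series
# grow strictly faster than i.i.d. alpha-stable partial sums.
ratios = [A(n, alpha, beta) / n ** (1.0 / alpha) for n in (10, 100, 1000)]
```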
We shall now summarize the contents of the remaining sections. Theorem 2.1 below contains the main result of the paper, a uniform reduction principle for weighted residual empirical processes of infinite variance moving average observations X_t, t = 1, ..., n. It yields, in particular, an analog of (1.4) where now Z is a standard SαS r.v. These results involve some additional regularity assumptions on the probability density of the innovations. Corollary 2.3 below shows that the weak limit of A_n^{−1} S_{n,h}, for a bounded h, is an SαS r.v. This result itself is surprising, as it shows that an α-stable (1 < α < 2) limit law may arise from sums of bounded random variables h(X_t). It is well known that in the case of i.i.d. or weakly dependent summands, such limit laws require a long tailed summand distribution and the contribution of the maximal summand to be comparable
to the sum itself. These results further reconfirm the deep differences between long and short memory.

Section 3 discusses the asymptotic distribution of (robust) M-estimators of the underlying regression parameters in linear regression models with infinite variance long-memory moving average errors. We show that the least-squares estimator converges in distribution to a vector of SαS r.v.'s and that, asymptotically, these M-estimators are equivalent to the least-squares estimator in probability (Theorem 3.1). These findings should be contrasted with those available for i.i.d. errors linear regression models with infinite variance. In those models the asymptotic distribution of an M-estimator of the regression parameter vector is known to be Gaussian with zero mean and an asymptotic variance that depends on the given score function (Knight, 1993), a fact that is in complete contrast to the above findings. The proofs of Theorems 2.1 and 3.1 appear in Sections 4 and 5, respectively. Finally, Section 6 discusses the asymptotics (1.12).

2. Uniform reduction principle for weighted empiricals

We assume below that X_t, t ∈ Z, satisfies the standard assumptions of Section 1 and, furthermore, that G is twice differentiable with the second derivative G″ satisfying the inequalities

|G″(x)| ≤ C(1 + |x|)^{−1−α},   (2.1)

|G″(x) − G″(y)| ≤ C|x − y|(1 + |x|)^{−1−α},  x, y ∈ R, |x − y| ≤ 1.   (2.2)

These conditions are satisfied if G is an SαS d.f., which follows from the asymptotic expansion of the stable density (see e.g. Christoph and Wolf (1992, Theorem 1.5) or Ibragimov and Linnik (1971)); in this case, (2.1)–(2.2) hold with α + 2 instead of α. Under the standard assumptions, the d.f. F of X_0 is shown to be infinitely differentiable in Lemma 4.2 below. Now, let f denote the density of F and introduce the weighted empirical process

S_n(x) = Σ_{s=1}^n γ_{n,s} ( I(X_s ≤ x + δ_{n,s}) − F(x + δ_{n,s}) + f(x + δ_{n,s}) X_s ),  x ∈ R,   (2.3)

where (γ_{n,i}, δ_{n,i}, 1 ≤ i ≤ n) are non-random real-valued sequences.
We are now ready to state

Theorem 2.1. Assume, in addition to the standard assumptions and conditions (2.1) and (2.2), that

(a.1) max_{1≤i≤n} |γ_{n,i}| = O(1);
(a.2) max_{1≤i≤n} |δ_{n,i}| = O(1).

Then there exists ρ > 0 such that, for any ε > 0,

P[ sup_x A_n^{−1} |S_n(x)| > ε ] ≤ C n^{−ρ},

where A_n is as in (1.11).
In the special case γ_{n,i} ≡ 1, δ_{n,i} ≡ 0, S_n(x) = n(F̂_n(x) − F(x) + f(x) X̄_n), and Theorem 2.1 implies

Corollary 2.1. There is ρ > 0 such that, for any ε > 0,

P[ sup_x n A_n^{−1} |F̂_n(x) − F(x) + f(x) X̄_n| > ε ] ≤ C n^{−ρ}.

This fact and (1.9) readily imply the following two corollaries.

Corollary 2.2. n A_n^{−1} (F̂_n(x) − F(x)) ⇒_{D(R̄)} c f(x) Z, where Z is a standard SαS random variable and the constant c is given in (1.10).

Corollary 2.3. Let h be a real valued measurable function of bounded variation such that Eh(X_0) = 0. Then A_n^{−1} S_{n,h} ⇒ c h_1 Z, where h_1 = ∫ f(x) dh(x) = −∫ h(x) f′(x) dx.

Remark 2.1. Corollaries 2.1–2.3 extend the uniform reduction principle and some other results of Dehling and Taqqu (1989) to the case of long memory processes with infinite variance. As mentioned earlier, Corollary 2.3 is surprising in the sense that it shows that an α-stable (1 < α < 2) limit law may arise from sums of bounded random variables h(X_t). This is unlike the case of i.i.d. or weakly dependent summands, where such limit laws require a long tailed summand distribution and the contribution of the maximal summand to be comparable to the sum itself.

Remark 2.2. In the case h_1 = 0, Corollary 2.3 implies only A_n^{−1} S_{n,h} ⇒ 0. The question whether in this case it is possible to obtain a nondegenerate limit for S_{n,h} under some other normalization o(A_n) is open. It is possible that the situation in the infinite variance case is quite different in this respect from (say) the Gaussian case, in the sense that higher order expansions of the empirical distribution function (the analogs of the Hermite expansion in the case of a Gaussian underlying process) may not exist at all.

Remark 2.3. Corollary 2.3 contradicts the recent result of Hsing (1999, Theorem 2), which claims, under similar assumptions on X_t and h, that (var(S_{n,h}))^{−1/2} S_{n,h} converges to a nondegenerate Gaussian limit. Note that the normalization (var(S_{n,h}))^{1/2} = O(n^{(3−αβ)/2}) grows faster than A_n = n^{1−β+1/α}.
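Corollary 2.1 controls the uniform distance between the centered empirical d.f. and its first-order (degenerate) approximation −f(x) X̄_n. The empirical d.f. entering these statements is cheap to evaluate after one sort. A minimal sketch (function names are ours, not the paper's):

```python
import bisect

def empirical_cdf(sample):
    """Return the function x -> F_hat_n(x) = n^{-1} * #{t : X_t <= x}."""
    xs = sorted(sample)
    n = len(xs)
    # bisect_right counts how many sorted observations are <= x.
    return lambda x: bisect.bisect_right(xs, x) / n

F_hat = empirical_cdf([3.0, 1.0, 2.0])
values = [F_hat(x) for x in (0.5, 1.0, 2.5, 3.0)]  # -> [0.0, 1/3, 2/3, 1.0]
```

Each evaluation costs O(log n) after the O(n log n) sort, which makes a numerical check of the sup in Corollary 2.1 over a grid of x values straightforward.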
The proof of the above mentioned theorem in Hsing (1999) uses an approximation of S_{n,h} by a sum of independent (but not identically distributed) random variables, whose normal limiting behavior is deduced from the classical Lindeberg–Feller central limit theorem. However, Lindeberg's condition ((38) of Hsing (1999)) actually does not hold, as we shall now show. For convenience we shall use the notation of Hsing (1999) in the rest of this remark. Also, the letter H below will stand for Hsing (1999). Note that N and K in H play the roles of our n and h, respectively.
Eq. (38) of H claims the Lindeberg condition: for each ε > 0,

lim_{N→∞} (1/var(T_N)) Σ_{k=1−M_N}^{N} E[ ξ²_{N,k} I(|ξ_{N,k}| > ε √var(T_N)) ] = 0.   (38)

Here, M_N is a sequence of positive integers satisfying M_N ≥ γN, for some γ > 1. See page 1583 of H for the definition of T_N. To prove the invalidity of (38), it suffices to show

lim_{N→∞} (1/var(T_N)) Σ_{k=1}^{N/2} E[ ξ²_{N,k} I(|ξ_{N,k}| > ε √var(T_N)) ] > 0.   (2.4)

From the definitions around (6) and (38) of H,

ξ_{N,k} = Σ_{n=(k+1)∨1}^{(k+M_N)∧N} [ K̄(a_{n−k} ε_k) − E K̄(a_{n−k} ε_k) ],  K̄(x) = E K(x + X_{n,1,∞}),  X_{n,1,∞} = Σ_{i>1} a_i ε_{n−i}.

From the stationarity of {X_n}, we thus obtain that K̄(x) = EK(x + X_0). Now, let K(x), x ∈ R, be bounded, strictly increasing and antisymmetric: K(x) = −K(−x), K(0) = 0. As the ε's are symmetric and a_j = j^{−β} > 0, the distribution of X_0 is also symmetric, which implies that K̄ is bounded, strictly increasing and antisymmetric. Consequently,

K̄(−x) = EK(−x + X_0) = EK(−x − X_0) = −EK(x + X_0) = −K̄(x),  x ∈ R.

Thus, EK̄(a_{n−k} ε_k) = 0, K̄(0) = 0, and K̄(x) > 0 (x > 0). Therefore,

ξ_{N,k} = Σ_{n=(k+1)∨1}^{(k+M_N)∧N} K̄(a_{n−k} ε_k)  > 0 (if ε_k > 0),  < 0 (if ε_k < 0).

Hence, (2.4) follows from

lim_{N→∞} (2/var(T_N)) Σ_{k=1}^{N/2} ∫_0^∞ [ Σ_{n=k+1}^{(k+M_N)∧N} K̄(a_{n−k} x) ]² I( Σ_{n=k+1}^{(k+M_N)∧N} K̄(a_{n−k} x) > ε √var(T_N) ) F_ε(dx) > 0,   (2.5)

where F_ε is the d.f. of the SαS law. Observe that the integral in (2.5) only decreases if we replace K̄(x) by a smaller function, say

K̄(x) ≥ c I(x > 1)  (x > 0),   (2.6)
where c := K̄(1) > 0. Clearly, we may take c = 1 in the sequel. Thus, (2.5) follows from

lim_{N→∞} (2/var(T_N)) Σ_{k=1}^{N/2} ∫_0^∞ [ Σ_{n=k+1}^{N} I(a_{n−k} x > 1) ]² I( Σ_{n=k+1}^{N} I(a_{n−k} x > 1) > ε √var(T_N) ) F_ε(dx) > 0.   (2.7)

Here we used the fact that for k ≥ 1 and M_N ≥ N, (k + M_N) ∧ N = N. Now, since a_j = j^{−β}, we obtain for all x > 0,

Σ_{n=k+1}^{N} I(a_{n−k} x > 1) = Σ_{j=1}^{N−k} I(j < x^{1/β}) ≥ C ((N/2) ∧ x^{1/β}),  (1 ≤ k ≤ N/2).

Hence, (2.7) follows from var(T_N) = O(N^{3−αβ}) (see (14) of H) and

lim sup_{N→∞} N^{−(3−αβ)} N ∫_0^∞ (N ∧ x^{1/β})² I( N ∧ x^{1/β} > ε N^{(3−αβ)/2} ) F_ε(dx) > 0.   (2.8)

Because the SαS density decays like a constant multiple of x^{−α−1}, (2.8) in turn is implied by

lim_{N→∞} N^{αβ−2} ∫_{(εN^{(3−αβ)/2})^β}^{N^β} x^{2/β − α − 1} dx > 0.   (2.9)

Now, the last integral equals ((2/β) − α)^{−1} ( (N^β)^{(2/β)−α} − ((εN^{(3−αβ)/2})^β)^{(2/β)−α} ). But N^{αβ−2} (N^β)^{(2/β)−α} = 1 for all N ≥ 1, while lim_{N→∞} N^{αβ−2} ((εN^{(3−αβ)/2})^β)^{(2/β)−α} = 0 because 1 < αβ < 2. This proves (2.9), thereby also proving the invalidity of Hsing's conclusion (38). Note that (2.6) holds for any strictly monotone bounded antisymmetric function K̄ on R. In particular, K(x) = Φ(x) − 1/2, for which K̄(x) = ∫ [Φ(x + y) − 1/2] F(dy), where Φ denotes the d.f. of a standard Gaussian r.v., satisfies all of these conditions.

Remark 2.4. The only place where we need conditions (2.1)–(2.2) on the second derivative of G is in the proof of Lemma 4.2 below, which gives a similar bound for the second derivative of F and its finite memory approximations. Note that the latter d.f. is infinitely differentiable provided G satisfies a κ-Hölder condition with arbitrary κ > 0 (see Giraitis et al. (1996)), which suggests that (2.1) and (2.2) can probably be relaxed. Furthermore, it seems that our results can be generalized to the case of innovations belonging to the domain of attraction of non-symmetric α-stable distributions, 1 < α < 2. However, the case 0 < α ≤ 1 is excluded by (1.6) and is quite open.

3. Limit behavior of M-estimators

Consider the linear regression model:

Y_{n,t} = C′_{n,t} θ + X_t,  t = 1, ..., n,
where θ ∈ R^p is an unknown parameter vector, C′_{n,t} is the tth row of the known n × p nonsingular design matrix V_n, 1 ≤ t ≤ n, and the errors X_t, t ∈ Z, follow an infinite variance long-memory process:

X_t = Σ_{j≤t} b_{t−j} ε_j,   (3.1)

where b_i, i ≥ 0, and ε_t, t ∈ Z, satisfy the assumptions of Theorem 2.1. Let ψ be a real valued nondecreasing right continuous function on R such that E|ψ(X_0 − x)| < ∞ for all x ∈ R. Put

Ψ(x) = Eψ(X_0 − x).

Note that Ψ is nonincreasing and continuously differentiable, and we assume

Ψ(0) = Eψ(X_0) = 0.

The corresponding M-estimator θ̂ of the parameter θ is defined as

θ̂ = argmin_b { | Σ_{t=1}^n C_{n,t} ψ(Y_{n,t} − C′_{n,t} b) | : b ∈ R^p }.

In the particular case ψ(x) = x, the corresponding M-estimator is known as the least-squares estimator, which we denote by θ̂_ls:

θ̂_ls = (V′_n V_n)^{−1} Σ_{t=1}^n C_{n,t} Y_{n,t}.

We shall consider the particular case when the designs are of the form

C_{n,t} = C(t/n),   (3.2)

where C(t) = (v_1(t), ..., v_p(t))′, t ∈ [0, 1], is a given continuous R^p-valued function on [0, 1]. In such a case, n^{−1} V′_n V_n → V, where V = ∫_0^1 C(t)C(t)′ dt is the p × p matrix with entries V_{ij} = ∫_0^1 v_i(t) v_j(t) dt, i, j = 1, ..., p. We shall assume, as usual, that the matrix V is nondegenerate. Then

n(θ̂_ls − θ) = V^{−1} Σ_{t=1}^n C(t/n) X_t (1 + o_P(1)),

and it follows from Kasahara and Maejima (1988) (under the standard assumptions on b_j and ε_j) that

n A_n^{−1} (θ̂_ls − θ) ⇒ c_1 V^{−1} Z(C),

where Z(C) is an SαS random vector whose characteristic function is

E exp{i u′ Z(C)} = exp{ − ∫ | ∫_0^1 u′ C(t)(t − s)_+^{−β} dt |^α ds },  u ∈ R^p.

Let ψ_1 := −dΨ(x)/dx |_{x=0}.

Theorem 3.1. Assume, in addition to the above conditions, that ψ is bounded and ψ_1 ≠ 0. Then

n A_n^{−1} (θ̂ − θ) ⇒ c_1 V^{−1} Z(C),  n A_n^{−1} (θ̂ − θ̂_ls) = o_P(1).
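The least-squares estimator above is just the solution of the normal equations (V′_n V_n) b = Σ_t C_{n,t} Y_{n,t}. A minimal sketch for p = 2 with design rows C(t/n) = (1, t/n) (our illustrative choice of C; a Cramer-rule solve stands in for a general linear algebra routine):

```python
def least_squares_2d(design, y):
    """Solve the 2-parameter normal equations (V_n' V_n) b = V_n' y directly.

    `design` is a list of rows (c1, c2); returns the estimate theta_hat_ls.
    """
    s11 = sum(c1 * c1 for c1, _ in design)
    s12 = sum(c1 * c2 for c1, c2 in design)
    s22 = sum(c2 * c2 for _, c2 in design)
    r1 = sum(c1 * yi for (c1, _), yi in zip(design, y))
    r2 = sum(c2 * yi for (_, c2), yi in zip(design, y))
    det = s11 * s22 - s12 * s12          # nondegenerate V_n' V_n assumed
    return ((s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

n = 50
design = [(1.0, t / n) for t in range(1, n + 1)]   # rows C(t/n) = (1, t/n)
theta = (2.0, -3.0)
y = [c1 * theta[0] + c2 * theta[1] for c1, c2 in design]  # noiseless responses
est = least_squares_2d(design, y)   # recovers theta up to rounding
```

With long-memory infinite-variance errors X_t added to y, Theorem 3.1 says a bounded-ψ M-estimator computed from the same data would be asymptotically equivalent to this estimator.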
Remark 3.1. Using the weak convergence methods and Theorem 2.1, one can also obtain analogous results for the classes of so-called R-estimators of θ in the present case, as in Koul and Mukherjee (1993) for the finite variance case.

4. Proof of Theorem 2.1

Throughout the proofs below, C stands for a generic constant not depending on n, and for any real function g(x) and any x < y, let g(x, y) = g(y) − g(x). Let the function H_t(z) ≡ H_t(z; x, y), z ∈ R, be defined, for any x < y and t ∈ Z, by

H_t(z) = I(x + δ_{n,t} < z ≤ y + δ_{n,t}) − F(x + δ_{n,t}, y + δ_{n,t}) + f(x + δ_{n,t}, y + δ_{n,t}) z,

so that

S_n(x, y) = Σ_{t=1}^n γ_{n,t} H_t(X_t).   (4.1)

Our aim is to prove the following crucial lemma.

Lemma 4.1. There exist 1 < r < α, ρ > 0, and a finite measure μ on the real line such that, for any x < y,

E|S_n(x, y)|^r ≤ μ(x, y) A_n^r n^{−ρ}.

Proof. We use the martingale decomposition as in Ho and Hsing (1996) and Koul and Surgailis (1997). Let

U_{t,s}(x, y) = E[H_t(X_t) | F_{t−s}] − E[H_t(X_t) | F_{t−s−1}],

where F_t = σ{ε_s : s ≤ t} is the σ-algebra of the past. Then we can rewrite

H_t(X_t) = Σ_{s≥0} U_{t,s}(x, y).   (4.2)

Observe that the series (4.2) converges in L_r ≡ L_r(Ω), for each r < α. Namely, the series

Σ_{s≥0} ( E[I(X_t ≤ x + δ_{n,t}) − F(x + δ_{n,t}) | F_{t−s}] − E[I(X_t ≤ x + δ_{n,t}) − F(x + δ_{n,t}) | F_{t−s−1}] )

converges in L_2 by orthogonality, and hence in L_r (r < 2) as well, while

Σ_{s≥0} ( E[X_t | F_{t−s}] − E[X_t | F_{t−s−1}] ) = Σ_{s≥0} b_s ε_{t−s} = X_t

converges in L_r (r < α). For s ≥ 0, introduce the truncated moving averages

X_{t,s} = Σ_{i=0}^s b_i ε_{t−i},  X̃_{t,s} = Σ_{i=s+1}^∞ b_i ε_{t−i},

so that X_t = X_{t,s} + X̃_{t,s}. Let F_s(x) = P[X_{t,s} ≤ x], F̃_s(x) = P[X̃_{t,s} ≤ x] be the corresponding marginal d.f.'s. Note that, for each 1/β < r < α,

E|X̃_{t,s}|^r ≤ C Σ_{i=s+1}^∞ |b_i|^r ≤ C Σ_{i=s+1}^∞ i^{−rβ} ≤ C(1 + s)^{1−rβ}.   (4.3)
To proceed further, we need some estimates of the derivatives of F_s, F̃_s and their difference, similar to the estimates obtained in Giraitis et al. (1996) and Koul and Surgailis (1997) for the case of a moving average with finite variance.

Lemma 4.2. For any k ≥ 0 one can find s_1 ≥ 1 such that the d.f.'s F, F_s, s ≥ s_1, are k times continuously differentiable. Furthermore, for any 1 < r′ < r < α, r′ > 1/β, and any sufficiently large s_1, there exists a constant C = C_{r,r′} such that for any x, y ∈ R, |x − y| ≤ 1, s ≥ s_1,

|F″(x)| + |F″_s(x)| ≤ C(1 + |x|)^{−r},   (4.4)

|F″(x, y)| + |F″_s(x, y)| ≤ C|x − y|(1 + |x|)^{−r},   (4.5)

|F″(x) − F″_s(x)| ≤ C s^{−(β − 1/r′)} (1 + |x|)^{−r}.   (4.6)

Proof. Assumption (2.1) implies that |E exp{iuε_0}| ≤ C/(1 + u²) for all u ∈ R. This alone implies the differentiability of F and F_s, as in Koul and Surgailis (1997, Lemma 4.1). We shall now prove (4.4). Assume b_0 = 1 without loss of generality. Then

F(x) = ∫ G(x − u) dF̃_0(u).

According to condition (2.1),

|F″(x)| ≤ ∫ |G″(x − u)| dF̃_0(u) ≤ C ∫ (1 + |x − u|)^{−1−α} dF̃_0(u).

As E|X̃_0|^r < ∞ for any 1/β < r < α, the required bound (4.4) for F″ now follows from Lemma 5.1(i) below. The proof of the remaining bounds in (4.4) and (4.5) is exactly similar. To prove (4.6), write

|F″(x) − F″_s(x)| = | ∫ (F″_s(x − y) − F″_s(x)) dF̃_s(y) | ≤ Σ_{i=1}^3 J_i(x),

where

J_1(x) = ∫_{|y|≤1} |F″_s(x − y) − F″_s(x)| dF̃_s(y),
J_2(x) = ∫_{|y|>1} |F″_s(x − y)| dF̃_s(y),
J_3(x) = ∫_{|y|>1} |F″_s(x)| dF̃_s(y).

By (4.3) and (4.5),

J_1(x) ≤ C(1 + |x|)^{−r} E|X̃_{0,s}| ≤ C(1 + |x|)^{−r} E^{1/r′}[|X̃_{0,s}|^{r′}] ≤ C(1 + |x|)^{−r} s^{−(β − 1/r′)}.

Next, using (4.3) and Lemma 5.1(i) below, we obtain

J_i(x) ≤ C(1 + |x|)^{−r} E|X̃_{0,s}|^r ≤ C(1 + |x|)^{−r} E^{r/r′}[|X̃_{0,s}|^{r′}] ≤ C(1 + |x|)^{−r} s^{−r(β − 1/r′)},  i = 2, 3.

This proves Lemma 4.2.
Proof of Lemma 4.1 (continued). As in Ho and Hsing (1996) and Giraitis and Surgailis (1999), write U_{t,s}(x, y) = U_{t,s}(y) − U_{t,s}(x), where

U_{t,s}(x) = F_{s−1}(x + δ_{n,t} − b_s ε_{t−s} − X̃_{t,s}) − ∫ F_{s−1}(x + δ_{n,t} − b_s u − X̃_{t,s}) dG(u) + f(x + δ_{n,t}) b_s ε_{t−s}.

Observe that

Σ_{t=1}^n γ_{n,t} H_t(X_t) = Σ_{j=−∞}^n M_{j,n}(x, y),  where  M_{j,n}(x, y) = Σ_{t=j∨1}^n γ_{n,t} U_{t,t−j}(x, y)

is F_j-measurable and E[M_{j,n}(x, y) | F_{j−1}] = 0; i.e., {M_{j,n}(x, y) : j ∈ Z} are martingale differences. We use the following well-known inequality (von Bahr and Esseen, 1965): for any martingale difference sequence Y_1, Y_2, ..., with E[Y_i | Y_1, ..., Y_{i−1}] = 0 for all i, and any 1 ≤ r ≤ 2,

E| Σ_{i=1}^m Y_i |^r ≤ 2 Σ_{i=1}^m E|Y_i|^r,  m ≥ 1.

Applying this inequality to the martingale differences M_{j,n}(x, y), j ≤ n, one obtains, for any 1 < r < 2,

E| Σ_{j=−∞}^n M_{j,n}(x, y) |^r ≤ 2 Σ_{j=−∞}^n E|M_{j,n}(x, y)|^r.   (4.7)

Write F′_s = f_s. As in Ho and Hsing (1996) or Koul and Surgailis (1997), decompose U_{t,s}(x) = Σ_{i=0}^3 U^{(i)}_{t,s}(x) and, respectively, M_{j,n}(x, y) = Σ_{i=0}^3 M^{(i)}_{j,n}(x, y), where

U^{(0)}_{t,s}(x, y) = U_{t,s}(x, y) I(0 ≤ s ≤ s_1),  U^{(i)}_{t,s}(x, y) = 0 for i = 1, 2, 3, 0 ≤ s ≤ s_1,

and, for s > s_1,

U^{(1)}_{t,s}(x) = F_{s−1}(x + δ_{n,t} − b_s ε_{t−s} − X̃_{t,s}) − ∫ F_{s−1}(x + δ_{n,t} − b_s u − X̃_{t,s}) dG(u) + f_{s−1}(x + δ_{n,t} − X̃_{t,s}) b_s ε_{t−s},

U^{(2)}_{t,s}(x) = b_s ε_{t−s} ( f(x + δ_{n,t}) − f(x + δ_{n,t} − X̃_{t,s}) ),

U^{(3)}_{t,s}(x) = b_s ε_{t−s} ( f(x + δ_{n,t} − X̃_{t,s}) − f_{s−1}(x + δ_{n,t} − X̃_{t,s}) ).

Lemma 4.3. For each 1 < r < α, r sufficiently close to α, and any κ > 0 satisfying the inequality

(1 + κ) r < α,   (4.8)

one can find a finite measure μ = μ_{r,κ} on R such that

E|M^{(0)}_{j,n}(x, y)|^r ≤ μ(x, y) I(−s_1 ≤ j ≤ n),
E|M^{(i)}_{j,n}(x, y)|^r ≤ μ(x, y) [ Σ_{t=j∨1}^n (1 ∧ (t − j)^{−β(1+κ)}) ]^r,  x < y,  i = 1, 2, 3.
The proof of Lemma 4.3 will be given below. We now use this lemma, (4.2) and (4.7) to prove Lemma 4.1. By (4.7), it suffices to show that one can find r and ρ > 0 such that

A_n^{−r} Σ_{j=−∞}^n E|M^{(i)}_{j,n}(x, y)|^r ≤ C n^{−ρ} μ(x, y),  i = 0, 1, 2, 3.   (4.9)

By Lemma 4.3,

Σ_{j=−∞}^n E|M^{(i)}_{j,n}(x, y)|^r ≤ C μ(x, y) Λ_n^{(i)},  where Λ_n^{(0)} = n and, for i = 1, 2, 3,

Λ_n^{(i)} = Σ_{j=−∞}^n [ Σ_{t=j∨1}^n (1 ∧ (t − j)^{−β(1+κ)}) ]^r ≤ C ∫_{−∞}^n ( ∫_{0∨s}^n (t − s)^{−β(1+κ)} dt )^r ds ≤ C n^{1+r−β(1+κ)r} ∫_{−∞}^1 ( ∫_{0∨s}^1 (t − s)^{−β(1+κ)} dt )^r ds ≤ C n^{1+r−β(1+κ)r},   (4.10)

provided β(1 + κ) < 1 and β(1 + κ)r > 1 hold. Observe that these last conditions (which guarantee the convergence of the double integral in (4.10)), with the preservation of (4.8), can always be achieved by choosing r sufficiently close to α: this forces κ > 0 sufficiently small by (4.8), and thus β(1 + κ) < 1 because β < 1, and also β(1 + κ)r > 1 because αβ > 1. Now for i = 1, 2, 3, (4.9) follows from (4.10) and A_n^r = n^{r−βr+r/α} by choosing (1 + κ)r arbitrarily close to α and taking ρ > 0 sufficiently small; indeed, as (1 + κ)r → α,

ρ = (r − βr + r/α) − (1 + r − β(1 + κ)r) → (αβ − 1)(α − r)/α > 0.

Finally, for i = 0, (4.9) follows from n = O(A_n^r n^{−ρ}), or 1 + ρ ≤ r(1 − β + 1/α), by taking ρ > 0 sufficiently small and r sufficiently close to α. Lemma 4.1 is proved.

5. Some proofs

Proof of Lemma 4.3 (case i = 1). Similarly to Giraitis and Surgailis (1999), write

U^{(1)}_{t,s}(x, y) = W^{(1)}_{t,s}(x, y) − W^{(2)}_{t,s}(x, y),
where, for x < y,

W^{(1)}_{t,s}(x, y) = ∫ dG(u) ∫_{b_s u}^{b_s ε_{t−s}} dz ∫_x^y f′_{s−1}(w + δ_{n,t} − z − X̃_{t,s}) dw,

W^{(2)}_{t,s}(x, y) = ∫ dG(u) ∫_{b_s u}^{b_s ε_{t−s}} dz ∫_x^y f′_{s−1}(w + δ_{n,t} − X̃_{t,s}) dw.

Next, introduce certain truncations:

χ^{(0)}_{t,s} = I(|X̃_{t,s}| ≤ 1, |b_s u| ≤ 1, |b_s ε_{t−s}| ≤ 1);
χ^{(1)}_{t,s} = I(|X̃_{t,s}| > 1, |b_s u| ≤ 1, |b_s ε_{t−s}| ≤ 1);
χ^{(2)}_{t,s} = I(|b_s u| > 1, |b_s ε_{t−s}| ≤ 1);
χ^{(3)}_{t,s} = I(|b_s ε_{t−s}| > 1).

Accordingly,

E|M^{(1)}_{j,n}(x, y)|^r = E| Σ_{t=j∨1}^n γ_{n,t} U^{(1)}_{t,t−j}(x, y) |^r ≤ C Σ_{i=0}^3 E| Σ_{t=j∨1}^n γ_{n,t} U^{(1)}_{t,t−j}(x, y) χ^{(i)}_{t,t−j} |^r =: C Σ_{i=0}^3 D^{(i)}_{j,n}(x, y),  say.   (5.1)

Define

g_λ(z) = (1 + |z|)^{−1−λ},  μ_λ(x, y) = ∫_x^y g_λ(z) dz,  λ > 0.   (5.2)

We shall use Lemma 4.2 and the nice majorizing properties of the measures μ_λ to estimate the above expectations D^{(i)}_{j,n}(x, y), i = 0, 1, 2, 3. The following lemma will be used often in the sequel.

Lemma 5.1. (i) For any λ > 0 there exists a constant C_λ, depending only on λ, such that

g_λ(z + y) ≤ C_λ g_λ(z) (1 ∨ |y|)^{1+λ},  y, z ∈ R.

(ii) Moreover, for any nonnegative function l of bounded variation on R, with lim_{|u|→∞} l(u) = 0,

∫ g_λ(z + u) l(u) du ≤ C_λ g_λ(z) ∫ (1 ∨ |u|)^{1+λ} |dl(u)|,  z ∈ R.
Proof. (i) It suffices to consider y ≥ 1. We have

g_λ(z + y) = g_λ(z) + ∫_0^y g′_λ(z + u) du ≤ g_λ(z) + (1 + λ) ∫_0^{2y} (1 + |z + u|)^{−1} g_λ(z + u) du ≤ g_λ(z) + C y^{1+λ} ∫ (1 + |u|)^{−1−λ} (1 + |z + u|)^{−1} g_λ(z + u) du.

The last integral in the above bound is at most (1 + λ) ∫ (1 + |u|)^{−1−λ} (1 + |z + u|)^{−2} du ≤ C_λ g_λ(z). Hence,

g_λ(z + y) ≤ (1 + C_λ y^{1+λ}) g_λ(z) ≤ C_λ (1 ∨ y)^{1+λ} g_λ(z).

(ii) Assume again, without loss of generality, l(u) = 0 for u < 1. Then

∫ g_λ(z − u) l(u) du = ∫ l(u) d[ ∫_1^u g_λ(z − w) dw ] ≤ ∫ [ ∫_1^u g_λ(z − w) dw ] |dl(u)| ≤ Δ(z) ∫_1^∞ u^{1+λ} |dl(u)|,

where

Δ(z) := sup_{u≥1} [ u^{−1−λ} ∫_1^u g_λ(z − w) dw ] ≤ C_λ g_λ(z).   (5.3)

Indeed, write Δ(z) ≤ sup_{1≤u≤|z|/2} [...] + sup_{u>|z|/2} [...] =: Δ_−(z) + Δ_+(z); then Δ_+(z) clearly satisfies (5.3) as ∫ g_λ(w) dw < C. On the other hand, for 1 ≤ u ≤ z/2,

∫_1^u g_λ(w − z) dw ≤ ∫_{z−u}^{z−1} (1 + v)^{−1−λ} dv = λ^{−1} ( (1 + z − u)^{−λ} − (1 + z − 1)^{−λ} ) ≤ C u z^{−1−λ},

implying Δ_−(z) ≤ C_λ ( sup_{1≤u≤z/2} u^{−λ} ) z^{−1−λ} ≤ C_λ g_λ(z). The case z < 0, 1 ≤ u ≤ |z|/2, is similar. This proves Lemma 5.1.

Proof of Lemma 4.3 (case i = 1) (continued). Consider the terms on the right-hand side of (5.1). Consider first D^{(0)}_{j,n}(x, y). From Lemma 4.2 and condition (a.2) of Theorem 2.1 we obtain

| f′_{s−1}(w + δ_{n,t} − z − X̃_{t,s}) − f′_{s−1}(w + δ_{n,t} − X̃_{t,s}) | χ^{(0)}_{t,s} ≤ C min(1, |z|) g_κ(w + δ_{n,t}) χ^{(0)}_{t,s} ≤ C |z|^κ g_κ(w) χ^{(0)}_{t,s},
where 0 < κ < 1 will be specified later, and we have used the inequality min(1, |z|) ≤ |z|^κ, valid for any z. Then

|U^{(1)}_{t,s}(x, y)| χ^{(0)}_{t,s} ≤ C μ_κ(x + δ_{n,t}, y + δ_{n,t}) χ^{(0)}_{t,s} ∫ dG(u) | ∫_{b_s u}^{b_s ε_{t−s}} |z|^κ dz | ≤ C μ_κ(x, y) χ^{(0)}_{t,s} |b_s|^{1+κ} ∫ ( |ε_{t−s}|^{1+κ} + |u|^{1+κ} ) dG(u) ≤ C μ_κ(x, y) |b_s|^{1+κ} (1 + |ε_{t−s}|^{1+κ}),

assuming 1 + κ < α. Consequently,

D^{(0)}_{j,n}(x, y) ≤ C (μ_κ(x, y))^r E[ (1 + |ε_j|^{1+κ})^r ] [ Σ_{t=j∨1}^n |b_{t−j}|^{1+κ} ]^r ≤ C μ(x, y) [ Σ_{t=j∨1}^n |b_{t−j}|^{1+κ} ]^r,   (5.4)

provided (1 + κ)r < α, i.e., (4.8) holds. Next, consider the term

D^{(1)}_{j,n}(x, y) ≤ C Σ_{i=1}^2 E| Σ_{t=j∨1}^n γ_{n,t} W^{(i)}_{t,t−j}(x, y) χ^{(1)}_{t,t−j} |^r =: C Σ_{i=1}^2 D^{(1,i)}_{j,n}(x, y).

By Lemma 4.2 (4.4), |f′_{s−1}(x)| ≤ C g_κ(x), x ∈ R, with C independent of s > s_1. Furthermore, |z| ≤ max(|b_s ε_{t−s}|, |b_s u|) ≤ 1 on the set {χ^{(1)}_{t,s} = 1}. According to Lemma 5.1(i),

|W^{(i)}_{t,s}(x, y)| χ^{(1)}_{t,s} ≤ C |b_s| (1 + |ε_{t−s}|) I(|X̃_{t,s}| > 1) ∫_x^y g_κ(w − X̃_{t,s}) dw ≤ C μ_κ(x, y) |b_s| (1 + |ε_{t−s}|) |X̃_{t,s}|^{1+κ} I(|X̃_{t,s}| > 1),   (5.5)

implying

D^{(1,i)}_{j,n}(x, y) ≤ C (μ_κ(x, y))^r E[ ( Σ_{t=j∨1}^n |b_{t−j}| |X̃_{t,t−j}|^{1+κ} I(|X̃_{t,t−j}| > 1) )^r ] ≤ C μ(x, y) [ Σ_{t=j∨1}^n |b_{t−j}| E^{1/r}[ |X̃_{t,t−j}|^{r(1+κ)} I(|X̃_{t,t−j}| > 1) ] ]^r,
where in the last step we used the finiteness of μ_κ and the norm (Minkowski) inequality in L_r. Therefore, by (4.3),

D^{(1,i)}_{j,n}(x, y) ≤ C μ(x, y) [ Σ_{t=j∨1}^n |b_{t−j}| (1 ∧ (t − j))^{(1/r) − β(1+κ)} ]^r ≤ C μ(x, y) [ Σ_{t=j∨1}^n (1 ∧ (t − j)^{−β(1+κ)}) ]^r,  i = 1, 2,   (5.6)

provided r < α, κ > 0 satisfy (1 + κ)r < α, i.e., inequality (4.8). The last inequality in (5.6) uses βr > 1, which follows from (1.6) provided r is chosen sufficiently close to α. We shall now discuss the most delicate case,

D^{(3)}_{j,n}(x, y) = E| Σ_{t=j∨1}^n γ_{n,t} U^{(1)}_{t,t−j}(x, y) χ^{(3)}_{t,t−j} |^r ≤ C Σ_{i=1}^2 E| Σ_{t=j∨1}^n |W^{(i)}_{t,t−j}(x, y)| I(|b_{t−j} ε_j| > 1) |^r =: C Σ_{i=1}^2 D^{(3,i)}_{j,n}(x, y),  say.

Now, recall the definition of W^{(i)}_{t,t−j}(x, y) and use Lemma 4.2 to obtain

|W^{(i)}_{t,t−j}(x, y)| ≤ C Σ_{k=1}^2 W^{(i,k)}_{t,t−j}(x, y),  i = 1, 2,

where

W^{(1,1)}_{t,t−j}(x, y) = ∫ dG(u) ∫ dz ∫_x^y g_κ(w + z − X̃_{t,t−j}) I(|z| ≤ |b_{t−j} ε_j|, |ε_j| > |u|) dw,
W^{(1,2)}_{t,t−j}(x, y) = ∫ dG(u) ∫ dz ∫_x^y g_κ(w + z − X̃_{t,t−j}) I(|z| ≤ |b_{t−j} u|, |ε_j| ≤ |u|) dw,
W^{(2,1)}_{t,t−j}(x, y) = ∫ dG(u) ∫ dz ∫_x^y g_κ(w − X̃_{t,t−j}) I(|z| ≤ |b_{t−j} ε_j|, |ε_j| > |u|) dw,
W^{(2,2)}_{t,t−j}(x, y) = ∫ dG(u) ∫ dz ∫_x^y g_κ(w − X̃_{t,t−j}) I(|z| ≤ |b_{t−j} u|, |ε_j| ≤ |u|) dw.

Then apply Lemma 5.1(i) and (ii) to obtain

W^{(1,1)}_{t,t−j}(x, y) I(|b_{t−j} ε_j| > 1) ≤ ∫ dz ∫_x^y g_κ(w + z − X̃_{t,t−j}) I(|z| ≤ |b_{t−j} ε_j|, |b_{t−j} ε_j| > 1) dw
≤ C ∫ dz ∫_x^y g_κ(w + z) (1 ∨ |X̃_{t,t−j}|)^{1+κ} I(|z| ≤ |b_{t−j} ε_j|, |b_{t−j} ε_j| > 1) dw ≤ C μ_κ(x, y) (1 ∨ |X̃_{t,t−j}|)^{1+κ} |b_{t−j} ε_j|^{1+κ} I(|b_{t−j} ε_j| > 1),

where in the last inequality we used Lemma 5.1(ii) with l(z) = I(|z| ≤ |b_{t−j} ε_j|), implying ∫ dz ∫_x^y g_κ(w + z) I(|z| ≤ |b_{t−j} ε_j|) dw ≤ C μ_κ(x, y) (1 ∨ |b_{t−j} ε_j|)^{1+κ}. This results in

D^{(3,1,1)}_{j,n}(x, y) := E| Σ_{t=j∨1}^n W^{(1,1)}_{t,t−j}(x, y) I(|b_{t−j} ε_j| > 1) |^r
≤ C (μ_κ(x, y))^r E[ ( Σ_{t=j∨1}^n (1 ∨ |X̃_{t,t−j}|)^{1+κ} |b_{t−j} ε_j|^{1+κ} )^r ]
≤ C μ(x, y) [ Σ_{t=j∨1}^n |b_{t−j}|^{1+κ} E^{1/r}[ (1 ∨ |X̃_{t,t−j}|)^{r(1+κ)} ] E^{1/r}[ |ε_j|^{r(1+κ)} ] ]^r
≤ C μ(x, y) [ Σ_{t=j∨1}^n |b_{t−j}|^{1+κ} ]^r,

provided r < α, κ > 0 satisfy (1 + κ)r < α, or (4.8). In a similar way, consider

W^{(1,2)}_{t,t−j}(x, y) I(|b_{t−j} ε_j| > 1) ≤ ∫ dG(u) ∫ dz ∫_x^y g_κ(w + z − X̃_{t,t−j}) I(|z| ≤ |b_{t−j} u|) dw
≤ C ∫ dG(u) ∫ dz ∫_x^y g_κ(w + z) (1 ∨ |X̃_{t,t−j}|)^{1+κ} I(|z| ≤ |b_{t−j} u|) dw
≤ C μ_κ(x, y) ∫ dG(u) (1 ∨ |X̃_{t,t−j}|)^{1+κ} (1 ∨ |b_{t−j} u|)^{1+κ},
implying again, under condition (4.8),

  D^{(3,1,2)}_{j,n}(x, y) := E|Σ_{t=j+1}^n γ_{n,t} W^{(1,2)}_{t,t−j}(x, y) I(|b_{t−j} ε_j| ≤ 1)|^r ≤ C μ(x, y) Σ_{t=j+1}^n |b_{t−j}|^{1+δ}.

As D^{(3,1)}_{j,n}(x, y) ≤ C Σ_{i=1,2} D^{(3,1,i)}_{j,n}(x, y), one obtains the same bounds for D^{(3,1)}_{j,n}(x, y) and, exactly in the same way, for D^{(3,2)}_{j,n}(x, y). This yields, finally,

  D^{(3)}_{j,n}(x, y) ≤ C μ(x, y) Σ_{t=j+1}^n |b_{t−j}|^{1+δ},   (5.7)

provided r < α, δ > 0 satisfy (4.8) and r is sufficiently close to α. Thus, we have shown in (5.4), (5.6) and (5.7) the same bound for D^{(i)}_{j,n}(x, y), i = 0, 1, 3, respectively, and the case i = 2 is completely analogous to that of the estimation of D^{(3,1,2)}_{j,n}(x, y). Then we have the required bound for i = 1, which completes the proof of Lemma 4.3 in the case i = 1.

Proof of Lemma 4.3 (cases i = 2, 3 and i = 0). Consider the case i = 2, or M^{(2)}_{j,n}(x, y). We have

  M^{(2)}_{j,n}(x, y) = ε_j Σ_{t=j+1}^n γ_{n,t} b_{t−j} (f(x + δ_{n,t}, y + δ_{n,t}) − f(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j})),

so that, by the independence of ε_j and X_{t,t−j}, for any 1 ≤ r < α we can write

  E|M^{(2)}_{j,n}(x, y)|^r = E|ε_0|^r E|Σ_{t=j+1}^n γ_{n,t} b_{t−j} (f(x + δ_{n,t}, y + δ_{n,t}) − f(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j}))|^r ≤ C Σ_{i=1,2} H^{(i)}_{j,n}(x, y), say,   (5.8)

where

  H^{(1)}_{j,n}(x, y) = E|Σ_{t=j+1}^n |b_{t−j}| |f(x + δ_{n,t}, y + δ_{n,t}) − f(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j})| I(|X_{t,t−j}| ≤ 1)|^r,
  H^{(2)}_{j,n}(x, y) = E|Σ_{t=j+1}^n |b_{t−j}| |f(x + δ_{n,t}, y + δ_{n,t}) − f(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j})| I(|X_{t,t−j}| > 1)|^r.

Now by Lemma 4.2 (4.4),

  |f(x + δ_{n,t}, y + δ_{n,t}) − f(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j})| I(|X_{t,t−j}| ≤ 1)
    ≤ I(|X_{t,t−j}| ≤ 1) ∫_{x+δ_{n,t}}^{y+δ_{n,t}} |f′(z) − f′(z − X_{t,t−j})| dz
    ≤ C|X_{t,t−j}| μ_0(x + δ_{n,t}, y + δ_{n,t}) ≤ C|X_{t,t−j}| μ_0(x, y),

where μ_0 = μ_1 + μ_2, say. Therefore,

  H^{(1)}_{j,n}(x, y) ≤ C μ_0^r(x, y) E|Σ_{t=j+1}^n |b_{t−j}| |X_{t,t−j}||^r
    ≤ C μ_0(x, y) Σ_{t=j+1}^n |b_{t−j}| E^{1/r}[|X_{t,t−j}|^r]
    ≤ C μ_0(x, y) Σ_{t=j+1}^n (1 ∨ (t−j))^{d−1} (1 ∨ (t−j))^{−(1/r)}.   (5.9)

Consider H^{(2)}_{j,n}(x, y). From Lemma 4.2 (4.4) and (4.2),

  f(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j}) ≤ C ∫_x^y g_γ(z − X_{t,t−j}) dz,  f(x + δ_{n,t}, y + δ_{n,t}) ≤ C ∫_x^y g_γ(z) dz,

implying

  H^{(2)}_{j,n}(x, y) ≤ C E|Σ_{t=j+1}^n |b_{t−j}| I(|X_{t,t−j}| > 1) ∫_x^y (g_γ(z) + g_γ(z − X_{t,t−j})) dz|^r ≤ C μ(x, y) Σ_{t=j+1}^n (1 ∨ (t−j))^{−1−(1+δ)γ}   (5.10)

exactly as in (5.5) and (5.6). Lemma 4.3 in the case i = 2 now follows from (5.9) and (5.10).
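Moment bounds of the type (5.8)–(5.10) rest on r-th moment inequalities for sums, 1 ≤ r ≤ 2; the von Bahr–Esseen inequality cited in the bibliography is the prototype. As a sketch for the reader (the reduction below is a paraphrase, not a display from the paper):

```latex
% von Bahr--Esseen (1965): for independent, mean-zero random variables
% \xi_1,\dots,\xi_n and any 1 \le r \le 2,
E\Bigl|\sum_{t=1}^{n}\xi_t\Bigr|^{r} \;\le\; 2\sum_{t=1}^{n}E|\xi_t|^{r}.
% With \xi_t = \gamma_{n,t}\zeta_t and a termwise bound
% E|\zeta_t|^{r}\le C\,\mu(x,y), this yields
E\Bigl|\sum_{t=1}^{n}\gamma_{n,t}\zeta_t\Bigr|^{r}
 \;\le\; 2C\,\mu(x,y)\sum_{t=1}^{n}|\gamma_{n,t}|^{r},
% which is how bounds linear in the majorizing measure \mu(x,y)
% propagate from single terms to the sums estimated above.
```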
Consider the case i = 3, or

  M^{(3)}_{j,n} = ε_j Σ_{t=j+1}^n γ_{n,t} b_{t−j} (f(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j}) − f_{s_1}(x + δ_{n,t} − X_{t,t−j}, y + δ_{n,t} − X_{t,t−j})).

Similarly as in (5.8), we have E[|M^{(3)}_{j,n}|^r] ≤ C Σ_{i=1,2} K^{(i)}_{j,n}(x, y), where

  K^{(1)}_{j,n}(x, y) = E|Σ_{t=j+1}^n |b_{t−j}| I(|X_{t,t−j}| ≤ 1) ∫_x^y |f(z + δ_{n,t} − X_{t,t−j}) − f_{s_1}(z + δ_{n,t} − X_{t,t−j})| dz|^r,
  K^{(2)}_{j,n}(x, y) = E|Σ_{t=j+1}^n |b_{t−j}| I(|X_{t,t−j}| > 1) ∫_x^y (|f(z + δ_{n,t} − X_{t,t−j})| + |f_{s_1}(z + δ_{n,t} − X_{t,t−j})|) dz|^r.

By Lemma 4.2 (4.6),

  K^{(1)}_{j,n}(x, y) ≤ C μ_0(x, y) Σ_{t=j+1}^n |b_{t−j}| (1 ∨ (t−j))^{−(1/(rα))} ≤ C μ_0(x, y) Σ_{t=j+1}^n (1 ∨ (t−j))^{−(1+κ)},

where κ = 1 − 1/(rα) > 0 provided r is sufficiently close to α. On the other hand, by Lemma 4.2 (4.4),

  K^{(2)}_{j,n}(x, y) ≤ C E|Σ_{t=j+1}^n |b_{t−j}| I(|X_{t,t−j}| > 1) ∫_x^y g_γ(z − X_{t,t−j}) dz|^r

and the right-hand side can be estimated exactly as in (5.10). This proves Lemma 4.3 (case i = 3).

It remains to prove the case i = 0. As the sum M^{(0)}_{j,n}(x, y) = Σ_{t > n−(j+s_1−1)} γ_{n,t} U_{t,t−j}(x, y) consists of a finite number (≤ s_1) of terms, for each j ≤ n, and vanishes for j ≤ n − s_1, it suffices to show for any t ∈ Z and any 0 ≤ s ≤ s_1 the inequality:

  E|U_{t,s}(x, y)|^r ≤ C μ(x, y).   (5.11)
But

  |U_{t,s}(x, y)| ≤ P[x + δ_{n,t} < X_t ≤ y + δ_{n,t} | F_{t−s}] + P[x + δ_{n,t} < X_t ≤ y + δ_{n,t} | F_{t−s−1}] + |ε_t| f(x + δ_{n,t}, y + δ_{n,t}),

implying

  E|U_{t,s}(x, y)|^r ≤ C(F(x + δ_{n,t}, y + δ_{n,t}) + f(x + δ_{n,t}, y + δ_{n,t}))

by the boundedness of f and the fact that 1 ≤ r < α. Now, Lemma 4.2 completes the proof of (5.11) and the proof of Lemma 4.3 itself.

Proof of Theorem 2.1. We follow the proof in Giraitis et al. (1996) (GKS). Observe that the majorizing measure in Lemma 4.1 above may be taken to be μ = C_δ(F + λ), δ > 0, where λ is given by (5.2) and C_δ depends only on δ. Furthermore, according to Lemma 4.2, this constant may be chosen so that, for any |θ| ≤ 1 and any x < y, we have the relations

  F(x + θ, y + θ) ≤ μ(x, y),  F_γ(x + θ, y + θ) ≤ μ(x, y),

where F_γ(x, y) = ∫_x^y f_γ(z) dz. For any integer k ≥ 1, define the partition

  −∞ =: x_{0,k} < x_{1,k} < ... < x_{2^k−1,k} < x_{2^k,k} := +∞

such that

  μ(x_{j,k}, x_{j+1,k}) = μ(R̄)2^{−k},  j = 0, 1, ..., 2^k − 1.

Given ε > 0, let

  K ≡ K_n := [log_2(n^{1+ε}/A_n)] + 1.

For any x and any k = 0, 1, ..., K, define j_k by x_{j_k,k} ≤ x < x_{j_k+1,k}. Define a chain linking −∞ to each point x by

  −∞ = x_{j_0,0} ≤ x_{j_1,1} ≤ ... ≤ x_{j_K,K} ≤ x < x_{j_K+1,K}.

Then

  S_n(x) = S_n(x_{j_0,0}, x_{j_1,1}) + S_n(x_{j_1,1}, x_{j_2,2}) + ... + S_n(x_{j_{K−1},K−1}, x_{j_K,K}) + S_n(x_{j_K,K}, x).

Similarly as in Koul and Mukherjee (1993),

  |S_n(x_{j_K,K}, x)| ≤ |S_n(x_{j_K,K}, x_{j_K+1,K})| + 2B_n(x),

where

  B_n(x) = Σ_{t=1}^n |γ_{n,t}|(F(x_{j_K,K}, x_{j_K+1,K}) + |X_t| F_γ(x_{j_K,K}, x_{j_K+1,K}))
    ≤ C μ(x_{j_K,K}, x_{j_K+1,K}) Σ_{t=1}^n (1 + |X_t|) ≤ C μ(R̄)2^{−K} Σ_{t=1}^n (1 + |X_t|),
so that

  E[sup_x B_n(x)] ≤ C2^{−K} n E[1 + |X_0|] ≤ C A_n n^{−ε}.

Next, similarly as in the above-mentioned papers, for any k = 0, 1, ..., K − 1 and any ε > 0 we obtain

  P[sup_x |S_n(x_{j_k,k}, x_{j_{k+1},k+1})| > εA_n/(k+3)²]
    ≤ Σ_{i=0}^{2^{k+1}−1} P[|S_n(x_{i,k+1}, x_{i+1,k+1})| > εA_n/(k+3)²]
    ≤ (k+3)^{2r} ε^{−r} A_n^{−r} Σ_{i=0}^{2^{k+1}−1} E[|S_n(x_{i,k+1}, x_{i+1,k+1})|^r]
    ≤ (k+3)^{2r} ε^{−r} Δ_n Σ_{i=0}^{2^{k+1}−1} μ(x_{i,k+1}, x_{i+1,k+1}) ≤ C(k+3)^{2r} ε^{−r} Δ_n

and

  P[sup_x |S_n(x_{j_K,K}, x_{j_K+1,K})| > εA_n/(K+3)²]
    ≤ Σ_{i=0}^{2^K−1} P[|S_n(x_{i,K}, x_{i+1,K})| > εA_n/(K+3)²] ≤ C(K+3)^{2r} ε^{−r} Δ_n.

Consequently, for any 0 < ε ≤ 1,

  P[sup_x |S_n(x)| > εA_n] ≤ C ε^{−r} Δ_n Σ_{k=0}^K (k+3)^{2r} + P[sup_x B_n(x) > εA_n] ≤ ε^{−r} Δ_n (K+3)^{2r+1},

which completes the proof of Theorem 2.1, as (K+3)^{2r+1} = O(log^{2r+1} n).

Proof of Theorem 3.1. Drop the subscript n from the notation where no ambiguity arises. We shall first show that

  β̂ − β = O_P(A_n/n).   (5.12)

Put δ_n = A_n/n. Relation (5.12) follows from

  A_n^{−1} Σ_{t=1}^n C_t ψ(ε_t) = O_P(1),   (5.13)

and the fact that, for any ε > 0 and any L > 0 one can find K > 0 and N such that for all n ≥ N,

  P[inf_{|s| ≥ K} |A_n^{−1} Σ_{t=1}^n C_t ψ(ε_t − δ_n s′C_t)| ≥ L] ≥ 1 − ε.   (5.14)
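A small computation behind the chaining weights in the proof of Theorem 2.1 above: the level-k thresholds εA_n/(k+3)² patch together into a deviation bound at scale εA_n because the weights are summable with sum below 1. Explicitly (a side remark, not a display from the paper):

```latex
\sum_{k=0}^{\infty}\frac{1}{(k+3)^{2}}
  \;=\; \frac{\pi^{2}}{6}-1-\frac{1}{4}\;\approx\;0.3949\;<\;\frac12,
% so the thresholds \varepsilon A_n/(k+3)^2 over all levels add up to less
% than \varepsilon A_n/2, leaving room for the boundary term 2B_n(x);
% hence, if the chained supremum exceeds \varepsilon A_n, at least one
% level-k increment must exceed its own threshold.
```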
Here, (5.13) follows from Theorem 2.1; see also (5.16), case i = 3 below, while the relation (5.14) can be proved using Theorem 2.1, the monotonicity of ψ, and an argument like the one used in Koul (1992b, Lemma 5.5.4). Next, we shall check that for each ε > 0 there exist K and N such that for all n ≥ N,

  P(inf_{|s| ≤ K} |A_n^{−1} Σ_{t=1}^n C_t ψ(ε_t − δ_n s′C_t)| ≤ ε) ≥ 1 − ε.   (5.15)

To prove this, let λ_1 denote the first derivative of λ(u) := Eψ(ε − u) at u = 0, and write

  Σ_{t=1}^n C_t ψ(ε_t − δ_n s′C_t) = Σ_{i=1}^4 Σ_i(s),

where

  Σ_1(s) = Σ_{t=1}^n C_t (ψ(ε_t − δ_n s′C_t) − λ(δ_n s′C_t) − ψ(ε_t) + λ(0)),
  Σ_2(s) = Σ_{t=1}^n C_t (λ(δ_n s′C_t) − λ(0) − λ_1 δ_n s′C_t),
  Σ_3(s) = Σ_{t=1}^n C_t (ψ(ε_t) + λ_1 ε_t),
  Σ_4(s) = λ_1 Σ_{t=1}^n C_t (δ_n s′C_t − ε_t).

It suffices to show that for each K < ∞,

  sup_{|s| ≤ K} |A_n^{−1} Σ_i(s)| = o_P(1),  i = 1, 2, 3,   (5.16)

and that for any ε > 0 one can find K, N such that for all n ≥ N,

  P[inf_{|s| ≤ K} |A_n^{−1} Σ_4(s)| ≤ ε] ≥ 1 − ε.

But the last relation obviously follows from Σ_4(δ_n^{−1}(β̂_ls − β)) = 0 and the fact that β̂_ls − β = O_P(δ_n); see Section 3. Relation (5.16) for i = 2 follows from A_n^{−1} n δ_n² = o(1) and the fact that λ(u) is twice continuously differentiable at u = 0. To show (5.16) for i = 3, write

  Σ_3(s) = −Σ_{t=1}^n C_t ∫ {I(ε_t ≤ u) − F(u) + f(u) ε_t} dψ(u)

and use Theorem 2.1 and the fact that ψ has bounded variation. In a similar way, write

  |Σ_1(s)| = |Σ_{t=1}^n C_t ∫ {I(ε_t ≤ u + δ_n s′C_t) − F(u + δ_n s′C_t) − I(ε_t ≤ u) + F(u)} dψ(u)|
    ≤ C sup_u |Σ_{t=1}^n C_t [I(ε_t ≤ u + δ_n s′C_t) − F(u + δ_n s′C_t) − I(ε_t ≤ u) + F(u)]|.

Finally, relation (5.16) for i = 1 follows from Theorem 2.1 by employing the argument in Koul (1992b, Theorem 2.3.1).
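The qualitative content of Theorem 3.1, namely that M-estimators defined through a bounded monotone score ψ are first-order equivalent to least squares while retaining their robustness motivation, can be illustrated in the simplest location setting. The following sketch is illustrative only: the data set, the Huber score and the bisection solver are my own choices, not constructions from the paper.

```python
# Illustrative sketch: location M-estimation with a Huber score function.
# The data and the cutoff c below are hypothetical choices, not from the paper.

def huber_psi(u, c=1.345):
    """Huber score: the identity near 0, clipped at +/-c (bounded, monotone)."""
    return max(-c, min(c, u))

def m_estimate(data, c=1.345, lo=-100.0, hi=100.0, tol=1e-10):
    """Solve sum_i psi(x_i - theta) = 0 for theta by bisection.

    g(theta) = sum_i psi(x_i - theta) is nonincreasing in theta, so the
    bracket [lo, hi] with g(lo) > 0 > g(hi) contains the root.
    """
    def g(theta):
        return sum(huber_psi(x - theta, c) for x in data)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Nine observations near zero plus one gross outlier.
data = [0.1, -0.2, 0.05, 0.3, -0.15, 0.2, -0.1, 0.0, 0.25, 50.0]
mean = sum(data) / len(data)   # pulled to about 5 by the outlier
theta = m_estimate(data)       # stays near 0.2

print(round(mean, 3), round(theta, 3))
```

The bounded ψ caps the outlier's contribution at c, which is why the M-estimate barely moves while the sample mean (the least-squares estimate of location) is dragged far from the bulk of the data.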
According to (5.12), for any ε > 0 there are K, N such that P[|β̂ − β| ≤ δ_n K] ≥ 1 − ε, n ≥ N. According to (5.16), on the set |β̂ − β| ≤ δ_n K, with high probability ≥ 1 − 2ε one has the inequality

  |A_n^{−1} Σ_{t=1}^n C_t ψ(ε_t − (β̂ − β)′C_t)| ≤ ε + Σ_{i=1}^3 sup_{|s| ≤ K} |A_n^{−1} Σ_i(s)|

or, equivalently, writing V_n for the design matrix with rows C_t′,

  |λ_1 A_n^{−1} V_n′V_n (β̂ − β) − λ_1 A_n^{−1} Σ_{t=1}^n C_t ε_t| ≤ ε + Σ_{i=1}^3 sup_{|s| ≤ K} |A_n^{−1} Σ_i(s)|.

Whence and from (5.16), the assumptions about the non-degeneracy of λ_1 and the matrix V = lim_n n^{−1} V_n′V_n, both statements of Theorem 3.1 follow, thereby completing its proof.

6. Asymptotics of the bivariate characteristic function

Theorem 6.1. Assume the moving average X_t (1.2) satisfies the standard assumptions of Section 1. Then for any u_1, u_2 there exists the limit

  lim_{t→∞} t^{α(1−d)−1} cov(e^{iu_1X_0}, e^{iu_2X_t}) = k_{u_1,u_2} E[e^{iu_1X_0}] E[e^{iu_2X_0}],   (6.1)

where

  k_{u_1,u_2} = −c_2 c_0^α ∫_{−∞}^{∞} [|u_1(−s)_+^{d−1} + u_2(1−s)_+^{d−1}|^α − |u_1(−s)_+^{d−1}|^α − |u_2(1−s)_+^{d−1}|^α] ds,

with c_2 = 2Γ(2−α) cos(πα/2)/(1−α).

Proof. Write ψ(u) = E[e^{iuε_0}], φ(u) = E[e^{iuX_0}], φ_t(u_1, u_2) = E[e^{i(u_1X_0+u_2X_t)}]. Then

  φ_t(u_1, u_2) = Π_{j∈Z} ψ(u_1 b_{−j} + u_2 b_{t−j}),  φ(u_1)φ(u_2) = Π_{j∈Z} ψ(u_1 b_{−j}) ψ(u_2 b_{t−j}),

where we put b_j = 0, j < 0. Similarly as in Giraitis et al. (1996, Proof of Lemma 2), write

  φ_t(u_1, u_2) = Π_{I_1} Π_{I_2} Π_{I_3} =: a_1 a_2 a_3,  φ(u_1)φ(u_2) = Π_{I_1} Π_{I_2} Π_{I_3} =: a_1′ a_2′ a_3′,

where I_1 = {j ∈ Z: |j| ≤ t^θ}, I_2 = {j ∈ Z: |t − j| ≤ t^θ}, I_3 = Z \ (I_1 ∪ I_2), and where θ > 0 will be chosen below. Then

  φ_t(u_1, u_2) − φ(u_1)φ(u_2) = a_1a_2a_3 − a_1′a_2′a_3′ = (a_1 − a_1′)a_2a_3 + a_1′(a_2 − a_2′)a_3 + a_1′a_2′(a_3 − a_3′).
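The constant c_2 above comes from the tail expansion of the characteristic function of the innovations; up to the tail normalization in (1.5), it is a multiple of the classical integral ∫_0^∞ (1 − cos v) v^{−1−α} dv = Γ(2−α) cos(πα/2)/(α(1−α)), 0 < α < 2, α ≠ 1. A quick numerical sanity check of this identity (illustrative, not part of the paper; the grid step and cutoff are arbitrary choices):

```python
# Numerical check of
#   int_0^inf (1 - cos v) v^{-1-a} dv = Gamma(2-a) cos(pi a / 2) / (a (1-a)),
# valid for 0 < a < 2, a != 1.  Grid parameters are illustrative choices.
import math

def lhs(alpha, cutoff=100.0, h=1e-4):
    """Midpoint Riemann sum of (1 - cos v) v^{-1-alpha} over (0, cutoff]."""
    total = 0.0
    v = 0.5 * h
    while v < cutoff:
        total += (1.0 - math.cos(v)) * v ** (-1.0 - alpha) * h
        v += h
    return total

def rhs(alpha):
    return math.gamma(2.0 - alpha) * math.cos(math.pi * alpha / 2.0) / (alpha * (1.0 - alpha))

alpha = 1.5
print(lhs(alpha), rhs(alpha))   # both approximately 1.67
```

Note that for 1 < α < 2 both cos(πα/2) and 1 − α are negative, so the closed form is positive, as it must be for an integral of a nonnegative function.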
Hence the theorem will follow from |a_i| ≤ 1, |a_i′| ≤ 1, i = 1, 2, 3, and from the relations, for any u_1, u_2 fixed (we write τ := α(1−d) − 1),

  a_i − a_i′ = o(t^{−τ}),  i = 1, 2,   (6.2)
  a_3 − a_3′ = a_3′ k_{u_1,u_2} t^{−τ} + o(t^{−τ}).   (6.3)

Consider (6.2). Using the inequality |Π c_i − Π c_i′| ≤ Σ |c_i − c_i′|, for |c_i| ≤ 1, |c_i′| ≤ 1, one obtains

  |a_1 − a_1′| ≤ Σ_{|j| ≤ t^θ} |ψ(u_1 b_{−j} + u_2 b_{t−j}) − ψ(u_1 b_{−j}) ψ(u_2 b_{t−j})|.

Note |u_2 b_{t−j}| ≤ Ct^{d−1} for |j| ≤ t^θ and θ < 1. Therefore, as |ψ(y_1 + y_2) − ψ(y_1)ψ(y_2)| ≤ C|y_2| with C ≤ 2E[|ε_0|], we obtain

  |a_1 − a_1′| = O(Σ_{|j| ≤ t^θ} t^{d−1}) = O(t^{θ+d−1}),

proving (6.2) for i = 1, provided θ > 0 is chosen sufficiently small, which is of course possible. The case i = 2 being analogous, it remains to prove (6.3). It is well known (Ibragimov and Linnik, 1973, Theorem 2.6.5) that assumption (1.5) (where we put c_1 = 1) is equivalent to the representation

  log ψ(u) = −c_2 |u|^α (1 + δ(u)),

in a neighborhood of the origin u = 0, where δ(u) → 0 (u → 0). Note j ∈ I_3 implies u_1 b_{−j} + u_2 b_{t−j} = o(1). Therefore, for sufficiently large t,

  a_3 − a_3′ = a_3′ (e^{−c_2 Q_t(u_1,u_2)} − 1),

where Q_t(u_1, u_2) = Q_t^0(u_1, u_2) + q_t(u_1, u_2) and

  Q_t^0(u_1, u_2) = Σ_{|j| > t^θ, |t−j| > t^θ} [|u_1 b_{−j} + u_2 b_{t−j}|^α − |u_1 b_{−j}|^α − |u_2 b_{t−j}|^α],
  q_t(u_1, u_2) = Σ_{|j| > t^θ, |t−j| > t^θ} [|u_1 b_{−j} + u_2 b_{t−j}|^α δ(u_1 b_{−j} + u_2 b_{t−j}) − |u_1 b_{−j}|^α δ(u_1 b_{−j}) − |u_2 b_{t−j}|^α δ(u_2 b_{t−j})].

Therefore, (6.3) follows from

  lim_{t→∞} t^τ Q_t^0(u_1, u_2) = −k_{u_1,u_2}/c_2,   (6.4)
  lim_{t→∞} t^τ q_t(u_1, u_2) = 0.   (6.5)

To show (6.4), write

  t^τ Q_t^0(u_1, u_2) = ∫ m_{t,u_1,u_2}(s) ds + ρ_t(u_1, u_2),

where

  ρ_t(u_1, u_2) = −t^τ Σ_{|j| ≤ t^θ or |t−j| ≤ t^θ} [|u_1 b_{−j} + u_2 b_{t−j}|^α − |u_1 b_{−j}|^α − |u_2 b_{t−j}|^α] = o(1)
by a similar argument as in the proof of (6.2), and where m_{t,u_1,u_2} is a piecewise-constant function on the real line given by

  m_{t,u_1,u_2}(s) = |u_1 t^{1−d} b_{−j} + u_2 t^{1−d} b_{t−j}|^α − |u_1 t^{1−d} b_{−j}|^α − |u_2 t^{1−d} b_{t−j}|^α  if s ∈ [j/t, (j+1)/t), j ∈ Z.

It is easy to note by (1.3) that for each s (and any u_1, u_2 fixed),

  lim_{t→∞} m_{t,u_1,u_2}(s) = c_0^α [|u_1(−s)_+^{d−1} + u_2(1−s)_+^{d−1}|^α − |u_1(−s)_+^{d−1}|^α − |u_2(1−s)_+^{d−1}|^α] =: m_{u_1,u_2}(s),

where the limit function is integrable on the real line. Furthermore, one can check that the sequence {m_{t,u_1,u_2}}_{t=0,1,...} is dominated by an integrable function. By the Lebesgue dominated convergence theorem, lim_t ∫ m_{t,u_1,u_2}(s) ds = ∫ m_{u_1,u_2}(s) ds = −k_{u_1,u_2}/c_2. In a similar way, one can verify convergence (6.5). Theorem 6.1 is proved.

References

Astrauskas, A., Limit theorems for sums of linearly generated random variables. Lithuanian Math. J. 23.
Astrauskas, A., Levy, J.B., Taqqu, M.S., The asymptotic dependence structure of the linear fractional Levy motion. Lithuanian Math. J. 31.
Avram, F., Taqqu, M.S., Weak convergence of moving averages with infinite variance. In: Eberlein, E., Taqqu, M.S. (Eds.), Dependence in Probability and Statistics. Birkhäuser, Boston.
Avram, F., Taqqu, M.S., Weak convergence of sums of moving averages in the α-stable domain of attraction. Ann. Probab. 20.
von Bahr, B., Esseen, C.-G., Inequalities for the rth absolute moment of a sum of random variables, 1 ≤ r ≤ 2. Ann. Math. Statist. 36.
Baillie, R.T., Long memory processes and fractional integration in econometrics. J. Econometrics 73.
Beran, J., M-estimators of location for data with slowly decaying correlations. J. Amer. Statist. Assoc. 86.
Beran, J., Statistical methods for data with long range dependence. Statist. Sci. 7.
Billingsley, P., Convergence of Probability Measures. Wiley, New York.
Christoph, G., Wolf, W., Convergence Theorems with a Stable Limit Law. Akademie Verlag, Berlin.
Csorgo, S., Mielniczuk, J., Density estimation under long-range dependence. Ann. Statist. 23.
Dehling, H., Taqqu, M.S., The empirical process of some long-range dependent sequences with an application to U-statistics. Ann. Statist. 17.
Dobrushin, R.L., Major, P., Non-central limit theorems for non-linear functions of Gaussian fields. Z. Wahrsch. Verw. Geb. 50.
Doukhan, P., Massart, P., Rio, E., The invariance principle for the empirical measure of a weakly dependent process. Ann. Inst. H. Poincaré B 31.
Giraitis, L., Koul, H.L., Surgailis, D., Asymptotic normality of regression estimators with long memory errors. Statist. Probab. Lett. 29.
Giraitis, L., Surgailis, D., Limit theorems for polynomials of linear processes with long range dependence. Lithuanian Math. J. 29.
Giraitis, L., Surgailis, D., Central limit theorem for the empirical process of a linear sequence with long memory. J. Statist. Plann. Inference 80.
Granger, C.W.J., Joyeux, R., An introduction to long-memory time series and fractional differencing. J. Time Ser. Anal. 1.
Hsing, T., On the asymptotic distributions of partial sums of functionals of infinite-variance moving averages. Ann. Probab. 27.
Ho, H.C., Hsing, T., On the asymptotic expansion of the empirical process of long memory moving averages. Ann. Statist. 24.
Ho, H.C., Hsing, T., Limit theorems for functionals of moving averages. Ann. Probab. 25.
Hosking, J.R.M., Fractional differencing. Biometrika 68.
Ibragimov, I.A., Linnik, Yu.V., Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen.
Kasahara, Y., Maejima, M., Weighted sums of i.i.d. random variables attracted to integrals of stable processes. Probab. Theory Rel. Fields 78.
Knight, K., Estimation in dynamic linear regression models with infinite variance errors. Econometric Theory 9.
Kokoszka, P.S., Taqqu, M.S., Fractional ARIMA with stable innovations. Stochastic Process. Appl. 60.
Koul, H.L., 1992a. M-estimators in linear models with long range dependent errors. Statist. Probab. Lett. 14.
Koul, H.L., 1992b. Weighted Empiricals and Linear Models. IMS Lecture Notes Monograph Series, 21. Hayward, CA.
Koul, H.L., Mukherjee, K., Asymptotics of R-, MD- and LAD-estimators in linear regression models with long range dependent errors. Probab. Theory Rel. Fields 95.
Koul, H.L., Surgailis, D., Asymptotic expansion of M-estimators with long memory errors. Ann. Statist. 25.
Robinson, P., 1994a. Semiparametric analysis of long-memory time series. Ann. Statist. 22.
Robinson, P., 1994b. Time series with strong dependence. In: Sims, E. (Ed.), Advances in Econometrics, Econometric Society Monographs No. 23, Vol. I. Cambridge University Press, Cambridge.
Rosenblatt, M., Independence and dependence. Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics & Probability, Vol. 2. University of California Press, Berkeley.
Samorodnitsky, G., Taqqu, M.S., Stable Non-Gaussian Random Processes. Chapman & Hall, New York.
Shao, Q., Yu, H., Weak convergence for weighted empirical process of dependent sequences. Ann. Probab. 24.
Surgailis, D., Zones of attraction of self-similar multiple integrals. Lithuanian Math. J. 22.
Taqqu, M.S., Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrsch. Verw. Geb. 50.
More informationTrimmed sums of long-range dependent moving averages
Trimmed sums of long-range dependent moving averages Rafal l Kulik Mohamedou Ould Haye August 16, 2006 Abstract In this paper we establish asymptotic normality of trimmed sums for long range dependent
More information1. Introduction In many biomedical studies, the random survival time of interest is never observed and is only known to lie before an inspection time
ASYMPTOTIC PROPERTIES OF THE GMLE WITH CASE 2 INTERVAL-CENSORED DATA By Qiqing Yu a;1 Anton Schick a, Linxiong Li b;2 and George Y. C. Wong c;3 a Dept. of Mathematical Sciences, Binghamton University,
More informationCharacterizations on Heavy-tailed Distributions by Means of Hazard Rate
Acta Mathematicae Applicatae Sinica, English Series Vol. 19, No. 1 (23) 135 142 Characterizations on Heavy-tailed Distributions by Means of Hazard Rate Chun Su 1, Qi-he Tang 2 1 Department of Statistics
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More informationA note on continuous behavior homomorphisms
Available online at www.sciencedirect.com Systems & Control Letters 49 (2003) 359 363 www.elsevier.com/locate/sysconle A note on continuous behavior homomorphisms P.A. Fuhrmann 1 Department of Mathematics,
More informationMinimax Rate of Convergence for an Estimator of the Functional Component in a Semiparametric Multivariate Partially Linear Model.
Minimax Rate of Convergence for an Estimator of the Functional Component in a Semiparametric Multivariate Partially Linear Model By Michael Levine Purdue University Technical Report #14-03 Department of
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationFrom Fractional Brownian Motion to Multifractional Brownian Motion
From Fractional Brownian Motion to Multifractional Brownian Motion Antoine Ayache USTL (Lille) Antoine.Ayache@math.univ-lille1.fr Cassino December 2010 A.Ayache (USTL) From FBM to MBM Cassino December
More informationComputation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy
Computation Of Asymptotic Distribution For Semiparametric GMM Estimators Hidehiko Ichimura Graduate School of Public Policy and Graduate School of Economics University of Tokyo A Conference in honor of
More informationBalance properties of multi-dimensional words
Theoretical Computer Science 273 (2002) 197 224 www.elsevier.com/locate/tcs Balance properties of multi-dimensional words Valerie Berthe a;, Robert Tijdeman b a Institut de Mathematiques de Luminy, CNRS-UPR
More informationEstimation of the long Memory parameter using an Infinite Source Poisson model applied to transmission rate measurements
of the long Memory parameter using an Infinite Source Poisson model applied to transmission rate measurements François Roueff Ecole Nat. Sup. des Télécommunications 46 rue Barrault, 75634 Paris cedex 13,
More informationOPTIMAL STOPPING OF A BROWNIAN BRIDGE
OPTIMAL STOPPING OF A BROWNIAN BRIDGE ERIK EKSTRÖM AND HENRIK WANNTORP Abstract. We study several optimal stopping problems in which the gains process is a Brownian bridge or a functional of a Brownian
More informationLecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1
Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).
More informationVII Selected Topics. 28 Matrix Operations
VII Selected Topics Matrix Operations Linear Programming Number Theoretic Algorithms Polynomials and the FFT Approximation Algorithms 28 Matrix Operations We focus on how to multiply matrices and solve
More informationMOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES
J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary
More informationQED. Queen s Economics Department Working Paper No. 1244
QED Queen s Economics Department Working Paper No. 1244 A necessary moment condition for the fractional functional central limit theorem Søren Johansen University of Copenhagen and CREATES Morten Ørregaard
More informationThe Pacic Institute for the Mathematical Sciences http://www.pims.math.ca pims@pims.math.ca Surprise Maximization D. Borwein Department of Mathematics University of Western Ontario London, Ontario, Canada
More informationGaussian distributions and processes have long been accepted as useful tools for stochastic
Chapter 3 Alpha-Stable Random Variables and Processes Gaussian distributions and processes have long been accepted as useful tools for stochastic modeling. In this section, we introduce a statistical model
More information4 Sums of Independent Random Variables
4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables
More informationCombinatorial Dimension in Fractional Cartesian Products
Combinatorial Dimension in Fractional Cartesian Products Ron Blei, 1 Fuchang Gao 1 Department of Mathematics, University of Connecticut, Storrs, Connecticut 0668; e-mail: blei@math.uconn.edu Department
More informationAn FKG equality with applications to random environments
Statistics & Probability Letters 46 (2000) 203 209 An FKG equality with applications to random environments Wei-Shih Yang a, David Klein b; a Department of Mathematics, Temple University, Philadelphia,
More informationCHAPTER 3: LARGE SAMPLE THEORY
CHAPTER 3 LARGE SAMPLE THEORY 1 CHAPTER 3: LARGE SAMPLE THEORY CHAPTER 3 LARGE SAMPLE THEORY 2 Introduction CHAPTER 3 LARGE SAMPLE THEORY 3 Why large sample theory studying small sample property is usually
More informationWeak invariance principles for sums of dependent random functions
Available online at www.sciencedirect.com Stochastic Processes and their Applications 13 (013) 385 403 www.elsevier.com/locate/spa Weak invariance principles for sums of dependent random functions István
More informationNP-hardness of the stable matrix in unit interval family problem in discrete time
Systems & Control Letters 42 21 261 265 www.elsevier.com/locate/sysconle NP-hardness of the stable matrix in unit interval family problem in discrete time Alejandra Mercado, K.J. Ray Liu Electrical and
More informationA note on the growth rate in the Fazekas Klesov general law of large numbers and on the weak law of large numbers for tail series
Publ. Math. Debrecen 73/1-2 2008), 1 10 A note on the growth rate in the Fazekas Klesov general law of large numbers and on the weak law of large numbers for tail series By SOO HAK SUNG Taejon), TIEN-CHUNG
More informationLifting to non-integral idempotents
Journal of Pure and Applied Algebra 162 (2001) 359 366 www.elsevier.com/locate/jpaa Lifting to non-integral idempotents Georey R. Robinson School of Mathematics and Statistics, University of Birmingham,
More informationComment on Weak Convergence to a Matrix Stochastic Integral with Stable Processes
Comment on Weak Convergence to a Matrix Stochastic Integral with Stable Processes Vygantas Paulauskas Department of Mathematics and Informatics, Vilnius University Naugarduko 24, 03225 Vilnius, Lithuania
More informationThe Central Limit Theorem: More of the Story
The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit
More informationTime Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley
Time Series Models and Inference James L. Powell Department of Economics University of California, Berkeley Overview In contrast to the classical linear regression model, in which the components of the
More informationFunctional central limit theorem for super α-stable processes
874 Science in China Ser. A Mathematics 24 Vol. 47 No. 6 874 881 Functional central it theorem for super α-stable processes HONG Wenming Department of Mathematics, Beijing Normal University, Beijing 1875,
More informationComplete q-moment Convergence of Moving Average Processes under ϕ-mixing Assumption
Journal of Mathematical Research & Exposition Jul., 211, Vol.31, No.4, pp. 687 697 DOI:1.377/j.issn:1-341X.211.4.14 Http://jmre.dlut.edu.cn Complete q-moment Convergence of Moving Average Processes under
More informationNormal approximation of Poisson functionals in Kolmogorov distance
Normal approximation of Poisson functionals in Kolmogorov distance Matthias Schulte Abstract Peccati, Solè, Taqqu, and Utzet recently combined Stein s method and Malliavin calculus to obtain a bound for
More informationCOMPACT DIFFERENCE OF WEIGHTED COMPOSITION OPERATORS ON N p -SPACES IN THE BALL
COMPACT DIFFERENCE OF WEIGHTED COMPOSITION OPERATORS ON N p -SPACES IN THE BALL HU BINGYANG and LE HAI KHOI Communicated by Mihai Putinar We obtain necessary and sucient conditions for the compactness
More informationSpurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics
UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics
More informationAsymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals
Acta Applicandae Mathematicae 78: 145 154, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 145 Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals M.
More informationJordan Journal of Mathematics and Statistics (JJMS) 8(3), 2015, pp THE NORM OF CERTAIN MATRIX OPERATORS ON NEW DIFFERENCE SEQUENCE SPACES
Jordan Journal of Mathematics and Statistics (JJMS) 8(3), 2015, pp 223-237 THE NORM OF CERTAIN MATRIX OPERATORS ON NEW DIFFERENCE SEQUENCE SPACES H. ROOPAEI (1) AND D. FOROUTANNIA (2) Abstract. The purpose
More informationConsistent estimation of the memory parameter for nonlinear time series
Consistent estimation of the memory parameter for nonlinear time series Violetta Dalla, Liudas Giraitis and Javier Hidalgo London School of Economics, University of ork, London School of Economics 18 August,
More informationOn some properties of elementary derivations in dimension six
Journal of Pure and Applied Algebra 56 (200) 69 79 www.elsevier.com/locate/jpaa On some properties of elementary derivations in dimension six Joseph Khoury Department of Mathematics, University of Ottawa,
More information