Random coefficient autoregression, regime switching and long memory


Random coefficient autoregression, regime switching and long memory

Remigijus Leipus$^1$ and Donatas Surgailis$^2$

$^1$Vilnius University and $^2$Vilnius Institute of Mathematics and Informatics

September 13, 2002

Abstract

We discuss long memory properties of an AR(1) process $X_t$ with random coefficient $a_t$ taking independent values $A_j \in [0,1]$ on consecutive intervals of a stationary renewal process with a power-law interrenewal distribution. In the case when the distribution of the generic $A_j$ has either an atom at the point $a = 1$, or a beta-type probability density in a neighborhood of $a = 1$, we show that the covariance function of $X_t$ decays hyperbolically with exponent between 0 and 1, and that a suitably normalized partial sums process of $X_t$ converges weakly to a symmetric Lévy process.

0 Introduction

The first order autoregressive equation
$$X_t = aX_{t-1} + \varepsilon_t \tag{0.1}$$
is fundamental to the theory and applications of time series. In the simplest case of (0.1), $a$ is a scalar nonrandom coefficient taking values in the interval $(-1,1)$ and $\varepsilon_t$, $t \in \mathbb{Z}$, are independent normally distributed random variables with zero mean.

A natural generalization of (0.1) is to assume the coefficient random and/or time-dependent. Vervaat (1979), Lewis and Lawrence (1981), Tjøstheim (1986), among others, studied the simple model
$$X_t = a_tX_{t-1} + \varepsilon_t, \tag{0.2}$$
where $\{a_t\}$ and $\{\varepsilon_t\}$ are two random processes, which are usually assumed to be mutually independent. The so-called Random Coefficient AutoRegressive (RCAR) model (see e.g. Nicholls and Quinn (1982)) usually refers to the situation when the coefficient sequence $\{a_t\}$ is i.i.d. Dependence properties of the RCAR model with independent coefficients are broadly similar to those of the standard AR(1) process (0.1). The other extreme case of (0.2), when $a_t$ is a random constant, corresponds to a nonergodic process $X_t$ whose covariance need not tend to zero (see e.g. Robinson (1978)). Granger (1980) showed that aggregation of random coefficient AR(1) processes can lead to Gaussian long memory processes whose spectral density has a power-law singularity at the origin. Pourahmadi (1988) was probably the first to observe that the model (0.2) with random time-dependent $a_t$ (taking only two distinct values 0 and 1) may exhibit long memory, in the sense that the covariance of $X_t$ is nonsummable and decays hyperbolically. In his case, the values 0 and 1 are taken on consecutive renewal intervals with a power-law interrenewal distribution.

In our paper we discuss a similar (but more general) situation when the process $a_t$ is the so-called renewal-reward process:
$$a_t := A_j, \quad S_{j-1} < t \le S_j, \ j \in \mathbb{Z}, \tag{0.3}$$
where
$$\cdots < S_{-1} < 0 \le S_0 < S_1 < \cdots \tag{0.4}$$
is a strictly stationary renewal process on $\mathbb{Z}$ and $\{A_i, i \in \mathbb{Z}\}$ is a sequence of i.i.d. r.v.'s independent of the renewal process (0.4). We assume that the interrenewal times $U_i := S_i - S_{i-1}$, $i \in \mathbb{Z}$, are independent $\{1,2,\ldots\}$-valued random variables having a common distribution $U$ with finite expectation $\mu := EU < \infty$. The above conditions imply strict stationarity of the renewal-reward process $a_t$. We also assume that the sequences $\{S_j, A_j, j \in \mathbb{Z}\}$ and $\{\varepsilon_j, j \in \mathbb{Z}\}$ are independent and that the $\varepsilon_j$, $j \in \mathbb{Z}$, are i.i.d. r.v.'s with zero mean and unit variance. We do not impose Gaussianity or any particular distributional assumptions on the $\varepsilon_j$'s. On the other hand, the assumptions on the distributions of $U$ and the generic $A = A_i$ given below play an important role in the proofs of our results, although they probably can be relaxed (in particular, the requirement that $A$ takes values in the interval $[0,1]$).
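For concreteness, here is a minimal simulation sketch of the model (0.2)-(0.4). The function names and the particular choices of reward and interrenewal distributions (a discretized Pareto law for $U$, a Beta(1, q) law for $A$) are our own illustrations, not prescriptions of the paper, and the burn-in is only a crude substitute for sampling the stationary renewal process (0.4) exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rcar(n, draw_U, draw_A, burn_in=10_000):
    """Simulate X_t = a_t X_{t-1} + eps_t, where a_t equals the i.i.d. reward
    A_j on the j-th interrenewal interval, as in (0.2)-(0.4)."""
    total = n + burn_in
    a = np.empty(total)
    pos = 0
    while pos < total:                     # lay rewards on renewal intervals
        u = draw_U()                       # interrenewal time U_j >= 1
        a[pos:pos + u] = draw_A()          # reward A_j, constant on the interval
        pos += u
    eps = rng.standard_normal(total)       # zero-mean, unit-variance errors
    x = np.empty(total)
    x[0] = eps[0]
    for t in range(1, total):
        x[t] = a[t] * x[t - 1] + eps[t]
    return x[burn_in:]                     # drop burn-in to approach stationarity

# Illustrative choices: P[U = u] decays approximately like c*u^{-alpha-1}
# (Assumption U(alpha) below), and A ~ Beta(1, q), whose density q(1-a)^{q-1}
# blows up near a = 1 as in Assumption A(q)(ii).
alpha, q = 1.5, 0.8                        # alpha + q = 2.3 lies in (2, 3)
draw_U = lambda: int(np.ceil(rng.pareto(alpha) + 1))
draw_A = lambda: rng.beta(1.0, q)
x = simulate_rcar(100_000, draw_U, draw_A)
```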

Assumption U(α). There exist constants $c_U > 0$ and $\alpha > 1$ such that
$$P[U = u] \sim c_Uu^{-\alpha-1}, \quad u \to \infty.$$

Assumption A(q). $P[0 \le A \le 1] = 1$ and:

(i) If $q = 0$, then $A$ has an atom at 1: $0 < f_1 := P[A = 1] < 1$.

(ii) If $q > 0$, then $A$ has a probability density $f(a)$ in some neighborhood of $a = 1$ such that $f(a) = f_1(a)(1-a)^{q-1}$, where $f_1(a)$ is a continuous function such that $f_1 := f_1(1) > 0$.

Let us describe the main results of the paper. In Sect. 1 we find sufficient and necessary conditions on $U$ and $A$ for the existence of a covariance stationary solution $X_t$ of (0.2) given by
$$X_t = \sum_{i=0}^\infty\varepsilon_{t-i}\prod_{p=0}^{i-1}a_{t-p}, \tag{0.5}$$
with the convention $\prod_{p=0}^{-1}a_{t-p} := 1$. These conditions are satisfied under Assumptions U(α) and A(q). The main results of the paper are Theorems 1 and 2 below. They refer to $X_t$ of (0.5), with $a_t$ defined by (0.3).

Theorem 1. Let Assumptions U(α) and A(q) be satisfied, with $2 < \alpha + q < 3$. Then
$$r_t = EX_0X_t \sim c_2t^{2-\alpha-q}, \quad t \to \infty. \tag{0.6}$$

The explicit form of the asymptotic constant $c_2 > 0$ (depending on α and q) is given in Sect. 2. Theorem 1 is a particular case of Theorem 2.1, where Assumption A(q) is replaced by the regular decay condition $\nu_u \sim c_Au^{-q}$ of the moments $\nu_u := EA^u$, as $u \to \infty$. Theorem 1 implies that for $2 < \alpha + q < 3$, the stationary solution $X_t$ of (0.5) has long memory. The fact that the intensity of long memory, i.e. the exponent $\alpha + q - 2 \in (0,1)$ in (0.6), depends both on the tail parameter α of the interrenewal time distribution and on the parameter q characterizing the average closeness of (0.2) to the unit root $a = 1$ of the standard AR(1) model, is very natural.
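For the illustrative choice $A \sim$ Beta(1, q) from the sketch above (ours, not the paper's), Assumption A(q) holds with $f_1 = q$ and the moments are explicit, $\nu_u = EA^u = \Gamma(1+u)\Gamma(1+q)/\Gamma(1+u+q)$, so the regular decay condition can be checked numerically; the limiting constant $c_A = f_1\Gamma(q) = \Gamma(1+q)$ agrees with the constant appearing in (1.7) below.

```python
import numpy as np
from scipy.special import gamma, gammaln

q = 0.8
for u in [10, 100, 1_000, 10_000]:
    # nu_u = E A^u for A ~ Beta(1, q), via log-gamma to avoid overflow
    nu_u = np.exp(gammaln(1 + u) + gammaln(1 + q) - gammaln(1 + u + q))
    print(u, u ** q * nu_u, "-> c_A =", gamma(1 + q))
```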

Theorem 1 refers to second-order properties of $X_t$ only. Theorem 2 discusses distributional properties of the partial sums process of $X_t$ for large $n$. Put $\lambda := 2(\alpha+q)/3$. Write $\Rightarrow$ for weak convergence of finite dimensional distributions.

Theorem 2. Let Assumptions U(α) and A(q) be satisfied, $2 < \alpha + q < 3$, and let $E|\varepsilon|^3 < \infty$. Then
$$n^{-1/\lambda}\sum_{0\le s<[nt]}X_s \Rightarrow Z_\lambda(t), \tag{0.7}$$
where $Z_\lambda(t)$ is a λ-stable symmetric Lévy process whose characteristic function is given in (3.20) below.

The question of the functional convergence in (0.7) (e.g., in the Skorohod topology) is open; see also Pipiras, Taqqu and Levy (2002, p. 4). Theorem 2 might appear surprising, as the summands $X_t$ have finite variance (but infinite fourth moment, see Proposition 2.3) and their marginal distribution is not heavy-tailed. However, stable limits of sums of (stationary) r.v.'s may occur under long memory even when the summands are bounded. One of the simplest models of this kind is the renewal-reward model (0.3) itself, which was recently discussed in several papers in connection with aggregation; see Mikosch, Resnick, Rootzén and Stegeman (2002), Pipiras, Taqqu and Levy (2002) and the references therein. It is easy to show that the process $a_t$ of (0.3) exhibits long memory if $EA^2 < \infty$ and the distribution of $U$ satisfies Assumption U(α) with $1 < \alpha < 2$. Taqqu and Levy (1986) discuss the convergence of the partial sums process of $a_t$ to a stable Lévy process under more general assumptions on $A$ and $U$. A far reaching generalization of the renewal-reward model is the stochastic regime model of Davidson and Sibbertsen (2002), in which the rewards are random short memory processes which fluctuate around a local mean (regime). One of the principal results of the last paper is the fact that suitably normalized sums of their stochastic regime model converge to a stable Lévy process. A different class of long memory processes with finite variance whose partial sums converge to a Lévy process is discussed in Surgailis (2002a, 2002b).

The random coefficient autoregressive process of (0.2)-(0.3) can also be regarded as a stochastic regime model, where each regime corresponds to a standard AR(1) process (0.1) with fixed coefficient $a$. As $a$ varies between 0 and 1, the class of possible regimes ranges from I(0) (i.i.d.) to I(1) (random walk) behavior. In the simplest case, when $a_t$, or $A_j$, assume the values 0 and 1 only, the solution alternates between I(0) and I(1) regimes:
$$X_t = \begin{cases}\varepsilon_t, & \text{if } a_t = 0,\\ \varepsilon_t + \varepsilon_{t-1} + \cdots + \varepsilon_{S^0(t)}, & \text{if } a_t = 1,\end{cases}$$
where $S^0(t) := \max\{S_j : S_j < t,\ A_j = 0\} = \max\{s : s < t,\ a_s = 0\}$; see (0.5), also Pourahmadi (1988). Several other models involving structural changes, stochastic regime switching and long memory were recently discussed in the econometric literature; see Parke (1999), Granger and Hyung (1999), Diebold and Inoue (2001), Liu (2001), Gourieroux and Jasiak (2001), Leipus and Viano (2002).
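As a rough illustration of Theorem 2 (a Monte Carlo sketch under the illustrative distributions chosen above, with no claim of rigor), one can simulate independent replications of the normalized partial sum $n^{-1/\lambda}\sum_{0\le s<n}X_s$ and observe tails far heavier than Gaussian, consistent with a λ-stable limit with $1 < \lambda < 2$.

```python
import numpy as np

# Reuses simulate_rcar, draw_U, draw_A, alpha, q from the sketch above.
lam = 2 * (alpha + q) / 3                 # lambda = 2(alpha + q)/3, here ~1.53
n, reps = 20_000, 400
sums = np.array([simulate_rcar(n, draw_U, draw_A, burn_in=2_000).sum()
                 for _ in range(reps)])
z = sums / n ** (1 / lam)
print("fraction beyond 3 sample std:", np.mean(np.abs(z) > 3 * z.std()))
print("Gaussian benchmark          : 0.0027")
```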

1 Existence of stationary solution

Conditions for the existence of a covariance stationary solution (0.5) of the random coefficient autoregressive equation (0.2) with general $a_t$ were discussed in Pourahmadi (1988). Brandt (1986) studied the existence and uniqueness of a strictly stationary solution of (0.2) without any moment assumptions. In this section we discuss the existence of a covariance stationary solution in the case of $a_t$ given by (0.3), but without imposing the specific Assumptions U(α) and A(q). In the sequel, $C$ stands for a generic constant which may change from line to line.

Recall that the distributions of the first arrival time $S_0 \ge 0$ and the last arrival time $S_{-1}$ before $t = 0$ in the strictly stationary renewal process (0.4) satisfy
$$P[S_0 = u] = P[S_{-1} = -u-1] = \mu^{-1}P[U \ge u+1], \quad u = 0,1,\ldots \tag{1.1}$$
Put $\nu_k := EA^k$, $k = 1,2,\ldots$

Theorem 1.1. Equation (0.2) with $a_t$ as in (0.3) admits a covariance stationary solution $X_t$ given by (0.5) if and only if
$$E\nu_{2U} < 1 \tag{1.2}$$
and
$$E\sum_{u=1}^U(\nu_2 + \cdots + \nu_{2u}) < \infty. \tag{1.3}$$

Proof. According to Pourahmadi (1988), the existence of a covariance stationary solution of (0.2) is equivalent to
$$J := E[a_0^2 + a_0^2a_{-1}^2 + \cdots] = \sum_{u=0}^\infty E\prod_{p=0}^ua_{-p}^2 < \infty.$$
We have
$$J = \sum_{u=0}^\infty E\prod_{p=0}^ua_{-p}^2\,I(S_{-1} < -u) + \sum_{u=0}^\infty E\prod_{p=0}^ua_{-p}^2\,I(-u \le S_{-1} < 0) =: J_1 + J_2.$$
By (1.1) and the independence of $A_0$ and $S_{-1}$,
$$J_1 = \sum_{u=0}^\infty EA_0^{2(u+1)}I(S_{-1} < -u) = \sum_{u=1}^\infty\nu_{2u}P[S_{-1} \le -u] = \mu^{-1}E\sum_{u=1}^U(\nu_2 + \cdots + \nu_{2u}), \tag{1.4}$$
and
$$J_2 = \sum_{v=1}^\infty E\Big[A_0^{2v}I(S_{-1} = -v)\sum_{u\ge v}\prod_{p=v}^ua_{-p}^2\Big] = \mu^{-1}\sum_{v=1}^\infty\nu_{2v}P[U \ge v]\,J_0 = J_0\,\mu^{-1}E\sum_{v=1}^U\nu_{2v}, \tag{1.5}$$

where
$$J_0 := E\sum_{u=0}^\infty\prod_{p=0}^ua_{0,-p}^2$$
and $a_{0,t}$ is the renewal-reward process corresponding to the renewal process $\cdots < S_{0,-1} < S_{0,0} = 0 < S_{0,1} < \cdots$ with the same distribution $U$ of interrenewal times and fixed renewal point $S_{0,0} = 0$. Similarly as above, for $J_0$ one obtains the equation
$$J_0 = \sum_{u=0}^\infty\Big(EA_0^{2(u+1)}I(S_{0,-1} < -u) + \sum_{v=1}^uEA_0^{2v}I(S_{0,-1} = -v)\,a_{-v}^2\cdots a_{-u}^2\Big) = \sum_{u=1}^\infty\nu_{2u}P[U \ge u] + \sum_{v=1}^\infty\nu_{2v}P[U = v]\,J_0 = E(\nu_2 + \cdots + \nu_{2U}) + J_0\,E\nu_{2U}.$$
Clearly, $J_0 < \infty$ if (1.2) and (1.3) hold, in which case
$$J_0 = \frac{E(\nu_2 + \cdots + \nu_{2U})}{1 - E\nu_{2U}}. \tag{1.6}$$
From (1.4), (1.5), (1.6) one obtains that under conditions (1.2) and (1.3),
$$J = \mu^{-1}E\sum_{u=1}^U(\nu_2 + \cdots + \nu_{2u}) + J_0\,\mu^{-1}E(\nu_2 + \cdots + \nu_{2U}) = \mu^{-1}E\sum_{u=1}^U(\nu_2 + \cdots + \nu_{2u}) + \frac{\big(E(\nu_2 + \cdots + \nu_{2U})\big)^2}{\mu(1 - E\nu_{2U})} < \infty,$$
and therefore the covariance stationary solution (0.5) is well-defined. Moreover, the above argument shows that $J = \infty$ if either (1.2) or (1.3) is violated. This concludes the proof of Theorem 1.1.

Corollary 1.2. Assume $P[|A| \le 1] = 1$ and $\mu = EU < \infty$. Then:
(i) The covariance stationary solution (0.5) exists if $P[|A| < 1] > 0$ and $\mu_2 := EU^2 < \infty$.
(ii) If $P[|A| = 1] > 0$, then the sufficient conditions of (i) are also necessary for the existence of the covariance stationary solution (0.5).

Proof. (i) Note that $P[|A| < 1] > 0$ implies $\nu_{2u} < 1$, $u \ge 1$, and hence (1.2). Condition (1.3) is also satisfied, as $E\sum_{u=1}^U(\nu_2 + \cdots + \nu_{2u}) \le EU^2 < \infty$. (ii) The necessity of $P[|A| = 1] < 1$ for (1.2) is obvious. Let us check the necessity of $\mu_2 < \infty$. Since $\nu_{2u} \ge P[|A| = 1] > 0$, together with (1.3) this yields
$$\infty > E\sum_{u=1}^U(\nu_2 + \cdots + \nu_{2u}) \ge P[|A| = 1]\,E\sum_{u=1}^Uu \ge P[|A| = 1]\,EU^2/2,$$
or $\mu_2 < \infty$.

Corollary 1.3. Let Assumptions U(α) and A(q) be satisfied, $\alpha + q > 2$. Then the covariance stationary solution (0.5) exists.

Proof. Note that Assumption A(q) implies
$$\nu_u = EA^u = \begin{cases}f_1 + o(1), & q = 0,\\ (c_A + o(1))u^{-q}, & q > 0,\end{cases} \tag{1.7}$$
where $c_A = f_1\int_0^\infty e^{-x}x^{q-1}\,dx$. If $q = 0$ then $\alpha > 2$, implying $\mu_2 < \infty$ and the existence of $X_t$ (0.5) by Corollary 1.2. Let $q > 0$. Then (1.2) easily follows, and we need to check condition (1.3) only. We have
$$E\sum_{u=1}^U(\nu_2 + \cdots + \nu_{2u}) = \mu E(\nu_2 + \cdots + \nu_{2U_0}),$$
where $U_0 = 1,2,\ldots$ is distributed according to $P[U_0 = u] = \mu^{-1}P[U \ge u] \sim cu^{-\alpha}$. By (1.7),
$$\nu_2 + \cdots + \nu_{2v} \le C\sum_{u=1}^vu^{-q} \le C\begin{cases}1, & q > 1,\\ \log v, & q = 1,\\ v^{1-q}, & q < 1.\end{cases}$$
Hence, if $q < 1$,
$$E(\nu_2 + \cdots + \nu_{2U_0}) \le CEU_0^{1-q} \le C_1\int_1^\infty u^{1-q}u^{-\alpha}\,du = C_1\int_1^\infty u^{1-\alpha-q}\,du < \infty,$$
implying (1.3). If $q \ge 1$, (1.3) follows similarly, by $\alpha > 1$. Hence, by Theorem 1.1, the covariance stationary solution (0.5) exists.

2 Decay of covariance

Theorem 2.1. Let Assumption U(α) be satisfied. Let $P[0 \le A \le 1] = 1$, $P[A = 1] < 1$, and let there exist $q \ge 0$ with $2 < \alpha + q < 3$ and a constant $c_A > 0$ such that
$$\nu_u \sim c_Au^{-q}, \quad u \to \infty. \tag{2.1}$$
Then
$$r_t = EX_0X_t \sim c_2t^{2-\alpha-q}, \quad t \to \infty, \tag{2.2}$$
where
$$c_2 := c_Ac_U\big(\mu\alpha(\alpha-1)\big)^{-1}\int_0^\infty(1+2x)^{-q}(1+x)^{1-\alpha}\,dx.$$

Proof. First check that the covariance stationary solution $X_t$ exists under the conditions of the theorem; this follows from (2.1) and the proof of Corollary 1.3. Let us prove (2.2). According to Pourahmadi (1988), the covariance of $X_t$ (0.5) equals
$$r_t = \sum_{j=0}^\infty E\Big[a_t\cdots a_1\prod_{p=0}^{j-1}a_{-p}^2\Big] = \sum_{j=0}^\infty E\Big[a_0\cdots a_{1-t}\prod_{p=0}^{j-1}a_{-t-p}^2\Big].$$

Let us split $r_t = r_t' + r_t''$ into two parts, corresponding to $S_{-1} \le -t-j$ and $-t-j < S_{-1} \le -1$, respectively. Then
$$E\big[a_0\cdots a_{1-t}\,a_{-t}^2\cdots a_{-t-j+1}^2\,I(S_{-1} \le -t-j)\big] = EA_0^{t+2j}\,P[S_{-1} \le -t-j].$$
Hence by (1.1), (2.1) and Assumption U(α),
$$r_t' = \sum_{j=0}^\infty E\Big[a_0\cdots a_{1-t}\prod_{p=0}^{j-1}a_{-t-p}^2\,I(S_{-1} \le -t-j)\Big] = \sum_{j=0}^\infty\nu_{t+2j}\,P[S_{-1} \le -t-j] \sim c_2t^{2-\alpha-q}. \tag{2.3}$$
Let us prove that the term
$$r_t'' = \sum_{j=0}^\infty E\Big[a_0\cdots a_{1-t}\prod_{p=0}^{j-1}a_{-t-p}^2\,I(-t-j < S_{-1} \le -1)\Big] \le \sum_{s\ge t-1}E\big[a_0\cdots a_{-s}\,I(-s \le S_{-1} \le -1)\big] =: \rho_t$$
is negligible w.r.t. $r_t'$. By the definition of $a_t$,
$$\rho_t = \sum_{k=1}^\infty\sum_{s\ge t-1}\ \sum_{-s<u_k<\cdots<u_1<0}\pi_0(-u_1)\,\pi(u_1-u_2)\cdots\pi(u_{k-1}-u_k)\,\pi^*(u_k+s), \tag{2.4}$$
where $\pi_0(u) := \nu_uP[S_{-1} = -u]$, $\pi(u) := \nu_uP[U = u]$, $\pi^*(u) := \nu_uP[U \ge u]$, $u \ge 1$. Note
$$\pi := \sum_{u=1}^\infty\pi(u) = E\nu_U \le EA < 1, \qquad \pi(u) \le Cu^{-q-\alpha-1}, \quad \pi_0(u) \le Cu^{-q-\alpha}, \quad \pi^*(u) \le Cu^{-q-\alpha}.$$
According to Giraitis, Robinson and Surgailis (2000, Lemma 4.2),
$$\sum_{0<u_1<\cdots<u_{k-1}<t}\pi(u_1)\pi(u_2-u_1)\cdots\pi(t-u_{k-1}) \le Ck^3\pi^kt^{-q-\alpha-1}, \quad t, k \ge 1.$$
Therefore
$$\rho_t \le C\Big(\sum_{k=1}^\infty k^3\pi^k\Big)\sum_{s\ge t-1}\ \sum_{-s<u_k<u_1<0}\pi_0(-u_1)\,|u_1-u_k|^{-q-\alpha-1}\,\pi^*(u_k+s) \le C\sum_{s\ge t-1}\ \sum_{-s<u_k<u_1<0}|u_1|^{-q-\alpha}|u_1-u_k|^{-q-\alpha-1}|u_k+s|^{-q-\alpha} \le C\sum_{s\ge t-1}s^{-q-\alpha} \le Ct^{1-q-\alpha}.$$
Therefore $r_t'' = o(r_t')$, which concludes the proof of the theorem.
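A crude Monte Carlo check of the decay rate (2.2) is possible with the simulation sketch from the Introduction (again under our illustrative distributions; sample autocovariances of long memory processes converge slowly, so this is only indicative).

```python
import numpy as np

# Reuses simulate_rcar, draw_U, draw_A, alpha, q (alpha + q = 2.3, so the
# predicted log-log slope of r_t is 2 - alpha - q = -0.3).
def sample_autocov(x, lags):
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - t], x[t:]) / n for t in lags])

lags = np.array([16, 32, 64, 128, 256])
r = np.mean([sample_autocov(simulate_rcar(200_000, draw_U, draw_A), lags)
             for _ in range(20)], axis=0)
slope = np.polyfit(np.log(lags), np.log(r), 1)[0]
print(f"estimated slope {slope:.2f}, theory {2 - alpha - q:.2f}")
```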

From the proof of the above theorem, it is easy to observe that the long memory behavior of $X_t$ is due to the past of the autoregressive process until the first change of the coefficient, while the remaining part of the process is short memory. More precisely, let $S(t)$ be the last renewal time before time $t$:
$$S(t) := \max\{S_j : S_j < t\}.$$
As follows from the definition (0.3), the process $a_s$ is constant on the interval $[S(t)+1, t]$: $a_s = A_{j(t)}$ for $s \in [S(t)+1, t]$, where $j(t) \in \mathbb{Z}$ is defined by $S(t) = S_{j(t)-1}$. By stationarity of the renewal process, the backward time $t - S(t)$ has the same distribution as $-S_{-1}$, i.e.
$$P[t - S(t) = u] = \mu^{-1}P[U \ge u], \quad u = 1,2,\ldots$$
Then
$$X_t = X_t^0 + X_t^1, \tag{2.5}$$
where
$$X_t^0 := \varepsilon_t + A_{j(t)}\varepsilon_{t-1} + A_{j(t)}^2\varepsilon_{t-2} + \cdots + A_{j(t)}^{t-S(t)-1}\varepsilon_{S(t)+1}, \qquad X_t^1 := \sum_{s\le S(t)}a_ta_{t-1}\cdots a_{s+1}\varepsilon_s.$$
Clearly, both $X_t^0$ and $X_t^1$ are stationary processes. Put $r_t^0 := EX_0^0X_t^0$, $r_t^1 := EX_0^1X_t^1$.

Corollary 2.2. Assume the conditions of Theorem 2.1. Then (2.5) represents the decomposition of $X_t$ of (0.5) into the long memory part $X_t^0$ and the short memory part $X_t^1$, in the sense that
$$r_t^0 \sim r_t \sim c_2t^{2-\alpha-q}, \qquad r_t^1 = O(t^{1-q-\alpha}). \tag{2.6}$$
In particular, $\sum_{t=0}^\infty r_t^1 < \infty$.

Proof. Let us show the second relation of (2.6). We have $X_t^1 = \sum_{s\le S(t)}a_t\cdots a_{s+1}\varepsilon_s$, and therefore for $t \ge 0$
$$r_t^1 := EX_0^1X_t^1 = E\sum_{s\le S(-t)}a_0\cdots a_{1-t}\,a_{-t}^2\cdots a_{s+1}^2.$$
Clearly, under the conditions of Theorem 2.1, since $S(-t) \le S_{-1}\,(= S(0))$ a.s., we have $r_t^1 \le r_t''$, where $r_t''$ is defined in the proof of Theorem 2.1 and satisfies $r_t'' = O(t^{1-q-\alpha})$. The first relation of (2.6) also follows from the proof of Theorem 2.1.
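The decomposition (2.5) is easy to realize in simulation: with the regime labels at hand, $X_t^0$ collects the innovations of the current regime only, and $X_t^1 = X_t - X_t^0$. The sketch below is ours (reusing the illustrative samplers from the Introduction, and ignoring the stationary initialization of the first regime) and simply compares sample autocovariances of the two parts at a moderately large lag.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_split(n, draw_U, draw_A):
    """Return (x0, x1): the long memory part X^0 of (2.5) (innovations of the
    current regime only) and the short memory remainder X^1 = X - X^0."""
    a = np.empty(n)
    start = np.empty(n, dtype=int)         # first index of each regime
    pos = 0
    while pos < n:
        u = draw_U()
        a[pos:pos + u] = draw_A()
        start[pos:pos + u] = pos
        pos += u
    eps = rng.standard_normal(n)
    x = np.zeros(n)
    x0 = np.zeros(n)
    x[0] = x0[0] = eps[0]
    for t in range(1, n):
        x[t] = a[t] * x[t - 1] + eps[t]
        # X^0 restarts at every regime change: within a regime it follows the
        # same AR(1) recursion, driven only by innovations since the regime began
        x0[t] = (a[t] * x0[t - 1] if t > start[t] else 0.0) + eps[t]
    return x0, x - x0

def acov(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / len(x)

x0, x1 = simulate_split(500_000, draw_U, draw_A)
print("lag-100 autocovariance:  X^0:", acov(x0, 100), "  X^1:", acov(x1, 100))
```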

Proposition 2.3. Assume the conditions of Theorem 2.1. Then $EX_0^4 = \infty$.

Proof. As $X_0 = X_0^0 + X_0^1$, and $X_0^0$ and $X_0^1$ are conditionally independent given $\{A_j, S_j, j \in \mathbb{Z}\}$, it suffices to show $E(X_0^0)^4 = \infty$. We have
$$E\big[(X_0^0)^4 \mid A_j, S_j, j \in \mathbb{Z}\big] \ge C\Big(\sum_{i=0}^{|S_{-1}|-1}A_0^{2i}\Big)^2$$
with some constant $C > 0$ depending on $E\varepsilon^{2i}$, $i = 1, 2$, only. Hence
$$E(X_0^0)^4 \ge CE\Big(\sum_{i=0}^{|S_{-1}|-1}A_0^{2i}\Big)^2 = C\sum_{u=1}^\infty\sum_{i,j=0}^{u-1}\nu_{2(i+j)}\,P[S_{-1} = -u].$$
Hence by (2.1), (1.1) and Assumption U(α),
$$E(X_0^0)^4 \ge C\sum_{u=1}^\infty u^{-\alpha}\sum_{1\le i,j\le u}(i+j)^{-q} \ge C\sum_{i,j=1}^\infty(i+j)^{-q}\sum_{u\ge i+j}u^{-\alpha} \ge C\sum_{i,j=1}^\infty(i+j)^{1-q-\alpha} = \infty$$
for $q + \alpha < 3$.

3 Convergence to a Lévy process (proof of Theorem 2)

By Corollary 2.2, it suffices to show (0.7) with $X_t$ replaced by the long memory part $X_t^0$, i.e.
$$n^{-1/\lambda}\sum_{0\le s<[nt]}X_s^0 \Rightarrow Z_\lambda(t), \tag{3.1}$$
as $\sum_{0\le s<n}X_s^1 = O_P(n^{1/2})$ and $1/\lambda > 1/2$. Note
$$\sum_{0\le s<[nt]}X_s^0 = Y_0^0 + \sum_{1\le i\le N_{[nt]-1}}Y_i + Y_{N_{[nt]-1}+1}^0, \tag{3.2}$$
where $N_t := \max\{j \ge 0 : S_j \le t\}$ is the number of renewal points in the interval $[0, t]$, and $Y_i$ is the sum of the autoregressive process with parameter $A_i$ over the renewal interval $[S_{i-1}+1, S_i]$; more precisely,
$$Y_i := \sum_{S_{i-1}<s\le S_i}\big(\varepsilon_s + A_i\varepsilon_{s-1} + \cdots + A_i^{s-S_{i-1}-1}\varepsilon_{S_{i-1}+1}\big), \quad i = 1, 2, \ldots \tag{3.3}$$
The two extreme terms on the r.h.s. of (3.2) are appropriate modifications of (3.3):
$$Y_0^0 := \sum_{0\le s\le S_0}\big(\varepsilon_s + A_0\varepsilon_{s-1} + \cdots + A_0^s\varepsilon_0\big), \qquad Y_{N_{[nt]-1}+1}^0 := \sum_{S([nt])<s<[nt]}\big(\varepsilon_s + A_{N_{[nt]-1}+1}\varepsilon_{s-1} + \cdots + A_{N_{[nt]-1}+1}^{s-S([nt])-1}\varepsilon_{S([nt])+1}\big),$$

and these can easily be seen to be negligible in the limit (3.1), which then follows from
$$n^{-1/\lambda}\sum_{i=1}^{N_{[nt]}}Y_i \Rightarrow Z_\lambda(t). \tag{3.4}$$
Observe that the $Y_i$'s are conditionally independent given $\{S_j, A_j, j \in \mathbb{Z}\}$, and
$$\mathrm{Law}(Y_i \mid S_j, A_j, j \in \mathbb{Z}) = \mathrm{Law}(T(A_i, U_i)), \tag{3.5}$$
where $U_i = S_i - S_{i-1}$ is the length of the renewal interval, as usual, and
$$T(a, n) := \sum_{s=1}^n\big(\varepsilon_s + a\varepsilon_{s-1} + \cdots + a^{s-1}\varepsilon_1\big). \tag{3.6}$$
As the $(A_i, U_i)$, $i \ge 1$, are independent, this implies that the $Y_i$, $i \ge 1$, are independent (and identically distributed) r.v.'s. Hence the l.h.s. of (3.4) is a sum of a random number $N_{[nt]} \approx [nt]/\mu$ of i.i.d. r.v.'s. Our next aim is to show that the distribution of the generic $Y = Y_i$ belongs to the domain of attraction of a symmetric λ-stable law. To that end, we first study the tail behavior of the distribution of the conditional variance $W_i := E[Y_i^2 \mid A_j, S_j, j \in \mathbb{Z}]$. We have $W_i = \Phi(A_i, U_i)$, where
$$\Phi(a, n) := ET^2(a, n) = \sum_{1\le t,s\le n}\big(a^{|t-s|} + a^{|t-s|+2} + \cdots + a^{t+s-2}\big). \tag{3.7}$$
Put
$$\Theta(v) := v^{-2}\int_0^1\big(e^{v\tau} - e^{-v\tau}\big)\big(e^{-v\tau} - e^{-v}\big)\,d\tau, \quad v > 0.$$

Lemma 3.1. Under the conditions of Theorem 2,
$$P[\Phi(A, U) > x] \sim c_Vx^{-\lambda/2}, \quad x \to \infty, \tag{3.8}$$
where
$$c_V := \begin{cases}c_Uf_1\,3^{-\alpha/3}\alpha^{-1}, & q = 0,\\[4pt] c_Uf_1\displaystyle\int_0^\infty\frac{dy}{y^{1+\alpha+q}}\int_0^\infty\frac{dv}{v^{1-q}}\,I\big(y^3\Theta(v) > 1\big), & q > 0.\end{cases} \tag{3.9}$$

Proof. For simplicity of notation, assume $c_U = f_1 = 1$. Put
$$\Theta_n(v) := \frac{1}{n^3}\Phi\Big(1 - \frac{v}{n}, n\Big), \quad v \ge 0.$$
Observe that, by the Lebesgue dominated convergence theorem,
$$\Theta_n(v) = \frac{1}{n^3}\sum_{1\le t,s\le n}\Big\{\Big(1-\frac{v}{n}\Big)^{|t-s|} + \Big(1-\frac{v}{n}\Big)^{|t-s|+2} + \cdots + \Big(1-\frac{v}{n}\Big)^{t+s-2}\Big\} \to \frac12\int_0^1\!\int_0^1\Big\{\int_{|t-s|}^{t+s}e^{-v\tau}\,d\tau\Big\}\,dt\,ds = \Theta(v)$$

12 as n, the convergence being uniform on each compact interval v [0,K]. We note the bound Θ n (v) C/(1 + v) 2. (3.10) For v>1, this follows by writing Φ(a, n) = n k=1 (1 + a ak 1 ) 2 Cn/(1 a) 2, a =1 v/n. Then Φ(a, n) Cn 3 /v 2, implying (3.10). For v 1, (3.10) is obvious by Φ(a, n) n 3. To prove (3.8), consider first the case q>0. Then P [Φ(A, U) >x]=p 0 (x)+p 1 (x), p 0 (x) := P [Φ(A, U) >x,1 ɛ<a<1, U>n 0 ], p 1 (x) P [Φ(A, U) >x,a<1 ɛ]+p [Φ(A, U) >x,u n 0 ]. Here, n 0,ɛ 1 > 0 are large enough and p 0 (x) = l n n 1+α n>n ɛ da f 1 (a) I(Φ(a, n) >x), (1 a) 1 q l n c U =1(n ), f 1 (a) f 1 =1(a 1). By choosing n 0,ɛ appropriately, we may assume above l n = f 1 (a) 1. Then with c V (x) := p 0 (x) = n>n = n 0 n 1+α 1 dz [z] 1+α+q da I(Φ(a, n) >x) (1 a) 1 q dv ( 1 ( v 1 q I [z] 3 Φ 1 v ) [z], [z] > x ) [z] 3 1 ɛ ɛ[z] = c V (x)x (α+q)/3 = c V (x)x λ/2, 0 dy y 1+α+q ω dv 1(y; x) 0 v 1 q ω 2(v; x, y)i(y 3 Θ [x 1/3 y] (v) >ω 3(v; x, y)), ω 1 (y; x) := (x 1/3 y/[x 1/3 y]) 1+α+q I(x 1/3 y>n 0 ), ω 2 (v; x, y) := I(0 <v<ɛ[x 1/3 y]), ω 3 (v; x, y) := (x 1/3 y/[x 1/3 y]) 3, and ω 1 (y; x) 1,ω 2 (v; x, y) 1,ω 3 (v; x, y) 1 for any v, y > 0asx. Then c V (x) c V (x ) follows by the dominated convergence theorem. To justify the use of the latter, note ω 1,ω 2 C, ω 3 1 uniformly in all arguments. Then, using (3.10), c V (x) c V := C 0 dy y 1+α+q 12 0 dv v 1 q I(Cy3 /(1 + v) 2 > 1),

where the last integral converges by $1 + \alpha + q > 1$ and $2(\alpha+q)/3 > q$. Thus,
$$p_0(x) \sim c_Vx^{-\lambda/2}, \quad x \to \infty.$$
To finish the proof of (3.8) in the case $q > 0$, we need to estimate the term $p_1(x)$, namely, to show $p_1(x) = o(x^{-\lambda/2})$, $x \to \infty$. Note that $\Phi(a, n) \le Cn$ on $a \le 1-\epsilon$, and therefore
$$P[\Phi(A, U) > x,\ A \le 1-\epsilon] \le P[U > C^{-1}x] \le Cx^{-\alpha} = o(x^{-(\alpha+q)/3}),$$
as $\alpha > 1 > (\alpha+q)/3 = \lambda/2$. A similar bound on the set $U \le n_0$ follows from the fact that $\Phi(a, n)$ is bounded for $n \le n_0$ and any $a \in [0, 1]$.

If $q = 0$, then
$$P[\Phi(A, U) > x] = f_1P[\Phi(1, U) > x] + P[\Phi(A, U) > x,\ A < 1].$$
As $\Phi(1, n) = n(n+1)(2n+1)/6 \sim n^3/3$, we get
$$P[\Phi(1, U) > x] \sim P[U^3 > 3x] = P[U > (3x)^{1/3}] \sim (c_V/f_1)\,x^{-\alpha/3}, \quad x \to \infty,$$
with $c_V$ given in (3.9). It remains to show that
$$\limsup_{x\to\infty}x^{\alpha/3}P[\Phi(A, U) > x,\ A < 1] = 0. \tag{3.11}$$
For any $\delta > 0$ one can find $\delta' > 0$ such that $P[1-\delta' < A < 1] < \delta$. Then, using $\Phi(a, n) \le \Phi(1, n)$,
$$P[\Phi(A, U) > x,\ 1-\delta' < A < 1] \le \delta P[\Phi(1, U) > x] \le C\delta x^{-\alpha/3}$$
according to the argument above. Finally, for any fixed $\delta' > 0$, $\sup_{0\le a\le 1-\delta'}\Phi(a, n) \le Cn/\delta'^2 \le C'n$, and therefore
$$P[\Phi(A, U) > x,\ A \le 1-\delta'] \le P[CU > x] \le Cx^{-1}EU = o(x^{-\alpha/3}),$$
as $\alpha < 3$. As $\delta > 0$ is arbitrary, this proves (3.11) and the lemma.

If the conditional law (3.5) is Gaussian (i.e., the errors $\varepsilon_i$ are Gaussian), Lemma 3.1 easily implies the regular tail behavior of $Y_i$ with exponent λ and the corresponding statement about the domain of attraction. However, we do not assume conditional Gaussianity of our random coefficient autoregressive process. In the general case, this tail behavior will follow from Lemmas 3.2 and 3.3 below.
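Lemma 3.1 lends itself to a direct Monte Carlo check (ours, under the illustrative Beta/Pareto choices from the Introduction): sample $(A, U)$, compute $\Phi(A, U)$ through the closed form $\Phi(a, n) = \sum_{k=1}^n(1 + a + \cdots + a^{k-1})^2$ used above, and inspect whether $x^{\lambda/2}P[\Phi(A, U) > x]$ stabilizes; convergence to the asymptotic regime can be slow, so the printed values are only indicative.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, q = 1.5, 0.8
lam = 2 * (alpha + q) / 3

def phi(a, n):
    # Phi(a, n) = sum_{k=1}^n (1 + a + ... + a^{k-1})^2
    g = np.cumsum(a ** np.arange(n))
    return np.sum(g ** 2)

m = 200_000
samples = np.array([phi(rng.beta(1.0, q), int(np.ceil(rng.pareto(alpha) + 1)))
                    for _ in range(m)])
for x in [1e2, 1e3, 1e4]:
    est = x ** (lam / 2) * np.mean(samples > x)
    print(f"x={x:.0e}  x^(lam/2) * P[Phi > x] ~ {est:.3f}")
```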

Lemma 3.2. Let there be given a r.v. $\tilde Y$ and a σ-algebra $\mathcal{F}$ such that $E[\tilde Y \mid \mathcal{F}] = 0$ and $E[\tilde Y^2 \mid \mathcal{F}] =: \tilde W > 0$ a.s., where
$$P[\tilde W > u] \sim c_0u^{-\lambda/2} \quad (u \to \infty) \tag{3.12}$$
for some $c_0 > 0$, $1 < \lambda < 2$. Moreover, let the normalized r.v. $Z(\tilde W) := \tilde Y\tilde W^{-1/2}$ satisfy the following asymptotic normality condition: there exists a nonrandom function $\delta(u)$, $u > 0$, with $\lim_{u\to\infty}\delta(u) = 0$, such that
$$\sup_{x\in\mathbb{R}}\big|P[Z(\tilde W) \le x \mid \mathcal{F}] - G(x)\big| \le \delta(\tilde W), \quad \text{a.s.}, \tag{3.13}$$
where $G(x) = P[Z \le x]$ is the c.d.f. of the standard normal $Z \sim N(0,1)$. Then
$$P[\tilde Y > x] \sim c_1x^{-\lambda} \quad (x \to \infty), \qquad P[\tilde Y \le -x] \sim c_1x^{-\lambda} \quad (x \to \infty), \tag{3.14}$$
where $c_1 := (c_0/2)E|Z|^\lambda$.

Proof. By definition, $\tilde Y = \tilde W^{1/2}Z(\tilde W)$ and therefore $P[\tilde Y > x] = E\big\{P[Z(\tilde W) > xu^{-1/2} \mid \mathcal{F}]\big|_{u=\tilde W}\big\}$. Let $\tilde X := \tilde W^{1/2}Z$, where $Z \sim N(0,1)$ is independent of $\tilde W$. Then the lemma follows from
$$P[\tilde X > x] \sim c_1x^{-\lambda} \quad (x \to \infty), \qquad P[\tilde X \le -x] \sim c_1x^{-\lambda} \quad (x \to \infty), \tag{3.15}$$
and
$$P[\tilde Y > x] - P[\tilde X > x] = o(x^{-\lambda}) \quad (x \to \infty), \tag{3.16}$$
$$P[\tilde Y \le -x] - P[\tilde X \le -x] = o(x^{-\lambda}) \quad (x \to \infty). \tag{3.17}$$
Relation (3.15) is well-known; see e.g. Breiman (1965). Let us show (3.16)-(3.17). We have $P[\tilde X > x] = E\big\{P[Z > xu^{-1/2}]\big|_{u=\tilde W}\big\}$ and therefore
$$\big|P[\tilde Y > x] - P[\tilde X > x]\big| \le \sum_{i=1}^3d_i(x),$$
where
$$d_1(x) := E\Big\{\big|P[Z(\tilde W) > xu^{-1/2} \mid \mathcal{F}]\big|_{u=\tilde W} - P[Z > xu^{-1/2}]\big|_{u=\tilde W}\big|\,I(x\tilde W^{-1/2} \le K)\Big\},$$
$$d_2(x) := E\Big\{P[Z(\tilde W) > xu^{-1/2} \mid \mathcal{F}]\big|_{u=\tilde W}\,I(x\tilde W^{-1/2} > K)\Big\},$$
$$d_3(x) := E\Big\{P[Z > xu^{-1/2}]\big|_{u=\tilde W}\,I(x\tilde W^{-1/2} > K)\Big\}.$$

Consider the last term. As $P[Z > xu^{-1/2}] \le u/x^2$, we get
$$d_3(x) \le x^{-2}E\big\{\tilde W\,I(\tilde W \le (x/K)^2)\big\} = -x^{-2}(x/K)^2P[\tilde W > (x/K)^2] + x^{-2}\int_0^{(x/K)^2}P[\tilde W > w]\,dw \le (C/K^{2-\lambda})\,x^{-\lambda}.$$
A similar bound for $d_2(x)$ is immediate from $P[Z(\tilde W) > xu^{-1/2} \mid \mathcal{F}] \le (u/x^2)E[Z^2(\tilde W) \mid \mathcal{F}] = u/x^2$. Therefore, $\sup_{x>0}x^\lambda(d_2(x) + d_3(x))$ can be made arbitrarily small by choosing $K > 0$ large enough.

Let us estimate $d_1(x)$. As $\lim_{u\to\infty}\delta(u) = 0$, for any $K < \infty$, $\delta_0 > 0$ there exists $u_0 = u_0(K, \delta_0) > 0$ such that $\delta(u) < \delta_0/K^\lambda$ for all $u > u_0$. Then by (3.13),
$$\sup_{v\in\mathbb{R}}\big|P[Z(\tilde W) > v \mid \mathcal{F}] - P[Z > v]\big| \le \delta(\tilde W) \le \delta_0/K^\lambda, \quad \text{a.s. on } \{\tilde W > u_0\}.$$
Then for all $x > u_0^{1/2}K$ large enough,
$$d_1(x) \le (\delta_0/K^\lambda)\,P[\tilde W \ge (x/K)^2] \le C(\delta_0/K^\lambda)(x/K)^{-\lambda} \le C\delta_0x^{-\lambda}.$$
Hence $\limsup_{x\to\infty}x^\lambda d_1(x) \le C\delta_0$, thereby proving (3.16). Relation (3.17) follows similarly.

The following Lemma 3.3 is needed to verify condition (3.13) of Lemma 3.2 for $\tilde Y = Y = T(A, U)$, $\tilde W = W = \Phi(A, U)$ and $\mathcal{F} = \sigma\{A_j, S_j, j \in \mathbb{Z}\}$, with $T(a, n)$, $\Phi(a, n)$ defined in (3.6), (3.7) respectively.

Lemma 3.3.
$$\lim_{n\to\infty}\sup_{a\in[0,1]}\sup_{x\in\mathbb{R}}\Big|P\Big[\frac{T(a, n)}{\Phi^{1/2}(a, n)} \le x\Big] - G(x)\Big| = 0. \tag{3.18}$$

Proof. We use the elegant bound due to Berry and Esséen (see e.g. Petrov (1975)). Observe that $T(a, n)$ is a sum of independent r.v.'s:
$$T(a, n) = \varepsilon_n + \varepsilon_{n-1}(1+a) + \cdots + \varepsilon_1(1 + a + \cdots + a^{n-1}).$$
Then the Berry-Esséen theorem yields
$$\sup_{x\in\mathbb{R}}\Big|P\Big[\frac{T(a, n)}{\Phi^{1/2}(a, n)} \le x\Big] - G(x)\Big| \le KE|\varepsilon|^3\,\frac{\sum_{i=1}^n(1 + a + \cdots + a^{i-1})^3}{\Phi^{3/2}(a, n)},$$
where $K$ is an absolute constant. The square of the r.h.s. clearly does not exceed (up to the constant $(KE|\varepsilon|^3)^2$) the quantity
$$\Gamma_n(a) := \frac{\big(1^3 + (1+a)^3 + \cdots + (1 + a + \cdots + a^{n-1})^3\big)^2}{\big(1^2 + (1+a)^2 + \cdots + (1 + a + \cdots + a^{n-1})^2\big)^3}.$$

Hence the lemma follows from
$$\sup_{a\in[0,1]}\Gamma_n(a) \to 0.$$
According to Lemma 4.1 below, the function $\Gamma_n$ is strictly increasing in $a$ for any $n \ge 1$. Therefore
$$\sup_{a\in[0,1]}\Gamma_n(a) = \Gamma_n(1) = \frac{\big(\sum_{k=1}^nk^3\big)^2}{\big(\sum_{k=1}^nk^2\big)^3} = \frac{27n(n+1)}{2(2n+1)^3} \to 0.$$
The lemma is proved.

From Lemmas 3.2 and 3.3 and the trivial bound $n \le \Phi(a, n) \le n^3$ ($a \in [0,1]$) one obtains

Corollary 3.4. Let $Y$ be a copy of $Y_i$ (3.3). Then
$$P[Y > x] \sim c_Yx^{-\lambda} \quad (x \to \infty), \qquad P[Y \le -x] \sim c_Yx^{-\lambda} \quad (x \to \infty),$$
where $c_Y := (c_V/2)E|Z|^\lambda$, with $Z \sim N(0,1)$ and $c_V$ given in (3.9).

Proof of Theorem 2 (continued). From Corollary 3.4 and the classical central limit theorem, one has the weak convergence
$$n^{-1/\lambda}\sum_{i=1}^{[nt]/\mu}Y_i \Rightarrow Z_\lambda(t), \tag{3.19}$$
where $Z_\lambda(t)$ is a symmetric Lévy process, with the characteristic function
$$Ee^{iuZ_\lambda(1)} = e^{-c|u|^\lambda}, \quad u \in \mathbb{R}, \tag{3.20}$$
where $c := \Gamma(2-\lambda)\,|\cos(\pi\lambda/2)|\,c_VE|Z|^\lambda/(\lambda-1)$, $Z \sim N(0,1)$, and $c_V$ is given in (3.9). Clearly, relation (3.4), and the theorem, follow from (3.19) and
$$Q_n(t) := \sum_{i=1}^{N_{[nt]}}Y_i - \sum_{i=1}^{[nt]/\mu}Y_i = o_P(n^{1/\lambda}). \tag{3.21}$$
Let us prove (3.21). Assume $\mu = t = 1$ for simplicity. Put $Q_n := Q_n(1)$, $\mathcal{F} := \sigma\{S_j, A_j, j \in \mathbb{Z}\}$. We have $P[|Q_n| > \delta n^{1/\lambda}] = EP[|Q_n| > \delta n^{1/\lambda} \mid \mathcal{F}]$, and (3.21) follows from
$$E[Q_n^2 \mid \mathcal{F}] = o_P(n^{2/\lambda}). \tag{3.22}$$
As the $Y_i$, $i \ge 1$, are conditionally uncorrelated given $\mathcal{F}$, and $E[Y_i^2 \mid \mathcal{F}] = W_i = \Phi(A_i, U_i)$, (3.22) follows from
$$R_n := \sum_{i=1}^{N_n}W_i - \sum_{i=1}^nW_i = o_P(n^{2/\lambda}). \tag{3.23}$$

By the renewal theorem,
$$N_n/n \to \mu^{-1}, \quad n \to \infty, \quad \text{a.s.} \tag{3.24}$$
For any $\delta, \delta_1 > 0$, one has
$$P[|R_n| > \delta n^{2/\lambda}] = P[|R_n| > \delta n^{2/\lambda},\ |N_n - n| > \delta_1n] + P[|R_n| > \delta n^{2/\lambda},\ n(1-\delta_1) \le N_n \le (1+\delta_1)n] =: \beta' + \beta''.$$
As $\beta'' \le P[\sum_{i=1}^{2n\delta_1}W_i > \delta n^{2/\lambda}]$, by Lemma 3.1 and the central limit theorem, for any $\delta, \delta_2 > 0$ one can find $\delta_1 > 0$ sufficiently small so that $\beta'' < \delta_2$ for all $n$ large enough. Next, by (3.24), for any $\delta_1, \delta_2 > 0$ one can find $n_0 > 0$ such that
$$\beta' \le P[|N_n - n| > \delta_1n] < \delta_2$$
for any $n > n_0$ and all $\delta$. This proves (3.23), and Theorem 2 as well.

4 Appendix

Consider the function
$$\Gamma_n(a_1, \ldots, a_n) := \frac{\big(1^3 + (1+a_1)^3 + \cdots + (1 + a_1 + a_1a_2 + \cdots + a_1a_2\cdots a_n)^3\big)^2}{\big(1^2 + (1+a_1)^2 + \cdots + (1 + a_1 + a_1a_2 + \cdots + a_1a_2\cdots a_n)^2\big)^3}$$
in the real variables $a_i \in [0,1]$, $i = 1, \ldots, n$.

Lemma 4.1. The function $\Gamma_n(a_1, \ldots, a_n)$ is strictly increasing in each $a_i \in [0,1]$, $i = 1, \ldots, n$.

Proof. The lemma follows from
$$\partial\Gamma_n(a_1, \ldots, a_n)/\partial a_j > 0 \tag{4.1}$$
for each $j = 1, \ldots, n$. Fix $j$ and let $x := a_j$. Then
$$1^3 + (1+a_1)^3 + \cdots + (1 + a_1 + a_1a_2 + \cdots + a_1a_2\cdots a_n)^3 = A_0^3\big\{B + 1 + (1 + A_1x)^3 + (1 + A_2x)^3 + \cdots + (1 + A_{n-j+1}x)^3\big\},$$
where
$$B := \big(1 + (1+a_1)^3 + \cdots + (1 + a_1 + \cdots + a_1\cdots a_{j-2})^3\big)/A_0^3,$$
$$A_0 := 1 + a_1 + a_1a_2 + \cdots + a_1\cdots a_{j-1},$$
$$A_k := a_1\cdots a_{j-1}\big(1 + a_{j+1} + a_{j+1}a_{j+2} + \cdots + a_{j+1}\cdots a_{j+k-1}\big)/A_0, \quad k = 1, \ldots, n-j+1.$$

Similarly,
$$1^2 + (1+a_1)^2 + \cdots + (1 + a_1 + a_1a_2 + \cdots + a_1a_2\cdots a_n)^2 = A_0^2\big\{C + 1 + (1 + A_1x)^2 + (1 + A_2x)^2 + \cdots + (1 + A_{n-j+1}x)^2\big\},$$
where
$$C := \big(1 + (1+a_1)^2 + \cdots + (1 + a_1 + \cdots + a_1\cdots a_{j-2})^2\big)/A_0^2.$$
Put also
$$\Sigma_i := \sum_{k=1}^{n-j+1}A_k^i, \quad i = 1, 2, 3.$$
Then
$$\Gamma_n(a_1, \ldots, a_n) = \frac{(B + 2 + n - j + 3\Sigma_1x + 3\Sigma_2x^2 + \Sigma_3x^3)^2}{(C + 2 + n - j + 2\Sigma_1x + \Sigma_2x^2)^3}.$$
We obtain
$$\partial\Gamma_n/\partial x \propto (\Sigma_1 + 2\Sigma_2x + \Sigma_3x^2)(C + 2 + n - j + 2\Sigma_1x + \Sigma_2x^2) - (\Sigma_1 + \Sigma_2x)(B + 2 + n - j + 3\Sigma_1x + 3\Sigma_2x^2 + \Sigma_3x^3),$$
where the proportionality factor is strictly positive. Finally,
$$\partial\Gamma_n/\partial x \propto [C - B]\Sigma_1 + \big[\Sigma_2(2C - B + 1) + (n-j+1)\Sigma_2 - \Sigma_1^2\big]x + \big[\Sigma_3(C + 1) + (n-j+1)\Sigma_3 - \Sigma_1\Sigma_2\big]x^2 + \big[\Sigma_1\Sigma_3 - \Sigma_2^2\big]x^3.$$
Now (4.1) follows from
$$B \le C, \tag{4.2}$$
$$\Sigma_1^2 \le (n-j+1)\Sigma_2, \tag{4.3}$$
$$\Sigma_1\Sigma_2 \le (n-j+1)\Sigma_3, \tag{4.4}$$
$$\Sigma_2^2 \le \Sigma_1\Sigma_3. \tag{4.5}$$
Here, inequality (4.2), i.e.
$$1 + (1+a_1)^3 + \cdots + (1 + a_1 + \cdots + a_1\cdots a_{j-2})^3 \le \big(1 + (1+a_1)^2 + \cdots + (1 + a_1 + \cdots + a_1\cdots a_{j-2})^2\big)\big(1 + a_1 + a_1a_2 + \cdots + a_1\cdots a_{j-1}\big),$$
is obvious for any $a_i \ge 0$, $i = 1, \ldots, j-1$. The remaining inequalities (4.3)-(4.5) easily follow from the Cauchy-Schwarz and Hölder inequalities. The lemma is proved.
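A quick numerical sanity check (ours) of Lemma 4.1 in the single-variable case $a_1 = \cdots = a_n = a$ used in the proof of Lemma 3.3: $\Gamma_n(a)$ should be increasing on $[0,1]$, with supremum $\Gamma_n(1) = 27n(n+1)/(2(2n+1)^3) \to 0$.

```python
import numpy as np

def Gamma_n(a, n):
    # Gamma_n(a) from the proof of Lemma 3.3, with partial sums 1, 1+a, ...
    g = np.cumsum(a ** np.arange(n))
    return np.sum(g ** 3) ** 2 / np.sum(g ** 2) ** 3

for n in [5, 50, 500]:
    vals = [Gamma_n(a, n) for a in np.linspace(0.0, 1.0, 101)]
    assert all(v <= w + 1e-12 for v, w in zip(vals, vals[1:]))   # monotone
    print(n, vals[-1], "=", 27 * n * (n + 1) / (2 * (2 * n + 1) ** 3))
```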

References

Brandt, A. (1986) The stochastic equation $Y_{n+1} = A_nY_n + B_n$ with stationary coefficients. Adv. Appl. Prob. 18.

Breiman, L. (1965) On some limit theorems similar to the arc-sin law. Theory Probab. Appl. 10.

Davidson, J. and Sibbertsen, Ph. (2002) Generating schemes for long memory processes: regimes, aggregation and linearity. Preprint.

Diebold, F.X. and Inoue, A. (2001) Long memory and regime switching. J. Econometrics 105.

Giraitis, L., Robinson, P.M. and Surgailis, D. (2000) A model for long memory conditional heteroscedasticity. Ann. Appl. Probab. 10.

Gourieroux, C. and Jasiak, J. (2001) Memory and infrequent breaks. Economics Letters 70.

Granger, C.W.J. (1980) Long memory relationships and the aggregation of dynamic models. J. Econometrics 14.

Granger, C.W.J. and Hyung, N. (1999) Occasional structural breaks and long memory. Discussion paper, Department of Economics, University of California, San Diego.

Leipus, R. and Viano, M.-C. (2001) Long memory and stochastic trend. Pub. IRMA, Lille, Vol. 56, No. VII.

Lewis, P.A.W. and Lawrence, A.J. (1981) A new autoregressive time series model in exponential variables (NEAR(1)). Adv. Appl. Probab. 13.

Mikosch, Th., Resnick, S., Rootzén, H. and Stegeman, A. (2002) Is network traffic approximated by stable Lévy motion or fractional Brownian motion? Ann. Appl. Probab. 12.

Nicholls, D.F. and Quinn, B.G. (1982) Random Coefficient Autoregressive Models: An Introduction. Lecture Notes in Statistics, vol. 11. Springer-Verlag, New York.

Parke, W.R. (1999) What is fractional integration? Rev. Econ. Statist. 81.

Petrov, V.V. (1975) Sums of Independent Random Variables. Springer-Verlag, New York.

Pipiras, V., Taqqu, M.S. and Levy, J.B. (2002) Slow, fast and arbitrary growth conditions for the renewal reward processes when the renewals and the rewards are heavy-tailed. Preprint.

Pourahmadi, M. (1988) Stationarity of the solution of $X_t = A_tX_{t-1} + \varepsilon_t$ and analysis of non-Gaussian dependent variables. J. Time Series Anal. 9.

Robinson, P.M. (1978) Statistical inference for a random coefficient autoregressive model. Scand. J. Statist. 5.

Surgailis, D. (2002a) Stable limits of empirical processes of moving averages with infinite variance. Stoch. Process. Appl. 100.

Surgailis, D. (2002b) Stable limits of bounded functions of long memory moving averages with finite variance. Preprint.

Taqqu, M.S. and Levy, J.B. (1986) Using renewal processes to generate long-range dependence and high variability. In: Eberlein, E. and Taqqu, M.S. (eds.), Dependence in Probability and Statistics. Birkhäuser, Boston.

Tjøstheim, D. (1986) Some doubly stochastic time series models. J. Time Series Anal. 7.

Vervaat, W. (1979) On a stochastic difference equation and a representation of nonnegative infinitely divisible random variables. Adv. Appl. Probab. 11.


More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM J. Appl. Prob. 49, 876 882 (2012 Printed in England Applied Probability Trust 2012 A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM BRIAN FRALIX and COLIN GALLAGHER, Clemson University Abstract

More information

Lecture No 1 Introduction to Diffusion equations The heat equat

Lecture No 1 Introduction to Diffusion equations The heat equat Lecture No 1 Introduction to Diffusion equations The heat equation Columbia University IAS summer program June, 2009 Outline of the lectures We will discuss some basic models of diffusion equations and

More information

Lecture 3 - Expectation, inequalities and laws of large numbers

Lecture 3 - Expectation, inequalities and laws of large numbers Lecture 3 - Expectation, inequalities and laws of large numbers Jan Bouda FI MU April 19, 2009 Jan Bouda (FI MU) Lecture 3 - Expectation, inequalities and laws of large numbersapril 19, 2009 1 / 67 Part

More information

On Optimal Stopping Problems with Power Function of Lévy Processes

On Optimal Stopping Problems with Power Function of Lévy Processes On Optimal Stopping Problems with Power Function of Lévy Processes Budhi Arta Surya Department of Mathematics University of Utrecht 31 August 2006 This talk is based on the joint paper with A.E. Kyprianou:

More information

Weak max-sum equivalence for dependent heavy-tailed random variables

Weak max-sum equivalence for dependent heavy-tailed random variables DOI 10.1007/s10986-016-9303-6 Lithuanian Mathematical Journal, Vol. 56, No. 1, January, 2016, pp. 49 59 Wea max-sum equivalence for dependent heavy-tailed random variables Lina Dindienė a and Remigijus

More information

for all f satisfying E[ f(x) ] <.

for all f satisfying E[ f(x) ] <. . Let (Ω, F, P ) be a probability space and D be a sub-σ-algebra of F. An (H, H)-valued random variable X is independent of D if and only if P ({X Γ} D) = P {X Γ}P (D) for all Γ H and D D. Prove that if

More information

Extremogram and Ex-Periodogram for heavy-tailed time series

Extremogram and Ex-Periodogram for heavy-tailed time series Extremogram and Ex-Periodogram for heavy-tailed time series 1 Thomas Mikosch University of Copenhagen Joint work with Richard A. Davis (Columbia) and Yuwei Zhao (Ulm) 1 Jussieu, April 9, 2014 1 2 Extremal

More information

RESEARCH REPORT. Estimation of sample spacing in stochastic processes. Anders Rønn-Nielsen, Jon Sporring and Eva B.

RESEARCH REPORT. Estimation of sample spacing in stochastic processes.   Anders Rønn-Nielsen, Jon Sporring and Eva B. CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING www.csgb.dk RESEARCH REPORT 6 Anders Rønn-Nielsen, Jon Sporring and Eva B. Vedel Jensen Estimation of sample spacing in stochastic processes No. 7,

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

STOCHASTIC GEOMETRY BIOIMAGING

STOCHASTIC GEOMETRY BIOIMAGING CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING 2018 www.csgb.dk RESEARCH REPORT Anders Rønn-Nielsen and Eva B. Vedel Jensen Central limit theorem for mean and variogram estimators in Lévy based

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information