LARGE DEVIATIONS FOR SOLUTIONS TO STOCHASTIC RECURRENCE EQUATIONS UNDER KESTEN'S CONDITION

D. BURACZEWSKI, E. DAMEK, T. MIKOSCH AND J. ZIENKIEWICZ


Abstract. In this paper we prove large deviations results for partial sums constructed from the solution to a stochastic recurrence equation. We assume Kesten's condition [17], under which the solution of the stochastic recurrence equation has a marginal distribution with power law tails, while the noise sequence of the equations can have light tails. The results of the paper are analogs of those obtained by A.V. and S.V. Nagaev [21, 22] in the case of partial sums of iid random variables. In the latter case, the large deviation probabilities of the partial sums are essentially determined by the largest step size of the partial sum. For the solution to a stochastic recurrence equation, the magnitude of the large deviation probabilities is again given by the tail of the maximum summand, but the exact asymptotic tail behavior is also influenced by clusters of extreme values, due to dependencies in the sequence. We apply the large deviation results to study the asymptotic behavior of the ruin probabilities in the model.

1. Introduction

Through the last 40 years, the stochastic recurrence equation

(1.1) Y_n = A_n Y_{n-1} + B_n, n ∈ Z,

and its stationary solution have attracted much attention. Here (A_i, B_i), i ∈ Z, is an iid sequence, A_i > 0 a.s. and B_i assumes real values. (In what follows, we write A, B, Y, ..., for generic elements of the strictly stationary sequences (A_i), (B_i), (Y_i), ..., and we also write c for any positive constant whose value is not of interest.) It is well known that if E log A < 0 and E log^+ |B| < ∞, there exists a unique strictly stationary ergodic solution (Y_i) to the stochastic recurrence equation (1.1) with representation

Y_n = Σ_{i=-∞}^{n} A_{i+1} ⋯ A_n B_i, n ∈ Z,

where, as usual, we interpret the summand for i = n as B_n.
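The recursion (1.1) is easy to simulate, which gives a concrete feel for the stationary solution; a minimal sketch (the lognormal A and exponential B are illustrative choices satisfying E log A < 0 and E log^+ |B| < ∞, and are not distributions used in the paper):

```python
import math
import random

def simulate_sre(n, burn_in=1000, seed=0):
    """Simulate Y_t = A_t * Y_{t-1} + B_t with iid (A_t, B_t).

    Illustrative choice: A_t lognormal with E log A = -0.5 < 0 and
    B_t standard exponential, so that the conditions E log A < 0 and
    E log^+ |B| < infinity hold and a unique stationary solution exists.
    """
    rng = random.Random(seed)
    y = 0.0
    path = []
    for t in range(n + burn_in):
        a = math.exp(rng.gauss(-0.5, 1.0))  # A_t > 0 a.s.
        b = rng.expovariate(1.0)            # B_t >= 0 here, for simplicity
        y = a * y + b
        if t >= burn_in:                    # discard the transient
            path.append(y)
    return path

path = simulate_sre(10_000)
```

For this parameter choice the simulated marginal is heavy tailed: occasional long runs with A_t > 1 inflate Y_t far above its typical level, which is exactly the mechanism studied below.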
One of the most interesting results for the stationary solution (Y_i) to the stochastic recurrence equation (1.1) was discovered by Kesten [17]. He proved under general conditions that the marginal distributions of (Y_i) have power law tails. For later use, we formulate a version of this result due to Goldie [10].

2000 Mathematics Subject Classification. Primary 60F10; secondary 91B30, 60G70.
Key words and phrases. Stochastic recurrence equation, large deviations, ruin probability.
D. Buraczewski and E. Damek were partially supported by the NCN grant N N. Thomas Mikosch's research is partly supported by the Danish Natural Science Research Council (FNU) Grants "Point process modelling and statistical inference" and "Heavy tail phenomena: Modeling and estimation". J. Zienkiewicz was supported by the MNiSW grant N N. D. Buraczewski was also supported by the European Commission via IEF Project (contract number PIEF-GA SCHREC).

Theorem 1.1 (Kesten [17], Goldie [10]). Assume that the following conditions hold:

(1.2) There exists α > 0 such that E A^α = 1; ρ = E(A^α log A) and E|B|^α are both finite; the law of log A is non-arithmetic; and for every x, P(Ax + B = x) < 1.

Then Y is regularly varying with index α > 0. In particular, there exist constants c_+, c_- ≥ 0 such that c_+ + c_- > 0 and

(1.3) P(Y > x) ~ c_+ x^{-α} and P(Y ≤ -x) ~ c_- x^{-α} as x → ∞.

Moreover, if B ≡ 1 a.s., the constant c_+ takes on the form c_+ = E[(1 + Y)^α - Y^α]/(αρ).

Goldie [10] also showed that similar results remain valid for the stationary solution to stochastic recurrence equations of the type Y_n = f(Y_{n-1}, A_n, B_n) for suitable functions f satisfying some contraction condition.

The power law tails (1.3) stimulated research on the extremes of the sequence (Y_i). Indeed, if (Y_i) were iid with tail (1.3) and c_+ > 0, then the maximum sequence M_n = max(Y_1, ..., Y_n) would satisfy the limit relation

(1.4) lim_{n→∞} P((c_+ n)^{-1/α} M_n ≤ x) = e^{-x^{-α}} = Φ_α(x), x > 0,

where Φ_α denotes the Fréchet distribution, i.e. one of the classical extreme value distributions; see Gnedenko [11]; cf. Embrechts et al. [6], Chapter 3. However, the stationary solution (Y_i) to (1.1) is not iid and therefore one needs to modify (1.4) as follows: the limit has to be replaced by Φ_α^θ for some constant θ ∈ (0, 1), the so-called extremal index of the sequence (Y_i); see de Haan et al. [12]; cf. [6], Section 8.4.

The main objective of this paper is to derive another result which is a consequence of the power law tails of the marginal distribution of the sequence (Y_i): we will prove large deviation results for the partial sum sequence

S_n = Y_1 + ⋯ + Y_n, n ≥ 1, S_0 = 0.

This means we will derive exact asymptotic results for the left and right tails of the partial sums S_n. Since we want to compare these results with those for an iid sequence, we recall the corresponding classical results due to A.V. and S.V. Nagaev [21, 22] and Cline and Hsing [3].

Theorem 1.2.
Assume that (Y_i) is an iid sequence with a regularly varying distribution, i.e. there exist an α > 0, constants p, q ≥ 0 with p + q = 1 and a slowly varying function L such that

(1.5) P(Y > x) ~ p L(x) x^{-α} and P(Y ≤ -x) ~ q L(x) x^{-α} as x → ∞.

Then the following relations hold for α > 1 and suitable sequences b_n → ∞:

(1.6) lim_{n→∞} sup_{x ≥ b_n} | P(S_n - E S_n > x)/(n P(|Y| > x)) - p | = 0

and

(1.7) lim_{n→∞} sup_{x ≥ b_n} | P(S_n - E S_n ≤ -x)/(n P(|Y| > x)) - q | = 0.

If α > 2 one can choose b_n = √(a n log n), where a > α - 2, and for α ∈ (1, 2], b_n = n^{δ+1/α} for any δ > 0.

For α ∈ (0, 1], (1.6) and (1.7) remain valid if the centering E S_n is replaced by 0 and b_n = n^{δ+1/α} for any δ > 0. For α ∈ (0, 2] one can choose a smaller bound b_n if one knows the slowly varying function L appearing in (1.5).

A functional version of Theorem 1.2 with multivariate regularly varying summands was proved in Hult et al. [13], and the results were used to prove asymptotic results about multivariate ruin probabilities. Large deviation results for iid heavy-tailed summands are also known when the distribution of the summands is subexponential, including the case of regularly varying tails; see the recent paper by Denisov et al. [5] and the references therein. In this case, the regions where the large deviations hold very much depend on the decay rate of the tails of the summands. For semi-exponential tails (such as for the log-normal and the heavy-tailed Weibull distributions) the large deviation regions (b_n, ∞) are much smaller than those for summands with regularly varying tails. In particular, x = n is not necessarily contained in (b_n, ∞).

The aim of this paper is to study large deviation probabilities for a particular dependent sequence (Y_n) as described in Kesten's Theorem 1.1. For dependent sequences (Y_n) much less is known about the large deviation probabilities for the partial sum process (S_n). Gantert [8] proved large deviation results of logarithmic type for mixing subexponential random variables. Davis and Hsing [4] and Jakubowski [14, 15] proved large deviation results of the following type: there exist sequences s_n → ∞ such that

P(S_n > a_n s_n)/(n P(Y > a_n s_n)) → c_α

for suitable positive constants c_α, under the assumptions that Y is regularly varying with index α ∈ (0, 2), n P(|Y| > a_n) → 1 and (Y_n) satisfies some mixing conditions. Both Davis and Hsing [4] and Jakubowski [14, 15] could not specify the rate at which the sequence (s_n) grows to infinity, and an extension to α > 2 was not possible.
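Returning to the iid setting of Theorem 1.2, the one-large-jump mechanism it encodes, namely that a large value of S_n is essentially due to a single large summand, can be probed by a quick Monte Carlo experiment. A hedged sketch with iid Pareto summands (the values of n, x, α and the trial count are arbitrary illustration; at this moderate x the ratio is only expected to be of order 1, not exactly 1):

```python
import random

def pareto_tail_ratio(n=5, x=20.0, alpha=3.0, trials=100_000, seed=1):
    """Compare P(S_n - E S_n > x) with the one-big-jump prediction
    n * P(Y > x) for iid Pareto summands, P(Y > t) = t^(-alpha), t >= 1."""
    rng = random.Random(seed)
    mean_y = alpha / (alpha - 1.0)  # E Y for this Pareto distribution
    hits = 0
    for _ in range(trials):
        # inverse-transform sampling: Y = U^{-1/alpha}, U uniform on (0, 1]
        s = sum((1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n))
        if s - n * mean_y > x:
            hits += 1
    p_sum = hits / trials                # Monte Carlo estimate of the tail
    p_pred = n * x ** (-alpha)          # one-big-jump prediction
    return p_sum, p_pred

p_sum, p_pred = pareto_tail_ratio()
```

Deep inside the x-regions of Theorem 1.2 the two quantities become asymptotically equal; the experiment above only checks that they agree in order of magnitude.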
The facts mentioned above limit the applicability of these results, for example for deriving the asymptotics of ruin probabilities for the random walk (S_n). Large deviations results for particular stationary sequences (Y_n) with regularly varying finite-dimensional distributions were proved in Mikosch and Samorodnitsky [19] in the case of linear processes with iid regularly varying noise, and in Konstantinides and Mikosch [18] for solutions (Y_n) to the stochastic recurrence equation (1.1), where B is regularly varying with index α > 1 and E A^α < 1. This means that Kesten's condition (1.2) is not satisfied in this case and the regular variation of (Y_n) is due to the regular variation of B. For these processes, large deviation results and ruin bounds are easier to derive by applying the heavy-tail large deviation heuristics: a large value of S_n happens in the most likely way, namely it is due to one very large value in the underlying regularly varying noise sequence, and the particular dependence structure of the sequence (Y_n) determines the clustering behavior of the large values of S_n. This intuition fails when one deals with the partial sums S_n under the conditions of Kesten's Theorem 1.1: here a large value of S_n is not due to a single large value of the B_n's or A_n's but to large values of the products A_1 ⋯ A_n. The paper is organized as follows. In Section 2 we prove an analog of Theorem 1.2 for the partial sum sequence (S_n) constructed from the solution to the stochastic recurrence equation (1.1) under the conditions of Kesten's Theorem 1.1. The proof of this result is rather technical: it is given in Section 3, where we split the proof into a series of auxiliary results. There we treat the different cases α ≤ 1, α ∈ (1, 2] and α > 2 by different tools and methods. In particular, we will use exponential tail inequalities which are suited for the three distinct situations.
In contrast to the iid situation described in Theorem 1.2, we will show that the x-region where the large deviations hold cannot be chosen as an infinite interval (b_n, ∞) for a suitable lower bound b_n → ∞; one also needs upper bounds c_n ≥ b_n. In Section 4 we apply the large deviation results to get precise asymptotic bounds for the ruin probability related to the random walk (S_n). This ruin bound is an analog of

the celebrated result by Embrechts and Veraverbeke [7] in the case of a random walk with iid step sizes.

2. Main result

The following is the main result of this paper. It is an analog of the well known large deviation result of Theorem 1.2.

Theorem 2.1. Assume that the conditions of Theorem 1.1 are satisfied and that additionally there exists ε > 0 such that E A^{α+ε} and E|B|^{α+ε} are finite. Then the following relations hold:

(1) For α ∈ (0, 2] and M > 2,

(2.1) sup_n sup_{x ≥ n^{1/α}(log n)^M} P(|S_n - d_n| > x)/(n P(|Y| > x)) < ∞.

If additionally e^{s_n} ≥ n^{1/α}(log n)^M and lim_{n→∞} s_n/n = 0, then

(2.2) lim_{n→∞} sup_{n^{1/α}(log n)^M ≤ x ≤ e^{s_n}} | P(S_n - d_n > x)/(n P(|Y| > x)) - c_∞ c_+/(c_+ + c_-) | = 0,

where d_n = 0 or d_n = E S_n according as α ∈ (0, 1] or α ∈ (1, 2], and c_∞ is the constant appearing in Proposition 3.6 below.

(2) For α > 2 and any c_n → ∞,

(2.3) sup_n sup_{x ≥ c_n n^{0.5} log n} P(|S_n - E S_n| > x)/(n P(|Y| > x)) < ∞.

If additionally c_n n^{0.5} log n ≤ e^{s_n} and lim_{n→∞} s_n/n = 0, then

(2.4) lim_{n→∞} sup_{c_n n^{0.5} log n ≤ x ≤ e^{s_n}} | P(S_n - E S_n > x)/(n P(|Y| > x)) - c_∞ c_+/(c_+ + c_-) | = 0.

Clearly, if we exchange the variables B_n by -B_n in the above results we obtain the corresponding asymptotics for the left tail of S_n. For example, for α > 1 the following relation holds uniformly for the x-regions indicated above:

lim_{n→∞} P(S_n - n E Y ≤ -x)/(n P(|Y| > x)) = c_∞ c_-/(c_+ + c_-).

Remark 2.2. The deviations of Theorem 2.1 from the iid case (see Theorem 1.2) are two-fold. First, the extremal clustering in the sequence (Y_n) manifests itself in the presence of the additional constants c_∞ and c_±. Second, the precise large deviation bounds (2.2) and (2.4) are proved for x-regions bounded from above by a sequence e^{s_n} for some s_n with s_n/n → 0. Mikosch and Wintenberger [20] extended Theorem 2.1 to more general classes of stationary sequences (Y_t). In particular, they proved similar results for stationary Markov chains with regularly varying finite-dimensional distributions, satisfying a drift condition.
The solution (Y_t) to (1.1) is a special case of this setting if the distributions of A, B satisfy some additional conditions. Mikosch and Wintenberger [20] use a regeneration argument to explain that the large deviation results do not hold uniformly in the unbounded x-regions (b_n, ∞) for suitable sequences (b_n), b_n → ∞.

3. Proof of the main result

3.1. Basic decompositions. In what follows, it will be convenient to use the notation

Π_{ij} = A_i ⋯ A_j for i ≤ j, Π_{ij} = 1 otherwise, and Π_j = Π_{1j},

and

Ỹ_i = Π_{2i} B_1 + Π_{3i} B_2 + ⋯ + Π_{ii} B_{i-1} + B_i, i ≥ 1.

Since Y_i = Π_i Y_0 + Ỹ_i, the following decomposition is straightforward:

(3.1) S_n = Y_0 Σ_{i=1}^n Π_i + Σ_{i=1}^n Ỹ_i =: Y_0 η_n + S̃_n,

where

(3.2) S̃_n = Ỹ_1 + ⋯ + Ỹ_n and η_n = Π_1 + ⋯ + Π_n, n ≥ 1.

In view of (3.1) and Lemma 3.1 below it suffices to bound the ratios

P(|S̃_n - d_n| > x)/(n P(|Y| > x))

uniformly for the considered x-regions, where d_n = E S̃_n for α > 1 and d_n = 0 for α ≤ 1. The proof of the following bound is given at the end of this subsection.

Lemma 3.1. Let (s_n) be a sequence such that s_n/n → 0. Then for any sequence (b_n) with b_n → ∞ the following relations hold:

lim_{n→∞} sup_{b_n ≤ x ≤ e^{s_n}} P(|Y_0| η_n > x)/(n P(|Y| > x)) = 0 and lim sup_{n→∞} sup_{b_n ≤ x} P(|Y_0| η_n > x)/(n P(|Y| > x)) < ∞.

Before we further decompose S̃_n we introduce some notation to be used throughout the proof: for any x in the considered large deviation regions,

(3.3) m = [(log x)^{0.5+σ}] for some positive number σ < 1/4, where [·] denotes the integer part;
n_0 = [ρ^{-1} log x], where ρ = E(A^α log A);
n_1 = n_0 - m and n_2 = n_0 + m.

For α > 1, let D be the smallest integer such that -D log E A > α - 1. Notice that the latter inequality makes sense since E A < 1, due to (1.2) and the convexity of the function ψ(h) = E A^h, h > 0. For α ≤ 1, fix some β < α and let D be the smallest integer such that -D log E A^β > α - β where, by the same remark as above, E A^β < 1. Let n_3 be the smallest integer satisfying D log x ≤ n_3, x > 1. Notice that since the function Ψ(h) = log ψ(h) is convex, putting β = 1 if α > 1, by the choice of D we have

1/D < (Ψ(α) - Ψ(β))/(α - β) < Ψ′(α) = ρ,

therefore n_2 < n_3 if x is sufficiently large.

For fixed n, we change the indices, i ↦ j = n - i + 1, and, abusing notation and suppressing the dependence on n, we reuse the notation

Ỹ_j = B_j + Π_{jj} B_{j+1} + ⋯ + Π_{j,n-1} B_n.

Writing n_4 = min(j + n_3, n), we further decompose Ỹ_j:

(3.4) Ỹ_j = Ũ_j + W_j = (B_j + Π_{jj} B_{j+1} + ⋯ + Π_{j,n_4-1} B_{n_4}) + W_j.

Clearly, W_j vanishes if j ≥ n - n_3 and therefore the following lemma is nontrivial only for n > n_3.
The proof is given at the end of this subsection.
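The identity Y_n = Π_{1n} Y_0 + Ỹ_n underlying (3.1) is a finite algebraic statement, so it can be verified numerically for any realization of the coefficients; a small self-check (the distributions, horizon and starting value below are arbitrary illustration):

```python
import math
import random

def decomposition_check(n=50, y0=1.0, seed=2):
    """Check Y_n = Pi_{1,n} * Y_0 + Ytilde_n, where Pi_{1,n} = A_1...A_n
    and Ytilde_n = sum_{i=1}^n (A_{i+1}...A_n) * B_i, against the
    recursion Y_t = A_t * Y_{t-1} + B_t run over t = 1, ..., n."""
    rng = random.Random(seed)
    A = [None] + [math.exp(rng.gauss(-0.5, 1.0)) for _ in range(n)]  # 1-based
    B = [None] + [rng.expovariate(1.0) for _ in range(n)]
    # direct recursion
    y = y0
    for t in range(1, n + 1):
        y = A[t] * y + B[t]
    # decomposition: product term plus weighted sum of the noise
    pi_1n = math.prod(A[1:n + 1])
    ytilde = sum(math.prod(A[i + 1:n + 1]) * B[i] for i in range(1, n + 1))
    return y, pi_1n * y0 + ytilde
```

Both evaluations agree up to floating-point rounding, which is all the decomposition claims; the probabilistic content of (3.1) lies in how the two terms Y_0 η_n and S̃_n are bounded separately.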

Lemma 3.2. For any small δ > 0, there exists a constant c > 0 such that

(3.5) P( Σ_{j=1}^n (W_j - c_j) > x ) ≤ c n x^{-α-δ}, x > 1,

where c_j = 0 or c_j = E W_j according as α ≤ 1 or α > 1.

By virtue of (3.5) and (3.4) it suffices to study the probabilities P( Σ_{j=1}^n (Ũ_j - a_j) > x ), where a_j = 0 for α ≤ 1 and a_j = E Ũ_j for α > 1. We further decompose Ũ_i into

(3.6) Ũ_i = X_i + S_i + Z_i,

where for i ≤ n - n_3,

(3.7) X_i = B_i + Π_{ii} B_{i+1} + ⋯ + Π_{i,i+n_1-2} B_{i+n_1-1},
S_i = Π_{i,i+n_1-1} B_{i+n_1} + ⋯ + Π_{i,i+n_2-1} B_{i+n_2},
Z_i = Π_{i,i+n_2} B_{i+n_2+1} + ⋯ + Π_{i,i+n_3-1} B_{i+n_3}.

For i > n - n_3, define X_i, S_i, Z_i as follows. For n_2 < n - i < n_3 choose X_i, S_i as above and

Z_i = Π_{i,i+n_2} B_{i+n_2+1} + ⋯ + Π_{i,n-1} B_n.

For n_1 ≤ n - i ≤ n_2, choose Z_i = 0, X_i as before and

S_i = Π_{i,i+n_1-1} B_{i+n_1} + ⋯ + Π_{i,n-1} B_n.

Finally, for n - i < n_1, define S_i = 0, Z_i = 0 and

X_i = B_i + Π_{ii} B_{i+1} + ⋯ + Π_{i,n-1} B_n.

Let p_1, p, p_3 be the largest integers such that p_1 n_1 ≤ n - n_1 + 1, p n_1 ≤ n - n_2 and p_3 n_1 ≤ n - n_3, respectively. We study the asymptotic tail behavior of the corresponding block sums given by

(3.8) X_j = Σ_{i=(j-1)n_1+1}^{j n_1} X_i, S_j = Σ_{i=(j-1)n_1+1}^{j n_1} S_i, Z_j = Σ_{i=(j-1)n_1+1}^{j n_1} Z_i,

where j is less than or equal to p_1, p, p_3, respectively. The remaining steps of the proof are organized as follows.

Section 3.2: We show that the X_j's and Z_j's do not contribute to the considered large deviation probabilities. This is the content of Lemmas 3.4 and 3.5.
Section 3.3: We provide bounds for the tail probabilities of S_j; see Proposition 3.6 and Lemma 3.8. These bounds are the main ingredients in the proof of the large deviation result.
Section 3.4: In Proposition 3.9 we combine the bounds provided in the previous subsections.
Section 3.5: We apply Proposition 3.9 to prove the main result.

Proof of Lemma 3.1. The infinite series η = Σ_{i=0}^∞ Π_i has the distribution of the stationary solution to the stochastic recurrence equation (1.1) with B ≡ 1 a.s. and therefore, by Theorem 1.1,

P(η > x) ~ c x^{-α}, x → ∞.
It follows from a slight modification of Jessen and Mikosch [16], Lemma 4.1(4), and the independence of Y_0 and η that

(3.9) P(|Y_0| η > x) ≤ c x^{-α} log x, x → ∞.

Since s_n/n → 0 as n → ∞, we have

sup_{b_n ≤ x ≤ e^{s_n}} P(|Y_0| η_n > x)/(n P(|Y| > x)) ≤ sup_{b_n ≤ x ≤ e^{s_n}} P(|Y_0| η > x)/(n P(|Y| > x)) → 0.

There exist c_0, x_0 > 0 such that P(|Y_0| > y) ≤ c_0 y^{-α} for y > x_0. Therefore

P(|Y_0| η_n > x) ≤ P(x/η_n ≤ x_0) + c_0 x^{-α} E[η_n^α 1_{x/η_n > x_0}] ≤ c x^{-α} E η_n^α.

By Bartkiewicz et al. [1], E η_n^α ≤ c n. Hence

I_n = sup_{b_n ≤ x} P(|Y_0| η_n > x)/(n P(|Y| > x)) ≤ sup_{b_n ≤ x} c x^{-α} E η_n^α/(n P(|Y| > x)) < ∞.

This concludes the proof.

Proof of Lemma 3.2. Assume first that α > 1. Since E W_j is finite, -D log E A > α - 1 and D log x ≤ n_3, we have for some positive δ

(3.10) E W_j ≤ (E A)^{n_3} (1 - E A)^{-1} E|B| ≤ c e^{D log x · log E A} ≤ c x^{-(α-1)-δ},

and hence by Markov's inequality

P( Σ_{j=1}^n (W_j - E W_j) > x ) ≤ 2 x^{-1} Σ_{j=1}^n E W_j ≤ c n x^{-α-δ}.

If β < α ≤ 1, an application of Markov's inequality yields for some positive δ

P( Σ_{j=1}^n W_j > x ) ≤ x^{-β} Σ_{j=1}^n E W_j^β ≤ x^{-β} n E|B|^β (E A^β)^{n_3} (1 - E A^β)^{-1} ≤ c x^{-β} n e^{D log x · log E A^β} ≤ c n x^{-α-δ}.

In the last step we used the fact that -D log E A^β > α - β. This concludes the proof of the lemma.

3.2. Bounds for P(X_j > x) and P(Z_j > x). We will now study the tail behavior of the single block sums X_1, Z_1 defined in (3.8). We start with a useful auxiliary result.

Lemma 3.3. Assume ψ(α + ε) = E A^{α+ε} < ∞ for some ε > 0. Then there is a constant C = C(ε) > 0 such that ψ(α + γ) ≤ C e^{ργ} for |γ| ≤ ε/2, where ρ = E(A^α log A).

Proof. By a Taylor expansion and since ψ(α) = 1, ψ′(α) = ρ, we have for some θ ∈ (0, 1),

(3.11) ψ(α + γ) = 1 + ργ + 0.5 ψ″(α + θγ) γ².

If |θγ| ≤ ε/2 then, by assumption, ψ″(α + θγ) = E A^{α+θγ}(log A)² is bounded by a constant c > 0. Therefore,

ψ(α + γ) ≤ 1 + ργ + cγ² = e^{log(1 + ργ + cγ²)} ≤ C e^{ργ}.

The following lemma ensures that the X_i's do not contribute to the considered large deviation probabilities.

Lemma 3.4. There exist positive constants C_1, C_2, C_3 such that

P(X_1 > x) ≤ P(X̄_1 > x) ≤ C_1 x^{-α} e^{-C_2 (log x)^{C_3}}, x > 1,

where

X̄_1 = Σ_{i=1}^{n_1} ( |B_i| + Π_{ii} |B_{i+1}| + ⋯ + Π_{i,i+n_1-2} |B_{i+n_1-1}| ).

Proof. We have X̄_1 = Σ_{k=m+1}^{n_0} R_k, where for m < k ≤ n_0,

R_k = Π_{1,n_0-k} |B_{n_0-k+1}| + ⋯ + Π_{i,i+n_0-k-1} |B_{i+n_0-k}| + ⋯ + Π_{n_1,n_1+n_0-k-1} |B_{n_1+n_0-k}|.

Notice that for x sufficiently large,

{ Σ_{k=m+1}^{n_0} R_k > x } ⊂ ∪_{k=m+1}^{n_0} { R_k > x/k³ }.

Indeed, on the set { R_k ≤ x/k³, m < k ≤ n_0 } we have for some c > 0 and sufficiently large x, by the definition of m = [(log x)^{0.5+σ}],

Σ_{k=m+1}^{n_0} R_k ≤ (x/(m + 1)) Σ_{k=1}^∞ 1/k² ≤ c x/(log x)^{0.5+σ} < x.

We conclude that, with I_k = P(R_k > x/k³),

P( Σ_{k=m+1}^{n_0} R_k > x ) ≤ Σ_{k=m+1}^{n_0} I_k.

Next we study the probabilities I_k. Let δ = (log x)^{-0.5}. By Markov's inequality,

I_k ≤ (x/k³)^{-(α+δ)} E R_k^{α+δ} ≤ (x/k³)^{-(α+δ)} n_0^{α+δ} (E A^{α+δ})^{n_0-k} E|B|^{α+δ}.

By Lemma 3.3 and the definition of n_0 = [ρ^{-1} log x],

I_k ≤ c (x/k³)^{-(α+δ)} n_0^{α+δ} e^{(n_0-k)ρδ} ≤ c x^{-α} k^{3(α+δ)} n_0^{α+δ} e^{-kρδ}.

Since k > m = [(log x)^{0.5+σ}], there are positive constants ζ_1, ζ_2 such that kδ ≥ k^{ζ_1}(log x)^{ζ_2}, and therefore for sufficiently large x and appropriate positive constants C_1, C_2, C_3,

Σ_{k=m+1}^{n_0} I_k ≤ c x^{-α} n_0^{α+δ} Σ_{k=m+1}^{n_0} e^{-ρ k^{ζ_1}(log x)^{ζ_2}} k^{3(α+δ)} ≤ C_1 x^{-α} e^{-C_2 (log x)^{C_3}}.

This finishes the proof.

The following lemma ensures that the Z_i's do not contribute to the considered large deviation probabilities.

Lemma 3.5. There exist positive constants C_4, C_5, C_6 such that

P(Z_1 > x) ≤ P(Z̄_1 > x) ≤ C_4 x^{-α} e^{-C_5 (log x)^{C_6}}, x > 1,

where

Z̄_1 = Σ_{i=1}^{n_1} ( Π_{i,i+n_2} |B_{i+n_2+1}| + ⋯ + Π_{i,i+n_3-1} |B_{i+n_3}| ).

Proof. We have Z̄_1 = Σ_{k=1}^{n_3-n_2} R̄_k, where

R̄_k = Π_{1,n_2+k} |B_{n_2+k+1}| + ⋯ + Π_{i,i+n_2+k-1} |B_{i+n_2+k}| + ⋯ + Π_{n_1,n_1+n_2+k-1} |B_{n_1+n_2+k}|.

As in the proof of Lemma 3.4 we notice that, with J_k = P(R̄_k > x/(n_2+k)³), for x sufficiently large

P( Σ_{k=1}^{n_3-n_2} R̄_k > x ) ≤ Σ_{k=1}^{n_3-n_2} J_k.

Next we study the probabilities J_k. Choose δ = (n_2 + k)^{-0.5} < ε/2 with ε as in Lemma 3.3. By Markov's inequality,

J_k ≤ ((n_2+k)³/x)^{α-δ} E R̄_k^{α-δ} ≤ ((n_2+k)³/x)^{α-δ} n_1^{α-δ} (E A^{α-δ})^{n_2+k} E|B|^{α-δ}.

By Lemma 3.3 and since n_2 + k = n_0 + m + k,

(E A^{α-δ})^{n_2+k} ≤ c e^{-δρ(n_2+k)} ≤ c x^{-δ} e^{-δρ(m+k)}.

There is ζ_3 > 0 such that δ(m + k) ≥ (log x + k)^{ζ_3}. Hence, for appropriate constants C_4, C_5, C_6 > 0,

Σ_{k=1}^{n_3-n_2} J_k ≤ c x^{-α} n_1^{α-δ} Σ_{k=1}^{n_3-n_2} (n_2+k)^{3(α-δ)} e^{-ρ(log x + k)^{ζ_3}} ≤ C_4 x^{-α} e^{-C_5 (log x)^{C_6}}.

This finishes the proof.

3.3. Bounds for P(S_j > x). The next proposition is a first major step towards the proof of the main result. For the formulation of the result and its proof, recall the definitions of S_i and S_j from (3.7) and (3.8), respectively.

Proposition 3.6. Assume that c_+ > 0 and let (b_n) be any sequence such that b_n → ∞. Then the following relation holds:

(3.12) lim_{n→∞} sup_{x ≥ b_n} | P(S_1 > x)/(n_1 P(Y > x)) - c_∞ | = 0.

If c_+ = 0 then

(3.13) lim_{n→∞} sup_{x ≥ b_n} P(S_1 > x)/(n_1 P(|Y| > x)) = 0.

The proof depends on the following auxiliary result whose proof is given in Appendix B.

Lemma 3.7. Assume that Y and η_k (defined in (3.2)) are independent and ψ(α + ε) = E A^{α+ε} < ∞ for some ε > 0. Then for n_1 = n_0 - m = [ρ^{-1} log x] - [(log x)^{0.5+σ}] for some σ < 1/4 and any sequences b_n → ∞ and r_n → ∞ the following relation holds:

lim_{n→∞} sup_{r_n ≤ k ≤ n_1, b_n ≤ x} | P(η_k Y > x)/(k P(Y > x)) - c_∞ | = 0,

provided c_+ > 0. If c_+ = 0 then

lim_{n→∞} sup_{r_n ≤ k ≤ n_1, b_n ≤ x} P(η_k Y > x)/(k P(|Y| > x)) = 0.

Proof of Proposition 3.6. For i ≤ n_1, consider

S_i + S̄_i = Π_{i,n_1} B_{n_1+1} + ⋯ + Π_{i,i+n_1-2} B_{i+n_1-1} + S_i + Π_{i,i+n_2} B_{i+n_2+1} + ⋯ + Π_{i,n_2+n_1-1} B_{n_2+n_1}
= Π_{i,n_1} ( B_{n_1+1} + A_{n_1+1} B_{n_1+2} + ⋯ + Π_{n_1+1,n_2+n_1-1} B_{n_2+n_1} ).

Notice that P(S̄_1 + ⋯ + S̄_{n_1} > x) ≤ n_1 P(S̄_1 > x/n_1). Therefore, and by virtue of Lemmas 3.4 and 3.5, there exist positive constants C_7, C_8, C_9 such that

P(S̄_1 + ⋯ + S̄_{n_1} > x) ≤ C_7 x^{-α} e^{-C_8(log x)^{C_9}}, x ≥ 1.

Therefore, and since S_1 = Σ_{i=1}^{n_1} S_i, it suffices for (3.12) to show that

lim_{n→∞} sup_{x ≥ b_n} | P( S_1 + Σ_{i=1}^{n_1} S̄_i > x )/(n_1 P(Y > x)) - c_∞ | = 0.

We observe that

S_1 + Σ_{i=1}^{n_1} S̄_i =_d U T_1 and T_1 + T_2 =_d Y,

where

U = Π_{1,n_1} + Π_{2,n_1} + ⋯ + Π_{n_1,n_1},
T_1 = B_{n_1+1} + Π_{n_1+1,n_1+1} B_{n_1+2} + ⋯ + Π_{n_1+1,n_2+n_1-1} B_{n_2+n_1},
T_2 = Π_{n_1+1,n_2+n_1} B_{n_2+n_1+1} + Π_{n_1+1,n_2+n_1+1} B_{n_2+n_1+2} + ⋯.

Since U =_d η_{n_1} and Y =_d T_1 + T_2, in view of Lemma 3.7 we obtain

lim_{n→∞} sup_{x ≥ b_n} | P(U(T_1 + T_2) > x)/(n_1 P(Y > x)) - c_∞ | = 0,

provided c_+ > 0, or

lim_{n→∞} sup_{x ≥ b_n} P(U(T_1 + T_2) > x)/(n_1 P(|Y| > x)) = 0

if c_+ = 0. Thus to prove the proposition it suffices to justify the existence of some positive constants C_10, C_11, C_12 such that

(3.14) P(U T_2 > x) ≤ C_10 x^{-α} e^{-C_11 (log x)^{C_12}}, x > 1.

For this purpose we use the same argument as in the proof of Lemma 3.4. First we write

P(U T_2 > x) ≤ Σ_{k=0}^∞ P( U Π_{n_1+1,n_1+n_2+k} |B_{n_1+n_2+k+1}| > x/(log x + k)³ ).

Write δ = (log x + k)^{-0.5}. Then by Lemma 3.3, Markov's inequality and since n_2 = n_0 + m,

P( U Π_{n_1+1,n_1+n_2+k} |B_{n_1+n_2+k+1}| > x/(log x + k)³ )
≤ (log x + k)^{3(α-δ)} x^{-(α-δ)} E U^{α-δ} (E A^{α-δ})^{n_2+k} E|B|^{α-δ}
≤ c (log x + k)^{3(α-δ)} x^{-(α-δ)} e^{-(n_2+k)ρδ}
≤ c e^{-(m+k)ρδ} (log x + k)^{3(α-δ)} x^{-α}.

There is ζ > 0 such that (m + k)δ ≥ (log x + k)^ζ and therefore,

P(U T_2 > x) ≤ c x^{-α} Σ_{k=0}^∞ e^{-ρ(log x + k)^ζ} (log x + k)^{3(α-δ)} ≤ c x^{-α} e^{-(log x)^ζ ρ/2}.

This proves (3.14) and the proposition.

Observe that if |i - j| > 2 then S_i and S_j are independent. For 0 < |i - j| ≤ 2 we have the following bound.

Lemma 3.8. The following relation holds for some constant c > 0:

sup_{i ≥ 1, 0 < |i-j| ≤ 2} P(S_i > x, S_j > x) ≤ c n_1^{0.5} x^{-α}, x > 1.

Proof. Assume without loss of generality that i = 1 and j = 2, 3. Then we have

S_1 ≤ (Π_{1,n_1} + ⋯ + Π_{n_1,n_1}) ( |B_{n_1+1}| + Π_{n_1+1,n_1+1} |B_{n_1+2}| + ⋯ + Π_{n_1+1,n_1+n_2-1} |B_{n_1+n_2}| ) =: U_1 T_1,
S_2 ≤ (Π_{n_1+1,2n_1} + ⋯ + Π_{2n_1,2n_1}) ( |B_{2n_1+1}| + Π_{2n_1+1,2n_1+1} |B_{2n_1+2}| + ⋯ + Π_{2n_1+1,2n_1+n_2-1} |B_{2n_1+n_2}| ) =: U_2 T_2,
S_3 ≤ (Π_{2n_1+1,3n_1} + ⋯ + Π_{3n_1,3n_1}) ( |B_{3n_1+1}| + Π_{3n_1+1,3n_1+1} |B_{3n_1+2}| + ⋯ + Π_{3n_1+1,3n_1+n_2-1} |B_{3n_1+n_2}| ) =: U_3 T_3.

We observe that U_1 =_d η_{n_1}, that U_i, i = 1, 2, 3, are independent, that U_i is independent of T_i, and that the T_i's have power law tails with index α > 0. We conclude from (3.12) that for each i,

P(S_1 > x, S_2 > x) ≤ P(T_1 > x n_1^{-1/(2α)}) + P(T_1 ≤ x n_1^{-1/(2α)}, U_1 T_1 > x, U_2 T_2 > x)
≤ c n_1^{0.5} x^{-α} + P(n_1^{-1/(2α)} U_1 > 1, U_2 T_2 > x)
≤ c n_1^{0.5} x^{-α} + P(U_1 > n_1^{1/(2α)}) P(U_2 T_2 > x) ≤ c n_1^{0.5} x^{-α}.

In the same way we can bound P(S_1 > x, S_3 > x). We omit details.

3.4. Semi-final steps in the proof of the main theorem. In the following proposition, we combine the various tail bounds derived in the previous sections. For this reason, recall the definitions of X_j, S_j and Z_j from (3.8) and that p_1, p, p_3 are the largest integers such that p_1 n_1 ≤ n - n_1 + 1, p n_1 ≤ n - n_2 and p_3 n_1 ≤ n - n_3, respectively.

Proposition 3.9. Assume the conditions of Theorem 2.1. In particular, consider the following x-regions:

Λ_n = (n^{1/α}(log n)^M, ∞) for α ∈ (0, 2], M > 2, and Λ_n = (c_n n^{0.5} log n, ∞) for α > 2, c_n → ∞,

and introduce a sequence s_n such that e^{s_n} ∈ Λ_n and s_n = o(n). Then the following relations hold:

(3.15) lim sup_{n→∞} sup_{x ∈ Λ_n} P( Σ_{j=1}^{p} (S_j - c_j) > x )/(n P(|Y| > x)) ≤ c_∞ c_+/(c_+ + c_-),

(3.16) 0 = lim_{n→∞} sup_{x ∈ Λ_n, log x ≤ s_n} | P( Σ_{j=1}^{p} (S_j - c_j) > x )/(n P(|Y| > x)) - c_∞ c_+/(c_+ + c_-) |,

(3.17) 0 = lim_{n→∞} sup_{x ∈ Λ_n} P( Σ_{j=1}^{p_1} (X_j - e_j) > x )/(n P(|Y| > x)),

(3.18) 0 = lim_{n→∞} sup_{x ∈ Λ_n} P( Σ_{j=1}^{p_3} (Z_j - z_j) > x )/(n P(|Y| > x)),

where c_j = e_j = z_j = 0 for α ≤ 1 and c_j = E S_j, e_j = E X_j, z_j = E Z_j for α > 1.

Proof. We split the proof into the different cases corresponding to α ≤ 1, α ∈ (1, 2] and α > 2.

The case 1 < α ≤ 2.

Step 1: Proof of (3.15) and (3.16). Since M > 2, we can choose ξ so small that

(3.19) 2 + 4ξ < M and ξ < 1/(4α),

and we write y = x/(log n)^{2ξ}. Consider the following disjoint partition of Ω:

Ω_1 = { S_j ≤ y for all 1 ≤ j ≤ p },
Ω_2 = ∪_{1 ≤ i < k ≤ p} { S_i > y, S_k > y },
Ω_3 = ∪_{k=1}^{p} { S_k > y, S_i ≤ y for all i ≠ k }.

Then for A = { Σ_{j=1}^{p} (S_j - c_j) > x },

(3.20) P(A) = P(A ∩ Ω_1) + P(A ∩ Ω_2) + P(A ∩ Ω_3) =: I_1 + I_2 + I_3.

Next we treat the terms I_i, i = 1, 2, 3, separately.

Step 1a: Bounds for I_2. We prove

(3.21) lim_{n→∞} sup_{x ∈ Λ_n} (x^α/n) I_2 = 0.

We have I_2 ≤ Σ_{1 ≤ i < k ≤ p} P(S_i > y, S_k > y). For k ≥ i + 3, S_k and S_i are independent and then, by (3.12),

P(S_i > y, S_k > y) = (P(S_1 > y))² ≤ c (n_1(y))² y^{-2α},

where n_1(y) is defined in the same way as n_1 = n_1(x) with x replaced by y. Also notice that n_1(y) ≤ n_1(x). For k = i + 1 or i + 2, we have by Lemma 3.8

P(S_i > y, S_k > y) ≤ c (n_1(y))^{0.5} y^{-α}.

Summarizing the above estimates and observing that (3.19) holds, we obtain for x ∈ Λ_n,

I_2 ≤ c [ p² n_1² y^{-2α} + p n_1^{0.5} y^{-α} ]
≤ c n x^{-α} [ x^{-α} n (log n)^{4ξα} + (log n)^{2ξα} n_1^{-0.5} ]
≤ c n x^{-α} [ (log n)^{(4ξ-M)α} + (log n)^{2ξα-0.5} ].

This proves (3.21).

Step 1b: Bounds for I_1. We will prove

(3.22) lim_{n→∞} sup_{x ∈ Λ_n} (x^α/n) I_1 = 0.

For this purpose, we write S_j^y = S_j 1_{S_j ≤ y} and notice that E S_j = E S_j^y + E S_j 1_{S_j > y}. Elementary computations show that

(3.23) E S_1^α ≤ n_1^{max(α,1)} (2m + 1)^{max(α,1)} E|B|^α.

Therefore by the Hölder and Minkowski inequalities, and by (3.12),

E S_j 1_{S_j > y} ≤ (E S_j^α)^{1/α} (P(S_j > y))^{1-1/α} ≤ c (log x)^{1.5+σ} y^{-α+1} (n_1(y))^{1-1/α} ≤ c (log x)^{1.5+σ+2ξ(α-1)} x^{-α+1} n_1.

Let now γ > 1/α and n^{1/α}(log n)^M ≤ x ≤ n^γ. Since p n_1 ≤ n and (3.19) holds,

(3.24) Σ_{j=1}^{p} E S_j 1_{S_j > y} ≤ c (log x)^{1.5+σ+2ξ(α-1)} x^{-α+1} n = o(x).

If x > n^γ then x > (log x)^M n^{1/α} and x^{-α} n < (log x)^{-Mα}. Hence

(3.25) Σ_{j=1}^{p} E S_j 1_{S_j > y} ≤ c x (log x)^{1.5+σ+2ξ(α-1)} (log x)^{-Mα} = o(x).

Using the bounds (3.24) and (3.25), we see that for x sufficiently large,

(3.26) I_1 ≤ P( Σ_{j=1}^{p} (S_j^y - E S_j^y) > 0.5 x )
= P( ( Σ_{1≤j≤p, j=1,4,7,...} + Σ_{1≤j≤p, j=2,5,8,...} + Σ_{1≤j≤p, j=3,6,9,...} ) (S_j^y - E S_j^y) > 0.5 x )
≤ 3 P( Σ_{1≤j≤p, j=1,4,7,...} (S_j^y - E S_j^y) > x/6 ).

In the last step, for the ease of presentation, we slightly abused notation since the number of summands in the 3 partial sums differs by a bounded number of terms which, however, do not contribute to the asymptotic tail behavior of I_1. Since the summands S_1^y, S_4^y, ... are iid and bounded, we may apply Prokhorov's inequality (A.1) to the random variables R_k = S_k^y - E S_1^y in (3.26) with y = x/(log n)^{2ξ} and B_p = p var(S_1^y). Then a_n = x/(2y) = 0.5 (log n)^{2ξ} and since, in view of (3.23), var(S_1^y) ≤ y^{2-α} E S_1^α,

I_1 ≤ c ( p var(S_1^y)/(xy) )^{a_n} ≤ ( c (log n)^{(1.5+σ)α+2ξ(α-1)-1} n/x^α )^{a_n}.

Therefore, for x ∈ Λ_n,

(x^α/n) I_1 ≤ c (log n)^{((1.5+σ)α+2ξ(α-1)) a_n - Mα(a_n-1)},

which tends to zero if M > 2, ξ satisfies (3.19) and σ is sufficiently small.

Step 1c: Bounds for I_3. Here we assume c_+ > 0. In this case, we can bound I_3 by using the following inequalities: for every ε > 0, there is n_0 such that for n ≥ n_0, uniformly for x ∈ Λ_n and every fixed k ≥ 1,

(3.27) (1 - ε) c_∞ ≤ P( A ∩ { S_k > y, S_i ≤ y, i ≠ k } )/(n_1 P(Y > x)) ≤ (1 + ε) c_∞.

Write z = x/(log n)^ξ and introduce the probabilities, for k ≥ 1,

(3.28) J_k = P( A ∩ { Σ_{j≠k} (S_j - c_j) > z, S_k > y, S_i ≤ y, i ≠ k } ),
V_k = P( A ∩ { Σ_{j≠k} (S_j - c_j) ≤ z, S_k > y, S_i ≤ y, i ≠ k } ).

Write S̄ = Σ (S_j - c_j), where the summation is taken over the set { j : 1 ≤ j ≤ p, j ≠ k, k ± 1, k ± 2 }. For n sufficiently large, J_k is dominated by

P( S̄ > z - 8y, S_k > y, S_i ≤ y, i ≠ k ) ≤ P(S_k > y) P( S̄ > 0.5 z, S_i ≤ y, i ≠ k ).

Applying the Prokhorov inequality (A.1) in the same way as in step 1b, we see that

P( S̄ > 0.5 z, S_i ≤ y, i ≠ k ) ≤ c n z^{-α} ≤ c (log n)^{(ξ-M)α},

and by Markov's inequality,

P(S_1 > y) ≤ c n_1(y) y^{-α} ≤ c n_1 y^{-α}.

Therefore

sup_{x ∈ Λ_n} (x^α/n_1) J_k ≤ c (log n)^{3αξ-Mα} → 0.

Thus it remains to bound the probabilities V_k. We start with sandwich bounds for V_k:

(3.29) P( S_k - c_k > x + z, |S̄| ≤ z - 8y, S_i ≤ y, i ≠ k ) ≤ V_k,
(3.30) V_k ≤ P( S_k - c_k > x - z, |S̄| ≤ z + 8y, S_i ≤ y, i ≠ k ).

By (3.12), for every small ε > 0, n sufficiently large and uniformly for x ∈ Λ_n, we have

(3.31) (1 - ε) c_∞ ≤ P(S_k - c_k > x + z)/(n_1 P(Y > x)) ≤ P(S_k - c_k > x - z)/(n_1 P(Y > x)) ≤ (1 + ε) c_∞,

where we also used that lim_{n→∞} (x ± z)/x = 1. Then the following upper bound is immediate from (3.30):

V_k/(n_1 P(Y > x)) ≤ P(S_k - c_k > x - z)/(n_1 P(Y > x)) ≤ (1 + ε) c_∞.

From (3.29) we have

V_k/(n_1 P(Y > x)) ≥ P(S_k - c_k > x + z)/(n_1 P(Y > x)) - L_k.

In view of the lower bound in (3.31), the first term on the right-hand side yields the desired lower bound if we can show that L_k is negligible. Indeed, we have

n_1 P(Y > x) L_k = P( S_k - c_k > x + z, { |S̄| > z - 8y } ∪ ∪_{i≠k} { S_i > y } )
≤ P( S_k - c_k > x + z, |S̄| > z - 8y ) + Σ_{i≠k} P( S_k - c_k > x + z, S_i > y )
≤ P( S_k - c_k > x + z ) [ P(|S̄| > z - 8y) + p P(S_1 > y) ] + c Σ_{0<|i-k|≤2} P( S_k - c_k > x + z, S_i > y ).

Similar bounds as in the proofs above yield that the right-hand side is of the order o(n_1/x^α), hence L_k = o(1). We omit details. Thus we obtain (3.27).

Step 1d: Final bounds. Now we are ready for the final steps in the proof of (3.16) and (3.15). Suppose first that c_+ > 0 and log x ≤ s_n. In view of the decomposition (3.20) and steps 1a and 1b we have, as n → ∞ and uniformly for x ∈ Λ_n,

P( Σ_{j=1}^{p} (S_j - c_j) > x )/(n P(|Y| > x)) ≈ I_3/(n P(|Y| > x)) = (n_1/n) Σ_{k=1}^{p} P( Σ_{j=1}^{p} (S_j - E S_j) > x, S_k > y, S_j ≤ y, j ≠ k )/(n_1 P(|Y| > x)).

In view of step 1c, in particular (3.27), the last expression is dominated from above by

(p n_1/n)(1 + ε) c_∞ c_+/(c_+ + c_-) ≤ (1 + ε) c_∞ c_+/(c_+ + c_-)

and from below by

(p n_1/n)(1 - ε) c_∞ c_+/(c_+ + c_-) ≥ ((n - n_2 - n_1)/n)(1 - ε) c_∞ c_+/(c_+ + c_-) ≥ (1 - ε) c_∞ c_+/(c_+ + c_-) (1 - 3 s_n/(nρ)).

Letting first n → ∞ and then ε → 0, (3.15) follows, and (3.16) is also satisfied provided the additional condition lim_{n→∞} s_n/n = 0 holds.

If c_+ = 0 then I_3 = o(n P(|Y| > x)). Let now x ∈ Λ_n and recall the definition of V_k from (3.28). Then for every small δ and sufficiently large x,

P( Σ_{j=1}^{p} (S_j - c_j) > x )/(n P(|Y| > x)) ≈ I_3/(n P(|Y| > x)) ≤ (n_1/n) Σ_{k=1}^{p} V_k/(n_1 P(|Y| > x)) ≤ c sup_{x ∈ Λ_n} P(S_1 > x(1 - δ))/(n_1 P(|Y| > x)),

and (3.15) follows from the second part of Lemma 3.7.

Step 2: Proof of (3.17) and (3.18). We restrict ourselves to (3.17) since the proof of (3.18) is analogous. Write B = { Σ_{j=1}^{p_1} (X_j - e_j) > x }. Then

P(B) ≤ P( B ∩ ∪_{k=1}^{p_1} { X_k > y } ) + P( B ∩ { X_j ≤ y for all j ≤ p_1 } ) = P_1 + P_2.

By Lemma 3.4,

P_1 ≤ p_1 P(X̄_1 > y) ≤ C_1 p_1 y^{-α} e^{-C_2 (log y)^{C_3}} = o(n x^{-α}).

For the estimation of P_2 consider the random variables X_j^y = X_j 1_{X_j ≤ y} and proceed as in step 1b.

The case α > 2. The proof is analogous to the case α ∈ (1, 2]. We indicate the differences in the main steps.

Step 1b. First we bound the large deviation probabilities of the truncated sums (this is an analog of step 1b above). We assume without loss of generality that c_n ≤ log n. Our aim now is to prove that, for y = x c_n^{-0.5},

(3.32) lim_{n→∞} sup_{x ∈ Λ_n} (x^α/n) P( Σ_{j=1}^{p} (S_j - E S_j) > x, S_j ≤ y for all j ≤ p ) = 0.

We proceed as in the proof of (3.22) with the same notation S_j^y = S_j 1_{S_j ≤ y}. As in the proof mentioned, Σ_{j=1}^{p} E S_j 1_{S_j > y} = o(x), and so we estimate the probability of interest by

(3.33) I := 3 P( Σ_{1≤j≤p, j=1,4,7,...} (S_j^y - E S_1^y) > x/6 ).

The summands in the latter sum are independent and therefore one can apply the two-sided version of the Fuk-Nagaev inequality (A.3) to the random variables in (3.33): with a_n = βx/y = c_n^{0.5} β and p var(S_1^y) ≤ c p n_1² ≤ c n n_1,

(3.34) I ≤ ( c p (log n)^{(1.5+σ)α}/(x y^{α-1}) )^{a_n} + exp( -(1 - β)² x²/(2 e^α c n n_1) ).

We suppose first that x ≤ n^{0.75}. Then the first quantity in (3.34) multiplied by x^α/n is dominated by

( c (log n)^{(1.5+σ)α} c_n^{0.5(α-1)} )^{a_n} (n/x^α)^{a_n-1} ≤ ( c (log n)^{(1.5+σ)α} )^{a_n} c_n^{0.5 a_n (α-1)} ( n^{0.5α-1} c_n^α (log n)^α )^{-(a_n-1)} → 0.

The second quantity in (3.34) multiplied by x^α/n is dominated by

(x^α/n) exp( -(1 - β)² c_n² (log n)²/(2 e^α c n_1) ) ≤ n^{0.75α-1} n^{-c c_n²} → 0.

If x > n^{0.75} then x n^{-0.5} > x^δ for an appropriately small δ > 0. Then the first quantity in (3.34), multiplied by x^α/n, is dominated by

( c (log x)^{(1.5+σ)α} )^{a_n} c_n^{0.5 a_n (α-1)} ( n^{0.5α-1} x^{αδ} )^{-(a_n-1)} → 0.

The second quantity is dominated by

(x^α/n) exp( -(1 - β)² c x^{2δ}/(2 e^α n_1) ) ≤ x^α e^{-c x^δ} → 0,

which finishes the proof of (3.32).

Step 1c. We prove that, for any ε ∈ (0, 1), sufficiently large n and fixed k ≥ 1, the following relation holds uniformly for x ∈ Λ_n:

(3.35) (1 - ε) c_∞ ≤ P( Σ_{j=1}^{p} (S_j - E S_j) > x, S_k > y, S_i ≤ y, i ≠ k )/(n_1 P(Y > x)) ≤ (1 + ε) c_∞.

Let z ∈ Λ_n be such that x/z → ∞.
As for α (1, 2], one proves x α p P j ES j ) > x, n 1 j=1(s (S j ES j ) > z, S k > y, S j y, j k 0. j k

Apply the Fuk–Nagaev inequality (A.3) to bound

$$P\Big(\sum_{j \ne k} (S_j - E S_j) > \frac{z}{2},\ S_j \le y,\ j \ne k\Big).$$

In the remainder of the proof one can follow the arguments of step 1c for $\alpha \in (1, 2]$.

The case $\alpha \le 1$. The proof is analogous to the case $1 < \alpha \le 2$; instead of Prokhorov's inequality (A.1) we apply S.V. Nagaev's inequality (A.2). We omit further details.

Final steps in the proof of Theorem 2.1. We have for small $\varepsilon > 0$,

(3.36) $$P\Big(\sum_{i=1}^n (\bar S_i - E \bar S_i) > x(1 + 2\varepsilon)\Big) - P\Big(\sum_{i=1}^n (\bar X_i - E \bar X_i) > x\varepsilon\Big) - P\Big(\sum_{i=1}^n (Z_i - E Z_i) > x\varepsilon\Big)$$
$$\le P(S_n - d_n > x)$$
$$\le P\Big(\sum_{i=1}^n (\bar S_i - E \bar S_i) > x(1 - 2\varepsilon)\Big) + P\Big(\sum_{i=1}^n (\bar X_i - E \bar X_i) > x\varepsilon\Big) + P\Big(\sum_{i=1}^n (Z_i - E Z_i) > x\varepsilon\Big).$$

Divide the last two probabilities in the first and last lines by $n P(Y > x)$. Then these ratios converge to zero for $x \in \Lambda_n$, in view of (3.17), (3.18) and Lemmas 3.4 and 3.5. Now

$$P\Big(\sum_{i=p n_1+1}^{n} (\tilde S_i - E \tilde S_i) > x(1 - 2\varepsilon)\Big) = P\Big(\sum_{i=p n_1+1}^{n - n_1} (\tilde S_i - E \tilde S_i) > x(1 - 2\varepsilon)\Big)$$
$$\le P\Big(\sum_{i=p n_1+1}^{n - n_1} \tilde S_i > x(1 - 2\varepsilon) - \sum_{i=p n_1+1}^{n - n_1} E \tilde S_i\Big),$$

where $\tilde S_i = \Pi_{i,i+n_1-1} B_{i+n_1} + \cdots + \Pi_{i,i+n_2-1} B_{i+n_2}$. Notice that if $i \le n - n_2$ then (for $\alpha > 1$)

$$E \tilde S_i = E \tilde S_1,$$

and for $n - n_2 < i \le n - n_1$,

$$E \tilde S_i = (EA)^{n_1} \big(1 + EA + \cdots + (EA)^{n - i - n_1}\big)\, EB.$$

Hence there is $C$ such that

$$\sum_{i=p n_1+1}^{n - n_1} E \tilde S_i \le 2 n_1 C$$

and so by Proposition 3.6,

$$P\Big(\sum_{i=p n_1+1}^{n - n_1} \tilde S_i > x(1 - 2\varepsilon) - \sum_{i=p n_1+1}^{n - n_1} E \tilde S_i\Big) \le 2 n_1\, P\Big(\tilde S_1 > \frac{x(1 - 2\varepsilon) - 2 n_1 C}{2 n_1}\Big) \le C_1 n_1 x^{-\alpha} = o\big(n\, P(Y > x)\big),$$

provided $\lim_n s_n/n = 0$. Taking into account (3.16) and the sandwich (3.36), we conclude that (2.2) holds. If the $x$-region is not bounded from above and $n > n_1(x)$, then the above calculations together with Lemma 3.1 show (2.1). If $n \le n_1(x)$, then

$$P(S_n - d_n > x) \le C_1 x^{-\alpha} e^{-C_2 (\log x)^{C_3}}$$

and again (2.1) holds.

4. Ruin probabilities

In this section we study the ruin probability related to the centered partial sum process $T_n = S_n - E S_n$, $n \ge 0$; i.e., for given $u > 0$ and $\mu > 0$ we consider the probability

$$\psi(u) = P\Big(\sup_{n \ge 1} (T_n - \mu n) > u\Big).$$

We will work under the assumptions of Kesten's Theorem 1.1. Therefore the random variables $Y_i$ are regularly varying with index $\alpha > 0$. Only for $\alpha > 1$ does the variable $Y$ have finite expectation, and therefore we will assume this condition throughout. Notice that the random walk $(T_n - n\mu)$ has dependent steps and negative drift. Since $(Y_n)$ is ergodic, we have $n^{-1}(T_n - n\mu) \to -\mu$ a.s., and in particular $\sup_{n \ge 1}(T_n - n\mu) < \infty$ a.s.

It is in general difficult to calculate $\psi(u)$ for a given value $u$, and therefore most results on ruin study the asymptotic behavior of $\psi(u)$ as $u \to \infty$. If the sequence $(Y_i)$ is iid it is well known (see Embrechts and Veraverbeke [7] for a special case of subexponential step distribution and Mikosch and Samorodnitsky [19] for a general regularly varying step distribution) that

(4.1) $$\psi_{\mathrm{ind}}(u) \sim \frac{u\, P(Y > u)}{\mu (\alpha - 1)}, \quad u \to \infty.$$

(We write $\psi_{\mathrm{ind}}$ to indicate that we are dealing with iid steps.) If the step distribution has exponential moments, the ruin probability $\psi_{\mathrm{ind}}(u)$ decays to zero at an exponential rate; see for example the celebrated Cramér–Lundberg bound in Embrechts et al. [6], Chapter 2. It is the main aim of this section to prove the following analog of the classical ruin bound (4.1).

Theorem 4.1.
Assume that the conditions of Theorem 1.1 are satisfied and additionally $B \ge 0$ a.s., there exists $\varepsilon > 0$ such that $E A^{\alpha+\varepsilon}$ and $E B^{\alpha+\varepsilon}$ are finite, $\alpha > 1$ and $c_+ > 0$. The following asymptotic result for the ruin probability holds for fixed $\mu > 0$, as $u \to \infty$:

(4.2) $$P\Big(\sup_{n \ge 1}(S_n - E S_n - n\mu) > u\Big) \sim \frac{c_\infty}{\mu(\alpha - 1)}\, u^{-\alpha+1} \sim \frac{c_\infty}{c_+}\, \frac{u\, P(Y > u)}{\mu(\alpha - 1)}.$$

Remark 4.2. We notice that the dependence in the sequence $(Y_t)$ manifests itself in the constant $c_\infty/c_+$ in relation (4.2), which appears in contrast to the iid case; see (4.1).
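The ruin probability in Theorem 4.1 can be checked numerically by crude Monte Carlo. The sketch below is not from the paper: the distributions of $A$ and $B$, the parameter values, and the truncation of the supremum at `n_steps` are all illustrative assumptions. We take $\log A \sim N(-1.5, 1)$, so that $E \log A < 0$ and $E A^3 = 1$, i.e. Kesten's tail index is $\alpha = 3$, and $B \sim \mathrm{Exp}(1)$, so that $EY = EB/(1 - EA)$ is available in closed form for the centering.

```python
import math
import random

def simulate_ruin(u, mu, n_steps=1000, n_paths=500, seed=1):
    """Monte Carlo sketch of psi(u) = P(sup_{n>=1} (S_n - E S_n - mu*n) > u)
    for Y_n = A_n * Y_{n-1} + B_n, truncating the supremum at n_steps.
    Illustrative model: log A ~ N(-1.5, 1) (E log A < 0, E A^3 = 1, alpha = 3),
    B ~ Exp(1)."""
    rng = random.Random(seed)
    EA, EB = math.exp(-1.0), 1.0        # E A = exp(-1.5 + 1/2), E B = 1
    EY = EB / (1.0 - EA)                # stationary mean of Y (iid (A, B))
    ruined = 0
    for _ in range(n_paths):
        y, t = EY, 0.0                  # start at the stationary mean;
        for n in range(1, n_steps + 1): # a burn-in would be more careful
            y = rng.lognormvariate(-1.5, 1.0) * y + rng.expovariate(1.0)
            t += y - EY                 # T_n = S_n - n * EY
            if t - mu * n > u:          # ruin before time n_steps
                ruined += 1
                break
    return ruined / n_paths

# Theorem 4.1 predicts decay of order u^{1-alpha} = u^{-2} here, up to the
# constant c_inf/c_+; the estimate below is only a rough finite-horizon check.
print(simulate_ruin(u=20.0, mu=0.5))
```

Comparing the estimates at two levels $u$ and $2u$ should show roughly the $u^{1-\alpha}$ decay of (4.2), although the dependence constant $c_\infty/c_+$ itself is not easy to read off from a small simulation.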

To prove our result we proceed similarly to the proof of Theorem 2.1. First notice that, in view of (3.9),

$$P\Big(\sup_{n \ge 1}\big(Y_0 \eta_n - E(Y_0 \eta_n)\big) > u\Big) \le P(Y_0 \eta > u) = o(u^{1-\alpha}).$$

Thus, it is sufficient to prove

$$u^{\alpha-1}\, P\Big(\sup_{n \ge 1}(\bar S_n - E \bar S_n - n\mu) > u\Big) \to \frac{c_\infty}{\mu(\alpha - 1)},$$

for $\bar S_n$ defined in (3.2). Next we change indices as indicated after (3.3). However, this time we cannot fix $n$ and therefore we proceed carefully; the details will be explained below. Then we further decompose $\bar S_n$ into smaller pieces and study their asymptotic behavior.

Proof of Theorem 4.1. The following lemma shows that the centered sums $(\bar S_n - E \bar S_n)$, $n \ge uM$, for large $M$ do not contribute to the asymptotic behavior of the ruin probability as $u \to \infty$.

Lemma 4.3. The following relation holds:

$$\lim_{M \to \infty} \limsup_{u \to \infty}\, u^{\alpha-1}\, P\Big(\sup_{n > uM} (\bar S_n - E \bar S_n - n\mu) > u\Big) = 0.$$

Proof. Fix a large number $M$ and define the sequence $N_l = uM 2^l$, $l \ge 0$. Assume for ease of presentation that $(N_l)$ constitutes a sequence of even integers; otherwise we can take $N_l = [uM] 2^l$. Observe that

$$P\Big(\sup_{n > uM} (\bar S_n - E \bar S_n - n\mu) > u\Big) \le \sum_{l=0}^\infty p_l, \quad \text{where} \quad p_l = P\Big(\max_{n \in [N_l, N_{l+1})} (\bar S_n - E \bar S_n - n\mu) > u\Big).$$

For every fixed $l$, in the events above we make the change of indices $i \mapsto j = N_{l+1} - i$ and write, again abusing notation,

$$\tilde Y_j = B_j + \Pi_{jj} B_{j+1} + \cdots + \Pi_{j, N_{l+1}-2} B_{N_{l+1}-1}.$$

With this notation, we have

$$p_l = P\Big(\max_{n \in (0, N_l]} \sum_{i=n}^{N_{l+1}-1} (\tilde Y_i - E \tilde Y_i - \mu) > u\Big).$$

Using the decomposition (3.4) with the adjustment $n_4 = \min(j + n_3, N_{l+1} - 1)$, we write $\tilde Y_j = \tilde U_j + \tilde W_j$. Then, by Lemma 3.2, for small $\delta > 0$,

$$p_{l1} := P\Big(\max_{n \in (0, N_l]} \sum_{i=n}^{N_{l+1}-1} (\tilde W_i - E \tilde W_i - \mu/4) > u/4\Big) \le P\Big(\sum_{i=N_l}^{N_{l+1}-1} (\tilde W_i - E \tilde W_i) > u/4 + N_l (\mu/4 - E \tilde W_1)\Big) \le c\, N_{l+1} N_l^{-\alpha-\delta} \le c\, (uM 2^l)^{1-\alpha-\delta}.$$

We conclude that for every $M > 0$, $\sum_{l=0}^\infty p_{l1} = o(u^{1-\alpha})$ as $u \to \infty$.

As in (3.7) we further decompose $\tilde U_i = \tilde X_i + \tilde S_i + \tilde Z_i$, making the definitions precise in what follows. Let $p$ be the smallest integer such that $p n_1 \ge N_{l+1} - 1$ for $n_1 = n_1(u)$. For $i = 1, \ldots, p-1$ define $X_i$ as in (3.8), and $X_p = \sum_{i=(p-1) n_1+1}^{N_{l+1}-1} \tilde X_i$. Now consider

$$p_{l2} := P\Big(\max_{n \in (0, N_l]} \sum_{i=n}^{N_{l+1}-1} (\tilde X_i - E \tilde X_i - \mu/4) > u/4\Big) \le P\Big(\Big|\sum_{i=N_l}^{N_{l+1}-1} (\tilde X_i - E \tilde X_i)\Big| + \max_{n \in (0, N_l]} \Big|\sum_{i=n}^{N_l-1} (\tilde X_i - E \tilde X_i)\Big| > u/4 + N_l \mu/4\Big)$$
$$\le P\Big(\max_{k \le p/2} \sum_{i=k}^{p} (X_i - E X_i) > u/8 + N_l \mu/8\Big) + P\Big(\max_{k \le p/2} \sum_{i=k}^{p} (X_i - E X_i) \le u/8 + N_l \mu/8,\ \max_{k \le p/2} \max_{1 \le j < n_1} \Big|\sum_{i=k n_1 - j}^{k n_1} (\tilde X_i - E \tilde X_i)\Big| > u/8 + N_l \mu/8\Big)$$
$$=: p_{l21} + p_{l22}.$$

The second quantity is estimated by using Lemma 3.4 as follows, for fixed $M > 0$:

$$p_{l22} \le c\, p\, P\Big(\sum_{i=1}^{n_1} |\tilde X_i - E \tilde X_i| > u/8 + N_l \mu/8\Big) \le C_1\, p\, N_l^{-\alpha} e^{-C_2 (\log N_l)^{C_3}} = o(u^{1-\alpha})\, 2^{-(\alpha-1) l},$$

where $C_i$, $i = 1, 2, 3$, are some positive constants. Therefore for every fixed $M$, $\sum_{l=0}^\infty p_{l22} = o(u^{1-\alpha})$ as $u \to \infty$.

Next we treat $p_{l21}$. We observe that $X_i$ and $X_j$ are independent for $|i - j| > 1$. Splitting the summation in $p_{l21}$ into the subsets of even and odd integers, we obtain an estimate of the following type:

$$p_{l21} \le c_1\, P\Big(\max_{k \le p/2} \sum_{k \le 2i \le p} (X_{2i} - E X_{2i}) > c_2 (u + N_l)\Big),$$

where the summands are now independent. By the law of large numbers, for any $\epsilon \in (0,1)$, $k \le p/2$ and large $l$,

$$P\Big(\sum_{2i \le k} (X_{2i} - E X_{2i}) > \epsilon\, c_2 (u + N_l)\Big) \le 1/2.$$

An application of the maximal inequality (A.5) in the Appendix and an adaptation of Proposition 3.9 yield

$$p_{l21} \le 2\, P\Big(\sum_{2i \le p} (X_{2i} - E X_{2i}) > (1 - \epsilon)\, c_2 (u + N_l)\Big) \le c\, N_l^{1-\alpha}.$$

Using the latter bound and summarizing the above estimates, we have finally proved that

$$\lim_{M \to \infty} \limsup_{u \to \infty}\, u^{\alpha-1} \sum_{l=0}^\infty p_{l2} = 0.$$

Similar arguments show that the sums involving the $\tilde S_i$'s and $\tilde Z_i$'s are negligible as well. This proves the lemma.

In view of Lemma 4.3 it suffices to study the following probabilities for sufficiently large $M > 0$:

$$P\Big(\max_{n \le Mu} (\bar S_n - E \bar S_n - n\mu) > u\Big).$$

Write $N_0 = Mu$, change indices again via $i \mapsto j = N_0 - i + 1$ and write, abusing notation,

$$\tilde Y_j = B_j + \Pi_{jj} B_{j+1} + \cdots + \Pi_{j, N_0-1} B_{N_0}.$$

Then we decompose $\tilde Y_j$ as in the proof of Lemma 4.3. Reasoning in the same way as above, one proves that the probabilities related to the quantities $\tilde W_i$, $\tilde X_i$ and $\tilde Z_i$ are of lower order than $u^{1-\alpha}$ as $u \to \infty$ and, thus, it remains to study the probabilities

(4.3) $$P\Big(\max_{n \le N_0} \sum_{i=n}^{N_0} (\tilde S_i - E \tilde S_i - \mu) > u\Big),$$

where the $\tilde S_i$ were defined in (3.7). Take $n_1 = \log N_0/\rho$, $p = N_0/n_1$ and denote by $S_i$ the sums of $n_1$ consecutive $\tilde S_i$'s, as defined in (3.8). Observe that for any $n$ such that $n_1(k-1) + 1 \le n \le k n_1$, $k - 1 \le p$, we have

$$\sum_{i=n}^{N_0} (\tilde S_i - E \tilde S_i - \mu) \le 2 n_1 (E \tilde S_1 + \mu) + \sum_{i=k-1}^{p} (S_i - E S_i - n_1 \mu)$$

and

$$\sum_{i=n}^{N_0} (\tilde S_i - E \tilde S_i - \mu) \ge -2 n_1 (E \tilde S_1 + \mu) + \sum_{i=k}^{p} (S_i - E S_i - n_1 \mu).$$

Therefore, and since $n_1$ is of order $\log u$, instead of the probabilities (4.3) it suffices to study

$$\psi_p(u) = P\Big(\max_{n \le p} \sum_{i=n}^{p} (S_i - E S_i - n_1 \mu) > u\Big).$$

Choose $q = [M/\varepsilon_1^{\alpha+1}] + 1$ for some small $\varepsilon_1$ and large $M$. Then the random variables

$$R_k = \sum_{i=(k-1) q}^{k q - 3} S_i, \quad k = 1, \ldots, r = p/q,$$

are independent and we have

$$\psi_p(u) \le P\Big(\max_{n \le qr} \sum_{i=n,\ i \ne kq-2,\, kq-1}^{qr} (S_i - E S_i - n_1 \mu) > u(1 - 3\varepsilon_1)\Big) + P\Big(\max_{j \le r} \sum_{k=j}^{r} (S_{kq-2} - E S_{kq-2} - n_1 \mu) > \varepsilon_1 u\Big)$$
$$+ P\Big(\max_{j \le r} \sum_{k=j}^{r} (S_{kq-1} - E S_{kq-1} - n_1 \mu) > \varepsilon_1 u\Big) + P\Big(\max_{qr < n < p} \sum_{i=n}^{p} (S_i - E S_i - n_1 \mu) > \varepsilon_1 u\Big) =: \sum_{i=1}^{4} \psi_p^{(i)}(u).$$

The quantities $\psi_p^{(i)}(u)$, $i = 2, 3$, can be estimated in the same way; we focus on $\psi_p^{(2)}(u)$. Applying Petrov's inequality (A.4) and Proposition 3.9, we obtain, for some constant $c$ not depending on $\varepsilon_1$,

$$\psi_p^{(2)}(u) \le P\Big(\max_{j \le r} \sum_{k=j}^{r} (S_{kq-2} - E S_{kq-2}) > \varepsilon_1 u\Big) \le c\, P\Big(\sum_{k=1}^{r} (S_{kq-2} - E S_{kq-2}) > \varepsilon_1 u/2\Big) \le c\, \frac{r n_1}{(\varepsilon_1 u)^\alpha} \le c\, \varepsilon_1\, u^{-\alpha+1}.$$

Hence we obtain $\lim_{\varepsilon_1 \to 0} \limsup_{u \to \infty} u^{\alpha-1} \psi_p^{(2)}(u) = 0$. By (3.15), for some constant $c$,

$$\psi_p^{(4)}(u) \le c\, \frac{q n_1}{(\varepsilon_1 u)^\alpha} \le c\, \frac{Mu}{r (\varepsilon_1 u)^\alpha}.$$

Since $r \ge q > M/\varepsilon_1^{\alpha+1}$ for large $u$, we conclude for such $u$ that $r^{-1} \le M^{-1} \varepsilon_1^{\alpha+1}$, and therefore $\psi_p^{(4)}(u) \le c\, \varepsilon_1\, u^{1-\alpha}$ and $\lim_{\varepsilon_1 \to 0} \limsup_{u \to \infty} u^{\alpha-1} \psi_p^{(4)}(u) = 0$.

Since $A$ and $B$ are non-negative, we have for large $u$, with $\mu_0 = \mu (q - 2)$,

$$\psi_p^{(1)}(u) \le P\Big(\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1) > u(1 - 3\varepsilon_1) - q n_1 (E S_1 + \mu)\Big) \le P\Big(\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1) > u(1 - 4\varepsilon_1)\Big).$$

Combining the bounds above, we have proved for large $u$, small $\varepsilon_1$ and some constant $c > 0$ independent of $\varepsilon_1$ that

$$\psi_p(u) \le P\Big(\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1) > u(1 - 4\varepsilon_1)\Big) + c\, \varepsilon_1\, u^{-\alpha+1}.$$

Similar arguments as above show that

$$\psi_p(u) \ge P\Big(\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1) > u(1 + 4\varepsilon_1)\Big) - c\, \varepsilon_1\, u^{-\alpha+1}.$$

Thus we have reduced the problem to the study of an expression consisting of the independent random variables $R_i$, and the proof of the theorem is finished if we can show the following result. Write

$$\Omega_r = \Big\{\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - n_1 \mu_0) > u\Big\}.$$

Lemma 4.4. The following relation holds:

$$\lim_{M \to \infty} \limsup_{u \to \infty} \Big|\, u^{\alpha-1}\, P(\Omega_r) - \frac{c_\infty/c_+}{\mu(\alpha - 1)} \Big| = 0.$$

Proof. Fix some $\varepsilon_0 > 0$ and choose some large $M$. Define $C_0 = (q - 2)\, c_\infty/c_+$. Reasoning as in the proof of (3.16), we obtain for any integers $0 \le j < k \le r$ and large $u$,

(4.4) $$1 - \varepsilon_0 \le \frac{P\big(\sum_{i=j+1}^{k} (R_i - E R_i) > u\big)}{n_1 (k - j)\, C_0\, u^{-\alpha}} \le 1 + \varepsilon_0.$$

Choose $\varepsilon, \delta > 0$ small, to be determined later in dependence on $\varepsilon_0$. Eventually, both $\varepsilon$ and $\delta$ will become arbitrarily small as $\varepsilon_0$ converges to zero. Define the sequence $k_0 = 0$, $k_l = [\delta n_1^{-1} (1 + \varepsilon)^{l-1} u]$, $l \ge 1$. Without loss of generality we will assume $k_{l_0} = M u n_1^{-1}$ for some integer $l_0$. For $\eta \le \varepsilon_0 (2 l_0)^{-1}$ consider the independent events

$$D_l = \Big\{\max_{k_l < j \le k_{l+1}} \sum_{i=k_l+1}^{j} (R_i - E R_i) > 2 \eta u\Big\}, \quad l = 0, \ldots, l_0 - 1.$$

Define the disjoint sets

$$W_l = \Omega_r \cap D_l \cap \bigcap_{m < l} D_m^c, \quad l = 0, \ldots, l_0 - 1.$$

We will show that

(4.5) $$P(\Omega_r) \le \sum_{l=0}^{l_0-1} P(W_l) + o(u^{1-\alpha}), \quad u \to \infty.$$

First we observe that $\Omega_r \subset \bigcup_{l=0}^{l_0-1} D_l$. Indeed, on $\bigcap_{l=0}^{l_0-1} D_l^c$ we have

$$\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - n_1 \mu_0) \le l_0\, 2 \eta u \le \varepsilon_0 u,$$

and therefore $\Omega_r$ cannot hold for small $\varepsilon_0$. Next we prove that

(4.6) $$P\Big(\bigcup_{m \ne l} (D_m \cap D_l)\Big) = o(u^{1-\alpha}), \quad u \to \infty.$$

Then (4.5) will follow. First apply Petrov's inequality (A.4) to $P(D_l)$ with $q_0$ arbitrarily close to one and power $p_0 \in (1, \alpha)$. Notice that $E|R_i|^{p_0}$ is of the order $q n_1$, hence $m_{p_0}$ is of the order $\delta \varepsilon q u \le c\, \delta \varepsilon M \varepsilon_1^{-\alpha-1} u$. Next apply (4.4). Then one obtains, for sufficiently large $u$, small $\varepsilon, \delta$, and some constant $c$ depending on $\varepsilon, \delta, \varepsilon_0, \varepsilon_1$,

$$P(D_l) \le q_0^{-1}\, P\Big(\sum_{i=k_l+1}^{k_{l+1}} (R_i - E R_i) > \eta u\Big) \le q_0^{-1}\, n_1 (k_{l+1} - k_l)(1 + \varepsilon_0)\, C_0\, (\eta u)^{-\alpha} \le c\, u^{1-\alpha}.$$

Hence $P(\bigcup_{m \ne l} (D_l \cap D_m)) = O(u^{2(1-\alpha)})$, as desired for (4.6), if all the parameters $\varepsilon, \delta, \varepsilon_0, \varepsilon_1$ are fixed.

Thus we showed (4.5), and it remains to find suitable bounds for the probabilities $P(W_l)$. On the set $W_l$ we have

$$\max_{j \le k_l} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1) \le \max_{j \le k_l} \sum_{i=1}^{j} (R_i - E R_i) \le 2 \eta l u \le \varepsilon_0 u, \qquad \max_{k_{l+1} < j \le r} \sum_{i=k_{l+1}+1}^{j} (R_i - E R_i) \le 2 \eta l_0 u \le \varepsilon_0 u.$$

Therefore for small $\varepsilon_0$ and large $u$, on the event $W_l$,

$$\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1) = \max_{k_l < j \le r} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1)$$
$$\le \sum_{i=1}^{k_l} (R_i - E R_i - \mu_0 n_1) + \max_{k_l < j \le k_{l+1}} \sum_{i=k_l+1}^{j} (R_i - E R_i) + \max_{k_{l+1} < j \le r} \sum_{i=k_{l+1}+1}^{j} (R_i - E R_i)$$
$$\le 2 \varepsilon_0 u - k_l \mu_0 n_1 + \max_{k_l < j \le k_{l+1}} \sum_{i=k_l+1}^{j} (R_i - E R_i).$$

Petrov's inequality (A.4) and (4.4) imply, for $l \ge 1$ and large $u$, that

$$P(W_l) \le P\Big(\max_{k_l < j \le k_{l+1}} \sum_{i=k_l+1}^{j} (R_i - E R_i) \ge (1 - 2\varepsilon_0) u + \mu_0 n_1 k_l\Big) \le q_0^{-1}\, P\Big(\sum_{i=k_l+1}^{k_{l+1}} (R_i - E R_i) \ge (1 - 3\varepsilon_0) u + \mu_0 n_1 k_l\Big)$$
$$\le q_0^{-1}\, (k_{l+1} - k_l)\, n_1\, (1 + \varepsilon_0)\, C_0\, \big((1 - 3\varepsilon_0) u + \mu_0 \delta (1 + \varepsilon)^{l-1} u\big)^{-\alpha} = q_0^{-1}\, \frac{\delta \varepsilon (1 + \varepsilon)^{l-1} (1 + \varepsilon_0)\, C_0}{\big((1 - 3\varepsilon_0) + \mu_0 \delta (1 + \varepsilon)^{l-1}\big)^{\alpha}}\, u^{1-\alpha}.$$

For $l = 0$ we use exactly the same arguments, but in this case $(k_1 - k_0) n_1 = \delta u$ and $k_0 = 0$. Thus we arrive at the upper bound

(4.7) $$\sum_{l=0}^{l_0-1} P(W_l) \le q_0^{-1} (1 + \varepsilon_0)\, C_0 \Big(\frac{\delta}{(1 - 3\varepsilon_0)^{\alpha}} + \sum_{l=1}^{l_0-1} \frac{\delta \varepsilon (1 + \varepsilon)^{l-1}}{\big((1 - 3\varepsilon_0) + \mu_0 \delta (1 + \varepsilon)^{l-1}\big)^{\alpha}}\Big)\, u^{1-\alpha} = q_0^{-1} (1 + \varepsilon_0)\, C_0\, A(\varepsilon, \delta, \varepsilon_0, l_0)\, u^{1-\alpha}.$$

To estimate $P(W_l)$ from below, first notice that on $W_l$, for large $u$,

$$\max_{j \le r} \sum_{i=1}^{j} (R_i - E R_i - \mu_0 n_1) \ge \sum_{i=1}^{k_{l+1}} (R_i - E R_i - \mu_0 n_1) \ge \sum_{i=k_l+1}^{k_{l+1}} (R_i - E R_i) - k_{l+1} \mu_0 n_1 - \varepsilon_0 u.$$

By (4.6) and (4.4), we have for $l \ge 1$ and as $u \to \infty$,

$$P(W_l) \ge P\Big(\sum_{i=k_l+1}^{k_{l+1}} (R_i - E R_i) > (1 + \varepsilon_0) u + \mu_0 n_1 k_{l+1},\ D_l \cap \bigcap_{m < l} D_m^c\Big)$$
$$\ge P\Big(\sum_{i=k_l+1}^{k_{l+1}} (R_i - E R_i) > (1 + \varepsilon_0) u + \mu_0 n_1 k_{l+1}\Big) - P\Big(D_l \cap \bigcup_{m \ne l} D_m\Big)$$
$$\ge (k_{l+1} - k_l)\, n_1\, (1 - \varepsilon_0)\, C_0\, \big((1 + \varepsilon_0) u + \mu_0 k_{l+1} n_1\big)^{-\alpha} - o(u^{1-\alpha}) \ge \frac{(1 - 2\varepsilon_0)\, C_0\, \delta (1 + \varepsilon)^{l-1} \varepsilon}{\big((1 + \varepsilon_0) + \mu_0 \delta (1 + \varepsilon)^{l}\big)^{\alpha}}\, u^{1-\alpha}.$$

Hence

$$\sum_{l=0}^{l_0-1} P(W_l) \ge (1 - 2\varepsilon_0)\, C_0 \Big(\frac{\delta}{(1 + \varepsilon_0 + \mu_0 \delta)^{\alpha}} + \sum_{l=1}^{l_0-1} \frac{\delta (1 + \varepsilon)^{l-1} \varepsilon}{\big((1 + \varepsilon_0) + \mu_0 \delta (1 + \varepsilon)^{l}\big)^{\alpha}}\Big)\, u^{1-\alpha} = (1 - 2\varepsilon_0)\, C_0\, B(\varepsilon, \delta, \varepsilon_0, l_0)\, u^{1-\alpha}.$$

Thus we proved that

(4.8) $$(1 - 2\varepsilon_0)\, B(\varepsilon, \delta, \varepsilon_0, l_0) \le \liminf_{u \to \infty} C_0^{-1} u^{\alpha-1} \sum_{l=0}^{l_0-1} P(W_l) \le \limsup_{u \to \infty} C_0^{-1} u^{\alpha-1} \sum_{l=0}^{l_0-1} P(W_l) \le q_0^{-1} (1 + \varepsilon_0)\, A(\varepsilon, \delta, \varepsilon_0, l_0).$$

Finally, we justify that the upper and lower bounds are close for small $\varepsilon, \delta, \varepsilon_0$, large $M$ and $q_0$ close to 1. For a real number $s$ which is small in absolute value, define the functions

$$f_s(x) = (1 + s + \mu_0 x)^{-\alpha} \quad \text{and} \quad F_s(x) = \frac{(1 + s + \mu_0 x)\, f_s(x)}{\mu_0 (\alpha - 1)}$$

on $[\delta, M]$. Let $x_l = \delta (1 + \varepsilon)^{l-1}$, $l = 1, \ldots, l_0$. Since the increments $x_{l+1} - x_l = \delta \varepsilon (1 + \varepsilon)^{l-1}$ are uniformly bounded by $\varepsilon M$ and $f_s$ is Riemann integrable on $[0, \infty)$, choosing $\varepsilon$ small we have

$$A(\varepsilon, \delta, \varepsilon_0, l_0) = \sum_{l=1}^{l_0-1} f_{3\varepsilon_0}(x_l)(x_{l+1} - x_l) \le \int_{\delta}^{M} f_{3\varepsilon_0}(x)\, dx + \frac{\varepsilon_0}{\mu_0 (\alpha - 1)} = F_{3\varepsilon_0}(\delta) - F_{3\varepsilon_0}(M) + \frac{\varepsilon_0}{\mu_0 (\alpha - 1)}.$$

Thus we obtain the bound

(4.9) $$\lim_{q_0 \to 1} \lim_{\varepsilon_0 \to 0} \lim_{M \to \infty} \lim_{\delta \to 0} q_0^{-1} (1 + \varepsilon_0)\, A(\varepsilon, \delta, \varepsilon_0, l_0) = \big(\mu_0 (\alpha - 1)\big)^{-1}.$$

Proceeding in a similar way,

$$B(\varepsilon, \delta, \varepsilon_0, l_0) \ge \int_{\delta}^{M} f_{\varepsilon_0}(x)\, dx - \frac{\varepsilon_0}{\mu_0 (\alpha - 1)} = F_{\varepsilon_0}(\delta) - F_{\varepsilon_0}(M) - \frac{\varepsilon_0}{\mu_0 (\alpha - 1)}.$$

The right-hand side converges to $\big(\mu_0 (\alpha - 1)\big)^{-1}$ by letting $\delta \to 0$, $M \to \infty$ and $\varepsilon_0 \to 0$. The latter limit relation in combination with (4.8) and (4.9) proves the lemma.
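The proofs above lean repeatedly on the classical tail bounds for sums of independent random variables collected in Appendix A below. As a quick numeric sanity check of one of them, Prokhorov's inequality (A.1), the sketch below compares the empirical tail of a sum of $n$ iid Uniform$(-1,1)$ variables (centered, bounded by $y = 1$, with $B_n = \mathrm{var}(R_n) = n/3$) against the bound; the distribution and sample sizes are illustrative choices, not from the paper.

```python
import math
import random

def prokhorov_bound(x, y, Bn):
    """Right-hand side of (A.1): exp(-(x/2y) * arsinh(xy/(2 B_n)))
    for centered summands with |X_i| <= y and B_n = var(R_n)."""
    return math.exp(-(x / (2.0 * y)) * math.asinh(x * y / (2.0 * Bn)))

def empirical_tail(x, n=100, trials=5000, seed=7):
    """Monte Carlo estimate of P(R_n >= x) for R_n a sum of n iid
    Uniform(-1, 1) variables."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if sum(rng.uniform(-1.0, 1.0) for _ in range(n)) >= x
    )
    return hits / trials

n, y = 100, 1.0
Bn = n / 3.0  # var of one Uniform(-1,1) is 1/3
for x in (5.0, 10.0, 15.0):
    # The bound is loose at moderate x, so the empirical tail sits well below it.
    assert empirical_tail(x, n) <= prokhorov_bound(x, y, Bn)
```

At $x = 10$ (about 1.7 standard deviations) the true tail is roughly 0.04 while the bound is about 0.47, illustrating that (A.1) is a valid but crude estimate in the central regime; its strength lies in the large-$x$ range exploited in the proofs.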

Appendix A. Inequalities for sums of independent random variables

In this section we consider a sequence $(X_n)$ of independent random variables and their partial sums $R_n = X_1 + \cdots + X_n$. We always write $B_n = \mathrm{var}(R_n)$ and $m_p = \sum_{j=1}^n E|X_j|^p$ for $p > 0$. First, we collect some of the classical tail estimates for $R_n$.

Lemma A.1. The following inequalities hold.

Prokhorov's inequality (cf. Petrov [23], p. 77): Assume that the $X_n$'s are centered, $|X_n| \le y$ for all $n \ge 1$ and some $y > 0$. Then

(A.1) $$P(R_n \ge x) \le \exp\Big(-\frac{x}{2y}\, \mathrm{arsinh}\Big(\frac{x y}{2 B_n}\Big)\Big), \quad x > 0.$$

S.V. Nagaev's inequality (see [22]): Assume $m_p < \infty$ for some $p > 0$. Then

(A.2) $$P(R_n > x) \le \sum_{j=1}^n P(X_j > y) + \Big(\frac{e\, m_p}{x\, y^{p-1}}\Big)^{x/y}, \quad x, y > 0.$$

Fuk–Nagaev inequality (cf. Petrov [23], p. 78): Assume that the $X_n$'s are centered, $p > 2$, $\beta = p/(p+2)$ and $m_p < \infty$. Then

(A.3) $$P(R_n > x) \le \sum_{j=1}^n P(X_j > y) + \Big(\frac{m_p}{\beta x y^{p-1}}\Big)^{\beta x/y} + \exp\Big(-\frac{(1-\beta)^2 x^2}{2 e^p B_n}\Big), \quad x, y > 0.$$

Petrov's inequality (cf. Petrov [23], p. 81): Assume that the $X_n$'s are centered and $m_p < \infty$ for some $p \in (1, 2]$. Then for every $q_0 < 1$, with $L = 1$ for $p = 2$ and $L = 2$ for $p \in (1, 2)$,

(A.4) $$P\Big(\max_{i \le n} R_i > x\Big) \le q_0^{-1}\, P\Big(R_n > x - \big[(L/(1 - q_0))\, m_p\big]^{1/p}\Big), \quad x \in \mathbb{R}.$$

Lévy–Ottaviani–Skorokhod inequality (cf. Petrov [23], Theorem 2.3 on p. 51): If $P(R_n - R_k \ge -c) \ge q$, $k = 1, \ldots, n-1$, for some constants $c \ge 0$ and $q > 0$, then

(A.5) $$P\Big(\max_{i \le n} R_i > x\Big) \le q^{-1}\, P(R_n > x - c), \quad x \in \mathbb{R}.$$

Appendix B. Proof of Lemma 3.7

Assume first $c_+ > 0$. By independence of $Y$ and $\eta_k$, we have for any $k \ge 1$, $x > 0$ and $r > 0$,

$$\frac{P(\eta_k Y > x)}{k\, P(Y > x)} = \Big(\int_{(0, x/r]} + \int_{[x/r, \infty)}\Big)\, \frac{P(Y > x/z)}{k\, P(Y > x)}\, dP_{\eta_k}(z) = I_1 + I_2.$$

For every $\varepsilon \in (0, 1)$ there is $r > 0$ such that for $x \ge r$ and $z \le x/r$,

$$\frac{P(Y > x/z)}{P(Y > x)} \in z^\alpha\, [1 - \varepsilon, 1 + \varepsilon] \quad \text{and} \quad \big|P(Y > x)\, x^\alpha - c_+\big| \le \varepsilon.$$

Hence, for sufficiently large $x$,

$$I_1 \in k^{-1}\, E\eta_k^\alpha \mathbf{1}_{\{\eta_k \le x/r\}}\, [1 - \varepsilon, 1 + \varepsilon] \quad \text{and} \quad I_2 \le c\, k^{-1}\, x^\alpha\, P(\eta_k > x/r) \le c\, k^{-1}\, E\eta_k^\alpha \mathbf{1}_{\{\eta_k > x/r\}}.$$

We have $I_1 \in \big(k^{-1} E\eta_k^\alpha - k^{-1} E\eta_k^\alpha \mathbf{1}_{\{\eta_k > x/r\}}\big)\, [1 - \varepsilon, 1 + \varepsilon]$, and by virtue of Bartkiewicz et al. [1], $\lim_k k^{-1} E\eta_k^\alpha = c_\infty$.
Therefore it is enough to prove that

(B.1) $$\lim_n \sup_{r_n \le k \le n_1,\ b_n \le x}\, k^{-1}\, E\eta_k^\alpha \mathbf{1}_{\{\eta_k > x\}} = 0.$$

By the Hölder and Markov inequalities we have for $\epsilon > 0$,

(B.2) $$E\eta_k^\alpha \mathbf{1}_{\{\eta_k > x\}} \le \big(E\eta_k^{\alpha+\epsilon}\big)^{\alpha/(\alpha+\epsilon)}\, \big(P(\eta_k > x)\big)^{\epsilon/(\alpha+\epsilon)} \le x^{-\epsilon}\, E\eta_k^{\alpha+\epsilon}.$$
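The truncated-moment bound (B.2) is in fact a pointwise inequality, $\eta^\alpha \mathbf{1}_{\{\eta > x\}} \le x^{-\epsilon}\, \eta^{\alpha+\epsilon}$, so it also holds for sample averages on any data set. A quick numeric check, using an illustrative Pareto distribution in place of $\eta_k$ (so that $E\eta^{\alpha+\epsilon}$ is finite):

```python
import random

# Pointwise, eta^alpha * 1{eta > x} <= eta^alpha * (eta/x)^eps * 1{eta > x}
#                                   <= x^{-eps} * eta^{alpha + eps},
# so (B.2) must hold for the sample means below as well.
alpha, eps, x = 3.0, 0.5, 5.0
rng = random.Random(42)
# Pareto with tail index 4 > alpha + eps, so E eta^{alpha+eps} < infinity.
samples = [rng.paretovariate(4.0) for _ in range(50000)]
lhs = sum(s**alpha for s in samples if s > x) / len(samples)
rhs = x**(-eps) * sum(s**(alpha + eps) for s in samples) / len(samples)
assert lhs <= rhs
```

The parameters (`alpha = 3`, `eps = 0.5`, Pareto tail index 4) are arbitrary illustrations; the inequality holds for any choice with finite $(\alpha+\epsilon)$-th moment.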


More information

On Kesten s counterexample to the Cramér-Wold device for regular variation

On Kesten s counterexample to the Cramér-Wold device for regular variation On Kesten s counterexample to the Cramér-Wold device for regular variation Henrik Hult School of ORIE Cornell University Ithaca NY 4853 USA hult@orie.cornell.edu Filip Lindskog Department of Mathematics

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

forms a complete separable metric space. The Hausdorff metric is subinvariant, i.e.,

forms a complete separable metric space. The Hausdorff metric is subinvariant, i.e., Applied Probability Trust 13 August 2010) LARGE DEVIATIONS FOR MINKOWSKI SUMS OF HEAVY-TAILED GENERALLY NON-CONVEX RANDOM COMPACT SETS THOMAS MIKOSCH, University of Copenhagen ZBYNĚK PAWLAS, Charles University

More information

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Additive functionals of infinite-variance moving averages Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Departments of Statistics The University of Chicago Chicago, Illinois 60637 June

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

Optimal global rates of convergence for interpolation problems with random design

Optimal global rates of convergence for interpolation problems with random design Optimal global rates of convergence for interpolation problems with random design Michael Kohler 1 and Adam Krzyżak 2, 1 Fachbereich Mathematik, Technische Universität Darmstadt, Schlossgartenstr. 7, 64289

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

Probability Theory. Richard F. Bass

Probability Theory. Richard F. Bass Probability Theory Richard F. Bass ii c Copyright 2014 Richard F. Bass Contents 1 Basic notions 1 1.1 A few definitions from measure theory............. 1 1.2 Definitions............................. 2

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Random convolution of O-exponential distributions

Random convolution of O-exponential distributions Nonlinear Analysis: Modelling and Control, Vol. 20, No. 3, 447 454 ISSN 1392-5113 http://dx.doi.org/10.15388/na.2015.3.9 Random convolution of O-exponential distributions Svetlana Danilenko a, Jonas Šiaulys

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Weak quenched limiting distributions of a one-dimensional random walk in a random environment

Weak quenched limiting distributions of a one-dimensional random walk in a random environment Weak quenched limiting distributions of a one-dimensional random walk in a random environment Jonathon Peterson Cornell University Department of Mathematics Joint work with Gennady Samorodnitsky September

More information

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

ON THE MOMENTS OF ITERATED TAIL

ON THE MOMENTS OF ITERATED TAIL ON THE MOMENTS OF ITERATED TAIL RADU PĂLTĂNEA and GHEORGHIŢĂ ZBĂGANU The classical distribution in ruins theory has the property that the sequence of the first moment of the iterated tails is convergent

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

A generalization of Strassen s functional LIL

A generalization of Strassen s functional LIL A generalization of Strassen s functional LIL Uwe Einmahl Departement Wiskunde Vrije Universiteit Brussel Pleinlaan 2 B-1050 Brussel, Belgium E-mail: ueinmahl@vub.ac.be Abstract Let X 1, X 2,... be a sequence

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

1 Sequences of events and their limits

1 Sequences of events and their limits O.H. Probability II (MATH 2647 M15 1 Sequences of events and their limits 1.1 Monotone sequences of events Sequences of events arise naturally when a probabilistic experiment is repeated many times. For

More information

IEOR 6711: Stochastic Models I Fall 2013, Professor Whitt Lecture Notes, Thursday, September 5 Modes of Convergence

IEOR 6711: Stochastic Models I Fall 2013, Professor Whitt Lecture Notes, Thursday, September 5 Modes of Convergence IEOR 6711: Stochastic Models I Fall 2013, Professor Whitt Lecture Notes, Thursday, September 5 Modes of Convergence 1 Overview We started by stating the two principal laws of large numbers: the strong

More information

Lecture 3 - Expectation, inequalities and laws of large numbers

Lecture 3 - Expectation, inequalities and laws of large numbers Lecture 3 - Expectation, inequalities and laws of large numbers Jan Bouda FI MU April 19, 2009 Jan Bouda (FI MU) Lecture 3 - Expectation, inequalities and laws of large numbersapril 19, 2009 1 / 67 Part

More information

arxiv: v1 [math.ap] 17 May 2017

arxiv: v1 [math.ap] 17 May 2017 ON A HOMOGENIZATION PROBLEM arxiv:1705.06174v1 [math.ap] 17 May 2017 J. BOURGAIN Abstract. We study a homogenization question for a stochastic divergence type operator. 1. Introduction and Statement Let

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

7 Convergence in R d and in Metric Spaces

7 Convergence in R d and in Metric Spaces STA 711: Probability & Measure Theory Robert L. Wolpert 7 Convergence in R d and in Metric Spaces A sequence of elements a n of R d converges to a limit a if and only if, for each ǫ > 0, the sequence a

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

THIELE CENTRE. Markov Dependence in Renewal Equations and Random Sums with Heavy Tails. Søren Asmussen and Julie Thøgersen

THIELE CENTRE. Markov Dependence in Renewal Equations and Random Sums with Heavy Tails. Søren Asmussen and Julie Thøgersen THIELE CENTRE for applied mathematics in natural science Markov Dependence in Renewal Equations and Random Sums with Heavy Tails Søren Asmussen and Julie Thøgersen Research Report No. 2 June 216 Markov

More information

Approximation of Heavy-tailed distributions via infinite dimensional phase type distributions

Approximation of Heavy-tailed distributions via infinite dimensional phase type distributions 1 / 36 Approximation of Heavy-tailed distributions via infinite dimensional phase type distributions Leonardo Rojas-Nandayapa The University of Queensland ANZAPW March, 2015. Barossa Valley, SA, Australia.

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca October 22nd, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Regular Variation and Extreme Events for Stochastic Processes

Regular Variation and Extreme Events for Stochastic Processes 1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

Chapter 6: Random Processes 1

Chapter 6: Random Processes 1 Chapter 6: Random Processes 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

On the Bennett-Hoeffding inequality

On the Bennett-Hoeffding inequality On the Bennett-Hoeffding inequality of Iosif 1,2,3 1 Department of Mathematical Sciences Michigan Technological University 2 Supported by NSF grant DMS-0805946 3 Paper available at http://arxiv.org/abs/0902.4058

More information

Nonlinear Time Series Modeling

Nonlinear Time Series Modeling Nonlinear Time Series Modeling Part II: Time Series Models in Finance Richard A. Davis Colorado State University (http://www.stat.colostate.edu/~rdavis/lectures) MaPhySto Workshop Copenhagen September

More information

ON METEORS, EARTHWORMS AND WIMPS

ON METEORS, EARTHWORMS AND WIMPS ON METEORS, EARTHWORMS AND WIMPS SARA BILLEY, KRZYSZTOF BURDZY, SOUMIK PAL, AND BRUCE E. SAGAN Abstract. We study a model of mass redistribution on a finite graph. We address the questions of convergence

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

Applied Probability Trust (16 August 2010)

Applied Probability Trust (16 August 2010) Applied Probability Trust 16 August 2010 A LARGE DEVIATION PRINCIPLE FOR MINKOWSKI SUMS OF HEAVY-TAILED RANDOM COMPACT CONVEX SETS WITH FINITE EXPECTATION THOMAS MIKOSCH, University of Copenhagen ZBYNĚK

More information

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary

More information

Mi-Hwa Ko. t=1 Z t is true. j=0

Mi-Hwa Ko. t=1 Z t is true. j=0 Commun. Korean Math. Soc. 21 (2006), No. 4, pp. 779 786 FUNCTIONAL CENTRAL LIMIT THEOREMS FOR MULTIVARIATE LINEAR PROCESSES GENERATED BY DEPENDENT RANDOM VECTORS Mi-Hwa Ko Abstract. Let X t be an m-dimensional

More information

Extreme Value Analysis and Spatial Extremes

Extreme Value Analysis and Spatial Extremes Extreme Value Analysis and Department of Statistics Purdue University 11/07/2013 Outline Motivation 1 Motivation 2 Extreme Value Theorem and 3 Bayesian Hierarchical Models Copula Models Max-stable Models

More information

4 Branching Processes

4 Branching Processes 4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring

More information

Precise Large Deviations for Sums of Negatively Dependent Random Variables with Common Long-Tailed Distributions

Precise Large Deviations for Sums of Negatively Dependent Random Variables with Common Long-Tailed Distributions This article was downloaded by: [University of Aegean] On: 19 May 2013, At: 11:54 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources

A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels with Correlated Sources Wei Kang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College

More information

A log-scale limit theorem for one-dimensional random walks in random environments

A log-scale limit theorem for one-dimensional random walks in random environments A log-scale limit theorem for one-dimensional random walks in random environments Alexander Roitershtein August 3, 2004; Revised April 1, 2005 Abstract We consider a transient one-dimensional random walk

More information

arxiv:math/ v4 [math.pr] 12 Apr 2007

arxiv:math/ v4 [math.pr] 12 Apr 2007 arxiv:math/612224v4 [math.pr] 12 Apr 27 LARGE CLOSED QUEUEING NETWORKS IN SEMI-MARKOV ENVIRONMENT AND ITS APPLICATION VYACHESLAV M. ABRAMOV Abstract. The paper studies closed queueing networks containing

More information

Extremogram and Ex-Periodogram for heavy-tailed time series

Extremogram and Ex-Periodogram for heavy-tailed time series Extremogram and Ex-Periodogram for heavy-tailed time series 1 Thomas Mikosch University of Copenhagen Joint work with Richard A. Davis (Columbia) and Yuwei Zhao (Ulm) 1 Jussieu, April 9, 2014 1 2 Extremal

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Thomas Mikosch and Daniel Straumann: Stable Limits of Martingale Transforms with Application to the Estimation of Garch Parameters

Thomas Mikosch and Daniel Straumann: Stable Limits of Martingale Transforms with Application to the Estimation of Garch Parameters MaPhySto The Danish National Research Foundation: Network in Mathematical Physics and Stochastics Research Report no. 11 March 2004 Thomas Mikosch and Daniel Straumann: Stable Limits of Martingale Transforms

More information

The Canonical Gaussian Measure on R

The Canonical Gaussian Measure on R The Canonical Gaussian Measure on R 1. Introduction The main goal of this course is to study Gaussian measures. The simplest example of a Gaussian measure is the canonical Gaussian measure P on R where

More information

General Theory of Large Deviations

General Theory of Large Deviations Chapter 30 General Theory of Large Deviations A family of random variables follows the large deviations principle if the probability of the variables falling into bad sets, representing large deviations

More information

STOCHASTIC GEOMETRY BIOIMAGING

STOCHASTIC GEOMETRY BIOIMAGING CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING 2018 www.csgb.dk RESEARCH REPORT Anders Rønn-Nielsen and Eva B. Vedel Jensen Central limit theorem for mean and variogram estimators in Lévy based

More information

Rare event simulation for the ruin problem with investments via importance sampling and duality

Rare event simulation for the ruin problem with investments via importance sampling and duality Rare event simulation for the ruin problem with investments via importance sampling and duality Jerey Collamore University of Copenhagen Joint work with Anand Vidyashankar (GMU) and Guoqing Diao (GMU).

More information

Notes 15 : UI Martingales

Notes 15 : UI Martingales Notes 15 : UI Martingales Math 733 - Fall 2013 Lecturer: Sebastien Roch References: [Wil91, Chapter 13, 14], [Dur10, Section 5.5, 5.6, 5.7]. 1 Uniform Integrability We give a characterization of L 1 convergence.

More information

arxiv: v2 [math.pr] 22 Apr 2014 Dept. of Mathematics & Computer Science Mathematical Institute

arxiv: v2 [math.pr] 22 Apr 2014 Dept. of Mathematics & Computer Science Mathematical Institute Tail asymptotics for a random sign Lindley recursion May 29, 218 arxiv:88.3495v2 [math.pr] 22 Apr 214 Maria Vlasiou Zbigniew Palmowski Dept. of Mathematics & Computer Science Mathematical Institute Eindhoven

More information