
THIELE CENTRE
for applied mathematics in natural science

Markov Dependence in Renewal Equations and Random Sums with Heavy Tails

Søren Asmussen and Julie Thøgersen

Research Report No. 2, June 2016


Markov Dependence in Renewal Equations and Random Sums with Heavy Tails

Søren Asmussen and Julie Thøgersen
Department of Mathematics, Aarhus University

Abstract

The Markov renewal equation
$$Z_i(x) \;=\; z_i(x) + \sum_{j\in E}\int_0^x Z_j(x-y)\,F_{ij}(dy), \qquad i\in E,$$
is considered in the subcritical case where the matrix of total masses of the $F_{ij}$ has spectral radius strictly less than one, and the asymptotics of the $Z_i(x)$ is found in the heavy-tailed case involving a local subexponential assumption on the $F_{ij}$. Three cases occur according to the balance between the $z_i(x)$ and the tails of the $F_{ij}$. A crucial step in the analysis is obtaining multivariate and local versions of a lemma due to Kesten on domination of subexponential tails. These also lead to various results on tail asymptotics of sums of a random number of heavy-tailed random variables in models which are more general than in the literature.

1 Introduction

The occurrence of heavy tails has been argued repeatedly in a variety of application areas covering insurance and finance ([1]), telecommunications and internet traffic ([2]), optics ([3]), cell proliferation ([4]) and many more; a broad overview is in [5]. Correspondingly, performance analysis of models for such situations has triggered a vast literature on probabilistic features of heavy tails.

This paper deals with two particular problems in this area. The first is asymptotics of renewal-type equations of the form
$$Z_i(x) \;=\; z_i(x) + \sum_{j\in E}\int_0^x Z_j(x-y)\,F_{ij}(dy), \qquad i\in E, \qquad\qquad (1.1)$$
where $E$ is a finite index set, $(Z_i)_{i\in E}$ a set of unknown functions defined on $[0,\infty)$, $(z_i)_{i\in E}$ a set of non-negative known functions, and $(F_{ij})_{i,j\in E}$ a set of non-negative heavy-tailed measures on $[0,\infty)$. The second is the tail behaviour of a random sum
$$S \;=\; \sum_{i=1}^{N} X_i \qquad\qquad (1.2)$$

where $N\in\mathbb N$ is a light-tailed r.v. and the $X_i$ are non-negative heavy-tailed r.v.'s, such that certain types of dependence, to be specified later, may occur.

The simple renewal equation
$$Z(x) \;=\; z(x) + \int_0^x Z(x-y)\,F(dy) \qquad\qquad (1.3)$$
is a classical structure in applied probability and occurs for example in branching processes ([6]), ruin problems ([7]) and ergodicity problems for possibly non-Markovian processes ([8, VI.1]). The emphasis is usually on asymptotic properties of $Z(x)$ as $x\to\infty$, where the simplest situation is existence of a limit (this and the following statements require regularity conditions which we omit; see the citations) when $F$ is a probability measure, i.e. when $\|F\|=1$ where $\|F\|$ is the total mass of $F$. However, in the branching process example $\|F\|$ is the expected number of children of an individual, so obviously the case $\|F\|\ne1$ is of interest. One then typically has an exponential order $e^{\gamma x}$ of $Z(x)$. Here $\gamma>0$ when $\|F\|>1$, and $\gamma<0$ when $\|F\|<1$ and $F$ is light-tailed (this last situation also occurs in Cramér-Lundberg asymptotics for ruin probabilities and queues, cf. [8, V.7]). For $\|F\|<1$ and $F$ heavy-tailed, the order depends on a delicate balance between the tails of $F$ and $z$, and in fact the results are more recent, [9] (see also [10] for closely related results).

The system (1.1) goes under the name of the Markov renewal equation, cf. [8]; this terminology stems from the case of $P=(\|F_{ij}\|)_{i,j\in E}$ being stochastic, i.e. the transition matrix of a Markov chain $\xi_0,\xi_1,\ldots$ In branching processes, it occurs when individuals have several types, and in insurance and finance it relates to regime-switching; another relevant example comes from computer reliability problems, [11]. The known asymptotic results on the $Z_i(x)$ depend crucially on the spectral radius $\rho$ of $P$: if $\rho=1$, in particular if $P$ is stochastic, limits exist, whereas otherwise the order is exponential, $e^{\gamma x}$, where $\gamma>0$ when $\rho>1$ and $\gamma<0$ when $\rho<1$ and the $F_{ij}$ are light-tailed [again, regularity conditions are required]. The gap is the case $\rho<1$ and heavy tails. The main contribution of the paper in the setting of (1.1) is to fill this gap. The result is stated as Theorem 2.2 below. Three cases occur depending on whether the tails of the $z_i$ or the $F_{ij}$ dominate, or whether they are of the same order. The conditions involve the non-standard concept of local subexponentiality.

One main example of random sums like (1.2) is total claims distributions in insurance: $N$ is the number of claims in a given period and $X_1,X_2,\ldots$ the claim sizes, which classically are taken i.i.d. and independent of $N$, which is assumed light-tailed (for example, the negative binomial distribution is popular because of its interpretation as a gamma mixture of Poissons). In finance, $S$ could be the payoff of a portfolio consisting of assets with values $X_1,X_2,\ldots$ With light-tailed $X_i$, $\mathbb P(S>x)$ decays roughly exponentially; with heavy tails the asymptotic form is $\mathbb E N\,\mathbb P(X_1>x)$. Our main result in that direction, Theorem 2.3 below, is an extension to sums of the form $\sum_{k=1}^{d}\sum_{i=1}^{N_k}X_{i;k}$, where $N_1,\ldots,N_d$ are dependent and the distribution of $X_{i;k}$ depends on $k$.
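The classical one-dimensional asymptotic $\mathbb P(S>x)\sim\mathbb E N\,\mathbb P(X_1>x)$ is easy to probe numerically. The following sketch is not part of the report; the Poisson claim count, the Pareto claim sizes and all parameter values are purely illustrative choices. It compares the empirical tail of a compound sum with the approximation; the ratio of the two columns approaches one as $x$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (hypothetical) parameters: Poisson(lam) claim count N and
# Pareto(alpha) claim sizes with tail P(X > x) = x**(-alpha) for x >= 1.
lam, alpha = 5.0, 1.5
n_paths = 200_000

N = rng.poisson(lam, size=n_paths)
S = np.array([(rng.pareto(alpha, size=n) + 1.0).sum() for n in N])

for x in (50.0, 100.0, 200.0):
    empirical = np.mean(S > x)
    predicted = lam * x ** (-alpha)        # E[N] * P(X_1 > x)
    print(f"x={x:6.0f}   P(S>x)={empirical:.5f}   EN*P(X1>x)={predicted:.5f}")
```

The dependent, multi-type version of this asymptotic is the content of Theorem 2.3.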

For example, in the insurance setting $(N_1,\ldots,N_d)$ could be conditionally independent given $(\tau_1,\ldots,\tau_d)$, with Poisson rate $\lambda_k$ for $N_k$, where $\tau_k$ is the time spent by some environmental process in state $k$ in the period $[0,T]$ (dependence occurs because $\tau_1+\cdots+\tau_d=T$).

The main step in the proof of Theorem 2.3 is a version of a classical lemma due to Kesten on tail domination of subexponential sums; for the proof of Theorem 2.2 we also need a local version of this.

The paper is organised as follows. Section 2 starts with some necessary background, in particular on local subexponentiality, and proceeds to state the two main results of the paper, Theorems 2.2 and 2.3 referred to above. The rest of the paper is then proofs, supplemented with miscellaneous results of some independent interest. Sections 3-5 give the random sum part in various settings. This, together with a number of additional steps, then allows us to conclude the proof of Theorem 2.2 in Section 6. This proceeds by first assuming $P$ to be substochastic, thereby allowing Markov chain interpretations, and finally reducing the general case to this one by a Perron-Frobenius type transformation.

2 Preliminaries and statement of main results

For a distribution $F$ on $\mathbb R_+$, let $F(x)=F(0,x]$ be the distribution function (c.d.f.) and $\overline F(x)=1-F(x)=F(x,\infty)$ the tail. If $F$ and $G$ are distributions and $X\sim F$ and $Y\sim G$ independent random variables, then $F*G$ denotes the convolution of $F$ and $G$,
$$F*G(x) \;=\; \int_0^x G(x-y)\,F(dy) \;=\; \int_0^x F(x-y)\,G(dy),$$
which is the distribution of the sum $X+Y$.

We next briefly mention the most standard definitions and relations in the heavy-tailed area. For a more detailed and thorough treatment, see [1], [12]. A distribution $F$ is said to be heavy-tailed if $F$ does not possess any positive exponential moments, that is, $\int_0^\infty\exp(\lambda x)\,F(dx)=\infty$ for all $\lambda>0$. It is long-tailed, written $F\in\mathcal L$, if it has unbounded support and $\overline F(x+y)/\overline F(x)\to1$ as $x\to\infty$ for any fixed $y\in\mathbb R$. Any $F\in\mathcal L$ is in particular heavy-tailed. $F$ is said to be subexponential, written $F\in\mathcal S$, if $\overline{F^{*n}}(x)/\overline F(x)\to n$ for all $n$ (actually, it is sufficient that this holds for $n=2$). The intuition is that the only significant way the sum of independent $F$-distributed random variables can exceed some large threshold $x$ is that the maximum of the random variables exceeds $x$. This is also known as the principle of a single big jump.

Similarly, a density $f$ is long-tailed if $f(x)>0$ for all sufficiently large $x$ and $f(x+t)/f(x)\to1$ for any fixed $t$. If $F$ and $G$ have densities $f,g$, respectively, the convolution $F*G$ then has density $f*g$ given by $f*g(x)=\int_0^x f(x-y)g(y)\,dy$. The density $f$ of $F$ is said to be subexponential, written $F\in\mathcal S_{ac}$, if $f$ is long-tailed and $f^{*2}(x)=\int_0^x f(x-y)f(y)\,dy\sim 2f(x)$ as $x\to\infty$. Note that $\mathcal S_{ac}$ is a subclass of $\mathcal S$. Also, since subexponentiality is a tail property, it is actually sufficient that $F$ has a density $f(x)$ only for sufficiently large $x$.
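The defining ratio $\overline{F^{*2}}(x)/\overline F(x)\to2$ can be checked numerically for a concrete distribution. The sketch below is not part of the report; the Pareto distribution and the value $\alpha=1.5$ are arbitrary illustrative choices. It evaluates the two-fold convolution tail by one-dimensional quadrature and prints the ratio, which approaches 2.

```python
from scipy.integrate import quad

alpha = 1.5
Fbar = lambda x: x ** (-alpha) if x >= 1 else 1.0   # Pareto tail, support [1, inf)
f = lambda y: alpha * y ** (-alpha - 1)             # Pareto density on [1, inf)

def conv2_tail(x):
    # P(X1 + X2 > x) = P(X2 > x - 1) + int_1^{x-1} P(X1 > x - y) f(y) dy, for x > 2
    integral, _ = quad(lambda y: Fbar(x - y) * f(y), 1.0, x - 1.0)
    return Fbar(x - 1.0) + integral

for x in (10.0, 100.0, 1000.0):
    print(x, conv2_tail(x) / Fbar(x))   # tends to 2: the single big jump
```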

We proceed to the less standard concepts of local long-tailedness and local subexponentiality, as introduced in [9]. The local property can be viewed as intermediate between being long-tailed (subexponential) and having a long-tailed (subexponential) density. First, we need to introduce some notation. For a fixed $T>0$ define $\Delta=\Delta_T=(0,T]$ and let $x+\Delta=\{x+y:\,y\in\Delta\}=(x,x+T]$.

Definition 2.1. A distribution $F$ is said to be $\Delta$-long-tailed, written $F\in\mathcal L_\Delta$, if $F(x+\Delta)>0$ for all sufficiently large $x$ and $F(x+y+\Delta)/F(x+\Delta)\to1$ as $x\to\infty$ for any fixed $y$. It is $\Delta$-subexponential, written $F\in\mathcal S_\Delta$, if $F\in\mathcal L_\Delta$ and $F^{*2}(x+\Delta)/F(x+\Delta)\to2$.

Notice that if we allow $T=\infty$ then the class $\mathcal L_\Delta$ corresponds to the ordinary long-tailed distributions $\mathcal L$; if $T<\infty$, then $\mathcal L_\Delta\subset\mathcal L$. Similarly, for any finite $T$ the class $\mathcal S_\Delta\subset\mathcal S$, and if we allow $T=\infty$ the two classes coincide. It appears that $\Delta$-subexponential distributions possess many properties similar to those of ordinary subexponential distributions. A main and crucial difference is that the tail function $\overline F(x)$ is monotone whereas $F(x+\Delta)$ may not be. Also, it is worth noticing that a distribution $F$ with a subexponential density $f$ is $\Delta$-subexponential for any $T>0$.

We are now ready to state our main results. Let $P$ be the matrix with $ij$-th element $p_{ij}=\|F_{ij}\|$.

Theorem 2.2. Consider the Markov renewal equation (1.1). Assume that $P$ is irreducible with $\mathrm{spr}(P)<1$ and that $F_{ij}(x+\Delta)\sim p_{ij}c_{ij}\,G(x+\Delta)$, where $c_{ij}>0$ and $G$ is a $\Delta$-subexponential distribution function for all $T>0$. Let $g(x)=G(x,x+1]$, $I_j=\int_0^\infty z_j(y)\,dy$, and let $d_{ij}$ be the $ij$-th element of the matrix $(I-P)^{-1}M(I-P)^{-1}$ with $M=(p_{kl}c_{kl})_{k,l\in E}$, and $k_{ij}$ the $ij$-th element of $\sum_{n=0}^\infty P^n=(I-P)^{-1}$. Then three cases occur:

i) Assume that $z_j$ is directly Riemann integrable and $z_j(x)/g(x)\to0$ for all $j\in E$. Then
$$Z_i(x)\;\sim\;\sum_{j\in E} I_j d_{ij}\,g(x).$$

ii) Assume that $z_j$ is directly Riemann integrable and $z_j(x)/g(x)\to a_j$, where $a_j>0$ for at least one $j\in E$. Then
$$Z_i(x)\;\sim\;\sum_{j\in E}\bigl(I_j d_{ij}+k_{ij}a_j\bigr)\,g(x).$$

iii) Assume that $z_j(y)/I_j$ is a subexponential density and that $z_j(x)/g(x)\to\infty$ for all $j$. Then
$$Z_i(x)\;\sim\;\sum_{j\in E} k_{ij}\,z_j(x).$$
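The constants entering Theorem 2.2 are plain matrix expressions, so they are easy to evaluate in a concrete model. A minimal sketch, with a hypothetical two-state kernel and arbitrary constants $c_{ij}$ (none of the numbers come from the report), is:

```python
import numpy as np

# Hypothetical two-state example: total masses p_ij and constants c_ij
# as in Theorem 2.2, with spectral radius of P strictly below one.
P = np.array([[0.3, 0.4],
              [0.2, 0.3]])
C = np.array([[1.0, 2.0],
              [0.5, 1.0]])

assert max(abs(np.linalg.eigvals(P))) < 1     # the subcritical case spr(P) < 1

K = np.linalg.inv(np.eye(2) - P)              # k_ij = ((I - P)^{-1})_ij
M = P * C                                     # M = (p_kl * c_kl), entrywise
D = K @ M @ K                                 # d_ij = ((I - P)^{-1} M (I - P)^{-1})_ij

print("k_ij =\n", K)
print("d_ij =\n", D)
# In case i) of Theorem 2.2, for instance, Z_i(x) ~ sum_j I_j * d_ij * g(x).
```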

Theorem 2.3. Let $N_1,\ldots,N_d$ be non-negative integer-valued r.v.'s and
$$S\;=\;\sum_{i=1}^{d}\sum_{j=1}^{N_i}X_{ij},$$
where the $X_{ij}$ are independent given $N_1,\ldots,N_d$, with $X_{ij}$ having distribution $F_i$. Assume $\overline F_i(x)\sim c_i\overline F(x)$ for some $F\in\mathcal S$ and some $c_1,\ldots,c_d\ge0$, and that there are $\varepsilon_i>0$ such that $\mathbb E\bigl[(1+\varepsilon_i)^{N_i}\bigr]<\infty$ for all $i=1,\ldots,d$. Then
$$\mathbb P(S>x)\;\sim\;\bigl(c_1\mathbb E[N_1]+\cdots+c_d\mathbb E[N_d]\bigr)\,\overline F(x).$$

Note that $N_1,\ldots,N_d$ are not assumed independent. A local version of Theorem 2.3 is given below as Theorem 4.3.
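Theorem 2.3 can be probed by simulation. The sketch below is not from the report; the bounded random environment, the Pareto tails and all parameters are hypothetical choices. It makes $N_1,N_2$ dependent through a common mixing variable, in the spirit of the insurance example of Section 1, and compares the empirical tail of $S$ with $(c_1\mathbb E[N_1]+c_2\mathbb E[N_2])\overline F(x)$; the two columns agree up to a factor tending to one as $x$ grows.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: dependent claim counts via a common bounded environment
# and claim sizes with Pareto tails P(X_i > x) ~ c_i * x**(-alpha).
alpha = 1.5
lam = np.array([3.0, 2.0])        # conditional Poisson rates
c = np.array([1.0, 4.0])          # tail constants c_i
n_paths = 200_000

Lam = rng.uniform(0.5, 1.5, size=n_paths)     # common environment => dependent counts
N = rng.poisson(np.outer(Lam, lam))           # shape (n_paths, 2), light-tailed counts

def claim_sum(n, ci):
    # sum of n i.i.d. claims with tail ci * x**(-alpha) (a scaled Pareto variable)
    return (ci ** (1.0 / alpha) * (rng.pareto(alpha, size=n) + 1.0)).sum()

S = np.array([claim_sum(N[m, 0], c[0]) + claim_sum(N[m, 1], c[1])
              for m in range(n_paths)])

EN = lam                                      # E[N_i] = lam_i * E[Lambda] = lam_i here
for x in (100.0, 300.0):
    print(x, np.mean(S > x), (c * EN).sum() * x ** (-alpha))
```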

3 Random sums of subexponentials

A classical bound due to Kesten ([13], [1] p. 41) states that for a subexponential distribution $F$ and $\varepsilon>0$ there exists a $D_0=D_0(\varepsilon)>0$ such that
$$\overline{F^{*n}}(x)\;\le\;D_0(1+\varepsilon)^n\,\overline F(x)\qquad\text{for any }x\ge0\text{ and }n\in\mathbb N. \qquad\qquad (3.1)$$
The following result is a similar statement involving convolutions of the tails of multiple subexponential distribution functions.

Proposition 3.1. Let $F\in\mathcal S$, let $c_1,\ldots,c_d\ge0$ and let $F_1,\ldots,F_d$ be distributions with $\overline F_i(x)\sim c_i\overline F(x)$. Then for every $\varepsilon>0$ there exists a $D=D(\varepsilon)$ such that the following inequality holds:
$$\overline{F_1^{*n_1}*\cdots*F_d^{*n_d}}(x)\;\le\;D(1+\varepsilon)^n\,\overline F(x)\qquad\text{for all }x\ge0\text{ and }n_1,\ldots,n_d\in\mathbb N, \qquad\qquad (3.2)$$
where $n=n_1+\cdots+n_d$. Furthermore,
$$\overline{F_1^{*n_1}*\cdots*F_d^{*n_d}}(x)\;\sim\;(c_1n_1+\cdots+c_dn_d)\,\overline F(x). \qquad\qquad (3.3)$$

Proof. Define the distribution function $F_0$ by its tail,
$$\overline F_0(x)\;=\;\sup_{i=1,\ldots,d}\overline F_i(x)\;\sim\;\sup_{i=1,\ldots,d}c_i\overline F(x)\;=\;\max_{i=1,\ldots,d}\{c_i\}\,\overline F(x).$$
Notice that taking the supremum preserves the properties of being càdlàg, decreasing and having values in $[0,1]$. Since the tail of $F_0$ is the supremum of the tails of some c.d.f.'s, $F_0$ is indeed a c.d.f. itself. Let $k_0=\max_{i=1,\ldots,d}c_i$. The tail of $F_0$ is then asymptotically of the form $k_0\overline F(x)$, and since $\mathcal S$ is closed under tail equivalence we must have $F_0\in\mathcal S$. From the classical Kesten bound it follows that for every $\varepsilon>0$ there is a $D_0=D_0(\varepsilon)$ such that (3.1) holds for $F_0$. Now for $i=0,1,\ldots,d$ let $X_{i,1},\ldots,X_{i,n_i}\sim F_i$, where $n_0=n$. Then by construction we have the stochastic ordering $X_{i,j}\le_{\mathrm{st}}X_{0,k}$ for all $i=1,\ldots,d$, $j=1,\ldots,n_i$, $k=1,\ldots,n$, which implies that the sums satisfy the corresponding stochastic ordering
$$X_{1,1}+\cdots+X_{1,n_1}+\cdots+X_{d,1}+\cdots+X_{d,n_d}\;\le_{\mathrm{st}}\;X_{0,1}+\cdots+X_{0,n}.$$
Written in terms of the tails of the convolutions of the distribution functions, this means that
$$\overline{F_1^{*n_1}*\cdots*F_d^{*n_d}}(x)\;\le\;\overline{F_0^{*n}}(x).$$
By (3.1) it is therefore sufficient to show that there exists a $D=D(\varepsilon)$ such that $D_0\overline F_0(x)\le D\overline F(x)$. From the definition of $F_0$ and $\overline F_i(x)\sim c_i\overline F(x)$ it follows that whenever $K>\max_{i=1,\ldots,d}c_i$, there exists an $x_0$ such that $\overline F_0(x)/\overline F(x)\le K$ for all $x>x_0$. Replacing $K$ by a larger constant if necessary, we may assume this holds for all $x$. Thus
$$D_0\,\overline F_0(x)\;=\;D_0\,\frac{\overline F_0(x)}{\overline F(x)}\,\overline F(x)\;\le\;D_0\,\overline F(x)\,\sup_{x\ge0}\frac{\overline F_0(x)}{\overline F(x)}\;\le\;D_0\Bigl(\sup_{x\le x_0}\frac{\overline F_0(x)}{\overline F(x)}+\sup_{x>x_0}\frac{\overline F_0(x)}{\overline F(x)}\Bigr)\overline F(x)\;\le\;D_0\bigl(K+2\max_{i=1,\ldots,d}c_i\bigr)\overline F(x)\;=\;D\,\overline F(x)$$
with $D=D_0\bigl(K+2\max_{i=1,\ldots,d}c_i\bigr)$.

The asymptotics in (3.3) now follow easily. To this end, just notice that $\overline{F_i^{*n_i}}(x)\sim n_ic_i\overline F(x)$ for all $i$ by standard subexponential theory, and proceed by induction, using that $\overline{G_1*G_2}(x)\sim(\gamma_1+\gamma_2)\overline F(x)$ if $\overline{G_i}(x)\sim\gamma_i\overline F(x)$ for $i=1,2$ and $F\in\mathcal S$.

Kesten's bound is commonly used as a majorant in dominated convergence to find the asymptotics of a randomly stopped sum of i.i.d. subexponential random variables. Proposition 3.1 can be used correspondingly in the multidimensional case, which we use to give the proof of Theorem 2.3.

Proof. The law of total probability yields
$$\frac{\mathbb P(S>x)}{\overline F(x)}\;=\;\sum_{n_1,\ldots,n_d}\mathbb P(N_1=n_1,\ldots,N_d=n_d)\,\frac{\overline{F_1^{*n_1}*\cdots*F_d^{*n_d}}(x)}{\overline F(x)}\;\longrightarrow\;\sum_{n_1,\ldots,n_d}\mathbb P(N_1=n_1,\ldots,N_d=n_d)\,(c_1n_1+\cdots+c_dn_d)\;=\;c_1\mathbb E[N_1]+\cdots+c_d\mathbb E[N_d],$$
using Proposition 3.1 and dominated convergence; this is justified since Hölder's inequality implies that $\mathbb E\bigl[(1+\varepsilon)^{N_1+\cdots+N_d}\bigr]<\infty$ if we take (say) $1+\varepsilon=\min\{1+\varepsilon_1,\ldots,1+\varepsilon_d\}^{1/d}$.

4 Random sums of local subexponentials

The objective of the following lemma is to obtain an upper bound for the convolution of local subexponential distribution functions, which is needed to extend the local version of Kesten's bound, stated as Proposition 4 in [9], to the convolution of several local subexponential distribution functions.

Lemma 4.1. Let $H\in\mathcal S_\Delta$. Assume that $G_i$, $i=1,2$, are distributions satisfying $G_i(x+\Delta)\le b_iH(x+\Delta)$ for all $x\ge x_0$, for some $b_i\ge1$ and $x_0$. Then for some constant $a$ depending only on $H$ it holds that
$$G_1*G_2(x+\Delta)\;\le\;a\,b_1b_2\,H(x+\Delta)\qquad\text{for all }x\ge x_0. \qquad\qquad (4.1)$$

Proof. Define
$$a_1\;=\;\sup_{u,v\ge x_0:\,|u-v|\le1}\frac{H(u+\Delta)}{H(v+\Delta)}$$
and notice that $a_1$ is finite for sufficiently large $x_0$, since $H$ is $\Delta$-subexponential and therefore in particular $\Delta$-long-tailed. We have the representation
$$G_1*G_2(x+\Delta)\;=\;\int_0^xG_2(x-y+\Delta)\,G_1(dy)\;+\;\int_x^{x+T}G_2(0,x+T-y]\,G_1(dy)\;\equiv\;P_1(x)+P_2(x).$$
Consider the first term. Let $k$ be the smallest integer such that $x/k\le1$ and $x/k\le T$, and split the interval $(0,x]$ into $k$ disjoint, equally sized parts. For $x\ge x_0$ the first term can then be written as a sum and estimated as follows:
$$P_1(x)\;=\;\sum_{i=0}^{k-1}\int_{xi/k}^{x(i+1)/k}G_2(x-y+\Delta)\,G_1(dy)\;\le\;b_2\sum_{i=0}^{k-1}\int_{xi/k}^{x(i+1)/k}H(x-y+\Delta)\,G_1(dy)\;\le\;a_1b_2\sum_{i=0}^{k-1}\int_{xi/k}^{x(i+1)/k}H(x-xi/k+\Delta)\,G_1(dy)$$
$$=\;a_1b_2\sum_{i=0}^{k-1}H(x-xi/k+\Delta)\,G_1(xi/k,x(i+1)/k]\;\le\;a_1b_1b_2\sum_{i=0}^{k-1}H(x-xi/k+\Delta)\,H(xi/k+\Delta),$$
where the summands can be estimated backwards:
$$\int_{xi/k}^{x(i+1)/k}H(x-y+\Delta)\,H(dy)\;\ge\;\frac1{a_1}\int_{xi/k}^{x(i+1)/k}H(x-xi/k+\Delta)\,H(dy)\;=\;\frac1{a_1}\,H(x-xi/k+\Delta)\,H(xi/k+\Delta).$$
Inserting this upper bound for the summands yields the inequality
$$P_1(x)\;\le\;a_1^2b_1b_2\sum_{i=0}^{k-1}\int_{xi/k}^{x(i+1)/k}H(x-y+\Delta)\,H(dy)\;=\;a_1^2b_1b_2\int_0^xH(x-y+\Delta)\,H(dy)\;\le\;a_1^2b_1b_2\,H^{*2}(x+\Delta).$$

Since $H$ is $\Delta$-subexponential, there must be a finite $\delta$ such that $H^{*2}(x+\Delta)\le(2+\delta)H(x+\Delta)$ for all $x\ge x_0$ if $x_0$ is sufficiently large. This provides the final inequality for the first term,
$$P_1(x)\;\le\;(2+\delta)\,a_1^2\,b_1b_2\,H(x+\Delta).$$
Now consider the second term. It follows directly that
$$P_2(x)\;\le\;\int_x^{x+T}G_1(dy)\;\le\;b_1H(x+\Delta)\;\le\;b_1b_2H(x+\Delta).$$
Altogether we now have
$$G_1*G_2(x+\Delta)\;\le\;\bigl((2+\delta)a_1^2+1\bigr)\,b_1b_2\,H(x+\Delta),$$
so we can take $a=(2+\delta)a_1^2+1$.

We can use this to obtain a local version of Proposition 3.1.

Proposition 4.2. Let $F\in\mathcal S_\Delta$ for some $\Delta$, and $F_i(x+\Delta)\sim c_iF(x+\Delta)$ for $c_1,\ldots,c_d\ge0$. Then for every $\varepsilon>0$ there exist a $D=D(\varepsilon)$ and an $x_0=x_0(\varepsilon)$ such that the following inequality holds:
$$F_1^{*n_1}*\cdots*F_d^{*n_d}(x+\Delta)\;\le\;D(1+\varepsilon)^n\,F(x+\Delta) \qquad\qquad (4.2)$$
for all $x\ge x_0$ and $n_1,\ldots,n_d\in\mathbb N$, where $n=n_1+\cdots+n_d$. Furthermore,
$$F_1^{*n_1}*\cdots*F_d^{*n_d}(x+\Delta)\;\sim\;(c_1n_1+\cdots+c_dn_d)\,F(x+\Delta).$$

Proof. Let $\varepsilon>0$ be given. Notice that from the local version of Kesten's bound (Proposition 4 in [9]) it follows that for $i=1,\ldots,d$ there is a $V_i=V_i(\varepsilon)$ such that $F_i^{*n_i}(x+\Delta)\le V_i(1+\varepsilon)^{n_i}F(x+\Delta)$. The bound (4.2) can then be proven by induction using Lemma 4.1. For $d=2$ it follows directly from the lemma that there is an $a$ such that
$$F_1^{*n_1}*F_2^{*n_2}(x+\Delta)\;\le\;aV_1V_2(1+\varepsilon)^{n_1+n_2}\,F(x+\Delta).$$
Letting $D=aV_1V_2$ we have the desired inequality. Assume that (4.2) holds for $d$ distribution functions with constant $D$, and consider the case with $d+1$ distribution functions. Then we have
$$F_1^{*n_1}*\cdots*F_{d+1}^{*n_{d+1}}(x+\Delta)\;=\;\bigl(F_1^{*n_1}*\cdots*F_d^{*n_d}\bigr)*F_{d+1}^{*n_{d+1}}(x+\Delta)\;\le\;aDV_{d+1}(1+\varepsilon)^{n_1+\cdots+n_d}(1+\varepsilon)^{n_{d+1}}F(x+\Delta)\;=\;aDV_{d+1}(1+\varepsilon)^{n_1+\cdots+n_{d+1}}F(x+\Delta).$$
From Corollary 2 in [9] it follows that $F_i^{*n_i}(x+\Delta)\sim n_ic_iF(x+\Delta)$ for $i=1,\ldots,d$, and using Proposition 3 in [9] one can further deduce that
$$F_1^{*n_1}*\cdots*F_d^{*n_d}(x+\Delta)\;\sim\;(n_1c_1+\cdots+n_dc_d)\,F(x+\Delta).$$

Theorem 4.3. Assume, in addition to the assumptions of Theorem 2.3, that $F\in\mathcal S_\Delta$ for some $\Delta$ and that $F_i(x+\Delta)\sim c_iF(x+\Delta)$ for $c_1,\ldots,c_d\ge0$. Then
$$F_1^{*N_1}*\cdots*F_d^{*N_d}(x+\Delta)\;\sim\;\bigl(c_1\mathbb E[N_1]+\cdots+c_d\mathbb E[N_d]\bigr)\,F(x+\Delta).$$

Proof. Use dominated convergence, justified by Proposition 4.2.

5 Random sums with subexponential densities

In [9] it is shown that when $F\in\mathcal S_{ac}$ then, for any $\varepsilon>0$, there exist $x_0=x_0(\varepsilon)$ and $V=V(\varepsilon)$ such that $f^{*n}(x)\le V(1+\varepsilon)^nf(x)$ for any $x\ge x_0$ and $n\in\mathbb N$. We now seek to obtain a version of Theorem 2.3 involving densities instead. To do this we need an upper bound for convolutions, as in the local subexponential case.

Lemma 5.1. Let $F\in\mathcal S_{ac}$. Let $f_1$ and $f_2$ be densities with the property $f_i(x)\le b_if(x)$, with $b_i>1$, for $i=1,2$ and all $x\ge x_0$. Then there is a constant $A$, independent of $f_1$ and $f_2$, such that $f_1*f_2(x)\le Ab_1b_2f(x)$ for all $x\ge x_0$.

Proof. Since $f$ is long-tailed, $f(x)>0$ for all $x\ge x_0$ when $x_0$ is sufficiently large; hence
$$a\;=\;\sup_{x\ge2x_0}\;\sup_{y\in(0,x_0)}\frac{f(x-y)}{f(x)}$$
is finite. Consider the partition
$$f_1*f_2(x)\;=\;\int_0^xf_1(x-y)f_2(y)\,dy\;=\;\int_0^{x_0}f_1(x-y)f_2(y)\,dy+\int_{x_0}^{x-x_0}f_1(x-y)f_2(y)\,dy+\int_{x-x_0}^xf_1(x-y)f_2(y)\,dy\;\equiv\;I_1+I_2+I_3.$$
Each term is now assessed individually, starting with the first: for $x\ge2x_0$,
$$I_1\;\le\;b_1\int_0^{x_0}f(x-y)f_2(y)\,dy\;\le\;ab_1f(x)\int_0^{x_0}f_2(y)\,dy\;\le\;ab_1b_2f(x).$$
Analogously for the third term,
$$I_3\;=\;\int_0^{x_0}f_2(x-y)f_1(y)\,dy\;\le\;ab_1b_2f(x).$$
Only the second term is left to be evaluated. For $x\ge2x_0$ it follows that
$$I_2\;\le\;b_1b_2\int_{x_0}^{x-x_0}f(x-y)f(y)\,dy\;\le\;b_1b_2\int_0^xf(x-y)f(y)\,dy\;=\;b_1b_2\,f^{*2}(x).$$
Since $f$ is a subexponential density, if $x_0$ is sufficiently large there must be a $\delta>0$ such that $f^{*2}(x)\le(2+\delta)f(x)$ for all $x\ge x_0$. Hence $I_2\le b_1b_2(2+\delta)f(x)$. To conclude the proof, let $A=2a+2+\delta$.
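The membership $F\in\mathcal S_{ac}$ used throughout this section can be illustrated numerically for a concrete density. The sketch below is not from the report; the Pareto density and the value $\alpha=2.5$ are arbitrary choices. It computes $f^{*2}(x)/f(x)$ by quadrature; the ratio tends to 2, which is the defining property of a subexponential density.

```python
from scipy.integrate import quad

alpha = 2.5
f = lambda y: alpha * y ** (-alpha - 1) if y >= 1 else 0.0   # Pareto density on [1, inf)

def f_conv2(x):
    # (f * f)(x) = int f(x - y) f(y) dy; the integrand vanishes outside [1, x - 1]
    val, _ = quad(lambda y: f(x - y) * f(y), 1.0, x - 1.0)
    return val

for x in (10.0, 100.0, 1000.0):
    print(x, f_conv2(x) / f(x))    # approaches 2 as x grows
```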

Theorem 5.2. Let $F\in\mathcal S_{ac}$ and let $f_1,\ldots,f_d$ be densities with $f_i(x)\sim c_if(x)$ for some $c_i\ge0$, $i=1,\ldots,d$. Then for any $\varepsilon>0$ there are $D=D(\varepsilon)$ and $x_0=x_0(\varepsilon)$ such that
$$f_1^{*n_1}*\cdots*f_d^{*n_d}(x)\;\le\;D(1+\varepsilon)^n\,f(x)$$
for any $x\ge x_0$ and $n_1,\ldots,n_d\in\mathbb N$. Furthermore,
$$f_1^{*n_1}*\cdots*f_d^{*n_d}(x)\;\sim\;(n_1c_1+\cdots+n_dc_d)\,f(x).$$

Proof. Analogous to Proposition 4.2.

6 Proof of Theorem 2.2

Recall that $P=(p_{ij})_{i,j\in E}$, where the entries are defined by $p_{ij}=\|F_{ij}\|$. Let $\lambda=\mathrm{spr}(P)$ be the Perron-Frobenius root of $P$ and $v$ the corresponding right eigenvector. Hence $P^nv=\lambda^nv$, which by strict positivity of $v$ implies that the $ij$-th element $p^{(n)}_{ij}$ of $P^n$ decays at rate $\lambda^n$.

We will first treat the case where, in addition to $\lambda<1$, the matrix $P$ is substochastic. This means that the row sums satisfy $\sum_jp_{ij}\le1$, with strict inequality for at least one $i$. Introduce an irreducible absorbing Markov chain $(\xi_n)_{n\in\mathbb N_0}$ with state space $E\cup\{\partial\}$, where $\partial$ is the coffin state. Let $Q=(q_{ij})_{i,j\in E\cup\{\partial\}}$ be the transition matrix with $q_{ij}=p_{ij}$ for $i,j\in E$, $q_{\partial i}=0$, $q_{\partial\partial}=1$, and $q_{i\partial}=1-\sum_{j\in E}p_{ij}$. Recall that there must be at least one index $i$ such that $q_{i\partial}>0$. Then $N=\inf\{n:\,\xi_n=\partial\}$ is the time until absorption. Let $T_n$ denote the waiting time between jump $n$ and $n+1$. Then $T_0,T_1,\ldots$ are conditionally independent given $\mathcal F=\sigma(\xi_0,\xi_1,\ldots)$, with conditional distribution function satisfying
$$G_{ij}(t)\;=\;\mathbb P(T_n\le t\mid\mathcal F)\;=\;\mathbb P(T_n\le t\mid\xi_n,\xi_{n+1})$$
on $\{\xi_n=i,\,\xi_{n+1}=j\}$, and we have
$$F_{ij}(t)\;=\;\mathbb P(\xi_{n+1}=j,\,T_n\le t\mid\xi_n=i)\;=\;q_{ij}G_{ij}(t).$$
Henceforth we will only consider $i,j\ne\partial$; thus $F_{ij}(t)=p_{ij}G_{ij}(t)$.

The solution of the Markov renewal equation (1.1) is given by Proposition 4.4 in [8] as $Z_i(x)=\sum_{j\in E}Z_{ij}(x)$, where
$$Z_{ij}(x)\;=\;\int_0^xz_j(x-y)\,U_{ij}(dy), \qquad\qquad (6.1)$$
with the Markov renewal kernel $U_{ij}(t)$ being the expected number of returns to state $j\in E$ before time $t$ given that the Markov chain starts in state $i\in E$. That is,
$$U_{ij}(t)\;=\;\sum_{n=0}^\infty(F^{*n})_{ij}(t)\;=\;\sum_{n=0}^\infty\mathbb P_i(\xi_n=j,\,S_{n-1}\le t),$$
where $S_{n-1}=T_0+\cdots+T_{n-1}$. First, it is necessary to consider the local asymptotics of $U_{ij}$.
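The Markov renewal kernel $U_{ij}$ has a direct simulation interpretation: run the killed chain from $i$ and count visits to $j$ whose jump epochs fall before $t$. The sketch below uses a hypothetical two-state kernel with exponential waiting times; none of it is taken from the report, and the heavy-tailed case would only change the waiting-time sampler. It estimates $U_{ij}(t)$ and, for large $t$, recovers $k_{ij}=((I-P)^{-1})_{ij}$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical substochastic kernel F_ij = p_ij * G_ij: two states,
# exponential waiting times, killing probability 1 - sum_j p_ij in each state.
P = np.array([[0.3, 0.4],
              [0.2, 0.3]])

def U_estimate(i, j, t, n_runs=100_000):
    """Monte Carlo estimate of U_ij(t): expected number of visits to j whose
    jump time is at most t, counting the visit at time 0 (the n = 0 term)."""
    total = 0
    for _ in range(n_runs):
        state, clock = i, 0.0
        total += (state == j)                       # n = 0 term
        while True:
            u = rng.random()
            cum = np.cumsum(P[state])
            if u >= cum[-1]:                        # absorbed in the coffin state
                break
            state = int(np.searchsorted(cum, u, side="right"))
            clock += rng.exponential(1.0)           # waiting time G_ij = Exp(1)
            if clock > t:
                break
            total += (state == j)
    return total / n_runs

print("simulated U_01(10) :", U_estimate(0, 1, t=10.0))
print("k_01 = (I-P)^-1_01 :", np.linalg.inv(np.eye(2) - P)[0, 1])   # limit as t -> inf
```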

Lemma 6.1. Under the assumptions of Theorem 2.2,
$$U_{ij}(t+\Delta)\;\sim\;d_{ij}\,G(t+\Delta). \qquad\qquad (6.2)$$

Proof. Notice that the $n$-th convolution of the semi-Markov kernel can also be defined locally,
$$(F^{*n})_{ij}(t+\Delta)\;=\;\mathbb P_i(\xi_n=j,\,S_{n-1}\in t+\Delta).$$
Rewriting the right-hand side using conditional expectations yields
$$\mathbb P_i(\xi_n=j,\,S_{n-1}\in t+\Delta)\;=\;\mathbb P_i(\xi_n=j,\,T_0+\cdots+T_{n-1}\in t+\Delta)\;=\;p^{(n)}_{ij}\,\mathbb P(T_0+\cdots+T_{n-1}\in t+\Delta\mid\xi_0=i,\,\xi_n=j)$$
$$=\;p^{(n)}_{ij}\,\mathbb E\Bigl[\mathbb P\bigl(T_0+\cdots+T_{n-1}\in t+\Delta\bigm|\xi_0=i,\xi_1,\ldots,\xi_{n-1},\xi_n=j\bigr)\Bigm|\xi_0=i,\,\xi_n=j\Bigr]\;=\;p^{(n)}_{ij}\,\mathbb E\bigl[G^{(n)}_{ij}(t+\Delta)\bigr],$$
where $G^{(n)}_{ij}$ is the distribution of the sum $T_0+\cdots+T_{n-1}$ conditioned on the Markov states $\xi_0=i,\xi_1,\ldots,\xi_{n-1},\xi_n=j$. Let $N^{(n)}_{kl}$ be the random variable counting the number of jumps from state $k\in E$ to state $l\in E$ before the $n$-th jump, that is,
$$N^{(n)}_{kl}\;=\;\sum_{m=0}^{n-1}\mathbf 1\{\xi_m=k,\,\xi_{m+1}=l\}.$$
Correspondingly, let $N^{(n)}_{kl\mid ij}$ be the random variable representing the number of jumps from $k\in E$ to $l\in E$ before jump $n$ given that $\xi_0=i$ and $\xi_n=j$, i.e. it is distributed as $N^{(n)}_{kl}$ conditioned on $\xi_0=i$, $\xi_n=j$. This has expected value
$$\mathbb E\bigl[N^{(n)}_{kl\mid ij}\bigr]\;=\;\mathbb E\bigl[N^{(n)}_{kl}\bigm|\xi_0=i,\,\xi_n=j\bigr]\;=\;\sum_{m=0}^{n-1}\frac{\mathbb P(\xi_0=i,\,\xi_m=k,\,\xi_{m+1}=l,\,\xi_n=j)}{\mathbb P(\xi_0=i,\,\xi_n=j)}\;=\;\frac{\sum_{m=0}^{n-1}p^{(m)}_{ik}\,p_{kl}\,p^{(n-m-1)}_{lj}}{p^{(n)}_{ij}}\;=\;\sum_{m=0}^{n-1}\frac{(P^m)_{ik}\,p_{kl}\,(P^{n-m-1})_{lj}}{(P^n)_{ij}}.$$
Recall that $G^{(n)}_{ij}$ is the distribution of the sum of the random variables $T_0,T_1,\ldots,T_{n-1}$ conditioned on the Markov chain up to the $n$-th jump. The distribution of $T_m$ depends only on $\xi_m$ and $\xi_{m+1}$. Therefore, $G^{(n)}_{ij}$ can be interpreted as the random convolution
$$G^{(n)}_{ij}(t+\Delta)\;=\;\mathop{\ast}_{k,l\in E}G_{kl}^{*N^{(n)}_{kl\mid ij}}(t+\Delta).$$
Applying Theorem 4.3, the local asymptotics of $G^{(n)}_{ij}$ can then be specified as
$$\mathbb E\Bigl[\mathop{\ast}_{k,l\in E}G_{kl}^{*N^{(n)}_{kl\mid ij}}(t+\Delta)\Bigr]\;\sim\;\Bigl(\sum_{k,l\in E}\mathbb E\bigl[N^{(n)}_{kl\mid ij}\bigr]\,c_{kl}\Bigr)\,G(t+\Delta).$$

This asymptotic specification transfers to the semi-Markov kernel as follows,
$$(F^{*n})_{ij}(t+\Delta)\;\sim\;\sum_{k,l\in E}\sum_{m=0}^{n-1}(P^m)_{ik}\,p_{kl}\,(P^{n-m-1})_{lj}\,c_{kl}\;G(t+\Delta),$$
and on to the Markov renewal kernel,
$$U_{ij}(t+\Delta)\;\sim\;\sum_{n=0}^\infty\sum_{k,l\in E}\sum_{m=0}^{n-1}(P^m)_{ik}\,p_{kl}\,(P^{n-m-1})_{lj}\,c_{kl}\;G(t+\Delta)\;=\;\sum_{k,l\in E}\sum_{m=0}^\infty\sum_{n=m+1}^\infty(P^m)_{ik}\,p_{kl}\,(P^{n-m-1})_{lj}\,c_{kl}\;G(t+\Delta).$$
Since $P$ has spectral radius strictly less than 1, the infinite series above converge, with limits
$$\sum_{n=m+1}^\infty(P^{n-m-1})_{lj}\;=\;\sum_{k=0}^\infty(P^k)_{lj}\;=\;(I-P)^{-1}_{lj},\qquad\sum_{m=0}^\infty(P^m)_{ik}\;=\;(I-P)^{-1}_{ik}.$$
Thus
$$U_{ij}(t+\Delta)\;\sim\;\sum_{k,l\in E}(I-P)^{-1}_{ik}\,p_{kl}c_{kl}\,(I-P)^{-1}_{lj}\;G(t+\Delta)\;=\;d_{ij}\,G(t+\Delta),$$
concluding the proof.

For the proof of Theorem 2.2, it suffices in view of (6.1) to find the asymptotics of the summands $Z_{ij}(x)=\int_0^xz_j(x-y)\,U_{ij}(dy)$ in the solution of the Markov renewal equation. As an introductory remark, notice that $G\in\mathcal S_\Delta$ for all $T>0$ has the implications
$$G(x,x+1/n]\;\sim\;\frac{g(x)}{n}\quad\text{for all }n,\qquad\text{and}\qquad\frac{g(x+y)}{g(x)}\;\to\;1\quad\text{for all }|y|<y_0\text{, for some appropriate }y_0.$$
For a suitable $A<x/2$, decompose $Z_{ij}(x)$ into three parts, namely
$$Z_{ij}(x)\;=\;\Bigl(\int_0^A+\int_A^{x-A}+\int_{x-A}^x\Bigr)z_j(x-y)\,U_{ij}(dy)\;\equiv\;J_1(x,A)+J_2(x,A)+J_3(x,A).$$
We now consider and evaluate the three parts separately. In case i) of the theorem we have
$$J_1(x,A)\;=\;\int_0^Az_j(x-y)\,U_{ij}(dy)\;=\;g(x)\int_0^A\frac{z_j(x-y)}{g(x-y)}\,\frac{g(x-y)}{g(x)}\,U_{ij}(dy)\;=\;o(g(x))$$

15 when x. Also J 2 (x, A) = A A = o(1) z j (x y)u ij (dy) = A From this we can conclude A lim lim J 2 (x, A) A x g(x) A g(x y)u ij (dy) = o(g(x)) = lim A lim x A z j (x y) g(x y) g(x y)u ij(dy) A A g(x y) U ij (dy). g(x) o(g(x)) (U(x A) U(A)) =. g(x) Finally, consider a finite partition of the interval (x A, x] into n equally sized intervals. A is now assumed to be an integer. Furthermore, let z n (x) = sup y x 1 n z(y). Then we can bound J 3 from above by the upper Riemann sum J 3 (x, A) = x A An 1 d ij k= g(x) d ij n z j (x y)u ij (dy) z n ( k n An 1 k= An 1 ( k z n n k= ) ( G x k + 1 n, x k n ( ) k z n g(x) d ij n n k= ) U ij ( ) z n ( k n ) x k + 1 n, x k ) n as x and A. Since z is assumed to be directly Riemann integrable, we have 1 ( ) k z n I j n n k= as n tends to infinity. So we have now obtained J 3 (x, A) lim sup lim sup A x g(x) d ij I j. Using the same approach with lim inf a similar bound from below is obtained using lower Riemann sums. This finishes case i). Now consider case ii). The first of the decomposed parts has the asymptotics J 1 (x, A) = A z j (x y)u ij (dy) a j g(x) a j g(x)u ij (A) A g(x y) U ij (dy) g(x) which leads to lim lim J 1 (x, A) A x g(x) = lim A lim x a ju ij (A) = k ij a j. 13

The second part in case ii) can also be evaluated as follows:
$$J_2(x,A)\;=\;\int_A^{x-A}z_j(x-y)\,U_{ij}(dy)\;=\;\int_A^{x-A}\frac{z_j(x-y)}{g(x-y)}\,g(x-y)\,U_{ij}(dy)\;=\;O(1)\int_A^{x-A}g(x-y)\,U_{ij}(dy).$$
Notice the similarity to $J_2$ in case i); corresponding calculations show that $J_2(x,A)=o(g(x))$. The term $J_3$ is also similar to case i), which concludes case ii).

In case iii) we can no longer use the decomposition considered so far. Instead, let $K_j$ denote the probability measure with density $z_j(x)/I_j$; recall that $K_j\in\mathcal S_{ac}$. Now let $\Delta=(0,1]$ and consider the new decomposition
$$\int_0^xz_j(x-y)\,U_{ij}(dy)\;=\;\int_0^{x-A}z_j(x-y)\,U_{ij}(dy)\;+\;\int_{x-A}^xz_j(x-y)\,U_{ij}(dy)\;\equiv\;I_1(x,A)+I_2(x,A),$$
where the second term satisfies
$$I_2(x,A)\;\le\;\sup_{y\le A}z_j(y)\;U_{ij}(x-A,x]\;=\;o\bigl(z_j(x)\bigr).$$
Letting $A$ tend to infinity at a slower rate than $x$ (recall that we have chosen $A$ to be less than $x/2$) preserves this estimate. Now consider a corresponding decomposition of the convolution $K_j*U_{ij}$,
$$(K_j*U_{ij})(x+\Delta)\;=\;\int_0^xK_j(x-y+\Delta)\,U_{ij}(dy)\;=\;\int_0^{x-A}K_j(x-y+\Delta)\,U_{ij}(dy)\;+\;\int_{x-A}^xK_j(x-y+\Delta)\,U_{ij}(dy)\;\equiv\;I_1'(x,A)+I_2'(x,A).$$
As for $I_2$ we can evaluate $I_2'(x,A)=o(z_j(x))$. Since $z_j$ is long-tailed we have $z_j(x)\sim I_jK_j(x+\Delta)$ for $\Delta=(0,1]$, and therefore $I_1(x,A)\sim I_j\,I_1'(x,A)$. This yields
$$\int_0^xz_j(x-y)\,U_{ij}(dy)\;\sim\;I_j\,(K_j*U_{ij})(x+\Delta).$$
Due to the assumption $z_j(x)/g(x)\to\infty$, the use of Proposition 3 in [9] with $c_1=1$ and $c_2=0$ gives
$$I_j\,(K_j*U_{ij})(x+\Delta)\;\sim\;k_{ij}\,I_j\,K_j(x+\Delta)\;\sim\;k_{ij}\,z_j(x),$$
which concludes the proof of Theorem 2.2 in the substochastic case.

If, instead of being substochastic, $P$ merely satisfies $\mathrm{spr}(P)<1$, we define the measures $\widetilde F_{ij}(dx)=F_{ij}(dx)\,v_i/v_j$ for $i,j\in E$ and let $\widetilde P$ be the matrix with elements $\|\widetilde F_{ij}\|=\|F_{ij}\|\,v_i/v_j$. Then $\widetilde P$ is a substochastic matrix with row sums $\lambda$ and therefore it must also have spectral radius less than one.
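The substochasticity produced by this tilting is easy to verify numerically. The sketch below uses a hypothetical kernel and fixes one concrete convention, taking $v$ as the componentwise reciprocal of the right Perron-Frobenius eigenvector, so that the tilted masses $p_{ij}v_i/v_j$ have all row sums equal to $\mathrm{spr}(P)$; the normalisation of $v$ is an assumption of the sketch, not a statement about the report's notation.

```python
import numpy as np

# Hypothetical irreducible kernel with spr(P) < 1.
P = np.array([[0.3, 0.4],
              [0.2, 0.3]])

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)
lam = eigvals[k].real                    # Perron-Frobenius root spr(P)
u = np.abs(eigvecs[:, k].real)           # right PF eigenvector, strictly positive

# Convention of this sketch: v_i = 1/u_i, so that the tilted masses
# p~_ij = p_ij * v_i / v_j have every row sum equal to lam < 1,
# i.e. the tilted kernel is substochastic.
v = 1.0 / u
P_tilde = P * v[:, None] / v[None, :]

print("spr(P)                    :", lam)
print("row sums of tilted kernel :", P_tilde.sum(axis=1))   # both equal to lam
```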

Letting $\widetilde Z_i(x)=v_iZ_i(x)$ and $\widetilde z_i(x)=v_iz_i(x)$, another renewal equation arises,
$$\widetilde Z_i(x)\;=\;\widetilde z_i(x)+\sum_{j\in E}\int_0^x\widetilde Z_j(x-y)\,\widetilde F_{ij}(dy),$$
which has the same properties as analysed previously in the substochastic case, with Markov renewal kernel $\widetilde U_{ij}=U_{ij}\,v_i/v_j$. What we have shown already can therefore be applied with coefficients $\widetilde a_j$, $\widetilde d_{ij}$, $\widetilde k_{ij}$ and $\widetilde I_j$. The asymptotic assumption on the $G_{ij}$ stated in Theorem 2.2 implies that
$$\widetilde G_{ij}(t+\Delta)\;=\;v_i\,G_{ij}(t+\Delta)/v_j\;\sim\;\widetilde c_{ij}\,G(t+\Delta),\qquad\text{where }\widetilde c_{ij}=c_{ij}\,v_i/v_j,$$
and the same relation transfers to $\widetilde d_{ij}=d_{ij}\,v_i/v_j$. Correspondingly, $\widetilde k_{ij}=\widetilde U_{ij}(0,\infty)=U_{ij}(0,\infty)\,v_i/v_j=k_{ij}\,v_i/v_j$. Finally, $\widetilde I_j=\int_0^\infty\widetilde z_j(y)\,dy=v_j\int_0^\infty z_j(y)\,dy=v_jI_j$ and $\widetilde a_j=v_ja_j$. Inserting these into the asymptotics of $\widetilde Z_i$ gives the same result as Theorem 2.2. For example, in case ii) the substochastic result gives
$$\widetilde Z_i(x)\;\sim\;\sum_{j\in E}\bigl(\widetilde I_j\widetilde d_{ij}+\widetilde a_j\widetilde k_{ij}\bigr)\,g(x),$$
which translates to
$$Z_i(x)\;=\;\frac1{v_i}\,\widetilde Z_i(x)\;\sim\;\frac1{v_i}\sum_{j\in E}\Bigl(v_jI_j\,\frac{v_i}{v_j}\,d_{ij}+v_ja_j\,\frac{v_i}{v_j}\,k_{ij}\Bigr)g(x)\;=\;\sum_{j\in E}\bigl(I_jd_{ij}+a_jk_{ij}\bigr)\,g(x).$$

References

[1] P. Embrechts, C. Klüppelberg, and T. Mikosch, Modelling Extremal Events for Insurance and Finance, vol. 33. Springer, 1997.

[2] R. Adler, R. Feldman, and M. Taqqu, A Practical Guide to Heavy Tails. Birkhäuser, Basel, 1998.

[3] R. Barakat, "Sums of independent lognormally distributed random variables", JOSA, vol. 66, no. 3, 1976.

[4] X. Cao, Inference for Age-dependent Branching Process and Their Applications. PhD thesis, George Mason University, 2015.

[5] S. I. Resnick, Heavy-tail Phenomena: Probabilistic and Statistical Modeling. Springer Science & Business Media, 2007.

[6] P. Jagers, Branching Processes with Biological Applications. Wiley, 1975.

[7] W. Feller, An Introduction to Probability Theory and Its Applications, Volume II. John Wiley & Sons, 2nd ed., 1971.

[8] S. Asmussen, Applied Probability and Queues. Springer, 2nd ed., 2003.

[9] S. Asmussen, S. Foss, and D. Korshunov, "Asymptotics for sums of random variables with local subexponential behaviour", Journal of Theoretical Probability, vol. 16, no. 2, 2003.

[10] K. Wang, Y. Chen, and Z. Tan, "Asymptotics for the solutions to defective renewal equations", Abstract and Applied Analysis, vol. 2014, pp. 1-5, 2014.

[11] S. Asmussen, L. Lipsky, and S. Thompson, "Markov renewal methods in restart problems in complex systems", in The Fascination of Probability, Statistics and their Applications, Springer, 2016.

[12] S. Foss, D. Korshunov, and S. Zachary, An Introduction to Heavy-tailed and Subexponential Distributions. Springer, 2011.

[13] K. B. Athreya and P. E. Ney, Branching Processes. Springer, 1972.


More information

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 2, pp. 401 412 (2013) http://campus.mst.edu/adsa Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes

More information

C.7. Numerical series. Pag. 147 Proof of the converging criteria for series. Theorem 5.29 (Comparison test) Let a k and b k be positive-term series

C.7. Numerical series. Pag. 147 Proof of the converging criteria for series. Theorem 5.29 (Comparison test) Let a k and b k be positive-term series C.7 Numerical series Pag. 147 Proof of the converging criteria for series Theorem 5.29 (Comparison test) Let and be positive-term series such that 0, for any k 0. i) If the series converges, then also

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

Linear-fractional branching processes with countably many types

Linear-fractional branching processes with countably many types Branching processes and and their applications Badajoz, April 11-13, 2012 Serik Sagitov Chalmers University and University of Gothenburg Linear-fractional branching processes with countably many types

More information

arxiv: v1 [math.pr] 9 May 2014

arxiv: v1 [math.pr] 9 May 2014 On asymptotic scales of independently stopped random sums Jaakko Lehtomaa arxiv:1405.2239v1 [math.pr] 9 May 2014 May 12, 2014 Abstract We study randomly stopped sums via their asymptotic scales. First,

More information

ISyE 6761 (Fall 2016) Stochastic Processes I

ISyE 6761 (Fall 2016) Stochastic Processes I Fall 216 TABLE OF CONTENTS ISyE 6761 (Fall 216) Stochastic Processes I Prof. H. Ayhan Georgia Institute of Technology L A TEXer: W. KONG http://wwong.github.io Last Revision: May 25, 217 Table of Contents

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

Poisson Processes. Stochastic Processes. Feb UC3M

Poisson Processes. Stochastic Processes. Feb UC3M Poisson Processes Stochastic Processes UC3M Feb. 2012 Exponential random variables A random variable T has exponential distribution with rate λ > 0 if its probability density function can been written

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

Lebesgue measure and integration

Lebesgue measure and integration Chapter 4 Lebesgue measure and integration If you look back at what you have learned in your earlier mathematics courses, you will definitely recall a lot about area and volume from the simple formulas

More information

A TANDEM QUEUE WITH SERVER SLOW-DOWN AND BLOCKING

A TANDEM QUEUE WITH SERVER SLOW-DOWN AND BLOCKING Stochastic Models, 21:695 724, 2005 Copyright Taylor & Francis, Inc. ISSN: 1532-6349 print/1532-4214 online DOI: 10.1081/STM-200056037 A TANDEM QUEUE WITH SERVER SLOW-DOWN AND BLOCKING N. D. van Foreest

More information

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

8.1 Concentration inequality for Gaussian random matrix (cont d)

8.1 Concentration inequality for Gaussian random matrix (cont d) MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration

More information