RARE EVENT SIMULATION FOR PROCESSES GENERATED VIA STOCHASTIC FIXED POINT EQUATIONS
The Annals of Applied Probability
2014, Vol. 24, No. 5
DOI: 10.1214/13-AAP974
© Institute of Mathematical Statistics, 2014

RARE EVENT SIMULATION FOR PROCESSES GENERATED VIA STOCHASTIC FIXED POINT EQUATIONS

BY JEFFREY F. COLLAMORE, GUOQING DIAO AND ANAND N. VIDYASHANKAR

University of Copenhagen, George Mason University and George Mason University

In a number of applications, particularly in financial and actuarial mathematics, it is of interest to characterize the tail distribution of a random variable V satisfying the distributional equation V =_D f(V), where f(v) = A max{v, D} + B for (A, B, D) ∈ (0, ∞) × R². This paper is concerned with computational methods for evaluating these tail probabilities. We introduce a novel importance sampling algorithm, involving an exponential shift over a random time interval, for estimating these rare event probabilities. We prove that the proposed estimator is: (i) consistent, (ii) strongly efficient and (iii) optimal within a wide class of dynamic importance sampling estimators. Moreover, using extensions of ideas from nonlinear renewal theory, we provide a precise description of the running time of the algorithm. To establish these results, we develop new techniques concerning the convergence of moments of stopped perpetuity sequences, and the first entrance and last exit times of associated Markov chains on R. We illustrate our methods with a variety of numerical examples which demonstrate the ease and scope of the implementation.

1. Introduction. This paper introduces a rare event simulation algorithm for estimating the tail probabilities of the stochastic fixed point equation (SFPE)

(1.1) V =_D f(V), where f(v) = A max{v, D} + B for (A, B, D) ∈ (0, ∞) × R².

SFPEs of this general form arise in a wide variety of applications, such as extremal estimates for financial time series models and ruin estimates in actuarial mathematics. Other related applications arise in branching

Received July 2011; revised September 2013.
Supported in part by Danish Research Council (SNF) Grant "Point Process Modelling and Statistical Inference."
Supported by NSF Grant DMS.
MSC2010 subject classifications. Primary 65C05, 91G60, 68W40, 60H25; secondary 60F10, 60G40, 60J05, 60J10, 60J22, 60K15, 60K20, 60G70, 68U20, 91B30, 91B70, 91G70.
Key words and phrases. Monte Carlo methods, importance sampling, perpetuities, large deviations, nonlinear renewal theory, Harris recurrent Markov chains, first entrance times, last exit times, regeneration times, financial time series, GARCH processes, ARCH processes, risk theory, ruin theory with stochastic investments.
processes in random environments and the study of algorithms in computer science. See Collamore (2009), Collamore and Vidyashankar (2013b), or Section 4 below for a more detailed description of some of these applications.

In a series of papers [e.g., Kesten (1973), Vervaat (1979), Goldie (1991)], the tail probabilities for the SFPE (1.1) have been asymptotically characterized. Under appropriate moment and regularity conditions, it is known that

(1.2) lim_{u→∞} u^ξ P{V > u} = C

for finite positive constants C and ξ, where ξ is identified as the nonzero solution to the equation E[A^α] = 1. Recently, in Collamore and Vidyashankar (2013b), the constant C has been identified as the ξth moment of the difference of a perpetuity sequence and a conjugate sequence.

The purpose of this article is to introduce a rigorous computational approach, based on importance sampling, for Monte Carlo estimation of the rare event probability P{V > u}. While importance sampling methods have been developed for numerous large deviation problems involving i.i.d. and Markov-dependent random walks [cf. Asmussen and Glynn (2007)], the adaptation of these methods to (1.1) is distinct and requires new techniques. In this paper, we propose a nonstandard approach involving a dual change of measure of a process {V_n} performed over two random time intervals: namely, the excursion of {V_n} to (u, ∞) followed by the return of this process to a given set C ⊂ R.

The motivation for our algorithm stems from the observation that the SFPE (1.1) induces a forward recursive sequence, namely,

(1.3) V_n = A_n max{D_n, V_{n−1}} + B_n, n = 1, 2, ..., V_0 = v,

where {(A_n, B_n, D_n) : n ∈ Z_+} is an i.i.d. sequence with the same law as (A, B, D). It is important to observe that in many applications, the mathematical process under study is obtained through the backward iterates of the given SFPE [as described by Letac (1986) or Collamore and Vidyashankar (2013b), Section 2.1].
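The forward recursion (1.3) is straightforward to simulate. The following minimal sketch (not from the paper; the driving distributions are hypothetical placeholders) iterates V_n = A_n max{D_n, V_{n−1}} + B_n for a user-supplied sampler of the triple (A, B, D):

```python
import random

def forward_sequence(v0, n, sample_abd):
    """Simulate n steps of the forward recursion
    V_k = A_k * max(D_k, V_{k-1}) + B_k, with V_0 = v0.
    `sample_abd` draws one i.i.d. copy of the triple (A, B, D)."""
    path = [v0]
    for _ in range(n):
        a, b, d = sample_abd()
        path.append(a * max(d, path[-1]) + b)
    return path

# Hypothetical driving law: A lognormal with E[log A] < 0, B standard
# normal, D = 0 (so the recursion reduces to V_k = A_k * max(V_{k-1}, 0) + B_k).
def sample_abd():
    return (random.lognormvariate(-0.1, 0.5), random.gauss(0.0, 1.0), 0.0)

random.seed(1)
path = forward_sequence(0.0, 100, sample_abd)
```

Since E[log A] < 0 in this toy law, the simulated path is positive recurrent and fluctuates around the center of the stationary distribution, which is why plain simulation of (1.3) cannot reach the rare set (u, ∞) efficiently.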
For example, the linear recursion f(v) = Av + B induces the backward recursive sequence, or perpetuity sequence,

(1.4) Z_n := V_0 + B_1/A_1 + B_2/(A_1 A_2) + ⋯ + B_n/(A_1 ⋯ A_n), n = 1, 2, ....

However, since {Z_n} is not Markovian, it is less natural to simulate {Z_n} than the corresponding forward sequence {V_n}. Thus, a central aspect of our approach is the conversion of the given perpetuity sequence, via its SFPE, into a forward recursive sequence which we then simulate. Because {V_n} is Markovian, we can then study this process over excursions emanating from, and then returning to, a given set C ⊂ R.

In the special case of the perpetuity sequence in (1.4), simulation methods for estimating P{lim_{n→∞} Z_n > u} have recently been studied in Blanchet, Lam and Zwart (2012) under the strong assumption that {B_n} is nonnegative. Their method is very different from ours, involving the simulation of {Z_n} directly until the first
passage time to a level cu, where c ∈ (0, 1), and a rough analytical approximation to relate this probability to the first passage probability at level u. Their methods do not generalize to the other processes studied in this paper, such as the ruin problem with investments or related extensions. In contrast, our goal here is to develop a general algorithm which is flexible and can be applied to the wider class of processes governed by (1.1) and some of its extensions.

While we focus on (1.1), it is worthwhile to mention here that our algorithm provides an important ingredient for addressing a larger class of problems, including nonhomogeneous recursions on trees, which are analyzed in Collamore, Vidyashankar and Xu (2013). Also, it seems plausible that the method should extend to the class of random maps which can be approximated by (1.1) in the sense of Collamore and Vidyashankar (2013b), Section 2.4. This extension would encompass several other problems of applied interest, such as the AR(1) process with ARCH(1) errors. Yet another feasible generalization is to Markov-dependent recursions under Harris recurrence, utilizing the reduction to i.i.d. recursions described in Collamore (2009) and Collamore and Vidyashankar (2013a), Section 3.

In this paper, we present an algorithm and establish that it is consistent and efficient; that is, it displays the bounded relative error property. It is interesting to note that in the proof of efficiency, certain new issues arise concerning the convergence of the perpetuity sequence (1.4). Specifically, while it is known that (1.4) converges to a finite limit under minimal conditions, the necessary and sufficient condition for the L_β convergence of {Z_n} in (1.4) is that E[A^β] < 1; cf. Alsmeyer, Iksanov and Rösler (2009). However, our analysis will involve moments of quantities similar to {Z_n}, but where E[A^β] is greater than one, and hence our perpetuity sequences will necessarily be divergent in L_β.
To circumvent this difficulty, we study these perpetuity sequences over randomly stopped intervals, namely, over cycles emanating from, and returning to, a given subset C of R. As a technical point, it is worth noting that if the return time, K, were replaced by the more commonly studied regeneration time τ of the chain {V_n}, then the existing literature on Markov chain theory would still not shed much light on the tails of τ and hence the convergence of V_τ. Thus, the fact that K has sufficient exponential tails for the convergence of V_K is due to the recursive structure of the particular class of Markov chains we consider, and seems to be a general property for this class of Markov chains. These results concerning the moments of L_β-divergent perpetuity sequences complement the known literature on perpetuities and appear to be of some independent interest.

Next, we go beyond the current literature by establishing a sharp asymptotic estimate for the running time of the algorithm, thereby showing that our algorithm is, in fact, strongly efficient; cf. Remark 2.2 below. To this end, we introduce methods from nonlinear renewal theory, as well as methods from Markov chain theory involving the first entrance and last exit times of the process {V_n}. Finally, motivated by the Wentzell–Freidlin theory of large deviations, we provide an optimality result; specifically, we consider other possible level-dependent changes of measure for the process {V_n} selected from a wide class of dynamic importance sampling
algorithms [in the sense of Dupuis and Wang (2005)]. We show that our algorithm is the unique choice which attains bounded relative error, thus establishing the validity of our method amongst a natural class of possible algorithms.

2. The algorithm and a statement of the main results.

2.1. Background: The forward and backward recursive sequences. We start with a general SFPE of the form

(2.1) V =_D f(V) ≡ F_Y(V),

where F_Y : R × R^d → R is deterministic, measurable and continuous in its first component. Let v be an element of the range of F_Y, and let {Y_n} be an i.i.d. sequence of r.v.'s such that Y_n =_D Y for all n. Then the forward sequence generated by the SFPE (2.1) is defined by

(2.2) V_n(v) = F_{Y_n} ∘ F_{Y_{n−1}} ∘ ⋯ ∘ F_{Y_1}(v), n = 1, 2, ..., V_0 = v,

whereas the backward sequence generated by this SFPE is defined by

(2.3) Z_n(v) = F_{Y_1} ∘ F_{Y_2} ∘ ⋯ ∘ F_{Y_n}(v), n = 1, 2, ..., Z_0 = v.

While the forward sequence is always Markovian, the backward sequence need not be Markovian; however, for every v and n, V_n(v) and Z_n(v) are identically distributed. This observation is critical since it suggests that, regardless of whether the SFPE was originally obtained via forward or backward iteration, a natural approach to analyzing the process is through its forward iterates.

2.2. Background: Asymptotic estimates. We now specialize to the recursion (1.1). This recursion is often referred to as Letac's model E. Let F_n denote the σ-field generated by {(A_i, B_i, D_i) : i ≤ n}, and let

λ(α) = E[A^α] and Λ(α) = log λ(α), α ∈ R.

Let μ denote the distribution of Y = (log A, B, D) and μ_α denote the α-shifted distribution with respect to the first variable; that is,

(2.4) μ_α(E) := (1/λ(α)) ∫_E e^{αx} dμ(x, y, z), E ∈ B(R³), α ∈ R,

where, here and in the following, B(E) denotes the Borel sets of E. Let E_α[·] denote expectation with respect to this α-shifted measure. For any r.v. X, let L(X) denote the probability law of X, and let supp(X) denote the support of X.
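The distinction between (2.2) and (2.3) is easy to see numerically. In the sketch below (illustrative only; the driving law is a hypothetical placeholder), the same draws Y_1, ..., Y_n are composed in the two opposite orders: pathwise the results differ, but reversing the order of an i.i.d. sample leaves its joint law unchanged, which is why V_n(v) and Z_n(v) are equal in distribution:

```python
import random

def f(y, v):
    # Letac's model E map f_Y(v) = A * max(v, D) + B, with Y = (A, B, D).
    a, b, d = y
    return a * max(v, d) + b

def forward(ys, v):
    # V_n(v) = F_{Y_n} o ... o F_{Y_1}(v): apply Y_1 first.
    for y in ys:
        v = f(y, v)
    return v

def backward(ys, v):
    # Z_n(v) = F_{Y_1} o ... o F_{Y_n}(v): apply Y_n first.
    for y in reversed(ys):
        v = f(y, v)
    return v

random.seed(0)
ys = [(random.lognormvariate(0.0, 0.3), random.gauss(0.0, 1.0), 0.0)
      for _ in range(5)]
v_fwd = forward(ys, 1.0)   # forward iterate V_5(1)
v_bwd = backward(ys, 1.0)  # backward iterate Z_5(1); differs pathwise
```

Note that `backward(ys, v)` equals `forward` applied to the reversed list, which is exactly the identity underlying the equality in distribution.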
Also, write X ∼ L(X) to denote that X has this probability law. Given an i.i.d. sequence {X_n}, we will often write X for a generic element of this sequence. Finally, for any function f, let dom(f) denote the domain of f, and let f′, f″, etc. denote the successive derivatives of f.

We now state the main hypotheses needed to establish the asymptotic decay of P{V > u} in (1.2); note that (H₀) is only needed to obtain the explicit representation of C, as given in Collamore and Vidyashankar (2013b). These conditions will form the starting point of our study.

HYPOTHESES.

(H₀) The r.v. A has an absolutely continuous component with respect to Lebesgue measure with a nontrivial continuous density in a neighborhood of R.
(H₁) Λ(ξ) = 0 for some ξ ∈ (0, ∞) ∩ dom(Λ).
(H₂) E[|B|^ξ] < ∞ and E[(A|D|)^ξ] < ∞.
(H₃) P{A > 1, B > 0} > 0 or P{A > 1, B ≥ 0, D > 0} > 0.

Note that (H₃) implies that the process {V_n} is nondegenerate (i.e., it is not concentrated at a single point). Under these hypotheses, it can be shown that the forward sequence {V_n} generated by the SFPE (1.1) is a Markov chain which is ϕ-irreducible and geometrically ergodic [Collamore and Vidyashankar (2013b), Lemma 5.1]. Thus {V_n} converges to a r.v. V which itself satisfies the SFPE (1.1). Moreover, with respect to its α-shifted measure, the process {V_n} is transient [Collamore and Vidyashankar (2013b), Lemma 5.2]. Our present goal is to develop an efficient Monte Carlo algorithm for evaluating P{V > u}, for fixed u, which remains efficient in the asymptotic limit as u → ∞.

2.3. The algorithm. Since the forward process V_n = A_n max{D_n, V_{n−1}} + B_n satisfies V_n ≈ A_n V_{n−1} for large V_{n−1}, and since {V_n} is transient in its ξ-shifted measure, large deviation theory suggests that we consider shifted distributions and, in particular, the shifted measure μ_ξ, where ξ is given as in (H₁).

To relate P{V > u} under its original measure to the paths of {V_n} under the μ_ξ-measure, let C := [−M, M] for some M ≥ 0, and let π denote the stationary distribution of {V_n}. Now define a probability measure γ on C by setting

(2.5) γ(E) = π(E)/π(C), E ∈ B(C).

Let K := inf{n ∈ Z_+ : V_n ∈ C}. Then in Section 3, we will establish the following representation formula:

(2.6) P{V > u} = π(C) E_γ[N_u], N_u := Σ_{n=0}^{K−1} 1_{{V_n > u}},

where E_γ[·] denotes the expectation when the initial state V_0 ∼ γ.
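The representation (2.6) can be sanity-checked by plain Monte Carlo in a toy case where u is only moderately large. In the sketch below (hypothetical parameters, not those of the paper), a single long trajectory is used to estimate both sides of P{V > u} = π(C) E_γ[N_u]: visits to C = [−M, M] split the path into cycles, and exceedances of u are tallied per cycle:

```python
import math
import random

def step(v, rng):
    # One step of V_n = A_n * max(V_{n-1}, D_n) + B_n with the toy law:
    # log A ~ N(-0.25, 0.25), B ~ N(0, 1), D = 0.
    return math.exp(rng.gauss(-0.25, 0.5)) * max(v, 0.0) + rng.gauss(0.0, 1.0)

rng = random.Random(7)
u, M, n_steps = 5.0, 2.0, 200_000
v, hits_u, hits_c = 0.0, 0, 0
cycle_counts, count = [], None
for _ in range(n_steps):
    v = step(v, rng)
    hits_u += v > u
    if count is not None and v > u:
        count += 1
    if abs(v) <= M:
        hits_c += 1
        if count is not None:
            cycle_counts.append(count)   # realized N_u for the completed cycle
        count = 0                        # new cycle starts from a gamma draw

direct = hits_u / n_steps                            # empirical P{V > u}
pi_c = hits_c / n_steps                              # empirical pi(C)
e_gamma_nu = sum(cycle_counts) / len(cycle_counts)   # empirical E_gamma[N_u]
rep = pi_c * e_gamma_nu                              # right-hand side of (2.6)
```

Up to the boundary effects of the first and last (incomplete) cycles, `direct` and `rep` count exactly the same exceedance events, so the two estimates agree to high precision on a single run; the point of (2.6) is that the cycle-based form is the one amenable to importance sampling.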
Thus motivated by large deviation theory and the previous formula, we simulate {V_n} over a cycle emanating from the set C (with initial state V_0 ∼ γ), and then returning to C, where the simulation is performed in the dual measure, which we now describe. Set T_u = inf{n : V_n > u}, and let

(D) L(log A_n, B_n, D_n) = μ_ξ, for n = 1, ..., T_u; μ, for n > T_u,
where μ_ξ is defined as in (2.4) and ξ is given as in (H₁). Let {V_n} be generated by the forward recursion (1.3), but with a driving sequence {(log A_n, B_n, D_n)} which is governed by (D) rather than by the fixed measure μ. Roughly speaking, the dual measure (D) shifts the distribution of log A_n on a path of {V_n} until this process exceeds the level u, and reverts to the original measure thereafter. Let E_D[·] denote expectation with respect to (D).

To relate the simulated sequence in the dual measure to the required probability in the original measure, we introduce a weighting factor. Specifically, in the proof of Theorem 2.2 below, we will show that

P{V > u} = π(C) E_D[N_u e^{−ξS_{T_u}} 1_{{T_u < K}} | V_0 ∼ γ],

where S_n := Σ_{i=1}^n log A_i and γ is given as in (2.5). Using this identity, it is natural to introduce the importance sampling estimator

(2.7) E_u = N_u e^{−ξS_{T_u}} 1_{{T_u < K}}.

Then π(C)E_u is an unbiased estimator for P{V > u}. However, since the stationary distribution π, and hence the distribution γ, is seldom known, even if the underlying distribution of (log A, B, D) is known, we first run multiple realizations of {V_n} according to the known measure μ and thereby estimate π(C) and γ. Let π̂_k(C), γ̂_k denote the estimates obtained for π(C), γ, respectively, and let Ê_{u,n} denote the estimate obtained upon averaging the realizations of E_u. This yields the estimator π̂_k(C)Ê_{u,n}.

This discussion can be formalized as follows:

Rare event simulation algorithm using forward iterations of the SFPE:

V_0 ∼ γ̂_k, m = 0
repeat
    m ← m + 1
    V_m = A_m max{D_m, V_{m−1}} + B_m, where (log A_m, B_m, D_m) ∼ μ_ξ
until V_m > u or V_m ∈ C
if V_m > u then
    repeat
        m ← m + 1
        V_m = A_m max{D_m, V_{m−1}} + B_m, where (log A_m, B_m, D_m) ∼ μ
    until V_m ∈ C
    E_u = N_u e^{−ξS_{T_u}} 1_{{T_u < K}}
else
    E_u = 0
end if
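As a concrete illustration, the following sketch implements one cycle of the scheme above in a simplified toy case (all parameters hypothetical, not from the paper): log A ∼ N(m, σ²) with m < 0 and independent of B ∼ N(0, 1), and D = 0. Then Λ(α) = mα + α²σ²/2, so the nonzero root of Λ(ξ) = 0 is ξ = −2m/σ², and the ξ-shifted law of log A is N(−m, σ²); the (B, D)-coordinates are unchanged by the tilt here because they are independent of A:

```python
import math
import random

def one_cycle(v0, u, big_m, m, s2, rng):
    """Run one cycle of the dual-measure scheme from V_0 = v0 and return
    the realization of E_u = N_u * exp(-xi * S_{T_u}) * 1{T_u < K}."""
    xi = -2.0 * m / s2                    # nonzero root of Lambda(xi) = 0
    sd = math.sqrt(s2)
    v, s_tu, crossed, n_exceed = v0, 0.0, False, 0
    while True:
        # Before T_u, sample log A from the xi-shifted law N(-m, s2);
        # after T_u, revert to the original law N(m, s2).
        log_a = rng.gauss(m if crossed else -m, sd)
        v = math.exp(log_a) * max(v, 0.0) + rng.gauss(0.0, 1.0)
        if not crossed:
            s_tu += log_a                 # accumulates S_n up to time T_u
            if v > u:
                crossed = True            # time T_u reached: switch measures
            elif abs(v) <= big_m:
                return 0.0                # returned to C first, so K < T_u
        if v > u:
            n_exceed += 1                 # N_u counts exceedances of level u
        if crossed and abs(v) <= big_m:   # time K: the cycle ends
            return n_exceed * math.exp(-xi * s_tu)

rng = random.Random(42)
vals = [one_cycle(0.0, 50.0, 2.0, -0.25, 0.25, rng) for _ in range(2000)]
estimate = sum(vals) / len(vals)   # estimates E_gamma[N_u]; multiply by pi(C)
```

This sketch starts every cycle at v0 = 0 rather than from a draw of γ̂_k, and so only illustrates the measure-switching and reweighting logic; most cycles return 0 because the chain re-enters C before the rare excursion occurs, exactly as in the algorithm above.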
The actual estimate is then obtained by letting E_{u,j} (j = 1, ..., n) denote the realizations of E_u produced by the algorithm and setting

P̂{V > u} = π̂_k(C)Ê_{u,n}, where π̂_k(C) = (1/k) Σ_{j=1}^k 1_{{V^{(j)} ∈ C}} and Ê_{u,n} = (1/n) Σ_{j=1}^n E_{u,j},

where V^{(1)}, V^{(2)}, ..., V^{(k)} is a sample from the distribution of V (which, we emphasize, is sampled from the center of the distribution). In Section 4, we describe how to obtain samples from V from a practical perspective. Finally, note that Ê_{u,n} also depends on k.

It is worth observing that in the special case D = 1 and B = 0, Letac's model E reduces to a multiplicative random walk. Moreover, in that case, one can always take γ to be a point mass at {1}, at which point the process regenerates. In this much-simplified setting, our algorithm reduces to a standard regenerative importance sampling algorithm, as may be used to evaluate the stationary exceedance probabilities in a GI/G/1 queue.

2.4. Consistency and efficiency of the algorithm. We begin by stating our results on consistency and efficiency.

THEOREM 2.1. Assume Letac's model E, and suppose that (H₁), (H₂) and (H₃) are satisfied. Then for any C such that C ⊂ supp(π) and any u such that u ∉ C, the algorithm is strongly consistent; that is,

(2.8) lim_{k→∞} lim_{n→∞} π̂_k(C)Ê_{u,n} = P{V > u} a.s.

REMARK 2.1. If the stationary distribution π of {V_n} is known on C (e.g., C = {v} for v ∈ R), then it will follow from the proof of the theorem that π(C)Ê_{u,n} is an unbiased estimator for P{V > u}.

THEOREM 2.2. Assume Letac's model E, and suppose that (H₁) and (H₃) are satisfied. Also, in place of (H₂), assume that for some α > ξ,

(2.9) E[(A ∨ B²)^α] < ∞ and E[(A ∨ D²)^α] < ∞.

Moreover, assume that one of the following two conditions holds: λ(α) < ∞ for some α < −ξ; or E[(|D| + A^{−1}|B|)^α] < ∞ for all α > 0. Then, there exists an M > 0 such that

(2.10) sup_{u≥0} sup_{k∈Z_+} u^{2ξ} E_D[E_u² | V_0 ∼ γ̂_k] < ∞.
Equation (2.10) implies that our estimator exhibits bounded relative error. However, a good choice of M is critical for the practical usefulness of the algorithm. A canonical method for choosing M can be based on the drift condition satisfied by {V_n} (as given in Lemma 3.1 below), but in practice, a proper choice of M is problem-dependent and only obtained numerically based on the methods we introduce below in Section 4.

2.5. Running time of the algorithm. Next we provide precise asymptotics for the running time of the algorithm. In the following theorem, recall that K denotes the first return time to C (corresponding to the termination of the algorithm), whereas T_u denotes the first passage time to (u, ∞).

THEOREM 2.3. Assume Letac's model E, and suppose that hypotheses (H₀)–(H₃) hold, Λ is finite in a neighborhood of {0, ξ}, and for some ε > 0,

(2.11) P_ξ{K < ∞ | V_0 = v} = O(v^{−ε}) as v → ∞.

Then

(2.12) E_D[K 1_{{K<∞}}] < ∞;
(2.13) lim_{u→∞} E_D[T_u/log u | T_u < K] = 1/Λ′(ξ);
(2.14) lim_{u→∞} E_D[(K − T_u)/log u | T_u < K] = 1/|Λ′(0)|.

REMARK 2.2. The ultimate objective of the algorithm is to minimize the simulation cost, that is, the total number of Monte Carlo simulations needed to attain a given accuracy. This grows according to

(2.15) Var(E_u){c₁ E_D[K | T_u < K] + c₂ E_D[K 1_{{T_u ≥ K}}]} as u → ∞

for appropriate constants c₁ and c₂; cf. Siegmund (1976). However, as a consequence of Theorem 2.3, we have that under the dual measure (D), E_D[K | T_u < K] grows like a constant multiple of log u as u → ∞, while the last term in (2.15) converges to a finite constant. Thus, by combining Theorems 2.2 and 2.3, we conclude that our algorithm is indeed strongly efficient.
2.6. Optimality of the algorithm. We conclude with a comparison of our algorithm to other algorithms obtained through forward iterations involving alternative measure transformations. A natural alternative would be to simulate with some measure μ_α until the time T_u = inf{n : V_n > u} and revert to some other measure μ_β thereafter. More generally, we may consider simulating from a general class of distributions with some form of state dependence, as we now describe.

Let ν(·; w, q) denote a probability measure on B(R³) indexed by two parameters, w ∈ [0, 1] and q ∈ {0, 1}, where (w, q) denotes a realization of (W̄_{n−1}, Q_{n−1}) for

W_n := log V_n/log u and Q_n := 1_{{T_u < n}}.

Set W̄_n = W_n 1_{{W_n ∈ [0,1]}} + 1_{{W_n > 1}}. Note that (W̄_{n−1}, Q_{n−1}) is F_{n−1}-measurable. Let ν_n(·) = ν(·; W̄_{n−1}, Q_{n−1}) be a random measure derived from the measure ν. Observe that, conditioned on F_{n−1}, ν_n is a probability measure. Now, we assume that the family of random measures {ν_n(·)} ≡ {ν(·; W̄_{n−1}, Q_{n−1})} satisfies the following regularity condition:

Condition (C₀): μ ≪ ν(·; w, q) for each pair (w, q) ∈ [0, 1] × {0, 1}, and

E_D[log (dμ/dν)(Y_n; W̄_{n−1}, Q_{n−1}) | W̄_{n−1} = w, Q_{n−1} = q]

is piecewise continuous as a function of w.

Let M denote the class of measures {ν_n} where ν satisfies (C₀). Thus, we consider a class of distributions where we shift all three members of the driving sequence Y_n = (log A_n, B_n, D_n) in some way, allowing dependence on the history of the process through the parameters (w, q).

Now suppose that simulation is performed using a modification of our main algorithm, where Y_n ∼ ν_n for some collection ν := {ν_1, ν_2, ...} ∈ M. Let E_u^{(ν)} denote the corresponding importance sampling estimator. Let π̂_k denote an empirical estimate for π, as described in the discussion of our main algorithm, and let E^{(ν)}_{u,1}, ..., E^{(ν)}_{u,n} denote simulated estimates for E_u^{(ν)} obtained by repeating this algorithm, but with {ν_n} in place of the dual measure (D).
Then it is easy to see, using the arguments of Theorem 2.2, that

(2.16) lim_{k→∞} lim_{n→∞} π̂_k(C)Ê^{(ν)}_{u,n} = P{V > u},

where Ê^{(ν)}_{u,n} denotes the average of n simulated samples of E_u^{(ν)} (and depends on k); cf. (2.8). It remains to compare the variance of these estimators, which is the subject of the next theorem.

THEOREM 2.4. Assume that the conditions of Theorems 2.2 and 2.3 hold. Let ν be a probability measure on B(R³) indexed by parameters w ∈ [0, 1] and q ∈ {0, 1}, and assume that ν ∈ M. Then for any initial state v ∈ C,

(2.17) lim inf_{u→∞} (1/log u) log(u^{2ξ} E_ν[(E_u^{(ν)})² | V_0 = v]) ≥ 0.
Moreover, equality holds in (2.17) if and only if ν(·; w, 0) = μ_ξ and ν(·; w, 1) = μ for all w ∈ [0, 1]. Thus, the dual measure in (D) is the unique optimal simulation strategy within the class M.

3. Proofs of consistency and efficiency. We start with consistency.

PROOF OF THEOREM 2.1. Let K₀ := 0 and

K_n := inf{i > K_{n−1} : V_i ∈ C}, n ∈ Z_+,

denote the successive return times of {V_n} to C. Set X_n = V_{K_n}, n = 0, 1, .... Then we claim that the stationary distribution of {X_n} is given by γ(E) = π(E)/π(C), where π is the stationary distribution of {V_n}. Notice that {X_n} is ϕ-irreducible and geometrically ergodic [cf. Collamore and Vidyashankar (2013b), Lemma 5.1]. Now set N_n := Σ_{i=1}^n 1_{{V_i ∈ C}}. Then by the law of large numbers for Markov chains,

(3.1) π(E) = lim_{n→∞} (N_n/n)((1/N_n) Σ_{i=1}^{N_n} 1_{{X_i ∈ E}}) = π(C)γ(E) a.s., E ∈ B(C).

Hence γ(E) = π(E)/π(C).

Next, we assert that P{V > u} = π(C)E_γ[N_u]. To establish this equality, again apply the law of large numbers for Markov chains to obtain that

(3.2) P{V > u} := π((u, ∞)) = lim_{n→∞} (1/n){Σ_{i=0}^{K_{N_n}−1} 1_{{V_i > u}} + Σ_{i=K_{N_n}}^{n} 1_{{V_i > u}}} a.s.

By the Markov renewal theorem [Iscoe, Ney and Nummelin (1985), Lemma 6.2], we claim that the last term on the right-hand side (RHS) of this equation converges to zero a.s. To see this, let I(n) denote the last regeneration time occurring in the interval [0, n], let J(n) denote the first regeneration time occurring after time n, and let τ denote a typical regeneration time. Then by Lemma 6.2 of Iscoe, Ney and Nummelin (1985) and the geometric ergodicity of {V_n},

(3.3) lim_{n→∞} E[e^{ε(J(n)−I(n))}] = E[τe^{ετ}]/E[τ] < ∞, some ε > 0.

Now by Nummelin's split-chain construction [Nummelin (1984), Section 4.4] and by the definition of K_{N_n}, I(n) ≤ K_{N_n} ≤ n ≤ J(n). Hence by a Borel–Cantelli argument,

(3.4) (1/n) Σ_{i=K_{N_n}}^{n} 1_{{V_i > u}} → 0 a.s. as n → ∞.
Next consider the first term on the RHS of (3.2). Assume V_0 has distribution γ. For any n ∈ Z_+, set N_{u,n} = Σ_{i=K_{n−1}}^{K_n−1} 1_{{V_i > u}} (namely, the number of exceedances above level u which occur over the successive cycles starting from C). Let S^N_n = N_{u,1} + ⋯ + N_{u,n}, n ∈ Z_+. It can be seen that {(X_n, N_{u,n})} is a positive Harris chain and, hence, by another application of the law of large numbers for Markov chains,

(3.5) E_γ[N_u] = lim_{n→∞} S^N_n/n a.s.

Since N_n/n → π(C) as n → ∞, it follows from (3.2), (3.4) and (3.5) that

(3.6) P{V > u} = lim_{n→∞} (N_n/n)((1/N_n) Σ_{i=0}^{K_{N_n}−1} 1_{{V_i > u}}) = π(C)E_γ[N_u].

Finally, recall that E_u := N_u e^{−ξS_{T_u}} 1_{{T_u<K}}, and hence by an elementary change-of-measure argument [as in (3.18) below], we have E_γ[N_u] = E_D[E_u]. To complete the proof, it remains to show that

(3.7) lim_{k→∞} E_D[N_u e^{−ξS_{T_u}} 1_{{T_u<K}} | V_0 ∼ γ̂_k] = E_D[N_u e^{−ξS_{T_u}} 1_{{T_u<K}} | V_0 ∼ γ],

where S_n := Σ_{i=1}^n log A_i. Set

(3.8) H(v) = E_D[E_D[N_u | F_{T_u}] e^{−ξS_{T_u}} 1_{{T_u<K}} | V_0 = v].

We now claim that H(v) is uniformly bounded in v ∈ C. To establish this claim, first apply Proposition 4.1 of Collamore and Vidyashankar (2013b) to obtain that

(3.9) E_D[N_u | F_{T_u}] 1_{{T_u<K}} ≤ (C₁(u) log(V_{T_u}/u) + C₂(u)) 1_{{T_u<τ}},

where τ ≥ K is the first regeneration time and C_i(u) → C_i < ∞ as u → ∞ (i = 1, 2). Moreover, for Z_n := V_n/(A_1 ⋯ A_n), we clearly have

(3.10) e^{−ξS_{T_u}} = u^{−ξ}(V_{T_u}/u)^{−ξ} Z^ξ_{T_u} ≤ u^{−ξ} Z^ξ_{T_u}.

Substituting the last two equations into (3.8) yields

(3.11) H(v) ≤ C̄₁ E_D[Z^ξ_{T_u} 1_{{T_u<τ}} | V_0 = v] ≤ C̄₂

for finite constants C̄₁ and C̄₂, where the last step was obtained by Collamore and Vidyashankar (2013b), Lemma 5.5(ii). Consequently, H(v) is bounded uniformly in v ∈ C. Since γ̂_k and γ are both supported on C, it then follows, since γ̂_k ⇒ γ, that

lim_{k→∞} ∫_C H(v) dγ̂_k(v) = ∫_C H(v) dγ(v),
which is (3.7).

Before turning to the proof of efficiency, it will be helpful to have a characterization of the return times of {V_n} to the set C when Y_n ∼ μ_β for β ∈ dom(Λ), where Y_n := (log A_n, B_n, D_n) and μ_β is defined according to (2.4). First let

λ_β(α) = ∫_{R³} e^{αx} dμ_β(x, y, z), Λ_β(α) = log λ_β(α), α ∈ R,

and note by the definition of μ_β that

(3.12) Λ_β(α) = Λ(α + β) − Λ(β).

Recall that if P denotes the transition kernel of {V_n}, then we say that {V_n} satisfies a drift condition if there exists a function h : R → [0, ∞) such that

(D) ∫_S h(y)P(x, dy) ≤ ρh(x) for all x ∉ C,

where ρ ∈ (0, 1) and C is some Borel subset of R.

LEMMA 3.1. Assume Letac's model E, and suppose that (H₁), (H₂) and (H₃) are satisfied. Let {V_n} denote the forward recursive sequence generated by this SFPE under the measure μ_β, chosen such that inf_{α>0} λ_β(α) < 1. Then the drift condition (D) holds with h(x) = |x|^α, where α > 0 is any constant satisfying Λ_β(α) < 0. Moreover, we may take ρ = ρ_β and C = [−M_β, M_β], where

(3.13) ρ_β := tλ_β(α) for some t ∈ (1, 1/λ_β(α))

and

(3.14) M_β := (E_β[|B̄|^α])^{1/α}/(λ_β(α)(t − 1))^{1/α}, if α ∈ (0, 1); M_β := (E_β[|B̄|^α])^{1/α}/((λ_β(α))^{1/α}(t^{1/α} − 1)), if α ≥ 1.

Furthermore, for any (ρ_β, M_β) satisfying this pair of equations,

(3.15) sup_{v∈C} P_β{K > n | V_0 = v} ≤ ρ_β^n for all n ∈ Z_+.

PROOF. Let B̄_n := A_n|D_n| + |B_n|. If α ≥ 1, then Minkowski's inequality yields

(3.16) E_β[|V_1|^α | V_0 = v] ≤ ((E_β[A^α])^{1/α}|v| + (E_β[|B̄|^α])^{1/α})^α = ρ_β|v|^α (t^{−1/α} + (E_β[|B̄|^α])^{1/α}/(ρ_β^{1/α}|v|))^α,

where ρ_β := tλ_β(α).
Then (D) is established. For M_β, set t^{−1/α} + (E_β[|B̄|^α])^{1/α}/(ρ_β^{1/α}v) = 1 and solve for v. Similarly, if α < 1, use |x + y|^α ≤ |x|^α + |y|^α, α ∈ (0, 1], in place of Minkowski's inequality. Then (3.15) follows by a standard argument, as in Nummelin (1984) or Collamore and Vidyashankar (2013b), Remark 6.2.

We now introduce some additional notation which will be needed in the proof of Theorem 2.2. Let A_0 ≡ 1 and, for any n = 0, 1, 2, ..., set

P_n = A_0 ⋯ A_n, S_n = Σ_{i=0}^n log A_i, Z_n = V_n/(A_0 ⋯ A_n) and Z^{(p)}_n = Σ_{i=0}^n (B̄_i/(A_0 ⋯ A_i)) 1_{{K>i}},

where

(3.17) B̄_0 = |V_0| and B̄_n = A_n|D_n| + |B_n|.

Also introduce the dual measure with respect to an arbitrary measure μ_α, where α ∈ dom(Λ). Namely, define

(D_α) L(log A_n, B_n, D_n) = μ_α, for n = 1, ..., T_u; μ, for n > T_u.

Note that it follows easily from this definition that for any r.v. U which is measurable with respect to F_K,

(3.18) E[U 1_{{T_u<K}}] = E_{D_α}[(λ(α))^{T_u} e^{−αS_{T_u}} U 1_{{T_u<K}}],

an identity which will be useful in the following.

PROOF OF THEOREM 2.2. Assume V_0 = v ∈ C. We will show that the result holds uniformly in v ∈ C.

Case 1: λ(α) < ∞ for some α < −ξ. To evaluate

E_D[E_u²] := E_D[N_u² e^{−2ξS_{T_u}} 1_{{T_u<K}}],

first note that V_n e^{−S_n} = V_n/P_n = Z_n. Since V_{T_u} > u, it follows that 0 ≤ ue^{−S_{T_u}} ≤ Z_{T_u}. Moreover, as in the proof of Lemma 5.5 of Collamore and Vidyashankar (2013b) [cf. (5.27), (5.28)], we obtain

Z_n ≤ Σ_{i=0}^n B̄_i/P_i, implying Z_{T_u} 1_{{T_u<K}} ≤ Σ_{n=0}^∞ (B̄_n/P_n) 1_{{n ≤ T_u < K}}.

Consequently,

(3.19) u^{2ξ} E_D[E_u²] ≤ E_D[N_u² (Σ_{n=0}^∞ (B̄_n/P_n) 1_{{n ≤ T_u < K}})^{2ξ}].
If 2ξ ≥ 1, apply Minkowski's inequality to the RHS to obtain

(3.20) (u^{2ξ} E_D[E_u²])^{1/(2ξ)} ≤ Σ_{n=0}^∞ (E_D[N_u² (B̄_n/P_n)^{2ξ} 1_{{n≤T_u<K}}])^{1/(2ξ)} = Σ_{n=0}^∞ (E[N_u² P^ξ_{T_u} (B̄_n/P_n)^{2ξ} 1_{{n≤T_u<K}}])^{1/(2ξ)},

where the last step follows from (3.18). Using the independence of (A_n, B̄_n) and {n − 1 < T_u ∧ K}, it follows by an application of Hölder's inequality that the left-hand side (LHS) of (3.20) is bounded above by

Σ_{n=0}^∞ (E[N_u^{2r}])^{1/(2rξ)} (E[(A ∨ B̄_n²)^{sξ}])^{1/(2sξ)} (E[P_{n−1}^{−sξ} 1_{{n−1<T_u∧K}}])^{1/(2sξ)},

where 1/r + 1/s = 1. Set ζ = sξ for the remainder of the proof. The last term on the RHS of the previous equation may be expressed in μ_{−ζ}-measure as

(3.21) E[P_{n−1}^{−ζ} 1_{{n−1<T_u∧K}}] = (λ(−ζ))^{n−1} P_{−ζ}{n − 1 < T_u ∧ K}.

Substituting this last equation into the upper bound for (3.20), we conclude that

(3.22) (u^{2ξ} E_D[E_u²])^{1/(2ξ)} ≤ Σ_{n=0}^∞ J_n ((λ(−ζ))^{n−1} P_{−ζ}{n − 1 < T_u ∧ K})^{1/(2ζ)},

where

J_n := (E[N_u^{2r}])^{1/(2rξ)} (E[(A ∨ B̄_n²)^ζ])^{1/(2ζ)}, n = 0, 1, ....

Since N_u ≤ K, applying Lemma 3.1 with β = 0 yields

(3.23) sup_{v∈C} E[N_u^{2r} | V_0 = v] < ∞ for any finite constant r.

Moreover, for sufficiently small s > 1 and ζ = sξ, it follows by (2.9) that E[(A ∨ B̄²)^ζ] < ∞. Thus, to show that the quantity on the LHS of (3.22) is finite, it suffices to show for some ζ > ξ and some t > 1,

(3.24) P_{−ζ}{n − 1 < T_u ∧ K} ≤ (tλ(−ζ))^{−(n+1)} for all n ≥ N₀,

where N₀ is a finite positive integer, uniformly in u and uniformly in v ∈ C. To this end, note that {T_u ∧ K > n − 1} ⊆ {K > n − 1}, and by Lemma 3.1 [using that min_α λ_{−ζ}(α) < (λ(−ζ))^{−1} by (3.12)],

(3.25) sup_{v∈C} P_{−ζ}{K > n − 1 | V_0 = v} ≤ (tλ(−ζ))^{−(n+1)},

where C := [−M, M] and M > M_ξ. [Since ζ > ξ was arbitrary, we have replaced M_ζ with M_ξ in this last expression. We note that we also require M > M₀ for (3.23) to hold.] We have thus established (3.24) for the case 2ξ ≥ 1.
If 2ξ < 1, then the above argument can be repeated, but using the deterministic inequality |x + y|^α ≤ |x|^α + |y|^α, α ∈ (0, 1], in place of Minkowski's inequality, establishing the theorem for this case.

Case 2: λ(−ζ) = ∞ for every ζ > ξ, while E[(A^{−1}B̄)^α] < ∞ for all α > 0. First assume 2ξ ≥ 1. Then, as before, (u^{2ξ}E_D[E_u²])^{1/(2ξ)} is bounded above by the RHS of (3.20). In view of the display following (3.20), it is sufficient to show that, uniformly in v ∈ C (for some set C = [−M, M]),

(3.26) sup_{n∈Z_+} E[P_{n−1}^{−ζ} 1_{{n−1<T_u∧K}}] < ∞ for some ζ > ξ.

Set W_n = P_{n−1}^{−ζ} 1_{{n−1<T_u∧K}}, and first observe that E[W_n] < ∞. Indeed,

(3.27) V_n ≤ A_n V_{n−1}(1 + B̄_n/(A_n V_{n−1})), n = 1, 2, ...,

and n − 1 < T_u ∧ K implies V_i ∈ (M, u] for i = 1, ..., n − 1. Hence (3.27) implies

(3.28) A_i^{−ζ} ≤ (u/M)^ζ (1 + B̄_i/(MA_i))^ζ, i = 1, ..., n − 1, on {n − 1 < T_u ∧ K}.

This equation yields an upper bound for P_{n−1}^{−ζ}. Using the assumption that E[(A^{−1}B̄)^α] < ∞ for all α > 0, we conclude by (3.28) that E[W_n] < ∞.

Next let {L_k} be a sequence of positive real numbers such that L_k ↓ 0 as k → ∞, and set F_k = ∩_{i=1}^{k−1} {A_i ≥ L_k}. Assume that L_k has been chosen sufficiently small such that

(3.29) E[W_k 1_{F^c_k}] ≤ 1/k², k = 1, 2, ....

Then it suffices to show that

(3.30) Σ_{k=0}^∞ E[W_k 1_{F_k}] < ∞.

To verify (3.30), set Ā_{0,k} ≡ 1 and introduce the truncation

Ā_{n,k} = A_n 1_{{A_n ≥ L_k}} + L_k 1_{{A_n < L_k}}, n = 1, 2, ....

Let λ̄_k(α) = E[Ā^α_{1,k}] and W̄_k = (Ā_{0,k} ⋯ Ā_{k−1,k})^{−ζ} 1_{{k−1<T_u∧K}}. After a change of measure [as in (3.18), (3.21)], we obtain

(3.31) E[W̄_k] ≤ (λ̄_k(−ζ))^{k−1} Ē_{−ζ}[1_{{K>k−1}} 1_{F_k}].

To evaluate the expectation on the RHS, start with the inequality

(3.32) V̄_{n,k} ≤ Ā_{n,k} V̄_{n−1,k}(1 + B̄_n/(Ā_{n,k} V̄_{n−1,k})), n = 1, 2, ....
Write Ē_{−ζ,w}[·] = Ē_{−ζ}[· | V̄_{0,k} = w]. Then for any β > 0, a change of measure followed by an application of Hölder's inequality yields

(3.33) Ē_{−ζ,w}[V̄^β_{1,k}] ≤ (w^β/λ̄_k(−ζ)) E[(Ā_{1,k})^{β−ζ}(1 + B̄_1/(wĀ_{1,k}))^β] ≤ ρ_k w^β (t^{−q} E[(1 + B̄_1/(wĀ_{1,k}))^{qβ}])^{1/q},

where ρ_k := (E[(Ā_{1,k})^{p(β−ζ)}])^{1/p}(t/λ̄_k(−ζ)) and 1/p + 1/q = 1. Set β̂ = arg min_α λ(α) and choose β such that p(β − ζ) = β̂, and assume that p > 1 is sufficiently small such that ρ_k < ∞ for all k. Noting that λ(β̂) < 1, we conclude that for t ∈ (1, (λ(β̂))^{−1/p}) and for some constant ρ ∈ (0, 1),

(3.34) lim_{k→∞} λ̄_k(−ζ)ρ_k := t lim_{k→∞} (E[(Ā_{1,k})^{p(β−ζ)}])^{1/p} = t(λ(β̂))^{1/p} < ρ,

where the second equality was obtained by observing that, as k → ∞, L_k ↓ 0 and hence λ̄_k(α) → λ(α), α > 0. Equation (3.34) yields that λ̄_k(−ζ)ρ_k ≤ ρ for all k ≥ k₀, and with this value of ρ, (3.33) yields

(3.35) Ē_{−ζ,w}[V̄^β_{1,k}] ≤ ρw^β/λ̄_k(−ζ) for all k ≥ k₀,

provided that

(3.36) t^{−q} E[(1 + B̄_1/(wĀ_{1,k}))^{qβ}] ≤ 1.

Our next objective is to find a set C = [−M, M] such that for all w ∉ C, (3.36) holds. First assume qβ ≥ 1 and apply Minkowski's inequality to the LHS of (3.36). Then set this quantity equal to one, solve for w and set w = M_k. After some algebra, this yields

(3.37) M_k = (t^{1/β} − 1)^{−1} (E[(B̄_1/Ā_{1,k})^{qβ}])^{1/(qβ)}.

The quantity in parentheses tends to E[(A^{−1}B̄)^{qβ}] as k → ∞. Using the assumption E[(A^{−1}B̄)^α] < ∞ for all α > 0, we conclude that M := sup_k M_k < ∞. If qβ < 1, then a similar expression is obtained for M by using the deterministic inequality |x + y|^β ≤ |x|^β + |y|^β in place of Minkowski's inequality.

To complete the proof, iterate (3.35) with C = [−M, M] (as in the proof of Lemma 3.1) to obtain that

(3.38) Ē_{−ζ}[1_{{K>k−1}} 1_{F_k}] ≤ (ρ/λ̄_k(−ζ))^{k−1} for all k ≥ k₀.

Note that on the set F_k, {V̄_{n,k} : n ≤ k} and {V_n : n ≤ k} agree, and thus {K > k − 1} coincides for these two sequences. Substituting (3.38) into (3.31) yields (3.30), as required.

Finally, the modifications needed when 2ξ < 1 follow along the lines of those outlined in Case 1, so we omit the details.
4. Examples and simulations. In this section we provide several examples illustrating the implementation of our algorithm.

4.1. The ruin problem with stochastic investments. Let the fluctuations in the insurance business be governed by the classical Cramér–Lundberg model,

(4.1) X_t = u + ct − Σ_{n=1}^{N_t} ζ_n,

where u denotes the company's initial capital, c its premium income rate, {ζ_n} the claims losses, and N_t the number of Poisson claim arrivals occurring in [0, t]. Let {ζ_n} be i.i.d. and independent of {N_t}. We now depart from this classical model by assuming that at discrete times n = 1, 2, ..., the surplus capital is invested, earning stochastic returns {R_n}, assumed to be i.i.d. Let L_n := −(X_n − X_{n−1}) denote the losses incurred by the insurance business during the nth discrete time interval. Then the total capital of the insurance company at time n is described by the recursive sequence of equations

(4.2) Y_n = R_n Y_{n−1} − L_n, n = 1, 2, ..., Y_0 = u,

where it is typically assumed that E[log R] > 0 and E[L] < 0. Our objective is to estimate the probability of ruin,

(4.3) ψ(u) := P{Y_n < 0, for some n ∈ Z_+ | Y_0 = u}.

By iterating (4.2), we obtain that Y_n = (R_1 R_2 ⋯ R_n)(Y_0 − L̃_n), where L̃_n := Σ_{i=1}^n L_i/(R_1 ⋯ R_i). Thus ψ(u) = P{L̃_n > u, some n}. Setting L = (sup_{n∈Z_+} L̃_n) ∨ 0, then by an elementary argument [as in Collamore and Vidyashankar (2013b), Section 3], we obtain that L satisfies the SFPE

(4.4) L =_D (AL + B)^+, where A =_D 1/R_1 and B =_D L_1/R_1.

This can be viewed as a special case of Letac's Model E with D := −B/A. Now take

(4.5) A_n = exp{−(μ − σ²/2) − σZ_n} for all n,

where {Z_n} is an i.i.d. sequence of standard Gaussian r.v.'s. It can be seen that ξ = 2μ/σ² − 1 and that, under μ_ξ, log A ∼ Normal(μ − σ²/2, σ²). We set μ = 0.2, σ² = 0.25, c = 1, {ζ_n} ∼ Exp(1), and let {N_t} be a Poisson process with parameter 1/2. We implemented our algorithm to estimate the probabilities of ruin for u = 10, 100, 10³, 10⁴, 10⁵.
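The recursion (4.2) can also be checked directly by crude Monte Carlo, which is the benchmark reported alongside the importance sampling estimates below. The sketch that follows simulates the discrete-time capital process under the stated parameters (μ = 0.2, σ² = 0.25, c = 1, Exp(1) claims, Poisson rate 1/2); the path horizon, number of paths, seed, and function names are our own illustrative choices, and truncating paths at a finite horizon slightly underestimates ψ(u).

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method; adequate for the small claim rate used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ruined(u, mu=0.2, sigma2=0.25, c=1.0, rate=0.5, horizon=1000, rng=random):
    """One path of Y_n = R_n * Y_{n-1} - L_n; True if Y_n < 0 before `horizon`."""
    sigma = math.sqrt(sigma2)
    y = u
    for _ in range(horizon):
        n_claims = poisson(rate, rng)                 # claims in one period
        loss = sum(rng.expovariate(1.0) for _ in range(n_claims)) - c
        # lognormal return with E[log R] = mu - sigma^2/2 > 0
        r = math.exp((mu - sigma2 / 2) + sigma * rng.gauss(0.0, 1.0))
        y = r * y - loss
        if y < 0:
            return True
    return False

rng = random.Random(1)
n_paths = 2000
est = sum(ruined(10.0, rng=rng) for _ in range(n_paths)) / n_paths
```

As u grows, the ruin event becomes rare and the number of crude paths needed explodes, which is precisely what motivates the importance sampling algorithm.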
In all of our simulations, the distribution in Step 1 was based on k = 10⁴, and V_{1000} was taken as an approximation to the limit r.v. V. We arrived at this choice using extensive exploratory analysis and two-sample comparisons using Kolmogorov–Smirnov tests between V_{1000} and other values of V_n, where n = 2000, 5000, 10,000 (with p-values ≥ 0.85). Also, it is worthwhile to point out here that, by Sanov's theorem and Markov chain theory, the difference between
the approximating V_n and V on C is exponentially small, since C is in the center of the distribution of V.

In implementing the algorithm, we chose M = 10 since, arguing as in the proof of Lemma 3.1, we obtain that M_β = min_{i=1,2} M_β^{(i)}, where

(4.6) M_β^{(1)} = inf_{α∈(0,1)} ((‖B‖_{β,α}^α + 1)/(1 − ‖A‖_{β,α}^α))^{1/α}, M_β^{(2)} = inf_{α∈Λ∩[1,∞)} (‖B‖_{β,α} + 1)/(1 − ‖A‖_{β,α}),

and Λ = {α ∈ R : E_β[A^α] < ∞}. (Here ‖·‖_{β,α} denotes the L^α norm under the measure μ_β.) As previously, we consider two cases, β = 0 and β = ξ. For each of these cases, this infimum is computed numerically, yielding M_0 = 10 = M_ξ.

Table 1 summarizes the probabilities of ruin (with M = 10) and the lower and upper bounds of the 95% confidence intervals (LCL, UCL) based on 10⁶ simulations. The confidence intervals in this and other examples in this section are based on the simulations; that is, the lower 2.5% and upper 97.5% quantiles of the simulated values of P{V > u}. We also evaluated the true constant C(u) := P{V > u}u^ξ [which would appear in (1.2) if this expression were exact], and the relative error (RE). Even in the extreme tail, far below the probabilities of practical interest in this problem, our algorithm works effectively and is clearly seen to have bounded relative error. For comparison, we also present the crude Monte Carlo estimates of the probabilities of ruin based on realizations of V_{1000}. We observe that for small values of u, the importance sampling estimates and the crude Monte Carlo estimates are close, which provides an empirical validation of the algorithm for small values of u.

4.2. The ARCH(1) process.
Now consider the ARCH(1) process, which models the squared returns on an asset via the recurrence equation

R²_n = (a + bR²_{n−1}) ζ²_n = A_n R²_{n−1} + B_n, n = 1, 2, ...,

TABLE 1
Importance sampling estimation for the ruin probability with investments, obtained using M = 10 (columns: u, P{V > u}, LCL, UCL, C, RE, crude estimate, for u = 10, 100, 10³, 10⁴, 10⁵)
where A_n = bζ²_n, B_n = aζ²_n, and {ζ_n} is an i.i.d. Gaussian sequence. Setting V_n = R²_n, we see that V := lim_{n→∞} V_n satisfies the SFPE V =_D AV + B, and it is easy to verify that the assumptions of our theorems are satisfied. Then it is of interest to determine P{V > u} for large u.

Next we implement our algorithm to estimate these tail probabilities. As in the previous example, we identify V_{1000} as an approximation to V. Turning to the identification of M, recall that in the previous example, we worked with a sharpened form of the formulas in Lemma 3.1; however, in other examples, this approach may, like Lemma 3.1, yield a poor choice for M. This is due to the fact that these types of estimates for ‖V_n‖_α typically use Minkowski- or Hölder-type inequalities, which are usually not very sharp. We now outline an alternative method for obtaining M and demonstrate that it yields meaningful answers from a practical perspective. In this numerical method, we work directly with the conditional expectation and avoid upper-bound inequalities. We emphasize that this procedure applies to any process governed by Letac's Model E.

Numerical procedure for calculating M. The procedure involves a Monte Carlo method for calculating the conditional expectation appearing in the drift condition, that is, for evaluating

E_β[(V_1/v)^α | V_0 = v] = E_β[(A max{D/v, 1} + B/v)^α],

when β = 0 and β = ξ. The goal is to find an α such that M := max{M_0, M_ξ} is minimized, where M_β satisfies

E_β[(A max{D/v, 1} + B/v)^α] ≤ ρ_β for all v > M_β and some ρ_β ∈ (0, 1).

In this expression, α is chosen such that E_β[A^α] ∈ (0, 1), and hence we expect that ρ_β ∈ (E_β[A^α], 1). Note that M_β depends on the choice of α; thus, we also minimize over all possible α such that E_β[A^α] ∈ (0, 1). Let {(A_i, B_i, D_i) : i ∈ N} denote a collection of i.i.d. r.v.'s having the same distribution as (A, B, D). Then the numerical method for finding an optimal choice of M proceeds as follows.
First, using a root-finding algorithm (with, e.g., Gauss–Hermite quadrature to evaluate the expectation), solve for ξ in the equation E[A^ξ] = 1. Next, for E_β[A^α] < 1, use a Monte Carlo procedure with sample size N to compute E_β[(V_1/v)^α | V_0 = v] and solve for v in the formula

(1/N) Σ_{i=1}^N (A_i max{D_i/v, 1} + B_i/v)^α = ρ_β,

where this quantity is computed in the β-shifted measure for β ∈ {0, ξ} and where ρ_β < 1. Then select α so that it provides the smallest possible value of v. Choose M_β > v for β = 0 and β = ξ. Finally, set M = max{M_0, M_ξ}.
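The two steps above can be sketched in code. For concreteness, we take an ARCH(1)-type specification A = bζ², B = aζ², D = 0 with ζ standard Gaussian, for which E[A^α] has the closed form (2b)^α Γ(α + 1/2)/Γ(1/2); the sketch below uses simple bisection on that closed form in place of Gauss–Hermite quadrature, and calibrates only the β = 0 (unshifted) case. The grids, sample size, tolerances, and function names are our own illustrative choices, not the paper's.

```python
import math
import random

def lam(alpha, b):
    """E[A^alpha] for A = b*zeta^2, zeta ~ N(0,1): (2b)^alpha*Gamma(alpha+1/2)/Gamma(1/2)."""
    return (2.0 * b) ** alpha * math.gamma(alpha + 0.5) / math.gamma(0.5)

def solve_xi(b, lo=1e-3, hi=4.0, tol=1e-12):
    """Step 1: bisect for the positive root xi of E[A^xi] = 1."""
    f = lambda a: lam(a, b) - 1.0
    assert f(lo) < 0.0 < f(hi), "root not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def drift_ratio(samples, v, alpha):
    """Monte Carlo estimate of E[(A*max{D/v,1} + B/v)^alpha] from (A,B,D) draws."""
    return sum((a * max(d / v, 1.0) + b / v) ** alpha
               for a, b, d in samples) / len(samples)

def calibrate_M(samples, alphas, rho, v_grid):
    """Step 2: smallest grid point v with drift ratio <= rho, minimized over alpha."""
    best = None
    for alpha in alphas:
        for v in sorted(v_grid):
            if drift_ratio(samples, v, alpha) <= rho:
                if best is None or v < best:
                    best = v
                break
    return best

# Illustrative numbers: b = 4/5, a = 1, beta = 0 (unshifted draws).
b_par, a_par = 0.8, 1.0
xi = solve_xi(b_par)          # roughly 1.34 for b = 4/5
rng = random.Random(7)
samples = [(b_par * z * z, a_par * z * z, 0.0)
           for z in (rng.gauss(0.0, 1.0) for _ in range(5000))]
M0 = calibrate_M(samples, alphas=[0.25, 0.5], rho=0.9,
                 v_grid=[0.5 * k for k in range(1, 81)])
```

For the ξ-shifted case one would redraw (A_i, B_i, D_i) under μ_ξ (here, ζ² replaced by a Γ(ξ + 1/2, 2) variable), repeat the calibration to obtain M_ξ, and set M = max{M_0, M_ξ}.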
Implementation. We set b = 4/5 and considered the values a = 1, 1.9 × 10⁻⁵. It can be shown that

E[A_n^α] = (2b)^α Γ(α + 1/2)/Γ(1/2).

We solved the equation E[A_n^ξ] = 1 using Gauss–Hermite quadrature to obtain ξ (≈ 1.34). Under the ξ-shifted measure, A_n = bX_n and B_n = aX_n, where X_n ∼ Γ(ξ + 1/2, 2). Using the formulas in (4.6) for M, we obtained [upon taking the limit as δ ↓ 0 and using the Taylor approximation Γ(δ + 1/2) = Γ(1/2) + δΓ′(1/2) + O(δ²)] that M_0 = 0.362 when a = 1, 1.9 × 10⁻⁵, respectively. Moreover, by applying the numerical method we have just outlined, it can be seen that M_ξ = 10. [In contrast, by applying Lemma 3.1 directly, one obtains M_ξ = ∞, since λ(−ξ) = ∞.]

Table 2 summarizes the simulation results for the tail probabilities of the ARCH(1) process based on 10⁶ simulations. We notice a substantial agreement between the crude Monte Carlo estimates and those produced by our algorithm for small values of u. More importantly, we observe that the relative error remains bounded in all of the cases considered, while the simulation results using the state-dependent algorithm in Blanchet, Lam and Zwart (2012) show that the relative error based on their algorithm increases as the parameter u → ∞. When compared with the state-independent algorithm of Blanchet, Lam and Zwart (2012), our simulations give comparable numerical results to those they report, although direct comparison is difficult due to the unquantified role of bias in their formulas. (In contrast, from a numerical perspective, the bias is negligible in our formulas,

TABLE 2
Importance sampling estimation for the tail probability of the ARCH(1) financial process with a = 1 and a = 1.9 × 10⁻⁵ (columns: u, P{V > u}, LCL, UCL, C, RE, crude estimate, for u = 10, ..., 10⁵; crude estimates are not available, NA, for the smallest tail probabilities)
as it involves the convergence of a Markov chain near the center of its distribution, which is known to occur at a geometric rate.) We emphasize that our method also applies to a wider class of problems, as illustrated by the previous example.

Finally, we remark that a variant of the ARCH(1) process is the GARCH(1, 1) financial process, which can be implemented by similar methods. Numerical results for this model are roughly analogous, but further complications arise, which can be addressed as in our preprint under the same title on the Math arXiv. For a further discussion of examples governed by Letac's Model E and its generalizations, see Collamore and Vidyashankar (2013b), Section 2.

5. Proofs of results concerning the running time of the algorithm. The proof of the first estimate will rely on the following.

LEMMA 5.1. Under the conditions of Theorem 2.3, there exist positive constants β and ρ ∈ (0, 1) such that

(5.1) E_ξ[h(V_n) | V_{n−1}] ≤ ρ h(V_{n−1}) on {V_{n−1} ≥ M̄} for some M̄ < ∞,

where h(x) := x^{−β} 1{x > 1} + 1{x ≤ 1}.

PROOF. Assume without loss of generality (w.l.o.g.) that V_{n−1} = v > 1. Then by the strong Markov property,

E_ξ[h(V_n) | V_{n−1} = v] = E_ξ[V_1^{−β} 1{V_1 > 1} | V_0 = v] + P_ξ{V_1 ≤ 1 | V_0 = v}.

Using assumption (2.1), we obtain that the second term on the RHS is o(v^{−ε}), while the first term can be expressed as

v^{−β} E_ξ[(A_1 max{D_1/v, 1} + B_1/v)^{−β} 1{V_1 > 1} | V_0 = v] ∼ v^{−β} E_ξ[A_1^{−β}] as v → ∞.

Next observe that E_ξ[A^{−β}] = λ(ξ − β) < 1 if 0 < β < ξ. Thus, choosing β = ε ∈ (0, ξ), where ε is given as in (2.1), we obtain that the lemma holds for any ρ ∈ (E_ξ[A^{−ε}], 1) and M̄ < ∞ sufficiently large. □

PROOF OF THEOREM 2.3. We will prove (2.2)–(2.4) in three steps, each involving separate ideas and certain preparatory lemmas.

PROOF OF THEOREM 2.3, STEP 1: Equation (2.2) holds. Let M̄ be given as in Lemma 5.1, and assume w.l.o.g. that M̄ ≥ max{M, 1}. Let L = sup{n ∈ Z_+ : V_n ∈ (−∞, M̄]} denote the last exit time of {V_n} from (−∞, M̄].
Then it follows directly from the definitions that K ≤ L on {K < ∞}, where we recall that K is the return time to the C-set. Thus it is sufficient to verify that E_ξ[L] < ∞. To this end, we introduce two sequences of random times. Set J_0 = 0 and K_0 = 0 and, for each i ∈ Z_+,

K_i = inf{n > J_{i−1} : V_n > M̄} and J_i = inf{n > K_i : V_n ∈ (−∞, M̄]}.
Our main interest is in {K_i}, the successive times that the process escapes from the interval (−∞, M̄], and κ_i := K_i − K_{i−1}. Let N denote the total number of times that {V_n} exits (−∞, M̄] and subsequently returns to (−∞, M̄]. Then it follows that

L < Σ_{i=1}^{N+1} κ_i.

Then by the transience of {V_n} in μ_ξ-measure, it follows that E_ξ[N] < ∞. It remains to show that E_ξ[κ_i] < ∞, uniformly in the starting state V_{K_{i−1}} ∈ (M̄, ∞). But note that E_ξ[κ_i] can be divided into two parts: first, the sojourn time that the process {V_n} spends in (M̄, ∞) prior to returning to (−∞, M̄] and, second, the sojourn time in the interval (−∞, M̄] prior to exiting again. Now if K̄ denotes the first return time to (−∞, M̄], then by Lemma 5.1,

P_ξ{K̄ = n | V_0 = v} ≤ ρⁿ h(v)/h(M̄) ≤ ρⁿ/h(M̄).

Hence E_ξ[K̄ 1{K̄ < ∞} | V_0 = v] < ∞, uniformly in v > M̄. Thus, to establish the lemma, it is sufficient to show that E_ξ[N̄ | V_0 = v] < ∞, uniformly in v ∈ (−∞, M̄], where N̄ denotes the total number of visits of {V_n} to (−∞, M̄]. To this end, first note that [−M̄, M̄] is petite. Moreover, it is easy to verify that (−∞, −M̄) is also petite for sufficiently large M̄. Indeed, for large M̄ and V_0 < −M̄, (1.1) implies V_1 = A_1 D_1 + B_1 w.p. p > 0. Thus, {V_n} satisfies a minorization with small set (−∞, −M̄). Consequently (−∞, M̄] is petite and hence uniformly transient. We conclude that E_ξ[N̄] < ∞, uniformly in V_0 ∈ (−∞, M̄]. □

Before proceeding to Step 2, we need a slight variant of Lemma 4.1 in Collamore and Vidyashankar (2013b). In the following, let A^l be a typical ladder height of the process S_n = Σ_{i=1}^n log A_i in its ξ-shifted measure.

LEMMA 5.2. Assume the conditions of Theorem 2.3. Then

(5.2) lim_{u→∞} P_ξ{V_{T_u}/u > y | T_u < K} = P_ξ{V̄ > y} for some r.v. V̄,

where for all y ≥ 0,

(5.3) P_ξ{log V̄ > y} = (1/E_ξ[A^l]) ∫_y^∞ P_ξ{A^l > z} dz.

PROOF. It can be shown that

(5.4) V_{T_u}/u ⇒ V̄ as u → ∞
in μ_ξ-measure, independent of V_0 ∈ C [see Collamore and Vidyashankar (2013b), Lemma 4.1]. Set y > 1. Then by (5.4), P_ξ{V_{T_u}/u > y} → P_ξ{V̄ > y} as u → ∞; and using the independence of this result of its initial state, we likewise have that P_ξ{V_{T_u}/u > y | T_u ≤ K} → P_ξ{V̄ > y} as u → ∞. Hence we conclude (5.2), provided that lim inf_{u→∞} P_ξ{T_u < K} > 0. But by the transience of {V_n}, P_ξ{T_u < K} → P_ξ{K = ∞} > 0 as u → ∞. □

PROOF OF THEOREM 2.3, STEP 2: Equation (2.3) holds. With respect to the measure μ_ξ, it follows by Lemma 9.3 of Siegmund (1985) that

(5.5) T_u/log u → 1/Λ′(ξ) in probability

(since Λ′(ξ) = E_ξ[log A]). Hence, conditional on {T_u < K}, (T_u/log u) → 1/Λ′(ξ) in probability. To show that convergence in probability implies convergence in expectation, it suffices to show that the sequence {T_u/log u} is uniformly integrable.

Let M̄ be given as in Lemma 5.1, and first suppose that M̄ ≤ M and supp(V_n) ⊂ [−M̄, ∞) for all n. Then, conditional on {T_u < K}, T_u > n implies V_i ∈ (M̄, u), i = 1, ..., n. Now apply Lemma 5.1. Iterating (5.1), we obtain E[h(V_n) 1{∩_{i=1}^n {V_i ∉ C}} | V_0] ≤ ρⁿ h(V_0), n = 1, 2, .... Then, using the explicit form of the function h in Lemma 5.1, we conclude that, with β given as in Lemma 5.1,

(5.6) P_ξ{T_u > n | T_u < K} ≤ ρⁿ u^β / P_ξ{T_u < K} for all n.

Now P_ξ{T_u < K} → P_ξ{K = ∞} > 0 as u → ∞. Hence, letting E_ξ^{(u)}[·] denote the expectation conditional on {T_u < K}, we obtain that for some constant c < ∞,

(5.7) E_ξ^{(u)}[T_u/log u; T_u/log u ≥ η] ≤ c ρ^{η log u} u^β,

and for sufficiently large η, the RHS converges to zero as u → ∞. Hence {T_u/log u} is uniformly integrable.

If the assumptions at the beginning of the previous paragraph are not satisfied, then write T_u = L + (T_u − L), where L is the last exit time from the interval (−∞, M̄], as defined in the proof of Theorem 2.3, Step 1. Then (T_u − L) describes the length of the last excursion to level u after exiting (−∞, M̄] forever.
By a repetition of the argument just given, we obtain that (5.6) holds with (T_u − L) in place of T_u; hence {(T_u − L)/log u} is uniformly integrable. Next observe, by the proof of Theorem 2.3, Step 1, that E_ξ[L/log u] → 0 as u → ∞. The result follows. □
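The law of large numbers behind Step 2, namely T_u/log u → 1/Λ′(ξ) under μ_ξ, can be checked empirically. The sketch below simulates the ξ-shifted ARCH(1) chain of Section 4.2 (where, under μ_ξ, A = 0.8X and B = X with X ∼ Γ(ξ + 1/2, 2)) and compares the mean first passage time over level u with log u/Λ′(ξ), estimating Λ′(ξ) = E_ξ[log A] by Monte Carlo. The level u, path count, seed, and the rounded value ξ ≈ 1.34 are our own illustrative choices.

```python
import math
import random

XI = 1.34                 # approximate root of E[A^xi] = 1 for b = 4/5 (Section 4.2)
A_COEF, B_COEF = 0.8, 1.0  # b and a for the ARCH(1) example

def passage_time(u, rng):
    """Steps until V_n = A_n V_{n-1} + B_n first exceeds u, under the shifted measure."""
    v, n = 1.0, 0
    while v <= u:
        x = rng.gammavariate(XI + 0.5, 2.0)   # X ~ Gamma(xi + 1/2, scale 2)
        v = A_COEF * x * v + B_COEF * x
        n += 1
    return n

rng = random.Random(11)
u = math.exp(20.0)
mean_T = sum(passage_time(u, rng) for _ in range(300)) / 300

# Lambda'(xi) = E_xi[log A], estimated by Monte Carlo:
drift = sum(math.log(A_COEF * rng.gammavariate(XI + 0.5, 2.0))
            for _ in range(20000)) / 20000
predicted = math.log(u) / drift
```

The simulated mean passage time agrees with log u/Λ′(ξ) up to the overshoot and small-state effects that the nonlinear renewal analysis of this section controls.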
Turning now to the proof of the last equation in Theorem 2.3, assume for the moment that (V_0/u) = v > 1 (we will later remove this assumption); thus, the process starts above level u, and so its dual measure agrees with its initial measure. Also define

L(z) = inf{n : V_n ≤ z} for any z ≥ 0.

LEMMA 5.3. Let (V_0/u) = v > 1 and t ∈ (0, 1). Then under the conditions of Theorem 2.3,

(5.8) lim_{u→∞} (1/log u) E[L(u^t) | V_0/u = v] = (t − 1)/Λ′(0).

PROOF. For notational simplicity, we will suppress the conditioning on (V_0/u) = v in the proof. We begin by establishing an upper bound. Define

S_n^{(u)} := Σ_{i=1}^n X_i^{(u)}, where X_i^{(u)} := log(A_i + u^{−t}(A_i|D_i| + |B_i|)).

Then it can be easily seen that

(5.9) log V_n ≤ log(vu) + S_n^{(u)} for all n < L(u^t).

Now let L̄_u(u^t) = inf{n : S_n^{(u)} ≤ (t − 1) log u − log v}. Then L(u^t) ≤ L̄_u(u^t) for all u. By Wald's identity, E[S_{L̄_u(u^t)}^{(u)}] = E[X_1^{(u)}] E[L̄_u(u^t)]. Thus, letting O_u := ((t − 1) log u − log v) − S_{L̄_u(u^t)}^{(u)} denote the overjump of {S_n^{(u)}} over a boundary at level (t − 1) log u − log v, we obtain

(5.10) E[L(u^t)] ≤ ((1 − t) log u + log v + E[O_u]) / (−E[X_1^{(u)}]).

Since E[X_1^{(u)}] → Λ′(0) as u → ∞, the required upper bound will be established once we show that

(5.11) lim_{u→∞} (1/log u) E[O_u] = 0.

To establish (5.11), note as in the proof of Lorden's inequality [Asmussen (2003), Proposition V.6.1] that E[O_u] ≤ E[Y_u²]/E[Y_u], where Y_u has the negative ladder height distribution of the process {S_n^{(u)}}. Next observe by Corollary VIII.4.4 of Asmussen (2003) that

(5.12) E[Y_u] → E[Y] as u → ∞,
More informationPositive and null recurrent-branching Process
December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?
More informationLecture Notes on Risk Theory
Lecture Notes on Risk Theory February 2, 21 Contents 1 Introduction and basic definitions 1 2 Accumulated claims in a fixed time interval 3 3 Reinsurance 7 4 Risk processes in discrete time 1 5 The Adjustment
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationProbability and Measure
Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability
More informationComplexity of two and multi-stage stochastic programming problems
Complexity of two and multi-stage stochastic programming problems A. Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205, USA The concept
More informationNormed Vector Spaces and Double Duals
Normed Vector Spaces and Double Duals Mathematics 481/525 In this note we look at a number of infinite-dimensional R-vector spaces that arise in analysis, and we consider their dual and double dual spaces
More informationMarked Point Processes in Discrete Time
Marked Point Processes in Discrete Time Karl Sigman Ward Whitt September 1, 2018 Abstract We present a general framework for marked point processes {(t j, k j ) : j Z} in discrete time t j Z, marks k j
More informationPractical unbiased Monte Carlo for Uncertainty Quantification
Practical unbiased Monte Carlo for Uncertainty Quantification Sergios Agapiou Department of Statistics, University of Warwick MiR@W day: Uncertainty in Complex Computer Models, 2nd February 2015, University
More informationMarkov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.
Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce
More informationAsymptotic properties of the maximum likelihood estimator for a ballistic random walk in a random environment
Asymptotic properties of the maximum likelihood estimator for a ballistic random walk in a random environment Catherine Matias Joint works with F. Comets, M. Falconnet, D.& O. Loukianov Currently: Laboratoire
More informationJUHA KINNUNEN. Harmonic Analysis
JUHA KINNUNEN Harmonic Analysis Department of Mathematics and Systems Analysis, Aalto University 27 Contents Calderón-Zygmund decomposition. Dyadic subcubes of a cube.........................2 Dyadic cubes
More informationChapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries
Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.
More informationOn the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables
On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,
More informationMean-field dual of cooperative reproduction
The mean-field dual of systems with cooperative reproduction joint with Tibor Mach (Prague) A. Sturm (Göttingen) Friday, July 6th, 2018 Poisson construction of Markov processes Let (X t ) t 0 be a continuous-time
More informationExact Simulation of the Stationary Distribution of M/G/c Queues
1/36 Exact Simulation of the Stationary Distribution of M/G/c Queues Professor Karl Sigman Columbia University New York City USA Conference in Honor of Søren Asmussen Monday, August 1, 2011 Sandbjerg Estate
More informationInference For High Dimensional M-estimates. Fixed Design Results
: Fixed Design Results Lihua Lei Advisors: Peter J. Bickel, Michael I. Jordan joint work with Peter J. Bickel and Noureddine El Karoui Dec. 8, 2016 1/57 Table of Contents 1 Background 2 Main Results and
More informationON CONVERGENCE RATES OF GIBBS SAMPLERS FOR UNIFORM DISTRIBUTIONS
The Annals of Applied Probability 1998, Vol. 8, No. 4, 1291 1302 ON CONVERGENCE RATES OF GIBBS SAMPLERS FOR UNIFORM DISTRIBUTIONS By Gareth O. Roberts 1 and Jeffrey S. Rosenthal 2 University of Cambridge
More informationRare-Event Simulation
Rare-Event Simulation Background: Read Chapter 6 of text. 1 Why is Rare-Event Simulation Challenging? Consider the problem of computing α = P(A) when P(A) is small (i.e. rare ). The crude Monte Carlo estimator
More informationFluid Heuristics, Lyapunov Bounds and E cient Importance Sampling for a Heavy-tailed G/G/1 Queue
Fluid Heuristics, Lyapunov Bounds and E cient Importance Sampling for a Heavy-tailed G/G/1 Queue J. Blanchet, P. Glynn, and J. C. Liu. September, 2007 Abstract We develop a strongly e cient rare-event
More information1 Gambler s Ruin Problem
1 Gambler s Ruin Problem Consider a gambler who starts with an initial fortune of $1 and then on each successive gamble either wins $1 or loses $1 independent of the past with probabilities p and q = 1
More informationLecture Notes on Random Walks in Random Environments
Lecture Notes on Random Walks in Random Environments Jonathon Peterson Purdue University February 2, 203 This lecture notes arose out of a mini-course I taught in January 203 at Instituto Nacional de Matemática
More informationIBM Almaden Research Center, San Jose, California, USA
This article was downloaded by: [Stanford University] On: 20 July 2010 Access details: Access Details: [subscription number 731837804] Publisher Taylor & Francis Informa Ltd Registered in England and Wales
More informationThe Codimension of the Zeros of a Stable Process in Random Scenery
The Codimension of the Zeros of a Stable Process in Random Scenery Davar Khoshnevisan The University of Utah, Department of Mathematics Salt Lake City, UT 84105 0090, U.S.A. davar@math.utah.edu http://www.math.utah.edu/~davar
More informationThe exact asymptotics for hitting probability of a remote orthant by a multivariate Lévy process: the Cramér case
The exact asymptotics for hitting probability of a remote orthant by a multivariate Lévy process: the Cramér case Konstantin Borovkov and Zbigniew Palmowski Abstract For a multivariate Lévy process satisfying
More informationUpper and lower bounds for ruin probability
Upper and lower bounds for ruin probability E. Pancheva,Z.Volkovich and L.Morozensky 3 Institute of Mathematics and Informatics, the Bulgarian Academy of Sciences, 3 Sofia, Bulgaria pancheva@math.bas.bg
More informationOn the Central Limit Theorem for an ergodic Markov chain
Stochastic Processes and their Applications 47 ( 1993) 113-117 North-Holland 113 On the Central Limit Theorem for an ergodic Markov chain K.S. Chan Department of Statistics and Actuarial Science, The University
More informationModern Discrete Probability Branching processes
Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson
More information2. Transience and Recurrence
Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times
More informationHANDBOOK OF APPLICABLE MATHEMATICS
HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume II: Probability Emlyn Lloyd University oflancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester - New York - Brisbane
More informationIEOR 6711: Stochastic Models I Fall 2013, Professor Whitt Lecture Notes, Thursday, September 5 Modes of Convergence
IEOR 6711: Stochastic Models I Fall 2013, Professor Whitt Lecture Notes, Thursday, September 5 Modes of Convergence 1 Overview We started by stating the two principal laws of large numbers: the strong
More informationThe range of tree-indexed random walk
The range of tree-indexed random walk Jean-François Le Gall, Shen Lin Institut universitaire de France et Université Paris-Sud Orsay Erdös Centennial Conference July 2013 Jean-François Le Gall (Université
More informationApplied Stochastic Processes
Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................
More informationReinsurance and ruin problem: asymptotics in the case of heavy-tailed claims
Reinsurance and ruin problem: asymptotics in the case of heavy-tailed claims Serguei Foss Heriot-Watt University, Edinburgh Karlovasi, Samos 3 June, 2010 (Joint work with Tomasz Rolski and Stan Zachary
More informationA Note on the Approximation of Perpetuities
Discrete Mathematics and Theoretical Computer Science (subm.), by the authors, rev A Note on the Approximation of Perpetuities Margarete Knape and Ralph Neininger Department for Mathematics and Computer
More informationConnection to Branching Random Walk
Lecture 7 Connection to Branching Random Walk The aim of this lecture is to prepare the grounds for the proof of tightness of the maximum of the DGFF. We will begin with a recount of the so called Dekking-Host
More informationBisection Ideas in End-Point Conditioned Markov Process Simulation
Bisection Ideas in End-Point Conditioned Markov Process Simulation Søren Asmussen and Asger Hobolth Department of Mathematical Sciences, Aarhus University Ny Munkegade, 8000 Aarhus C, Denmark {asmus,asger}@imf.au.dk
More informationOn Finite-Time Ruin Probabilities in a Risk Model Under Quota Share Reinsurance
Applied Mathematical Sciences, Vol. 11, 217, no. 53, 269-2629 HIKARI Ltd, www.m-hikari.com https://doi.org/1.12988/ams.217.7824 On Finite-Time Ruin Probabilities in a Risk Model Under Quota Share Reinsurance
More informationGeneral Glivenko-Cantelli theorems
The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................
More informationTail inequalities for additive functionals and empirical processes of. Markov chains
Tail inequalities for additive functionals and empirical processes of geometrically ergodic Markov chains University of Warsaw Banff, June 2009 Geometric ergodicity Definition A Markov chain X = (X n )
More information