Efficient Simulations for the Exponential Integrals of Hölder Continuous Gaussian Random Fields


Efficient Simulations for the Exponential Integrals of Hölder Continuous Gaussian Random Fields

Jingchen Liu and Gongjun Xu, Columbia University and University of Minnesota

In this paper, we consider a Gaussian random field $f(t)$ living on a compact set $T \subset R^d$ and the computation of the tail probabilities $P(\int_T e^{f(t)}\,dt > e^b)$ as $b \to \infty$. We design asymptotically efficient importance sampling estimators for a general class of Hölder continuous Gaussian random fields. In addition to the variance control, we also analyze the bias, relative to the tail probabilities of interest, caused by the discretization.

1. INTRODUCTION

In this paper, we focus on the design and the analysis of efficient Monte Carlo methods for computing the tail probabilities of integrals of exponential functions of Gaussian random fields living on a compact domain. Suppose that $\{f(t): t \in T\}$ is a continuous Gaussian random field with zero mean and unit variance. That is, for each finite subset $\{t_1,\dots,t_n\} \subset T$, $(f(t_1),\dots,f(t_n))$ is a multivariate Gaussian random vector with $E f(t_i) = 0$ and $Var f(t_i) = 1$ for $i = 1,\dots,n$. The domain $T$ is a $d$-dimensional compact subset of $R^d$. Further conditions will be imposed on $f$ in the later discussions. Define
$$I = \int_T e^{\sigma(t)f(t)+\mu(t)}\,dt, \quad (1)$$
where $\mu$ and $\sigma$ are two deterministic functions and $\sigma$ is strictly positive. Our focus is on the tail probabilities
$$w(b) = P(I > e^b) = P\Big(\int_T e^{\sigma(t)f(t)+\mu(t)}\,dt > e^b\Big) \quad \text{as } b \to \infty. \quad (2)$$

1.1. Motivations

The exponential integral of a Gaussian random field is a central object of several probability models. The integral is the limiting object of the weighted sum of correlated lognormal random variables. The tail event of the latter sum is an important topic. An incomplete list of works is given by [Ahsan 1978; Duffie and Pan 1997; Glasserman et al. 2000; Basak and Shapiro 2001; Deutsch 2004; Foss and Richards 2010]. In portfolio risk analysis, consider a portfolio consisting of $n$ assets $S_1,\dots,S_n$, each of which is associated with a weight, e.g., a
This research is supported in part by NSF CMMI and NSF SES. ACM Transactions on Modeling and Computer Simulation, Vol. 0, No. 0, Article 0, Publication date: 2012.

number of shares, $w_1,\dots,w_n$. One popular model assumes that $(\log S_1,\dots,\log S_n)$ is a multivariate Gaussian random vector. The value of the portfolio, $S = \sum_{i=1}^n w_i S_i$, is then the sum of correlated log-normal random variables. Without loss of generality, we let $\sum_{i=1}^n w_i = n$. One interesting situation is that the portfolio size is large and the asset prices are usually highly correlated. One may employ a latent space approach used in the literature on social networks. More specifically, we construct a Gaussian process $\{f(t): t \in T\}$ and associate each asset $i$ with a latent variable $t_i$ so that $\log S_i = f(t_i)$. Then, the log asset prices fall into a subset of the continuous Gaussian process. Furthermore, there exists a deterministic process $w(t)$ so that $w(t_i) = w_i$. Then, the unit share value of the portfolio is $\frac{1}{n}\sum_i w_i S_i = \frac{1}{n}\sum_i w(t_i)e^{f(t_i)}$. For detailed discussions of latent space modeling, see [Hoff et al. 2002; Snijders 2002; Handcock et al. 2007; Xing et al. 2010].

In the asymptotic regime in which $n \to \infty$ and the correlations among the asset prices become close to one, the subset $\{t_i: i = 1,\dots,n\}$ becomes dense in $T$. Ultimately, we obtain the limit
$$\frac{1}{n}\sum_{i=1}^n w_i S_i \to \int_T w(t)e^{f(t)}h(t)\,dt,$$
where $h(t)$ indicates the limiting spatial distribution of $\{t_i: i = 1,\dots,n\}$ in $T$. Let $\mu(t) = \log w(t) + \log h(t)$. Then the tail probability of the limiting unit share price is $P(\int_T e^{f(t)+\mu(t)}\,dt > e^b)$.

Another application of the current study lies in option pricing. A stylized model for the asset price $S(t)$ as a function of time $t$ is geometric Brownian motion, that is, $S(t) = e^{W(t)}$, where $W(t)$ is a Brownian motion; see [Black and Scholes 1973; Merton 1973]. Then the payoff of an Asian option at the expiration time is a function of the averaged price $\int_0^1 e^{W(t)}\,dt$. For instance, the payoff of an Asian call option with strike price $K$ is $\max(\int_0^1 e^{W(t)}\,dt - K,\, 0)$; the expected payoff of a digital Asian call option is precisely $P(\int_0^1 e^{W(t)}\,dt > K)$.
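The digital Asian probability above is straightforward to approximate by crude Monte Carlo for moderate strikes. The sketch below is our own illustration (the grid size and sample counts are arbitrary choices, not from the paper); it discretizes the Brownian motion on a regular grid and averages the exponentiated path.

```python
import numpy as np

def digital_asian_prob(K, n_steps=200, n_paths=20000, seed=0):
    """Crude Monte Carlo estimate of P( int_0^1 exp(W(t)) dt > K ) for a
    standard Brownian motion W, via a right-endpoint Riemann sum on a grid."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    # Independent Brownian increments; cumulative sums give W on the grid.
    incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(incr, axis=1)
    # Riemann-sum approximation of the pathwise integral of exp(W).
    integral = np.exp(W).sum(axis=1) * dt
    return float((integral > K).mean())
```

The estimate is biased by the time discretization, which foreshadows the bias-control issue the paper analyzes for the field case.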
1.2. Literature and contribution

There is a rich rare-event simulation literature for the sum of independent random variables. The problems are mostly classified into light-tailed and heavy-tailed problems. An incomplete list of recent works includes [Asmussen and Kroese 2006; Dupuis et al. 2007; Blanchet and Glynn 2008; Blanchet and Liu 2008; Blanchet et al. 2008]. For the dependent case, [Snijders 2011] proposes several efficient Monte Carlo estimators for the sum of finitely many correlated lognormal random variables. The current problem is a substantial generalization of this work. We employ a different change of measure that is a mixture of infinitely many exponential

changes of measure, suggesting that the rare event is mostly caused by the abnormal behavior of the random field at one location.

Another related literature concerns the extreme behaviors of Gaussian random fields. The results range from general bounds to sharp asymptotic approximations. An incomplete list of works includes [Landau and Shepp 1970; Marcus and Shepp 1970; Sudakov and Tsirelson 1974; Borell 1975; Tsirelson et al. 1976; Berman 1985; Hüsler 1990; Ledoux and Talagrand 1991; Talagrand 1996]. A few lines of investigation on the supremum norm are given as follows. Assuming a locally stationary structure, the double-sum method [Piterbarg 1996] provides the exact asymptotic approximation of $\sup_T f(t)$ over a compact set $T$, which is allowed to grow as the threshold tends to infinity. For almost surely at least twice differentiable fields, [Adler 1981; Taylor et al. 2005; Adler and Taylor 2007] derive the analytic form of the expected Euler–Poincaré characteristic of the excursion set, which serves as a good approximation of the tail probability of the supremum. The tube method [Sun 1993] takes advantage of the Karhunen–Loève expansion and Weyl's formula. The Rice method [Azais and Wschebor 2005; 2008; 2009] provides an implicit description of $\sup_T f(t)$. The discussions also go beyond Gaussian fields, for instance to Gaussian random fields with random variances [Hüsler et al. 2011]. See also [Adler et al. 2009] for non-Gaussian and heavy-tailed processes. The corresponding rare-event simulations have been studied in [Adler et al. 2012].

The asymptotic tail behaviors of $I$ have been studied recently in the literature, with a focus mostly on analytic approximations of $w(b)$ under smoothness conditions. [Liu 2012] derives the asymptotic approximations of $P(\int_T e^{\sigma f(t)}\,dt > e^b)$ as $b \to \infty$ when $f(t)$ is a three-times differentiable and homogeneous Gaussian random field and $\sigma(t)$ takes a constant value $\sigma$.
[Liu and Xu 2012c] further extends the results to the case when the process has a smooth varying mean function, i.e., $P(\int_T e^{\sigma f(t)+\mu(t)}\,dt > e^b)$. The tail asymptotics of $I$ are difficult to develop when $f(t)$ is non-differentiable. Under such a setting, accurate approximations of $w(b)$ have not yet been developed except for some special cases, such as when $f(t)$ is a Brownian motion; cf. [Yor 1992; Dufresne 2001]. Therefore, rare-event simulation serves as an appealing alternative from the computational point of view in that the design and the analysis do not require very sharp approximations of $w(b)$. This paper, to the authors' best knowledge, is the first analysis of $I$ for a general class of non-differentiable and differentiable fields. The main contribution of this paper is to develop a provably efficient rare-event simulation algorithm to compute $w(b)$. The efficiency of the algorithm only requires that $f(t)$ be uniformly Hölder continuous on the compact domain.

Therefore, the results are applicable to essentially all the Gaussian processes practically in use. Central to the analysis is a change of measure $Q$, in both a continuous form for the theoretical analysis and a discrete form for the simulation. The measure $Q$ mimics the conditional distribution of $f$ given the occurrence of the rare event $\{I > e^b\}$. Importance sampling estimators are then constructed based on the measure $Q$.

An appealing feature of the proposed method is that the specific simulation schemes do not vary substantially under different situations. That is, the choice of the change of measure does not rely much on the specific mean, variance, or covariance structure of the process (such as constant mean/variance, varying mean/variance, multi- and uni-modal mean/variance, etc.). Thus, this change of measure captures the common characteristics of the conditional distributions among all the processes in the class. With a fine tuning of the change of measure, it is conceivable that the efficiency can be further improved by taking into account more refined structures of the process, such as homogeneity, smoothness, maxima of the mean and the variance functions, etc. Nonetheless, a unified efficient simulation scheme is useful, especially when the fine structures of the processes are not easy to obtain. We do not pursue this further improvement in this paper.

The complexity analysis of the proposed estimators consists of two elements. First, since $f$ considered in this paper is continuous, exact simulation of the entire field is usually impossible. Thus, we need to use a discrete object to approximate the continuous field, and the bias caused by the discretization needs to be well controlled relative to $w(b)$. The second part of our analysis is the variance control, that is, to provide a bound on the second moments of the discrete importance sampling estimators. The rest of the paper is organized as follows.
Section 2 provides the construction of our importance sampling algorithm and presents the main results of this paper. Section 3 includes simulation studies. Proofs of our main theorems are given in Section 4. The proofs of supporting lemmas are included in Section 5.

2. MAIN RESULTS

2.1. Preliminaries of rare-event simulations and importance sampling

Throughout this paper, we are interested in computing $w(b) \to 0$ as $b \to \infty$. In the context of rare-event simulations [Asmussen and Glynn 2007, Chapter VI], it is more meaningful to consider the computational error relative to $w(b)$. A well accepted efficiency concept is the so-called weak efficiency, also known as asymptotic efficiency and logarithmic efficiency [Asmussen and Glynn 2007].
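As a toy illustration of these notions (our own example, not the estimator proposed in this paper), consider estimating the Gaussian tail probability $P(X > b)$, $X \sim N(0,1)$, by importance sampling with the exponentially tilted proposal $Q = N(b, 1)$; the estimator is the indicator weighted by the likelihood ratio $dP/dQ$.

```python
import math
import random

def is_tail_estimate(b, n=50000, seed=1):
    """Importance sampling for w(b) = P(X > b), X ~ N(0,1), with proposal
    Q = N(b, 1).  Under Q, L_b = 1{X > b} * dP/dQ(X) is unbiased for w(b)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(b, 1.0)            # sample X from the proposal Q
        if x > b:
            # dP/dQ(x) = exp(-x^2/2) / exp(-(x-b)^2/2) = exp(b^2/2 - b*x)
            total += math.exp(0.5 * b * b - b * x)
    return total / n
```

Because the proposal puts the sampling mean at the threshold, the relative variance of each replicate stays moderate as $b$ grows, which is the kind of behavior the weak-efficiency criterion formalizes.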

Definition 2.1. A Monte Carlo estimator $L_b$ is said to be weakly / asymptotically / logarithmically efficient in estimating $w(b)$ if $E L_b = w(b)$ and
$$\lim_{b\to\infty} \frac{\log E L_b^2}{2\log w(b)} = 1. \quad (3)$$

Suppose that a weakly efficient estimator $L_b$ has been obtained. Let $\{L_b^{(j)}: j = 1,\dots,n\}$ be i.i.d. copies of $L_b$ and let $Z_b = \frac{1}{n}\sum_{j=1}^n L_b^{(j)}$ be the averaged estimator, which has relative mean squared error $Var^{1/2}(L_b)/(n^{1/2}w(b))$. A simple consequence of Chebyshev's inequality yields
$$P\big(|Z_b/w(b) - 1| \ge \eta\big) \le \frac{Var(L_b)}{n\eta^2 w^2(b)}. \quad (4)$$
The limit (3) suggests that for any $\lambda > 0$, $Var(L_b) \le E L_b^2 = o(w^{2-\lambda}(b))$. For any positive $\eta$ and $\delta$, in order to achieve the relative accuracy
$$P\big(|Z_b/w(b) - 1| > \eta\big) < \delta, \quad (5)$$
it is sufficient to generate $n = O(\eta^{-2}\delta^{-1}w^{-\lambda}(b))$ i.i.d. copies of $L_b$ for any $\lambda > 0$.

To construct efficient estimators, in this paper we use importance sampling for the variance reduction; see [Asmussen and Glynn 2007, Chapter V]. It is based on the identity
$$w(b) = E\big[1_{\{I>e^b\}}\big] = E^Q\Big[\frac{dP}{dQ}\,1_{\{I>e^b\}}\Big], \quad (6)$$
where $Q$ is a probability measure on $\mathcal F$ such that $dP/dQ$ is well defined on the set $\{I > e^b\}$. We use $E$ and $E^Q$ to denote the expectations under the measures $P$ and $Q$, respectively. Then, the random variable
$$L_b = 1_{\{I>e^b\}}\frac{dP}{dQ}$$
is an unbiased estimator of $w(b)$ under the measure $Q$. If we choose $Q^* = P(\,\cdot \mid I > e^b)$ to be the conditional distribution, then the corresponding likelihood ratio $dP/dQ^* \equiv w(b)$ on the set $\{I > e^b\}$ has zero variance under $Q^*$. Thus, $Q^*$ is also called the zero-variance change of measure. Note that this change of measure is of no practical value, in that its implementation requires the computation of the probability $w(b)$. Nonetheless, the measure $Q^*$ provides a guideline for the construction of an efficient change of measure to compute the probability $w(b)$. Therefore, the task lies in constructing a change

of measure $Q$ that is a good approximation of $Q^*$. In addition, from the computational point of view, we should also be able to numerically compute $L_b$ and to simulate $f$ from $Q$.

Besides the variance control, another important issue is the bias control. The random fields considered in this paper are continuous processes. Direct simulation is usually not feasible. Therefore, we need to set up an appropriate discretization scheme to approximate the continuous objects. The bias caused by the discretization also needs to be controlled relative to $w(b)$. Suppose that a biased estimator $\tilde L_b$ has been constructed for $w(b)$ such that $E \tilde L_b = \tilde w(b)$. The computation error can then be decomposed as follows:
$$\frac{|\tilde L_b - w(b)|}{w(b)} \le \frac{|\tilde L_b - \tilde w(b)|}{w(b)} + \frac{|\tilde w(b) - w(b)|}{w(b)}.$$
The first term on the right-hand side is controlled by the relative variance of $\tilde L_b$, and the second term is the bias relative to $w(b)$. Both the bias and the variance will be carefully analyzed in the following sections relative to $w(b)$. The overall computational complexity is then the necessary number of i.i.d. replicates of $\tilde L_b$ multiplied by the computational cost to generate one $\tilde L_b$.

2.2. The change of measure and rare-event simulation

We propose a change of measure $Q$ on the continuous sample path space and further propose a discrete version, denoted by $Q_M$, for the simulation purpose, where the subscript $M$ indicates the size of the discretization. The description of $Q$ requires the following quantities. For each $b$, we define $u$ through the identity
$$(2\pi)^{d/2}\,\sigma_*^{-d/2}\,u^{-d/2}\,e^{\sigma_* u} = e^b, \quad (7)$$
where
$$\sigma_* = \sup_{t\in T}\sigma(t).$$
The left-hand side of (7) is of a similar form to the Lambert $W$-function. Note that when $b$ is large, equation (7) generally has two solutions. One is of the order $b/\sigma_*$; the other is close to zero. See Figure 1 for an example of the left-hand side of (7) with $d = 1$ and $\sigma_* = 1$. We choose $u$ to be the larger solution.

Remark 2.2.
In the asymptotic regime in which $b$ tends to infinity, we need $\sup_T\{\sigma(t)f(t)+\mu(t)\}$ to approximately exceed the level $\sigma_* u$ so that the integral is above the

level $e^b$. We provide an intuitive calculation of (7). Conditional on $f(\tau) = u$ for some $\tau$, the conditional process is $f(t) = E(f(t) \mid f(\tau) = u) + g(t)$, where $g$ is a zero-mean Gaussian process. It turns out that the variation of $g$ is much smaller than that of the conditional mean. We therefore approximate the process by $f(t) \approx E(f(t) \mid f(\tau) = u)$. For the special case that $f(t)$ is stationary and differentiable, admitting zero mean and a covariance function $Cov(f(s), f(t+s)) = 1 - |t|^2/2 + o(|t|^2)$ (then $\sigma_* = 1$), one can compute
$$\int_T e^{E(f(t)\mid f(\tau)=u)}\,dt \approx (2\pi)^{d/2}\,u^{-d/2}\,e^{u}.$$
For the precise calculations, see Section 3.4 in [Liu and Xu 2012c]. In fact, the pre-factor $(2\pi)^{d/2}u^{-d/2}$ is not essential in the technical development; weak efficiency also holds if $u$ is defined without it.

Fig. 1. Plot of $(2\pi)^{1/2}u^{-1/2}e^{u}$.

Furthermore, we define
$$\mu_\sigma(t) = \mu(t)/\sigma_* \quad\text{and}\quad u_t = u - \mu_\sigma(t). \quad (8)$$
We characterize the measure $Q$ in two ways. First, we describe the simulation of the process $f$ from $Q$ following a three-step procedure.

(1) Simulate a random index $\tau$ uniformly over $T$ with respect to the Lebesgue measure.
(2) Given the realized $\tau$, simulate $f(\tau) \sim N(u_\tau, 1)$, where $u_\tau$ is defined as in (8).
(3) Simulate the rest of the field $\{f(t): t \neq \tau\}$ from the original conditional distribution under $P$ given $(\tau, f(\tau))$.

The above simulation description induces a measure $Q$. To derive the Radon–Nikodym derivative between $Q$ and $P$, we need to realize that sampling $f$ from the measure $P$ may

also follow a similar three-step procedure, except that in Step (2) we sample $f(\tau)$ from the standard normal distribution. Thus, the random variables $\tau$ and $f$ are independent under $P$. We let $\mathcal F$ be the $\sigma$-field generated by $f$ and $\mathcal F'$ be that generated by $f$ and $\tau$. It is straightforward to verify that the Radon–Nikodym derivative $dQ/dP$ on $\mathcal F'$ is
$$Z(f,\tau) = \frac{\exp\{-\frac12(f(\tau) - u_\tau)^2\}}{\exp\{-\frac12 f(\tau)^2\}}.$$
That is, for each $A \in \mathcal F'$, $Q(A) = E[Z(f,\tau); A]$. Then, for each measurable set $A$ of continuous functions,
$$Q(f\in A) = E\left[\frac{\exp\{-\frac12(f(\tau)-u_\tau)^2\}}{\exp\{-\frac12 f(\tau)^2\}};\ f\in A\right] = E\left\{E\left[\frac{\exp\{-\frac12(f(\tau)-u_\tau)^2\}}{\exp\{-\frac12 f(\tau)^2\}}\ \Big|\ f\right];\ f\in A\right\}. \quad (9)$$
Note that $\tau$ and $f$ are independent under $P$ and $\tau$ is uniform on $T$; thus the conditional expectation given $f$ is written as
$$Z(f) = E\left[\frac{\exp\{-\frac12(f(\tau)-u_\tau)^2\}}{\exp\{-\frac12 f(\tau)^2\}}\ \Big|\ f\right] = \frac{1}{\text{mes}(T)}\int_T \frac{\exp\{-\frac12(f(t)-u_t)^2\}}{\exp\{-\frac12 f(t)^2\}}\,dt,$$
where $\text{mes}(\cdot)$ denotes the Lebesgue measure. We insert the above form into (9) and obtain that $Q(f\in A) = E[Z(f); f\in A]$. Therefore, $Z(f)$ is the Radon–Nikodym derivative between $Q$ and $P$ restricted to $\mathcal F$. Given that we are only interested in the tail event regarding $f$ (not $\tau$), we work with $Z(f)$ most of the time. Slightly abusing notation, we write
$$\frac{dQ}{dP} = Z(f) = \frac{1}{\text{mes}(T)}\int_T \frac{\exp\{-\frac12(f(t)-u_t)^2\}}{\exp\{-\frac12 f(t)^2\}}\,dt = \frac{1}{\text{mes}(T)}\int_T \exp\Big\{-\frac12 u_t^2 + f(t)u_t\Big\}\,dt. \quad (10)$$

Remark 2.3. To better understand the connection between the above simulation procedure and the likelihood ratio (10), we present a discrete analogue for a finite-dimensional multivariate Gaussian random vector $X = (X_1,\dots,X_n)$. For the finite-dimensional case, $\tau$ is uniformly distributed over $\{1,\dots,n\}$. The density function of $X$ under the change of measure is
$$q(x_1,\dots,x_n) = \frac{1}{n}\sum_{\tau=1}^n q_\tau(x_\tau)\,f(x_{-\tau}\mid x_\tau),$$
where $x_{-\tau} = (x_1,\dots,x_{\tau-1},x_{\tau+1},\dots,x_n)$, $q_\tau(x)$ is the sampling distribution of $x_\tau$ given $\tau$, and $f$ is the density under the original measure. Thus, the likelihood ratio is
$$\frac{q(x_1,\dots,x_n)}{f(x_1,\dots,x_n)} = \frac{1}{n}\sum_{\tau=1}^n \frac{q_\tau(x_\tau)}{f(x_\tau)},$$

which is the discrete analogue of (10).

Based on the above discussion, the continuous-version importance sampling estimator takes the form
$$L_b = 1_{\{I > e^b\}}\left(\frac{1}{\text{mes}(T)}\int_T \exp\Big\{-\frac12 u_t^2 + f(t)u_t\Big\}\,dt\right)^{-1},$$
and its second moment equals
$$E^Q\big[L_b^2\big] = E^Q\left[\left(\frac{1}{\text{mes}(T)}\int_T \exp\Big\{-\frac12 u_t^2 + f(t)u_t\Big\}\,dt\right)^{-2};\ I > e^b\right]. \quad (11)$$

The measure $Q$ is constructed such that the behavior of $f$ under $Q$ mimics the tail behavior of $f$ given the rare event $\{I > e^b\}$ under $P$. According to the above simulation procedure, a random variable $\tau$ is first sampled uniformly over $T$; then $f(\tau)$ is simulated with a large mean at level $u_\tau$. Under the zero-variance change of measure, the large value of the integral $I$ is mostly caused by the high excursion of $f(t)$ at one location. The random index $\tau$ searches for the maximum of $f(t)$ over the index set $T$. It is worth emphasizing that $\tau$ is not necessarily the exact maximizer, but should be very close to it. In the case when $f(t)$ is differentiable and strictly stationary with $\mu(t) \equiv 0$ and $\sigma(t) \equiv 1$, the zero-variance change of measure can be more precisely quantified. For the stationary case, $u_t$ is a constant and we write it as $u$. Then, the zero-variance change of measure is approximated as follows:
$$\sup_A \Big|Q^*(A) - P\Big(f\in A \ \Big|\ \sup_T \gamma_u(t) > u\Big)\Big| \to 0 \quad\text{as } b\to\infty,$$
where $Q^* = P(f \in \cdot \mid I > e^b)$ and
$$\gamma_u(t) = f(t) + \frac{\text{tr}(\partial^2 f(t))}{2\sigma u} + \kappa_0.$$
Here $\partial^2 f(t)$ is the Hessian matrix, $\text{tr}$ is the trace operator, and $\kappa_0$ is a constant depending only on the covariance function. See [Liu and Xu 2012a] for a detailed description of the above results. According to the total variation approximation, the high excursion of $I$ is almost the same as the high excursion of $f(t)$, with a small correction depending on the Hessian matrix. The proposed measure $Q$ differs from $Q^*$ mostly in two ways. First, under $Q^*$, the overshoot of $\gamma_u(t)$ over the level $u$ is of order $O(u^{-1})$; under $Q$, the overshoot is of order $O(1)$. Second, the measure $Q$ does not tilt the distribution of $\partial^2 f(t)$. These are the main sources of inefficiency.
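Returning to equation (7): it has no closed-form solution, but its left-hand side is increasing in $u$ for $u > d/(2\sigma_*)$, so the larger root is easy to obtain numerically. A sketch (our own; the bracketing endpoints are heuristic choices, and the work is done on the log scale for stability):

```python
import math

def solve_u(b, d=1, sigma_star=1.0):
    """Larger root u of (2*pi)^{d/2} sigma^{-d/2} u^{-d/2} exp(sigma*u) = e^b,
    found by bisection on the increasing branch u > d / (2*sigma)."""
    def log_lhs(u):
        return (0.5 * d * math.log(2.0 * math.pi)
                - 0.5 * d * math.log(sigma_star)
                - 0.5 * d * math.log(u)
                + sigma_star * u)
    lo = d / (2.0 * sigma_star) + 1e-9        # left end of the increasing branch
    hi = max(2.0 * b / sigma_star + 10.0, lo + 1.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if log_lhs(mid) < b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Note that the left-hand side of (7) attains its minimum at $u = d/(2\sigma_*)$, so a (larger) root exists only once $e^b$ exceeds that minimum value; for very small $b$ the bisection bracket is not valid.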
On the other hand, the Radon–Nikodym derivative $dQ/dP$ takes a very friendly form, so that one can take advantage of Jensen's inequality to prove weak efficiency for a

general class of Gaussian processes, especially for non-differentiable processes, for which there is no quantitative result for the zero-variance change of measure.

2.3. The algorithm and efficiency results

For the implementation, we introduce a suitable discretization scheme on $T$. For any positive integer $N$, let $G_{N,d}$ be the subset of $R^d$
$$G_{N,d} = \Big\{\Big(\frac{i_1}{N}, \frac{i_2}{N},\dots,\frac{i_d}{N}\Big): i_1,\dots,i_d\in\mathbb Z\Big\},$$
where $\mathbb Z$ is the set of integers. That is, $G_{N,d}$ is a regular lattice on $R^d$. For each $t = (t_1,\dots,t_d)\in G_{N,d}$, define
$$T_N(t) = \big\{(s_1,\dots,s_d)\in T:\ s_j \in (t_j - 1/N,\ t_j]\ \text{for } j = 1,\dots,d\big\},$$
that is, the $1/N$-cube intersected with $T$ and upper-cornered at $t$. Furthermore, let $T_N = \{t\in G_{N,d}: \text{mes}(T_N(t)) > 0\}$. Since $T$ is compact, $T_N$ is a finite set. We enumerate the elements of $T_N = \{t_1,\dots,t_M\}$, where $M \approx \text{mes}(T)N^d$. We use
$$w_M(b) = P(I_M > e^b)$$
as an approximation of $w(b)$, where
$$I_M = \sum_{i=1}^M \text{mes}(T_N(t_i))\, e^{\sigma(t_i)f(t_i)+\mu(t_i)}. \quad (12)$$

We now state the main results, which require the following technical conditions.

(A1) $f(t)$ is almost surely continuous with respect to $t$ and admits $E[f(t)] = 0$ and $E[f^2(t)] = 1$ for all $t\in T$;
(A2) There exist $\delta, \kappa_H > 0$ and $\beta \in (0, 1]$ such that, for all $|s - t| < \delta$, the mean and variance functions satisfy
$$|\mu(t) - \mu(s)| + |\sigma(t) - \sigma(s)| \le \kappa_H |s-t|^\beta;$$

(A3) For $s, t \in T$, define the covariance function $C(s,t) = Cov(f(s), f(t))$. For all $|s - s'| < \delta$ and $|t - t'| < \delta$, the covariance function satisfies
$$|C(t,s) - C(t',s')| \le \kappa_H\big(|t-t'|^{2\beta} + |s-s'|^{2\beta}\big).$$

Condition (A1) assumes that $f$ has zero mean and unit variance. For a general Gaussian random field with mean $\mu(t)$ and variance $\sigma^2(t)$, we treat the mean and the standard deviation as additional parameters. Conditions (A2) and (A3) essentially ensure that the process $\mu(t)+\sigma(t)f(t)$ is uniformly Hölder continuous; in particular, for $|s-t| < \delta$, $Var(f(s) - f(t)) \le 2\kappa_H|s-t|^{2\beta}$. These assumptions are weak enough that they accommodate essentially all Gaussian processes practically in use, such as fractional Brownian motion, smooth Gaussian processes, etc. Therefore, the algorithm developed in this paper is suitable for a wide range of applications. The first result controls the relative bias of $w_M(b)$.

THEOREM 2.4. Consider a Gaussian random field $f$ satisfying conditions (A1)-(A3). For any $0 < \epsilon < 1/2$, there exists a constant $\kappa_0$ such that, for any $\eta > 0$, $b > 1$, and lattice size
$$N = \kappa_0^{1/\beta}\,|\log\eta|^{1/\beta}\,\eta^{-1/\beta}\,b^{(2+\epsilon)/\beta},$$
we have
$$\Big|\frac{w_M(b) - w(b)}{w(b)}\Big| < \eta,$$
where $\beta$ is given as in conditions (A2) and (A3).

Thanks to the continuity of $f(t)$, as $N$ tends to infinity the lattice $T_N$ becomes dense in $T$. The finite sum $I_M$ converges in probability to $I$, and therefore $w_M(b)/w(b)$ converges to 1. On the other hand, the convergence of $w_M(b)/w(b)$ is not uniform in $b$. The above theorem provides a lower bound on $N$ such that $w_M(b)/w(b)$ is close enough to unity and the relative bias can be controlled. With $N$ chosen as in Theorem 2.4, we proceed to estimating $w_M(b)$ by importance sampling, which is based on the change of measure proposed in (10). We make the corresponding adaptation under the above discretization $T_N$. In particular, we define a measure $Q_M$ as the discrete version of $Q$ such that $dQ_M/dP$ takes the form
$$\frac{dQ_M}{dP} = \frac{1}{M}\sum_{i=1}^M \frac{e^{-\frac12(f(t_i) - u_{t_i})^2}}{e^{-\frac12 f(t_i)^2}} = \frac{1}{M}\sum_{i=1}^M e^{u_{t_i}f(t_i) - \frac12 u_{t_i}^2}. \quad (13)$$
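As a concrete illustration of $Q_M$ and the likelihood ratio (13), the sketch below (our own; the covariance $C(s,t) = e^{-|s-t|^\alpha}$, the function name, and the jitter term are assumptions, not code from the paper) builds the lattice for $T = [0,1]$, where $d = 1$, $M = N$, and $\text{mes}(T_N(t_i)) = 1/N$, takes $\mu \equiv 0$ and $\sigma \equiv 1$, and draws one replicate of the estimator; the tilting level $u$ is an input.

```python
import math
import numpy as np

def one_replicate(b, u, N=50, alpha=1.0, rng=None):
    """One draw of L_b = 1{I_M > e^b} * dP/dQ_M on T = [0,1] with mu = 0,
    sigma = 1, and the (assumed) covariance C(s,t) = exp(-|s-t|^alpha).
    The tilting level u should be the larger root of equation (7)."""
    rng = rng if rng is not None else np.random.default_rng()
    ts = np.arange(1, N + 1) / N                       # lattice T_N; here M = N
    Sigma = np.exp(-np.abs(ts[:, None] - ts[None, :]) ** alpha)
    # Sample iota uniformly and tilt the field at t_iota: f(t_iota) ~ N(u, 1).
    iota = rng.integers(N)
    w = rng.normal(u, 1.0)
    # Remaining coordinates follow the P-conditional law given f(t_iota).
    # Since Var f(t_iota) = 1, drawing g ~ N(0, Sigma) and setting
    # f = g + Sigma[:, iota] * (w - g[iota]) realizes exactly that law.
    chol = np.linalg.cholesky(Sigma + 1e-10 * np.eye(N))  # jitter for stability
    g = chol @ rng.standard_normal(N)
    f = g + Sigma[:, iota] * (w - g[iota])
    # Indicator {I_M > e^b} and likelihood ratio dP/dQ_M from (13),
    # evaluated on the log scale; u_{t_i} = u since mu = 0 and sigma = 1.
    I_M = np.mean(np.exp(f))                           # mes(T_N(t_i)) = 1/N
    if I_M <= math.exp(b):
        return 0.0
    lw = u * f - 0.5 * u * u
    m = lw.max()
    return N * math.exp(-m) / np.exp(lw - m).sum()
```

Averaging i.i.d. replicates of `one_replicate` estimates $w_M(b)$. The conditional-shift identity used above means one Cholesky factor serves every choice of $\iota$, so a caller running many replicates could hoist the factorization out of the loop.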

The computation of the above likelihood ratio and of the event $\{I_M > e^b\}$ only involves $\{f(t_i): i = 1,\dots,M\}$. For the simulation under the measure $Q_M$, we propose the following two-step algorithm.

Step 1: Simulate $\iota$ uniformly from $\{1,\dots,M\}$ and generate $f(t_\iota)$ from $N(u_{t_\iota}, 1)$. Given $(\iota, f(t_\iota))$, simulate $f(t_1),\dots,f(t_{\iota-1}), f(t_{\iota+1}),\dots,f(t_M)$ from the original conditional distribution under the measure $P$.

Step 2: Compute and output
$$L_b = \frac{1_{\{I_M > e^b\}}}{\frac{1}{M}\sum_{i=1}^M e^{u_{t_i}f(t_i) - \frac12 u_{t_i}^2}}. \quad (14)$$

It is not hard to verify that the above simulation procedure is consistent with the likelihood ratio (13), and thus $L_b = 1_{\{I_M > e^b\}}\,dP/dQ_M$ is an unbiased estimator of $w_M(b)$. The next theorem controls the variance of the estimator $L_b$.

THEOREM 2.5. Suppose that $f$ is a Gaussian random field satisfying conditions (A1)-(A3). If $N$ is chosen as in Theorem 2.4, then
$$\lim_{b\to\infty}\frac{\log E^{Q_M}L_b^2}{2\log w_M(b)} = 1.$$

The above result shows that the estimator $L_b$ in the algorithm is asymptotically efficient in estimating $w_M(b)$. To estimate $w(b)$, we simulate $n$ i.i.d. copies of $L_b$, $\{L_b^{(j)}: j = 1,\dots,n\}$, and the final estimator is $Z_b = \frac{1}{n}\sum_{j=1}^n L_b^{(j)}$. The estimation error is
$$|Z_b - w(b)| \le |w_M(b) - w(b)| + |Z_b - w_M(b)|. \quad (15)$$
The first term is controlled by Theorem 2.4, i.e., $|w_M(b) - w(b)| \le \eta\,w(b)$ if we choose the discretization size $N = O(|\log\eta|^{1/\beta}\eta^{-1/\beta}b^{(2+\epsilon)/\beta})$. The second term of (15) is controlled by the discussion around (4) if we choose the number of replicates $n = O(\eta^{-2}\delta^{-1}w^{-\lambda}(b))$. The simulation of $L_b$ consists of generating a random vector of dimension $M = O(N^d)$. Note that the complexity of computing the eigenvalues and eigenvectors (or the Cholesky decomposition) of an $M$-dimensional matrix is $O(M^3) = O(N^{3d})$. Thus, the total computational complexity to achieve the prescribed accuracy in (5) is $O(N^{3d}\eta^{-2}\delta^{-1}w^{-\lambda}(b))$.

The current choice of the measure $Q$ as in (10) does not depend on the particular form of the covariance structure. When more knowledge is available, we can further tune and adapt the measure $Q$ to more refined structures and improve the efficiency. Notice that the measure $Q$ twists the process $f$ at the random location $\tau$. The main tuning parameters are the distribution of $\tau$ (currently chosen to be uniform) and the distribution of $f(\tau)$ (currently chosen to be $N(u_\tau, 1)$). Therefore, the general form of the change of measure $Q$ is
$$\frac{dQ}{dP} = \int_T h(t)\,\frac{g_t(f(t))}{\varphi_t(f(t))}\,dt,$$
where $h(t)$ is the density of $\tau$, $g_t$ is the sampling density of $f(\tau)$ given that $\tau = t$, and $\varphi_t$ is the density of $f(t)$ under $P$. We may refine the choices of $g_t$ and $h$ to take into account the specific structures of $f$. For instance, if $\mu(t)$ has one unique maximum attained at $t^*$, then it would be more efficient to choose $h(t)$ concentrating around $t^*$. The theoretical analysis also needs to be adapted to the specific choices of $h$ and $g_t$. The most difficult part of the analysis lies in obtaining more accurate asymptotic approximations of $w(b)$ so as to justify stronger types of efficiency. This is particularly challenging when $f(t)$ is not differentiable, and it is beyond the topic of the current paper. In this paper, we stick to the unified simulation scheme, which is applicable to a large class of processes and admits an acceptable efficiency property.

Remark 2.6. One limitation of the current analysis is that the total computational complexity grows exponentially fast with the dimension $d$. This is mainly due to the regular discretization method.
One may alternatively use other numerical methods, such as quasi-Monte Carlo, randomized quasi-Monte Carlo, or Monte Carlo, whose complexities do not depend on the dimension, to approximate the integral $\int_T e^{f(t)}\,dt$. In the literature of quasi-Monte Carlo and randomized quasi-Monte Carlo, low-discrepancy sequences have been developed. These more refined choices may further reduce the bias compared to the regular lattice $G_{N,d}$. For more detailed analysis, see [L'Ecuyer and Munger 2010; L'Ecuyer et al. 2010; L'Ecuyer 2009]. We do not perform a rigorous analysis along this line, as it is beyond the scope of this paper.

3. SIMULATION

To illustrate the proposed algorithm, we first apply it to homogeneous Gaussian random fields $\{f(t), t \in T = [0,1]^d\}$ with dimension $d = 1$ and $2$. For the not-so-small tail probabilities,

we also use crude Monte Carlo to compute them as a validation of the importance sampling algorithm. For each case, we assume that $f$ has zero mean and covariance function
$$C(s,t) = e^{-|t-s|^\alpha}, \quad (16)$$
where $|\cdot|$ is the $L_2$-norm and $\alpha$ is taken to be $1$ or $2$, corresponding to different covariance structures of $f$. When $\alpha = 2$, $f$ is infinitely differentiable; when $\alpha = 1$, $f$ is non-differentiable. We discretize $T$ following the procedure in Section 2.2. For $d = 1$, we take $T_N = \{i/100: i = 1,\dots,100\}$ with discretization size $N = 100$. The detailed simulation is described in the following steps.

1. Generate a random variable $\iota \sim$ Uniform$\{1, 2,\dots,100\}$.
2. Simulate $f(\iota/100) \sim N(u, 1)$, where $u$ is calculated from equation (7).
3. Given $\iota$ and $f(\iota/100)$, simulate $\{f(i/100): i = 1,\dots,\iota-1, \iota+1,\dots,100\}$ under the covariance structure specified in (16).

First, we consider the one-dimensional case with constant $\sigma(t)$ and $\mu(t)$. Let $\mu(t) = 0$ and $\sigma(t) = 1$. Then the tail probability of interest takes the form $w(b) = P(\int_0^1 e^{f(t)}\,dt > e^b)$. The estimated tail probabilities $w(b)$, along with the estimated standard deviations $Std_Q(L_b) = \sqrt{Var_Q(L_b)}$, are shown in Table I. All the results are based on $10^4$ independent simulations. The standard deviation of the final estimate in the column "Est." is the reported standard deviation in the column "Std." divided by $100$. Comparing the simulation results for $\alpha = 1$ and $2$, we can see that the algorithm has a smaller relative error when $\alpha = 2$. The CPU time to generate $10^4$ samples is less than one second. To validate the simulation results, we use crude Monte Carlo for $b = 3$ and $5$. Based on $10^6$ independent simulations, for $b = 3$ the estimated tail probabilities are 4.7e-4 (Std. 2e-5) and 8.2e-4 (Std. 3e-5) for $\alpha = 1$ and $2$, respectively; based on $10^9$ independent simulations, for $b = 5$ the estimated tail probabilities are 1.2e-8 (Std. 3e-9) and 7.7e-8 (Std. 9e-9) for $\alpha = 1$ and $2$, respectively. These results are consistent with those computed by the importance sampling estimators. We also consider non-constant means and variances.
In particular, we choose $\sigma(t) = 1 - |t - 0.5|^2$ and $\mu(t) = 2t$. The corresponding simulation results for

Table I. Estimates of $w(b)$ on $T = [0,1]$, with $\sigma(t) = 1$ and $\mu(t) = 0$, based on $10^4$ independent simulations; entries (Est., Std., Std./Est.) are reported for $b = 3, 5, 7$ under $\alpha = 1$ and $\alpha = 2$. The standard deviation of the estimate is Std./100.

$w(b) = P(\int_0^1 e^{\sigma(t)f(t)+\mu(t)}\,dt > e^b)$ are shown in Table II. The CPU time to generate $10^4$ samples is less than one second. For $b = 3$, the crude Monte Carlo estimator based on $10^6$ independent simulations gives estimated tail probabilities 1.4e-3 (Std. 4e-5) and 2.3e-3 (Std. 5e-5) for $\alpha = 1$ and $\alpha = 2$, respectively; for $b = 5$, the crude Monte Carlo estimator based on $10^9$ independent simulations gives 1.8e-8 (Std. 4e-9) and 1.4e-7 (Std. 1e-8) for $\alpha = 1$ and $\alpha = 2$, respectively.

For the case $d = 2$, we start with constant mean and variance, $\mu(t) = 0$ and $\sigma(t) = 1$. Then the tail probability of interest takes the form $w(b) = P(\int_{[0,1]^2} e^{f(t)}\,dt > e^b)$. Similarly, we discretize $T$ and take $T_N = \{(i/100, j/100): i, j = 1,\dots,100\}$ with discretization size $N = 100$. Table III shows the estimated tail probabilities $w(b)$ along with $Std_Q(L_b)$. In addition, we perform crude Monte Carlo for $b = 3$. The estimated tail probabilities, based on $10^5$ independent simulations, are 1.8e-4 (Std. 4e-5) and 5.3e-4 (Std. 7e-5) for $\alpha = 1$ and $\alpha = 2$, respectively. Furthermore, Table IV gives estimated tail probabilities for the non-constant functions $\sigma(t) = 1 - \|t - (0.5, 0.5)\|_2$ and $\mu(t) = 2\|t - (1,1)\|_1$, where $\|\cdot\|_1$ and $\|\cdot\|_2$ are the $L_1$- and $L_2$-norms, respectively. The CPU time to generate $10^4$ samples varies from 1 to 5 minutes. For $b = 3$, crude Monte Carlo based on $10^5$ independent simulations gives 3.3e-3 (Std. 2e-4) and 5.6e-3 (Std. 2e-4) for $\alpha = 1$ and $\alpha = 2$, respectively.

The simulation results show that the coefficients of variation $Std_Q(L_b)/w(b)$ increase as the tail probabilities become smaller. Nonetheless, the coefficients of variation stay reasonably small when the probability is as small as $10^{-7}$.
he continuity of the process and the dimension do affect the empirical performance of the algorithm. More precisely, the algorithm admits smaller coefficients of variation when the process is more continuous corresponding to a larger value of α and the domain is of a lower dimension. In addition, for stationary processes, the algorithm has a slightly better performance than the nonstationary cases. his is because, for the stationary cases, the uniform distribution of τ is closer to the distribution of the maximum of f under Q. ACM ransactions on Modeling and Computer Simulation, Vol. 0, No. 0, Article 0, Publication date: 202.
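The crude Monte Carlo comparisons above can be reproduced in outline as follows. This is a minimal sketch, not the paper's algorithm: it assumes an illustrative covariance $C(s,t)=e^{-|s-t|^\alpha}$ (the exact covariance used for the tables is not specified in this excerpt) and a plain Riemann-sum discretization of the integral.

```python
import numpy as np

def crude_mc(b, alpha=1.0, n_grid=100, n_sim=10_000, seed=0):
    """Crude Monte Carlo estimate of w(b) = P(int_0^1 e^{f(t)} dt > e^b),
    where f is a zero-mean, unit-variance Gaussian field on [0, 1].
    The covariance C(s, t) = exp(-|s - t|**alpha) is an illustrative choice.
    Returns (estimate, standard error)."""
    rng = np.random.default_rng(seed)
    t = (np.arange(n_grid) + 1.0) / n_grid                  # grid points t_i
    C = np.exp(-np.abs(t[:, None] - t[None, :]) ** alpha)   # covariance matrix
    L = np.linalg.cholesky(C + 1e-8 * np.eye(n_grid))       # jitter for stability
    f = rng.standard_normal((n_sim, n_grid)) @ L.T          # n_sim field samples
    I_M = np.exp(f).mean(axis=1)                            # Riemann sum of e^{f(t)}
    hits = I_M > np.exp(b)
    return hits.mean(), hits.std(ddof=1) / np.sqrt(n_sim)
```

As the comparisons in the text illustrate, crude Monte Carlo needs on the order of $1/w(b)$ samples before it sees any hits, which is why $10^9$ runs were needed at $b=5$; the importance sampling estimator avoids this blow-up.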

Table II. Estimates of $w(b)$ on $T=[0,1]$ with $\sigma(t)=\|t-0.5\|_2$ and $\mu(t)=2\|t-1\|_1$, based on $10^4$ independent simulations, for $\alpha=1$ and $\alpha=2$ (columns: $b$, Est., Std., Std./Est.). The standard deviation of the estimate is Std./100.

Table III. Estimates of $w(b)$ on $T=[0,1]^2$ with $\sigma(t)=1$ and $\mu(t)=0$, based on $10^4$ independent simulations, for $\alpha=1$ and $\alpha=2$ (columns: $b$, Est., Std., Std./Est.). The standard deviation of the estimate is Std./100.

Table IV. Estimates of $w(b)$ on $T=[0,1]^2$ with $\sigma(t)=\|t-(0.5,0.5)\|_2$ and $\mu(t)=2\|t-(1,1)\|_1$, based on $10^4$ independent simulations, for $\alpha=1$ and $\alpha=2$ (columns: $b$, Est., Std., Std./Est.). The standard deviation of the estimate is Std./100.

4. PROOF OF THE THEOREMS

The proofs of the theorems require several supporting lemmas. To smooth the discussion, we state them at the places where they are used and delay their proofs to Section 5.

PROOF OF THEOREM 2.4. For any $\eta,\epsilon>0$ and a large constant $\kappa_0>0$ to be determined later, let
\[
L=\Big\{f\in C(T):\ \sup_{s,t:\,|t-s|\le d/N}\big|\sigma(s)f(s)+\mu(s)-\sigma(t)f(t)-\mu(t)\big|\le \eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\Big\},\tag{17}
\]
where $C(T)$ is the set of all continuous functions on $T$ and $N$ is chosen as in the statement of the theorem. The following lemma shows that we only need to focus on the set $L$.

LEMMA 4.1. Under the conditions in Theorem 2.4, for any $\epsilon>0$ there exists a constant $\kappa_0$ such that, for any $\eta>0$, $b>1$, and $N=\kappa_0^{1/\beta}|\log\eta|^{1/\beta}\eta^{-1/\beta}b^{(2+2\epsilon)/\beta}$, we have
\[
P(I>e^b,\ L^c)\le \eta\,w(b),\qquad P(I_M>e^b,\ L^c)\le \eta\,w(b).\tag{18}
\]

With $N$ chosen as in Lemma 4.1, we have
\[
|w_M(b)-w(b)|\le P(I_M>e^b,\ I<e^b,\ L)+P(I_M<e^b,\ I>e^b,\ L)+2\eta\,w(b).\tag{19}
\]
On the set $L$, we have
\[
|I_M-I|=\Big|\sum_{i=1}^M\int_{T_N(t_i)}\big[e^{\sigma(t_i)f(t_i)+\mu(t_i)}-e^{\sigma(t)f(t)+\mu(t)}\big]\,dt\Big|
\le 2\sum_{i=1}^M \mathrm{mes}(T_N(t_i))\,e^{\sigma(t_i)f(t_i)+\mu(t_i)}\sup_{t:\,|t-t_i|\le d/N}\big|\sigma(t_i)f(t_i)+\mu(t_i)-\sigma(t)f(t)-\mu(t)\big|
\le 2\eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\,I_M.
\]
Recall that $t_i$ is the corner point of $T_N(t_i)$. The first inequality in the above display is an application of Taylor's expansion. A similar argument yields that
\[
|I_M-I|\ \le\ 2\eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\,I.
\]
Thus, the first term in (19) is bounded by
\[
P(I_M>e^b,\ I<e^b,\ L)\ \le\ P\big(e^b(1-2\eta\kappa_0^{-1/3}b^{-(1+\epsilon)})<I<e^b,\ L\big),
\]
and the second term is bounded by
\[
P(I_M<e^b,\ I>e^b,\ L)\ \le\ P\big(e^b<I<e^b(1+2\eta\kappa_0^{-1/3}b^{-(1+\epsilon)}),\ L\big).
\]

We insert the above bounds back into (19). There exists some constant $c>0$ such that
\[
|w_M(b)-w(b)|\ \le\ P\big(b+\log(1-2\eta\kappa_0^{-1/3}b^{-(1+\epsilon)})<\log I<b+\log(1+2\eta\kappa_0^{-1/3}b^{-(1+\epsilon)}),\ L\big)+2\eta\,w(b)
\le\ c\,\eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\,F'\big(b-c\eta\kappa_0^{-1/3}b^{-(1+\epsilon)}\big)+2\eta\,w(b),\tag{20}
\]
where $F'(x)$ is the probability density function of $\log I$. The following lemma provides an upper bound on the density $F'(x)$.

LEMMA 4.2. Under the conditions of Theorem 2.4, let $F'(x)$ be the probability density function of $\log I$. Then $F'(x)$ exists almost everywhere. Moreover, for all $\epsilon>0$ and $\lambda>0$,
\[
F'(x)=o\big(x^{1+\epsilon/2}\big)\,w\big(x+x^{-\lambda}\big)\qquad\text{as }x\to\infty.\tag{21}
\]

We apply Lemma 4.2 to (20) by setting $x=b-c\eta\kappa_0^{-1/3}b^{-(1+\epsilon)}$. Furthermore, we choose $\lambda$ small enough that $x+x^{-\lambda}=b$, and thus
\[
F'\big(b-c\eta\kappa_0^{-1/3}b^{-(1+\epsilon)}\big)\ \le\ o\big(b^{1+\epsilon/2}\big)\,w(b).
\]
We insert this bound into the right-hand side of (20) and obtain that, for $\kappa_0$ large enough,
\[
|w_M(b)-w(b)|\ \le\ c\,\kappa_0^{-1/3}\eta\,b^{-(1+\epsilon)}\,o\big(b^{1+\epsilon/2}\big)\,w(b)+2\eta\,w(b)\ \le\ 3\eta\,w(b).
\]
Then we can redefine $\eta$ and $\kappa_0$ and reach the conclusion.

PROOF OF THEOREM 2.5.

The main idea. The key element of this proof is an application of Jensen's inequality: for all $u\ge1$ and any values $f(t_i)$,
\[
\frac1M\sum_{i=1}^M e^{u f(t_i)}\ \ge\ \Big[\frac1M\sum_{i=1}^M e^{f(t_i)}\Big]^u.
\]
Thus, if $\frac1M\sum_{i=1}^M e^{f(t_i)}>e^b$, then $\frac1M\sum_{i=1}^M e^{u f(t_i)}>e^{ub}$. The technical proof below writes the likelihood ratio and $I_M$ in a form to which this inequality is applicable; we are then able to develop an upper bound on the second moment of the likelihood ratio on the set $\{I_M>e^b\}$.
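The Jensen step above can be checked numerically. The following sketch verifies, for arbitrary values $f(t_i)$ and several $u\ge1$, the power-mean inequality $\frac1M\sum_i e^{u f(t_i)}\ge\big(\frac1M\sum_i e^{f(t_i)}\big)^u$; note that for $0<u<1$ the inequality would reverse.

```python
import numpy as np

# Check (1/M) sum_i e^{u f_i} >= [(1/M) sum_i e^{f_i}]^u for u >= 1,
# a consequence of the convexity of x -> x^u on the positive half-line.
rng = np.random.default_rng(1)
f = rng.standard_normal(50)                # arbitrary field values f(t_i)
for u in (1.0, 1.7, 3.0, 10.0):
    lhs = np.mean(np.exp(u * f))           # tilted empirical mean
    rhs = np.mean(np.exp(f)) ** u          # u-th power of the plain mean
    assert lhs >= rhs * (1 - 1e-12)        # Jensen's inequality
```

In particular, if the plain mean exceeds $e^b$ then the tilted mean exceeds $e^{ub}$, which is exactly how the inequality is used in the proof.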

The technical proof. We first present a lemma that provides crude but useful bounds on $w(b)$.

LEMMA 4.3. Under Conditions A1-A3, there exist constants $c_0$, $c_1$ and $c_2$ such that
\[
\exp\Big(-\frac{b^2}{2\sigma_T^2}-c_1 b\log b+c_0\Big)\ \le\ w(b)\ \le\ \exp\Big(-\frac{b^2}{2\sigma_T^2}+c_2 b\Big).
\]
Consequently,
\[
\lim_{b\to\infty}\frac{\log w(b)}{b^2}=-\frac{1}{2\sigma_T^2}.
\]

Under the change of measure, the discrete likelihood ratio is
\[
\frac{dQ_M}{dP}=\frac1M\sum_{i=1}^M\frac{e^{-(f(t_i)-u_{t_i})^2/2}}{e^{-f(t_i)^2/2}}
=\frac1M\sum_{i=1}^M e^{-u_{t_i}^2/2+f(t_i)u_{t_i}}.
\]
Note that most cells $T_N(t_i)$ are rectangles; only a few on the boundary of $T$ are not. If we let $\kappa=2\,\mathrm{mes}(T)$, then $I_M$ is bounded from above:
\[
\kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}\ \ge\ \sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}\,\mathrm{mes}(T_N(t_i))\ =\ I_M;
\]
that is, the cells $T_N(t_i)$ on the boundary of $T$ are replaced by rectangles. Thus, the second moment of the estimator is bounded by
\[
E^{Q_M}\Big[\Big(\frac{dP}{dQ_M}\Big)^2;\ I_M>e^b\Big]\ \le\ M^2\,E^{Q_M}\Big[\Big(\sum_{i=1}^M e^{-u_{t_i}^2/2+f(t_i)u_{t_i}}\Big)^{-2};\ \kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}>e^b\Big].\tag{22}
\]
For the next step, we wish to take the term $-\tfrac12 u_{t_i}^2$ in the exponent out of the expectation. Note that $\tfrac12 u_{t_i}^2\le \tfrac12 u^2_{\min_t\mu_\sigma(t)}$, and we continue the above calculation:
\[
\cdots\ \le\ M^2\,e^{u^2_{\min_t\mu_\sigma(t)}}\,E^{Q_M}\Big[\Big(\sum_{i=1}^M e^{f(t_i)u_{t_i}}\Big)^{-2};\ \kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}>e^b\Big].\tag{23}
\]
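The discrete likelihood ratio above corresponds to a mixture change of measure: pick $\tau$ uniformly on the grid and shift the mean of $f$ toward $u\,C(\cdot,\tau)$. The sketch below implements this estimator for a simplified setting; all of the following are illustrative assumptions rather than the paper's exact algorithm: $\sigma\equiv1$, $\mu\equiv0$, a constant tilting level $u_t\equiv u=b$, and the covariance $C(s,t)=e^{-|s-t|}$.

```python
import numpy as np

def _logsumexp(a):
    # Numerically stable log(sum(exp(a))).
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def build_field(n_grid=100):
    # Illustrative unit-variance covariance C(s, t) = exp(-|s - t|) on [0, 1].
    t = (np.arange(n_grid) + 1.0) / n_grid
    C = np.exp(-np.abs(t[:, None] - t[None, :]))
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n_grid))
    return C, L

def is_estimate(b, n_sim=10_000, seed=0):
    """Importance sampling estimate of w_M(b) = P((1/M) sum_i e^{f(t_i)} > e^b)
    under dQ_M/dP = (1/M) sum_i exp(u f(t_i) - u^2/2), with the crude tilting
    level u = b standing in for the paper's choice of u."""
    C, L = build_field()
    M = C.shape[0]
    u = b
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_sim):
        j = rng.integers(M)                           # tau uniform on the grid
        f = u * C[:, j] + L @ rng.standard_normal(M)  # N(u C(., tau), C) under Q_M
        if _logsumexp(f) - np.log(M) > b:             # event {I_M > e^b}
            # dP/dQ_M = M / sum_i exp(u f(t_i) - u^2/2), computed in log scale
            total += np.exp(np.log(M) - _logsumexp(u * f - 0.5 * u * u))
    return total / n_sim
```

Averaging $(dP/dQ_M)\,\mathbf 1\{I_M>e^b\}$ over draws from $Q_M$ gives an unbiased estimate of $w_M(b)$; the second-moment bound developed in this proof is what makes such an estimator asymptotically efficient.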

We now split the set $\{t_1,\dots,t_M\}$ into $\{t_i: f(t_i)>0\}$ and $\{t_i: f(t_i)\le0\}$. Furthermore, there exists $\kappa_1$ such that
\[
N^{-d}\sum_{i:\,f(t_i)\le0} e^{\sigma(t_i)f(t_i)+\mu(t_i)}\ \le\ \kappa_1.
\]
Thus, there exists $\delta_0\in(0,1)$ such that, for $b$ sufficiently large and on the set $\{\kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}>e^b\}$, we have $\kappa_1\le\delta_0 e^b$ and
\[
\sum_{i:\,f(t_i)>0} e^{\sigma(t_i)f(t_i)+\mu(t_i)}\ \ge\ (1-\delta_0)\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}.\tag{24}
\]
Therefore, $\kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}>e^b$ implies that
\[
\frac1M\sum_{i=1}^M e^{\sigma_T f(t_i)+\max_t\mu(t)}\ \ge\ \frac1M\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}\ \ge\ \frac{(1-\delta_0)N^d}{M\kappa}\,e^b.\tag{25}
\]
We now consider the behavior of the likelihood ratio on the set $\{\kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}>e^b\}$. For $f(t)>0$, we have
\[
u_t f(t)\ \ge\ u_{\max_t\mu_\sigma(t)} f(t)=\frac{u_{\max_t\mu_\sigma(t)}}{\sigma_T}\big[\sigma_T f(t)+\max_t\mu(t)\big]-\frac{u_{\max_t\mu_\sigma(t)}\max_t\mu(t)}{\sigma_T}.
\]
For $u$ large enough, we have
\[
\frac1M\sum_{i=1}^M e^{u_{t_i}f(t_i)}\ \ge\ \frac1M\sum_{i:\,f(t_i)>0} e^{u_{\max_t\mu_\sigma(t)}f(t_i)}
=\frac1M\sum_{i:\,f(t_i)>0}\exp\Big\{\frac{u_{\max_t\mu_\sigma(t)}}{\sigma_T}\big[\sigma_T f(t_i)+\max_t\mu(t)\big]-\frac{u_{\max_t\mu_\sigma(t)}\max_t\mu(t)}{\sigma_T}\Big\}.\tag{26}
\]
Noting that the summands with $f(t_i)\le0$ contribute a bounded amount, we continue the above calculations and obtain
\[
\frac1M\sum_{i=1}^M e^{u_{t_i}f(t_i)}\ \ge\ e^{-u_{\max_t\mu_\sigma(t)}\max_t\mu(t)/\sigma_T}\,\frac1M\sum_{i=1}^M\exp\Big\{\frac{u_{\max_t\mu_\sigma(t)}}{\sigma_T}\big[\sigma_T f(t_i)+\max_t\mu(t)\big]\Big\}.
\]

We apply Jensen's inequality to the above summation and obtain that
\[
\frac1M\sum_{i=1}^M e^{u_{t_i}f(t_i)}\ \ge\ e^{-u_{\max_t\mu_\sigma(t)}\max_t\mu(t)/\sigma_T}\Big[\frac1M\sum_{i=1}^M e^{\sigma_T f(t_i)+\max_t\mu(t)}\Big]^{u_{\max_t\mu_\sigma(t)}/\sigma_T}.
\]
We apply the result in (25) and continue the above calculations. There exist $\kappa_2,\kappa_3>0$ such that, on the set $\{\kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}>e^b\}$,
\[
\frac1M\sum_{i=1}^M e^{u_{t_i}f(t_i)}\ \ge\ e^{-\kappa_2 u}\Big[\frac{(1-\delta_0)N^d}{M\kappa}\,e^b\Big]^{u_{\max_t\mu_\sigma(t)}/\sigma_T}\ \ge\ e^{-\kappa_3 u\log b+u^2}.\tag{27}
\]
For the last step, we use the definition of $u$ as in (7) and obtain that $|\sigma_T u-b|\le\kappa_4\log b$ for some $\kappa_4>0$, and further $\lim_{b\to\infty}\sigma_T u/b=1$. Combining (22), (23), and (27), there exists a constant $\kappa_5>0$ such that
\[
M^2\,e^{u^2_{\min_t\mu_\sigma(t)}}\,E^{Q_M}\Big[\Big(\sum_{i=1}^M e^{f(t_i)u_{t_i}}\Big)^{-2};\ \kappa N^{-d}\sum_{i=1}^M e^{\sigma(t_i)f(t_i)+\mu(t_i)}>e^b\Big]\ \le\ e^{-\frac{b^2}{\sigma_T^2}+\kappa_5 b\log b}.
\]
Combining the above results with Theorem 2.4 and Lemma 4.3, we have
\[
\liminf_{b\to\infty}\frac{\log E^{Q_M}[L_b^2]}{2\log w_M(b)}\ \ge\ \liminf_{b\to\infty}\frac{-b^2/\sigma_T^2+\kappa_5 b\log b}{-b^2/\sigma_T^2}\ =\ 1.
\]
On the other hand, Jensen's inequality gives $\log E^{Q_M}[L_b^2]\ge 2\log w_M(b)$, and therefore
\[
\limsup_{b\to\infty}\frac{\log E^{Q_M}[L_b^2]}{2\log w_M(b)}\ \le\ 1.
\]

5. PROOFS OF LEMMAS

In this section, we present the proofs of the supporting lemmas used in the main proofs. The first lemma is known as the Borell-TIS lemma, proved independently by [Borell 1975; Tsirelson et al. 1976].

LEMMA 5.1 (BORELL-TIS). Let $f(t)$, $t\in\mathcal U$ ($\mathcal U$ a parameter set), be a mean zero Gaussian random field that is almost surely bounded on $\mathcal U$. Then $E[\sup_{t\in\mathcal U} f(t)]<\infty$, and
\[
P\Big(\sup_{t\in\mathcal U}f(t)-E\big[\sup_{t\in\mathcal U}f(t)\big]\ge b\Big)\ \le\ \exp\Big(-\frac{b^2}{2\sigma_{\mathcal U}^2}\Big),
\]
where $\sigma_{\mathcal U}^2=\sup_{t\in\mathcal U}\mathrm{Var}[f(t)]$.

The Borell-TIS lemma provides a very general bound on the tail probability:
\[
P\Big(\sup_{t\in\mathcal U}f(t)>b\Big)\ \le\ \exp\Big(-\frac{\big(b-E[\sup_{t\in\mathcal U}f(t)]\big)^2}{2\sigma_{\mathcal U}^2}\Big).
\]
In most cases, $E[\sup_{t\in\mathcal U}f(t)]$ is much smaller than $b$; thus, for $b$ sufficiently large, the tail probability can be further bounded by
\[
P\Big(\sup_{t\in\mathcal U}f(t)>b\Big)\ \le\ \exp\Big(-\frac{b^2}{4\sigma_{\mathcal U}^2}\Big).
\]
The following result by [Dudley 1973] (cf. Theorem 6.7 in [Adler et al. 2012]) is often used to control $E[\sup_{t\in\mathcal U}f(t)]$.

LEMMA 5.2. Let $\mathcal U$ be a compact subset of $\mathbb R^n$, and let $\{f(t): t\in\mathcal U\}$ be a mean zero, continuous Gaussian random field. Define the canonical metric $d$ on $\mathcal U$ by
\[
d(s,t)=\sqrt{E\big[f(t)-f(s)\big]^2},
\]
and put $\mathrm{diam}(\mathcal U)=\sup_{s,t\in\mathcal U}d(s,t)$, which is assumed to be finite. Then there exists a finite universal constant $\kappa>0$ such that
\[
E\big[\max_{t\in\mathcal U}f(t)\big]\ \le\ \kappa\int_0^{\mathrm{diam}(\mathcal U)/2}\big[\log\mathcal N(\varepsilon)\big]^{1/2}\,d\varepsilon,
\]
where the entropy $\mathcal N(\varepsilon)$ is the smallest number of $d$-balls of radius $\varepsilon$ whose union covers $\mathcal U$.

By means of the above lemma, one can often establish that $E[\max_{t\in\mathcal U}f(t)]=O\big(\delta\sqrt{|\log\delta|}\big)$, where $\delta^2=\sup_{t\in\mathcal U}\mathrm{Var}(f(t))$. We now proceed to the proofs of the supporting lemmas.
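The Borell-TIS bound can be sanity-checked by simulation. The sketch below discretizes a field with the illustrative unit-variance covariance $C(s,t)=e^{-|s-t|}$ (so $\sigma_{\mathcal U}^2=1$; this covariance is an assumption for the demonstration) and compares the empirical tail of $\sup_t f(t)-E[\sup_t f(t)]$ against $e^{-b^2/2}$.

```python
import numpy as np

# Empirical check of P(sup f - E[sup f] >= b) <= exp(-b^2 / (2 sigma_U^2))
# for a discretized zero-mean field with unit variance (sigma_U^2 = 1).
rng = np.random.default_rng(0)
n, n_sim = 50, 20_000
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]))                 # illustrative covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
sups = (rng.standard_normal((n_sim, n)) @ L.T).max(axis=1)   # sup over the grid
e_sup = sups.mean()                                          # estimate of E[sup f]
check = {b: (float(np.mean(sups - e_sup >= b)), float(np.exp(-b * b / 2)))
         for b in (1.0, 1.5, 2.0)}
for emp, bound in check.values():
    assert emp <= bound   # empirical tail sits below the Borell-TIS bound
```

In line with the remark above, the bound is far from tight here because $E[\sup f]$ is only moderate; Lemma 5.2 is what controls that expectation.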

PROOF OF LEMMA 4.1. We present the proof of the first bound in the lemma; the proof of the second bound is completely analogous. Consider the change of measure
\[
\frac{dQ}{dP}=\frac{1}{\mathrm{mes}(T)}\int_T \exp\Big(u_t f(t)-\frac{u_t^2}{2}\Big)\,dt.
\]
With a similar argument as in (23), we have
\[
P(I>e^b,\ L^c)=E^Q\Big[\frac{dP}{dQ};\ I>e^b,\ L^c\Big]\ \le\ e^{-u^2_{\min_t\mu_\sigma(t)}/2}\,E^Q\Big[\Big(\frac{1}{\mathrm{mes}(T)}\int_T e^{u_t f(t)}\,dt\Big)^{-1};\ I>e^b,\ L^c\Big],
\]
where $E^Q$ is the expectation under the measure $Q$. Note that on the set $\{I>e^b\}$, for all large $b$,
\[
\int_{\{f(t)\ge0\}} e^{\sigma(t)f(t)+\mu(t)}\,dt\ =\ I-\int_{\{f(t)<0\}} e^{\sigma(t)f(t)+\mu(t)}\,dt\ \ge\ e^b-\mathrm{mes}(T)\,e^{\max_t\mu(t)}.\tag{29}
\]
Furthermore, on the set $\{I>e^b\}$,
\[
\frac{1}{\mathrm{mes}(T)}\int_T e^{u_t f(t)}\,dt\ \ge\ \frac{1}{\mathrm{mes}(T)}\int_{\{f(t)>0\}}\exp\big(u_{\max_t\mu_\sigma(t)}f(t)\big)\,dt.
\]
We add and subtract the term $\frac{u_{\max_t\mu_\sigma(t)}}{\sigma_T}\max_t\mu(t)$ in the exponent and continue the calculation:
\[
=\frac{1}{\mathrm{mes}(T)}\int_{\{f(t)>0\}}\exp\Big\{\frac{u_{\max_t\mu_\sigma(t)}}{\sigma_T}\big[\sigma_T f(t)+\max_t\mu(t)\big]-\frac{u_{\max_t\mu_\sigma(t)}\max_t\mu(t)}{\sigma_T}\Big\}\,dt.
\]
Since $\int_{\{f(t)\le0\}}e^{u_{\max_t\mu_\sigma(t)}f(t)}\,dt\le \mathrm{mes}(T)$, the above display is bounded from below by
\[
\frac{1}{\mathrm{mes}(T)}\int_T \exp\Big\{\frac{u_{\max_t\mu_\sigma(t)}}{\sigma_T}\big[\sigma_T f(t)+\max_t\mu(t)\big]-\frac{u_{\max_t\mu_\sigma(t)}\max_t\mu(t)}{\sigma_T}\Big\}\,dt\ -\ 1,
\]
and the subtracted constant is negligible on the relevant event for $b$ large and is absorbed into the constants below.

By means of Jensen's inequality and the lower bound in (29), the above display is further bounded from below by
\[
\exp\Big\{-\frac{u_{\max_t\mu_\sigma(t)}\max_t\mu(t)}{\sigma_T}\Big\}\Big[\frac{1}{\mathrm{mes}(T)}\int_T \exp\big\{\sigma_T f(t)+\max_t\mu(t)\big\}\,dt\Big]^{u_{\max_t\mu_\sigma(t)}/\sigma_T}
\ \ge\ \exp\Big\{-\frac{u_{\max_t\mu_\sigma(t)}\max_t\mu(t)}{\sigma_T}\Big\}\Big[\frac{e^b}{\mathrm{mes}(T)}-e^{\max_t\mu(t)}\Big]^{u_{\max_t\mu_\sigma(t)}/\sigma_T}.\tag{30}
\]
Therefore, we have the following bound:
\[
P(I>e^b,\ L^c)\ \le\ e^{-u^2_{\min_t\mu_\sigma(t)}/2}\;e^{\frac{u_{\max_t\mu_\sigma(t)}\max_t\mu(t)}{\sigma_T}}\Big[\frac{e^b}{\mathrm{mes}(T)}-e^{\max_t\mu(t)}\Big]^{-u_{\max_t\mu_\sigma(t)}/\sigma_T}\,Q(I>e^b,\ L^c).
\]
Notice the facts that $u^2_{\min_t\mu_\sigma(t)}/2=u^2/2+O(u)$ and
\[
\Big[\frac{e^b}{\mathrm{mes}(T)}-e^{\max_t\mu(t)}\Big]^{u_{\max_t\mu_\sigma(t)}/\sigma_T}=e^{ub/\sigma_T+O(u)+O(1)}.
\]
By the facts that $u=b/\sigma_T+O(\log b)$ and $Q(I>e^b,\ L^c)\le Q(L^c)$, there exists a constant $c_1$ such that, for $b$ sufficiently large,
\[
P(I>e^b,\ L^c)\ \le\ \exp\Big\{-\frac{b^2}{2\sigma_T^2}+c_1 b\log b\Big\}\,Q(L^c).
\]
Then Lemma 5.3, presented momentarily, together with Lemma 4.3 implies that there exist constants $\kappa_1$ and $\lambda_1$ such that
\[
P(I>e^b,\ L^c)\ \le\ \eta\,\exp\Big\{-\frac{b^2}{2\sigma_T^2}+c_1 b\log b-\lambda_1 b^{1+\epsilon}\Big\}\ \le\ \kappa_1\,\eta\,w(b).
\]
Note that $\kappa_1$ can be chosen such that the above inequality holds for all $b$ and $\eta$. Then we can redefine $\eta$ and the constant $\kappa_0$ and obtain the conclusion. The proof of the second inequality in (18) is similar; the only difference is that the integral in the above derivation is replaced by a summation over the discretization $T_N$. Hence, we omit the details.

LEMMA 5.3. Under the conditions in Theorem 2.4, there exists a constant $\lambda$ such that
\[
Q(L^c)\ \le\ \eta\,\exp\big(-\lambda b^{1+\epsilon}\big),
\]
where $L$ is the set defined in (17).

PROOF OF LEMMA 5.3. We focus on a given $\tau$ and define the process $\tilde f(t)=f(t)-u_\tau C(t,\tau)$. Under the measure $Q$, $\tilde f(t)$ has the same distribution as $f(t)$ under the original measure $P$. With this notation and $N=\kappa_0^{1/\beta}|\log\eta|^{1/\beta}\eta^{-1/\beta}b^{(2+2\epsilon)/\beta}$, we have
\[
L^c=\Big\{\sup_{s,t:\,|t-s|\le d/N}\big|\sigma(s)\tilde f(s)+\sigma(s)u_\tau C(s,\tau)+\mu(s)-\sigma(t)\tilde f(t)-\sigma(t)u_\tau C(t,\tau)-\mu(t)\big|>\eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\Big\}.
\]
According to the Borell-TIS lemma,
\[
Q\Big(\sup_t\tilde f(t)>|\log\eta|^{1/2}\kappa_0^{1/2}b^{1/2+\epsilon}\Big)\ \le\ 2\exp\Big(-\frac{\big(|\log\eta|^{1/2}\kappa_0^{1/2}b^{1/2+\epsilon}-E^Q[\sup_t\tilde f(t)]\big)^2}{2\sigma_T^2}\Big).
\]
Since $E^Q[\sup_t\tilde f(t)]=O(1)$, there exists $\lambda_1>0$ such that, for large $\kappa_0$,
\[
Q\Big(\sup_t\tilde f(t)>|\log\eta|^{1/2}\kappa_0^{1/2}b^{1/2+\epsilon}\Big)\ \le\ \exp\big(-\lambda_1|\log\eta|\,\kappa_0\,b^{1+2\epsilon}\big)\ \le\ \eta\,\exp\big(-\lambda_1\kappa_0 b^{1+2\epsilon}\big).\tag{31}
\]
In what follows, we bound the probability of $L^c\cap\{\sup_t\tilde f(t)\le|\log\eta|^{1/2}\kappa_0^{1/2}b^{1/2+\epsilon}\}$. For all $s,t$ satisfying $|t-s|^\beta\le d^\beta N^{-\beta}=d^\beta\kappa_0^{-1}|\log\eta|^{-1}\eta\,b^{-(2+2\epsilon)}$ and $b>1$, there exist positive constants $c_1$, $c_2$, and $c$ such that
\[
\big|\sigma(s)\tilde f(s)+\sigma(s)u_\tau C(s,\tau)+\mu(s)-\sigma(t)\tilde f(t)-\sigma(t)u_\tau C(t,\tau)-\mu(t)\big|
\]
\[
\le\ \sigma(s)\,|\tilde f(s)-\tilde f(t)|+|\tilde f(t)|\,|\sigma(s)-\sigma(t)|+u_\tau\sigma(s)\,|C(s,\tau)-C(t,\tau)|+u_\tau C(t,\tau)\,|\sigma(s)-\sigma(t)|+|\mu(s)-\mu(t)|\tag{32}
\]
\[
\le\ \sigma_T\,|\tilde f(s)-\tilde f(t)|+c\,\kappa_0^{-1/2}|\log\eta|^{-1/2}\eta\,b^{-2\epsilon}.
\]
To obtain the last inequality in the above display, note that for $0<\epsilon<1/2$ we have $|\tilde f(t)|=O\big(|\log\eta|^{1/2}\kappa_0^{1/2}b^{1/2+\epsilon}\big)$ and $|\sigma(s)-\sigma(t)|=O\big(\kappa_0^{-1}|\log\eta|^{-1}\eta\,b^{-(2+2\epsilon)}\big)$, so that
\[
|\tilde f(t)|\,|\sigma(s)-\sigma(t)|=O\big(\kappa_0^{-1/2}|\log\eta|^{-1/2}\eta\,b^{-3/2-\epsilon}\big)=O\big(\kappa_0^{-1/2}|\log\eta|^{-1/2}\eta\,b^{-2\epsilon}\big).
\]

The last three terms of (32) are bounded by
\[
u_\tau\sigma(s)\,|C(s,\tau)-C(t,\tau)|+u_\tau C(t,\tau)\,|\sigma(s)-\sigma(t)|+|\mu(s)-\mu(t)|=O\big(|\log\eta|^{-1}\eta\,b^{-(1+2\epsilon)}\big).
\]
Applying the above bounds, we obtain
\[
Q\Big(L^c,\ \sup_t\tilde f(t)\le|\log\eta|^{1/2}\kappa_0^{1/2}b^{1/2+\epsilon}\Big)
\le Q\Big(\sup_{s,t:\,|t-s|\le d/N}\sigma_T|\tilde f(s)-\tilde f(t)|+c\,\kappa_0^{-1/2}|\log\eta|^{-1/2}\eta\,b^{-2\epsilon}>\eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\Big)
\le Q\Big(\sup_{s,t:\,|t-s|\le d/N}\sigma_T|\tilde f(s)-\tilde f(t)|>\tfrac12\,\eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\Big).
\]
For the increments $\tilde f(s)-\tilde f(t)$ with $|t-s|\le d/N$, the variance function has the upper bound
\[
\mathrm{Var}\big(\tilde f(s)-\tilde f(t)\big)=2\big(1-C(s,t)\big)\ \le\ 2\kappa_H|s-t|^{2\beta}\ \le\ 2\kappa_H d^{2\beta}N^{-2\beta}=2\kappa_H d^{2\beta}\kappa_0^{-2}|\log\eta|^{-2}\eta^2\,b^{-(4+4\epsilon)}.
\]
We apply Lemma 5.2 to the double-indexed process $\xi(s,t)=\tilde f(s)-\tilde f(t)$ on the set $U=\{(s,t): s,t\in T,\ |s-t|\le d/N\}$. Then $\log\mathcal N(\varepsilon)=O(|\log\varepsilon|)$ and $\mathrm{diam}(U)=O\big(|\log\eta|^{-1}\eta\,b^{-(2+2\epsilon)}\big)$. The result of Lemma 5.2 yields
\[
E^Q\Big[\sup_{|t-s|\le d/N}|\tilde f(s)-\tilde f(t)|\Big]=O\big(\eta\,b^{-(2+\epsilon)}\sqrt{\log b}\big).
\]

We apply the Borell-TIS inequality:
\[
Q\Big(\sup_{s,t:\,|t-s|\le d/N}\sigma_T|\tilde f(s)-\tilde f(t)|>\tfrac12\,\eta\,\kappa_0^{-1/3}b^{-(1+\epsilon)}\Big)
\le 2\exp\Big(-\frac{\big(\tfrac12\eta\kappa_0^{-1/3}b^{-(1+\epsilon)}-E^Q[\sup_{|t-s|\le d/N}|\tilde f(s)-\tilde f(t)|]\big)^2}{4\sigma_T^2\,\kappa_H d^{2\beta}\kappa_0^{-2}|\log\eta|^{-2}\eta^2 b^{-(4+4\epsilon)}}\Big)
\le \exp\big(-\lambda_2|\log\eta|^2\kappa_0^{4/3}b^{2+2\epsilon}\big)\tag{33}
\]
for some $\lambda_2>0$. Combining the results in (31) and (33), there exists $\lambda$ such that, for large $\kappa_0$,
\[
Q(L^c)\ \le\ \exp\big(-\lambda_1|\log\eta|\kappa_0 b^{1+2\epsilon}\big)+\exp\big(-\lambda_2|\log\eta|^2\kappa_0^{4/3}b^{2+2\epsilon}\big)\ \le\ \eta\,\exp\big(-\lambda b^{1+\epsilon}\big).
\]

PROOF OF LEMMA 4.2. Let $\Phi$ be the cumulative distribution function of the standard Gaussian distribution. Define the cumulative distribution function of $\log I$ and the associated quantiles with respect to $\Phi$ as
\[
F(x)=P(\log I\le x),\qquad t_x=\Phi^{-1}\big(F(x)\big).
\]
We cite one result (Proposition 1 in [Liu and Xu 2012b]) that gives an upper bound on $F'(x)$:

LEMMA 5.4. Under the conditions of Lemma 4.2, $F'(x)$ exists almost everywhere. Choose $y<x$, depending on $x$, in a way that $x-y\to0$ and $x(x-y)\to\infty$ when we send $x$ to infinity. Then
\[
\limsup_{x\to\infty}\ F'(x)\Big/\Big[\frac{1}{\sqrt{2\pi}\,\sigma_T}\exp\Big\{-\frac{t_y^2}{2}-\frac{(x-y)y}{\sigma_T^2}\Big\}\Big]\ \le\ 1,\tag{34}
\]
where $\sigma_T=\sup_t\sigma(t)$. Lemma 5.4 is equivalent to
\[
F'(x)\ \le\ \frac{1+o(1)}{\sqrt{2\pi}\,\sigma_T}\exp\Big\{-\frac{t_y^2}{2}-\frac{(x-y)y}{\sigma_T^2}\Big\}.\tag{35}
\]

Fig. 2. Plot of $t_x$.

To prove Lemma 4.2, it is sufficient to show that the right-hand side of (35) is $o(x^{1+\epsilon/2})\,w(x+x^{-\lambda})$. Lemma 4.3 implies that
\[
\lim_{x\to\infty}\frac{t_x}{x}=\frac{1}{\sigma_T}.\tag{36}
\]
Then, for $\lambda>0$, we let $\bar x=x+x^{-\lambda}$ and $y=x-\log\log x/x$. With such choices of $\bar x$ and $y$, the following holds:
\[
t_{\bar x}^2-\Big[t_y^2+\frac{2(\bar x-y)y}{\sigma_T^2}\Big]=\big(2+o(1)\big)\frac{\bar x}{\sigma_T}\big(t_{\bar x}-t_y\big)-\frac{2(\bar x-y)y}{\sigma_T^2}
=\big(2+o(1)\big)\frac{\bar x(\bar x-y)}{\sigma_T}\Big(\frac{t_{\bar x}-t_y}{\bar x-y}-\frac{1+o(1)}{\sigma_T}\Big).\tag{37}
\]
According to Theorem 4.4.1 in [Bogachev 1998] (see also [Ehrhard 1983]), $t_x$ is a concave function of $x$; thus $(t_z-t_x)/(z-x)$ is a monotone non-increasing function of $z$ when $z>x$, and $\lim_{z\to x+}(t_z-t_x)/(z-x)$ exists. Figure 2 illustrates the function $t_x$. Then, for $y=x-\log\log x/x$ and $\bar x=x+x^{-\lambda}$, we have
\[
\frac{t_{\bar x}-t_y}{\bar x-y}\ \le\ \lim_{z\to y+}\frac{t_z-t_y}{z-y}.\tag{38}
\]
The concavity of $t_x$ and (36) imply that $\lim_{z\to y+}(t_z-t_y)/(z-y)$ is a non-increasing function of $y$ converging to $1/\sigma_T$. Sending $x$ to infinity on both sides of (38) gives $\limsup_{x\to\infty}(t_{\bar x}-t_y)/(\bar x-y)\le 1/\sigma_T$. Thus, the first term in the parenthesis of (37) is bounded by $(1+o(1))/\sigma_T$. Therefore, there exists a constant $c_0>0$ such that
\[
t_{\bar x}^2-\Big[t_y^2+\frac{2(\bar x-y)y}{\sigma_T^2}\Big]\ \le\ c_0\,\bar x(\bar x-y)\ =\ c_0\log\log x\cdot\big(1+o(1)\big).
\]

Then we obtain an upper bound for (35):
\[
F'(x)\ \le\ \frac{1+o(1)}{\sqrt{2\pi}\,\sigma_T}\exp\Big\{-\frac{t_y^2}{2}-\frac{(\bar x-y)y}{\sigma_T^2}\Big\}\ \le\ \frac{1+o(1)}{\sqrt{2\pi}\,\sigma_T}\exp\Big\{-\frac{t_{\bar x}^2}{2}+\frac{c_0}{2}\log\log x\Big\}.
\]
Note that
\[
w(x)=1-F(x)=\frac{1+o(1)}{\sqrt{2\pi}\,t_x}\exp\Big(-\frac{t_x^2}{2}\Big)=\frac{(1+o(1))\,\sigma_T}{\sqrt{2\pi}\,x}\exp\Big(-\frac{t_x^2}{2}\Big).
\]
Therefore, for any $\epsilon>0$, we have
\[
F'(x)\ \le\ (1+o(1))\,\sigma_T^{-2}\,x\,(\log x)^{c_0/2}\,w(\bar x)\ =\ o\big(x^{1+\epsilon/2}\big)\,w\big(x+x^{-\lambda}\big).
\]

PROOF OF LEMMA 4.3. We start by deriving a lower bound for $w(b)$. Let $t^*$ be such that $\sigma(t^*)=\sup_t\sigma(t)=\sigma_T$. On the set $L$ as defined in (17) with $\eta=\eta_0$ fixed, there exists a $\delta_0>0$ such that
\[
\int_{\{t:\,|t-t^*|<1/N\}} e^{\sigma(t)f(t)+\mu(t)}\,dt\ \ge\ \delta_0\,N^{-d}\,e^{\sigma_T f(t^*)}.
\]
Then, on the set $L$, if $\sigma_T f(t^*)>b+d\log N-\log\delta_0$ then $\int_{\{|t-t^*|<1/N\}}e^{\sigma(t)f(t)+\mu(t)}\,dt\ge e^b$. Therefore, we have the lower bound
\[
w(b)\ \ge\ P(I>e^b,\ L)\ \ge\ P\Big(\int_{\{|t-t^*|<1/N\}}e^{\sigma(t)f(t)+\mu(t)}\,dt>e^b,\ L\Big)\ \ge\ P\big(\sigma_T f(t^*)>b+d\log N-\log\delta_0,\ L\big)
\]
\[
=\ P\big(\sigma_T f(t^*)>b+d\log N-\log\delta_0\big)\,P\big(L\mid \sigma_T f(t^*)>b+d\log N-\log\delta_0\big).\tag{39}
\]
There exist $\delta_0,\delta_1>0$ such that the conditional probability satisfies $P(L\mid\sigma_T f(t^*)>b+d\log N-\log\delta_0)>\delta_1$. Therefore,
\[
w(b)\ \ge\ \delta_1\,P\big(\sigma_T f(t^*)>b+d\log N-\log\delta_0\big).\tag{40}
\]

Note that $N=O\big(b^{(2+2\epsilon)/\beta}\big)$ and $f(t^*)$ follows a standard normal distribution. Thus, there exist positive constants $c_0$ and $c_1$ such that
\[
w(b)\ \ge\ \exp\Big(-\frac{b^2}{2\sigma_T^2}-c_1 b\log b+c_0\Big).
\]
We continue to construct an upper bound. Since $I>e^b$ implies that $\sup_t\{\sigma(t)f(t)\}>b-\max_t\mu(t)-\log\mathrm{mes}(T)$, there exists a constant $c_2$ such that
\[
w(b)\ \le\ P\Big(\sup_t\{\sigma(t)f(t)\}>b-\max_t\mu(t)-\log\mathrm{mes}(T)\Big)\ \le\ \exp\Big(-\frac{b^2}{2\sigma_T^2}+c_2 b\Big).\tag{41}
\]
Combining (40) and (41), we conclude the proof.

REFERENCES

ADLER, R. 1981. The Geometry of Random Fields. Wiley, Chichester, U.K.; New York, U.S.A.
ADLER, R., BLANCHET, J., AND LIU, J. 2012. Efficient Monte Carlo for large excursions of Gaussian random fields. Annals of Applied Probability 22, 3.
ADLER, R., SAMORODNITSKY, G., AND TAYLOR, J. High level excursion set geometry for non-Gaussian infinitely divisible random fields. Preprint.
ADLER, R. AND TAYLOR, J. 2007. Random Fields and Geometry. Springer.
AHSAN, S. 1978. Portfolio selection in a lognormal securities market. Zeitschrift für Nationalökonomie - Journal of Economics 38, 1-2.
ASMUSSEN, S. AND GLYNN, P. 2007. Stochastic Simulation: Algorithms and Analysis. Springer, New York, NY, USA.
ASMUSSEN, S. AND KROESE, D. 2006. Improved algorithms for rare event simulation with heavy tails. Advances in Applied Probability 38.
AZAIS, J. M. AND WSCHEBOR, M. 2005. On the distribution of the maximum of a Gaussian field with d parameters. Annals of Applied Probability 15, 1A.
AZAIS, J. M. AND WSCHEBOR, M. 2008. A general expression for the distribution of the maximum of a Gaussian field and the approximation of the tail. Stochastic Processes and their Applications 118, 7.
AZAIS, J. M. AND WSCHEBOR, M. 2009. Level Sets and Extrema of Random Processes and Fields. Wiley, Hoboken, N.J.
BASAK, S. AND SHAPIRO, A. 2001. Value-at-risk-based risk management: Optimal policies and asset prices. Review of Financial Studies 14, 2.
BERMAN, S. M. 1985. An asymptotic formula for the distribution of the maximum of a Gaussian process with stationary increments. Journal of Applied Probability 22, 2.
BLACK, F. AND SCHOLES, M. 1973. The pricing of options and corporate liabilities. Journal of Political Economy 81, 3.
BLANCHET, J. AND GLYNN, P. 2008. Efficient rare event simulation for the maximum of heavy-tailed random walks. Annals of Applied Probability 18, 4.
BLANCHET, J., GLYNN, P., AND LEDER, K. Strongly efficient algorithms for light-tailed random walks: An old folk song sung to a faster new tune. In Monte Carlo and Quasi-Monte Carlo Methods 2008, P. L'Ecuyer and A. B. Owen, Eds. Springer.


Evgeny Spodarev WIAS, Berlin. Limit theorems for excursion sets of stationary random fields Evgeny Spodarev 23.01.2013 WIAS, Berlin Limit theorems for excursion sets of stationary random fields page 2 LT for excursion sets of stationary random fields Overview 23.01.2013 Overview Motivation Excursion

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

The Canonical Gaussian Measure on R

The Canonical Gaussian Measure on R The Canonical Gaussian Measure on R 1. Introduction The main goal of this course is to study Gaussian measures. The simplest example of a Gaussian measure is the canonical Gaussian measure P on R where

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City

More information

ABSTRACT 1 INTRODUCTION. Jingchen Liu. Xiang Zhou. Brown University 182, George Street Providence, RI 02912, USA

ABSTRACT 1 INTRODUCTION. Jingchen Liu. Xiang Zhou. Brown University 182, George Street Providence, RI 02912, USA Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. FAILURE OF RANDOM MATERIALS: A LARGE DEVIATION AND COMPUTATIONAL STUDY Jingchen

More information

Elements of Probability Theory

Elements of Probability Theory Elements of Probability Theory CHUNG-MING KUAN Department of Finance National Taiwan University December 5, 2009 C.-M. Kuan (National Taiwan Univ.) Elements of Probability Theory December 5, 2009 1 / 58

More information

The properties of L p -GMM estimators

The properties of L p -GMM estimators The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

The Chaotic Character of the Stochastic Heat Equation

The Chaotic Character of the Stochastic Heat Equation The Chaotic Character of the Stochastic Heat Equation March 11, 2011 Intermittency The Stochastic Heat Equation Blowup of the solution Intermittency-Example ξ j, j = 1, 2,, 10 i.i.d. random variables Taking

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Gaussian Random Fields: Geometric Properties and Extremes

Gaussian Random Fields: Geometric Properties and Extremes Gaussian Random Fields: Geometric Properties and Extremes Yimin Xiao Michigan State University Outline Lecture 1: Gaussian random fields and their regularity Lecture 2: Hausdorff dimension results and

More information

Concentration behavior of the penalized least squares estimator

Concentration behavior of the penalized least squares estimator Concentration behavior of the penalized least squares estimator Penalized least squares behavior arxiv:1511.08698v2 [math.st] 19 Oct 2016 Alan Muro and Sara van de Geer {muro,geer}@stat.math.ethz.ch Seminar

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

Stability of optimization problems with stochastic dominance constraints

Stability of optimization problems with stochastic dominance constraints Stability of optimization problems with stochastic dominance constraints D. Dentcheva and W. Römisch Stevens Institute of Technology, Hoboken Humboldt-University Berlin www.math.hu-berlin.de/~romisch SIAM

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Some functional (Hölderian) limit theorems and their applications (II)

Some functional (Hölderian) limit theorems and their applications (II) Some functional (Hölderian) limit theorems and their applications (II) Alfredas Račkauskas Vilnius University Outils Statistiques et Probabilistes pour la Finance Université de Rouen June 1 5, Rouen (Rouen

More information

Week 2: Sequences and Series

Week 2: Sequences and Series QF0: Quantitative Finance August 29, 207 Week 2: Sequences and Series Facilitator: Christopher Ting AY 207/208 Mathematicians have tried in vain to this day to discover some order in the sequence of prime

More information

Functional Central Limit Theorem for the Measure of Level Sets Generated by a Gaussian Random Field

Functional Central Limit Theorem for the Measure of Level Sets Generated by a Gaussian Random Field Functional Central Limit Theorem for the Measure of Level Sets Generated by a Gaussian Random Field Daniel Meschenmoser Institute of Stochastics Ulm University 89069 Ulm Germany daniel.meschenmoser@uni-ulm.de

More information

Module 3. Function of a Random Variable and its distribution

Module 3. Function of a Random Variable and its distribution Module 3 Function of a Random Variable and its distribution 1. Function of a Random Variable Let Ω, F, be a probability space and let be random variable defined on Ω, F,. Further let h: R R be a given

More information

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Multivariate Normal-Laplace Distribution and Processes

Multivariate Normal-Laplace Distribution and Processes CHAPTER 4 Multivariate Normal-Laplace Distribution and Processes The normal-laplace distribution, which results from the convolution of independent normal and Laplace random variables is introduced by

More information

STOCHASTIC GEOMETRY BIOIMAGING

STOCHASTIC GEOMETRY BIOIMAGING CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING 2018 www.csgb.dk RESEARCH REPORT Anders Rønn-Nielsen and Eva B. Vedel Jensen Central limit theorem for mean and variogram estimators in Lévy based

More information

EFFICIENT SIMULATION FOR LARGE DEVIATION PROBABILITIES OF SUMS OF HEAVY-TAILED INCREMENTS

EFFICIENT SIMULATION FOR LARGE DEVIATION PROBABILITIES OF SUMS OF HEAVY-TAILED INCREMENTS Proceedings of the 2006 Winter Simulation Conference L. F. Perrone, F. P. Wieland, J. Liu, B. G. Lawson, D. M. Nicol, and R. M. Fujimoto, eds. EFFICIENT SIMULATION FOR LARGE DEVIATION PROBABILITIES OF

More information

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,

More information

Measure-Transformed Quasi Maximum Likelihood Estimation

Measure-Transformed Quasi Maximum Likelihood Estimation Measure-Transformed Quasi Maximum Likelihood Estimation 1 Koby Todros and Alfred O. Hero Abstract In this paper, we consider the problem of estimating a deterministic vector parameter when the likelihood

More information

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Applied Mathematical Sciences, Vol. 4, 2010, no. 62, 3083-3093 Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Julia Bondarenko Helmut-Schmidt University Hamburg University

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

Closest Moment Estimation under General Conditions

Closest Moment Estimation under General Conditions Closest Moment Estimation under General Conditions Chirok Han Victoria University of Wellington New Zealand Robert de Jong Ohio State University U.S.A October, 2003 Abstract This paper considers Closest

More information

On Kusuoka Representation of Law Invariant Risk Measures

On Kusuoka Representation of Law Invariant Risk Measures MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 1, February 213, pp. 142 152 ISSN 364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/1.1287/moor.112.563 213 INFORMS On Kusuoka Representation of

More information

Asymptotics of minimax stochastic programs

Asymptotics of minimax stochastic programs Asymptotics of minimax stochastic programs Alexander Shapiro Abstract. We discuss in this paper asymptotics of the sample average approximation (SAA) of the optimal value of a minimax stochastic programming

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

Homework 1 Due: Wednesday, September 28, 2016

Homework 1 Due: Wednesday, September 28, 2016 0-704 Information Processing and Learning Fall 06 Homework Due: Wednesday, September 8, 06 Notes: For positive integers k, [k] := {,..., k} denotes te set of te first k positive integers. Wen p and Y q

More information

The main results about probability measures are the following two facts:

The main results about probability measures are the following two facts: Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

Can we do statistical inference in a non-asymptotic way? 1

Can we do statistical inference in a non-asymptotic way? 1 Can we do statistical inference in a non-asymptotic way? 1 Guang Cheng 2 Statistics@Purdue www.science.purdue.edu/bigdata/ ONR Review Meeting@Duke Oct 11, 2017 1 Acknowledge NSF, ONR and Simons Foundation.

More information

Week 1 Quantitative Analysis of Financial Markets Distributions A

Week 1 Quantitative Analysis of Financial Markets Distributions A Week 1 Quantitative Analysis of Financial Markets Distributions A Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 October

More information

STAT Sample Problem: General Asymptotic Results

STAT Sample Problem: General Asymptotic Results STAT331 1-Sample Problem: General Asymptotic Results In this unit we will consider the 1-sample problem and prove the consistency and asymptotic normality of the Nelson-Aalen estimator of the cumulative

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Connection to Branching Random Walk

Connection to Branching Random Walk Lecture 7 Connection to Branching Random Walk The aim of this lecture is to prepare the grounds for the proof of tightness of the maximum of the DGFF. We will begin with a recount of the so called Dekking-Host

More information

Asymptotics and Simulation of Heavy-Tailed Processes

Asymptotics and Simulation of Heavy-Tailed Processes Asymptotics and Simulation of Heavy-Tailed Processes Department of Mathematics Stockholm, Sweden Workshop on Heavy-tailed Distributions and Extreme Value Theory ISI Kolkata January 14-17, 2013 Outline

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Robert J. Elliott 1 Samuel N. Cohen 2 1 Department of Commerce, University of South Australia 2 Mathematical Insitute, University

More information

The Codimension of the Zeros of a Stable Process in Random Scenery

The Codimension of the Zeros of a Stable Process in Random Scenery The Codimension of the Zeros of a Stable Process in Random Scenery Davar Khoshnevisan The University of Utah, Department of Mathematics Salt Lake City, UT 84105 0090, U.S.A. davar@math.utah.edu http://www.math.utah.edu/~davar

More information

Parameter Estimation

Parameter Estimation Parameter Estimation Consider a sample of observations on a random variable Y. his generates random variables: (y 1, y 2,, y ). A random sample is a sample (y 1, y 2,, y ) where the random variables y

More information

Some Aspects of Universal Portfolio

Some Aspects of Universal Portfolio 1 Some Aspects of Universal Portfolio Tomoyuki Ichiba (UC Santa Barbara) joint work with Marcel Brod (ETH Zurich) Conference on Stochastic Asymptotics & Applications Sixth Western Conference on Mathematical

More information

Calculating credit risk capital charges with the one-factor model

Calculating credit risk capital charges with the one-factor model Calculating credit risk capital charges with the one-factor model Susanne Emmer Dirk Tasche September 15, 2003 Abstract Even in the simple Vasicek one-factor credit portfolio model, the exact contributions

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES GENNADY SAMORODNITSKY AND YI SHEN Abstract. The location of the unique supremum of a stationary process on an interval does not need to be

More information

Importance Sampling to Accelerate the Convergence of Quasi-Monte Carlo

Importance Sampling to Accelerate the Convergence of Quasi-Monte Carlo Importance Sampling to Accelerate the Convergence of Quasi-Monte Carlo Wolfgang Hörmann, Josef Leydold Department of Statistics and Mathematics Wirtschaftsuniversität Wien Research Report Series Report

More information

More Empirical Process Theory

More Empirical Process Theory More Empirical Process heory 4.384 ime Series Analysis, Fall 2008 Recitation by Paul Schrimpf Supplementary to lectures given by Anna Mikusheva October 24, 2008 Recitation 8 More Empirical Process heory

More information

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions

Economics Division University of Southampton Southampton SO17 1BJ, UK. Title Overlapping Sub-sampling and invariance to initial conditions Economics Division University of Southampton Southampton SO17 1BJ, UK Discussion Papers in Economics and Econometrics Title Overlapping Sub-sampling and invariance to initial conditions By Maria Kyriacou

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications.

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications. Stat 811 Lecture Notes The Wald Consistency Theorem Charles J. Geyer April 9, 01 1 Analyticity Assumptions Let { f θ : θ Θ } be a family of subprobability densities 1 with respect to a measure µ on a measurable

More information

Model Selection and Geometry

Model Selection and Geometry Model Selection and Geometry Pascal Massart Université Paris-Sud, Orsay Leipzig, February Purpose of the talk! Concentration of measure plays a fundamental role in the theory of model selection! Model

More information