EFFICIENT SIMULATION OF LARGE DEVIATIONS EVENTS FOR SUMS OF RANDOM VECTORS USING SADDLE POINT REPRESENTATIONS

Applied Probability Trust

ANKUSH AGARWAL, SANTANU DEY AND SANDEEP JUNEJA

Abstract

We consider the problem of efficient simulation estimation of the density function at the tails, and of the probability of large deviations, for a sum of independent, identically distributed (iid), light-tailed and non-lattice random vectors. The latter problem, besides being of independent interest, also forms a building block for more complex rare event problems that arise, for instance, in queueing and in financial credit risk modelling. It has been extensively studied in the literature, where state independent, exponential twisting based importance sampling has been shown to be asymptotically efficient, and a more nuanced state dependent exponential twisting has been shown to have the stronger bounded relative error property. We exploit the saddle point based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. These representations reduce the rare event estimation problem to evaluating certain integrals, which may, via importance sampling, be represented as expectations. Further, it is easy to identify and approximate the zero-variance importance sampling distribution for estimating these integrals. We identify such importance sampling measures and show that they possess the asymptotically vanishing relative error property, which is stronger than the bounded relative error property. To illustrate the broader applicability of the proposed methodology, we extend it to develop an asymptotically vanishing relative error estimator for the practically important expected overshoot of sums of iid random variables.

Postal address: STCS, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai, India. Email: {dsantanu, juneja}@tifr.res.in

Keywords: Rare Event Simulation; Importance Sampling; Saddle Point Approximation; Fourier Inversion; Large Deviations

2000 Mathematics Subject Classification: Primary 65C05; 60E10; 60F10. Secondary 65C50; 65T99.

1. Introduction

Let (X_i : i ≥ 1) denote a sequence of independent, identically distributed (iid), light-tailed (their moment generating function is finite in a neighborhood of zero), non-lattice (the modulus of their characteristic function is strictly less than one) random vectors taking values in R^d, for d ≥ 1. In this paper we consider the problem of efficient simulation estimation of the probability density function of X̄_n = (1/n) Σ_{i=1}^n X_i at points away from EX_1, and of the tail probability P(X̄_n ∈ A) for sets A that do not contain EX_1 and essentially are affine transformations of the non-negative orthant of R^d. We develop an efficient simulation estimation methodology for these rare quantities that exploits the well known saddle point representations for the probability density function of X̄_n obtained from Fourier inversion of the characteristic function of X_1 (see, e.g., [2], [6] and [7]). Furthermore, using Parseval's relation, similar representations for P(X̄_n ∈ A) are easily developed. To illustrate the broader applicability of the proposed methodology, we also develop a similar representation for E(X̄_n ; X̄_n ≥ a) in a single dimension setting (d = 1), for a > EX_1, and use it to develop an efficient simulation methodology for this quantity as well.

The problem of efficient simulation estimation of the tail probability density function has not been studied in the literature, although, from a practical viewpoint, it is clear that the shape of such density functions provides a great deal of insight into the tail behavior of sums of random variables. Another potential application may be in the maximum likelihood framework for parameter estimation, where closed form expressions for the density functions of observed outputs are not available, but simulation based estimators provide an accurate proxy. (The authors thank the editor for suggesting this application.) A very preliminary version of this paper appeared as [8].

The problem of efficiently estimating P(X̄_n ∈ A) via importance sampling, besides being of independent interest, may also be considered a building block for more complex problems involving many streams of iid random variables (see, e.g., [19] for a queueing application and [13] for applications in credit risk modelling).

This problem has been extensively studied in the rare event simulation literature (see, e.g., [3], [10], [12], [14], [20], [21]). Essentially, the literature exploits the fact that the zero variance importance sampling estimator for P(X̄_n ∈ A), though unimplementable, has a Markovian representation. This representation may be exploited to come up with provably efficient, implementable approximations (see [1] and [5]). Sadowsky and Bucklew in [21] (also see [5]) developed exponential twisting based importance sampling algorithms to arrive at unbiased estimators for P(X̄_n ∈ A) that they proved were asymptotically or weakly efficient (as per the current standard terminology in the rare event simulation literature; see, e.g., [1] and [15] for an introduction to rare event simulation. Popular efficiency criteria for rare event estimators are also discussed later in Section 2). The importance sampling algorithms proposed by [21] were state independent in that each X_{k+1} was generated from a distribution independent of the previously generated (X_i : i ≤ k). Blanchet, Leder and Glynn in [3] also considered the problem of estimating P(X̄_n ∈ A), and they introduced state dependent, exponential twisting based importance sampling distributions (the distribution of the generated X_{k+1} depended on the previously generated (X_i : i ≤ k)). They showed that, when done correctly, such an algorithm is strongly efficient, or equivalently has the bounded relative error property.

The problem of efficient estimation of the expected overshoot E[(X̄_n − a) ; X̄_n ≥ a] is of considerable importance in finance and insurance settings. To the best of our knowledge, this is the first paper that directly tackles this estimation problem.

As mentioned earlier, in this article we exploit the saddle point based representations of the rare event quantities considered. These representations allow us to write the quantity of interest α_n as a product c_n β_n, where c_n ~ α_n (that is, c_n/α_n → 1 as n → ∞) and is known in closed form. So the problem of interest is the estimation of β_n, which is an integral of a known function. Note that β_n → 1 as n → ∞. In the literature, asymptotic expansions for β_n exist; however, they require the computation of third and higher order derivatives of the log-moment generating function of X_i. This is particularly difficult in higher dimensions. In addition, it is difficult to control the bias in such approximations.

As we note later in the numerical experiments, these biases can be significant even when probabilities are as small as of order 10^{-9}. In the insurance and financial industry, simulation, with its associated variance reduction techniques, is the preferred method for tail risk measurement even when asymptotic approximations are available, since these approximations are typically poor in the range of practical interest (see, e.g., [13]).

In our analysis, we note that the integral β_n can be expressed as an expectation of a random variable using importance sampling. Furthermore, the zero variance estimator for this expectation is easily ascertained. We approximate this estimator by an implementable importance sampling distribution and prove that the resulting unbiased estimator of α_n has the desirable asymptotically vanishing relative error property. More tangibly, the estimator of the integral β_n has the property that its variance converges to zero as n → ∞. An additional advantage of the proposed approach over existing methodologies for estimating P(X̄_n ∈ A) and related rare quantities is that, while those methods require O(n) computational effort to generate each sample output, our approach requires a small and fixed effort per sample, independent of n.

The use of saddle point methods to compute tail probabilities has a long and rich history (see, e.g., [2], [6] and [7]). To the best of our knowledge the proposed methodology is the first attempt to combine the expanding literature on rare event simulation with the classical theory of saddle point approximations.

The rest of the paper is organized as follows. In Section 2 we briefly review the popular performance evaluation measures used in rare event simulation, and the existing literature on estimating P(X̄_n ∈ A). Then, in Section 3, we develop an importance sampling estimator for the density of X̄_n and show that it has asymptotically vanishing relative error. In Section 4, we devise an integral representation for P(X̄_n ∈ A), develop an importance sampling estimator for it, and again prove that it has asymptotically vanishing relative error. In this section we also discuss how this methodology can be adapted to develop an asymptotically vanishing relative error estimator for E(X̄_n ; X̄_n ≥ a) in a single dimension setting. In Section 5 we report the results of a few numerical experiments to support our analysis. We end with a brief conclusion in Section 6. For brevity, proofs similar to relevant known results, routine technicalities, figures and some numerical experiments are omitted. These can be found in [9], a more elaborate version of this paper.

2. Rare event simulation: a brief review

Let α_n = E_n Y_n = ∫ Y_n dP_n be a sequence of rare event expectations, in the sense that α_n → 0 as n → ∞, for non-negative random variables (Y_n : n ≥ 1). Here, E_n is the expectation operator under P_n. For example, when α_n = P(B_n), Y_n corresponds to the indicator of the event B_n. Naive simulation for estimating α_n requires generating many iid samples of Y_n under P_n. Their average then provides an unbiased estimator of α_n. Central limit theorem based approximations then provide an asymptotically valid confidence interval for α_n (under the assumption that E_n Y_n² < ∞).

Importance sampling involves expressing α_n = ∫ Y_n L_n dP̃_n = Ẽ_n[Y_n L_n], where P̃_n is another probability measure such that P_n is absolutely continuous with respect to P̃_n, with L_n = dP_n/dP̃_n denoting the associated Radon-Nikodym derivative, or likelihood ratio, and Ẽ_n is the expectation operator under P̃_n. The importance sampling unbiased estimator α̂_n of α_n is obtained by taking an average of generated iid samples of Y_n L_n under P̃_n. Note that by setting dP̃_n = (Y_n / E_n(Y_n)) dP_n the simulation output Y_n L_n equals E_n(Y_n) almost surely, signifying that such a P̃_n provides a zero variance estimator for α_n.

2.1. Popular performance measures

Note that the relative width of the confidence interval obtained using the central limit theorem approximation is proportional to the ratio of the standard deviation of the estimator to its mean. Therefore, the latter ratio is a good measure of the efficiency of the estimator. Note that under naive simulation, when Y_n = I(B_n) (for any set D, I(D) denotes its indicator), the standard deviation of each sample of simulation output equals √(α_n(1 − α_n)), so that, when divided by α_n, the ratio increases to infinity as α_n → 0. Below we list some criteria that are popular in evaluating the efficacy of the proposed importance sampling estimator (see [1]). Here, Var(α̂_n) denotes the variance of the estimator α̂_n under the appropriate importance sampling measure. A given sequence of estimators (α̂_n : n ≥ 1) for quantities (α_n : n ≥ 1) is said to be weakly efficient or asymptotically efficient if lim sup_n Var(α̂_n)/α_n^{2−ɛ} < ∞ for all ɛ > 0; to be strongly efficient or to have bounded relative error if lim sup_n Var(α̂_n)/α_n² < ∞; and to have asymptotically vanishing relative error if lim_n Var(α̂_n)/α_n² = 0.

3. Efficient estimation of probability density function of X̄_n

In this section we first develop a saddle point based representation for the probability density function (pdf) of X̄_n in Proposition 3.1 (for a proof see, e.g., [2], [6], [7] and [9]). We then develop an approximation to the zero variance estimator for this pdf. Our main result is Theorem 3.1, where we prove that the proposed estimator has an asymptotically vanishing relative error.

Some notation is needed in our analysis. Recall that (X_i : i ≥ 1) denotes a sequence of independent, identically distributed, light-tailed random vectors taking values in R^d. Let (X_i^1,..., X_i^d) denote the components of X_i, each taking values in R. Let F(·) denote the distribution function of X_i. Denote the moment generating function of F by M(·), so that
M(θ) := E[e^{θ·X}] = E[e^{θ_1 X^1 + θ_2 X^2 + ... + θ_d X^d}],
where θ = (θ_1, θ_2,..., θ_d) and, for x, y ∈ R^d, the Euclidean inner product between them is denoted by x·y := x_1y_1 + x_2y_2 + ... + x_dy_d. The characteristic function (CF) of X_i is given by
ϕ(θ) := E[e^{ιθ·X}] = E[e^{ι(θ_1 X^1 + θ_2 X^2 + ... + θ_d X^d)}],
where ι = √(−1). In this paper we assume that the distribution of X_i is non-lattice, which means that |ϕ(θ)| < 1 for all θ ∈ R^d \ {0}. Let Λ(θ) := ln M(θ) denote the cumulant generating function (CGF) of X_i. We define Θ to be the effective domain of Λ(θ), that is,
Θ := {θ = (θ_1, θ_2,..., θ_d) ∈ R^d : Λ(θ) < ∞}.
Throughout this article we assume that 0 ∈ Θ°, the interior of Θ. Denote the Euclidean norm of x ∈ R^d by ||x|| := √(x·x). For a square matrix A, det(A) denotes the determinant of A, while the norm of A is denoted by ||A|| := max_{||x||=1} ||Ax||. Let Λ''(θ) denote the Hessian of Λ(θ) for θ ∈ Θ°. Whenever this is strictly positive definite, let A(θ) be the inverse of the unique square root of Λ''(θ).
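To make the notation concrete, the following Python sketch (our own illustration, not part of the paper) computes these objects for iid Exp(1) random variables, for which M(θ) = 1/(1 − θ) on Θ = (−∞, 1) and Λ(θ) = −ln(1 − θ); the saddle point solving Λ'(θ) = x_0 is available in closed form as θ* = 1 − 1/x_0, and the numerical root finder is shown only because in general no closed form exists.

import numpy as np
from scipy.optimize import brentq

# Illustration for iid Exp(1): CGF Lambda(theta) = -log(1 - theta) on theta < 1.
def exp1_saddle(x0):
    cgf_prime = lambda th: 1.0 / (1.0 - th) - x0       # Lambda'(theta) - x0
    theta_star = brentq(cgf_prime, -1e6, 1.0 - 1e-12)  # root inside the interior of Theta
    lam = -np.log(1.0 - theta_star)                    # Lambda(theta*)
    lam2 = 1.0 / (1.0 - theta_star) ** 2               # Lambda''(theta*) > 0
    A = 1.0 / np.sqrt(lam2)                            # A(theta*) = Lambda''(theta*)^(-1/2)
    return theta_star, lam, lam2, A

if __name__ == "__main__":
    print(exp1_saddle(1.5))   # analytically theta* = 1 - 1/x0 = 1/3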

Proposition 3.1. Suppose Λ''(θ) is strictly positive definite for some θ ∈ Θ°. Furthermore, suppose that |ϕ|^γ is integrable for some γ ≥ 1. Then f_n, the density function of X̄_n, exists for all n ≥ γ and its value at any point x_0 is given by
f_n(x_0) = (n/(2π))^{d/2} [exp[n{Λ(θ) − θ·x_0}] / √(det Λ''(θ))] ∫_{v∈R^d} ψ(n^{−1/2}A(θ)v, θ, n) φ(v) dv,   (1)
where φ denotes the d-dimensional standard normal density, ψ(y, θ, n) = exp[n η(y, θ)] and
η(y, θ) = (1/2) y^T Λ''(θ) y + Λ(θ + ιy) − (θ + ιy)·x_0 − Λ(θ) + θ·x_0.   (2)

For a given x_0 ∈ R^d, x_0 ≠ EX_1, suppose that the solution θ* to the equation Λ'(θ) = x_0 exists and θ* ∈ Θ°. Then an expansion of the integral in (1) is available. For example, the following is well known (a proof can be found, e.g., in [7], [2] and [9]).

Proposition 3.2. Suppose Λ''(θ*) is strictly positive definite and |ϕ|^γ is integrable for some γ ≥ 1. Then
∫_{v∈R^d} ψ(n^{−1/2}A(θ*)v, θ*, n) φ(v) dv = 1 + o(n^{−1/2}).   (3)

3.1. Monte Carlo estimation

The integral in (1) may be estimated via Monte Carlo simulation. In particular, this integral may be re-expressed as
∫_{v∈R^d} [ψ(n^{−1/2}A(θ*)v, θ*, n) φ(v) / g(v)] g(v) dv,
where g is a density supported on R^d. Now if V_1, V_2,..., V_N are iid with distribution given by the density g, then
f̂_n(x_0) := (n/(2π))^{d/2} [exp[n{Λ(θ*) − θ*·x_0}] / √(det Λ''(θ*))] (1/N) Σ_{i=1}^N ψ(n^{−1/2}A(θ*)V_i, θ*, n) φ(V_i) / g(V_i)   (4)
is an unbiased estimator for f_n(x_0).

3.1.1. Approximating the zero variance estimator

Note that to get a zero variance estimator for the above integral we need g(v) ∝ ψ(n^{−1/2}A(θ*)v, θ*, n) φ(v). We now argue that
ψ(n^{−1/2}A(θ*)v, θ*, n) → 1   (5)
for all ||v|| = o(n^{1/6}).

We may then select an IS density g that is asymptotically similar to φ for ||v|| = o(n^{1/6}). In the further tails, we allow g to have fatter, power law tails. This ensures that large values of V in the simulation do not contribute substantially to the variance.

Further analysis is needed to see (5). Note from the definition of η(v, θ) that η(0, θ) = 0, η''(0, θ) = 0 and
η'''(v, θ) = ι³ Λ'''(θ + ιv)   (6)
for all θ, while
η'(0, θ*) = 0   (7)
for the saddle point θ*. Here η', η'' and η''' are the first, second and third derivatives of η with respect to v, with θ held fixed. Note that while η' and η'' are a d-dimensional vector and a d×d matrix, respectively, η'''(v, θ) is the array of numbers ((∂³η/(∂v_i ∂v_j ∂v_k)(v, θ)))_{1≤i,j,k≤d}. The following notation aids in dealing with such quantities: if A = (a_{ijk})_{1≤i,j,k≤d} is a d×d×d array of numbers, u = (u_1, u_2,..., u_d) is a d-dimensional vector and B is a d×d matrix, then we use the notation A ∘ u = Σ_{1≤i,j,k≤d} a_{ijk} u_i u_j u_k and A ∘ B = (c_{ijk})_{1≤i,j,k≤d}, where c_{ijk} = Σ_{m,n,p} a_{mnp} b_{mi} b_{nj} b_{pk}. It then follows that A ∘ (Bu) = (A ∘ B) ∘ u. Since it follows from the three term Taylor series expansion and (6) and (7) above that ψ(n^{−1/2}A(θ*)v, θ*, n) equals
exp{n η(n^{−1/2}A(θ*)v, θ*)} = exp{(1/(6√n)) Λ'''(θ* + ι n^{−1/2}A(θ*)ṽ) ∘ (ι A(θ*)v)}
(with ṽ between v and the origin), continuity of Λ''' in a neighborhood of θ* implies (5).

3.1.2. Proposed importance sampling density

We now define the form of the IS density g. We first show its parametric structure and then specify the parameters that achieve asymptotically vanishing relative error. For a ∈ (0, ∞), b ∈ (0, 1) and α ∈ (1, ∞), set
g(v) = b φ(v) when ||v|| < a, and g(v) = C ||v||^{−α} when ||v|| ≥ a.   (8)
Note that if we put
p := ∫_{||v||<a} g(v) dv = b ∫_{||v||<a} φ(v) dv = b IG(d/2, a²/2),
where IG(ω, x) = (1/Γ(ω)) ∫_0^x e^{−t} t^{ω−1} dt is the incomplete Gamma integral (or the Gamma distribution function; see, e.g., [7]), then
C = (1 − p) / ∫_{||v||≥a} ||v||^{−α} dv > 0, provided p < 1.
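As a hedged illustration of the estimator (4) in one dimension, the following Python sketch (ours, not the authors' code) takes iid Exp(1) variables, for which the exact density of X̄_n is a scaled Gamma density, samples from a density of the form (8) with arbitrary illustrative constants in place of the n-dependent a_n, b_n specified below, and averages the real part of the integrand (the imaginary part integrates to zero under any symmetric g).

import numpy as np
from scipy.stats import norm, gamma

def sample_g(rng, size, a, b, alpha):
    # draw from the one-dimensional density (8): b*phi on |v| < a, Pareto-type tail beyond
    p = b * (norm.cdf(a) - norm.cdf(-a))
    v = np.empty(size)
    core = rng.uniform(size=size) < p
    k = int(core.sum())
    kept = np.empty(0)
    while kept.size < k:                         # truncated normal core by rejection
        z = rng.standard_normal(2 * k + 10)
        kept = np.concatenate([kept, z[np.abs(z) < a]])
    v[core] = kept[:k]
    u = rng.uniform(size=size - k)               # tail by inverse transform, random sign
    v[~core] = a * u ** (-1.0 / (alpha - 1.0)) * rng.choice([-1.0, 1.0], size=size - k)
    return v

def g_pdf(v, a, b, alpha):
    p = b * (norm.cdf(a) - norm.cdf(-a))
    c = (1.0 - p) * (alpha - 1.0) * a ** (alpha - 1.0) / 2.0   # tail normaliser
    return np.where(np.abs(v) < a, b * norm.pdf(v), c * np.abs(v) ** (-alpha))

def sp_is_density(x0, n=30, a=1.0, b=0.9, alpha=2.0, num_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    theta = 1.0 - 1.0 / x0                       # saddle point for Exp(1)
    Lam = lambda z: -np.log(1.0 - z)             # CGF, valid for Re(z) < 1
    Lpp = 1.0 / (1.0 - theta) ** 2               # Lambda''(theta)
    v = sample_g(rng, num_samples, a, b, alpha)
    y = v / np.sqrt(n * Lpp)                     # n^{-1/2} A(theta) v
    eta = 0.5 * Lpp * y ** 2 + Lam(theta + 1j * y) - (theta + 1j * y) * x0 - Lam(theta) + theta * x0
    w = np.real(np.exp(n * eta)) * norm.pdf(v) / g_pdf(v, a, b, alpha)
    prefac = np.sqrt(n / (2 * np.pi)) * np.exp(n * (Lam(theta) - theta * x0)) / np.sqrt(Lpp)
    return prefac * w.mean()

if __name__ == "__main__":
    n, x0 = 30, 1.5
    print(sp_is_density(x0, n), n * gamma.pdf(n * x0, a=n))   # estimate vs exact density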

The following assumption is important for coming up with the parameters of the proposed IS density.

Assumption 1. There exist α_0 > 1 and γ ≥ 1 such that ∫_{u∈R^d} ||u||^{α_0} |ϕ(u)|^γ du < ∞.

By the Riemann-Lebesgue lemma, if the probability distribution of X_1 is given by a density function, then ϕ(u) → 0 as ||u|| → ∞. Assumption 1 is easily seen to hold when |ϕ(u)| decays as a power law as ||u|| → ∞. This is true, for example, for Gamma distributed random variables. More generally, this holds when the underlying density has integrable higher derivatives (see [11]): if the k-th order derivative of the underlying density is integrable then, for any α_0, Assumption 1 holds with γ > (1 + α_0)/k.

To specify the parameters of the IS density we need further analysis. Define
ϕ_θ(u) := E_θ[e^{ιu·(X − x_0)}] = e^{−ιu·x_0} M(θ + ιu)/M(θ),
where E_θ denotes the expectation operator under the distribution F_θ. Let
h(x) := 1 − sup_{||u||≥x} |ϕ_θ(u)|.   (9)
Then 0 ≤ h(x) ≤ 1, h(0) = 0, h(x) is continuous, non-decreasing and h(x) → 1 as x → ∞. Further, since ϕ is the characteristic function of a non-lattice distribution, h(x) > 0 if x > 0. We define h^{−1}(y) = min{z : h(z) ≥ y} for y ∈ (0, 1). Then for any y ∈ (0, 1) we have h(h^{−1}(y)) ≥ y and h^{−1}(z) → 0 as z → 0.

Let {s_n}_{n≥1} be any sequence such that, as n → ∞, s_n → 0; for any β positive, (1 − s_n)^n n^β → 0; and √n h^{−1}(s_n) → ∞. Taking s_n to be of order n^{−ɛ} for ɛ ∈ (0, 1) satisfies these three properties (see [9] for this and for further discussion on how {s_n} may be selected in practice). Set δ_3(n) := h^{−1}(s_n). Then it follows that if x ≥ δ_3(n) then h(x) ≥ s_n. Equivalently, |ϕ_θ(u)| ≤ 1 − s_n for all ||u|| ≥ δ_3(n).

Let κ_min and κ_max denote the minimum and maximum eigenvalues of Λ''(θ*), respectively. Hence κ_min^{−1} is the maximum eigenvalue of Λ''(θ*)^{−1} = A(θ*)A(θ*). Therefore, κ_min^{−1/2} = ||A(θ*)||.

Next, put δ_2(n) = κ_max^{1/2} δ_3(n). Then √n δ_2(n) → ∞ and ||v|| ≥ √n δ_2(n) implies ||n^{−1/2}A(θ*)v|| ≥ δ_3(n). Also let δ_1(n) = κ_min^{−1/2} δ_2(n) = (κ_max/κ_min)^{1/2} δ_3(n), so that ||v|| < √n δ_2(n) implies ||n^{−1/2}A(θ*)v|| < δ_1(n).

Now we are in a position to specify the parameters of the proposed IS density. Set α = α_0 and a_n = √n δ_2(n). Let p_n = b_n IG(d/2, a_n²/2). For g to be a valid density function, we need p_n < 1. Since IG(d/2, a_n²/2) → 1, select b_n to be a sequence of positive real numbers that converge to 1 in such a way that b_n < 1/IG(d/2, a_n²/2) and
lim_n (1 − s_n)^n n^{(d+α)/2} [1 − b_n IG(d/2, a_n²/2)]^{−1} = 0.   (10)
For example, b_n = 1 − n^{−ξ} for any ξ > 0 satisfies (10). For each n, let g_n denote the pdf of the form (8) with parameters α, a_n and b_n chosen as above. Let E_n and Var_n denote the expectation and variance, respectively, with respect to the density g_n.

Theorem 3.1. Suppose Assumption 1 holds and θ* ∈ Θ°. Then
E_n[ψ²(n^{−1/2}A(θ*)V, θ*, n) φ²(V) / g_n²(V)] = ∫_{v∈R^d} ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v) / g_n(v) dv = 1 + o(n^{−1/2}).
Consequently, from Proposition 3.2, it follows that
Var_n[ψ(n^{−1/2}A(θ*)V_i, θ*, n) φ(V_i) / g_n(V_i)] → 0 as n → ∞,
so that the proposed estimators for (f_n(x_0) : n ≥ 1) have an asymptotically vanishing relative error.

We will use the following lemma from [11].

Lemma 1. For any λ, β ∈ C, |exp(λ) − 1 − β| ≤ (|λ − β| + |β|²/2) exp(ω) for all ω ≥ max{|λ|, |β|}.

Also note that from the definitions of ψ and η it follows that, for any θ ∈ Θ, exp{−v·v/2} ψ(n^{−1/2}A(θ)v, θ, n) is a characteristic function. To see this, observe that exp{−v·v/2} ψ(n^{−1/2}A(θ)v, θ, n) equals
[exp{−v·v/(2n) + η(n^{−1/2}A(θ)v, θ)}]^n = (E_θ[e^{ι n^{−1/2}A(θ)v·(X − x_0)}])^n = [ϕ_θ(n^{−1/2}A(θ)v)]^n.

Some more observations are useful for proving Theorem 3.1. Since η''' is continuous, it follows from the three term Taylor series expansion
η(v, θ) = η(0, θ) + η'(0, θ)·v + (1/2) v^T η''(0, θ) v + (1/6) η'''(ṽ, θ) ∘ v
(where ṽ is between v and the origin) and (6) and (7) above that there exists a sequence {ε_n} of positive numbers converging to zero so that
|η(v, θ*) − (1/3!) η'''(0, θ*) ∘ v| ≤ ε_n (κ_min)^{3/2} ||v||³ for ||v|| < δ_1(n),
or equivalently
|η(v, θ*) − (1/3!) Λ'''(θ*) ∘ (ιv)| ≤ ε_n (κ_min)^{3/2} ||v||³ for ||v|| < δ_1(n).   (11)
Furthermore, for n sufficiently large,
(1/3!) |Λ'''(θ*) ∘ (ιv)| < (1/8) κ_min ||v||²   (12)
and
|η(v, θ*)| < (1/8) κ_min ||v||²   (13)
for all ||v|| < δ_1(n). We shall assume that n is sufficiently large so that (12) and (13) hold in the remaining analysis.

Proof. (Theorem 3.1.) We write
∫_{v∈R^d} ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v) / g_n(v) dv = I_3 + I_4,
where I_3 equals
∫_{||v||<√n δ_2(n)} ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v) / g_n(v) dv and I_4 = ∫_{||v||≥√n δ_2(n)} ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v) / g_n(v) dv.
From (8) we see that I_3 equals
(1/b_n) ∫_{||v||<√n δ_2(n)} ψ²(n^{−1/2}A(θ*)v, θ*, n) φ(v) dv and I_4 = (1/C_n) ∫_{||v||≥√n δ_2(n)} ||v||^α ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v) dv.
For any c > 0, put Φ_d(c) := ∫_{||v||<c} φ(v) dv (= IG(d/2, c²/2)). By the triangle inequality
|I_3 − 1| ≤ |I_3 − Φ_d(√n δ_2(n))/b_n| + |Φ_d(√n δ_2(n))/b_n − 1|.

Since as n → ∞ we have Φ_d(√n δ_2(n)) → 1 and b_n → 1, the second term on the right hand side converges to zero. Writing ζ_3(θ*) = Λ'''(θ*) ∘ A(θ*), for the first term we have
|I_3 − Φ_d(√n δ_2(n))/b_n| = (1/b_n) |∫_{||v||<√n δ_2(n)} {ψ²(n^{−1/2}A(θ*)v, θ*, n) − 1} φ(v) dv|
= (1/b_n) |∫_{||v||<√n δ_2(n)} {ψ²(n^{−1/2}A(θ*)v, θ*, n) − 1 − (ζ_3(θ*)/(3√n)) ∘ (ιv)} φ(v) dv|
≤ (1/b_n) (2π)^{−d/2} ∫_{||v||<√n δ_2(n)} |ψ²(n^{−1/2}A(θ*)v, θ*, n) − 1 − (ζ_3(θ*)/(3√n)) ∘ (ιv)| e^{−||v||²/2} dv.
We apply Lemma 1 with λ = 2n η(n^{−1/2}A(θ*)v, θ*) and β = (2n/3!) Λ'''(θ*) ∘ (ι n^{−1/2}A(θ*)v). Since β = n^{−1/2}P(v), where P is a homogeneous polynomial whose coefficients do not depend on n, and ||v|| < √n δ_2(n) implies ||n^{−1/2}A(θ*)v|| < δ_1(n), we have from (13), (12) and (11), respectively,
|λ| = 2n |η(n^{−1/2}A(θ*)v, θ*)| < (2n/8) κ_min ||n^{−1/2}A(θ*)v||² ≤ (1/4) κ_min ||A(θ*)||² ||v||² = ||v||²/4,
|β| = (2n/3!) |Λ'''(θ*) ∘ (ι n^{−1/2}A(θ*)v)| < (2n/8) κ_min ||n^{−1/2}A(θ*)v||² ≤ (1/4) κ_min ||A(θ*)||² ||v||² = ||v||²/4,
and λ − β satisfies
|λ − β| = 2n |η(n^{−1/2}A(θ*)v, θ*) − (1/3!) Λ'''(θ*) ∘ (ι n^{−1/2}A(θ*)v)| < 2n ε_n (κ_min)^{3/2} ||n^{−1/2}A(θ*)v||³ ≤ (2ε_n/√n) ||v||³.
From Lemma 1 it now follows that the integrand in the last integral is dominated by
((2ε_n/√n) ||v||³ + (1/(2n)) P²(v)) exp{||v||²/4} exp{−||v||²/2} = exp{−||v||²/4} ((2ε_n/√n) ||v||³ + (1/(2n)) P²(v)).
Therefore we have I_3 = 1 + o(n^{−1/2}). Also
I_4 ≤ (1/C_n) (2π)^{−d} ∫_{||v||≥√n δ_2(n)} ||v||^α exp{−||v||²} ψ²(n^{−1/2}A(θ*)v, θ*, n) dv
= (1/C_n) (2π)^{−d} ∫_{||v||≥√n δ_2(n)} ||v||^α |ϕ_θ*(n^{−1/2}A(θ*)v)|^{2n} dv
≤ (1/C_n) (1 − s_n)^{2n−γ} (2π)^{−d} ∫_{v∈R^d} ||v||^α |ϕ_θ*(n^{−1/2}A(θ*)v)|^γ dv
≤ (1/C_n) (1 − s_n)^{2n−γ} n^{(d+α)/2} (2π)^{−d} √(det Λ''(θ*)) ||A(θ*)^{−1}||^α ∫_{u∈R^d} ||u||^α |ϕ_θ*(u)|^γ du
≤ (1 − s_n)^{2n−γ} n^{(d+α)/2} (D/(1 − p_n)) ∫_{||v||≥√n δ_2(n)} ||v||^{−α} dv ∫_{u∈R^d} ||u||^α |ϕ_θ*(u)|^γ du,
where D is a constant independent of n.

By Assumption 1, the above integral over u is finite. For large n we also have ∫_{||v||≥√n δ_2(n)} ||v||^{−α} dv ≤ ∫_{||v||≥1} ||v||^{−α} dv. By the choice of b_n we can conclude that I_4 → 0 as n → ∞, proving Theorem 3.1.

4. Efficient Estimation of Tail Probability

In this section we consider the problem of efficient estimation of P(X̄_n ∈ A) for sets A that are affine transformations of the non-negative orthant R_+^d, along with some minor variations. As in [4], the dominating point of the set A plays a crucial role in our analysis. As is well known, a point x_0 is called a dominating point of A if x_0 uniquely satisfies the following properties: i) x_0 is on the boundary of A; ii) there exists a unique θ* ∈ R^d with Λ'(θ*) = x_0; iii) A ⊆ {x : θ*·(x − x_0) ≥ 0}. In the remaining paper, we assume the existence of a dominating point x_0 for A.

Our estimation relies on a saddle point representation of P(X̄_n ∈ A) obtained using Parseval's relation. Let Y_n := √n(X̄_n − x_0) and A_{n,x_0} := √n(A − x_0), where x_0 = (x_0^1, x_0^2,..., x_0^d) is an arbitrarily chosen point in R^d. Let h_{n,θ,x_0}(y) be the density function of Y_n when each X_i has the distribution function F_θ obtained by exponentially twisting F by θ. That is, dF_θ(x) = exp(θ·x) M(θ)^{−1} dF(x) = exp{θ·x − Λ(θ)} dF(x). An exact expression for the tail probability is given by
P[X̄_n ∈ A] = P[Y_n ∈ A_{n,x_0}] = e^{−n{θ*·x_0 − Λ(θ*)}} ∫_{y∈A_{n,x_0}} e^{−√n(θ*·y)} h_{n,θ*,x_0}(y) dy,   (14)
where recall that θ* ∈ Θ° is a solution to Λ'(θ) = x_0, and x_0 is the dominating point of A. Define
c(n, θ*, x_0) = ∫_{y∈A_{n,x_0}} exp{−√n(θ*·y)} dy = n^{d/2} ∫_{w∈(A−x_0)} exp{−n(θ*·w)} dw.
We need the following assumption.

Assumption 2. For all n ≥ 1, c(n, θ*, x_0) < ∞.

Since x_0 is a dominating point of A, for any y ∈ A_{n,x_0} we have θ*·y ≥ 0. Hence, if A is a set with finite Lebesgue measure then c(n, θ*, x_0) is finite. Assumption 2 may hold even when A has infinite Lebesgue measure, as Example 1 below illustrates.

When Assumption 2 holds, we can rewrite the right hand side of (14) as
c(n, θ*, x_0) e^{−n{θ*·x_0 − Λ(θ*)}} ∫_{y∈A_{n,x_0}} r_{n,θ*,x_0}(y) h_{n,θ*,x_0}(y) dy,   (15)
where r_{n,θ*,x_0}(y) is a density function that equals exp{−√n(θ*·y)}/c(n, θ*, x_0) for y ∈ A_{n,x_0} and 0 otherwise. Let ρ̄_{n,θ*,x_0}(t) denote the complex conjugate of the characteristic function of r_{n,θ*,x_0}(y). Since the characteristic function of h_{n,θ*,x_0} equals e^{−ιt·√n x_0} [M(θ* + ιt/√n)/M(θ*)]^n, by Parseval's relation, (15) is equal to
(1/(2π))^d c(n, θ*, x_0) e^{−n{θ*·x_0 − Λ(θ*)}} ∫_{t∈R^d} ρ̄_{n,θ*,x_0}(t) e^{−ιt·√n x_0} [M(θ* + ιt/√n)/M(θ*)]^n dt.   (16)
This in turn, by the change of variable t = A(θ*)v and a rearrangement of terms, equals
[c(n, θ*, x_0) e^{−n{θ*·x_0 − Λ(θ*)}} / √(det Λ''(θ*))] (1/(2π))^{d/2} ∫_{v∈R^d} ρ̄_{n,θ*,x_0}(A(θ*)v) ψ(n^{−1/2}A(θ*)v, θ*, n) φ(v) dv.   (17)
We need another assumption to facilitate the analysis.

Assumption 3. For all t ∈ R^d, lim_{n→∞} ρ̄_{n,θ*,x_0}(t) = 1.

Proposition 4.1. Suppose A has a dominating point x_0, the associated θ* ∈ Θ°, and Λ''(θ*) is strictly positive definite. Further, suppose Assumptions 2 and 3 hold. Then
P[X̄_n ∈ A] ~ (1/(2π))^{d/2} c(n, θ*, x_0) e^{−n{θ*·x_0 − Λ(θ*)}} / √(det Λ''(θ*)),   (18)
or, equivalently by (17),
lim_{n→∞} ∫_{v∈R^d} ρ̄_{n,θ*,x_0}(A(θ*)v) ψ(n^{−1/2}A(θ*)v, θ*, n) φ(v) dv = 1.   (19)
The proof of Proposition 4.1 is similar to that of Proposition 3.2 and is omitted (see [9]).

Let g be any density supported on R^d. If V_1, V_2,..., V_N are iid with distribution given by the density g, then an unbiased estimator for P[X̄_n ∈ A] is given by
P̂[X̄_n ∈ A] = (1/(2π))^{d/2} [c(n, θ*, x_0) e^{−n{θ*·x_0 − Λ(θ*)}} / √(det Λ''(θ*))] (1/N) Σ_{j=1}^N ρ̄_{n,θ*,x_0}(A(θ*)V_j) ψ(n^{−1/2}A(θ*)V_j, θ*, n) φ(V_j) / g(V_j).   (20)

Note that for the above estimator to be useful, one must be able to find closed form expressions for c(n, θ*, x_0) and ρ̄_{n,θ*,x_0}(t), or these should be cheaply computable. In Section 4.1, we consider some examples where we explicitly compute c(n, θ*, x_0) and ρ̄_{n,θ*,x_0} and verify Assumptions 2 and 3.

Theorem 4.1. Under Assumptions 1, 2 and 3,
E_n[ρ̄²_{n,θ*,x_0}(A(θ*)V) ψ²(n^{−1/2}A(θ*)V, θ*, n) φ²(V) / g_n²(V)] = 1 + o(n^{−1/2}) as n → ∞,
where g_n is the same as in Theorem 3.1. Consequently, by Proposition 4.1, it follows that
Var_n[ρ̄_{n,θ*,x_0}(A(θ*)V) ψ(n^{−1/2}A(θ*)V, θ*, n) φ(V) / g_n(V)] → 0 as n → ∞,
and the proposed estimator P̂[X̄_n ∈ A] has asymptotically vanishing relative error.

Proof. The proof follows along the same lines as the proof of Theorem 3.1. We write
∫_{v∈R^d} ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v) / g_n(v) dv = I_5 + I_6,
where
I_5 = ∫_{||v||<√n δ_2(n)} ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v)/g_n(v) dv = (1/b_n) ∫_{||v||<√n δ_2(n)} ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) φ(v) dv,
I_6 = ∫_{||v||≥√n δ_2(n)} ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v)/g_n(v) dv = (1/C_n) ∫_{||v||≥√n δ_2(n)} ρ̄²_{n,θ*,x_0}(A(θ*)v) ||v||^α ψ²(n^{−1/2}A(θ*)v, θ*, n) φ²(v) dv.
Now
|I_5 − 1| ≤ (1/b_n) |∫_{||v||<√n δ_2(n)} {ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) − 1} φ(v) dv| + o(1)
= (1/b_n) |∫_{||v||<√n δ_2(n)} {ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) − 1 − (ζ_3(θ*)/(3√n)) ∘ (ιv)} φ(v) dv| + o(1)
≤ (1/b_n) ∫_{||v||<√n δ_2(n)} |ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) − 1 − (ζ_3(θ*)/(3√n)) ∘ (ιv)| φ(v) dv + o(1)
≤ (1/b_n) ∫_{||v||<√n δ_2(n)} |ψ²(n^{−1/2}A(θ*)v, θ*, n) − 1 − (ζ_3(θ*)/(3√n)) ∘ (ιv)| φ(v) dv + o(1).
Now, as in the case of Theorem 3.1, we conclude that I_5 = 1 + o(n^{−1/2}). Also, since
I_6 ≤ (1/C_n) (2π)^{−d} ∫_{||v||≥√n δ_2(n)} ||v||^α ρ̄²_{n,θ*,x_0}(A(θ*)v) ψ²(n^{−1/2}A(θ*)v, θ*, n) e^{−||v||²} dv
≤ (1/C_n) (2π)^{−d} ∫_{||v||≥√n δ_2(n)} ||v||^α exp{−||v||²} ψ²(n^{−1/2}A(θ*)v, θ*, n) dv,
we conclude that I_6 → 0 as n → ∞, proving the theorem.

4.1. Examples

Example 1. Let A = x_0 + R_+^d, where x_0 = (x_0^1, x_0^2,..., x_0^d) is a given point in R^d. Further suppose that θ*_i > 0 for all i = 1, 2,..., d. It is easy to see that the existence of such a θ* implies that x_0 is a dominating point for A. It also follows that Assumption 2 holds and c(n, θ*, x_0) = 1/(n^{d/2} θ*_1 θ*_2 ··· θ*_d). It can easily be verified that
ρ̄_{n,θ*,x_0}(t_1, t_2,..., t_d) = ∏_{i=1}^d (1 + ι t_i/(√n θ*_i))^{−1}.
Therefore Assumption 3 also holds in this case. By Proposition 4.1, we then have
P[X̄_n ∈ x_0 + R_+^d] ~ e^{n{Λ(θ*) − θ*·x_0}} / ((2πn)^{d/2} √(det Λ''(θ*)) θ*_1 θ*_2 ··· θ*_d).
By Theorem 4.1, (20) is an unbiased estimator for P[X̄_n ∈ x_0 + R_+^d] and has asymptotically vanishing relative error.

Example 2. When A = x_0 + B R_+^d with B a nonsingular matrix, the problem can be reduced to that considered in Example 1 by a simple change of variable. Set y = Bz. Then it follows that, for any θ,
c(n, θ, x_0) = |det(B)| ∫_{z∈R_+^d} exp{−√n(B^T θ·z)} dz.
Now if we assume that all the d components of B^T θ* are positive, then, as in Example 1, both Assumptions 2 and 3 hold.

For 1 ≤ d_1 < d, let Q_{d_1}^+ := {(x^1, x^2,..., x^d) ∈ R^d : x^i ≥ 0, 1 ≤ i ≤ d_1}. Similar analysis holds when A = x_0 + B Q_{d_1}^+ with B a nonsingular matrix. Then the simple change of variable y = Bz reduces the problem to a lower dimensional one as in Example 1, with d replaced by d_1.
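The closed forms of Example 1 can be checked numerically. The following small Python script (ours, not from the paper) evaluates, for d = 1 and iid Exp(1) variables, the approximation (18) with c(n, θ*, x_0) = 1/(√n θ*), and compares it with the exact Gamma tail probability; the residual gap is what the simulation estimator (20) of the integral corrects.

import numpy as np
from scipy.stats import gamma

def example1_approx(n, x0):
    theta = 1.0 - 1.0 / x0                      # saddle point, Lambda'(theta) = x0
    lam = -np.log(1.0 - theta)                  # Lambda(theta)
    lam2 = 1.0 / (1.0 - theta) ** 2             # Lambda''(theta)
    c = 1.0 / (np.sqrt(n) * theta)              # c(n, theta*, x0) for A = [x0, inf), d = 1
    return c * np.exp(-n * (theta * x0 - lam)) / np.sqrt(2 * np.pi * lam2)

if __name__ == "__main__":
    x0 = 1.5
    for n in (10, 30, 100):
        print(n, example1_approx(n, x0), gamma.sf(n * x0, n))   # asymptotic vs exact tail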

Example 3. In the above examples we have considered sets A which are unbounded. In this example we show that a similar analysis holds when the set A is bounded. Consider the three increasing regions (A^{(i)} : i = 1, 2, 3), where A^{(3)} corresponds to the region A considered in Example 1, A^{(1)} is the d-dimensional rectangle given by ∏_{i=1}^d [x_0^i, x_0^i + D_i], and A^{(2)} is such that A^{(1)} ⊆ A^{(2)} ⊆ A^{(3)}. Then x_0 is the common dominating point for all three sets. Again suppose that θ*_i > 0 for i = 1, 2,..., d. Suppressing the dependence on x_0 and θ*, for i = 1, 2, let
c_n^{(i)} := ∫_{y∈√n(A^{(i)}−x_0)} exp{−√n(θ*·y)} dy and ρ̄_n^{(i)}(t) := (1/c_n^{(i)}) ∫_{y∈√n(A^{(i)}−x_0)} exp{−ιt·y − √n(θ*·y)} dy.
Then
c_n^{(1)} = (1 − e^{−nθ*_1 D_1})(1 − e^{−nθ*_2 D_2}) ··· (1 − e^{−nθ*_d D_d}) / (n^{d/2} θ*_1 θ*_2 ··· θ*_d)
and
ρ̄_n^{(1)}(t_1, t_2,..., t_d) = ∏_{i=1}^d (1 + ι t_i/(√n θ*_i))^{−1} (1 − e^{−nθ*_i D_i (1 + ι t_i/(√n θ*_i))}) / (1 − e^{−nθ*_i D_i}).
Therefore, it follows that Assumption 3 holds for A^{(1)}. Also note that
|ρ̄_n^{(2)}(t) − 1| ≤ (1/c_n^{(2)}) ∫_{y∈√n(A^{(2)}−x_0)} exp{−√n(θ*·y)} |e^{−ιt·y} − 1| dy
= (n^{d/2}/c_n^{(2)}) ∫_{z∈(A^{(2)}−x_0)} exp{−n(θ*·z)} |e^{−ι√n t·z} − 1| dz
≤ (n^{d/2}/c_n^{(2)}) ∫_{z∈R_+^d} exp{−n(θ*·z)} |e^{−ι√n t·z} − 1| dz.
Since the last term converges to zero, it follows that Assumption 3 holds for A^{(2)}. Similar analysis carries over if these sets are transformed using a nonsingular matrix B under the conditions of Example 2.

In Example 1 we assumed that θ*_i > 0 for i = 1, 2,..., d. In many settings this may not be true, but the problem can be easily transformed to be amenable to the proposed algorithms. This is discussed further in [9].

4.2. Estimating expected overshoot

The methodology developed previously to estimate the tail probability P(X̄_n ∈ A) can be extended to estimate E[X̄_n^α ; X̄_n ∈ A] for α ∈ (Z_+ ∪ {0})^d. We illustrate this in a single dimension setting (d = 1) for α = 1 and A = (x_0, ∞) with x_0 > EX_1. Let S_n = Σ_{i=1}^n X_i. In finance and in insurance one is often interested in estimating E[(S_n − nx_0) | S_n > nx_0], which is known as the expected overshoot or the peak over threshold. As we have an efficient estimator for P(X̄_n > x_0), the problem of efficiently estimating E[S_n − nx_0 | S_n > nx_0] is equivalent to that of efficiently estimating E[(S_n − nx_0) I(S_n > nx_0)]. Note that
E[(S_n − nx_0) I(S_n > nx_0)] = √n E[Y_n I(Y_n > 0)],
where Y_n = √n(X̄_n − x_0). Using (14) we get
E[Y_n I(Y_n > 0)] = e^{−n{θ* x_0 − Λ(θ*)}} ∫_0^∞ y e^{−√n θ* y} h_{n,θ*,x_0}(y) dy,   (21)
where recall that θ* ∈ Θ is the solution to Λ'(θ) = x_0 and h_{n,θ*,x_0}(y) is the density of Y_n when each X_i has distribution F_θ*. Define
c(n, θ*) = ∫_0^∞ y exp{−√n θ* y} dy = (√n θ*)^{−2}.
Hence, for all n ≥ 1, c(n, θ*) < ∞. The right hand side of (21) may be re-expressed as
c(n, θ*) e^{−n{θ* x_0 − Λ(θ*)}} ∫_0^∞ r_{n,θ*}(y) h_{n,θ*,x_0}(y) dy,   (22)
where the density function r_{n,θ*}(y) = y exp{−√n θ* y}/c(n, θ*) for y > 0, and zero otherwise. Let ρ̄_{n,θ*}(t) denote the complex conjugate of the characteristic function of r_{n,θ*}(y). By simple calculations, it follows that
ρ̄_{n,θ*}(t) = (1 + ιt/(√n θ*))^{−2}
and lim_n ρ̄_{n,θ*}(t) = 1. Then, repeating the analysis for the tail probability, analogously to (17), we see that (22) equals
[c(n, θ*) e^{−n{θ* x_0 − Λ(θ*)}} / √(2π Λ''(θ*))] ∫_{v∈R} ρ̄_{n,θ*}(A(θ*)v) ψ(n^{−1/2}A(θ*)v, θ*, n) φ(v) dv.
As in Proposition 4.1, we can see that
E[(S_n − nx_0) I(S_n > nx_0)] ~ √n c(n, θ*) e^{−n{θ* x_0 − Λ(θ*)}} / √(2π Λ''(θ*)) = e^{−n{θ* x_0 − Λ(θ*)}} / (θ*² √(2πn Λ''(θ*))),
so that E[(S_n − nx_0) I(S_n > nx_0)] / P[S_n > nx_0] → 1/θ*. Using analysis identical to that in Theorem 4.1, it follows that the resulting unbiased estimator of E[(S_n − nx_0) I(S_n > nx_0)] (when density g_n is used) has an asymptotically vanishing relative error.
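The limit E[(S_n − nx_0) I(S_n > nx_0)] / P[S_n > nx_0] → 1/θ* can be checked numerically without simulation for iid Exp(1) variables, since both quantities are available through the Gamma distribution; the following small Python script (ours, not from the paper) does so.

import numpy as np
from scipy.stats import gamma

def overshoot_ratio(n, x0):
    a = n * x0
    tail = gamma.sf(a, n)                        # P(S_n > a), with S_n ~ Gamma(n, 1)
    # E[(S_n - a)^+] = n * P(Gamma(n+1, 1) > a) - a * P(Gamma(n, 1) > a)
    overshoot = n * gamma.sf(a, n + 1) - a * tail
    return overshoot / tail

if __name__ == "__main__":
    x0 = 1.5
    print(1.0 / (1.0 - 1.0 / x0))                # the limit 1/theta* = 3
    for n in (10, 50, 200):
        print(n, overshoot_ratio(n, x0))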

5. Numerical Experiments

5.1. Estimation of probability density function of X̄_n

We first use the proposed method to estimate the probability density function of X̄_n for the case where the sequence of random variables (X_i : i ≥ 1) is independent and identically exponentially distributed with mean 1. Then the sum has a known gamma density function, facilitating comparison of the estimated value with the true value. The density function estimates using the proposed method (referred to as the SP-IS method) are evaluated for n = 30 and p_n = 0.9, with a_n and α held fixed (the algorithm performance was observed to be relatively insensitive to small perturbations in these values; see [9] for a discussion of how these parameters may be selected), based on N generated samples. Table 1 shows the comparison of our method with the conditional Monte Carlo (CMC) method proposed in Asmussen and Glynn (2008) for estimating the density function of X̄_n at a few values. As discussed in Asmussen and Glynn (2008), the CMC estimates are given by an average of N independent samples of n f(nx − S_{n−1}), where S_{n−1} is generated by sampling (X_1,..., X_{n−1}) using their original density function f. Figure 1 shows this comparison graphically over a wider range of density function values. As may be expected, the proposed method provides an estimator with much smaller variance compared with the CMC method.

Table 1 (columns: x, true value, SP-IS estimate and sample variance, CMC estimate and sample variance; numerical entries not reproduced): True density function and its estimates using the proposed (SP-IS) method and the conditional Monte Carlo (CMC) method for an average of 30 independent exponentially distributed mean 1 random variables. For two of the x values the number of generated samples is N = 1,000 in both methods, and for the third, N = 10,000.
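For completeness, here is a sketch of the CMC benchmark just described (our reading of the estimator n f(nx − S_{n−1}), not the authors' code) for iid Exp(1) variables, together with the exact Gamma density used as the reference value.

import numpy as np
from scipy.stats import gamma

def cmc_density(x, n=30, num_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.exponential(size=(num_samples, n - 1)).sum(axis=1)   # S_{n-1}
    f = lambda t: np.where(t > 0, np.exp(-t), 0.0)               # Exp(1) density
    vals = n * f(n * x - s)                                      # n * f(n*x - S_{n-1})
    return vals.mean(), vals.var(ddof=1)

def exact_density(x, n=30):
    # Xbar_n = Gamma(n, 1)/n, so its density at x is n * gamma(n).pdf(n*x)
    return n * gamma.pdf(n * x, a=n)

if __name__ == "__main__":
    for x in (1.5, 2.0, 2.5):
        est, var = cmc_density(x)
        print(x, exact_density(x), est, var)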

Figure 1 (plot omitted): True density function and its estimates using the proposed (SP-IS) method and the conditional Monte Carlo (CMC) method for an average of 30 independent exponentially distributed mean 1 random variables. The plot illustrates the performance of the two methods over a wide range of x values. In both simulations N = 1,000 at each point.

5.2. Comparison with state dependent exponential twisting

We compare the efficiency of the SP-IS method for estimating the tail probability P(X̄_n ∈ A) with the optimal state dependent exponential twisting method proposed by [3] (referred to as the BGL method). They restrict their analysis to convex sets A with twice continuously differentiable boundary, whereas the SP-IS method is applicable to sets that are affine transformations of the non-negative orthant R_+^d. The two methods agree in the single dimension setting and hence we compare them on a single dimension example (see [9] for a numerical comparison of the SP-IS method with the one proposed by Sadowsky and Bucklew (1990) in the multi-dimension setting). For a sequence of random variables (X_i : i ≥ 1) that are independent and identically exponentially distributed with mean 1, P(X̄_n ≥ 1.5) is estimated for different values of n. Table 2 reports the estimates based on different numbers N of generated samples. In this experiment, a_n and α are held fixed and p_n = 0.9 for the SP-IS method. The BGL method is implemented as per [3] as follows: first, X_1 is generated using an exponentially twisted distribution with mean x_0 = 1.5. At each subsequent step, the exponential twisting coefficient in the distribution used to generate X_{k+1} is recomputed so that the mean of the twisted distribution is (nx_0 − Σ_{i=1}^k X_i)/(n − k). The exponential twisting is dynamically updated until the generated Σ_{i=1}^k X_i ≥ nx_0, at which point we stop the importance sampling and sample the rest of the n − k values from the original distribution.

In the other case, if the distance to the boundary, nx_0 − Σ_{i=1}^k X_i, is sufficiently large relative to the remaining time horizon n − k (that is, (nx_0 − Σ_{i=1}^k X_i)/(n − k) ≥ x_0), then we generate the next n − k samples from the exponentially twisted distribution with mean (nx_0 − Σ_{i=1}^k X_i)/(n − k); a sketch of this recursion is given at the end of this subsection. In this example, the true value of the tail probability for different values of n is calculated using the approximation of the gamma density function available in MATLAB. The variance reduction achieved by the SP-IS method over the BGL method is reported; this increases with increasing n. In addition, we note that the computation time per sample for the BGL method increases with n, whereas it remains constant for the SP-IS method.

Table 2 (columns: n, N, true value, exact asymptotic c_n, CoV for BGL and for SP-IS, variance reduction (VR), and computation time per sample (CT) for BGL and SP-IS; numerical entries not reproduced): The SP-IS method has a decreasing coefficient of variation (CoV) and provides increasing variance reduction (VR) over the BGL method. Computation time per sample (CT), reported in microseconds, increases with n for the BGL method whereas it remains constant for the SP-IS method.

Table 2 shows that the exact asymptotic c_n can differ significantly from the estimated value of the probability. As shown in [9], this difference can be far more significant in multi-dimension settings, thus emphasizing the need for simulation despite the existence of asymptotics for the rare quantities considered.
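The per-step recomputation in the BGL implementation described above can be sketched as follows for Exp(1) increments (this is our own reading of the description, not the authors' code; it keeps only the dynamic recomputation and the stopping rule, and omits the additional rule for switching to a fixed twist when the distance to the boundary is large).

import numpy as np

# An Exp(1) variable twisted by theta is exponential with mean 1/(1 - theta),
# so twisting to mean m uses theta = 1 - 1/m; the likelihood ratio factor for
# an observation x is f(x)/f_theta(x) = exp(-theta*x)/(1 - theta).
def bgl_estimate(n=50, x0=1.5, num_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        s, log_lr, twisting = 0.0, 0.0, True
        for k in range(n):
            if twisting and s < n * x0:
                m = (n * x0 - s) / (n - k)              # remaining mean requirement
                theta = 1.0 - 1.0 / m                   # twist so that the mean is m
                x = rng.exponential(scale=m)
                log_lr += -theta * x - np.log(1.0 - theta)
            else:
                twisting = False                        # boundary crossed: plain sampling
                x = rng.exponential(scale=1.0)
            s += x
        total += np.exp(log_lr) * (s >= n * x0)
    return total / num_samples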

6. Conclusions and Directions for Further Research

In this paper we considered the rare event problems of efficiently estimating the density function of the average of iid light-tailed random vectors evaluated away from their mean, and the tail probability that this average takes a large deviation. In a single dimension setting we also considered the estimation problem of the expected overshoot associated with a sum of iid random variables taking a large deviation. We used the well known saddle point representations for these performance measures and applied importance sampling to develop provably efficient unbiased estimation algorithms that significantly improve upon the performance of the existing algorithms in the literature and are simple to implement. Our key contribution was combining rare event simulation with the classical theory of saddle point based approximations for tail events. We hope that this approach spurs research towards efficient estimation of a much richer class of rare event problems where saddle point approximations are well known or are easily developed. Another direction that is important for further research involves relaxing Assumption 2 or 3 in our analysis. Then our IS estimators may not have asymptotically vanishing relative error but may have bounded relative error. This is illustrated through an example in [9].

References

[1] Asmussen, S. and Glynn, P. (2008). Stochastic Simulation: Algorithms and Analysis. Springer-Verlag, New York.

[2] Butler, R. W. (2007). Saddlepoint Approximations with Applications. Cambridge University Press, Cambridge.

[3] Blanchet, J., Leder, K. and Glynn, P. (2008). Strongly efficient algorithms for light-tailed random walks: an old folk song sung to a faster new tune. In Monte Carlo and Quasi-Monte Carlo Methods 2008, eds P. L'Ecuyer and A. B. Owen. Springer.

[4] Bucklew, J. (2004). An Introduction to Rare Event Simulation. Springer Series in Statistics.

[5] Bucklew, J. A., Ney, P. and Sadowsky, J. S. (1990). Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains. Journal of Applied Probability 27, No. 1.

[6] Daniels, H. E. (1954). Saddlepoint approximations in statistics. Annals of Mathematical Statistics 25, No. 4.

[7] Dembo, A. and Zeitouni, O. (1998). Large Deviations Techniques and Applications, 2nd ed. Springer, New York.

[8] Dey, S. and Juneja, S. (2011). Efficient estimation of density and probability of large deviations of sum of IID random variables. In Proceedings of the 2011 Winter Simulation Conference. IEEE.

[9] Dey, S., Juneja, S. and Agarwal, A. Efficient simulation of density and probability of large deviations of sum of random vectors using saddle point representations. arXiv preprint.

[10] Dieker, A. B. and Mandjes, M. (2005). On asymptotically efficient simulation of large deviation probabilities. Advances in Applied Probability 37, No. 2.

[11] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. John Wiley and Sons.

[12] Glasserman, P. and Juneja, S. (2008). Uniformly efficient importance sampling for the tail distribution of sums of random variables. Mathematics of Operations Research 33, No. 1.

[13] Glasserman, P. and Li, J. (2005). Importance sampling for portfolio credit risk. Management Science 51, No. 11.

[14] Glasserman, P. and Wang, Y. (1997). Counterexamples in importance sampling for large deviations probabilities. The Annals of Applied Probability 7, No. 3.

[15] Juneja, S. and Shahabuddin, P. (2006). Rare event simulation techniques. In Handbooks in Operations Research and Management Science 13: Simulation. Elsevier North-Holland, Amsterdam.

[16] Lugannani, R. and Rice, S. (1980). Saddle point approximation for the distribution of the sum of independent random variables. Advances in Applied Probability 12, No. 2.

[17] Jensen, J. L. (1995). Saddlepoint Approximations. Oxford University Press, Oxford.

[18] Ney, P. (1983). Dominating points and the asymptotics of large deviations for random walk on R^d. Annals of Probability 11, No. 1.

[19] Parekh, S. and Walrand, J. (1989). A quick simulation method for excessive backlogs in networks of queues. IEEE Transactions on Automatic Control 34, No. 1.

[20] Sadowsky, J. S. (1996). On Monte Carlo estimation of large deviations probabilities. The Annals of Applied Probability 6, No. 2.

[21] Sadowsky, J. S. and Bucklew, J. A. (1990). On large deviations theory and asymptotically efficient Monte Carlo estimation. IEEE Transactions on Information Theory 36, No. 3.


More information

Rare event simulation for the ruin problem with investments via importance sampling and duality

Rare event simulation for the ruin problem with investments via importance sampling and duality Rare event simulation for the ruin problem with investments via importance sampling and duality Jerey Collamore University of Copenhagen Joint work with Anand Vidyashankar (GMU) and Guoqing Diao (GMU).

More information

The Central Limit Theorem: More of the Story

The Central Limit Theorem: More of the Story The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit

More information

E cient Monte Carlo for Gaussian Fields and Processes

E cient Monte Carlo for Gaussian Fields and Processes E cient Monte Carlo for Gaussian Fields and Processes Jose Blanchet (with R. Adler, J. C. Liu, and C. Li) Columbia University Nov, 2010 Jose Blanchet (Columbia) Monte Carlo for Gaussian Fields Nov, 2010

More information

Some Aspects of Universal Portfolio

Some Aspects of Universal Portfolio 1 Some Aspects of Universal Portfolio Tomoyuki Ichiba (UC Santa Barbara) joint work with Marcel Brod (ETH Zurich) Conference on Stochastic Asymptotics & Applications Sixth Western Conference on Mathematical

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

EXACT SAMPLING OF THE INFINITE HORIZON MAXIMUM OF A RANDOM WALK OVER A NON-LINEAR BOUNDARY

EXACT SAMPLING OF THE INFINITE HORIZON MAXIMUM OF A RANDOM WALK OVER A NON-LINEAR BOUNDARY Applied Probability Trust EXACT SAMPLING OF THE INFINITE HORIZON MAXIMUM OF A RANDOM WALK OVER A NON-LINEAR BOUNDARY JOSE BLANCHET, Columbia University JING DONG, Northwestern University ZHIPENG LIU, Columbia

More information

Orthonormal polynomial expansions and lognormal sum densities

Orthonormal polynomial expansions and lognormal sum densities 1/28 Orthonormal polynomial expansions and lognormal sum densities Pierre-Olivier Goffard Université Libre de Bruxelles pierre-olivier.goffard@ulb.ac.be February 22, 2016 2/28 Introduction Motivations

More information

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications.

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications. Stat 811 Lecture Notes The Wald Consistency Theorem Charles J. Geyer April 9, 01 1 Analyticity Assumptions Let { f θ : θ Θ } be a family of subprobability densities 1 with respect to a measure µ on a measurable

More information

Stability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan

Stability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan Stability and Sensitivity of the Capacity in Continuous Channels Malcolm Egan Univ. Lyon, INSA Lyon, INRIA 2019 European School of Information Theory April 18, 2019 1 / 40 Capacity of Additive Noise Models

More information

Introduction to Algorithmic Trading Strategies Lecture 10

Introduction to Algorithmic Trading Strategies Lecture 10 Introduction to Algorithmic Trading Strategies Lecture 10 Risk Management Haksun Li haksun.li@numericalmethod.com www.numericalmethod.com Outline Value at Risk (VaR) Extreme Value Theory (EVT) References

More information

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling 1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Efficient Rare-event Simulation for Perpetuities

Efficient Rare-event Simulation for Perpetuities Efficient Rare-event Simulation for Perpetuities Blanchet, J., Lam, H., and Zwart, B. We consider perpetuities of the form Abstract D = B 1 exp Y 1 ) + B 2 exp Y 1 + Y 2 ) +..., where the Y j s and B j

More information

State-dependent Importance Sampling for Rare-event Simulation: An Overview and Recent Advances

State-dependent Importance Sampling for Rare-event Simulation: An Overview and Recent Advances State-dependent Importance Sampling for Rare-event Simulation: An Overview and Recent Advances By Jose Blanchet and Henry Lam Columbia University and Boston University February 7, 2011 Abstract This paper

More information

A Detailed Look at a Discrete Randomw Walk with Spatially Dependent Moments and Its Continuum Limit

A Detailed Look at a Discrete Randomw Walk with Spatially Dependent Moments and Its Continuum Limit A Detailed Look at a Discrete Randomw Walk with Spatially Dependent Moments and Its Continuum Limit David Vener Department of Mathematics, MIT May 5, 3 Introduction In 8.366, we discussed the relationship

More information

A NEW NONLINEAR FILTER

A NEW NONLINEAR FILTER COMMUNICATIONS IN INFORMATION AND SYSTEMS c 006 International Press Vol 6, No 3, pp 03-0, 006 004 A NEW NONLINEAR FILTER ROBERT J ELLIOTT AND SIMON HAYKIN Abstract A discrete time filter is constructed

More information

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary

More information

Trace Class Operators and Lidskii s Theorem

Trace Class Operators and Lidskii s Theorem Trace Class Operators and Lidskii s Theorem Tom Phelan Semester 2 2009 1 Introduction The purpose of this paper is to provide the reader with a self-contained derivation of the celebrated Lidskii Trace

More information

Convergence of generalized entropy minimizers in sequences of convex problems

Convergence of generalized entropy minimizers in sequences of convex problems Proceedings IEEE ISIT 206, Barcelona, Spain, 2609 263 Convergence of generalized entropy minimizers in sequences of convex problems Imre Csiszár A Rényi Institute of Mathematics Hungarian Academy of Sciences

More information

Chapter 4: Asymptotic Properties of the MLE

Chapter 4: Asymptotic Properties of the MLE Chapter 4: Asymptotic Properties of the MLE Daniel O. Scharfstein 09/19/13 1 / 1 Maximum Likelihood Maximum likelihood is the most powerful tool for estimation. In this part of the course, we will consider

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

PLEASE SCROLL DOWN FOR ARTICLE. Full terms and conditions of use:

PLEASE SCROLL DOWN FOR ARTICLE. Full terms and conditions of use: This article was downloaded by: [Stanford University] On: 20 July 2010 Access details: Access Details: [subscription number 917395611] Publisher Taylor & Francis Informa Ltd Registered in England and Wales

More information

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES RUTH J. WILLIAMS October 2, 2017 Department of Mathematics, University of California, San Diego, 9500 Gilman Drive,

More information

The square root rule for adaptive importance sampling

The square root rule for adaptive importance sampling The square root rule for adaptive importance sampling Art B. Owen Stanford University Yi Zhou January 2019 Abstract In adaptive importance sampling, and other contexts, we have unbiased and uncorrelated

More information

Lecture I: Asymptotics for large GUE random matrices

Lecture I: Asymptotics for large GUE random matrices Lecture I: Asymptotics for large GUE random matrices Steen Thorbjørnsen, University of Aarhus andom Matrices Definition. Let (Ω, F, P) be a probability space and let n be a positive integer. Then a random

More information

Complex Analysis, Stein and Shakarchi Meromorphic Functions and the Logarithm

Complex Analysis, Stein and Shakarchi Meromorphic Functions and the Logarithm Complex Analysis, Stein and Shakarchi Chapter 3 Meromorphic Functions and the Logarithm Yung-Hsiang Huang 217.11.5 Exercises 1. From the identity sin πz = eiπz e iπz 2i, it s easy to show its zeros are

More information

A Primer on Asymptotics

A Primer on Asymptotics A Primer on Asymptotics Eric Zivot Department of Economics University of Washington September 30, 2003 Revised: October 7, 2009 Introduction The two main concepts in asymptotic theory covered in these

More information

On Reparametrization and the Gibbs Sampler

On Reparametrization and the Gibbs Sampler On Reparametrization and the Gibbs Sampler Jorge Carlos Román Department of Mathematics Vanderbilt University James P. Hobert Department of Statistics University of Florida March 2014 Brett Presnell Department

More information

Review and continuation from last week Properties of MLEs

Review and continuation from last week Properties of MLEs Review and continuation from last week Properties of MLEs As we have mentioned, MLEs have a nice intuitive property, and as we have seen, they have a certain equivariance property. We will see later that

More information

Approximate Dynamic Programming

Approximate Dynamic Programming Master MVA: Reinforcement Learning Lecture: 5 Approximate Dynamic Programming Lecturer: Alessandro Lazaric http://researchers.lille.inria.fr/ lazaric/webpage/teaching.html Objectives of the lecture 1.

More information

A TEST OF FIT FOR THE GENERALIZED PARETO DISTRIBUTION BASED ON TRANSFORMS

A TEST OF FIT FOR THE GENERALIZED PARETO DISTRIBUTION BASED ON TRANSFORMS A TEST OF FIT FOR THE GENERALIZED PARETO DISTRIBUTION BASED ON TRANSFORMS Dimitrios Konstantinides, Simos G. Meintanis Department of Statistics and Acturial Science, University of the Aegean, Karlovassi,

More information

Lecture 17: Density Estimation Lecturer: Yihong Wu Scribe: Jiaqi Mu, Mar 31, 2016 [Ed. Apr 1]

Lecture 17: Density Estimation Lecturer: Yihong Wu Scribe: Jiaqi Mu, Mar 31, 2016 [Ed. Apr 1] ECE598: Information-theoretic methods in high-dimensional statistics Spring 06 Lecture 7: Density Estimation Lecturer: Yihong Wu Scribe: Jiaqi Mu, Mar 3, 06 [Ed. Apr ] In last lecture, we studied the minimax

More information

An exponential family of distributions is a parametric statistical model having densities with respect to some positive measure λ of the form.

An exponential family of distributions is a parametric statistical model having densities with respect to some positive measure λ of the form. Stat 8112 Lecture Notes Asymptotics of Exponential Families Charles J. Geyer January 23, 2013 1 Exponential Families An exponential family of distributions is a parametric statistical model having densities

More information

Semiparametric posterior limits

Semiparametric posterior limits Statistics Department, Seoul National University, Korea, 2012 Semiparametric posterior limits for regular and some irregular problems Bas Kleijn, KdV Institute, University of Amsterdam Based on collaborations

More information

Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices

Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices Article Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices Fei Jin 1,2 and Lung-fei Lee 3, * 1 School of Economics, Shanghai University of Finance and Economics,

More information

CSCI-6971 Lecture Notes: Monte Carlo integration

CSCI-6971 Lecture Notes: Monte Carlo integration CSCI-6971 Lecture otes: Monte Carlo integration Kristopher R. Beevers Department of Computer Science Rensselaer Polytechnic Institute beevek@cs.rpi.edu February 21, 2006 1 Overview Consider the following

More information

Converse Lyapunov theorem and Input-to-State Stability

Converse Lyapunov theorem and Input-to-State Stability Converse Lyapunov theorem and Input-to-State Stability April 6, 2014 1 Converse Lyapunov theorem In the previous lecture, we have discussed few examples of nonlinear control systems and stability concepts

More information

Explicit Bounds for the Distribution Function of the Sum of Dependent Normally Distributed Random Variables

Explicit Bounds for the Distribution Function of the Sum of Dependent Normally Distributed Random Variables Explicit Bounds for the Distribution Function of the Sum of Dependent Normally Distributed Random Variables Walter Schneider July 26, 20 Abstract In this paper an analytic expression is given for the bounds

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM Solutions to Question A1 a) The marginal cdfs of F X,Y (x, y) = [1 + exp( x) + exp( y) + (1 α) exp( x y)] 1 are F X (x) = F X,Y (x, ) = [1

More information

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions

SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Preliminary Exam 2016 Solutions to Morning Exam

Preliminary Exam 2016 Solutions to Morning Exam Preliminary Exam 16 Solutions to Morning Exam Part I. Solve four of the following five problems. Problem 1. Find the volume of the ice cream cone defined by the inequalities x + y + z 1 and x + y z /3

More information

Multivariate Normal-Laplace Distribution and Processes

Multivariate Normal-Laplace Distribution and Processes CHAPTER 4 Multivariate Normal-Laplace Distribution and Processes The normal-laplace distribution, which results from the convolution of independent normal and Laplace random variables is introduced by

More information

Lecture 4: Exponential family of distributions and generalized linear model (GLM) (Draft: version 0.9.2)

Lecture 4: Exponential family of distributions and generalized linear model (GLM) (Draft: version 0.9.2) Lectures on Machine Learning (Fall 2017) Hyeong In Choi Seoul National University Lecture 4: Exponential family of distributions and generalized linear model (GLM) (Draft: version 0.9.2) Topics to be covered:

More information

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics Chapter 6 Order Statistics and Quantiles 61 Extreme Order Statistics Suppose we have a finite sample X 1,, X n Conditional on this sample, we define the values X 1),, X n) to be a permutation of X 1,,

More information

Experience Rating in General Insurance by Credibility Estimation

Experience Rating in General Insurance by Credibility Estimation Experience Rating in General Insurance by Credibility Estimation Xian Zhou Department of Applied Finance and Actuarial Studies Macquarie University, Sydney, Australia Abstract This work presents a new

More information

Brownian Bridge and Self-Avoiding Random Walk.

Brownian Bridge and Self-Avoiding Random Walk. Brownian Bridge and Self-Avoiding Random Walk. arxiv:math/02050v [math.pr] 9 May 2002 Yevgeniy Kovchegov Email: yevgeniy@math.stanford.edu Fax: -650-725-4066 November 2, 208 Abstract We derive the Brownian

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

A polynomial expansion to approximate ruin probabilities

A polynomial expansion to approximate ruin probabilities A polynomial expansion to approximate ruin probabilities P.O. Goffard 1 X. Guerrault 2 S. Loisel 3 D. Pommerêt 4 1 Axa France - Institut de mathématiques de Luminy Université de Aix-Marseille 2 Axa France

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information