Orthonormal polynomial expansions and lognormal sum densities

1/28 Orthonormal polynomial expansions and lognormal sum densities Pierre-Olivier Goffard Université Libre de Bruxelles pierre-olivier.goffard@ulb.ac.be February 22, 2016

2/28 Goal: Recover the PDF of the sum of n lognormally distributed random variables.
Dependence: the lognormals in the sum may be correlated.
Extreme value: the lognormal distribution is heavy-tailed.
Actuarial science: modeling of the total claim amounts of non-life insurance portfolios.

3/28 Further applications...
Finance: in the Black-Scholes model, security prices are lognormally distributed; the value of a portfolio is then distributed as a sum of lognormally distributed random variables; pricing of Asian options, whose payoff is determined by the average of the underlying price.
Telecommunications: the inverse of the signal-to-noise ratio can be modeled as a sum of i.i.d. lognormals.

4/28 Definition: The random variable defined as
$$S = e^{X_1} + \dots + e^{X_n}, \qquad (X_1, \dots, X_n) \sim \mathcal{N}(\mu, \Sigma),$$
admits a sum-of-lognormals distribution $\mathrm{SLN}(\mu, \Sigma)$. The PDF is given by $f_S(x) = \;???$
The PDF is unknown, even in the case of the sum of two i.i.d. lognormals.
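Since no closed form for $f_S$ is available, simulation is the natural baseline. Below is a minimal sampling sketch (the function name `sample_sln` and its signature are mine, not from the talk); samples like these underpin the Monte Carlo steps used later in the deck.

```python
import numpy as np

def sample_sln(mu, Sigma, size, rng=None):
    """Draw samples of S = e^{X_1} + ... + e^{X_n}, with (X_1, ..., X_n) ~ N(mu, Sigma)."""
    rng = np.random.default_rng(rng)
    X = rng.multivariate_normal(mu, Sigma, size=size)   # correlated Gaussian exponents
    return np.exp(X).sum(axis=1)                         # S ~ SLN(mu, Sigma)
```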

5/28 Let X be a random variable governed by $P_X$, with unknown PDF $f_X$.
I. Pick a reference probability measure $\nu$ such that $P_X$ is absolutely continuous w.r.t. $\nu$; $f_\nu$ serves as a first approximation of $f_X$.
II. $\{Q_k\}_{k\in\mathbb{N}}$: a sequence of orthonormal polynomials w.r.t. $\nu$, obtained by the Gram-Schmidt orthogonalization procedure.
III. Orthogonal projection of $f_X/f_\nu$ onto $\{Q_k\}$: adjustment of the first approximation.

6/28 $\{Q_k\}_{k\in\mathbb{N}}$ must be complete in a set of functions to which $f_X/f_\nu$ belongs.
The $L^2(\nu)$ space: the set of square-integrable functions with respect to $\nu$, with inner product
$$\langle f, g \rangle = \int f(x)\, g(x)\, d\nu(x), \qquad f, g \in L^2(\nu).$$
$\{Q_k\}_{k\in\mathbb{N}}$ is a sequence of orthonormal polynomials w.r.t. $\nu$ in the sense that
$$\langle Q_k, Q_l \rangle = \int Q_k(x)\, Q_l(x)\, d\nu(x) = \delta_{kl}.$$

7/28 Sufficient condition: If $\int e^{\alpha |x|}\, d\nu(x) < +\infty$ for some $\alpha > 0$, then $\{Q_k\}_{k\in\mathbb{N}}$ is complete in $L^2(\nu)$.
Orthonormal polynomial expansion: If $f_X/f_\nu \in L^2(\nu)$, then
$$f_X(x) = \sum_{k=0}^{+\infty} a_k Q_k(x) f_\nu(x), \qquad a_k = \left\langle Q_k, \frac{f_X}{f_\nu} \right\rangle = \int Q_k(x)\, \frac{f_X(x)}{f_\nu(x)}\, d\nu(x) = \mathbb{E}[Q_k(X)].$$

8/28 Orthonormal polynomial approximation: the approximation follows from simple truncation,
$$\hat{f}_X^{K}(x) = \sum_{k=0}^{K} a_k Q_k(x) f_\nu(x).$$
The accuracy relies on the decay of the $a_k$'s.
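As a rough illustration of the recipe on slides 5-8, the truncated expansion can be coded generically once a reference density and its orthonormal polynomials are available. The sketch below is mine (names and interface are assumptions, not from the talk); each $a_k = \mathbb{E}[Q_k(X)]$ is replaced by a sample mean.

```python
import numpy as np

def expansion_density(x, X_samples, Q, f_ref, K):
    """Truncated expansion  f_X^K(x) = sum_{k=0}^K a_k Q_k(x) f_ref(x).
    Q(k, x) evaluates the k-th orthonormal polynomial w.r.t. the reference measure,
    f_ref(x) its density; a_k = E[Q_k(X)] is estimated by the mean over X_samples."""
    a = [np.mean(Q(k, X_samples)) for k in range(K + 1)]      # estimated coefficients
    return sum(a[k] * Q(k, x) for k in range(K + 1)) * f_ref(x)
```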

9/28 Lognormal distribution as reference:
$$f_\nu(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\left\{-\frac{(\ln(x)-\mu)^2}{2\sigma^2}\right\}, \qquad x > 0.$$
Associated orthonormal polynomials:
$$Q_k(x) = \frac{e^{-k^2\sigma^2/2}}{\sqrt{[e^{-\sigma^2}, e^{-\sigma^2}]_k}} \sum_{i=0}^{k} (-1)^{k+i}\, e^{-i\mu - i^2\sigma^2/2}\; e_{k-i}\!\left(1, e^{\sigma^2}, \dots, e^{(k-1)\sigma^2}\right) x^i, \qquad k \in \mathbb{N},$$
where $e_i(X_1, \dots, X_k)$ are the elementary symmetric polynomials and $[x, q]_n = \prod_{i=0}^{n-1}\left(1 - x q^i\right)$ is the q-Pochhammer symbol.

10/28 Let $S$ be $\mathrm{SLN}(\mu, \Sigma)$-distributed.
Integrability condition: If $f_S(x) = O\!\left(e^{-b \ln^2 x}\right)$ for $x \to 0$ and $x \to \infty$, where $b > (4\sigma^2)^{-1}$, then $f_S/f_\nu \in L^2(\nu)$.
Asymptotics for $f_S$:
$$f_S(x) = O\!\left(\exp\{-c_1 \ln^2(x)\}\right) \text{ as } x \to 0, \qquad c_1 = \left[2 \min_{w} w^T \Sigma^{-1} w\right]^{-1},$$
see Gulisashvili and Tankov [GT15], and
$$f_S(x) = O\!\left(\exp\{-c_2 \ln^2(x)\}\right) \text{ as } x \to \infty, \qquad c_2 = \left[2 \max_{i=1,\dots,n} \Sigma_{ii}\right]^{-1},$$
see Asmussen and Rojas-Nandayapa [ARN08].

11/28 The integrability condition is satisfied as long as $\sigma^2 > \max_i \Sigma_{ii}/2$.
PB1: Is $\{Q_k\}_{k\in\mathbb{N}}$ complete in $L^2(\nu)$? Proposition: No, it is not!
PB2: The lognormal distribution is not characterized by its moments, and neither is the sum-of-lognormals distribution.

12/28 Fig. 1: Orthogonal polynomial approximations ($K = 0, 1, 40$) of $\mathrm{LN}(0, 1.50^2)$ using a $\mathrm{LN}(0, 1.22^2)$ reference, plotted against the target density $f$.

13/28 Normal distribution $\mathcal{N}(\mu, \sigma)$ as reference:
$$f_\nu(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left\{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right\}, \qquad x \in \mathbb{R}.$$
Associated orthonormal polynomials:
$$Q_k(x) = \frac{1}{\sqrt{k!}\; 2^{k/2}}\, H_k\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right),$$
where $\{H_k\}_{k\in\mathbb{N}}$ are the Hermite polynomials, see Szegö [Sze39].

14/28 $f_S/f_\nu \notin L^2(\nu)$. We therefore consider the transformation $Z = \ln(S)$.
Integrability condition: $f_Z(x) = O\!\left(e^{-a x^2}\right)$ as $x \to \pm\infty$, where $a > (4\sigma^2)^{-1}$.
Asymptotics for $f_Z$: $f_Z(z) = O\!\left(\exp\{-c_1 z^2\}\right)$ as $z \to -\infty$ and $f_Z(z) = O\!\left(\exp\{-c_2 z^2\}\right)$ as $z \to +\infty$; an extension of the work of Gao et al. [GXY09].

15/28 The integrability condition is satisfied as long as $\sigma^2 > \max_i \Sigma_{ii}/2$. $f_Z$ is approximated by
$$\hat{f}_Z(z) = \sum_{k=0}^{K} a_k Q_k(z) f_\nu(z),$$
where the coefficients $a_k = \mathbb{E}\{Q_k[\ln(S)]\}$ are evaluated using crude Monte Carlo for $k = 1, \dots, K$. $f_S$ is then approximated by
$$\hat{f}_{S,\mathcal{N}}(x) = \frac{1}{x}\, \hat{f}_Z(\ln(x)).$$
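A hedged end-to-end sketch of this normal-reference estimator follows. The function and parameter names are mine; $K = 32$ and $10^5$ replications echo the settings quoted later in the deck, and the probabilists' Hermite polynomials $He_k((z-m)/s)/\sqrt{k!}$ used here are the same orthonormal family as the $H_k$ form on slide 13.

```python
import numpy as np
from scipy.special import eval_hermitenorm, factorial
from scipy.stats import norm

def lognormal_sum_density_normal_ref(x, mu, Sigma, K=32, n_mc=10**5, rng=None):
    """Normal-reference estimator: expand f_Z, Z = ln(S), in orthonormal Hermite
    polynomials, estimate a_k = E[Q_k(ln S)] by crude Monte Carlo, map back to f_S."""
    rng = np.random.default_rng(rng)
    X = rng.multivariate_normal(mu, Sigma, size=n_mc)
    Z = np.log(np.exp(X).sum(axis=1))              # Z = ln(S)
    m, s = Z.mean(), Z.std()                       # reference N(m, s^2), as on slide 20

    def Q(k, z):                                   # orthonormal polynomial w.r.t. N(m, s^2)
        return eval_hermitenorm(k, (z - m) / s) / np.sqrt(factorial(k))

    a = [Q(k, Z).mean() for k in range(K + 1)]     # a_k = E[Q_k(ln S)] by crude Monte Carlo
    x = np.asarray(x, dtype=float)
    z = np.log(x)
    f_Z_hat = sum(a[k] * Q(k, z) for k in range(K + 1)) * norm.pdf(z, m, s)
    return f_Z_hat / x                             # f_S(x) = f_Z(ln x) / x
```

For large $K$ the Hermite values and factorials become numerically delicate, so a modest truncation order is safest in this naive form.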

16/28 Gamma distribution $\Gamma(m, r)$ as reference:
$$f_\nu(x) = \frac{e^{-x/m}\, x^{r-1}}{\Gamma(r)\, m^r}, \qquad x > 0.$$
Associated orthonormal polynomials:
$$Q_n(x) = (-1)^n \left[\frac{\Gamma(n+r)}{\Gamma(n+1)\,\Gamma(r)}\right]^{-1/2} L_n^{r-1}(x/m),$$
where $\{L_n^{r-1}\}_{n\in\mathbb{N}}$ are the generalized Laguerre polynomials defined in Szegö [Sze39].

17/28 $f_S/f_\nu \notin L^2(\nu)$. We therefore consider the PDF of the exponentially tilted distribution of $S$,
$$f_\theta(x) = \frac{e^{-\theta x} f_S(x)}{\mathcal{L}(\theta)}, \qquad \text{where } \mathcal{L}(\theta) = \mathbb{E}\left(e^{-\theta S}\right).$$
Integrability condition: If $f_\theta(x) = O\!\left(e^{-\delta x}\right)$ as $x \to +\infty$, where $\delta > \frac{1}{2m}$, and $f_\theta(x) = O\!\left(x^{\beta}\right)$ as $x \to 0$, where $\beta > r/2 - 1$, then $f_\theta/f_\nu \in L^2(\nu)$.

18/28 The integrability condition is satisfied as long as $m > \frac{1}{2\theta}$. $f_\theta$ is approximated by
$$\hat{f}_\theta(x) = \sum_{k=0}^{K} a_k Q_k(x) f_\nu(x),$$
where the coefficients $a_k = \mathbb{E}[Q_k(S_\theta)]$ with $S_\theta \sim f_\theta$. $f_S$ is then approximated by
$$\hat{f}_{S,\Gamma}(x) = \mathcal{L}(\theta)\, e^{\theta x}\, \hat{f}_\theta(x).$$

19/28 Regarding the evaluation of the $a_k$'s, we have
$$a_k = \mathbb{E}[Q_k(S_\theta)] = q_{k0} + q_{k1}\mathbb{E}(S_\theta) + \dots + q_{kk}\mathbb{E}(S_\theta^k),$$
where the $q_{ki}$'s are the coefficients of $Q_k$, and
$$\mathbb{E}(S_\theta^i) = \frac{\mathbb{E}\left(S^i e^{-\theta S}\right)}{\mathcal{L}(\theta)} = \frac{(-1)^i \mathcal{L}^{(i)}(\theta)}{\mathcal{L}(\theta)},$$
where $\mathcal{L}^{(i)}(\theta)$ denotes the $i$-th derivative of $\mathcal{L}(\theta)$. We adapt techniques developed in Asmussen et al. [AJRN14, LAJRN16] to access $\mathcal{L}^{(i)}(\theta)$.
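The talk's route to the $a_k$'s goes through derivatives of $\mathcal{L}(\theta)$ via [AJRN14, LAJRN16]. As a simpler stand-in (not the paper's method), the sketch below estimates the tilted moments $\mathbb{E}(S_\theta^i) = \mathbb{E}(S^i e^{-\theta S})/\mathcal{L}(\theta)$ by crude Monte Carlo on samples of $S$ and assembles $\hat{f}_{S,\Gamma}$ from generalized Laguerre coefficients; the naming and interface are my own assumptions.

```python
import numpy as np
from scipy.special import genlaguerre, gammaln
from scipy.stats import gamma as gamma_dist

def lognormal_sum_density_gamma_ref(x, S_samples, theta=1.0, K=16):
    """Gamma-reference estimator with tilted moments estimated by crude Monte Carlo
    (a stand-in for the Laplace-transform evaluation of the derivatives of L(theta))."""
    w = np.exp(-theta * S_samples)                       # e^{-theta S}
    L = w.mean()                                         # L(theta) = E[e^{-theta S}]
    mom = np.array([(S_samples**i * w).mean() / L for i in range(K + 1)])  # E[S_theta^i]

    # Reference Gamma(m, r): match the first two moments of the tilted distribution.
    mean_t, var_t = mom[1], mom[2] - mom[1]**2
    m, r = var_t / mean_t, mean_t**2 / var_t             # scale m, shape r

    x = np.asarray(x, dtype=float)
    f_nu = gamma_dist(a=r, scale=m).pdf(x)
    approx = np.zeros_like(x)
    for k in range(K + 1):
        # Q_k(x) = (-1)^k [Gamma(k+1)Gamma(r)/Gamma(k+r)]^{1/2} L_k^{r-1}(x/m)
        norm_k = np.exp(0.5 * (gammaln(k + 1) + gammaln(r) - gammaln(k + r)))
        poly = genlaguerre(k, r - 1)                     # generalized Laguerre L_k^{r-1}
        q = (-1)**k * norm_k * poly.coeffs[::-1]         # coefficients of Q_k in powers of (x/m)
        a_k = np.sum(q * mom[:k + 1] / m**np.arange(k + 1))   # a_k = E[Q_k(S_theta)]
        approx += a_k * (-1)**k * norm_k * poly(x / m)
    return L * np.exp(theta * x) * approx * f_nu         # f_S(x) ~ L(theta) e^{theta x} f_theta(x)
```

Monte Carlo estimates of high-order tilted moments are noisy, so a small $K$ is advisable with this naive substitute.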

20/28 Polynomial Approximation Settings
Normal distribution as reference: $\mu = \mathbb{E}(Z)$ and $\sigma^2 = \mathbb{V}(Z)$; $10^5$ replications for the CMC procedure.
Gamma distribution as reference: $\theta = 1$; moment matching of order 2 between $f_\theta$ and $f_\nu$ to get $m$ and $r$.

21/28 Some challengers:
The Fenton-Wilkinson approximation $\hat{f}_{\mathrm{FW}}$, see [Fen60];
the log skew normal approximation $\hat{f}_{\mathrm{Sk}}$, see [HB15];
the conditional Monte Carlo approximation $\hat{f}_{\mathrm{Cond}}$, cf. Example 4.3 on p. 146 of [AG07].
A benchmark is obtained through numerical integration. $f_S(x)$ and $\hat{f}_S(x)$, together with $\hat{f}_S(x) - f_S(x)$, are plotted over $x \in [0, 2\mathbb{E}(S)]$; the $L^2$ errors are computed over $[0, \mathbb{E}(S)]$.
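For context, the first challenger is simple enough to sketch directly: Fenton-Wilkinson fits a single lognormal to $S$ by matching its mean and variance (standard moment formulas for correlated lognormals; the function name is mine).

```python
import numpy as np
from scipy.stats import lognorm

def fenton_wilkinson_pdf(x, mu, Sigma):
    """Fenton-Wilkinson approximation [Fen60]: one lognormal matching E[S] and Var[S]."""
    mu, Sigma = np.asarray(mu, dtype=float), np.asarray(Sigma, dtype=float)
    means = np.exp(mu + np.diag(Sigma) / 2)               # E[e^{X_i}]
    cov = np.outer(means, means) * (np.exp(Sigma) - 1.0)  # Cov(e^{X_i}, e^{X_j})
    ES, VarS = means.sum(), cov.sum()
    s2 = np.log(1.0 + VarS / ES**2)                       # matched log-variance
    m = np.log(ES) - s2 / 2                               # matched log-mean
    return lognorm(s=np.sqrt(s2), scale=np.exp(m)).pdf(x)
```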

22/28 Test 1: $\mu = (0, 0)$, $\mathrm{diag}(\Sigma) = (0.5, 1)$, $\rho = 0.2$. Reference distributions used are $\mathcal{N}(0.88, 0.71^2)$ and $\mathrm{Gamma}(2.43, 0.51)$ with $K = 32$. [Plots of $\hat{f}_{\mathrm{FW}}$, $\hat{f}_{\mathrm{Sk}}$, $\hat{f}_{\mathrm{Cond}}$, $\hat{f}_{\mathcal{N}}$, $\hat{f}_{\Gamma}$ against $f$, and of the approximation errors.]
$L^2$ errors: $\hat{f}_{\mathrm{FW}}$: $8.01 \times 10^{-2}$, $\hat{f}_{\mathrm{Sk}}$: $4.00 \times 10^{-2}$, $\hat{f}_{\mathrm{Cond}}$: $1.56 \times 10^{-3}$, $\hat{f}_{\mathcal{N}}$: $1.94 \times 10^{-3}$, $\hat{f}_{\Gamma}$: $2.28 \times 10^{-3}$.

23/28 Test 2: $n = 4$, $\mu_i = 0$, $\Sigma_{ii} = 1$, $\rho = 0.1$. Reference distributions used are $\mathcal{N}(1.32, 0.74^2)$ and $\mathrm{Gamma}(3.37, 0.51)$ with $K = 32$. [Plots of the five approximations against $f$, and of the approximation errors.]
$L^2$ errors: $\hat{f}_{\mathrm{FW}}$: $1.82 \times 10^{-2}$, $\hat{f}_{\mathrm{Sk}}$: $6.60 \times 10^{-3}$, $\hat{f}_{\mathrm{Cond}}$: $1.90 \times 10^{-3}$, $\hat{f}_{\mathcal{N}}$: $1.80 \times 10^{-3}$, $\hat{f}_{\Gamma}$: $1.77 \times 10^{-4}$.

24/28 Test 6: Sum of 3 $\mathrm{LN}(0, 1)$ r.v.s with a Clayton copula $C^{\mathrm{Cl}}_{10}$ (i.e., $\tau = 5/6$). Reference distributions used are $\mathcal{N}(1.46, 0.71^2)$ and $\mathrm{Gamma}(8.78, 0.25)$ with $K = 40$. The $L^2$ errors of $\hat{f}_{\mathcal{N}}$ and $\hat{f}_{\Gamma}$ are $2.45 \times 10^{-3}$ and $2.04 \times 10^{-3}$ respectively. [Plots of $\hat{f}_{\mathcal{N}}$ and $\hat{f}_{\Gamma}$ against $f$, and of the approximation errors.]

25/28 The orthonormal polynomial approximation is an efficient numerical method: accurate and easy to code. None of the methods tested here appears to be universally superior to the others. Good behavior in the presence of a non-Gaussian dependence structure: cf. the test with the Clayton copula. For more details, the paper is available on arXiv [AGL16].

26/28 Søren Asmussen and Peter W. Glynn, Stochastic Simulation: Algorithms and Analysis, Stochastic Modelling and Applied Probability, vol. 57, Springer, 2007.
S. Asmussen, P.-O. Goffard, and P. J. Laub, Orthonormal polynomial expansions and lognormal sum densities, arXiv preprint arXiv:1601.01763 (2016).
S. Asmussen, J. L. Jensen, and L. Rojas-Nandayapa, On the Laplace transform of the lognormal distribution, Methodology and Computing in Applied Probability (2014), 1-18.
Søren Asmussen and Leonardo Rojas-Nandayapa, Asymptotics of sums of lognormal random variables with Gaussian copula, Statistics & Probability Letters 78 (2008), no. 16, 2709-2714.

27/28 Lawrence Fenton, The sum of log-normal probability distributions in scatter transmission systems, IRE Transactions on Communications Systems 8 (1960), no. 1, 57-67.
Archil Gulisashvili and Peter Tankov, Tail behavior of sums and differences of log-normal random variables, Bernoulli (2015), to appear; accessed online on 26 August 2015 at http://www.e-publications.org/ims/submission/bej/user/submissionfile/17119?confirm=ef609013.
Xin Gao, Hong Xu, and Dong Ye, Asymptotic behavior of tail density for sum of correlated lognormal variables, International Journal of Mathematics and Mathematical Sciences (2009), Volume 2009, Article ID 630857, 28 pages.
Marwane Ben Hcine and Ridha Bouallegue, Highly accurate log skew normal approximation to the sum of correlated lognormals, in Proc. of NeTCoM 2014 (2015).

28/28 Patrick J. Laub, Søren Asmussen, Jens L. Jensen, and Leonardo Rojas-Nandayapa, Approximating the Laplace transform of the sum of dependent lognormals, Advances in Applied Probability, Volume 48A (2016); in Festschrift for Nick Bingham (C. M. Goldie and A. Mijatovic, Eds.), Probability, Analysis and Number Theory; to appear.
G. Szegö, Orthogonal Polynomials, American Mathematical Society Colloquium Publications, vol. XXIII, 1939.

28/28 The elementary symmetric polynomials are defined as
$$e_i(X_1, \dots, X_k) = \begin{cases} \displaystyle\sum_{1 \le j_1 < \dots < j_i \le k} X_{j_1} \cdots X_{j_i}, & \text{for } i \le k, \\ 0, & \text{for } i > k, \end{cases}$$
and $[x, q]_n = \prod_{i=0}^{n-1}\left(1 - x q^i\right)$ is the q-Pochhammer symbol.

28/28 Log Skew Normal Approximation. $X$ is governed by a skew normal distribution $\mathcal{SN}(\lambda, \omega, \epsilon)$ if its PDF is given by
$$f_X(x) = \frac{2}{\omega}\, \phi\!\left(\frac{x-\epsilon}{\omega}\right) \Phi\!\left(\lambda\, \frac{x-\epsilon}{\omega}\right),$$
where $\phi$ and $\Phi$ are respectively the PDF and CDF of the standard normal distribution. $Y = e^X$ is then governed by a log skew normal distribution $\mathcal{LSN}(\lambda, \omega, \epsilon)$. The approximation matches the first two central moments with those of the sum-of-lognormals distribution, and matches the left-tail slope using the asymptotics derived in [GT15].

28/28 Conditional Monte Carlo Approximation. By definition, the conditional Monte Carlo approximation is the crude Monte Carlo estimator of the representation
$$f_S(x) = \mathbb{E}\left\{\mathbb{P}(S \in dx \mid Y)\right\}.$$
Recall that $S = e^{X_1} + \dots + e^{X_n}$, and set $Y = e^{X_1}$; then the conditional Monte Carlo estimator is written as
$$\hat{f}_{\mathrm{Cond}}(x) = \mathbb{E}\left\{ f_{e^{X_1}}\!\left[x - \left(S - e^{X_1}\right)\right] \right\},$$
where $f_{e^{X_1}}$ is the PDF of a lognormal distribution $\mathrm{LN}[\mathbb{E}(X_1), \mathbb{V}(X_1)]$.
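A minimal sketch of this estimator in the independent case, where the lognormal density of $e^{X_1}$ quoted above is exact; the function name and the i.i.d. restriction are my own assumptions.

```python
import numpy as np
from scipy.stats import lognorm

def conditional_mc_pdf(x, mu, sigma2, n, n_mc=10**5, rng=None):
    """Conditional Monte Carlo (cf. [AG07], Example 4.3) for a sum of n independent
    LN(mu, sigma2) terms: average the known density of one term at x minus the rest."""
    rng = np.random.default_rng(rng)
    rest = np.exp(rng.normal(mu, np.sqrt(sigma2), size=(n_mc, n - 1))).sum(axis=1)
    f1 = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))    # density of e^{X_1}
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([f1.pdf(xi - rest).mean() for xi in x])
```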

28/28 Approximation of $\mathcal{L}(\theta)$: details regarding the evaluation of the $a_k$'s in the gamma case. The Laplace transform of $S$ is given by
$$\mathcal{L}(\theta) = \mathbb{E}\left(e^{-\theta S}\right) = \frac{1}{\sqrt{(2\pi)^n |\Sigma|}} \int_{\mathbb{R}^n} \exp\left(-h_\theta(x)\right) dx, \qquad \text{where } h_\theta(x) = \theta\, (e^{\mu})^T e^{x} + \frac{1}{2}\, x^T D x,$$
with $D = \Sigma^{-1}$ (and $e^{\mu}$, $e^{x}$ denoting componentwise exponentials).

28/28 Replace $h_\theta$ by its second-order Taylor expansion around its unique minimizer $x^*$,
$$h_\theta(x) \approx \left(\tfrac{1}{2}\, x^* - \mathbf{1}\right)^T D\, x^* + \frac{1}{2}\left(x - x^*\right)^T (\Lambda + D)\left(x - x^*\right), \qquad \Lambda = \theta\, \mathrm{diag}\left(e^{\mu + x^*}\right),$$
to get the approximation $\tilde{\mathcal{L}}(\theta)$. Re-injecting the Taylor expansion into the initial Laplace transform integral yields a correction term,
$$\mathcal{L}(\theta) = \tilde{\mathcal{L}}(\theta)\, I(\theta), \qquad I(\theta) = \sqrt{\det\left(I + \Sigma\Lambda\right)}\; \mathbb{E}\left[v\!\left(\Sigma^{1/2} Z\right)\right],$$
with $v(u) = \exp\left\{(x^*)^T D\left(e^{u} - \mathbf{1} - u\right)\right\}$ and $Z \sim \mathcal{N}(0, I)$. More details can be found in [LAJRN16].
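A sketch of that recipe, assuming the reconstruction above is read correctly (the function name and the use of `scipy.optimize.minimize` are my own choices): minimize the convex $h_\theta$, form $\tilde{\mathcal{L}}(\theta) = e^{-h_\theta(x^*)}/\sqrt{\det(I+\Sigma\Lambda)}$, and estimate the correction $I(\theta)$ by plain Monte Carlo.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_transform_sln(theta, mu, Sigma, n_mc=10**5, rng=None):
    """Second-order (Gaussian) approximation of L(theta) = E[exp(-theta S)] for
    S ~ SLN(mu, Sigma), corrected by the Monte Carlo factor I(theta)."""
    rng = np.random.default_rng(rng)
    n = len(mu)
    D = np.linalg.inv(Sigma)

    def h(x):                                  # h_theta(x) = theta (e^mu)^T e^x + x^T D x / 2
        return theta * np.exp(mu + x).sum() + 0.5 * x @ D @ x

    x_star = minimize(h, np.zeros(n)).x        # unique minimizer of the convex h_theta
    Lam = theta * np.diag(np.exp(mu + x_star))                 # Lambda = theta diag(e^{mu + x*})
    det_factor = np.linalg.det(np.eye(n) + Sigma @ Lam)
    L_tilde = np.exp(-h(x_star)) / np.sqrt(det_factor)         # Gaussian approximation

    U = rng.standard_normal((n_mc, n)) @ np.linalg.cholesky(Sigma).T   # Sigma^{1/2} Z
    v = np.exp((np.exp(U) - 1.0 - U) @ (D @ x_star))           # v(u) = exp{(x*)^T D (e^u - 1 - u)}
    return L_tilde * np.sqrt(det_factor) * v.mean()            # L(theta) = L~(theta) I(theta)
```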