Bounds for Sums of Non-Independent Log-Elliptical Random Variables


Emiliano A. Valdez, The University of New South Wales
Jan Dhaene, Katholieke Universiteit Leuven

May 5, 2003

ABSTRACT. In this paper, we construct upper and lower bounds for the distribution of a sum of non-independent log-elliptical random variables. These bounds are applications of the ideas developed in Kaas, Dhaene & Goovaerts (2000). The class of multivariate log-elliptical random variables is an extension of the class of multivariate log-normal random variables. Hence, the results presented here are natural extensions of the results presented in Dhaene, Denuit, Goovaerts, Kaas & Vyncke (2002a, 2002b), where bounds for sums of log-normal random variables were derived. The upper bound is based on the idea of replacing the sum of log-elliptical random variables by a sum of random variables with the same marginals, but with a dependency structure described by the comonotonic copula. Lower bounds and improved upper bounds are constructed by including additional information about the dependency structure through a conditioning random variable.

1 Introduction

Sums of non-independent random variables occur in several situations in insurance and finance. As a first example, consider a portfolio of n insurance risks X_1, X_2, ..., X_n. The aggregate claims S is defined to be the sum of these individual risks:

    S = \sum_{k=1}^{n} X_k,    (1)

where generally the risks are non-negative random variables, i.e. X_k >= 0. Knowledge of the distribution of this sum provides essential information for the insurance company and can be used as an input in the calculation of premiums and reserves. A particularly important problem is the determination of stop-loss premiums of the aggregate claims S. Suppose that the insurance company agrees to enter into a stop-loss reinsurance contract where total

claims beyond a pre-specified amount d will be covered by the reinsurer. The stop-loss premium with retention d is defined as

    E[(S - d)_+] = \int_d^\infty \bar F_S(x) dx,    (2)

where \bar F_S(x) = 1 - F_S(x) = Pr(S > x) and (s - d)_+ = \max(s - d, 0).

In classical risk theory, the individual risks X_k are typically assumed to be mutually independent, mainly because computation of the aggregate claims becomes more tractable in this case. For special families of individual claim distributions, one may determine the exact form of the distribution of the aggregate claims. Several exact and approximate recursive methods have been proposed for computing the aggregate claims distribution in case of discrete marginal distributions, see e.g. Dhaene & De Pril (1994) and Dhaene & Vandebroek (1995). Approximating the aggregate claims distribution by a normal distribution with the same first and second moments is often not satisfactory for insurance practice, where the third central moment is often substantially different from 0. In this case, approximations based on a translated gamma distribution or the normal power approximation will perform better, see e.g. Kaas, Goovaerts, Dhaene & Denuit (2001). It is important to note that all standard actuarial methods mentioned above for determining the aggregate claims distribution are only applicable when the individual risks are assumed to be mutually independent. However, there are situations where the independence assumption is questionable, e.g. when the individual risks X_k are influenced by the same economic or physical environment.

In finance, a portfolio of n investment positions may be faced with potential losses L_1, L_2, ..., L_n over a given reference period, e.g. 1 month. The total potential loss L for this portfolio is then given by

    L = \sum_{k=1}^{n} L_k.    (3)

As the returns on the different investment positions will in general be non-independent, it is clear that L will be a sum of non-independent random variables. Quantities of interest are quantiles of the distribution of (3), which in finance are called Values-at-Risk. Regulatory bodies require financial institutions such as banks and investment firms to meet risk-based capital requirements for their portfolio holdings. These requirements are often expressed in terms of a Value-at-Risk or some other risk measure which depends on the distribution of the sum in (3).

A related problem is determining an investment portfolio's total rate of return. Suppose R_1, R_2, ..., R_n denote the random yearly rates of return of n different assets in a portfolio and suppose w_1, w_2, ..., w_n denote the weights in the portfolio. Then the portfolio's total yearly rate of return is given by

    R = \sum_{k=1}^{n} w_k R_k,    (4)

which is clearly a sum of non-independent random variables.

As pointed out by Dhaene et al. (2002a), a topic that is currently receiving increasing attention is the combination of actuarial and financial risks. To illustrate, consider random payments X_k to be made at times k for the next n periods. Further, suppose that the stochastic discount factor over

the period (0, k) is the random variable Y_k. Hence, an amount of 1 at time 0 is assumed to grow to a stochastic amount Y_k^{-1} at time k. The present value random variable S is defined as the scalar product of the payment vector (X_1, X_2, ..., X_n) and the discount vector (Y_1, Y_2, ..., Y_n):

    S = \sum_{k=1}^{n} X_k Y_k.    (5)

The present value quantity in (5) is of considerable importance for computing reserves and capital requirements for long term insurance business. The random variables X_k Y_k will be non-independent, not only because in any realistic model the discount factors will be rather strongly positively dependent, but also because the claim amounts X_k can often not be assumed to be mutually independent.

As illustrated by the examples above, it is important to be able to determine the distribution function of sums of random variables in the case that the individual random variables involved are not assumed to be mutually independent. In general, this task is difficult or even impossible to perform because the dependency structure is unknown or too cumbersome to work with. In this paper, we develop approximations for sums involving non-independent log-elliptical random variables. The results presented here are natural extensions of the ideas developed in Dhaene et al. (2002a, 2002b), where bounds for sums of dependent log-normal random variables were constructed. In Sections 2 and 3, we introduce spherical, elliptical and log-elliptical distributions, as in Fang, Kotz & Ng (1990). We develop and repeat some results that will be used in later sections. Most results proved elsewhere are simply stated, but some basic useful results are also proved. In Section 4, we summarize the ideas developed in Dhaene et al. (2002a, 2002b) regarding the construction of upper and lower bounds for sums of non-independent random variables. In Section 5, we determine these bounds in the case of functions of sums involving log-elliptical distributions. Section 6 concludes the paper.

2 Elliptical and Spherical Distributions

2.1 Definition of Elliptical Distributions

It is well-known that a random vector Y = (Y_1, ..., Y_n)^T is said to have an n-dimensional normal distribution if

    Y =^d μ + A Z,    (6)

where Z = (Z_1, ..., Z_m)^T is a random vector consisting of m mutually independent standard normal random variables, A is an n x m matrix, μ is an n x 1 vector and =^d stands for equality in distribution. Equivalently, one can say that Y is normal if its characteristic function is given by

    E[\exp(i t^T Y)] = \exp(i t^T μ) \exp(-(1/2) t^T Σ t),  t^T = (t_1, t_2, ..., t_n),    (7)

for some fixed vector μ (n x 1) and some fixed matrix Σ (n x n). For random vectors belonging to the class of multivariate normal distributions with parameters μ and Σ, we shall use the notation Y ~ N_n(μ, Σ). It is well-known that the vector μ is the mean vector and that the matrix Σ is the variance-covariance matrix. Note that the relation between Σ and A is given by Σ = A A^T.
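As a quick numerical illustration of the representation (6) and of the relation Σ = A A^T, the following sketch simulates Y = μ + A Z in the normal case and compares the sample mean and covariance with μ and A A^T. It is not part of the original paper; it assumes NumPy is available, and the particular μ and A are arbitrary choices made for illustration.

import numpy as np

rng = np.random.default_rng(seed=1)

# Arbitrary illustrative parameters (hypothetical, not from the paper)
mu = np.array([1.0, -0.5, 2.0])            # n x 1 location vector
A = np.array([[1.0, 0.0],
              [0.5, 1.2],
              [-0.3, 0.8]])                # n x m matrix
Sigma = A @ A.T                            # Sigma = A A^T

# Simulate Y = mu + A Z with Z a vector of m independent standard normals
m = A.shape[1]
Z = rng.standard_normal((100_000, m))      # rows are independent draws of Z
Y = mu + Z @ A.T                           # each row is one draw of Y

print("sample mean      :", Y.mean(axis=0))             # close to mu
print("sample covariance:\n", np.cov(Y, rowvar=False))  # close to Sigma
print("Sigma = A A^T    :\n", Sigma)

In the normal case the covariance matrix coincides with Σ; for other elliptical families it equals -2 φ'(0) Σ, as discussed in Section 2.2 below.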

The class of multivariate elliptical distributions is a natural extension of the class of multivariate normal distributions.

Definition 1. The random vector Y = (Y_1, ..., Y_n)^T is said to have an elliptical distribution with parameters the vector μ (n x 1) and the matrix Σ (n x n) if its characteristic function can be expressed as

    E[\exp(i t^T Y)] = \exp(i t^T μ) φ(t^T Σ t),  t^T = (t_1, t_2, ..., t_n),    (8)

for some scalar function φ, and where Σ is given by

    Σ = A A^T    (9)

for some matrix A (n x m). If Y has the elliptical distribution as defined above, we shall write Y ~ E_n(μ, Σ, φ) and say that Y is elliptical. The function φ is called the characteristic generator of Y. Hence, the characteristic generator of the multivariate normal distribution is given by φ(u) = \exp(-u/2).

It is well-known that the characteristic function of a random vector always exists and that there is a one-to-one correspondence between distribution functions and characteristic functions. Note however that not every function φ can be used to construct a characteristic function of an elliptical distribution. Obviously, this function φ should fulfill the requirement φ(0) = 1. A necessary and sufficient condition for the function φ to be a characteristic generator of an n-dimensional elliptical distribution is given in Theorem 2.2 of Fang et al. (1990).

In the sequel, we shall denote the elements of μ and Σ by

    μ = (μ_1, ..., μ_n)^T    (10)

and

    Σ = (σ_kl)  for k, l = 1, 2, ..., n,    (11)

respectively. Note that (9) guarantees that the matrix Σ is symmetric, positive semi-definite and has non-negative elements on the main diagonal. Hence, for any k and l, one has that σ_kl = σ_lk, whereas σ_kk >= 0, which will often be denoted by σ_k^2. It is interesting to note that in the one-dimensional case, the class of elliptical distributions consists mainly of the class of symmetric distributions, which includes well-known distributions such as the normal and the Student-t.

An n-dimensional random vector Y = (Y_1, ..., Y_n)^T is normal with parameters μ and Σ if and only if for any vector b, the linear combination b^T Y of the marginals Y_k has a univariate normal distribution with mean b^T μ and variance b^T Σ b. It is straightforward to generalize this result to the case of multivariate elliptical distributions.

Theorem 1. The n-dimensional random vector Y is elliptical with parameters μ (n x 1) and Σ (n x n), notation Y ~ E_n(μ, Σ, φ), if and only if for any vector b (n x 1) one has that

    b^T Y ~ E_1(b^T μ, b^T Σ b, φ).
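Theorem 1 can be checked numerically in the normal special case, where φ(u) = \exp(-u/2): the projection b^T Y should then be univariate normal with mean b^T μ and variance b^T Σ b. The sketch below is only an illustration under that assumption (the parameters and the vector b are arbitrary); it assumes NumPy and SciPy are available.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

# Illustrative parameters (hypothetical)
mu = np.array([0.5, -1.0, 2.0])
A = np.array([[1.0, 0.0, 0.0],
              [0.4, 0.9, 0.0],
              [0.2, -0.3, 1.1]])
Sigma = A @ A.T
b = np.array([1.0, 2.0, -1.0])

# Simulate Y ~ N_n(mu, Sigma) and project onto b
Y = mu + rng.standard_normal((200_000, 3)) @ A.T
proj = Y @ b

# Theorem 1 (normal case): b^T Y ~ N(b^T mu, b^T Sigma b)
mean_th = b @ mu
std_th = np.sqrt(b @ Sigma @ b)
ks = stats.kstest(proj, "norm", args=(mean_th, std_th))
print("theoretical mean/std :", mean_th, std_th)
print("sample mean/std      :", proj.mean(), proj.std(ddof=1))
print("KS statistic, p-value:", ks.statistic, ks.pvalue)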

From Theorem 1, we find in particular that, for k = 1, 2, ..., n,

    Y_k ~ E_1(μ_k, σ_k^2, φ).    (12)

Hence, the marginal components of a multivariate elliptical distribution have an elliptical distribution with the same characteristic generator. As S = \sum_{k=1}^{n} Y_k = e^T Y, where e (n x 1) = (1, 1, ..., 1)^T, it follows that

    S ~ E_1(e^T μ, e^T Σ e, φ),    (13)

where e^T μ = \sum_{k=1}^{n} μ_k and e^T Σ e = \sum_{k=1}^{n} \sum_{l=1}^{n} σ_kl.

In the following theorem, it is stated that any random vector whose components are linear combinations of the components of an elliptical random vector is again elliptical, with the same characteristic generator.

Theorem 2. For any matrix B (m x n), any vector c (m x 1) and any random vector Y ~ E_n(μ, Σ, φ), we have that

    B Y + c ~ E_m(B μ + c, B Σ B^T, φ).    (14)

It is easily seen that Theorem 2 is a generalization of Theorem 1.

2.2 Moments of Elliptical Distributions

Suppose that for a random vector Y, the expectation E[\prod_{k=1}^{n} Y_k^{r_k}] exists for some set of non-negative integers r_1, r_2, ..., r_n. Then this expectation can be found from the relation

    E[\prod_{k=1}^{n} Y_k^{r_k}] = i^{-(r_1 + r_2 + ... + r_n)} [ \frac{\partial^{r_1 + r_2 + ... + r_n}}{\partial t_1^{r_1} \partial t_2^{r_2} ... \partial t_n^{r_n}} E[\exp(i t^T Y)] ]_{t = 0},    (15)

where 0 (n x 1) = (0, 0, ..., 0)^T. The moments of Y ~ E_n(μ, Σ, φ) do not necessarily exist. However, from (8) and (15) we deduce that if E(Y_k) exists, then it will be given by

    E(Y_k) = μ_k,    (16)

so that E(Y) = μ, if the mean vector exists. Moreover, if Cov(Y_k, Y_l) and/or Var(Y_k) exist, then they will be given by

    Cov(Y_k, Y_l) = -2 φ'(0) σ_kl    (17)

and/or

    Var(Y_k) = -2 φ'(0) σ_k^2,    (18)

where φ' denotes the first derivative of the characteristic generator. In short, if the covariance matrix of Y exists, then it is given by Cov(Y) = -2 φ'(0) Σ. A necessary condition for this covariance matrix to exist is |φ'(0)| < \infty, see Cambanis et al. (1981).

In the following theorem, we prove that any multivariate elliptical distribution with mutually independent components must necessarily be multivariate normal, see Kelker (1970).

Theorem 3. Let Y ~ E_n(μ, Σ, φ) with mutually independent components Y_k. Assume that the expectations and variances of the Y_k exist and that Var(Y_k) > 0. Then it follows that Y is multivariate normal.

Proof. Independence of the random variables and existence of their expectations imply that the covariances exist and are equal to 0. Hence, we find that Σ is a diagonal matrix, and that

    φ(t^T Σ t) = \prod_{k=1}^{n} φ(σ_k^2 t_k^2)

holds for all n-dimensional vectors t. This equation is known as Hamel's equation, and its solution has the form φ(x) = e^{-α x}, for some positive constant α satisfying α = -φ'(0). To prove this, first note that

    φ(t^T Σ t) = φ( \sum_{k=1}^{n} σ_k^2 t_k^2 ) = \prod_{k=1}^{n} φ(σ_k^2 t_k^2),

or equivalently,

    φ(u_1 + ... + u_n) = φ(u_1) ... φ(u_n).

Considering the partial derivative with respect to u_k for some k = 1, 2, ..., n, we have

    \frac{\partial φ}{\partial u_k} = \lim_{h -> 0} \frac{φ(u_1 + ... + (u_k + h) + ... + u_n) - φ(u_1 + ... + u_n)}{h}
        = \lim_{h -> 0} \frac{φ(u_1 + ... + u_n) φ(h) - φ(u_1 + ... + u_n)}{h}
        = \lim_{h -> 0} \frac{φ(u_1 + ... + u_n) [φ(h) - φ(0)]}{h}
        = φ(u_1) ... φ(u_n) φ'(0).

But the left-hand side is

    \frac{\partial φ}{\partial u_k} = φ(u_1) ... φ'(u_k) ... φ(u_n) = φ(u_1) ... φ(u_n) \frac{φ'(u_k)}{φ(u_k)}.

Thus, equating the two we get

    \frac{φ'(u_k)}{φ(u_k)} = φ'(0),

which gives the desired solution φ(x) = e^{-α x} with α = -φ'(0). Thus, this leads to the characteristic generator of a multivariate normal.

2.3 Multivariate Densities of Elliptical Distributions

An elliptically distributed random vector Y ~ E_n(μ, Σ, φ) does not necessarily possess a multivariate density function f_Y(y). A necessary condition for Y to possess a density is that rank(Σ) = n. In the case of a multivariate normal random vector Y ~ N_n(μ, Σ), the density is well-known to be

    f_Y(y) = \frac{1}{\sqrt{(2π)^n |Σ|}} \exp[ -(1/2) (y - μ)^T Σ^{-1} (y - μ) ].    (19)

For elliptical distributions, one can prove that if Y ~ E_n(μ, Σ, φ) has a density, then it will be of the form

    f_Y(y) = \frac{c}{\sqrt{|Σ|}} g[ (y - μ)^T Σ^{-1} (y - μ) ]    (20)

for some non-negative function g(.) satisfying the condition

    0 < \int_0^\infty z^{n/2-1} g(z) dz < \infty    (21)

and a normalizing constant c given by

    c = \frac{Γ(n/2)}{π^{n/2}} [ \int_0^\infty z^{n/2-1} g(z) dz ]^{-1}.    (22)

Also, the opposite statement holds: any non-negative function g(.) satisfying the condition (21) can be used to define an n-dimensional density \frac{c}{\sqrt{|Σ|}} g[ (y - μ)^T Σ^{-1} (y - μ) ] of an elliptical distribution, with c given by (22). The function g(.) is called the density generator. One sometimes writes Y ~ E_n(μ, Σ, g) for the n-dimensional elliptical distribution generated from the function g(.). A detailed proof of these results, using spherical transformations of rectangular coordinates, can be found in Landsman & Valdez (2002).

Note that for a given characteristic generator φ, the density generator g and/or the normalizing constant c may depend on the dimension of the random vector Y. Often one considers the class of elliptical distributions of dimensions 1, 2, 3, ..., all derived from the same characteristic generator φ.

In case these distributions have a density, we will denote their respective density generators by g_n, where the subscript n denotes the dimension of the random vector Y. From (19), one immediately finds that the density generators and the corresponding normalizing constants of the multivariate normal random vectors Y ~ N_n(μ, Σ) for n = 1, 2, ... are given by

    g_n(u) = \exp(-u/2)    (23)

and

    c_n = (2π)^{-n/2},    (24)

respectively.

In Table 1, we consider some well-known families of the class of multivariate elliptical distributions. Each family consists of all elliptical distributions constructed from one particular characteristic generator φ(u). For more details about these families of elliptical distributions, see Landsman & Valdez (2003) and the references in their paper.

Table 1. Some well-known families of elliptical distributions with their characteristic and/or density generators.

    Family             Density generator g_n(u) and/or characteristic generator φ(u)
    Bessel             g_n(u) = (u/b)^{a/2} K_a[(u/b)^{1/2}], a > -n/2, b > 0,
                       where K_a(.) is the modified Bessel function of the third kind
    Cauchy             g_n(u) = (1 + u)^{-(n+1)/2}
    Exponential power  g_n(u) = \exp[-r u^s], r, s > 0
    Laplace            g_n(u) = \exp(-|u|)
    Logistic           g_n(u) = \exp(-u) / [1 + \exp(-u)]^2
    Normal             g_n(u) = \exp(-u/2); φ(u) = \exp(-u/2)
    Stable laws        φ(u) = \exp[-r u^{s/2}], 0 < s <= 2, r > 0
    Student-t          g_n(u) = (1 + u/m)^{-(n+m)/2}, m > 0 an integer

As an example, let us consider the elliptical Student-t distribution E_n(μ, Σ, g_n), with g_n(u) = (1 + u/m)^{-(n+m)/2}. We will denote this multivariate distribution (with m degrees of freedom) by t_n^{(m)}(μ, Σ).

Its multivariate density is given by

    f_Y(y) = \frac{c_n}{\sqrt{|Σ|}} [ 1 + \frac{(y - μ)^T Σ^{-1} (y - μ)}{m} ]^{-(n+m)/2}.    (25)

In order to determine the normalizing constant, first note from (22) that

    c_n = \frac{Γ(n/2)}{π^{n/2}} [ \int_0^\infty z^{n/2-1} g(z) dz ]^{-1} = \frac{Γ(n/2)}{π^{n/2}} [ \int_0^\infty z^{n/2-1} (1 + z/m)^{-(n+m)/2} dz ]^{-1}.

Performing the substitution u = 1 + (z/m), we find

    \int_0^\infty z^{n/2-1} (1 + z/m)^{-(n+m)/2} dz = m^{n/2} \int_1^\infty (u - 1)^{n/2-1} u^{-(n+m)/2} du.

Making one more substitution v = 1 - u^{-1}, we get

    \int_0^\infty z^{n/2-1} (1 + z/m)^{-(n+m)/2} dz = m^{n/2} \frac{Γ(n/2) Γ(m/2)}{Γ((n+m)/2)},    (26)

from which we find

    c_n = \frac{Γ((n+m)/2)}{(mπ)^{n/2} Γ(m/2)}.

From Theorem 1 and (12), we have that the marginals of the multivariate elliptical Student-t distribution are again Student-t distributions; hence Y_k ~ t_1^{(m)}(μ_k, σ_k^2). The results above lead to

    f_{Y_k}(y) = \frac{Γ((m+1)/2)}{(mπ)^{1/2} Γ(m/2) σ_k} [ 1 + \frac{1}{m} ( \frac{y - μ_k}{σ_k} )^2 ]^{-(m+1)/2},  k = 1, 2, ..., n,    (27)

which is indeed the well-known density of a univariate Student-t random variable with m degrees of freedom. Its mean is

    E(Y_k) = μ_k,    (28)

whereas it can be verified that its variance is given by

    Var(Y_k) = \frac{m}{m - 2} σ_k^2,    (29)

provided the degrees of freedom m > 2. Note that m/(m - 2) = -2 φ'(0), where φ is the characteristic generator of the family of Student-t distributions with m degrees of freedom. In order to derive the characteristic function of Y_k, note that

    E[e^{i t Y_k}] = \frac{Γ((m+1)/2)}{(mπ)^{1/2} Γ(m/2)} e^{i μ_k t} \int_{-\infty}^{\infty} e^{i σ_k t z} ( 1 + \frac{z^2}{m} )^{-(m+1)/2} dz
        = \frac{2 Γ((m+1)/2)}{(mπ)^{1/2} Γ(m/2) σ_k} e^{i μ_k t} (m σ_k^2)^{(m+1)/2} \int_0^\infty \frac{\cos(t z)}{(m σ_k^2 + z^2)^{(m+1)/2}} dz.    (30)

Hence, from Gradshteyn & Ryzhik (2000, p. 907), we find that the characteristic function of Y_k ~ t_1^{(m)}(μ_k, σ_k^2) is given by

    E[e^{i t Y_k}] = e^{i μ_k t} \frac{1}{2^{m/2-1} Γ(m/2)} ( \sqrt{m σ_k^2} |t| )^{m/2} K_{m/2}( \sqrt{m σ_k^2} |t| ),    (31)

where K_ν(.) is the modified Bessel function of the second kind. For a similar derivation, see Witkovsky (2001). Observe that equation (31) can then be used to find the characteristic generator for the family of Student-t distributions. Furthermore, note that if the random vector Y has independent components Y_k that are Student-t distributed with m degrees of freedom, it follows from Theorem 3 that Y cannot belong to the family of elliptically distributed random vectors.

2.4 Spherical Distributions

An n-dimensional random vector Z = (Z_1, ..., Z_n)^T is said to have a multivariate standard normal distribution if all the Z_i are mutually independent and standard normally distributed. We will write this as Z ~ N_n(0_n, I_n), where 0_n is the n-vector with i-th element E(Z_i) = 0 and I_n is the n x n covariance matrix, which equals the identity matrix in this case. The characteristic function of Z is given by

    E[\exp(i t^T Z)] = \exp( -(1/2) t^T t ),  t^T = (t_1, t_2, ..., t_n).    (32)

Hence, from (32), we find that the characteristic generator of N_n(0_n, I_n) is given by φ(u) = \exp(-u/2). The class of multivariate spherical distributions is an extension of the class of standard multivariate normal distributions.

Definition 2. A random vector Z = (Z_1, ..., Z_n)^T is said to have an n-dimensional spherical distribution with characteristic generator φ if Z ~ E_n(0_n, I_n, φ).

We will often use the notation S_n(φ) for E_n(0_n, I_n, φ) in the case of spherical distributions. From the definition above, we find the following corollary.

Corollary 1. Z ~ S_n(φ) if and only if

    E[\exp(i t^T Z)] = φ(t^T t),  t^T = (t_1, t_2, ..., t_n).    (33)

Consider an n-dimensional random vector Y such that

    Y =^d μ + A Z    (34)

for some vector μ (n x 1), some matrix A (n x m) and some m-dimensional spherical random vector Z ~ S_m(φ). Then it is straightforward to prove that Y ~ E_n(μ, Σ, φ), where Σ = A A^T. From equation (33), it immediately follows that the correlation matrices of members of the class of spherical distributions (whenever they exist) are identical. That is, if we let Corr(Z) = (r_ij) denote the correlation matrix, then

    r_ij = \frac{Cov(Z_i, Z_j)}{\sqrt{Var(Z_i) Var(Z_j)}} = 0 if i /= j, and r_ij = 1 if i = j.
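To illustrate that a spherical random vector has the identity as its correlation matrix without its components being independent (a point taken up in the next paragraph), consider the bivariate standard Student-t distribution with m degrees of freedom, generated here by the usual normal variance-mixture construction; the construction and the parameter values are illustrative assumptions, not taken from the paper. The components are essentially uncorrelated, their squares are clearly positively correlated, and each marginal variance is m/(m-2), consistent with (29) for σ_k^2 = 1.

import numpy as np

rng = np.random.default_rng(seed=3)
m = 5                      # degrees of freedom (illustrative)
size = 500_000

# Bivariate standard Student-t via a normal variance mixture:
# Z = G / sqrt(W/m), with G ~ N_2(0, I) and a *shared* chi-square mixing variable W.
G = rng.standard_normal((size, 2))
W = rng.chisquare(m, size=size)
Z = G / np.sqrt(W / m)[:, None]

print("corr(Z1, Z2)    :", np.corrcoef(Z[:, 0], Z[:, 1])[0, 1])        # ~ 0
print("corr(Z1^2, Z2^2):", np.corrcoef(Z[:, 0]**2, Z[:, 1]**2)[0, 1])  # clearly > 0
print("sample Var(Z1)  :", Z[:, 0].var(ddof=1), " vs m/(m-2) =", m / (m - 2))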

The factor -2 φ'(0) in the covariance matrix, as shown in (17), enters into the covariance structure but cancels in the correlation matrix. Note that although the correlations between different components of a spherically distributed random vector are 0, this does not imply that these components are mutually independent. As a matter of fact, they can only be independent if they belong to the family of multivariate normal distributions. Observe that from the characteristic functions of Z and a^T Z, one immediately finds the following result.

Lemma 1. Z ~ S_n(φ) if and only if for any n-dimensional vector a, one has

    \frac{a^T Z}{\sqrt{a^T a}} ~ S_1(φ).    (35)

As a special case of this result, we find that any component Z_i of Z has an S_1(φ) distribution. From the results concerning elliptical distributions, we find that if a spherical random vector Z ~ S_n(φ) possesses a density f_Z(z), then it will be of the form

    f_Z(z) = c g(z^T z),    (36)

where the density generator g satisfies the condition (21) and the normalizing constant c satisfies (22). Furthermore, the opposite also holds: any non-negative function g(.) satisfying the condition (21) can be used to define an n-dimensional density c g(z^T z) of a spherical distribution with the normalizing constant c satisfying (22). One often writes S_n(g) for the n-dimensional spherical distribution generated from the density generator g(.).

2.5 Conditional Distributions

It is well-known that if (Y, Λ) has a bivariate normal distribution, then the conditional distribution of Y, given that Λ = λ, is normal with mean and variance given by

    E(Y | Λ = λ) = E(Y) + r(Y, Λ) \frac{σ_Y}{σ_Λ} [λ - E(Λ)]    (37)

and

    Var(Y | Λ = λ) = (1 - r^2(Y, Λ)) σ_Y^2,    (38)

where r(Y, Λ) is the correlation coefficient between Y and Λ:

    r(Y, Λ) = \frac{Cov(Y, Λ)}{\sqrt{Var(Y) Var(Λ)}}.    (39)

In the following theorem, it is stated that this conditioning result can be generalized to the class of bivariate elliptical distributions. This result will be useful for developing bounds for sums of random variables in Section 4.
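The conditional moment formulas (37) and (38) are easy to confirm by simulation in the bivariate normal case. The sketch below (arbitrary illustrative parameters, NumPy assumed) conditions on Λ falling in a narrow band around a chosen value λ0 and compares the empirical conditional mean and variance with the formulas.

import numpy as np

rng = np.random.default_rng(seed=4)

# Illustrative bivariate normal for (Y, Lambda) (hypothetical parameters)
mu_Y, mu_L = 1.0, -2.0
sd_Y, sd_L, r = 2.0, 1.5, 0.6
cov = np.array([[sd_Y**2, r * sd_Y * sd_L],
                [r * sd_Y * sd_L, sd_L**2]])
Y, L = rng.multivariate_normal([mu_Y, mu_L], cov, size=2_000_000).T

# Condition on Lambda falling in a narrow band around lam0
lam0, eps = -1.0, 0.01
sel = np.abs(L - lam0) < eps
cond_mean_mc, cond_var_mc = Y[sel].mean(), Y[sel].var(ddof=1)

# Formulas (37) and (38)
cond_mean_th = mu_Y + r * (sd_Y / sd_L) * (lam0 - mu_L)
cond_var_th = (1 - r**2) * sd_Y**2

print("conditional mean: MC", cond_mean_mc, " theory", cond_mean_th)
print("conditional var : MC", cond_var_mc, " theory", cond_var_th)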

Theorem 4. Let the random vector Y = (Y_1, ..., Y_n)^T ~ E_n(μ, Σ, φ) with density generator denoted by g_n(.). Define Y and Λ to be linear combinations of the components of Y, i.e. Y = α^T Y and Λ = β^T Y, for some α^T = (α_1, α_2, ..., α_n) and β^T = (β_1, β_2, ..., β_n). Then we have that (Y, Λ) has an elliptical distribution:

    (Y, Λ) ~ E_2( μ_(Y,Λ), Σ_(Y,Λ), φ ),    (40)

where the respective parameters are given by

    μ_(Y,Λ) = ( μ_Y )  =  ( α^T μ )    (41)
              ( μ_Λ )     ( β^T μ )

and

    Σ_(Y,Λ) = ( σ_Y^2            r(Y,Λ) σ_Y σ_Λ )  =  ( α^T Σ α    α^T Σ β )    (42)
              ( r(Y,Λ) σ_Y σ_Λ   σ_Λ^2          )     ( α^T Σ β    β^T Σ β )

Furthermore, conditionally given Λ = λ, the random variable Y has a univariate elliptical distribution:

    Y | Λ = λ ~ E_1( μ_Y + r(Y,Λ) \frac{σ_Y}{σ_Λ} (λ - μ_Λ),  (1 - r^2(Y,Λ)) σ_Y^2,  φ_a )    (43)

for some characteristic generator φ_a(.) depending on a = [(λ - μ_Λ)/σ_Λ]^2.

Proof. The joint distribution of (Y, Λ) immediately follows from Theorem 2. We know that if its joint density exists, it will be of the form

    f_{Y,Λ}(y, λ) = \frac{c_2}{σ_Y σ_Λ \sqrt{1 - r^2}} g_2( \frac{1}{1 - r^2} [ ( \frac{y - μ_Y}{σ_Y} )^2 - 2 r ( \frac{y - μ_Y}{σ_Y} )( \frac{λ - μ_Λ}{σ_Λ} ) + ( \frac{λ - μ_Λ}{σ_Λ} )^2 ] ),    (44)

where we simply write r = r(Y, Λ). Consider next the characteristic function of the conditional random variable Y | Λ = λ:

    φ_{Y|Λ}(t) = E[ e^{i t Y} | Λ = λ ] = \int_{-\infty}^{\infty} e^{i t y} f_{Y|Λ}(y | λ) dy = \frac{1}{f_Λ(λ)} \int_{-\infty}^{\infty} e^{i t y} f_{Y,Λ}(y, λ) dy,

where f_{Y,Λ}(y, λ) is given in (44). Using the transformation w_1 = (y - μ_Y)/σ_Y and w_2 = (λ - μ_Λ)/σ_Λ, we have

    φ_{Y|Λ}(t) = e^{i t μ_Y} \int_{-\infty}^{\infty} e^{i (t σ_Y) w_1} \frac{(c_2/c_1) g_2( \frac{1}{1 - r^2} [ w_1^2 - 2 r w_1 w_2 + w_2^2 ] )}{\sqrt{1 - r^2} g_1(w_2^2)} dw_1.

Since we can write

    \frac{1}{1 - r^2} ( w_1^2 - 2 r w_1 w_2 + w_2^2 ) = ( \frac{w_1 - r w_2}{\sqrt{1 - r^2}} )^2 + w_2^2,

and using the transformation w_1^* = (w_1 - r w_2)/\sqrt{1 - r^2}, we have

    φ_{Y|Λ}(t) = e^{i t (μ_Y + σ_Y r w_2)} \int_{-\infty}^{\infty} \exp[ i ( σ_Y \sqrt{1 - r^2} t ) w_1^* ] \frac{c_2 g_2( (w_1^*)^2 + w_2^2 )}{c_1 g_1(w_2^2)} dw_1^*.    (45)

Note that

    \int_{-\infty}^{\infty} \frac{c_2 g_2( (w_1^*)^2 + w_2^2 )}{c_1 g_1(w_2^2)} dw_1^* = 1,

which clearly indicates that \frac{c_2 g_2( (w_1^*)^2 + w_2^2 )}{c_1 g_1(w_2^2)} is a legitimate density. Furthermore, it can be shown that it is the density of a spherical random variable, denoted by W, with density generator

    g_a(w) = \frac{g_2(w + a)}{\int_0^\infty u^{-1/2} g_2(u + a) du},

where a = w_2^2, see Fang et al. (1990), pages 39 to 41. Thus, the characteristic function of this spherical random variable W can be expressed as φ_W(t) = φ_a(t^2) for some characteristic generator φ_a(.) which depends on a. Therefore, the characteristic function in (45) simplifies to

    φ_{Y|Λ}(t) = e^{i t (μ_Y + σ_Y r w_2)} φ_W( σ_Y \sqrt{1 - r^2} t ) = e^{i t (μ_Y + σ_Y r w_2)} φ_a( σ_Y^2 (1 - r^2) t^2 ).

We recognize this as the characteristic function of an elliptical random variable with parameters as in (37) and (38).

From this theorem, it follows that the characteristic function of Y | Λ = λ is given by

    E[ e^{i Y t} | Λ = λ ] = \exp( i μ_{Y|Λ=λ} t ) φ_a( σ^2_{Y|Λ=λ} t^2 ),

where

    μ_{Y|Λ=λ} = μ_Y + r(Y,Λ) \frac{σ_Y}{σ_Λ} (λ - μ_Λ)  and  σ^2_{Y|Λ=λ} = (1 - r^2(Y,Λ)) σ_Y^2.
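The location part of (43) can also be checked for a non-normal elliptical law. The sketch below builds a bivariate elliptical Student-t vector (Y, Λ) = μ + A Z with Z spherical Student-t (an illustrative construction, not from the paper) and compares the empirical mean of Y given Λ ≈ λ0 with μ_Y + r (σ_Y/σ_Λ)(λ0 - μ_Λ). The conditional dispersion involves the generator φ_a, so no simple variance check is attempted here.

import numpy as np

rng = np.random.default_rng(seed=5)
m, size = 6, 2_000_000

# Bivariate elliptical Student-t: (Y, Lambda) = mu + A @ Z, Z spherical t_m
A = np.array([[2.0, 0.0],
              [0.9, 1.2]])
mu = np.array([1.0, 0.5])
Sigma = A @ A.T
G = rng.standard_normal((size, 2))
W = rng.chisquare(m, size=size)
Z = G / np.sqrt(W / m)[:, None]
YL = mu + Z @ A.T
Y, L = YL[:, 0], YL[:, 1]

sY, sL = np.sqrt(Sigma[0, 0]), np.sqrt(Sigma[1, 1])
r = Sigma[0, 1] / (sY * sL)

lam0, eps = 2.0, 0.05
sel = np.abs(L - lam0) < eps
cond_mean_th = mu[0] + r * (sY / sL) * (lam0 - mu[1])
print("E[Y | Lambda ~ lam0]: MC", Y[sel].mean(), " theory", cond_mean_th)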

3 Log-Elliptical Distributions

Multivariate log-elliptical distributions are natural generalizations of multivariate log-normal distributions. For any n-dimensional vector x = (x_1, ..., x_n)^T with positive components x_i, we define log x = (log x_1, log x_2, ..., log x_n)^T. Recall that an n-dimensional random vector X has a multivariate log-normal distribution if log X has a multivariate normal distribution. In this case we have that log X ~ N_n(μ, Σ).

Definition 3. The random vector X is said to have a multivariate log-elliptical distribution with parameters μ and Σ if log X has an elliptical distribution:

    log X ~ E_n(μ, Σ, φ).    (46)

In the sequel, we shall denote log X ~ E_n(μ, Σ, φ) as X ~ LE_n(μ, Σ, φ). When μ = 0_n and Σ = I_n, we shall write X ~ LS_n(φ). Clearly, if Y ~ E_n(μ, Σ, φ) and X = exp(Y), then X ~ LE_n(μ, Σ, φ).

If the density of X ~ LE_n(μ, Σ, φ) exists, then the density of Y = log X ~ E_n(μ, Σ, φ) also exists. From (20), it follows that the density of X must be equal to

    f_X(x) = \frac{c}{\sqrt{|Σ|}} ( \prod_{k=1}^{n} x_k^{-1} ) g[ (log x - μ)^T Σ^{-1} (log x - μ) ],    (47)

see Fang et al. (1990). The density of the multivariate log-normal distribution with parameters μ and Σ follows from (19) and (47). Furthermore, any marginal distribution of a log-elliptical distribution is again log-elliptical. This immediately follows from the properties of elliptical distributions.

Theorem 5. Let X ~ LE_n(μ, Σ, φ). If the mean of X_k exists, then it is given by

    E(X_k) = e^{μ_k} φ(-σ_k^2).

Provided the covariances exist, they are given by

    Cov(X_k, X_l) = \exp(μ_k + μ_l) { φ( -(σ_k^2 + 2 σ_kl + σ_l^2) ) - φ(-σ_k^2) φ(-σ_l^2) }.

Proof. Define the vector a_k = (0, 0, ..., 0, 1, 0, ..., 0)^T to consist of all zero entries, except for the k-th entry which is 1. Thus, for k = 1, 2, ..., n, we have

    E(X_k) = E( e^{Y_k} ) = E( e^{a_k^T Y} ) = φ_{a_k^T Y}(-i) = \exp(a_k^T μ) φ(-a_k^T Σ a_k) = e^{μ_k} φ(-σ_k^2)

and the result for the mean immediately follows. For the covariance, first define the vector b_kl = (0, 0, ..., 1, 0, ..., 0, 1, 0, ..., 0)^T to consist of all zero entries, except for the k-th and l-th entries, which are each 1. Note that Cov(X_k, X_l) = E(X_k X_l) - E(X_k) E(X_l), where

    E(X_k X_l) = E( e^{b_kl^T Y} ) = φ_{b_kl^T Y}(-i) = \exp(b_kl^T μ) φ(-b_kl^T Σ b_kl) = \exp(μ_k + μ_l) φ( -(σ_k^2 + 2 σ_kl + σ_l^2) ),

and the result should now be obvious.

Some risk measures for log-elliptical distributions are derived in the next theorem.

Theorem 6. Let X ~ LE_1(μ, σ^2, φ) and Z ~ S_1(φ) with density f_Z(x). Then

    F_X^{-1}(p) = \exp( μ + σ F_Z^{-1}(p) ),  0 < p < 1,

    E[(X - d)_+] = e^{μ} φ(-σ^2) F_{Z^*}( \frac{μ - log d}{σ} ) - d F_Z( \frac{μ - log d}{σ} ),  d > 0,

    E[ X | X > F_X^{-1}(p) ] = \frac{e^{μ}}{1 - p} φ(-σ^2) F_{Z^*}( F_Z^{-1}(1 - p) ),  0 < p < 1,

where the density of Z^* is given by

    f_{Z^*}(x) = \frac{f_Z(x) e^{-σ x}}{φ(-σ^2)}.

Proof. (1) The quantiles of X follow immediately from log X =^d μ + σ Z.

(2) Using f_X(x) = \frac{1}{σ x} f_Z( \frac{log x - μ}{σ} ), and substituting x by t = \frac{log x - μ}{σ}, we find that

    \int_d^\infty x f_X(x) dx = e^{μ} φ(-σ^2) F_{Z^*}( \frac{μ - log d}{σ} ).

The stated result then follows from

    E[(X - d)_+] = \int_d^\infty x f_X(x) dx - d F_Z( \frac{μ - log d}{σ} ).

(3) The expression for the tail conditional expectation follows from

    E[ X | X > F_X^{-1}(p) ] = F_X^{-1}(p) + \frac{1}{1 - p} E[ ( X - F_X^{-1}(p) )_+ ].
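As a sketch of Theorem 6, consider the log-normal special case: Z is standard normal, φ(u) = \exp(-u/2), so φ(-σ^2) = e^{σ^2/2} and Z^* is normal with mean -σ and unit variance. The closed-form quantile, stop-loss premium and tail conditional expectation then take familiar forms, which the code below compares against crude Monte Carlo estimates. It is an illustration only; the parameter values are arbitrary, and NumPy and SciPy are assumed to be available.

import numpy as np
from scipy.stats import norm

mu, sigma, d, p = 0.2, 0.5, 1.5, 0.95   # illustrative parameters

# Closed forms from Theorem 6, specialized to the log-normal case
quantile = np.exp(mu + sigma * norm.ppf(p))
stop_loss = (np.exp(mu + sigma**2 / 2) * norm.cdf((mu - np.log(d)) / sigma + sigma)
             - d * norm.cdf((mu - np.log(d)) / sigma))
tce = np.exp(mu + sigma**2 / 2) * norm.cdf(sigma - norm.ppf(p)) / (1 - p)

# Crude Monte Carlo check
rng = np.random.default_rng(seed=6)
X = np.exp(mu + sigma * rng.standard_normal(2_000_000))
print("quantile :", quantile, " MC:", np.quantile(X, p))
print("stop-loss:", stop_loss, " MC:", np.maximum(X - d, 0).mean())
print("TCE      :", tce, " MC:", X[X > np.quantile(X, p)].mean())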

Note that the density of Z^* in the theorem above can be interpreted as the Esscher transform, with parameter -σ, of the density of Z. Further, note that the expression for the quantiles holds for any one-dimensional elliptical distribution, whereas the expressions for the stop-loss premiums and tail conditional expectations were only derived for continuous elliptical distributions. We also have that if g is the normalized density generator of Z ~ S_1(φ), then

    F_Z(x) = \int_{-\infty}^{x} g(z^2) dz.

4 Convex Order Bounds for Sums of Random Variables

This section describes the bounds for (the distribution function of) sums of random variables as presented in Dhaene et al. (2002a, 2002b). We first define the notions of convex order and comonotonicity, which are essential ingredients for developing the bounds.

4.1 Convex Order

In the sequel, all random variables are assumed to have a finite mean. In this case, we find

    E(X) = \int_0^\infty \bar F_X(x) dx - \int_{-\infty}^{0} F_X(x) dx.    (48)

In actuarial science, it is common to replace a random variable by another one which is less attractive but has a simpler structure, so that its distribution function is easier to determine. In this paper, the notion of "less attractive" will be translated in terms of convex order, as defined below.

Definition 4. A random variable X is said to precede another random variable Y in convex order, written X <=_cx Y, if

    E[v(X)] <= E[v(Y)]

holds for all convex functions v for which the expectations exist.

Alternatively, it can be shown that X <=_cx Y if and only if

    E(X) = E(Y)  and  E[(X - d)_+] <= E[(Y - d)_+]  for all d,    (49)

see e.g. Shaked & Shanthikumar (1994). As stop-loss premiums can be seen as measures for the upper tail of the distribution function, X <=_cx Y means that observing large outcomes is more likely for Y than for X. Other characterizations of convex order can be found in Dhaene et al. (2002a).

4.2 Comonotonicity

Consider an n-dimensional random vector X = (X_1, ..., X_n)^T with multivariate distribution function given by F_X(x) = Pr(X_1 <= x_1, ..., X_n <= x_n), for any x = (x_1, ..., x_n)^T.

It is well-known that this multivariate distribution function satisfies the so-called Fréchet bounds:

    \max( \sum_{k=1}^{n} F_{X_k}(x_k) - (n - 1), 0 ) <= F_X(x) <= \min( F_{X_1}(x_1), ..., F_{X_n}(x_n) ),

see Hoeffding (1940) or Fréchet (1951).

Definition 5. A random vector X is said to be comonotonic if its joint distribution is given by the Fréchet upper bound, i.e.

    F_X(x) = \min( F_{X_1}(x_1), ..., F_{X_n}(x_n) ).

Alternative characterizations of comonotonicity of a random vector are given in the following theorem, the proof of which can be found in Dhaene et al. (2002a).

Theorem 7. Suppose X is an n-dimensional random vector. Then the following statements are equivalent:

1. X is comonotonic.
2. X =^d ( F_{X_1}^{-1}(U), ..., F_{X_n}^{-1}(U) ) for U ~ Uniform(0,1), where F_{X_k}^{-1}(.) denotes the quantile function defined by F_{X_k}^{-1}(q) = inf{ x in R : F_{X_k}(x) >= q }, for 0 <= q <= 1.
3. There exist a random variable Z and non-decreasing functions h_1, ..., h_n such that X =^d (h_1(Z), ..., h_n(Z)).

In the sequel, we shall use the superscript c to denote comonotonicity of a random vector. Hence, the vector (X_1^c, ..., X_n^c) is a comonotonic random vector with the same marginals as the vector (X_1, ..., X_n). The former vector is called the comonotonic counterpart of the latter. Consider the comonotonic random sum

    S^c = X_1^c + ... + X_n^c.    (50)

In Dhaene et al. (2002a), it is proven that each quantile of S^c is equal to the sum of the corresponding quantiles of the marginals involved:

    F_{S^c}^{-1}(q) = \sum_{k=1}^{n} F_{X_k}^{-1}(q),  0 <= q <= 1.    (51)

Furthermore, they showed that in case all marginal distributions F_{X_k} are strictly increasing, the stop-loss premiums of a comonotonic sum S^c can easily be computed from the stop-loss premiums of the marginals:

    E[(S^c - d)_+] = \sum_{k=1}^{n} E[(X_k - d_k)_+],    (52)

where the d_k's are determined by

    d_k = F_{X_k}^{-1}( F_{S^c}(d) ).    (53)

This result can be extended to the case of marginal distributions that are not necessarily strictly increasing, see Dhaene et al. (2002a).

4.3 Convex Bounds

In this subsection, we shall provide lower and upper bounds for the sum

    S = X_1 + ... + X_n.    (54)

Proofs for these bounds can be found in Dhaene et al. (2002a). The comonotonic counterpart of X = (X_1, ..., X_n)^T is denoted by X^c = (X_1^c, ..., X_n^c)^T, where X_k^c is given by

    X_k^c = F_{X_k}^{-1}(U),    (55)

with U ~ Uniform(0,1). The comonotonic convex upper bound S^c for the sum S is defined by

    S^c = X_1^c + ... + X_n^c.    (56)

It can be shown that

    S <=_cx S^c,    (57)

which implies that S^c is indeed a convex order upper bound for S.

Let us now suppose that we have additional information available about the dependency structure of X, in the sense that there is some random variable Λ with a known distribution function, and such that we also know the distributions of the random variables X_k | Λ = λ for all outcomes λ of Λ and for all k = 1, ..., n. Let F_{X_k|Λ}^{-1}(U) be a notation for the random variable f_k(U, Λ), where the function f_k is defined by f_k(u, λ) = F_{X_k|Λ=λ}^{-1}(u). Now, consider the random vector X^u = (X_1^u, ..., X_n^u)^T, where X_k^u is given by

    X_k^u = F_{X_k|Λ}^{-1}(U).    (58)

The improved upper bound (corresponding to Λ) for the sum S is then defined as

    S^u = X_1^u + ... + X_n^u.    (59)

Notice that the random vectors X^c and X^u have the same marginals. It can be proven that

    S <=_cx S^u <=_cx S^c,    (60)

which means that the sum S^u is indeed an improved upper bound (in the sense of convex order) for the original sum S.
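As a numerical illustration of the comonotonic upper bound and of the quantile additivity (51), the sketch below uses log-normal marginals with arbitrary parameters (an assumption for illustration only): it builds S^c = \sum_k F_{X_k}^{-1}(U) from one common uniform U, checks that its empirical 99% quantile matches the sum of the marginal 99% quantiles, and contrasts it with an independent sum of the same marginals.

import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(seed=7)

# Arbitrary log-normal marginals: X_k = exp(mu_k + sigma_k * N(0,1))
mus = np.array([0.0, 0.3, -0.2, 0.1])
sigmas = np.array([0.5, 0.8, 0.4, 0.6])
q, size = 0.99, 1_000_000

# Comonotonic counterpart: every margin driven by the same uniform U
U = rng.uniform(size=size)
Sc = sum(lognorm.ppf(U, s, scale=np.exp(m)) for m, s in zip(mus, sigmas))

# Independent sum with the same marginals, for comparison
Sind = sum(lognorm.rvs(s, scale=np.exp(m), size=size, random_state=rng)
           for m, s in zip(mus, sigmas))

# Quantile additivity (51): F_{S^c}^{-1}(q) = sum of marginal q-quantiles
q_additive = sum(lognorm.ppf(q, s, scale=np.exp(m)) for m, s in zip(mus, sigmas))
print("sum of marginal quantiles            :", q_additive)
print("empirical quantile of S^c            :", np.quantile(Sc, q))
print("empirical quantile of independent sum:", np.quantile(Sind, q))

The noticeably larger upper quantile of S^c relative to the independent sum reflects the heavier upper tail of the comonotonic sum.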

Finally, consider the random vector X^l = (X_1^l, ..., X_n^l)^T, where X_k^l is given by

    X_k^l = E(X_k | Λ).    (61)

Using Jensen's inequality, it is straightforward to prove that the sum

    S^l = X_1^l + ... + X_n^l    (62)

is a convex order lower bound for S:

    S^l <=_cx S.    (63)

5 Non-Independent Log-Elliptical Risks

In this section we develop convex lower and upper bounds for sums involving log-elliptical random variables. This generalizes the results for the log-normal case as obtained in Kaas, Dhaene & Goovaerts (2000). Consider a series of deterministic non-negative payments α_1, ..., α_n, due at times 1, ..., n respectively. The present value random variable S is defined by

    S = \sum_{i=1}^{n} α_i \exp[ -(Y_1 + ... + Y_i) ],    (64)

where the random variable Y_i represents the continuously compounded rate of return over the period (i-1, i), i = 1, 2, ..., n. Furthermore, define Y(i) = Y_1 + ... + Y_i, the sum of the first i elements of the random vector Y = (Y_1, ..., Y_n)^T, and X_i = α_i \exp[-Y(i)]. Using these notations, we can write the present value random variable in (64) as

    S = \sum_{i=1}^{n} α_i e^{-Y(i)} = \sum_{i=1}^{n} X_i.    (65)

We will assume that the return vector Y = (Y_1, ..., Y_n)^T belongs to the class of multivariate elliptical distributions, i.e. Y ~ E_n(μ, Σ, φ), with parameters μ and Σ given by

    μ = (μ_1, ..., μ_n)^T,  Σ = (σ_kl)  for k, l = 1, 2, ..., n.    (66)

Thus, the random vector X = (X_1, ..., X_n)^T is a log-elliptical random vector. From (13), we find that Y(i) ~ E_1(μ(i), σ^2(i), φ) with

    μ(i) = \sum_{k=1}^{i} μ_k,    (67)

    σ^2(i) = \sum_{k=1}^{i} \sum_{l=1}^{i} σ_kl.    (68)
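A small sketch of how μ(i) and σ^2(i) in (67) and (68) can be computed for all partial sums at once from μ and Σ; the parameter values below are illustrative assumptions only, and NumPy is assumed.

import numpy as np

# Illustrative parameters for the return vector Y ~ E_n(mu, Sigma, phi)
mu = np.array([0.05, 0.05, 0.06, 0.06])
sd = np.array([0.10, 0.12, 0.12, 0.15])
corr = 0.5 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))  # AR(1)-type correlation
Sigma = np.outer(sd, sd) * corr

n = len(mu)
mu_i = np.cumsum(mu)                                                 # mu(i), eq. (67)
sigma2_i = np.array([Sigma[:i, :i].sum() for i in range(1, n + 1)])  # sigma^2(i), eq. (68)
print("mu(i)     :", mu_i)
print("sigma^2(i):", sigma2_i)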

In order to develop lower and improved upper bounds for S, we define the conditioning random variable Λ as a linear combination of the random variables Y_i, i = 1, 2, ..., n:

    Λ = \sum_{i=1}^{n} β_i Y_i.    (69)

Using the property of elliptical distributions described in (14), we know that Λ ~ E_1(μ_Λ, σ_Λ^2, φ), where

    μ_Λ = \sum_{i=1}^{n} β_i μ_i  and  σ_Λ^2 = \sum_{i,j=1}^{n} β_i β_j σ_ij.    (70)

Note that if the mean and variance of Λ exist, then they are given by E(Λ) = μ_Λ and Var(Λ) = -2 φ'(0) σ_Λ^2, respectively.

Proposition 1. Let S be the present value sum as defined in (64), with α_i, i = 1, ..., n, non-negative real numbers and Y ~ E_n(μ, Σ, φ). Let the conditioning random variable Λ be defined by

    Λ = \sum_{i=1}^{n} β_i Y_i.    (71)

Then the comonotonic upper bound S^c, the improved upper bound S^u and the lower bound S^l are given by

    S^c = \sum_{i=1}^{n} α_i \exp[ -μ(i) + σ(i) F_Z^{-1}(U) ],    (72)

    S^u = \sum_{i=1}^{n} α_i \exp[ -μ(i) - r_i σ(i) F_Z^{-1}(U) + \sqrt{1 - r_i^2} σ(i) F_Z^{-1}(V) ],    (73)

    S^l = \sum_{i=1}^{n} α_i \exp[ -μ(i) - r_i σ(i) F_Z^{-1}(U) ] φ_a( -σ^2(i) (1 - r_i^2) ),    (74)

where U and V are mutually independent Uniform(0,1) random variables, Z ~ S_1(φ), and r_i is defined by

    r_i = \frac{ \sum_{k=1}^{i} \sum_{l=1}^{n} β_l σ_kl }{ σ(i) σ_Λ }.    (75)

Proof. (a) From

    X_i =^d α_i e^{-μ(i) + σ(i) Z},

we find that

    F_{S^c}^{-1}(p) = \sum_{i=1}^{n} F_{X_i}^{-1}(p) = \sum_{i=1}^{n} α_i e^{-μ(i) + σ(i) F_Z^{-1}(p)},  0 < p < 1.

Hence, the comonotonic upper bound S^c of S is given by (72).

(b) In order to derive the lower bound S^l, we first determine the characteristic function of the bivariate random vector (Y(i), Λ) for i = 1, ..., n. For any 2-vector (t_1, t_2)^T, we find

    E[ \exp( i ( t_1 Y(i) + t_2 Λ ) ) ] = E[ \exp( i s^T Y ) ]

with

    s_k = t_1 + t_2 β_k for k = 1, ..., i,  and  s_k = t_2 β_k for k = i+1, ..., n.

As Y ~ E_n(μ, Σ, φ), this leads to

    E[ \exp( i ( t_1 Y(i) + t_2 Λ ) ) ] = \exp( i s^T μ ) φ( s^T Σ s ) = \exp( i t^T μ^* ) φ( t^T Σ^* t )

with

    μ^* = (μ(i), μ_Λ)^T,  Σ^* = ( σ^2(i)   σ_{1,2} )    and  σ_{1,2} = σ_{2,1} = r_i σ(i) σ_Λ.
                                ( σ_{2,1}  σ_Λ^2   )

Hence the bivariate random vector (Y(i), Λ) is elliptical: (Y(i), Λ)^T ~ E_2(μ^*, Σ^*, φ). Thus, from Theorem 4, we know that

    Y(i) | Λ = λ ~ E_1( μ_{Y(i)|Λ}, σ^2_{Y(i)|Λ}, φ_a )

with

    μ_{Y(i)|Λ} = E(Y(i) | Λ = λ) = μ(i) + r_i \frac{σ(i)}{σ_Λ} (λ - μ_Λ)    (76)

and

    σ^2_{Y(i)|Λ} = Var(Y(i) | Λ = λ) = σ^2(i) (1 - r_i^2).    (77)

Note that

    r_i = Corr(Y(i), Λ) = Corr( \sum_{k=1}^{i} Y_k, \sum_{l=1}^{n} β_l Y_l ) = \frac{ \sum_{k=1}^{i} \sum_{l=1}^{n} β_l σ_kl }{ σ(i) σ_Λ }.

Consequently, using the moment generating function of an elliptical random variable, we have

    E[ α_i e^{-Y(i)} | Λ = λ ] = α_i E[ e^{-Y(i)} | Λ = λ ] = α_i M_{Y(i)|Λ}(-1) = α_i \exp( -μ_{Y(i)|Λ} ) φ_a( -σ^2_{Y(i)|Λ} ).

Thus, if U ~ Uniform(0,1), the lower bound for the present value random sum in (64) can be written as S^l = X_1^l + ... + X_n^l with

    X_i^l = E[ α_i e^{-Y(i)} | Λ ]
          = α_i \exp( -μ(i) - r_i \frac{σ(i)}{σ_Λ} ( μ_Λ + σ_Λ F_Z^{-1}(U) - μ_Λ ) ) φ_a( -σ^2(i) (1 - r_i^2) )
          = α_i \exp( -μ(i) - r_i σ(i) F_Z^{-1}(U) ) φ_a( -σ^2(i) (1 - r_i^2) ).    (78)

(c) As suggested in Section 4, an improved upper bound can be developed using the additional information provided by Λ = \sum_{i=1}^{n} β_i Y_i. Now, define X_i^u = F_{X_i|Λ}^{-1}(V) as in (58). Because

    Y(i) | Λ = λ ~ E_1( μ_{Y(i)|Λ}, σ^2_{Y(i)|Λ}, φ_a ),

it follows that

    F_{X_i|Λ=λ}^{-1}(v) = α_i \exp[ σ_{Y(i)|Λ} F_Z^{-1}(v) - μ_{Y(i)|Λ} ].

Thus, if V ~ Uniform(0,1), independent of U, we have

    F_{X_i|Λ}^{-1}(V) = α_i \exp[ σ_{Y(i)|Λ} F_Z^{-1}(V) - μ_{Y(i)|Λ} ]
        = α_i \exp[ σ(i) \sqrt{1 - r_i^2} F_Z^{-1}(V) - μ(i) - r_i σ(i) F_Z^{-1}(U) ]
        = α_i \exp[ -μ(i) - r_i σ(i) F_Z^{-1}(U) + σ(i) \sqrt{1 - r_i^2} F_Z^{-1}(V) ].

Thus, an improved upper bound is expressed as in (73).

6 Concluding Remarks

In this paper we derived upper and lower bounds for the distribution function of a sum of non-independent log-elliptical random variables. The results presented in this paper extend the results of Dhaene, Denuit, Goovaerts, Kaas & Vyncke (2002a, 2002b), who constructed bounds for sums of log-normal random variables. The extension to the class of elliptical distributions makes sense because

it makes the ideas developed in Kaas, Dhaene & Goovaerts (2000) also applicable in situations where the shape of the log-normal distribution does not fit, but heavier-tailed distributions such as the Student-t are required. As multivariate log-elliptical distributions share many of the tractable properties of multivariate log-normal distributions, it is not surprising to find that the bounds developed for the log-elliptical case are very similar in form to those developed for the log-normal case. The upper bound is based on the sum of comonotonic random variables, while lower and improved upper bounds can be constructed by conditioning on some additional available random variable.

7 References

[1] Cambanis, S., Huang, S., and Simons, G. (1981), "On the Theory of Elliptically Contoured Distributions," Journal of Multivariate Analysis 11.
[2] Dhaene, J. and De Pril, N. (1994), "On a Class of Approximate Computation Methods in the Individual Risk Model," Insurance: Mathematics & Economics 14.
[3] Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R., and Vyncke, D. (2002a), "The Concept of Comonotonicity in Actuarial Science and Finance: Theory," Insurance: Mathematics & Economics 31(1).
[4] Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R., and Vyncke, D. (2002b), "The Concept of Comonotonicity in Actuarial Science and Finance: Applications," Insurance: Mathematics & Economics 31(2).
[5] Dhaene, J. and Vandebroek, M. (1995), "Recursions for the Individual Model," Insurance: Mathematics & Economics 16.
[6] Fang, K.T., Kotz, S. and Ng, K.W. (1990), Symmetric Multivariate and Related Distributions. London: Chapman & Hall.
[7] Fréchet, M. (1951), "Sur les tableaux de corrélation dont les marges sont données et programmes linéaires," Ann. Univ. Lyon, Sect. A 9.
[8] Gradshteyn, I.S. and Ryzhik, I.M. (2000), Table of Integrals, Series, and Products. New York: Academic Press.
[9] Gupta, A.K. and Varga, T. (1993), Elliptically Contoured Models in Statistics. Netherlands: Kluwer Academic Publishers.
[10] Hoeffding, W. (1940), "Masstabinvariante Korrelationstheorie," Schriften Math. Inst. Univ. Berlin 5.
[11] Joe, H. (1997), Multivariate Models and Dependence Concepts. London: Chapman & Hall.
[12] Kaas, R., Dhaene, J. and Goovaerts, M.J. (2000), "Upper and Lower Bounds for Sums of Random Variables," Insurance: Mathematics & Economics 27.
[13] Kaas, R., Goovaerts, M., Dhaene, J. and Denuit, M. (2001), Modern Actuarial Risk Theory. Boston, Dordrecht, London: Kluwer Academic Publishers.

[14] Kelker, D. (1970), "Distribution Theory of Spherical Distributions and a Location-Scale Parameter Generalization," Sankhyā A 32.
[15] Kotz, S., Balakrishnan, N. and Johnson, N.L. (2000), Continuous Multivariate Distributions. New York: John Wiley & Sons, Inc.
[16] Landsman, Z. and Valdez, E.A. (2002), "Tail Conditional Expectations for Elliptical Distributions," submitted for publication, North American Actuarial Journal.
[17] Shaked, M. and Shanthikumar, J.G. (1994), Stochastic Orders and their Applications. New York: Academic Press.
[18] Witkovsky, V. (2001), "On the Exact Computation of the Density and of the Quantiles of Linear Combinations of t and F Random Variables," Journal of Statistical Planning and Inference 94: 1-13.


More information

Copulas and dependence measurement

Copulas and dependence measurement Copulas and dependence measurement Thorsten Schmidt. Chemnitz University of Technology, Mathematical Institute, Reichenhainer Str. 41, Chemnitz. thorsten.schmidt@mathematik.tu-chemnitz.de Keywords: copulas,

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

arxiv:physics/ v1 [physics.soc-ph] 20 Aug 2006

arxiv:physics/ v1 [physics.soc-ph] 20 Aug 2006 arxiv:physics/6823v1 [physics.soc-ph] 2 Aug 26 The Asymptotic Dependence of Elliptic Random Variables Krystyna Jaworska Institute of Mathematics and Cryptology, Military University of Technology ul. Kaliskiego

More information

Simulating Exchangeable Multivariate Archimedean Copulas and its Applications. Authors: Florence Wu Emiliano A. Valdez Michael Sherris

Simulating Exchangeable Multivariate Archimedean Copulas and its Applications. Authors: Florence Wu Emiliano A. Valdez Michael Sherris Simulating Exchangeable Multivariate Archimedean Copulas and its Applications Authors: Florence Wu Emiliano A. Valdez Michael Sherris Literatures Frees and Valdez (1999) Understanding Relationships Using

More information

On Kusuoka Representation of Law Invariant Risk Measures

On Kusuoka Representation of Law Invariant Risk Measures MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 1, February 213, pp. 142 152 ISSN 364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/1.1287/moor.112.563 213 INFORMS On Kusuoka Representation of

More information

Financial Econometrics and Volatility Models Copulas

Financial Econometrics and Volatility Models Copulas Financial Econometrics and Volatility Models Copulas Eric Zivot Updated: May 10, 2010 Reading MFTS, chapter 19 FMUND, chapters 6 and 7 Introduction Capturing co-movement between financial asset returns

More information

Dependence uncertainty for aggregate risk: examples and simple bounds

Dependence uncertainty for aggregate risk: examples and simple bounds Dependence uncertainty for aggregate risk: examples and simple bounds Paul Embrechts and Edgars Jakobsons Abstract Over the recent years, numerous results have been derived in order to assess the properties

More information

Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2

Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2 You can t see this text! Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2 Eric Zivot Spring 2015 Eric Zivot (Copyright 2015) Probability Review - Part 2 1 /

More information

Applying the proportional hazard premium calculation principle

Applying the proportional hazard premium calculation principle Applying the proportional hazard premium calculation principle Maria de Lourdes Centeno and João Andrade e Silva CEMAPRE, ISEG, Technical University of Lisbon, Rua do Quelhas, 2, 12 781 Lisbon, Portugal

More information

VaR vs. Expected Shortfall

VaR vs. Expected Shortfall VaR vs. Expected Shortfall Risk Measures under Solvency II Dietmar Pfeifer (2004) Risk measures and premium principles a comparison VaR vs. Expected Shortfall Dependence and its implications for risk measures

More information

Total positivity order and the normal distribution

Total positivity order and the normal distribution Journal of Multivariate Analysis 97 (2006) 1251 1261 www.elsevier.com/locate/jmva Total positivity order and the normal distribution Yosef Rinott a,,1, Marco Scarsini b,2 a Department of Statistics, Hebrew

More information

Optimization Problems with Probabilistic Constraints

Optimization Problems with Probabilistic Constraints Optimization Problems with Probabilistic Constraints R. Henrion Weierstrass Institute Berlin 10 th International Conference on Stochastic Programming University of Arizona, Tucson Recommended Reading A.

More information

Quadratic forms in skew normal variates

Quadratic forms in skew normal variates J. Math. Anal. Appl. 73 (00) 558 564 www.academicpress.com Quadratic forms in skew normal variates Arjun K. Gupta a,,1 and Wen-Jang Huang b a Department of Mathematics and Statistics, Bowling Green State

More information

Journal of Multivariate Analysis

Journal of Multivariate Analysis Journal of Multivariate Analysis 100 (2009) 2324 2330 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Hessian orders multinormal

More information

Tail dependence in bivariate skew-normal and skew-t distributions

Tail dependence in bivariate skew-normal and skew-t distributions Tail dependence in bivariate skew-normal and skew-t distributions Paola Bortot Department of Statistical Sciences - University of Bologna paola.bortot@unibo.it Abstract: Quantifying dependence between

More information

The Multivariate Normal Distribution 1

The Multivariate Normal Distribution 1 The Multivariate Normal Distribution 1 STA 302 Fall 2017 1 See last slide for copyright information. 1 / 40 Overview 1 Moment-generating Functions 2 Definition 3 Properties 4 χ 2 and t distributions 2

More information

Tail negative dependence and its applications for aggregate loss modeling

Tail negative dependence and its applications for aggregate loss modeling Tail negative dependence and its applications for aggregate loss modeling Lei Hua Division of Statistics Oct 20, 2014, ISU L. Hua (NIU) 1/35 1 Motivation 2 Tail order Elliptical copula Extreme value copula

More information

Multi Level Risk Aggregation

Multi Level Risk Aggregation Working Paper Series Working Paper No. 6 Multi Level Risk Aggregation Damir Filipović First version: February 2008 Current version: April 2009 Multi-Level Risk Aggregation Damir Filipović Vienna Institute

More information

On a simple construction of bivariate probability functions with fixed marginals 1

On a simple construction of bivariate probability functions with fixed marginals 1 On a simple construction of bivariate probability functions with fixed marginals 1 Djilali AIT AOUDIA a, Éric MARCHANDb,2 a Université du Québec à Montréal, Département de mathématiques, 201, Ave Président-Kennedy

More information

The Multivariate Normal Distribution. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 36

The Multivariate Normal Distribution. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 36 The Multivariate Normal Distribution Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 36 The Moment Generating Function (MGF) of a random vector X is given by M X (t) = E(e t X

More information

The compound Poisson random variable s approximation to the individual risk model

The compound Poisson random variable s approximation to the individual risk model Insurance: Mathematics and Economics 36 (25) 57 77 The compound Poisson random variable s approximation to the individual risk model Jingping Yang, Shulin Zhou, Zhenyong Zhang LMAM, School of Mathematical

More information

The Multivariate Normal Distribution 1

The Multivariate Normal Distribution 1 The Multivariate Normal Distribution 1 STA 302 Fall 2014 1 See last slide for copyright information. 1 / 37 Overview 1 Moment-generating Functions 2 Definition 3 Properties 4 χ 2 and t distributions 2

More information

Multivariate Analysis and Likelihood Inference

Multivariate Analysis and Likelihood Inference Multivariate Analysis and Likelihood Inference Outline 1 Joint Distribution of Random Variables 2 Principal Component Analysis (PCA) 3 Multivariate Normal Distribution 4 Likelihood Inference Joint density

More information

Multivariate extensions of expectiles risk measures

Multivariate extensions of expectiles risk measures Depend. Model. 2017; 5:20 44 Research Article Special Issue: Recent Developments in Quantitative Risk Management Open Access Véronique Maume-Deschamps*, Didier Rullière, and Khalil Said Multivariate extensions

More information

Non-Life Insurance: Mathematics and Statistics

Non-Life Insurance: Mathematics and Statistics ETH Zürich, D-MATH HS 08 Prof. Dr. Mario V. Wüthrich Coordinator Andrea Gabrielli Non-Life Insurance: Mathematics and Statistics Solution sheet 9 Solution 9. Utility Indifference Price (a) Suppose that

More information

A NOTE ON A DISTRIBUTION OF WEIGHTED SUMS OF I.I.D. RAYLEIGH RANDOM VARIABLES

A NOTE ON A DISTRIBUTION OF WEIGHTED SUMS OF I.I.D. RAYLEIGH RANDOM VARIABLES Sankhyā : The Indian Journal of Statistics 1998, Volume 6, Series A, Pt. 2, pp. 171-175 A NOTE ON A DISTRIBUTION OF WEIGHTED SUMS OF I.I.D. RAYLEIGH RANDOM VARIABLES By P. HITCZENKO North Carolina State

More information

Quantile prediction of a random eld extending the gaussian setting

Quantile prediction of a random eld extending the gaussian setting Quantile prediction of a random eld extending the gaussian setting 1 Joint work with : Véronique Maume-Deschamps 1 and Didier Rullière 2 1 Institut Camille Jordan Université Lyon 1 2 Laboratoire des Sciences

More information

EVALUATING THE BIVARIATE COMPOUND GENERALIZED POISSON DISTRIBUTION

EVALUATING THE BIVARIATE COMPOUND GENERALIZED POISSON DISTRIBUTION An. Şt. Univ. Ovidius Constanţa Vol. 92), 2001, 181 192 EVALUATING THE BIVARIATE COMPOUND GENERALIZED POISSON DISTRIBUTION Raluca Vernic Dedicated to Professor Mirela Ştefănescu on the occasion of her

More information

Modelling and Estimation of Stochastic Dependence

Modelling and Estimation of Stochastic Dependence Modelling and Estimation of Stochastic Dependence Uwe Schmock Based on joint work with Dr. Barbara Dengler Financial and Actuarial Mathematics and Christian Doppler Laboratory for Portfolio Risk Management

More information

FRÉCHET HOEFFDING LOWER LIMIT COPULAS IN HIGHER DIMENSIONS

FRÉCHET HOEFFDING LOWER LIMIT COPULAS IN HIGHER DIMENSIONS FRÉCHET HOEFFDING LOWER LIMIT COPULAS IN HIGHER DIMENSIONS PAUL C. KETTLER ABSTRACT. Investigators have incorporated copula theories into their studies of multivariate dependency phenomena for many years.

More information

Confidence Bounds for Discounted Loss Reserves

Confidence Bounds for Discounted Loss Reserves Confidence Bounds for Discounted Loss Reserves Tom Hoedemakers Jan Beirlant Marc J. Goovaerts Jan Dhaene Abstract In this paper we give some methods to set up confidence bounds for the discounted IBNR

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Joint Mixability. Bin Wang and Ruodu Wang. 23 July Abstract

Joint Mixability. Bin Wang and Ruodu Wang. 23 July Abstract Joint Mixability Bin Wang and Ruodu Wang 23 July 205 Abstract Many optimization problems in probabilistic combinatorics and mass transportation impose fixed marginal constraints. A natural and open question

More information

ON THE MOMENTS OF ITERATED TAIL

ON THE MOMENTS OF ITERATED TAIL ON THE MOMENTS OF ITERATED TAIL RADU PĂLTĂNEA and GHEORGHIŢĂ ZBĂGANU The classical distribution in ruins theory has the property that the sequence of the first moment of the iterated tails is convergent

More information

A New Family of Bivariate Copulas Generated by Univariate Distributions 1

A New Family of Bivariate Copulas Generated by Univariate Distributions 1 Journal of Data Science 1(212), 1-17 A New Family of Bivariate Copulas Generated by Univariate Distributions 1 Xiaohu Li and Rui Fang Xiamen University Abstract: A new family of copulas generated by a

More information

Estimation of parametric functions in Downton s bivariate exponential distribution

Estimation of parametric functions in Downton s bivariate exponential distribution Estimation of parametric functions in Downton s bivariate exponential distribution George Iliopoulos Department of Mathematics University of the Aegean 83200 Karlovasi, Samos, Greece e-mail: geh@aegean.gr

More information

Asymptotic distribution of the sample average value-at-risk

Asymptotic distribution of the sample average value-at-risk Asymptotic distribution of the sample average value-at-risk Stoyan V. Stoyanov Svetlozar T. Rachev September 3, 7 Abstract In this paper, we prove a result for the asymptotic distribution of the sample

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013 Multivariate Gaussian Distribution Auxiliary notes for Time Series Analysis SF2943 Spring 203 Timo Koski Department of Mathematics KTH Royal Institute of Technology, Stockholm 2 Chapter Gaussian Vectors.

More information

Regularly Varying Asymptotics for Tail Risk

Regularly Varying Asymptotics for Tail Risk Regularly Varying Asymptotics for Tail Risk Haijun Li Department of Mathematics Washington State University Humboldt Univ-Berlin Haijun Li Regularly Varying Asymptotics for Tail Risk Humboldt Univ-Berlin

More information

Type II Bivariate Generalized Power Series Poisson Distribution and its Applications in Risk Analysis

Type II Bivariate Generalized Power Series Poisson Distribution and its Applications in Risk Analysis Type II Bivariate Generalized Power Series Poisson Distribution and its Applications in Ris Analysis KK Jose a, and Shalitha Jacob a,b a Department of Statistics, St Thomas College, Pala, Arunapuram, Kerala-686574,

More information

On Bayesian Inference with Conjugate Priors for Scale Mixtures of Normal Distributions

On Bayesian Inference with Conjugate Priors for Scale Mixtures of Normal Distributions Journal of Applied Probability & Statistics Vol. 5, No. 1, xxx xxx c 2010 Dixie W Publishing Corporation, U. S. A. On Bayesian Inference with Conjugate Priors for Scale Mixtures of Normal Distributions

More information

Bounds on the value-at-risk for the sum of possibly dependent risks

Bounds on the value-at-risk for the sum of possibly dependent risks Insurance: Mathematics and Economics 37 (2005) 135 151 Bounds on the value-at-risk for the sum of possibly dependent risks Mhamed Mesfioui, Jean-François Quessy Département de Mathématiques et d informatique,

More information

Errata for the ASM Study Manual for Exam P, Fourth Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA

Errata for the ASM Study Manual for Exam P, Fourth Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata for the ASM Study Manual for Exam P, Fourth Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA (krzysio@krzysio.net) Effective July 5, 3, only the latest edition of this manual will have its

More information

Introduction to Normal Distribution

Introduction to Normal Distribution Introduction to Normal Distribution Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 17-Jan-2017 Nathaniel E. Helwig (U of Minnesota) Introduction

More information

Modelling Dropouts by Conditional Distribution, a Copula-Based Approach

Modelling Dropouts by Conditional Distribution, a Copula-Based Approach The 8th Tartu Conference on MULTIVARIATE STATISTICS, The 6th Conference on MULTIVARIATE DISTRIBUTIONS with Fixed Marginals Modelling Dropouts by Conditional Distribution, a Copula-Based Approach Ene Käärik

More information

MONOTONICITY OF RATIOS INVOLVING INCOMPLETE GAMMA FUNCTIONS WITH ACTUARIAL APPLICATIONS

MONOTONICITY OF RATIOS INVOLVING INCOMPLETE GAMMA FUNCTIONS WITH ACTUARIAL APPLICATIONS MONOTONICITY OF RATIOS INVOLVING INCOMPLETE GAMMA FUNCTIONS WITH ACTUARIAL APPLICATIONS EDWARD FURMAN Department of Mathematics and Statistics York University Toronto, Ontario M3J 1P3, Canada EMail: efurman@mathstat.yorku.ca

More information

x. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ).

x. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ). .8.6 µ =, σ = 1 µ = 1, σ = 1 / µ =, σ =.. 3 1 1 3 x Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ ). The Gaussian distribution Probably the most-important distribution in all of statistics

More information

ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata

ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA (krzysio@krzysio.net) Errata Effective July 5, 3, only the latest edition of this manual will have its errata

More information