Bounds for Sums of Non-Independent Log-Elliptical Random Variables

Emiliano A. Valdez, The University of New South Wales
Jan Dhaene, Katholieke Universiteit Leuven

May 5, 2003

ABSTRACT

In this paper, we construct upper and lower bounds for the distribution of a sum of non-independent log-elliptical random variables. These bounds are applications of the ideas developed in Kaas, Dhaene & Goovaerts (2000). The class of multivariate log-elliptical random variables is an extension of the class of multivariate log-normal random variables. Hence, the results presented here are natural extensions of the results presented in Dhaene, Denuit, Goovaerts, Kaas & Vyncke (2002a, 2002b), where bounds for sums of log-normal random variables were derived. The upper bound is based on the idea of replacing the sum of log-elliptical random variables by a sum of random variables with the same marginals, but with a dependency structure described by the comonotonic copula. Lower bounds and improved upper bounds are constructed by including additional information about the dependency structure by introducing some conditioning random variable.

1 Introduction

Sums of non-independent random variables occur in several situations in insurance and finance. As a first example, consider a portfolio of $n$ insurance risks $X_1, X_2, \ldots, X_n$. The aggregate claims $S$ is defined to be the sum of these individual risks:

$$S = \sum_{k=1}^{n} X_k, \qquad (1)$$

where generally the risks are non-negative random variables, i.e. $X_k \geq 0$. Knowledge of the distribution of this sum provides essential information for the insurance company and can be used as an input in the calculation of premiums and reserves. A particularly important problem is the determination of stop-loss premiums of the aggregate claims $S$. Suppose that the insurance company agrees to enter into a stop-loss reinsurance contract where total claims beyond a pre-specified amount $d$ will be covered by the reinsurer.

The stop-loss premium with retention $d$ is defined as

$$E\left[(S-d)_+\right] = \int_d^{\infty} \overline{F}_S(x)\,dx, \qquad (2)$$

where $\overline{F}_S(x) = 1 - F_S(x) = \Pr(S > x)$ and $(s-d)_+ = \max(s-d, 0)$. In classical risk theory, the individual risks $X_k$ are typically assumed to be mutually independent, mainly because computation of the aggregate claims becomes more tractable in this case. For special families of individual claim distributions, one may determine the exact form of the distribution for the aggregate claims. Several exact and approximate recursive methods have been proposed for computing the aggregate claims in case of discrete marginal distributions, see e.g. Dhaene & De Pril (1994) and Dhaene & Vandebroek (1995). Approximating the aggregate claims distribution by a normal distribution with the same first and second moment is often not satisfactory for the insurance practice, where the third central moment is often substantially different from 0. In this case, approximations based on a translated gamma distribution or the normal power approximation will perform better, see e.g. Kaas, Goovaerts, Dhaene & Denuit (2001). It is important to note that all standard actuarial methods mentioned above for determining the aggregate claims distribution are only applicable in case the individual risks are assumed to be mutually independent. However, there are situations where the independence assumption is questionable, e.g. in a situation where the individual risks $X_k$ are influenced by the same economic or physical environment.

In finance, a portfolio of $n$ investment positions may be faced with potential losses $L_1, L_2, \ldots, L_n$ over a given reference period, e.g. 1 month. The total potential loss $L$ for this portfolio is then given by

$$L = \sum_{k=1}^{n} L_k. \qquad (3)$$

As the returns on the different investment positions will in general be non-independent, it is clear that $L$ will be a sum of non-independent random variables. Quantities of interest are quantiles of the distribution of (3), which in finance are called Values-at-Risk. Regulatory bodies require financial institutions like banks and investment firms to meet risk-based capital requirements for their portfolio holdings. These requirements are often expressed in terms of a Value-at-Risk or some other risk measure which depends on the distribution of the sum in (3).

A related problem is determining an investment portfolio's total rate of return. Suppose $R_1, R_2, \ldots, R_n$ denote the random yearly rates of return of $n$ different assets in a portfolio and suppose $w_1, w_2, \ldots, w_n$ denote the weights in the portfolio. Then the total portfolio's yearly rate of return is given by

$$R = \sum_{k=1}^{n} w_k R_k, \qquad (4)$$

which is clearly a sum of non-independent random variables.
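As a quick numerical illustration of the stop-loss identity (2), the following minimal Python sketch compares the two sides for a lognormal aggregate claims distribution; the lognormal choice and the parameter values are assumptions made purely for illustration, not part of the paper's setup.

```python
# Minimal check of E[(S-d)_+] = ∫_d^∞ F̄_S(x) dx, with a lognormal S as a
# stand-in aggregate claims distribution (an illustrative assumption).
import numpy as np
from scipy import integrate
from scipy.stats import lognorm

mu, sigma, d = 0.5, 0.8, 2.0
S = lognorm(s=sigma, scale=np.exp(mu))   # lognormal with parameters (mu, sigma)

# Left-hand side: E[(S - d)_+] by quadrature against the density.
lhs, _ = integrate.quad(lambda x: (x - d) * S.pdf(x), d, np.inf)

# Right-hand side: integral of the survival function beyond the retention d.
rhs, _ = integrate.quad(lambda x: S.sf(x), d, np.inf)

print(f"E[(S-d)_+] via density: {lhs:.6f}")
print(f"integral of survival  : {rhs:.6f}")   # agrees up to quadrature error
```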

As pointed out by Dhaene et al. (2002a), a topic that is currently receiving increasing attention is the combination of actuarial and financial risks. To illustrate, consider random payments of $X_k$ to be made at times $k$ for the next $n$ periods. Further, suppose that the stochastic discount factor over the period $(0,k)$ is the random variable $Y_k$. Hence, an amount of 1 at time 0 is assumed to grow to a stochastic amount $Y_k^{-1}$ at time $k$. The present value random variable $S$ is defined as the scalar product of the payment vector $(X_1, X_2, \ldots, X_n)$ and the discount vector $(Y_1, Y_2, \ldots, Y_n)$:

$$S = \sum_{k=1}^{n} X_k Y_k. \qquad (5)$$

The present value quantity in (5) is of considerable importance for computing reserves and capital requirements for long term insurance business. The random variables $X_k Y_k$ will be non-independent not only because in any realistic model the discount factors will be rather strongly positively dependent, but also because the claim amounts $X_k$ can often not be assumed to be mutually independent.

As illustrated by the examples above, it is important to be able to determine the distribution function of sums of random variables in the case that the individual random variables involved are not assumed to be mutually independent. In general, this task is difficult to perform or even impossible because the dependency structure is unknown or too cumbersome to work with. In this paper, we develop approximations for sums involving non-independent log-elliptical random variables. The results presented here are natural extensions of the ideas developed in Dhaene et al. (2002a, 2002b), where bounds for sums of dependent log-normal random variables were specifically constructed.

In Sections 2 and 3, we introduce spherical, elliptical and log-elliptical distributions, as in Fang, Kotz & Ng (1990). We develop and repeat some results that will be used in later sections. Most results proved elsewhere are simply stated, but some basic useful results are also proved. In Section 4, we summarize the ideas developed in Dhaene et al. (2002a, 2002b) regarding the construction of upper and lower bounds for sums of non-independent random variables. In Section 5 we determine these bounds in the case of functions of sums involving log-elliptical distributions. Section 6 concludes the paper.

2 Elliptical and Spherical Distributions

2.1 Definition of Elliptical Distributions

It is well-known that a random vector $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ is said to have an $n$-dimensional normal distribution if

$$\mathbf{Y} \stackrel{d}{=} \boldsymbol{\mu} + A\mathbf{Z}, \qquad (6)$$

where $\mathbf{Z} = (Z_1, \ldots, Z_m)^T$ is a random vector consisting of $m$ mutually independent standard normal random variables, $A$ is an $n \times m$ matrix, $\boldsymbol{\mu}$ is an $n \times 1$ vector and $\stackrel{d}{=}$ stands for equality in distribution. Equivalently, one can say that $\mathbf{Y}$ is normal if its characteristic function is given by

$$E\left[\exp\left(i\mathbf{t}^T\mathbf{Y}\right)\right] = \exp\left(i\mathbf{t}^T\boldsymbol{\mu}\right)\exp\left(-\tfrac{1}{2}\,\mathbf{t}^T\Sigma\mathbf{t}\right), \qquad \mathbf{t}^T = (t_1, t_2, \ldots, t_n), \qquad (7)$$

for some fixed vector $\boldsymbol{\mu}$ $(n \times 1)$ and some fixed matrix $\Sigma$ $(n \times n)$. For random vectors belonging to the class of multivariate normal distributions with parameters $\boldsymbol{\mu}$ and $\Sigma$, we shall use the notation $\mathbf{Y} \sim N_n(\boldsymbol{\mu}, \Sigma)$. It is well-known that the vector $\boldsymbol{\mu}$ is the mean vector and that the matrix $\Sigma$ is the variance-covariance matrix. Note that the relation between $\Sigma$ and $A$ is given by $\Sigma = AA^T$.

The class of multivariate elliptical distributions is a natural extension of the class of multivariate normal distributions.

Definition 1. The random vector $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ is said to have an elliptical distribution with parameters the vector $\boldsymbol{\mu}$ $(n \times 1)$ and the matrix $\Sigma$ $(n \times n)$ if its characteristic function can be expressed as

$$E\left[\exp\left(i\mathbf{t}^T\mathbf{Y}\right)\right] = \exp\left(i\mathbf{t}^T\boldsymbol{\mu}\right)\,\phi\!\left(\mathbf{t}^T\Sigma\mathbf{t}\right), \qquad \mathbf{t}^T = (t_1, t_2, \ldots, t_n), \qquad (8)$$

for some scalar function $\phi$ and where $\Sigma$ is given by

$$\Sigma = AA^T \qquad (9)$$

for some matrix $A$ $(n \times m)$.

If $\mathbf{Y}$ has the elliptical distribution as defined above, we shall write $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ and say that $\mathbf{Y}$ is elliptical. The function $\phi$ is called the characteristic generator of $\mathbf{Y}$. Hence, the characteristic generator of the multivariate normal distribution is given by $\phi(u) = \exp(-u/2)$.

It is well-known that the characteristic function of a random vector always exists and that there is a one-to-one correspondence between distribution functions and characteristic functions. Note however that not every function $\phi$ can be used to construct a characteristic function of an elliptical distribution. Obviously, this function $\phi$ should fulfill the requirement $\phi(0) = 1$. A necessary and sufficient condition for the function $\phi$ to be a characteristic generator of an $n$-dimensional elliptical distribution is given in Theorem 2.2 of Fang et al. (1990).

In the sequel, we shall denote the elements of $\boldsymbol{\mu}$ and $\Sigma$ by

$$\boldsymbol{\mu} = (\mu_1, \ldots, \mu_n)^T \qquad (10)$$

and

$$\Sigma = (\sigma_{kl}) \quad \text{for } k, l = 1, 2, \ldots, n, \qquad (11)$$

respectively. Note that (9) guarantees that the matrix $\Sigma$ is symmetric, positive semi-definite and has non-negative elements on the main diagonal. Hence, for any $k$ and $l$, one has that $\sigma_{kl} = \sigma_{lk}$, whereas $\sigma_{kk} \geq 0$, which will often be denoted by $\sigma_k^2$.

It is interesting to note that in the one-dimensional case, the class of elliptical distributions consists mainly of the class of symmetric distributions, which includes such well-known distributions as the normal and the Student-t.

An $n$-dimensional random vector $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ is normal with parameters $\boldsymbol{\mu}$ and $\Sigma$ if and only if for any vector $\mathbf{b}$, the linear combination $\mathbf{b}^T\mathbf{Y}$ of the marginals $Y_k$ has a univariate normal distribution with mean $\mathbf{b}^T\boldsymbol{\mu}$ and variance $\mathbf{b}^T\Sigma\mathbf{b}$. It is straightforward to generalize this result to the case of multivariate elliptical distributions.

Theorem 1. The $n$-dimensional random vector $\mathbf{Y}$ is elliptical with parameters $\boldsymbol{\mu}$ $(n \times 1)$ and $\Sigma$ $(n \times n)$, notation $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$, if and only if for any vector $\mathbf{b}$ $(n \times 1)$, one has that

$$\mathbf{b}^T\mathbf{Y} \sim E_1\left(\mathbf{b}^T\boldsymbol{\mu},\; \mathbf{b}^T\Sigma\mathbf{b},\; \phi\right).$$
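To make Theorem 1 concrete, here is a minimal Monte Carlo sketch in the normal special case $\phi(u) = \exp(-u/2)$; the matrices and vectors below are arbitrary choices made only for illustration.

```python
# For Y = µ + A Z with Z standard normal (the normal member of the elliptical
# class), any linear combination b^T Y should be N(b^T µ, b^T Σ b), Σ = A A^T.
import numpy as np

rng = np.random.default_rng(seed=1)
mu = np.array([1.0, -0.5, 0.3])
A = np.array([[1.0, 0.0], [0.4, 0.9], [-0.2, 0.6]])   # n = 3, m = 2
Sigma = A @ A.T
b = np.array([0.5, 1.0, -1.5])

Z = rng.standard_normal((100_000, A.shape[1]))
Y = mu + Z @ A.T                    # rows are draws of Y ~ N_3(µ, Σ)
lin = Y @ b                         # draws of b^T Y

print("sample mean vs b^T µ  :", lin.mean(), b @ mu)
print("sample var  vs b^T Σ b:", lin.var(), b @ Sigma @ b)
```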

From Theorem 1, we find in particular that for $k = 1, 2, \ldots, n$,

$$Y_k \sim E_1\left(\mu_k, \sigma_k^2, \phi\right). \qquad (12)$$

Hence, the marginal components of a multivariate elliptical distribution have an elliptical distribution with the same characteristic generator. As $S = \sum_{k=1}^n Y_k = \mathbf{e}^T\mathbf{Y}$, where $\mathbf{e}$ $(n \times 1) = (1, 1, \ldots, 1)^T$, it follows that

$$S \sim E_1\left(\mathbf{e}^T\boldsymbol{\mu},\; \mathbf{e}^T\Sigma\mathbf{e},\; \phi\right), \qquad (13)$$

where $\mathbf{e}^T\boldsymbol{\mu} = \sum_{k=1}^n \mu_k$ and $\mathbf{e}^T\Sigma\mathbf{e} = \sum_{k=1}^n\sum_{l=1}^n \sigma_{kl}$.

In the following theorem, it is stated that any random vector with components that are linear combinations of the components of an elliptical distribution is again an elliptical distribution with the same characteristic generator.

Theorem 2. For any matrix $B$ $(m \times n)$, any vector $\mathbf{c}$ $(m \times 1)$ and any random vector $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$, we have that

$$B\mathbf{Y} + \mathbf{c} \sim E_m\left(B\boldsymbol{\mu} + \mathbf{c},\; B\Sigma B^T,\; \phi\right). \qquad (14)$$

It is easily seen that Theorem 2 is a generalization of Theorem 1.

2.2 Moments of Elliptical Distributions

Suppose that for a random vector $\mathbf{Y}$, the expectation $E\left(\prod_{k=1}^n Y_k^{r_k}\right)$ exists for some set of non-negative integers $r_1, r_2, \ldots, r_n$. Then this expectation can be found from the relation

$$E\left(\prod_{k=1}^n Y_k^{r_k}\right) = i^{-(r_1+r_2+\cdots+r_n)}\left.\left\{\frac{\partial^{\,r_1+r_2+\cdots+r_n}}{\partial t_1^{r_1}\,\partial t_2^{r_2}\cdots\partial t_n^{r_n}}\,E\left[\exp\left(i\mathbf{t}^T\mathbf{Y}\right)\right]\right\}\right|_{\mathbf{t}=\mathbf{0}}, \qquad (15)$$

where $\mathbf{0}$ $(n \times 1) = (0, 0, \ldots, 0)^T$. The moments of $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ do not necessarily exist. However, from (8) and (15) we deduce that if $E(Y_k)$ exists, then it will be given by

$$E(Y_k) = \mu_k, \qquad (16)$$

so that $E(\mathbf{Y}) = \boldsymbol{\mu}$, if the mean vector exists.

Moreover, if $\mathrm{Cov}(Y_k, Y_l)$ and/or $\mathrm{Var}(Y_k)$ exist, then they will be given by

$$\mathrm{Cov}(Y_k, Y_l) = -2\phi'(0)\,\sigma_{kl} \qquad (17)$$

and/or

$$\mathrm{Var}(Y_k) = -2\phi'(0)\,\sigma_k^2, \qquad (18)$$

where $\phi'$ denotes the first derivative of the characteristic generator. In short, if the covariance matrix of $\mathbf{Y}$ exists, then it is given by $\mathrm{Cov}(\mathbf{Y}) = -2\phi'(0)\,\Sigma$. A necessary condition for this covariance matrix to exist is $|\phi'(0)| < \infty$, see Cambanis et al. (1981).
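The factor $-2\phi'(0)$ can be checked by simulation. For the Student-t family introduced below, $-2\phi'(0) = m/(m-2)$ (see the remark after (29)), so the following sketch, with illustrative parameters and the standard normal-mixture construction of the multivariate t, compares the sample covariance with $\frac{m}{m-2}\,\Sigma$.

```python
# Check Cov(Y) = -2 φ'(0) Σ for a multivariate Student-t, where -2 φ'(0) = m/(m-2).
# Y is built as µ + Z / sqrt(W/m) with Z ~ N(0, Σ) and W ~ chi-square(m).
import numpy as np

rng = np.random.default_rng(seed=2)
m = 6                                            # degrees of freedom (> 2)
mu = np.array([0.0, 1.0])
A = np.array([[1.0, 0.0], [0.5, 0.8]])
Sigma = A @ A.T

n_draws = 500_000
Z = rng.standard_normal((n_draws, 2)) @ A.T      # N_2(0, Σ) draws
W = rng.chisquare(m, size=(n_draws, 1))
Y = mu + Z / np.sqrt(W / m)                      # multivariate t with m d.o.f.

print("sample covariance:\n", np.cov(Y.T))
print("m/(m-2) * Sigma:\n", m / (m - 2) * Sigma)  # matches up to MC error
```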

In the following theorem, we prove that any multivariate elliptical distribution with mutually independent components must necessarily be multivariate normal, see Kelker (1970).

Theorem 3. Let $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ with mutually independent components $Y_k$. Assume that the expectations and variances of the $Y_k$ exist and that $\mathrm{Var}(Y_k) > 0$. Then it follows that $\mathbf{Y}$ is multivariate normal.

Proof. Independence of the random variables and existence of their expectations imply that the covariances exist and are equal to 0. Hence, we find that $\Sigma$ is a diagonal matrix, and that $\phi\left(\mathbf{t}^T\mathbf{t}\right) = \prod_{k=1}^n \phi\left(t_k^2\right)$ holds for all $n$-dimensional vectors $\mathbf{t}$. This equation is known as Hamel's equation, and its solution has the form $\phi(x) = e^{\alpha x}$ with $\alpha = \phi'(0)$. To prove this, first note that

$$\phi\left(\mathbf{t}^T\mathbf{t}\right) = \phi\left(\sum_{k=1}^n t_k^2\right) = \prod_{k=1}^n \phi\left(t_k^2\right),$$

or equivalently, $\phi(u_1 + \cdots + u_n) = \phi(u_1)\cdots\phi(u_n)$. Consider the partial derivative with respect to $u_k$ for some $k = 1, 2, \ldots, n$. We have

$$\frac{\partial\phi}{\partial u_k} = \lim_{h\to 0}\frac{\phi\left(u_1 + \cdots + (u_k+h) + \cdots + u_n\right) - \phi(u_1 + \cdots + u_n)}{h} = \lim_{h\to 0}\frac{\phi(u_1 + \cdots + u_n)\,\phi(h) - \phi(u_1 + \cdots + u_n)}{h} = \lim_{h\to 0}\frac{\phi(u_1 + \cdots + u_n)\left[\phi(h) - \phi(0)\right]}{h} = \phi(u_1)\cdots\phi(u_n)\,\phi'(0).$$

But the left-hand side is

$$\frac{\partial\phi}{\partial u_k} = \phi(u_1)\cdots\phi'(u_k)\cdots\phi(u_n) = \phi(u_1)\cdots\phi(u_n)\,\frac{\phi'(u_k)}{\phi(u_k)}.$$

Thus, equating the two, we get

$$\frac{\phi'(u_k)}{\phi(u_k)} = \phi'(0),$$

which gives the desired solution $\phi(x) = e^{\alpha x}$ with $\alpha = \phi'(0)$; note that $\alpha < 0$, since the variances in (18) are positive. After absorbing the scale $-2\alpha$ into $\Sigma$, this is precisely the characteristic generator of a multivariate normal.

2.3 Multivariate Densities of Elliptical Distributions

An elliptically distributed random vector $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ does not necessarily possess a multivariate density function $f_{\mathbf{Y}}(\mathbf{y})$. A necessary condition for $\mathbf{Y}$ to possess a density is that $\mathrm{rank}(\Sigma) = n$. In the case of a multivariate normal random vector $\mathbf{Y} \sim N_n(\boldsymbol{\mu}, \Sigma)$, the density is well-known to be

$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{1}{\sqrt{(2\pi)^n\,|\Sigma|}}\exp\left[-\frac{1}{2}(\mathbf{y}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{y}-\boldsymbol{\mu})\right]. \qquad (19)$$

For elliptical distributions, one can prove that if $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ has a density, then it will be of the form

$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{c}{\sqrt{|\Sigma|}}\,g\!\left[(\mathbf{y}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{y}-\boldsymbol{\mu})\right] \qquad (20)$$

for some non-negative function $g(\cdot)$ satisfying the condition

$$0 < \int_0^{\infty} z^{n/2-1}\,g(z)\,dz < \infty \qquad (21)$$

and a normalizing constant $c$ given by

$$c = \frac{\Gamma(n/2)}{\pi^{n/2}}\left[\int_0^{\infty} z^{n/2-1}\,g(z)\,dz\right]^{-1}. \qquad (22)$$

Also, the opposite statement holds: any non-negative function $g(\cdot)$ satisfying the condition (21) can be used to define an $n$-dimensional density $\frac{c}{\sqrt{|\Sigma|}}\,g\left[(\mathbf{y}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{y}-\boldsymbol{\mu})\right]$ of an elliptical distribution, with $c$ given by (22). The function $g(\cdot)$ is called the density generator. One sometimes writes $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, g)$ for the $n$-dimensional elliptical distributions generated from the function $g(\cdot)$. A detailed proof of these results, using spherical transformations of rectangular coordinates, can be found in Landsman & Valdez (2002).

Note that for a given characteristic generator $\phi$, the density generator $g$ and/or the normalizing constant $c$ may depend on the dimension of the random vector $\mathbf{Y}$. Often one considers the class of elliptical distributions of dimensions $1, 2, 3, \ldots$, all derived from the same characteristic generator $\phi$. In case these distributions have a density, we will denote their respective density generators by $g_n$, where the subscript $n$ denotes the dimension of the random vector $\mathbf{Y}$.

From (19), one immediately finds that the density generators and the corresponding normalizing constants of the multivariate normal random vectors $\mathbf{Y} \sim N_n(\boldsymbol{\mu}, \Sigma)$ for $n = 1, 2, \ldots$ are given by

$$g_n(u) = \exp(-u/2) \qquad (23)$$

and

$$c_n = (2\pi)^{-n/2}, \qquad (24)$$

respectively.

In Table 1, we consider some well-known families of the class of multivariate elliptical distributions. Each family consists of all elliptical distributions constructed from one particular characteristic generator $\phi(u)$. For more details about these families of elliptical distributions, see Landsman & Valdez (2002) and the references in their paper.

Table 1. Some well-known families of elliptical distributions with their characteristic and/or density generators.

- Bessel: $g_n(u) = (u/b)^{a/2}\,K_a\!\left[(u/b)^{1/2}\right]$, $a > -n/2$, $b > 0$, where $K_a(\cdot)$ is the modified Bessel function of the 3rd kind
- Cauchy: $g_n(u) = (1+u)^{-(n+1)/2}$
- Exponential power: $g_n(u) = \exp\left(-r\,u^s\right)$, $r, s > 0$
- Laplace: $g_n(u) = \exp\left(-|u|\right)$
- Logistic: $g_n(u) = \exp(-u)\big/\left[1+\exp(-u)\right]^2$
- Normal: $g_n(u) = \exp(-u/2)$; $\phi(u) = \exp(-u/2)$
- Stable laws: $\phi(u) = \exp\left(-r\,u^{s/2}\right)$, $0 < s \leq 2$, $r > 0$
- Student-t: $g_n(u) = \left(1 + u/m\right)^{-(n+m)/2}$, $m > 0$ an integer

As an example, let us consider the elliptical Student-t distribution $E_n(\boldsymbol{\mu}, \Sigma, g_n)$, with $g_n(u) = \left(1 + \frac{u}{m}\right)^{-(n+m)/2}$. We will denote this multivariate distribution (with $m$ degrees of freedom) by $t_n^{(m)}(\boldsymbol{\mu}, \Sigma)$.

Its multivariate density is given by

$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{c_n}{\sqrt{|\Sigma|}}\left[1 + \frac{(\mathbf{y}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{y}-\boldsymbol{\mu})}{m}\right]^{-(n+m)/2}. \qquad (25)$$

In order to determine the normalizing constant, first note from (22) that

$$c_n = \frac{\Gamma(n/2)}{\pi^{n/2}}\left[\int_0^{\infty} z^{n/2-1}\,g(z)\,dz\right]^{-1} = \frac{\Gamma(n/2)}{\pi^{n/2}}\left[\int_0^{\infty} z^{n/2-1}\left(1+\frac{z}{m}\right)^{-(n+m)/2}dz\right]^{-1}.$$

Performing the substitution $u = 1 + (z/m)$, we find

$$\int_0^{\infty} z^{n/2-1}\left(1+\frac{z}{m}\right)^{-(n+m)/2}dz = m^{n/2}\int_1^{\infty} u^{-(n+m)/2}\,(u-1)^{n/2-1}\,du.$$

Making one more substitution $v = 1 - u^{-1}$, we get

$$\int_0^{\infty} z^{n/2-1}\left(1+\frac{z}{m}\right)^{-(n+m)/2}dz = m^{n/2}\,\frac{\Gamma(n/2)\,\Gamma(m/2)}{\Gamma\left((n+m)/2\right)},$$

from which we find

$$c_n = \frac{\Gamma\left((n+m)/2\right)}{(m\pi)^{n/2}\,\Gamma(m/2)}. \qquad (26)$$

From Theorem 1 and (12), we have that the marginals of the multivariate elliptical Student-t distribution are again Student-t distributions; hence, $Y_k \sim t_1^{(m)}\left(\mu_k, \sigma_k^2\right)$. The results above lead to

$$f_{Y_k}(y) = \frac{\Gamma\left(\frac{m+1}{2}\right)}{(m\pi)^{1/2}\,\Gamma\left(\frac{m}{2}\right)\sigma_k}\left[1 + \frac{1}{m}\left(\frac{y-\mu_k}{\sigma_k}\right)^2\right]^{-(m+1)/2}, \qquad k = 1, 2, \ldots, n, \qquad (27)$$

which is indeed the well-known density of a univariate Student-t random variable with $m$ degrees of freedom. Its mean is

$$E(Y_k) = \mu_k, \qquad (28)$$

whereas it can be verified that its variance is given by

$$\mathrm{Var}(Y_k) = \frac{m}{m-2}\,\sigma_k^2, \qquad (29)$$

provided the degrees of freedom $m > 2$. Note that $\frac{m}{m-2} = -2\phi'(0)$, where $\phi$ is the characteristic generator of the family of Student-t distributions with $m$ degrees of freedom.

In order to derive the characteristic function of $Y_k$, note that

$$E\left(e^{itY_k}\right) = \frac{\Gamma\left(\frac{m+1}{2}\right)}{(m\pi)^{1/2}\,\Gamma\left(\frac{m}{2}\right)}\,e^{i\mu_k t}\int_{-\infty}^{\infty} e^{i\sigma_k t z}\left(1+\frac{z^2}{m}\right)^{-(m+1)/2}dz = \frac{2\,\Gamma\left(\frac{m+1}{2}\right)}{(m\pi)^{1/2}\,\Gamma\left(\frac{m}{2}\right)\sigma_k}\,e^{i\mu_k t}\left(m\sigma_k^2\right)^{(m+1)/2}\int_0^{\infty}\frac{\cos(tz)}{\left(m\sigma_k^2+z^2\right)^{(m+1)/2}}\,dz. \qquad (30)$$

Hence, from Gradshteyn & Ryzhik (2000, p. 907), we find that the characteristic function of $Y_k \sim t_1^{(m)}\left(\mu_k, \sigma_k^2\right)$ is given by

$$E\left(e^{itY_k}\right) = e^{i\mu_k t}\,\frac{2^{1-m/2}}{\Gamma\left(\frac{m}{2}\right)}\left(|t|\sqrt{m}\,\sigma_k\right)^{m/2}K_{m/2}\left(|t|\sqrt{m}\,\sigma_k\right), \qquad (31)$$

where $K_\nu(\cdot)$ is the Bessel function of the second kind. For a similar derivation, see Witkovsky (2001). Observe that equation (31) can then be used to find the characteristic generator for the family of Student-t distributions. Furthermore, note that if the random vector $\mathbf{Y}$ has mutually independent components $Y_k$ that are Student-t distributed with $m$ degrees of freedom, it follows from Theorem 3 that $\mathbf{Y}$ cannot belong to the family of elliptically distributed random vectors.
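The normalizing constant (26) can be verified numerically against the generic formula (22); the sketch below does so for arbitrary test values of $n$ and $m$.

```python
# Verify the Student-t normalizing constant: c_n from the generic formula (22)
# should equal Γ((n+m)/2) / ((mπ)^{n/2} Γ(m/2)) as in (26).
import numpy as np
from scipy import integrate
from scipy.special import gamma

n, m = 3, 5                                      # arbitrary test values
g = lambda z: (1 + z / m) ** (-(n + m) / 2)      # Student-t density generator

integral, _ = integrate.quad(lambda z: z ** (n / 2 - 1) * g(z), 0, np.inf)
c_from_22 = gamma(n / 2) / np.pi ** (n / 2) / integral
c_from_26 = gamma((n + m) / 2) / ((m * np.pi) ** (n / 2) * gamma(m / 2))

print(c_from_22, c_from_26)                      # agree up to quadrature error
```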

2.4 Spherical Distributions

An $n$-dimensional random vector $\mathbf{Z} = (Z_1, \ldots, Z_n)^T$ is said to have a multivariate standard normal distribution if all the $Z_i$'s are mutually independent and standard normally distributed. We will write this as $\mathbf{Z} \sim N_n(\mathbf{0}_n, I_n)$, where $\mathbf{0}_n$ is the $n$-vector with $i$-th element $E(Z_i) = 0$ and $I_n$ is the $n \times n$ covariance matrix, which equals the identity matrix in this case. The characteristic function of $\mathbf{Z}$ is given by

$$E\left[\exp\left(i\mathbf{t}^T\mathbf{Z}\right)\right] = \exp\left(-\tfrac{1}{2}\,\mathbf{t}^T\mathbf{t}\right), \qquad \mathbf{t}^T = (t_1, t_2, \ldots, t_n). \qquad (32)$$

Hence from (32), we find that the characteristic generator of $N_n(\mathbf{0}_n, I_n)$ is given by $\phi(u) = \exp(-u/2)$. The class of multivariate spherical distributions is an extension of the class of standard multivariate normal distributions.

Definition 2. A random vector $\mathbf{Z} = (Z_1, \ldots, Z_n)^T$ is said to have an $n$-dimensional spherical distribution with characteristic generator $\phi$ if $\mathbf{Z} \sim E_n(\mathbf{0}_n, I_n, \phi)$.

We will often use the notation $S_n(\phi)$ for $E_n(\mathbf{0}_n, I_n, \phi)$ in the case of spherical distributions. From the definition above, we find the following corollary.

Corollary 1. $\mathbf{Z} \sim S_n(\phi)$ if and only if

$$E\left[\exp\left(i\mathbf{t}^T\mathbf{Z}\right)\right] = \phi\!\left(\mathbf{t}^T\mathbf{t}\right), \qquad \mathbf{t}^T = (t_1, t_2, \ldots, t_n). \qquad (33)$$

Consider an $n$-dimensional random vector $\mathbf{Y}$ such that

$$\mathbf{Y} \stackrel{d}{=} \boldsymbol{\mu} + A\mathbf{Z}, \qquad (34)$$

for some vector $\boldsymbol{\mu}$ $(n \times 1)$, some matrix $A$ $(n \times m)$ and some $m$-dimensional spherical random vector $\mathbf{Z} \sim S_m(\phi)$. Then it is straightforward to prove that $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$, where $\Sigma = AA^T$.

From equation (33), it immediately follows that the correlation matrices of members of the class of spherical distributions are identical. That is, if we let $\mathrm{Corr}(\mathbf{Z}) = (r_{ij})$ be the correlation matrix, then

$$r_{ij} = \frac{\mathrm{Cov}(Z_i, Z_j)}{\sqrt{\mathrm{Var}(Z_i)\,\mathrm{Var}(Z_j)}} = \begin{cases} 0, & \text{if } i \neq j, \\ 1, & \text{if } i = j. \end{cases}$$

The factor $-2\phi'(0)$ appearing in the covariance matrix, as shown in (17), enters into the covariance structure but cancels in the correlation matrix. Note that although the correlations between different components of a spherically distributed random vector are 0, this does not imply that these components are mutually independent. As a matter of fact, they can only be independent if they belong to the family of multivariate normal distributions.

Observe that from the characteristic functions of $\mathbf{Z}$ and $\mathbf{a}^T\mathbf{Z}$, one immediately finds the following result.

Lemma 1. $\mathbf{Z} \sim S_n(\phi)$ if and only if for any $n$-dimensional vector $\mathbf{a}$, one has

$$\frac{\mathbf{a}^T\mathbf{Z}}{\sqrt{\mathbf{a}^T\mathbf{a}}} \sim S_1(\phi). \qquad (35)$$

As a special case of this result, we find that any component $Z_i$ of $\mathbf{Z}$ has an $S_1(\phi)$ distribution.

From the results concerning elliptical distributions, we find that if a spherical random vector $\mathbf{Z} \sim S_n(\phi)$ possesses a density $f_{\mathbf{Z}}(\mathbf{z})$, then it will have the form

$$f_{\mathbf{Z}}(\mathbf{z}) = c\,g\!\left(\mathbf{z}^T\mathbf{z}\right), \qquad (36)$$

where the density generator $g$ satisfies the condition (21) and the normalizing constant $c$ satisfies (22). Furthermore, the opposite also holds: any non-negative function $g(\cdot)$ satisfying the condition (21) can be used to define an $n$-dimensional density $c\,g\left(\mathbf{z}^T\mathbf{z}\right)$ of a spherical distribution with the normalizing constant $c$ satisfying (22). One often writes $S_n(g)$ for the $n$-dimensional spherical distribution generated from the density generator $g(\cdot)$.

2.5 Conditional Distributions

It is well-known that if $(Y, \Lambda)$ has a bivariate normal distribution, then the conditional distribution of $Y$, given that $\Lambda = \lambda$, is normal with mean and variance given by

$$E(Y \mid \Lambda = \lambda) = E(Y) + r(Y,\Lambda)\,\frac{\sigma_Y}{\sigma_\Lambda}\left[\lambda - E(\Lambda)\right] \qquad (37)$$

and

$$\mathrm{Var}(Y \mid \Lambda = \lambda) = \left(1 - r^2(Y,\Lambda)\right)\sigma_Y^2, \qquad (38)$$

where $r(Y,\Lambda)$ is the correlation coefficient between $Y$ and $\Lambda$:

$$r(Y,\Lambda) = \frac{\mathrm{Cov}(Y,\Lambda)}{\sqrt{\mathrm{Var}(Y)\,\mathrm{Var}(\Lambda)}}. \qquad (39)$$

In the following theorem, it is stated that this conditioning result can be generalized to the class of bivariate elliptical distributions. This result will be useful for developing bounds for sums of random variables in Section 4.
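As a quick sanity check of (37)-(38) in the bivariate normal case, the following sketch conditions by brute force on a narrow window around $\lambda$; all parameter values are illustrative assumptions.

```python
# Monte Carlo check of E(Y | Λ=λ) = E(Y) + r σ_Y/σ_Λ (λ - E(Λ)) and
# Var(Y | Λ=λ) = (1 - r²) σ_Y², using a crude conditioning window around λ.
import numpy as np

rng = np.random.default_rng(seed=3)
mu_Y, mu_L, s_Y, s_L, r = 1.0, 2.0, 1.5, 0.7, 0.6
cov = [[s_Y**2, r * s_Y * s_L], [r * s_Y * s_L, s_L**2]]
draws = rng.multivariate_normal([mu_Y, mu_L], cov, size=2_000_000)

lam = 2.5                                        # conditioning value Λ = λ
cond = draws[np.abs(draws[:, 1] - lam) < 0.01, 0]

print("cond. mean:", cond.mean(), mu_Y + r * s_Y / s_L * (lam - mu_L))
print("cond. var :", cond.var(), (1 - r**2) * s_Y**2)
```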

Theorem 4. Let the random vector $\mathbf{Y} = (Y_1, \ldots, Y_n)^T \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ with density generator denoted by $g_n(\cdot)$. Define $Y$ and $\Lambda$ to be linear combinations of the variates of $\mathbf{Y}$, i.e. $Y = \boldsymbol{\alpha}^T\mathbf{Y}$ and $\Lambda = \boldsymbol{\beta}^T\mathbf{Y}$, for some $\boldsymbol{\alpha}^T = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ and $\boldsymbol{\beta}^T = (\beta_1, \beta_2, \ldots, \beta_n)$. Then we have that $(Y, \Lambda)$ has an elliptical distribution:

$$(Y, \Lambda)^T \sim E_2\left(\boldsymbol{\mu}_{(Y,\Lambda)},\; \Sigma_{(Y,\Lambda)},\; \phi\right), \qquad (40)$$

where the respective parameters are given by

$$\boldsymbol{\mu}_{(Y,\Lambda)} = \begin{pmatrix} \mu_Y \\ \mu_\Lambda \end{pmatrix} = \begin{pmatrix} \boldsymbol{\alpha}^T\boldsymbol{\mu} \\ \boldsymbol{\beta}^T\boldsymbol{\mu} \end{pmatrix} \qquad (41)$$

and

$$\Sigma_{(Y,\Lambda)} = \begin{pmatrix} \sigma_Y^2 & r(Y,\Lambda)\,\sigma_Y\sigma_\Lambda \\ r(Y,\Lambda)\,\sigma_Y\sigma_\Lambda & \sigma_\Lambda^2 \end{pmatrix} = \begin{pmatrix} \boldsymbol{\alpha}^T\Sigma\boldsymbol{\alpha} & \boldsymbol{\alpha}^T\Sigma\boldsymbol{\beta} \\ \boldsymbol{\alpha}^T\Sigma\boldsymbol{\beta} & \boldsymbol{\beta}^T\Sigma\boldsymbol{\beta} \end{pmatrix}. \qquad (42)$$

Furthermore, conditionally given $\Lambda = \lambda$, the random variable $Y$ has a univariate elliptical distribution:

$$Y \mid \Lambda = \lambda \;\sim\; E_1\left(\mu_Y + r(Y,\Lambda)\,\frac{\sigma_Y}{\sigma_\Lambda}\left(\lambda - \mu_\Lambda\right),\; \left(1 - r^2(Y,\Lambda)\right)\sigma_Y^2,\; \phi_a\right), \qquad (43)$$

for some characteristic generator $\phi_a(\cdot)$ depending on $a = \left[(\lambda - \mu_\Lambda)/\sigma_\Lambda\right]^2$.

Proof. The joint distribution of $(Y, \Lambda)$ immediately follows from Theorem 2. We know that if its joint density exists, it will be of the form

$$f_{Y,\Lambda}(y, \lambda) = \frac{c_2}{\sqrt{1-r^2}\,\sigma_Y\sigma_\Lambda}\,g_2\!\left[\frac{1}{1-r^2}\left(\left(\frac{y-\mu_Y}{\sigma_Y}\right)^2 - 2r\left(\frac{y-\mu_Y}{\sigma_Y}\right)\left(\frac{\lambda-\mu_\Lambda}{\sigma_\Lambda}\right) + \left(\frac{\lambda-\mu_\Lambda}{\sigma_\Lambda}\right)^2\right)\right], \qquad (44)$$

where we simply wrote $r = r(Y,\Lambda)$. Consider next the characteristic function of the conditional random variable $Y \mid \Lambda = \lambda$:

$$\varphi_{Y\mid\Lambda}(t) = E\left(e^{itY} \mid \Lambda = \lambda\right) = \int e^{ity}\,f_{Y\mid\Lambda}(y \mid \lambda)\,dy = \frac{1}{f_\Lambda(\lambda)}\int e^{ity}\,f_{Y,\Lambda}(y, \lambda)\,dy,$$

where $f_{Y,\Lambda}(y, \lambda)$ is given in (44). Using the transformation $w_1 = (y-\mu_Y)/\sigma_Y$ and $w_2 = (\lambda-\mu_\Lambda)/\sigma_\Lambda$, we have

$$\varphi_{Y\mid\Lambda}(t) = e^{it\mu_Y}\int e^{i(t\sigma_Y)w_1}\,\frac{(c_2/c_1)\,g_2\!\left[\frac{1}{1-r^2}\left(w_1^2 - 2rw_1w_2 + w_2^2\right)\right]}{\sqrt{1-r^2}\;g_1\!\left(w_2^2\right)}\,dw_1.$$

Since we can write

$$\frac{1}{1-r^2}\left(w_1^2 - 2rw_1w_2 + w_2^2\right) = \left(\frac{w_1 - rw_2}{\sqrt{1-r^2}}\right)^2 + w_2^2,$$

and using the transformation $w_1^* = (w_1 - rw_2)/\sqrt{1-r^2}$, we have

$$\varphi_{Y\mid\Lambda}(t) = e^{it(\mu_Y + \sigma_Y r w_2)}\int \exp\left[i\left(\sigma_Y\sqrt{1-r^2}\,t\right)w_1^*\right]\frac{c_2\,g_2\!\left(w_1^{*2} + w_2^2\right)}{c_1\,g_1\!\left(w_2^2\right)}\,dw_1^*. \qquad (45)$$

Note that

$$\int \frac{c_2\,g_2\!\left(w_1^{*2} + w_2^2\right)}{c_1\,g_1\!\left(w_2^2\right)}\,dw_1^* = 1,$$

which clearly indicates that $\frac{c_2\,g_2\left(w_1^{*2} + w_2^2\right)}{c_1\,g_1\left(w_2^2\right)}$ is a legitimate density. Furthermore, it can be shown that it is the density of a spherical random variable, denoted by $W$, with density generator

$$g_a(w) = \frac{g_2(w + a)}{\int_0^{\infty} u^{-1/2}\,g_2(u + a)\,du}, \qquad \text{where } a = w_2^2,$$

see Fang et al. (1990). Thus, the characteristic function of this spherical random variable $W$ can be expressed as $\varphi_W(t) = \phi_a\left(t^2\right)$ for some characteristic generator $\phi_a(\cdot)$ which depends on $a$. Therefore, the characteristic function in (45) simplifies to

$$\varphi_{Y\mid\Lambda}(t) = e^{it(\mu_Y + \sigma_Y r w_2)}\,\varphi_W\!\left(\sigma_Y\sqrt{1-r^2}\,t\right) = e^{it(\mu_Y + \sigma_Y r w_2)}\,\phi_a\!\left(\sigma_Y^2\left(1-r^2\right)t^2\right).$$

We recognize this as the characteristic function of an elliptical random variable with parameters as in (37) and (38).

From this theorem, it follows that the characteristic function of $Y \mid \Lambda = \lambda$ is given by

$$E\left(e^{iYt} \mid \Lambda = \lambda\right) = \exp\left(i\mu_{Y\mid\Lambda=\lambda}\,t\right)\phi_a\!\left(\sigma^2_{Y\mid\Lambda=\lambda}\,t^2\right),$$

where

$$\mu_{Y\mid\Lambda=\lambda} = \mu_Y + r(Y,\Lambda)\,\frac{\sigma_Y}{\sigma_\Lambda}\left(\lambda - \mu_\Lambda\right)$$

and

$$\sigma^2_{Y\mid\Lambda=\lambda} = \left(1 - r^2(Y,\Lambda)\right)\sigma_Y^2.$$

3 Log-Elliptical Distributions

Multivariate log-elliptical distributions are natural generalizations of multivariate log-normal distributions. For any $n$-dimensional vector $\mathbf{x} = (x_1, \ldots, x_n)^T$ with positive components $x_i$, we define $\log \mathbf{x} = (\log x_1, \log x_2, \ldots, \log x_n)^T$. Recall that an $n$-dimensional random vector $\mathbf{X}$ has a multivariate log-normal distribution if $\log \mathbf{X}$ has a multivariate normal distribution. In this case we have that $\log \mathbf{X} \sim N_n(\boldsymbol{\mu}, \Sigma)$.

Definition 3. The random vector $\mathbf{X}$ is said to have a multivariate log-elliptical distribution with parameters $\boldsymbol{\mu}$ and $\Sigma$ if $\log \mathbf{X}$ has an elliptical distribution:

$$\log \mathbf{X} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi). \qquad (46)$$

In the sequel, we shall denote $\log \mathbf{X} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ as $\mathbf{X} \sim LE_n(\boldsymbol{\mu}, \Sigma, \phi)$. When $\boldsymbol{\mu} = \mathbf{0}_n$ and $\Sigma = I_n$, we shall write $\mathbf{X} \sim LS_n(\phi)$. Clearly, if $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ and $\mathbf{X} = \exp(\mathbf{Y})$, then $\mathbf{X} \sim LE_n(\boldsymbol{\mu}, \Sigma, \phi)$.

If the density of $\mathbf{X} \sim LE_n(\boldsymbol{\mu}, \Sigma, \phi)$ exists, then the density of $\mathbf{Y} = \log \mathbf{X} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$ also exists. From (20), it follows that the density of $\mathbf{X}$ must be equal to

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{c}{\sqrt{|\Sigma|}}\left(\prod_{k=1}^n x_k^{-1}\right)g\!\left[(\log\mathbf{x} - \boldsymbol{\mu})^T\Sigma^{-1}(\log\mathbf{x} - \boldsymbol{\mu})\right], \qquad (47)$$

see Fang et al. (1990). The density of the multivariate log-normal distribution with parameters $\boldsymbol{\mu}$ and $\Sigma$ follows from (19) and (47). Furthermore, any marginal distribution of a log-elliptical distribution is again log-elliptical. This immediately follows from the properties of elliptical distributions.

Theorem 5. Let $\mathbf{X} \sim LE_n(\boldsymbol{\mu}, \Sigma, \phi)$. If the mean of $X_k$ exists, then it is given by

$$E(X_k) = e^{\mu_k}\,\phi\!\left(-\sigma_k^2\right).$$

Provided the covariances exist, they are given by

$$\mathrm{Cov}(X_k, X_l) = \exp(\mu_k + \mu_l)\left\{\phi\!\left[-\left(\sigma_k^2 + 2\sigma_{kl} + \sigma_l^2\right)\right] - \phi\!\left(-\sigma_k^2\right)\phi\!\left(-\sigma_l^2\right)\right\}.$$

Proof. Define the vector $\mathbf{a}_k = (0, 0, \ldots, 0, 1, 0, \ldots, 0)^T$ to consist of all zero entries, except for the $k$-th entry which is 1. Thus, for $k = 1, 2, \ldots, n$, we have

$$E(X_k) = E\left(e^{Y_k}\right) = E\left(e^{\mathbf{a}_k^T\mathbf{Y}}\right) = \varphi_{\mathbf{a}_k^T\mathbf{Y}}(-i) = \exp\left(\mathbf{a}_k^T\boldsymbol{\mu}\right)\phi\!\left(-\mathbf{a}_k^T\Sigma\mathbf{a}_k\right) = e^{\mu_k}\,\phi\!\left(-\sigma_k^2\right),$$

and the result for the mean immediately follows.

For the covariance, first define the vector $\mathbf{b}_{kl} = (0, 0, \ldots, 1, 0, \ldots, 0, 1, 0, \ldots, 0)^T$ to consist of all zero entries, except for the $k$-th and $l$-th entries which are each 1. Note that $\nu_{kl} = E(X_kX_l) - E(X_k)E(X_l)$, where

$$E(X_kX_l) = E\left(e^{\mathbf{b}_{kl}^T\mathbf{Y}}\right) = \varphi_{\mathbf{b}_{kl}^T\mathbf{Y}}(-i) = \exp\left(\mathbf{b}_{kl}^T\boldsymbol{\mu}\right)\phi\!\left(-\mathbf{b}_{kl}^T\Sigma\mathbf{b}_{kl}\right) = \exp(\mu_k + \mu_l)\,\phi\!\left[-\left(\sigma_k^2 + 2\sigma_{kl} + \sigma_l^2\right)\right],$$

and the result should now be obvious.

Some risk measures for log-elliptical distributions are derived in the next theorem.

Theorem 6. Let $X \sim LE_1\left(\mu, \sigma^2, \phi\right)$ and $Z \sim S_1(\phi)$ with density $f_Z(x)$. Then

$$F_X^{-1}(p) = \exp\left(\mu + \sigma F_Z^{-1}(p)\right), \qquad 0 < p < 1,$$

$$E\left[(X-d)_+\right] = e^{\mu}\,\phi\!\left(-\sigma^2\right)\overline{F}_{Z^*}\!\left(\frac{\log d - \mu}{\sigma}\right) - d\,\overline{F}_Z\!\left(\frac{\log d - \mu}{\sigma}\right), \qquad d > 0,$$

$$E\left[X \mid X > F_X^{-1}(p)\right] = \frac{e^{\mu}\,\phi\!\left(-\sigma^2\right)}{1-p}\,\overline{F}_{Z^*}\!\left(F_Z^{-1}(p)\right), \qquad 0 < p < 1,$$

where the density of $Z^*$ is given by

$$f_{Z^*}(x) = \frac{f_Z(x)\,e^{\sigma x}}{\phi\!\left(-\sigma^2\right)}.$$

Proof. (1) The quantiles of $X$ follow immediately from $\log X \stackrel{d}{=} \mu + \sigma Z$.

(2) Using

$$f_X(x) = \frac{1}{\sigma x}\,f_Z\!\left(\frac{\log x - \mu}{\sigma}\right),$$

and substituting $x$ by $t = \frac{\log x - \mu}{\sigma}$, we find that

$$\int_d^{\infty} x\,f_X(x)\,dx = e^{\mu}\,\phi\!\left(-\sigma^2\right)\overline{F}_{Z^*}\!\left(\frac{\log d - \mu}{\sigma}\right).$$

The stated result then follows from

$$E\left[(X-d)_+\right] = \int_d^{\infty} x\,f_X(x)\,dx - d\,\overline{F}_Z\!\left(\frac{\log d - \mu}{\sigma}\right).$$

(3) The expression for the tail conditional expectation follows from

$$E\left[X \mid X > F_X^{-1}(p)\right] = F_X^{-1}(p) + \frac{1}{1-p}\,E\left[\left(X - F_X^{-1}(p)\right)_+\right].$$
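Theorem 6 specializes nicely to the log-normal case, where $Z$ is standard normal and $\phi(u) = e^{-u/2}$: the Esscher density $f_{Z^*}(x) \propto f_Z(x)\,e^{\sigma x}$ is then $N(\sigma, 1)$, so $\overline{F}_{Z^*}(x) = 1 - \Phi(x - \sigma)$ and $e^{\mu}\phi(-\sigma^2) = e^{\mu + \sigma^2/2}$. The following sketch (with illustrative parameters) evaluates the quantile, the stop-loss premium and the tail conditional expectation, and cross-checks them by simulation.

```python
# Theorem 6 in the log-normal special case, cross-checked by Monte Carlo.
import numpy as np
from scipy.stats import norm

mu, sigma, p, d = 0.2, 0.5, 0.95, 1.5            # illustrative parameters

VaR = np.exp(mu + sigma * norm.ppf(p))                               # F_X^{-1}(p)
t = (np.log(d) - mu) / sigma
stop_loss = np.exp(mu + sigma**2 / 2) * norm.sf(t - sigma) - d * norm.sf(t)
tce = np.exp(mu + sigma**2 / 2) * norm.sf(norm.ppf(p) - sigma) / (1 - p)

rng = np.random.default_rng(seed=5)
X = np.exp(mu + sigma * rng.standard_normal(2_000_000))
print("VaR_p      :", VaR, np.quantile(X, p))
print("E[(X-d)_+] :", stop_loss, np.maximum(X - d, 0).mean())
print("TCE_p      :", tce, X[X > VaR].mean())
```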

Note that the density of $Z^*$ in the theorem above can be interpreted as the Esscher transform with parameter $\sigma$ of $Z$. Further, note that the expression for the quantiles holds for any one-dimensional elliptical distribution, whereas the expressions for the stop-loss premiums and tail conditional expectations were only derived for continuous elliptical distributions. We also have that if $g$ is the normalized density generator of $Z \sim S_1(\phi)$, then

$$\overline{F}_Z(x) = \int_x^{\infty} g\!\left(z^2\right)dz.$$

4 Convex Order Bounds for Sums of Random Variables

This section describes the bounds for (the distribution function of) sums of random variables as presented in Dhaene et al. (2002a, 2002b). We first define the notions of convex order and comonotonicity, which are essential ingredients for developing the bounds.

4.1 Convex Order

In the sequel, all random variables are assumed to have a finite mean. In this case, we find

$$E(X) = \int_0^{\infty}\overline{F}_X(x)\,dx - \int_{-\infty}^{0}F_X(x)\,dx. \qquad (48)$$

In actuarial science, it is common to replace a random variable with another which is less attractive but with a simpler structure, so that the distribution function can at least be easier to determine. In this paper, the notion of "less attractive" will be translated in terms of convex order, as defined below.

Definition 4. A random variable $X$ is said to precede another random variable $Y$ in convex order, written as $X \preceq_{cx} Y$, if

$$E\left[v(X)\right] \leq E\left[v(Y)\right]$$

holds for all convex functions $v$ for which the expectations exist.

Alternatively, it can be shown that $X \preceq_{cx} Y$ if and only if

$$E(X) = E(Y) \quad \text{and} \quad E\left[(X-d)_+\right] \leq E\left[(Y-d)_+\right] \text{ for all } d, \qquad (49)$$

see e.g. Shaked & Shanthikumar (1994). As stop-loss premiums can be seen as measures for the upper tail of the distribution function, $X \preceq_{cx} Y$ means that observing large outcomes is more likely for $Y$ than for $X$. Other characterizations of convex ordering can be found in Dhaene et al. (2002a).

4.2 Comonotonicity

Consider an $n$-dimensional random vector $\mathbf{X} = (X_1, \ldots, X_n)^T$ with multivariate distribution function given by $F_{\mathbf{X}}(\mathbf{x}) = \Pr(X_1 \leq x_1, \ldots, X_n \leq x_n)$, for any $\mathbf{x} = (x_1, \ldots, x_n)^T$.

It is well-known that this multivariate distribution function satisfies the so-called Fréchet bounds:

$$\max\left(\sum_{k=1}^n F_{X_k}(x_k) - (n-1),\; 0\right) \;\leq\; F_{\mathbf{X}}(\mathbf{x}) \;\leq\; \min\left(F_{X_1}(x_1), \ldots, F_{X_n}(x_n)\right),$$

see Hoeffding (1940) or Fréchet (1951).

Definition 5. A random vector $\mathbf{X}$ is said to be comonotonic if its joint distribution is given by the Fréchet upper bound, i.e.,

$$F_{\mathbf{X}}(\mathbf{x}) = \min\left(F_{X_1}(x_1), \ldots, F_{X_n}(x_n)\right).$$

Alternative characterizations of comonotonicity of a random vector are given in the following theorem, the proof of which can be found in Dhaene et al. (2002a).

Theorem 7. Suppose $\mathbf{X}$ is an $n$-dimensional random vector. Then the following statements are equivalent:

1. $\mathbf{X}$ is comonotonic.
2. $\mathbf{X} \stackrel{d}{=} \left(F_{X_1}^{-1}(U), \ldots, F_{X_n}^{-1}(U)\right)$ for $U \sim \mathrm{Uniform}(0,1)$, where $F_{X_k}^{-1}(\cdot)$ denotes the quantile function defined by $F_{X_k}^{-1}(q) = \inf\left\{x \in \mathbb{R} \mid F_{X_k}(x) \geq q\right\}$, for $0 \leq q \leq 1$.
3. There exist a random variable $Z$ and non-decreasing functions $h_1, \ldots, h_n$ such that $\mathbf{X} \stackrel{d}{=} \left(h_1(Z), \ldots, h_n(Z)\right)$.

In the sequel, we shall use the superscript $c$ to denote comonotonicity of a random vector. Hence, the vector $(X_1^c, \ldots, X_n^c)$ is a comonotonic random vector with the same marginals as the vector $(X_1, \ldots, X_n)$. The former vector is called the comonotonic counterpart of the latter. Consider the comonotonic random sum

$$S^c = X_1^c + \cdots + X_n^c. \qquad (50)$$

In Dhaene et al. (2002a), it is proven that each quantile of $S^c$ is equal to the sum of the corresponding quantiles of the marginals involved:

$$F_{S^c}^{-1}(q) = \sum_{k=1}^n F_{X_k}^{-1}(q), \qquad 0 \leq q \leq 1. \qquad (51)$$

Furthermore, they showed that in case all marginal distributions $F_{X_k}$ are strictly increasing, the stop-loss premiums of a comonotonic sum $S^c$ can easily be computed from the stop-loss premiums of the marginals:

$$E\left[(S^c - d)_+\right] = \sum_{k=1}^n E\left[(X_k - d_k)_+\right], \qquad (52)$$

where the $d_k$'s are determined by

$$d_k = F_{X_k}^{-1}\left(F_{S^c}(d)\right). \qquad (53)$$
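The quantile additivity (51) is easy to see in simulation: a comonotonic vector is driven by a single uniform $U$, as in statement 2 of Theorem 7. The lognormal marginals below are an illustrative assumption.

```python
# Comonotonic counterpart built from one uniform U; each quantile of S^c is
# the sum of the corresponding marginal quantiles, per (51).
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(seed=6)
marginals = [lognorm(s=0.4, scale=1.0),
             lognorm(s=0.7, scale=2.0),
             lognorm(s=0.3, scale=0.5)]

U = rng.uniform(size=1_000_000)
Sc = sum(m.ppf(U) for m in marginals)    # comonotonic sum X_1^c + X_2^c + X_3^c

q = 0.99
print("quantile of S^c           :", np.quantile(Sc, q))
print("sum of marginal quantiles :", sum(m.ppf(q) for m in marginals))
```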

This result can be extended to the case of marginal distributions that are not necessarily strictly increasing, see Dhaene et al. (2002a).

4.3 Convex Bounds

In this subsection, we shall provide lower and upper bounds for the sum

$$S = X_1 + \cdots + X_n. \qquad (54)$$

Proofs for these bounds can be found in Dhaene et al. (2002a). The comonotonic counterpart of $\mathbf{X} = (X_1, \ldots, X_n)^T$ is denoted by $\mathbf{X}^c = (X_1^c, \ldots, X_n^c)^T$, where $X_k^c$ is given by

$$X_k^c = F_{X_k}^{-1}(U), \qquad (55)$$

with $U \sim U(0,1)$. The comonotonic convex upper bound for the sum $S$ is defined by

$$S^c = X_1^c + \cdots + X_n^c. \qquad (56)$$

It can be shown that

$$S \preceq_{cx} S^c, \qquad (57)$$

which implies that $S^c$ is indeed a convex order upper bound for $S$.

Let us now suppose that we have additional information available about the dependency structure of $\mathbf{X}$, in the sense that there is some random variable $\Lambda$ with a known distribution function and such that we also know the distributions of the random variables $X_k \mid \Lambda = \lambda$ for all outcomes $\lambda$ of $\Lambda$ and for all $k = 1, \ldots, n$. Let $F_{X_k\mid\Lambda}^{-1}(U)$ be a notation for the random variable $f_k(U, \Lambda)$, where the function $f_k$ is defined by $f_k(u, \lambda) = F_{X_k\mid\Lambda=\lambda}^{-1}(u)$. Now, consider the random vector $\mathbf{X}^u = (X_1^u, \ldots, X_n^u)^T$, where $X_k^u$ is given by

$$X_k^u = F_{X_k\mid\Lambda}^{-1}(U). \qquad (58)$$

The improved upper bound (corresponding to $\Lambda$) of the sum $S$ is then defined as

$$S^u = X_1^u + \cdots + X_n^u. \qquad (59)$$

Notice that the random vectors $\mathbf{X}^c$ and $\mathbf{X}^u$ have the same marginals. It can be proven that

$$S \preceq_{cx} S^u \preceq_{cx} S^c, \qquad (60)$$

which means that the sum $S^u$ is indeed an improved upper bound (in the sense of convex order) for the original sum $S$.

Finally, consider the random vector $\mathbf{X}^l = \left(X_1^l, \ldots, X_n^l\right)^T$, where $X_k^l$ is given by

$$X_k^l = E(X_k \mid \Lambda). \qquad (61)$$

Using Jensen's inequality, it is straightforward to prove that the sum

$$S^l = X_1^l + \cdots + X_n^l \qquad (62)$$

is a convex order lower bound for $S$:

$$S^l \preceq_{cx} S. \qquad (63)$$
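The chain $S^l \preceq_{cx} S \preceq_{cx} S^c$ can be illustrated through stop-loss premiums, per (49). The sketch below uses two dependent lognormals $X_k = e^{Y_k}$ with $(Y_1, Y_2)$ bivariate normal and conditioning variable $\Lambda = Y_1 + Y_2$; all parameter choices are illustrative. For the lower bound it uses the normal-case identity $E\left(e^{Y_k} \mid \Lambda\right) = \exp\left(E(Y_k\mid\Lambda) + \tfrac{1}{2}\mathrm{Var}(Y_k\mid\Lambda)\right)$.

```python
# Monte Carlo comparison of stop-loss premiums for S^l, S and S^c.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=7)
mu = np.array([0.0, 0.2]); s = np.array([0.5, 0.8]); rho = 0.4
Sigma = np.array([[s[0]**2, rho*s[0]*s[1]], [rho*s[0]*s[1], s[1]**2]])
Y = rng.multivariate_normal(mu, Sigma, size=1_000_000)

S = np.exp(Y).sum(axis=1)                                 # the actual sum
U = rng.uniform(size=Y.shape[0])
Sc = sum(np.exp(mu[k] + s[k]*norm.ppf(U)) for k in range(2))   # comonotonic sum

L = Y.sum(axis=1)                                         # Λ = Y_1 + Y_2
var_L = Sigma.sum()
Sl = np.zeros_like(L)
for k in range(2):
    cov_kL = Sigma[k].sum()                               # Cov(Y_k, Λ)
    m_cond = mu[k] + cov_kL / var_L * (L - mu.sum())      # E(Y_k | Λ)
    v_cond = s[k]**2 - cov_kL**2 / var_L                  # Var(Y_k | Λ)
    Sl += np.exp(m_cond + v_cond / 2)                     # E(X_k | Λ)

d = np.quantile(S, 0.95)
for name, T in (("S^l", Sl), ("S  ", S), ("S^c", Sc)):
    print(name, "stop-loss premium at d:", np.maximum(T - d, 0).mean())
```

The three premiums come out in increasing order, consistent with (49), (57) and (63).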

5 Non-Independent Log-Elliptical Risks

In this section we develop convex lower and upper bounds for sums involving log-elliptical random variables. It generalizes the results for the log-normal case as obtained in Kaas, Dhaene & Goovaerts (2000).

Consider a series of deterministic non-negative payments $\alpha_1, \ldots, \alpha_n$, that are due at times $1, \ldots, n$ respectively. The present value random variable $S$ is defined by

$$S = \sum_{i=1}^n \alpha_i \exp\left[-(Y_1 + \cdots + Y_i)\right], \qquad (64)$$

where the random variable $Y_i$ represents the continuously compounded rate of return over the period $(i-1, i)$, $i = 1, 2, \ldots, n$. Furthermore, define $Y(i) = Y_1 + \cdots + Y_i$, the sum of the first $i$ elements of the random vector $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$, and $X_i = \alpha_i\exp\left[-Y(i)\right]$. Using these notations, we can write the present value random variable in (64) as

$$S = \sum_{i=1}^n \alpha_i\,e^{-Y(i)} = \sum_{i=1}^n X_i. \qquad (65)$$

We will assume that the return vector $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ belongs to the class of multivariate elliptical distributions, i.e. $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$, with parameters $\boldsymbol{\mu}$ and $\Sigma$ given by

$$\boldsymbol{\mu} = (\mu_1, \ldots, \mu_n)^T, \qquad \Sigma = (\sigma_{kl}) \quad \text{for } k, l = 1, 2, \ldots, n. \qquad (66)$$

Thus, the random vector $\mathbf{X} = (X_1, \ldots, X_n)^T$ is a (scaled) log-elliptical random vector. From (13), we find that $Y(i) \sim E_1\left(\mu(i), \sigma^2(i), \phi\right)$ with

$$\mu(i) = \sum_{k=1}^i \mu_k \qquad (67)$$

and

$$\sigma^2(i) = \sum_{k=1}^i\sum_{l=1}^i \sigma_{kl}. \qquad (68)$$
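A quick check of (67)-(68) in the normal special case, with illustrative parameters:

```python
# With Y ~ N_n(µ, Σ), the partial sum Y(i) = Y_1 + ... + Y_i should have mean
# µ(i) = Σ_{k≤i} µ_k and variance σ²(i) = Σ_{k≤i} Σ_{l≤i} σ_{kl}.
import numpy as np

rng = np.random.default_rng(seed=9)
mu = np.array([0.04, 0.05, 0.06, 0.05])
A = 0.1 * np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.3, 1.0, 0.0, 0.0],
                    [0.2, 0.4, 1.0, 0.0],
                    [0.1, 0.2, 0.3, 1.0]])
Sigma = A @ A.T

Y = rng.multivariate_normal(mu, Sigma, size=500_000)
i = 3                                            # check Y(3) = Y_1 + Y_2 + Y_3
Yi = Y[:, :i].sum(axis=1)
print("mean:", Yi.mean(), mu[:i].sum())          # µ(i)
print("var :", Yi.var(), Sigma[:i, :i].sum())    # σ²(i)
```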

In order to develop lower and improved upper bounds for $S$, we define the conditioning random variable $\Lambda$ as a linear combination of the random variables $Y_i$, $i = 1, 2, \ldots, n$:

$$\Lambda = \sum_{i=1}^n \beta_i Y_i.$$

Using the property of elliptical distributions described in (14), we know that $\Lambda \sim E_1\left(\mu_\Lambda, \sigma_\Lambda^2, \phi\right)$, where

$$\mu_\Lambda = \sum_{i=1}^n \beta_i\mu_i \qquad (69)$$

and

$$\sigma_\Lambda^2 = \sum_{i,j=1}^n \beta_i\beta_j\sigma_{ij}. \qquad (70)$$

Note that if the mean and variance of $\Lambda$ exist, then they are given by $E(\Lambda) = \mu_\Lambda$ and $\mathrm{Var}(\Lambda) = -2\phi'(0)\,\sigma_\Lambda^2$, respectively.

Proposition 1. Let $S$ be the present value sum as defined in (64), with $\alpha_i$, $i = 1, \ldots, n$, non-negative real numbers and $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$. Let the conditioning random variable $\Lambda$ be defined by

$$\Lambda = \sum_{i=1}^n \beta_i Y_i. \qquad (71)$$

Then the comonotonic upper bound $S^c$, the improved upper bound $S^u$ and the lower bound $S^l$ are given by

$$S^c = \sum_{i=1}^n \alpha_i\exp\left[-\mu(i) + \sigma(i)\,F_Z^{-1}(U)\right], \qquad (72)$$

$$S^u = \sum_{i=1}^n \alpha_i\exp\left[-\mu(i) - r_i\,\sigma(i)\,F_Z^{-1}(U) + \sqrt{1-r_i^2}\,\sigma(i)\,F_Z^{-1}(V)\right], \qquad (73)$$

$$S^l = \sum_{i=1}^n \alpha_i\exp\left[-\mu(i) - r_i\,\sigma(i)\,F_Z^{-1}(U)\right]\phi_a\!\left(-\left(1-r_i^2\right)\sigma^2(i)\right), \qquad (74)$$

where $U$ and $V$ are mutually independent uniform(0,1) random variables, $Z \sim S_1(\phi)$, and $r_i$ is defined by

$$r_i = \frac{\sum_{k=1}^i\sum_{l=1}^n \beta_l\,\sigma_{kl}}{\sigma(i)\,\sigma_\Lambda}. \qquad (75)$$

Before turning to the proof, we give a small numerical illustration of these bounds below.
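The sketch below evaluates realizations of (72)-(74) in the multivariate normal special case, where $Z$ is standard normal and the conditional generator $\phi_a$ reduces to the normal generator, so $\phi_a\!\left(-(1-r_i^2)\sigma^2(i)\right) = \exp\left(\tfrac{1}{2}(1-r_i^2)\sigma^2(i)\right)$. The payments, return parameters and the choice $\beta_i = 1$ are all illustrative assumptions.

```python
# Proposition 1 bounds in the normal special case, with Λ = Y_1 + ... + Y_n.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=8)
n = 5
alpha = np.ones(n)                         # unit payments (arbitrary choice)
mu_vec = np.full(n, 0.05)                  # 5% return per period
Sigma = 0.01 * (0.5 + 0.5 * np.eye(n))     # equicorrelated returns, illustrative

beta = np.ones(n)
mu_i = np.cumsum(mu_vec)                                            # µ(i)
sig_i = np.array([np.sqrt(Sigma[:i, :i].sum()) for i in range(1, n + 1)])  # σ(i)
sig_L = np.sqrt(beta @ Sigma @ beta)
r_i = np.array([(beta * Sigma[:i, :].sum(axis=0)).sum() / (sig_i[i - 1] * sig_L)
                for i in range(1, n + 1)])                          # (75)

size = 1_000_000
zU = norm.ppf(rng.uniform(size=size))      # F_Z^{-1}(U)
zV = norm.ppf(rng.uniform(size=size))      # F_Z^{-1}(V)

Sc = sum(alpha[i] * np.exp(-mu_i[i] + sig_i[i] * zU) for i in range(n))
Su = sum(alpha[i] * np.exp(-mu_i[i] - r_i[i] * sig_i[i] * zU
                           + np.sqrt(1 - r_i[i]**2) * sig_i[i] * zV)
         for i in range(n))
Sl = sum(alpha[i] * np.exp(-mu_i[i] - r_i[i] * sig_i[i] * zU
                           + 0.5 * (1 - r_i[i]**2) * sig_i[i]**2)
         for i in range(n))

for name, T in (("S^l", Sl), ("S^u", Su), ("S^c", Sc)):
    print(name, "mean:", T.mean(), " 99% quantile:", np.quantile(T, 0.99))
```

All three bounds share the same mean (convex order preserves expectations), while here the upper quantiles come out ordered $S^l \leq S^u \leq S^c$, as one would expect.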

Proof. (a) From $X_i \stackrel{d}{=} \alpha_i\,e^{-\mu(i) + \sigma(i)Z}$ (by the symmetry of $Z$), we find that

$$F_{S^c}^{-1}(p) = \sum_{i=1}^n F_{X_i}^{-1}(p) = \sum_{i=1}^n \alpha_i\,e^{-\mu(i) + \sigma(i)F_Z^{-1}(p)}, \qquad 0 < p < 1.$$

Hence, the comonotonic upper bound $S^c$ of $S$ is given by (72).

(b) In order to derive the lower bound $S^l$, we first determine the characteristic function of the bivariate random vector $(Y(i), \Lambda)$ for $i = 1, \ldots, n$. For any 2-vector $(t_1, t_2)^T$, we find

$$E\left[\exp\left(i\left(t_1 Y(i) + t_2\Lambda\right)\right)\right] = E\left[\exp\left(i\,\mathbf{s}^T\mathbf{Y}\right)\right]$$

with

$$s_k = \begin{cases} t_1 + t_2\beta_k, & k = 1, \ldots, i, \\ t_2\beta_k, & k = i+1, \ldots, n. \end{cases}$$

As $\mathbf{Y} \sim E_n(\boldsymbol{\mu}, \Sigma, \phi)$, this leads to

$$E\left[\exp\left(i\left(t_1 Y(i) + t_2\Lambda\right)\right)\right] = \exp\left(i\,\mathbf{s}^T\boldsymbol{\mu}\right)\phi\!\left(\mathbf{s}^T\Sigma\mathbf{s}\right) = \exp\left(i\,\mathbf{t}^T\boldsymbol{\mu}^*\right)\phi\!\left(\mathbf{t}^T\Sigma^*\mathbf{t}\right)$$

with

$$\boldsymbol{\mu}^* = \left(\mu(i), \mu_\Lambda\right)^T, \qquad \Sigma^* = \begin{pmatrix} \sigma^2(i) & \sigma_{1,2} \\ \sigma_{2,1} & \sigma_\Lambda^2 \end{pmatrix},$$

and $\sigma_{1,2} = \sigma_{2,1} = r_i\,\sigma(i)\,\sigma_\Lambda$. Comparing with Definition 1, we can conclude that the bivariate random vector $(Y(i), \Lambda)$ is elliptical: $(Y(i), \Lambda)^T \sim E_2\left(\boldsymbol{\mu}^*, \Sigma^*, \phi\right)$. Thus, from Theorem 4, we know that

$$\left(Y(i) \mid \Lambda = \lambda\right) \sim E_1\left(\mu_{Y(i)\mid\Lambda},\; \sigma^2_{Y(i)\mid\Lambda},\; \phi_a\right)$$

with

$$\mu_{Y(i)\mid\Lambda} = E\left(Y(i) \mid \Lambda = \lambda\right) = \mu(i) + r_i\,\frac{\sigma(i)}{\sigma_\Lambda}\left(\lambda - \mu_\Lambda\right) \qquad (76)$$

and

$$\sigma^2_{Y(i)\mid\Lambda} = \mathrm{Var}\left(Y(i) \mid \Lambda = \lambda\right) = \sigma^2(i)\left(1 - r_i^2\right). \qquad (77)$$

Note that

$$r_i = \mathrm{Corr}\left(Y(i), \Lambda\right) = \mathrm{Corr}\left(\sum_{k=1}^i Y_k,\; \sum_{l=1}^n \beta_l Y_l\right) = \frac{\sum_{k=1}^i\sum_{l=1}^n \beta_l\,\sigma_{kl}}{\sigma(i)\,\sigma_\Lambda}.$$

Consequently, using the moment generating function of an elliptical random variable, we have

$$E\left(\alpha_i\,e^{-Y(i)} \mid \Lambda = \lambda\right) = \alpha_i\,E\left(e^{-Y(i)} \mid \Lambda = \lambda\right) = \alpha_i\,M_{Y(i)\mid\Lambda}(-1) = \alpha_i\exp\left(-\mu_{Y(i)\mid\Lambda}\right)\phi_a\!\left(-\sigma^2_{Y(i)\mid\Lambda}\right).$$

Thus, if $U \sim U(0,1)$, then a lower bound for the present value random sum in (64) can be written as $S^l = X_1^l + \cdots + X_n^l$ with

$$X_i^l = E\left(\alpha_i\,e^{-Y(i)} \mid \Lambda\right) = \alpha_i\exp\left[-\mu(i) - r_i\,\frac{\sigma(i)}{\sigma_\Lambda}\left(\mu_\Lambda + \sigma_\Lambda F_Z^{-1}(U) - \mu_\Lambda\right)\right]\phi_a\!\left(-\sigma^2(i)\left(1-r_i^2\right)\right) = \alpha_i\exp\left[-\mu(i) - r_i\,\sigma(i)\,F_Z^{-1}(U)\right]\phi_a\!\left(-\sigma^2(i)\left(1-r_i^2\right)\right). \qquad (78)$$

(c) As suggested in Section 4, an improved upper bound can be developed using the additional information as provided by $\Lambda = \sum_{i=1}^n \beta_i Y_i$. Now, define $X_i^u = F_{X_i\mid\Lambda}^{-1}(V)$ as in (58), with $V \sim U(0,1)$ independent of $\Lambda$. Because $\left(Y(i) \mid \Lambda = \lambda\right) \sim E_1\left(\mu_{Y(i)\mid\Lambda}, \sigma^2_{Y(i)\mid\Lambda}, \phi_a\right)$ and $X_i = \alpha_i\,e^{-Y(i)}$ is a decreasing function of $Y(i)$, it follows (again using the symmetry of $Z$) that

$$F_{X_i\mid\Lambda=\lambda}^{-1}(v) = \alpha_i\exp\left[F_Z^{-1}(v)\,\sigma_{Y(i)\mid\Lambda} - \mu_{Y(i)\mid\Lambda}\right].$$

Thus, representing $\Lambda$ as $\mu_\Lambda + \sigma_\Lambda F_Z^{-1}(U)$ with $U \sim U(0,1)$ independent of $V$, we have

$$X_i^u = \alpha_i\exp\left[\sqrt{1-r_i^2}\,\sigma(i)\,F_Z^{-1}(V) - \mu(i) - r_i\,\sigma(i)\,F_Z^{-1}(U)\right] = \alpha_i\exp\left[-\mu(i) - r_i\,\sigma(i)\,F_Z^{-1}(U) + \sqrt{1-r_i^2}\,\sigma(i)\,F_Z^{-1}(V)\right].$$

Thus, an improved upper bound is expressed as in (73).

6 Concluding Remarks

In this paper we derived upper and lower bounds for the distribution function of a sum of non-independent log-elliptical random variables. The results presented in this paper extend the results of Dhaene, Denuit, Goovaerts, Kaas & Vyncke (2002a, 2002b), who constructed bounds for sums of log-normal random variables. The extension to the class of elliptical distributions makes sense because it makes the ideas developed in Kaas, Dhaene & Goovaerts (2000) also applicable in situations where the shape of the log-normal distribution does not fit, but heavier-tailed distributions such as the Student-t are required.

As multivariate log-elliptical distributions share many of the tractable properties of multivariate log-normal distributions, it is not surprising to find that the bounds developed for the log-elliptical case are very similar in form to those developed for the log-normal case. The upper bound is based on the sum of comonotonic random variables, while lower and improved upper bounds can be constructed from conditioning on some additional available random variable.

7 References

[1] Cambanis, S., Huang, S., and Simons, G. (1981), "On the Theory of Elliptically Contoured Distributions," Journal of Multivariate Analysis 11: 368-385.

[2] Dhaene, J. and De Pril, N. (1994), "On a Class of Approximate Computation Methods in the Individual Risk Model," Insurance: Mathematics & Economics 14: 181-196.

[3] Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R., and Vyncke, D. (2002a), "The Concept of Comonotonicity in Actuarial Science and Finance: Theory," Insurance: Mathematics & Economics 31(1): 3-33.

[4] Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R., and Vyncke, D. (2002b), "The Concept of Comonotonicity in Actuarial Science and Finance: Applications," Insurance: Mathematics & Economics 31(2): 133-161.

[5] Dhaene, J. and Vandebroek, M. (1995), "Recursions for the Individual Model," Insurance: Mathematics & Economics 16: 31-38.

[6] Fang, K.T., Kotz, S. and Ng, K.W. (1990), Symmetric Multivariate and Related Distributions. London: Chapman & Hall.

[7] Fréchet, M. (1951), "Sur les tableaux de corrélation dont les marges sont données," Ann. Univ. Lyon, Sect. A 9: 53-77.

[8] Gradshteyn, I.S. and Ryzhik, I.M. (2000), Table of Integrals, Series, and Products. New York: Academic Press.

[9] Gupta, A.K. and Varga, T. (1993), Elliptically Contoured Models in Statistics. Netherlands: Kluwer Academic Publishers.

[10] Hoeffding, W. (1940), "Masstabinvariante Korrelationstheorie," Schriften Math. Inst. Univ. Berlin 5: 181-233.

[11] Joe, H. (1997), Multivariate Models and Dependence Concepts. London: Chapman & Hall.

[12] Kaas, R., Dhaene, J. and Goovaerts, M.J. (2000), "Upper and Lower Bounds for Sums of Random Variables," Insurance: Mathematics & Economics 27: 151-168.

[13] Kaas, R., Goovaerts, M., Dhaene, J. and Denuit, M. (2001), Modern Actuarial Risk Theory. Boston, Dordrecht, London: Kluwer Academic Publishers.

[14] Kelker, D. (1970), "Distribution Theory of Spherical Distributions and a Location-Scale Parameter Generalization," Sankhyā, Series A, 32: 419-430.

[15] Kotz, S., Balakrishnan, N. and Johnson, N.L. (2000), Continuous Multivariate Distributions. New York: John Wiley & Sons, Inc.

[16] Landsman, Z. and Valdez, E.A. (2002), "Tail Conditional Expectations for Elliptical Distributions," submitted for publication, North American Actuarial Journal. Available at <http://www.actuary.unsw.edu.au/research/papers.html>.

[17] Shaked, M. and Shanthikumar, J.G. (1994), Stochastic Orders and their Applications. New York: Academic Press.

[18] Witkovsky, V. (2001), "On the Exact Computation of the Density and of the Quantiles of Linear Combinations of t and F Random Variables," Journal of Statistical Planning and Inference 94: 1-13.