Calculation of Bayes Premium for Conditional Elliptical Risks


Alfred Kume (School of Mathematics, Statistics and Actuarial Sciences, University of Kent, Canterbury, CT2 7NF, United Kingdom, email: a.kume@kent.ac.uk)

Enkelejd Hashorva (Actuarial Department, Faculty of Business and Economics, University of Lausanne, Extranef, UNIL-Dorigny, 1015 Lausanne, Switzerland, email: enkelejd.hashorva@unil.ch)

February 1, 2013

Abstract: In this paper we discuss the calculation of the Bayes premium for conditionally elliptical multivariate risks. In our framework the prior distribution is allowed to be very general, requiring only that its probability density function satisfies some smoothness conditions. Based on previous results of Landsman and Nešlehová (2008) and Hamada and Valdez (2008), we show in this paper that for conditionally multivariate elliptical risks the calculation of the Bayes premium is closely related to the Brown identity and the celebrated Stein's Lemma.

Key words and phrases: Bayes premium; Credibility premium; Elliptically symmetric distribution; Stein's Lemma; Brown identity; Gaussian multivariate model.

1 Introduction

In the setup of classical credibility theory (see e.g., Bühlmann and Gisler (2006), Mikosch (2006), or Kaas et al. (2008)) the calculation of the Bayes premium is a central task. Considering the L₂ loss function, that task reduces to the calculation of the conditional expectation E{Θ | X}, with Θ some random parameter with probability density function h and X a random loss measure, say for instance the net loss amount. When the conditional random variable X | Θ = θ has the Normal distribution with mean θ and variance σ² (write X ∼ N(θ, σ²)), then for Θ ∼ N(µ, τ²), τ² > 0, we have almost surely (see Bühlmann and Gisler (2006))

E{Θ | X} = X + σ²/(σ² + τ²) (µ − X).  (1.1)

The identity (1.1) is a direct consequence of the fact that the conditional distributions of Gaussian (or Normal) random vectors are again Gaussian. In fact, (1.1) can be stated for general Θ with some probability density function h, in the form known in the literature as the Brown identity (see e.g., DasGupta (2010)), i.e.,

E{Θ − X | X = x} = σ² E{h′(x + Y)} / E{h(x + Y)},  x ∈ ℝ,  Y ∼ N(0, σ²),  (1.2)

provided that the derivative h′ of h exists and the ratio on the right-hand side above is finite.
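As a quick numerical illustration of (1.1) and (1.2) in the Gaussian case, consider the following minimal Python sketch (ours, not part of the original paper; all parameter values and names are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, tau, sigma, x = 1.5, 0.8, 1.2, 0.3

h = lambda t: norm.pdf(t, loc=mu, scale=tau)      # prior density of Theta
dh = lambda t: (mu - t) / tau**2 * h(t)           # its derivative h'

# E{h(x+Y)} and E{h'(x+Y)} with Y ~ N(0, sigma^2), by quadrature
Mh = quad(lambda y: h(x + y) * norm.pdf(y, scale=sigma), -np.inf, np.inf)[0]
Lh = quad(lambda y: dh(x + y) * norm.pdf(y, scale=sigma), -np.inf, np.inf)[0]

brown = x + sigma**2 * Lh / Mh                               # right-hand side of (1.2)
credibility = x + sigma**2 / (sigma**2 + tau**2) * (mu - x)  # formula (1.1)
print(brown, credibility)                                    # agree up to quadrature error
```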

Since the Gaussian distribution is a canonical example of elliptically symmetric (for short, elliptical) distributions, in this paper our main interest is to show the Brown identity for X a d-dimensional elliptical random vector, and thus to derive an explicit expression of the Bayes premium for conditional elliptical models. The parameter Θ, which in our framework below is a d-dimensional random vector, is assumed to possess a probability density function h satisfying some regularity conditions. It turns out that the Brown identity is closely related to Stein's Lemma, which has recently been discussed for elliptical random vectors in Landsman (2006), Landsman and Nešlehová (2008), and Hamada and Valdez (2008). Several influential papers, such as Landsman and Valdez (2003), Goovaerts et al. (2005), Vanduffel et al. (2008), Valdez et al. (2009) and many others in the actuarial literature, have derived tractable properties of elliptical random vectors which allow for important applications in insurance and risk management. Our results show that this class of random vectors is also tractable in the Bayesian paradigm, which is a key pillar of actuarial science and practice.

Outline of the rest of the paper: In Section 2 we give some preliminary results and definitions. The main result is presented in Section 3. Proofs and some additional results are relegated to Section 4.

2 Preliminaries

We shall discuss first some distributional properties of elliptical random vectors, and then we shall present an extension of (1.1) to univariate elliptical risks. Let A ∈ ℝ^{d×d}, d ≥ 1, be a non-singular square matrix, and consider an elliptical random vector X = (X_1, ..., X_d)⊤ with stochastic representation

X =_d R A U + µ,  µ ∈ ℝ^d,  (2.1)

with R > 0 a random radius being independent of U, which is uniformly distributed on the unit sphere of ℝ^d (with respect to the L₂-norm). Throughout in the following Σ = AA⊤, with A a d×d non-singular matrix, h denotes the probability density function of the random parameter Θ (in the d-dimensional setup Θ is understood as a d-dimensional random vector), and R will be referred to as the radial component of X. Here =_d and ⊤ stand for equality of the distribution functions and the transpose sign, respectively. For the basic distributional properties of elliptical random vectors see e.g., Cambanis et al. (1981), Valdez and Chernih (2003) or Denuit et al. (2006). It is well-known that the elliptically distributed random vector X as in (2.1) possesses a probability density function f if and only if its radial component R possesses a probability density function. Moreover, f is given by

f(x) = 1/(c √det(Σ)) g((x − µ)⊤ Σ⁻¹ (x − µ)/2),  x ∈ ℝ^d,  (2.2)

where the positive measurable function g is the so-called density generator satisfying

c = (2π)^{d/2}/Γ(d/2) ∫_0^∞ u^{d/2−1} g(u) du ∈ (0, ∞).  (2.3)
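The representation (2.1) is easy to simulate. The following Python sketch (ours; the matrix A, seed and sample size are illustrative assumptions) draws from (2.1) with R² chi-squared distributed with d degrees of freedom, which recovers the Gaussian member N_d(µ, Σ) of the class:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 200_000
A = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.2, -0.3, 1.0]])
Sigma = A @ A.T
mu = np.array([1.0, -2.0, 0.5])

G = rng.standard_normal((n, d))
U = G / np.linalg.norm(G, axis=1, keepdims=True)   # uniform on the unit sphere
R = np.sqrt(rng.chisquare(d, size=n))              # radial component
X = R[:, None] * (U @ A.T) + mu                    # X = R A U + mu, as in (2.1)

print(X.mean(axis=0))   # close to mu
print(np.cov(X.T))      # close to Sigma (Gaussian case: E{R^2} = d)
```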

When the distribution function of R has a finite upper endpoint ω := sup{x ∈ ℝ : P{R ≤ x} < 1} ∈ (0, ∞), then we take g(x) = 0 for all x > ω. For any x > 0 set further

ḡ(x) = ∫_x^∞ g(s) ds,

which is well-defined if E{|X_1|} < ∞, or equivalently E{R} < ∞. We note in passing that ḡ is also a density generator if E{R²} < ∞, see Lemma 4.1 below. If X has stochastic representation (2.1) with generator g, then we write X ∼ E_d(µ, Σ, g). A canonical example of elliptically symmetric random vectors is an X being Gaussian with covariance matrix Σ and density generator

g(x) = exp(−x),  x > 0.  (2.4)

Consequently, we have

ḡ(x) = g(x),  x > 0.  (2.5)

We now present the extension of (1.2) to the elliptical framework in the 1-dimensional setup.

Theorem 2.1. Let Y ∼ E_1(0, 1, g), and let Θ be a random parameter with differentiable probability density function h. Assume that E{Y²} ∈ (0, ∞) and let Ỹ ∼ E_1(0, 1, ḡ). If X | Θ = θ ∼ E_1(θ, 1, g) such that for some x ∈ ℝ

M_h(x) := E{h(x + Y)} ∈ (0, ∞),  E{|h′(x + Ỹ)|} < ∞,  (2.6)

then the Bayes premium is

E{Θ | X = x} = x + c_1 L_h(x)/M_h(x),  c_1 := ∫_0^∞ r^{−1/2} ḡ(r) dr / ∫_0^∞ r^{−1/2} g(r) dr,  (2.7)

with L_h(x) := E{h′(x + Ỹ)}.

Remarks: a) For simplicity, in the above theorem we consider only σ = 1. The general case with σ² ∈ (0, ∞) can be easily derived by multiplying the right-hand side of (2.7) by σ², since E{(Θ − X) | X} = σ² c_1 L_h(X)/M_h(X); see the main result in the next section.

b) When the probability density function h of Θ satisfies h(x) ≤ a|x|^{α−1} exp(−b|x|^c), x ∈ ℝ, with a, α, b, c some positive constants, then condition (2.6) is satisfied. A particular instance is the Gaussian case, which we discuss below in more detail.
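Theorem 2.1 can be checked numerically. The sketch below (ours; the Student-type generator, the Gaussian prior and all parameter values are our own assumptions, not from the paper) evaluates the right-hand side of (2.7) by quadrature and compares it with the posterior mean computed directly from the conditional density:

```python
import numpy as np
from scipy.integrate import quad

nu = 5.0
g = lambda u: (1.0 + 2.0 * u / nu) ** (-(nu + 1) / 2)   # t_nu density generator
gbar = lambda u: quad(g, u, np.inf)[0]                  # gbar(u) = int_u^inf g(s) ds

# normalising constants of the densities y -> g(y^2/2)/c and y -> gbar(y^2/2)/cbar
c    = quad(lambda y: g(y * y / 2), -np.inf, np.inf)[0]
cbar = quad(lambda y: gbar(y * y / 2), -np.inf, np.inf)[0]

mu, tau, x = 0.7, 1.3, -0.4
h  = lambda t: np.exp(-(t - mu) ** 2 / (2 * tau ** 2)) / (tau * np.sqrt(2 * np.pi))
dh = lambda t: (mu - t) / tau ** 2 * h(t)

c1 = quad(lambda r: r ** -0.5 * gbar(r), 0, np.inf)[0] / \
     quad(lambda r: r ** -0.5 * g(r), 0, np.inf)[0]      # constant in (2.7)

Mh = quad(lambda y: h(x + y) * g(y * y / 2) / c, -np.inf, np.inf)[0]
Lh = quad(lambda y: dh(x + y) * gbar(y * y / 2) / cbar, -np.inf, np.inf)[0]
rhs = x + c1 * Lh / Mh                                   # Bayes premium via (2.7)

# direct posterior mean: E{Theta | X = x} with posterior ~ h(t) g((x-t)^2/2)
num = quad(lambda t: t * h(t) * g((x - t) ** 2 / 2), -np.inf, np.inf)[0]
den = quad(lambda t: h(t) * g((x - t) ** 2 / 2), -np.inf, np.inf)[0]
print(rhs, num / den)                                    # should agree closely
```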

Example 1. (Gaussian risks) Under the setup of Theorem 2.1, consider the special case of the generator g(x) = exp(−x), x > 0, and Y ∼ N(0, σ²), σ² ∈ (0, ∞). Clearly, Y is an elliptical random variable, i.e., Y ∼ E_1(0, σ², g). Let further Θ ∼ N(µ, τ²), µ ∈ ℝ, τ² ∈ (0, ∞), be also a Gaussian random variable and denote by h(x; µ, τ²) its probability density function. By (2.5) we have c_1 = 1. In view of Theorem 2.1, since

h′(s + t; µ, τ²) = (µ − s − t)/τ² h(s + t; µ, τ²),  s, t ∈ ℝ,

and by the fact that Ỹ and Y have the same distribution, we obtain

E{(Θ − X) | X = x} = σ² ∫_ℝ h′(x + s; µ, τ²) h(s; 0, σ²) ds / ∫_ℝ h(x + s; µ, τ²) h(s; 0, σ²) ds
= σ²/τ² ∫_ℝ (µ − x − s) h(s; σ²(µ − x)/(σ² + τ²), (1/σ² + 1/τ²)⁻¹) ds
= σ²/τ² (µ − x)(1 − σ²/(σ² + τ²))
= σ²/(σ² + τ²) (µ − x)

for any x ∈ ℝ. Consequently, the previous claim in (1.1) follows immediately.

3 Main Result

In this section we focus on multivariate d-dimensional conditional elliptical models. Let therefore X, Θ be two d-dimensional random vectors such that (X | Θ = θ) ∼ E_d(θ, Σ, g), with Θ a d-dimensional random parameter with probability density function h. Again, as in the univariate setup, the credibility premium is calculated (under the L₂ loss function) by the conditional expectation E{Θ | X}, which for the Gaussian framework is closely related to the Brown identity, see DasGupta (2010). In the sequel, we consider Θ such that its probability density function h is almost differentiable, adopting the following definition from Stein (1981).

Definition 3.1. A function q : ℝ^d → ℝ is almost differentiable if there exists ∇q : ℝ^d → ℝ^d such that

q(x + z) − q(x) = ∫_0^1 z⊤ ∇q(x + tz) dt,  x, z ∈ ℝ^d.

Note that ∇q is the vector function of component-wise partial derivatives, and it is almost surely unique. We derive below the expression for the Bayes premium, which boils down to the multivariate Brown identity for elliptical risks. The importance of our result is that it also shows the direct connection between the Brown identity and Stein's Lemma for elliptically symmetric risks.

Theorem 3.2. Let Y ∼ E_d(0, Σ, g), with 0 = (0, ..., 0)⊤ ∈ ℝ^d, be a d-dimensional elliptical random vector with radial component R > 0. Assume that (X | Θ = θ) ∼ E_d(θ, Σ, g), where the random parameter Θ has probability density function h which is almost differentiable. If E{R²} ∈ (0, ∞) and for some x ∈ ℝ^d we have

M_h(x) := E{h(x + Y)} ∈ (0, ∞),  E{‖∇h(x + Ỹ)‖} < ∞,  (3.1)

with Ỹ ∼ E_d(0, Σ, ḡ), then the Bayes premium is

E{Θ | X = x} = x + c_d Σ L_h(x)/M_h(x),  c_d := E{R²}/d,  (3.2)

with L_h(x) := E{∇h(x + Ỹ)}. Moreover, we have

E{Y h(x + Y)} = c_d Σ E{∇h(x + Ỹ)}.  (3.3)
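Identity (3.3) is easy to test by simulation. In the Gaussian case treated in the remarks below one has c_d = 1 and Ỹ =_d Y, so (3.3) reduces to (3.4); the following Monte Carlo sketch (ours; h, Σ and all parameter values are illustrative choices) checks it in dimension d = 2:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 1_000_000
A = np.array([[1.0, 0.0], [0.4, 0.9]])
Sigma = A @ A.T
x = np.array([0.2, -0.1])
m0, S0 = np.array([1.0, 0.5]), np.array([[1.5, 0.3], [0.3, 0.8]])
S0inv = np.linalg.inv(S0)

def h(v):                       # multivariate normal density N_2(m0, S0)
    z = v - m0
    q = np.einsum('ij,jk,ik->i', z, S0inv, z)
    return np.exp(-q / 2) / (2 * np.pi * np.sqrt(np.linalg.det(S0)))

def grad_h(v):                  # gradient of h: S0^{-1}(m0 - v) h(v)
    return (m0 - v) @ S0inv * h(v)[:, None]

Y = rng.standard_normal((n, d)) @ A.T        # Y ~ N_d(0, Sigma); here Ytilde =_d Y
lhs = (Y * h(x + Y)[:, None]).mean(axis=0)   # E{Y h(x+Y)}
rhs = Sigma @ grad_h(x + Y).mean(axis=0)     # Sigma E{grad h(x+Y)}
print(lhs, rhs)                              # agree up to Monte Carlo error
```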

Remarks: a) In order to retrieve the expression of the constant c_1 appearing in Theorem 2.1, we need to write c_d in terms of g and ḡ as

c_d = ∫_0^∞ r^{d/2−1} ḡ(r) dr / ∫_0^∞ r^{d/2−1} g(r) dr.

When g(t) = exp(−t), by (2.5) we immediately get that c_d = 1. Further, in view of (2.4), both Ỹ and Y have the Gaussian distribution with mean zero and covariance matrix Σ. Consequently, by (3.3)

E{Y h(x + Y)} = Σ E{∇h(x + Y)},  (3.4)

where Y is a mean-zero Gaussian random vector with covariance matrix Σ. For Σ the identity matrix, (3.4) appears in Lemma 2 of Stein (1981). In the case that Y is elliptically symmetric, (3.3) is established in Landsman (2006); see also Landsman and Nešlehová (2008) and Hamada and Valdez (2008).

b) Clearly, for non-Gaussian risks the Bayes premium is given by (3.2) and is in general not a credibility premium.

c) In several tractable cases, such as Gaussian scale mixture distributions, the distribution of Ỹ can be explicitly calculated, see Landsman and Nešlehová (2008).

d) The referee of the paper suggested the validity of our result when Σ is positive semi-definite. If we go through our definitions and the results of Theorem 3.2, we see that Σ appears without its inverse matrix, whose existence fails for singular positive semi-definite Σ. This observation suggests that the assumption that Σ is positive definite in our main result is redundant. In the bivariate setup, this condition has been removed in Hamada and Valdez (2008). With some extra technical effort (but with a different proof from the one we give here) it is possible to drop that assumption. In the context of the Bayes premium there is no particular motivation to allow for singular matrices Σ. Therefore, in order to avoid some extra technical details, we shall postpone the proof to a forthcoming technical manuscript.

Example 2. (Multivariate Gaussian model) We denote the multivariate Gaussian distribution with mean µ and covariance matrix Σ as N_d(µ, Σ). Its probability density function is denoted by f(x; µ, Σ), µ ∈ ℝ^d. Next, assume that X | Θ ∼ N_d(Θ, Σ), where Θ ∼ N_d(µ, Σ_0). If both Σ and Σ_0 are non-singular covariance matrices, using further the fact that for the multivariate Gaussian density functions we have (set B := (Σ⁻¹ + Σ_0⁻¹)⁻¹)

f(y; 0, Σ) f(x + y; µ, Σ_0) ∝ f(y; B Σ_0⁻¹(µ − x), B)

for any x, y ∈ ℝ^d (with ∝ meaning proportionality), then with the proportionality constants canceling out in the ratios of Eq. (3.2) we obtain

E{(Θ − X) | X = x} = Σ E{Σ_0⁻¹(µ − x − Y)},  where  Y ∼ N_d(B Σ_0⁻¹(µ − x), B).

Consequently,

E{(Θ − X) | X = x} = Σ Σ_0⁻¹ (µ − x − B Σ_0⁻¹(µ − x))
= Σ Σ_0⁻¹ (I_d − (I_d + Σ_0 Σ⁻¹)⁻¹)(µ − x)
= Σ Σ_0⁻¹ Σ_0 Σ⁻¹ (I_d + Σ_0 Σ⁻¹)⁻¹(µ − x)
= (Σ_0 Σ⁻¹ + I_d)⁻¹(µ − x),

where I_d denotes the d×d identity matrix. An interesting special case is when Σ = aΣ_0, with a some positive constant. Clearly, Σ is positive definite if and only if Σ_0 is a positive definite matrix. Applying the formula above, we obtain

E{(Θ − X) | X} = a/(1 + a) (µ − X).

In particular, when Σ = σ²I_d and Σ_0 = τ²I_d, with σ², τ² positive constants, we obtain the result presented in Example 1.
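The closed form obtained in Example 2 agrees with the classical conditional-mean formula for jointly Gaussian vectors, E{Θ | X = x} = µ + Σ_0(Σ_0 + Σ)⁻¹(x − µ). A quick numeric check (ours; the random positive definite matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
M = rng.standard_normal((d, d)); Sigma  = M @ M.T + d * np.eye(d)
N = rng.standard_normal((d, d)); Sigma0 = N @ N.T + d * np.eye(d)
mu, x = rng.standard_normal(d), rng.standard_normal(d)

I = np.eye(d)
paper = x + np.linalg.inv(Sigma0 @ np.linalg.inv(Sigma) + I) @ (mu - x)  # Example 2
joint = mu + Sigma0 @ np.linalg.inv(Sigma0 + Sigma) @ (x - mu)          # classical form
print(np.allclose(paper, joint))    # True
```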

4 Proofs

Next we provide a lemma which clarifies the properties of ḡ needed to proceed with the proofs of the main result.

Lemma 4.1. Let Y ∼ E_d(0, I_d, g) be a given elliptical random vector. If E{R²} ∈ (0, ∞), then ḡ is a density generator of a d-dimensional elliptical random vector, and moreover

∫_0^∞ r^{d/2−1} ḡ(r) dr = E{R²}/d ∫_0^∞ r^{d/2−1} g(r) dr ∈ (0, ∞).  (4.1)

Proof: The random vector Y has radial decomposition with positive radial component R, which has probability density function f_R given by (set K := ∫_0^∞ r^{d/2−1} g(r) dr)

f_R(r) = r^{d−1} g(r²/2) / (2^{d/2−1} K),  r > 0.  (4.2)

Since g is a density generator, partial integration implies

∫_0^∞ r^{d/2−1} ḡ(r) dr = ∫_0^∞ r^{d/2−1} (∫_r^∞ g(s) ds) dr
= ∫_0^∞ g(s) (∫_0^s r^{d/2−1} dr) ds
= (2/d) ∫_0^∞ s^{d/2} g(s) ds
= (K/d) ∫_0^∞ t² (t^{d−1} g(t²/2)/(2^{d/2−1} K)) dt
= (K/d) ∫_0^∞ t² f_R(t) dt = (K/d) E{R²} ∈ (0, ∞),

hence the claim follows.
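Both (4.1) and the radial density (4.2) can be verified by quadrature. A short sketch (ours; the Student-type generator and d = 3 are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad

d, nu = 3, 6.0
g = lambda u: (1.0 + 2.0 * u / nu) ** (-(nu + 1) / 2)   # a Student-type generator
gbar = lambda u: quad(g, u, np.inf)[0]                  # gbar(u) = int_u^inf g(s) ds

K = quad(lambda r: r ** (d / 2 - 1) * g(r), 0, np.inf)[0]
f_R = lambda r: r ** (d - 1) * g(r * r / 2) / (2 ** (d / 2 - 1) * K)   # (4.2)

print(quad(f_R, 0, np.inf)[0])                       # ~1: f_R integrates to one
ER2 = quad(lambda r: r * r * f_R(r), 0, np.inf)[0]   # E{R^2}
lhs = quad(lambda r: r ** (d / 2 - 1) * gbar(r), 0, np.inf)[0]
rhs = ER2 / d * K                                    # right-hand side of (4.1)
print(lhs, rhs)                                      # agree up to quadrature error
```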

Proof of Theorem 2.1: By the assumption (X | Θ = θ) ∼ E_1(θ, 1, g), it follows that the conditional random variable Θ | X = x has probability density function q(· | x) given by

q(θ | x) = h(θ) g((x − θ)²/2) / ∫_ℝ h(θ) g((x − θ)²/2) dθ = h(θ) g((x − θ)²/2) / (c M_h(x)),

with M_h(x) := E{h(x − Y)} = E{h(x + Y)}, for x such that P{X ≤ x} ∈ (0, 1). Consequently,

M_h(x) E{(Θ − X) | X = x} = −E{Y h(x − Y)}.

Since Y is symmetric about 0, i.e., Y =_d −Y, we have further

M_h(x) E{(Θ − X) | X = x} = E{Y h(x + Y)},

with Y ∼ E_1(0, 1, g). By the assumptions and Lemma 4.1, ḡ is a density generator with some normalising constant c̄ ∈ (0, ∞) defined as in (2.3). Since we assume that E{Y²} ∈ (0, ∞), the constant

c_1 = ∫_0^∞ s^{−1/2} ḡ(s) ds / ∫_0^∞ s^{−1/2} g(s) ds = c̄/c = E{Y²}

is finite, with c ∈ (0, ∞) the normalising constant of g. By Lemma 2 of Hamada and Valdez (2008) (see also Theorem 1 of Landsman (2006)), for any x ∈ ℝ we obtain

E{Y h(x + Y)} = c_1 E{h′(x + Ỹ)},

with Ỹ ∼ E_1(0, 1, ḡ), and thus the claim follows.

Proof of Theorem 3.2: Let Y ∼ E_d(0, Σ, g), with 0 = (0, ..., 0)⊤ ∈ ℝ^d, and let A be a square matrix such that AA⊤ = Σ. Note first that, as in the univariate case, Y is symmetric about the origin, i.e., Y =_d −Y. As in the proof of Theorem 2.1, for any x ∈ ℝ^d in the support of X we have

E{Θ | X = x} = x + E{Y h(x + Y)} / M_h(x),  (4.3)

provided that M_h(x) := E{h(x + Y)} is finite and non-zero. Applying Lemma 3 of Landsman and Nešlehová (2008) we obtain

∫_{ℝ^d} v h(v) g(v⊤v/2) dv = ∫_{ℝ^d} ∇h(v) (∫_{v⊤v/2}^∞ g(u) du) dv = ∫_{ℝ^d} ∇h(v) ḡ(v⊤v/2) dv.

Hence, with c as in (2.3) and Ỹ =_d AṼ ∼ E_d(0, Σ, ḡ), where Ṽ ∼ E_d(0, I_d, ḡ), we may further write

E{Y h(x + Y)} = (1/c) Σ ∫_{ℝ^d} ∇h(x + Av) ḡ(v⊤v/2) dv
= (1/(c √det(Σ))) Σ ∫_{ℝ^d} ∇h(x + y) ḡ(y⊤Σ⁻¹y/2) dy
= (c̄/c) Σ E{∇h(x + Ỹ)}.

In view of Lemma 4.1,

c̄/c = ∫_0^∞ u^{d/2−1} ḡ(u) du / ∫_0^∞ u^{d/2−1} g(u) du = E{R²}/d,

and thus the rest of the proof proceeds as in the univariate case, hence the claim follows.
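To close the section, the following Monte Carlo sketch tests (3.3) beyond the Gaussian case, for a two-point Gaussian scale mixture Y = √W Z (cf. remark c)). Here c_d = E{R²}/d = E{W}, and the tilted mixing weights p_i w_i / E{W} for Ỹ are our own computation; the unnormalised Gaussian-shape h is an illustrative choice (both sides of (3.3) are linear in h, so normalisation cancels):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 2, 2_000_000
A = np.array([[1.0, 0.0], [0.3, 0.8]]); Sigma = A @ A.T
w = np.array([0.5, 2.0]); p = np.array([0.6, 0.4])
EW = p @ w                                   # c_d = E{W}

x = np.array([0.4, -0.2])
m0 = np.array([0.5, 1.0])
S0inv = np.linalg.inv(np.array([[1.2, 0.2], [0.2, 0.9]]))

def h(v):                                    # unnormalised Gaussian-shape h
    z = v - m0
    return np.exp(-0.5 * np.einsum('ij,jk,ik->i', z, S0inv, z))

def grad_h(v):
    return (m0 - v) @ S0inv * h(v)[:, None]

Z = rng.standard_normal((n, d)) @ A.T
W = rng.choice(w, size=n, p=p)
Y = np.sqrt(W)[:, None] * Z                  # Y ~ E_d(0, Sigma, g), scale mixture
lhs = (Y * h(x + Y)[:, None]).mean(axis=0)   # E{Y h(x+Y)}

Wt = rng.choice(w, size=n, p=p * w / EW)     # tilted mixing weights for Y-tilde
Yt = np.sqrt(Wt)[:, None] * Z
rhs = EW * Sigma @ grad_h(x + Yt).mean(axis=0)   # c_d Sigma E{grad h(x+Ytilde)}
print(lhs, rhs)                              # agree up to Monte Carlo error
```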

Acknowledgement: We thank the referee of the paper for careful reading and several suggestions. In particular, the referee pointed out the validity of the results when the matrix Σ is positive semi-definite. E. Hashorva kindly acknowledges support from the Swiss National Science Foundation Grants 200021-134785 and 200021-141633/1.

References

[1] Bühlmann, H., and Gisler, A. (2006) A Course in Credibility Theory and its Applications. Springer, New York.
[2] Cambanis, S., Huang, S., and Simons, G. (1981) On the theory of elliptically contoured distributions. J. Multivariate Anal., 11, 368-385.
[3] DasGupta, A. (2010) False vs. missed discoveries, Gaussian decision theory, and the Donsker-Varadhan principle. IMS Collections, 1-21.
[4] Denuit, M., Dhaene, J., Goovaerts, M., and Kaas, R. (2006) Actuarial Theory for Dependent Risks: Measures, Orders and Models. Wiley.
[5] Goovaerts, M., Kaas, R., Laeven, R., Tang, Q., and Vernic, R. (2005) The tail probability of discounted sums of Pareto-like losses in insurance. Scandinavian Actuarial Journal, 6, 446-461.
[6] Hamada, M., and Valdez, E.A. (2008) CAPM and option pricing with elliptically contoured distributions. J. Risk & Insurance, 75, 387-409.
[7] Kaas, R., Goovaerts, M., Dhaene, J., and Denuit, M. (2008) Modern Actuarial Risk Theory: Using R. 2nd ed., Springer, New York.
[8] Landsman, Z. (2006) On the generalization of Stein's lemma for elliptical class of distributions. Statist. Probab. Lett., 76, 1012-1016.
[9] Landsman, Z., and Nešlehová, J. (2008) Stein's Lemma for elliptical random vectors. J. Multivariate Anal., 99, 912-927.
[10] Mikosch, T. (2006) Non-Life Insurance Mathematics: An Introduction with Stochastic Processes. 2nd ed., Springer, New York.
[11] Stein, C.M. (1981) Estimation of the mean of a multivariate normal distribution. Ann. Statist., 9, 1135-1151.
[12] Valdez, E.A., Dhaene, J., Maj, M., and Vanduffel, S. (2009) Bounds and approximations for sums of dependent log-elliptical random variables. Insurance: Mathematics and Economics, 44, 385-397.

[13] Valdez, E.A., and Chernih, A. (2003) Wang's capital allocation formula for elliptically contoured distributions. Insurance: Mathematics and Economics, 33(3), 517-532.
[14] Landsman, Z., and Valdez, E.A. (2003) Tail conditional expectations for elliptical distributions. North American Actuarial Journal, 7, 55-71.
[15] Vanduffel, S., Chen, X., Dhaene, J., Goovaerts, M., Henrard, L., and Kaas, R. (2008) Optimal approximations for risk measures of sums of lognormals based on conditional expectations. J. Comp. Appl. Math., 221, 202-218.