Non-Life Insurance Mathematics. Christel Geiss and Stefan Geiss Department of Mathematics and Statistics University of Jyväskylä


Christel Geiss and Stefan Geiss
Department of Mathematics and Statistics
University of Jyväskylä

November 5, 2018


Contents

1 Introduction
  1.1 The ruin of an insurance company
      Solvency II Directive
      Idea of the mathematical model
  1.2 Some facts about probability
2 Models for the claim number process N(t)
  2.1 The homogeneous Poisson process
  2.2 A generalization of the Poisson process: the renewal process
  2.3 The inhomogeneous Poisson process
3 The total claim amount process S(t)
  3.1 The renewal model and the Cramér-Lundberg model
  3.2 Properties of S(t)
4 Premium calculation principles
  4.1 Classical premium calculation principles
      Net principle
      Expected value principle
      The variance principle
      The standard deviation principle
      The constant income principle
  4.2 Modern premium calculation principles
      The exponential principle
      The Esscher principle
      The quantile principle
  4.3 Reinsurance treaties
      Random walk type reinsurance

      Extreme value type reinsurance
5 Probability of ruin: small claim sizes
  5.1 The risk process
  5.2 Bounds for the ruin probability
      Fundamental integral equation
      Proof of Cramér's ruin bound
6 Probability of ruin: large claim sizes
  6.1 Tails of claim size distributions
  6.2 Subexponential distributions
      Definition and basic properties
      Examples
      Another characterization of subexponential distributions
  6.3 An asymptotics for the ruin probability
      Conditions for F_{X,I} ∈ S
7 Summary
  7.1 Distributions
  7.2 QQ-Plot
8 The distribution of S(t)
  8.1 Compound Poisson random variables
      Applications in insurance
  8.2 The Panjer recursion
  8.3 Approximation of F_{S(t)}
      Monte Carlo approximations of F_{S(t)}

Chapter 1
Introduction

Insurance Mathematics might be divided into life insurance, health insurance, and non-life insurance. Life insurance includes for instance life insurance contracts and pensions, where long terms are covered. Non-life insurance comprises insurances against fire, water damage, earthquake, industrial catastrophes or car insurance, for example. Non-life insurances cover in general a year or other fixed time periods. Health insurance is special because it is organized differently in each country.

The course material is based on the textbook Non-Life Insurance Mathematics by Thomas Mikosch [7].

1.1 The ruin of an insurance company

Solvency II Directive

In the following we concentrate on non-life insurance. There is the Solvency II Directive of the European Union.

Published: 2009
Taken into effect: 1/1/2016

Contents: Defines requirements for insurance companies. One of these requirements is the amount of capital an insurer should hold, or in other words, the Minimum Capital Requirement (MCR): The probability that the assets of an insurer are greater than or equal to its liabilities (in other words, that ruin is avoided) has to be at least 99.5 %.

In the lecture we will treat exactly this problem. Here we slightly simplify the problem by only looking at one particular insurance contract instead of looking at the overall company (this is a common approach in research as well).

What are the key parameters for a non-life insurance contract for a certain class of claims?

1. How often does this event occur?
   - Significant weather catastrophes in Europe: 2 per year
   - Accidents in public transportation in Berlin in 2016
2. Amount of loss, or the typical claim size:
   - Hurricane Niklas, 2015, Europe: 75 Mio Euro
   - Storm Ela, 2014, Europe: 65 Mio Euro
   - Damage on a parking area: 5-15 Euro

In contrast to the Solvency II Directive, an insurance company needs to have reasonably low premiums and fees to attract customers. So there has to be a balance between the Minimum Capital Requirement and the premiums.

Idea of the mathematical model

We will consider the following situation:

(1) Insurance contracts (or policies) are sold. The resulting premiums (yearly or monthly payments of the customers for the contract) form the income of the insurance company.

(2) At times T_i, 0 ≤ T_1 ≤ T_2 ≤ ..., claims happen. The times T_i are called the claim arrival times.

(3) The i-th claim arriving at time T_i causes the claim size X_i.

Mathematical problem: Find a stochastic model for the T_i's and X_i's to compute or estimate how much an insurance company should demand for its contracts and how much initial capital of the insurance company is required to keep the probability of ruin below a certain level.

1.2 Some facts about probability

We shortly recall some definitions and facts from probability theory which we need in this course. For more information see [12], or [2] and [3], for example.

(1) A probability space is a triple (Ω, F, P), where Ω is a non-empty set, F is a σ-algebra consisting of subsets of Ω, and P is a probability measure on (Ω, F).

(2) A function f : Ω → R is called a random variable if and only if for all intervals (a, b), −∞ < a < b < ∞, we have that
    f^{-1}((a, b)) := {ω ∈ Ω : a < f(ω) < b} ∈ F.

(3) By B(R) we denote the Borel σ-algebra. It is the smallest σ-algebra on R which contains all open intervals. The σ-algebra B(R^n) is the Borel σ-algebra on R^n, which is the smallest σ-algebra containing all open rectangles (a_1, b_1) × ... × (a_n, b_n).

(4) The random variables f_1, ..., f_n are independent if and only if
    P(f_1 ∈ B_1, ..., f_n ∈ B_n) = P(f_1 ∈ B_1) ··· P(f_n ∈ B_n)
for all B_k ∈ B(R), k = 1, ..., n. If the f_i's have discrete values, i.e. f_i : Ω → {x_1, x_2, x_3, ...}, then the random variables f_1, ..., f_n are independent if and only if
    P(f_1 = k_1, ..., f_n = k_n) = P(f_1 = k_1) ··· P(f_n = k_n)
for all k_i ∈ {x_1, x_2, x_3, ...}.

(5) If f_1, ..., f_n are independent random variables such that f_i has the density function h_i(x), i.e. P(f_i ∈ (a, b)) = ∫_a^b h_i(x) dx, then
    P((f_1, ..., f_n) ∈ B) = ∫_{R^n} 1I_B(x_1, ..., x_n) h_1(x_1) ··· h_n(x_n) dx_1 ··· dx_n
for all B ∈ B(R^n).

(6) The function 1I_B(x) is the indicator function for the set B, which is defined as
    1I_B(x) = 1 if x ∈ B,  and  1I_B(x) = 0 if x ∉ B.

(7) A random variable f : Ω → {0, 1, 2, ...} is Poisson distributed with parameter λ > 0 if and only if
    P(f = k) = e^{-λ} λ^k / k!,  k = 0, 1, 2, ...
This is often written as f ~ Pois(λ).

(8) A random variable g : Ω → [0, ∞) is exponentially distributed with parameter λ > 0 if and only if for all a < b
    P(g ∈ (a, b)) = ∫_a^b λ 1I_{[0,∞)}(x) e^{-λx} dx.

[Figure: the density λ 1I_{[0,∞)}(x) e^{-λx} for λ = 3.]
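The Poisson probabilities in (7) and the exponential probability in (8) can be checked by simulation. The following Python sketch (the parameter λ = 3, the interval (a, b) and the sample size are illustrative choices, not from the notes) uses the inverse transform method and, for the Poisson case, counts exponential waiting times in the unit interval, anticipating the construction in Chapter 2:

```python
import math
import random

random.seed(0)
lam = 3.0
n = 100_000

# Exponential samples via the inverse transform: if U ~ Uniform(0,1),
# then -log(1 - U)/lam is Exp(lam) distributed.
exp_samples = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

a, b = 0.2, 0.8
empirical = sum(1 for g in exp_samples if a < g < b) / n
exact = math.exp(-lam * a) - math.exp(-lam * b)   # integral of the density over (a, b)

# A Pois(lam) variable arises by counting Exp(lam) waiting times in [0, 1]
# (this anticipates the construction of the Poisson process in Chapter 2).
def poisson_sample():
    t, k = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > 1.0:
            return k
        k += 1

pois = [poisson_sample() for _ in range(n)]
emp_pmf_2 = sum(1 for x in pois if x == 2) / n
exact_pmf_2 = math.exp(-lam) * lam ** 2 / math.factorial(2)
```

The empirical frequencies agree with the exact values up to an error of order n^{-1/2}.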

(9) How to characterize distributions? We briefly recall how to describe the distribution of a random variable. Let (Ω, F, P) be a probability space.

(a) The distribution of a random variable f : Ω → R can be uniquely described by its distribution function F : R → [0, 1],
    F(x) := P({ω ∈ Ω : f(ω) ≤ x}),  x ∈ R.

(b) Especially, it holds for g : R → R, such that g^{-1}(B) ∈ B(R) for all B ∈ B(R), that
    Eg(f) = ∫_R g(x) dF(x)
in the sense that, if either side of this expression exists, so does the other, and then they are equal, see [8].

(c) The distribution of f can also be determined by its characteristic function (see [12])
    φ_f(u) := Ee^{iuf},  u ∈ R,
or by its moment-generating function
    m_f(h) := Ee^{hf},  h ∈ (−h_0, h_0),
provided that Ee^{h_0 |f|} < ∞ for some h_0 > 0. We also recall that for independent random variables f and g it holds that
    φ_{f+g}(u) = φ_f(u) φ_g(u).


Chapter 2
Models for the claim number process N(t)

In the following we will introduce three processes which are used as claim number processes: the Poisson process, the renewal process and the inhomogeneous Poisson process.

2.1 The homogeneous Poisson process with parameter λ > 0

Definition (homogeneous Poisson process). A stochastic process N = (N(t))_{t ∈ [0,∞)} is a Poisson process if the following conditions are fulfilled:

(P1) N(0) = 0 a.s. (almost surely), i.e. P({ω ∈ Ω : N(0, ω) = 0}) = 1.

(P2) The process N has independent increments, i.e. for all n ≥ 1 and 0 ≤ t_0 < t_1 < ... < t_n < ∞ the random variables N(t_n) − N(t_{n−1}), N(t_{n−1}) − N(t_{n−2}), ..., N(t_1) − N(t_0) are independent.

(P3) For any s ≥ 0 and t > 0 the random variable N(t + s) − N(s) is Poisson distributed, i.e.
    P(N(t + s) − N(s) = m) = e^{−λt} (λt)^m / m!,  m = 0, 1, 2, ...

(P4) The paths of N, i.e. the functions (N(t, ω))_{t ∈ [0,∞)} for fixed ω, are almost surely right continuous and have left limits. One says N has càdlàg (continue à droite, limite à gauche) paths.

Remark. One can show that the definition implies the following: there is a set Ω_0 ∈ F with P(Ω_0) = 1 such that for all ω ∈ Ω_0 one has that

(1) N(0, ω) = 0,
(2) the paths t ↦ N(t, ω) are càdlàg, take values in {0, 1, 2, ...}, and are non-decreasing,
(3) all jumps have size 1.

In the following we prove that the Poisson process does exist.

Lemma. Assume W_1, W_2, ... are independent and exponentially distributed with parameter λ > 0. Then, for any t > 0 we have
    P(W_1 + ... + W_n ≤ t) = 1 − e^{−λt} Σ_{k=0}^{n−1} (λt)^k / k!.
Consequently, the sum of independent exponentially distributed random variables is a Gamma distributed random variable. The proof is subject to an exercise.

Definition. Let W_1, W_2, ... be independent and exponentially distributed with parameter λ > 0. Define
    T_n := W_1 + ... + W_n,   N̂(t, ω) := #{i ≥ 1 : T_i(ω) ≤ t},  t ≥ 0.

Lemma. For each n = 0, 1, 2, ... and for all t > 0 it holds
    P({ω ∈ Ω : N̂(t, ω) = n}) = e^{−λt} (λt)^n / n!,
i.e. N̂(t) is Poisson distributed with parameter λt.

Proof. From the definition of N̂ it can be concluded that
    {ω ∈ Ω : N̂(t, ω) = n} = {ω ∈ Ω : T_n(ω) ≤ t < T_{n+1}(ω)}
                            = {ω ∈ Ω : T_n(ω) ≤ t} \ {ω ∈ Ω : T_{n+1}(ω) ≤ t}.

Because of T_n ≤ T_{n+1} we have the inclusion {T_{n+1} ≤ t} ⊆ {T_n ≤ t}. This implies
    P(N̂(t) = n) = P(T_n ≤ t) − P(T_{n+1} ≤ t)
                 = [1 − e^{−λt} Σ_{k=0}^{n−1} (λt)^k / k!] − [1 − e^{−λt} Σ_{k=0}^{n} (λt)^k / k!]
                 = e^{−λt} (λt)^n / n!.   □

Theorem. (1) (N̂(t))_{t ∈ [0,∞)} is a Poisson process with parameter λ > 0.

(2) Any Poisson process N(t) with parameter λ > 0 can be written as
    N(t) = #{i ≥ 1 : T_i ≤ t},  t ≥ 0,
where T_n = W_1 + ... + W_n, n ≥ 1, and W_1, W_2, ... are independent and exponentially distributed with parameter λ > 0.

Proof. (1) We check the properties of the definition of the Poisson process.

(P1) From (8) of Section 1.2 we get that
    P(W_1 > 0) = P(W_1 ∈ (0, ∞)) = ∫_0^∞ λ 1I_{[0,∞)}(y) e^{−λy} dy = 1.
This implies that N̂(0, ω) = 0 if only 0 < T_1(ω) = W_1(ω), but W_1 > 0 holds almost surely. Hence N̂(0, ω) = 0 a.s.

(P2) We only show that N̂(s) and N̂(t) − N̂(s) are independent for 0 ≤ s < t, i.e.
    P(N̂(s) = l, N̂(t) − N̂(s) = m) = P(N̂(s) = l) P(N̂(t) − N̂(s) = m)   (1)
for l, m ≥ 0. The independence of arbitrarily many increments can be shown similarly. It holds for l ≥ 0 and m ≥ 1 that
    P(N̂(s) = l, N̂(t) − N̂(s) = m) = P(N̂(s) = l, N̂(t) = m + l)
                                    = P(T_l ≤ s < T_{l+1}, T_{l+m} ≤ t < T_{l+m+1}).
By defining functions f_1, f_2, f_3 and f_4 as
    f_1 := T_l,

    f_2 := W_{l+1},
    f_3 := W_{l+2} + ... + W_{l+m},
    f_4 := W_{l+m+1},
and h_1, ..., h_4 as the corresponding densities (here m ≥ 2; the case m = 1 is analogous with f_3 ≡ 0), it follows that
    P(T_l ≤ s < T_{l+1}, T_{l+m} ≤ t < T_{l+m+1})
      = P(f_1 ≤ s < f_1 + f_2, f_1 + f_2 + f_3 ≤ t < f_1 + f_2 + f_3 + f_4)
      = P(0 ≤ f_1 ≤ s, s − f_1 < f_2 < ∞, 0 ≤ f_3 ≤ t − f_1 − f_2, t − (f_1 + f_2 + f_3) < f_4 < ∞)
      = ∫_0^s [ ∫_{s−x_1}^∞ [ ∫_0^{t−x_1−x_2} [ ∫_{t−x_1−x_2−x_3}^∞ h_4(x_4) dx_4 ] h_3(x_3) dx_3 ] h_2(x_2) dx_2 ] h_1(x_1) dx_1
      =: I_1,
where the inner integrals are denoted by I_4(x_1, x_2, x_3), I_3(x_1, x_2) and I_2(x_1), respectively. By direct computation and rewriting the density function of f_4 = W_{l+m+1},
    I_4(x_1, x_2, x_3) = ∫_{t−x_1−x_2−x_3}^∞ λ e^{−λx_4} 1I_{[0,∞)}(x_4) dx_4 = e^{−λ(t−x_1−x_2−x_3)}.
Here we used t − x_1 − x_2 − x_3 ≥ 0. This is true because the integration w.r.t. x_3 implies x_3 ≤ t − x_1 − x_2. The density of f_3 = W_{l+2} + ... + W_{l+m} is
    h_3(x_3) = λ^{m−1} x_3^{m−2} / (m−2)! · 1I_{[0,∞)}(x_3) e^{−λx_3}.
Therefore,
    I_3(x_1, x_2) = ∫_0^{t−x_1−x_2} λ^{m−1} x_3^{m−2} / (m−2)! · e^{−λx_3} e^{−λ(t−x_1−x_2−x_3)} dx_3
                  = 1I_{[0, t−x_1)}(x_2) e^{−λ(t−x_1−x_2)} λ^{m−1} (t − x_1 − x_2)^{m−1} / (m−1)!.

The density of f_2 = W_{l+1} is
    h_2(x_2) = 1I_{[0,∞)}(x_2) λ e^{−λx_2}.
This implies
    I_2(x_1) = ∫_{s−x_1}^∞ 1I_{[0,t−x_1)}(x_2) e^{−λ(t−x_1−x_2)} λ^{m−1} (t − x_1 − x_2)^{m−1} / (m−1)! · λ e^{−λx_2} dx_2
             = λ^m e^{−λ(t−x_1)} (t − s)^m / m!.
Finally, since f_1 = T_l has the Gamma density h_1(x_1) = λ^l x_1^{l−1} / (l−1)! · 1I_{[0,∞)}(x_1) e^{−λx_1}, we conclude
    I_1 = ∫_0^s λ^m e^{−λ(t−x_1)} (t − s)^m / m! · λ^l x_1^{l−1} / (l−1)! e^{−λx_1} dx_1
        = λ^m (t − s)^m / m! · e^{−λt} · λ^l s^l / l!
        = ( e^{−λs} (λs)^l / l! ) · ( e^{−λ(t−s)} (λ(t−s))^m / m! )
        = P(N̂(s) = l) P(N̂(t − s) = m).
If we sum
    P(N̂(s) = l, N̂(t) − N̂(s) = m) = P(N̂(s) = l) P(N̂(t − s) = m)
over l ∈ N we get
    P(N̂(t) − N̂(s) = m) = P(N̂(t − s) = m)   (2)
and hence (1) for l ≥ 0 and m ≥ 1. The case m = 0 can be deduced from the above by exploiting
    P(N̂(s) = l, N̂(t) − N̂(s) = 0) = P(N̂(s) = l) − Σ_{m≥1} P(N̂(s) = l, N̂(t) − N̂(s) = m)
and
    P(N̂(t − s) = 0) = 1 − Σ_{m≥1} P(N̂(t − s) = m).

(P3) follows from the previous lemma together with (2), and (P4) follows from the construction.

(2) This part is subject to an exercise.   □

[Figure: simulated sample paths of Poisson processes with λ = 5 and λ = 1.]

2.2 A generalization of the Poisson process: the renewal process

To model windstorm claims, for example, it is not appropriate to use the Poisson process because windstorm claims happen rarely, sometimes with years

in between. The Pareto distribution, for example, which has the distribution function
    F(x) = 1 − (κ/(κ + x))^α  for x ≥ 0
with parameters α, κ > 0, will fit better when we use it as the distribution of the waiting times, i.e.
    P(W_i > x) = (κ/(κ + x))^α  for x ≥ 0.
For a Pareto distributed random variable it is more likely to have large values than for an exponentially distributed random variable.

Definition (renewal process). Assume that W_1, W_2, ... are i.i.d. (independent and identically distributed) random variables such that W_n(ω) > 0 for all n ≥ 1 and ω ∈ Ω. Then
    T_0 := 0,  T_n := W_1 + ... + W_n,  n ≥ 1,
is a renewal sequence and
    N(t) := #{i ≥ 1 : T_i ≤ t},  t ≥ 0,
is the corresponding renewal process.

In order to study the limit behavior of N we need the Strong Law of Large Numbers (SLLN):

Theorem (SLLN). If the random variables X_1, X_2, ... are i.i.d. with E|X_1| < ∞, then
    (X_1 + X_2 + ... + X_n)/n → EX_1  a.s. as n → ∞.

Theorem (SLLN for renewal processes). Assume N(t) is a renewal process. If EW_1 < ∞, then
    N(t)/t → 1/EW_1  a.s. as t → ∞.
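The SLLN for renewal processes can be illustrated numerically with Pareto distributed waiting times as above. The following sketch uses the assumed parameters α = 3 and κ = 2 (so EW_1 = κ/(α − 1) = 1) and samples the waiting times by the inverse transform:

```python
import random

random.seed(2)
alpha, kappa = 3.0, 2.0          # assumed Pareto parameters
ew1 = kappa / (alpha - 1.0)      # EW_1 = kappa/(alpha - 1) = 1 for these values
t_horizon = 20_000.0

def pareto_waiting_time():
    # inverse transform for P(W > x) = (kappa/(kappa + x))^alpha
    u = random.random()
    return kappa * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

# count renewals T_i = W_1 + ... + W_i up to the time horizon
total, n = 0.0, 0
while True:
    total += pareto_waiting_time()
    if total > t_horizon:
        break
    n += 1

ratio = n / t_horizon            # SLLN: N(t)/t -> 1/EW_1 = 1 as t -> infinity
```

For a large horizon the ratio N(t)/t is close to 1/EW_1, as the theorem predicts.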

Proof. Because of {ω ∈ Ω : N(t, ω) = n} = {ω ∈ Ω : T_n(ω) ≤ t < T_{n+1}(ω)} for n ∈ N we have for N(t, ω) > 0 that
    T_{N(t,ω)}(ω)/N(t, ω) ≤ t/N(t, ω) < T_{N(t,ω)+1}(ω)/N(t, ω)
                                      = [T_{N(t,ω)+1}(ω)/(N(t, ω) + 1)] · [(N(t, ω) + 1)/N(t, ω)].   (3)
Note that
    {ω ∈ Ω : T_1(ω) < ∞} = {ω ∈ Ω : sup_{t≥0} N(t, ω) > 0}.
The SLLN implies that
    T_n/n → EW_1 as n → ∞   (4)
holds on a set Ω_0 with P(Ω_0) = 1. Hence T_n → ∞ on Ω_0 as n → ∞, and by the definition of N also N(t) → ∞ on Ω_0 as t → ∞. From (4) we get
    T_{N(t,ω)}/N(t, ω) → EW_1 as t → ∞  for ω ∈ Ω_0.
Finally (3) implies that
    t/N(t, ω) → EW_1 as t → ∞  for ω ∈ Ω_0.   □

In the following we will investigate the behavior of EN(t) as t → ∞.

Theorem (elementary renewal theorem). Assume that (N(t))_{t≥0} is a renewal process and that 0 < EW_1 < ∞. Then
    EN(t)/t → 1/EW_1 as t → ∞.   (5)

Remark. If the W_i's are exponentially distributed with parameter λ > 0, W_i ~ Exp(λ), i ≥ 1, then N(t) is a Poisson process and EN(t) = λt.

Since EW_i = 1/λ, it follows for all t > 0 that
    EN(t)/t = 1/EW_1.   (6)
If the W_i's are not exponentially distributed, then equation (6) holds only in the limit t → ∞.

In order to prove the elementary renewal theorem we formulate the following lemma of Fatou type:

Lemma. Let Z = (Z_t)_{t ∈ [0,∞)} be a stochastic process such that Z_t : Ω → [0, ∞) for all t ≥ 0 and inf_{s≥t} Z_s : Ω → [0, ∞) is measurable for all t ≥ 0. Then
    E liminf_{t→∞} Z_t ≤ liminf_{t→∞} EZ_t.

Proof. By monotone convergence, since t ↦ inf_{s≥t} Z_s is non-decreasing, we have
    E liminf_{t→∞} Z_t = E lim_{t→∞} inf_{s≥t} Z_s = lim_{t→∞} E inf_{s≥t} Z_s.
Obviously, E inf_{s≥t} Z_s ≤ EZ_u for all u ≥ t, which allows us to write
    E inf_{s≥t} Z_s ≤ inf_{u≥t} EZ_u.
This implies the assertion.   □

Proof of the elementary renewal theorem. Let λ := 1/EW_1. From the SLLN for renewal processes we conclude
    λ = liminf_{t→∞} N(t)/t = lim_{t→∞} inf_{s≥t} N(s)/s   a.s.
Since Z_t := N(t)/t for t > 0 and Z_0 := 0 fulfills the requirements of the lemma above, we have
    λ = E liminf_{t→∞} N(t)/t ≤ liminf_{t→∞} EN(t)/t.
So, we only have to verify that limsup_{t→∞} EN(t)/t ≤ λ. Let t > 0. Recall that N(t) = #{i ≥ 1 : T_i ≤ t}. We introduce the filtration (F_n)_{n≥0} given by
    F_n := σ(W_1, ..., W_n), n ≥ 1,   F_0 := {∅, Ω}.

Then the random variable τ_t := N(t) + 1 is a stopping time w.r.t. (F_n)_{n≥0}, i.e. it holds {τ_t = n} ∈ F_n, n ≥ 0. Let us verify this. For n = 0 we have {τ_t = 0} = {N(t) = −1} = ∅ ∈ F_0, for n = 1 we get {τ_t = 1} = {N(t) = 0} = {t < W_1} ∈ F_1, and for n ≥ 2 we have
    {τ_t = n} = {T_{n−1} ≤ t < T_n} = {W_1 + ... + W_{n−1} ≤ t < W_1 + ... + W_n} ∈ F_n.
By the definition of N(t) we have that T_{N(t)} ≤ t. Hence we get
    ET_{N(t)+1} = E(T_{N(t)} + W_{N(t)+1}) ≤ t + EW_1 < ∞.   (7)
On the other hand it holds
    ET_{N(t)+1} = E Σ_{i=1}^{τ_t} W_i = lim_{K→∞} E Σ_{i=1}^{τ_t ∧ K} W_i
by monotone convergence. Since E(τ_t ∧ K) < ∞ and the W_i's are i.i.d. with EW_1 < ∞ we may apply Wald's identity
    E Σ_{i=1}^{τ_t ∧ K} W_i = E(τ_t ∧ K) EW_1.
This implies
    ∞ > ET_{N(t)+1} = lim_{K→∞} E(τ_t ∧ K) EW_1 = Eτ_t EW_1.
This relation is used in the following computation to substitute Eτ_t = EN(t) + 1:
    limsup_{t→∞} EN(t)/t ≤ limsup_{t→∞} (EN(t) + 1)/t
                         = limsup_{t→∞} Eτ_t/t
                         = limsup_{t→∞} ET_{N(t)+1}/(t EW_1)

                         ≤ limsup_{t→∞} (t + EW_1)/(t EW_1) = 1/EW_1,
where (7) was used for the last estimate.   □

2.3 The inhomogeneous Poisson process and the mixed Poisson process

Definition. Let µ : [0, ∞) → [0, ∞) be a function such that

(1) µ(0) = 0,
(2) µ is non-decreasing, i.e. s ≤ t implies µ(s) ≤ µ(t),
(3) µ is càdlàg.

Then the function µ is called a mean-value function.

[Figures: examples of mean-value functions: µ(t) = t, a continuous µ, and a càdlàg µ with jumps.]

Definition (inhomogeneous Poisson process). A stochastic process N = (N(t))_{t ∈ [0,∞)} is an inhomogeneous Poisson process if and only if it has the following properties:

(P1) N(0) = 0 a.s.

(P2) N has independent increments, i.e. if 0 ≤ t_0 < t_1 < ... < t_n (n ≥ 1), it holds that N(t_n) − N(t_{n−1}), N(t_{n−1}) − N(t_{n−2}), ..., N(t_1) − N(t_0) are independent.

(P3 inh.) There exists a mean-value function µ such that for 0 ≤ s < t
    P(N(t) − N(s) = m) = e^{−(µ(t)−µ(s))} (µ(t) − µ(s))^m / m!,
where m = 0, 1, 2, ...

(P4) The paths of N are càdlàg a.s.

Theorem (time change for the Poisson process). If µ denotes the mean-value function of an inhomogeneous Poisson process N and Ñ is a homogeneous Poisson process with λ = 1, then

(1) (N(t))_{t ∈ [0,∞)} =d (Ñ(µ(t)))_{t ∈ [0,∞)};

(2) if µ is continuous, increasing and lim_{t→∞} µ(t) = ∞, then
    (N(µ^{−1}(t)))_{t ∈ [0,∞)} =d (Ñ(t))_{t ∈ [0,∞)}.

Here µ^{−1}(t) denotes the inverse function of µ, and f =d g means that the two random variables f and g have the same distribution (but one cannot conclude that f(ω) = g(ω) for ω ∈ Ω).

Definition (mixed Poisson process). Let N̂ be a homogeneous Poisson process with intensity λ = 1 and µ be a mean-value function. Let θ : Ω → R be a random variable such that θ > 0 a.s. and θ is independent of N̂. Then
    N(t) := N̂(θµ(t)),  t ≥ 0,
is a mixed Poisson process with mixing variable θ.

Proposition. It holds
    var(N̂(θµ(t))) = E N̂(θµ(t)) · (1 + (var(θ)/Eθ) µ(t)).

Proof. We recall that E N̂(t) = var(N̂(t)) = t and therefore E N̂(t)² = t + t². We conclude
    var(N̂(θµ(t))) = E N̂(θµ(t))² − [E N̂(θµ(t))]²
                   = E(θµ(t) + θ²µ(t)²) − (Eθ µ(t))²
                   = µ(t) (Eθ + var(θ) µ(t)).   □

The property var(N(t)) > EN(t) is called over-dispersion; for the mixed Poisson process it holds whenever var(θ) > 0. If N is an inhomogeneous Poisson process, then
    var(N(t)) = EN(t).
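The over-dispersion of the mixed Poisson process can be observed numerically. The sketch below simulates N̂(θµ(t)) via the time-change idea (run a homogeneous λ = 1 process up to the random horizon θµ(t)); the value µ(t) = 3 and the two-point mixing variable θ ∈ {0.5, 1.5} (so Eθ = 1, var(θ) = 0.25) are illustrative assumptions:

```python
import random

random.seed(4)
n_runs = 60_000
mu_t = 3.0                       # assumed value of the mean-value function at a fixed t

def homogeneous_count(horizon):
    # N~(horizon) for a homogeneous Poisson process with lambda = 1:
    # count Exp(1) waiting times that fit into [0, horizon]
    s, count = 0.0, 0
    while True:
        s += random.expovariate(1.0)
        if s > horizon:
            return count
        count += 1

samples = []
for _ in range(n_runs):
    theta = random.choice([0.5, 1.5])    # mixing variable: E(theta) = 1, var(theta) = 0.25
    samples.append(homogeneous_count(theta * mu_t))

mean_n = sum(samples) / n_runs
var_n = sum((x - mean_n) ** 2 for x in samples) / n_runs
# proposition: var N(t) = EN(t) * (1 + (var(theta)/E theta) * mu(t)) = 3 * 1.75 = 5.25
```

The empirical variance clearly exceeds the empirical mean, in contrast to the (inhomogeneous) Poisson case where the two coincide.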


Chapter 3
The total claim amount process S(t)

3.1 The renewal model and the Cramér-Lundberg model

Definition. The renewal model (or Sparre-Andersen model) considers the following setting:

(1) Claims happen at the claim arrival times 0 ≤ T_1 ≤ T_2 ≤ ... of a renewal process N(t) = #{i ≥ 1 : T_i ≤ t}, t ≥ 0.

(2) At time T_i the claim of size X_i occurs, and the sequence (X_i)_{i≥1} is i.i.d. with X_i ≥ 0.

(3) The sequences (T_i)_{i≥1} and (X_i)_{i≥1} are independent.

The renewal model is called Cramér-Lundberg model if the claims happen at the claim arrival times 0 < T_1 < T_2 < ... of a Poisson process.

3.2 Properties of the total claim amount process S(t)

Definition. The total claim amount process is defined as
    S(t) := Σ_{i=1}^{N(t)} X_i,  t ≥ 0.

The insurance company needs information about S(t) in order to determine a premium which covers the losses represented by S(t). In general, the distribution of S(t), i.e. P({ω ∈ Ω : S(t, ω) ≤ x}), x ≥ 0, can only be approximated by numerical methods or simulations, while ES(t) and var(S(t)) are easy to compute exactly. One can establish principles which use only ES(t) and var(S(t)) to calculate the premium. This will be done in Chapter 4.

Proposition. One has that
    ES(t) = EX_1 EN(t),
    var(S(t)) = var(X_1) EN(t) + var(N(t)) (EX_1)².
Consequently, one obtains the following relations:

(1) Cramér-Lundberg model: It holds
    (i) ES(t) = λt EX_1,
    (ii) var(S(t)) = λt EX_1².

(2) Renewal model: Let EW_1 = 1/λ ∈ (0, ∞) and EX_1 < ∞.
    (i) Then lim_{t→∞} ES(t)/t = λ EX_1.
    (ii) If var(W_1) < ∞ and var(X_1) < ∞, then
         lim_{t→∞} var(S(t))/t = λ (var(X_1) + var(W_1) λ² (EX_1)²).

Proof. (a) Since
    1 = 1I_Ω(ω) = Σ_{k=0}^∞ 1I_{{N(t)=k}}(ω),
we get by a direct computation, using the independence of (X_i)_{i≥1} and N(t),
    ES(t) = E Σ_{i=1}^{N(t)} X_i
          = Σ_{k=0}^∞ E (Σ_{i=1}^k X_i) 1I_{{N(t)=k}}
          = Σ_{k=0}^∞ E(X_1 + ... + X_k) · E 1I_{{N(t)=k}}
          = EX_1 Σ_{k=0}^∞ k P(N(t) = k)
          = EX_1 EN(t).
In the Cramér-Lundberg model we have EN(t) = λt. For the general case we use the elementary renewal theorem to get the assertion.

(b) We continue with
    ES(t)² = E (Σ_{i=1}^{N(t)} X_i)²
           = Σ_{k=0}^∞ E (Σ_{i=1}^k X_i)² 1I_{{N(t)=k}}
           = Σ_{k=0}^∞ Σ_{i,j=1}^k E X_i X_j 1I_{{N(t)=k}}
           = EX_1² Σ_{k=0}^∞ k P(N(t) = k) + (EX_1)² Σ_{k=1}^∞ k(k−1) P(N(t) = k)
           = EX_1² EN(t) + (EX_1)² (E[N(t)²] − EN(t))
           = var(X_1) EN(t) + (EX_1)² E[N(t)²].

It follows that
    var(S(t)) = ES(t)² − (ES(t))²
              = ES(t)² − (EX_1)² (EN(t))²
              = var(X_1) EN(t) + (EX_1)² var(N(t)).
For the Cramér-Lundberg model it holds EN(t) = var(N(t)) = λt, hence we have
    var(S(t)) = λt (var(X_1) + (EX_1)²) = λt EX_1².
For the renewal model we get
    var(X_1) EN(t)/t → var(X_1) λ as t → ∞.
The relation
    var(N(t))/t → var(W_1)/(EW_1)³ as t → ∞
is shown in [6, Theorem 2.5.2].   □

Theorem (SLLN and CLT for the renewal model).

(1) SLLN for (S(t)): If EW_1 = 1/λ < ∞ and EX_1 < ∞, then
    S(t)/t → λ EX_1  a.s. as t → ∞.

(2) CLT for (S(t)): If var(W_1) < ∞ and var(X_1) < ∞, then
    sup_{x ∈ R} | P( (S(t) − ES(t)) / √var(S(t)) ≤ x ) − Φ(x) | → 0 as t → ∞,
where Φ is the distribution function of the standard normal distribution,
    Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−y²/2} dy.

Proof. (1) We follow the proof in [7]. We have shown that
    N(t)/t → λ  a.s. as t → ∞,

and therefore it holds N(t) → ∞ a.s. as t → ∞. Because of S(t) = X_1 + X_2 + ... + X_{N(t)} and, by the SLLN,
    (X_1 + ... + X_n)/n → EX_1  a.s. as n → ∞,
we get
    S(t)/t = [S(t)/N(t)] · [N(t)/t] → EX_1 · λ  a.s. as t → ∞.

(2) See [5].   □
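The formulas ES(t) = λt EX_1 and var S(t) = λt EX_1² of the Cramér-Lundberg model can be verified by Monte Carlo simulation. The parameters below (λ = 2, t = 5, Exp(0.5) claim sizes, so EX_1 = 2 and EX_1² = 8) are illustrative choices:

```python
import random

random.seed(5)
lam, t = 2.0, 5.0                # assumed Poisson intensity and time horizon
gamma = 0.5                      # Exp(gamma) claim sizes: EX_1 = 2, EX_1^2 = 8
n_runs = 40_000

def total_claim_amount():
    # S(t) = X_1 + ... + X_{N(t)} in the Cramér-Lundberg model:
    # exponential inter-arrival times, independent exponential claim sizes
    s, arrival = 0.0, 0.0
    while True:
        arrival += random.expovariate(lam)
        if arrival > t:
            return s
        s += random.expovariate(gamma)

vals = [total_claim_amount() for _ in range(n_runs)]
mean_s = sum(vals) / n_runs
var_s = sum((v - mean_s) ** 2 for v in vals) / n_runs
# proposition: ES(t) = lam*t*EX_1 = 20 and var S(t) = lam*t*EX_1^2 = 80
```

The empirical mean and variance match the closed-form values up to Monte Carlo error.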


Chapter 4
Premium calculation principles

The standard problem in insurance is to determine the amount of premium such that the losses S(t) are covered. On the other hand, the price of the premiums should be low enough to be competitive and attract customers. In the following we let ρ(t) ≥ 0 be the cumulative premium income up to time t ∈ [0, ∞) in our stochastic model. Below we review some premium calculation principles.

4.1 Classical premium calculation principles

First approximations of S(t) are given by ES(t) and var(S(t)), and the classical principles are based on these quantities. Intuitively, we have:

    ρ(t) < ES(t): the insurance company loses on average,
    ρ(t) > ES(t): the insurance company gains on average.

Net principle

The net principle
    ρ_NET(t) = ES(t)
defines the premium to be a fair market premium. However, this usually leads to the ruin of the company, as we will see later.

Expected value principle

In the expected value principle the premium is calculated by
    p_EV(t) = (1 + ρ) ES(t),
where ρ > 0 is called the safety loading.

The variance principle

The variance principle is given by
    p_VAR(t) = ES(t) + α var(S(t)),  α > 0.
In the renewal model this principle is asymptotically comparable to p_EV(t), since by the proposition in Section 3.2
    lim_{t→∞} p_EV(t)/p_VAR(t) = lim_{t→∞} (1 + ρ) ES(t) / (ES(t) + α var(S(t)))
                               = (1 + ρ) / (1 + α lim_{t→∞} var(S(t))/ES(t))
is a constant. This means that α plays the role of the safety loading ρ.

The standard deviation principle

This principle is given by
    p_SD(t) = ES(t) + α √var(S(t)),  α > 0.

The constant income principle

Here we simply set
    ρ_const(t) := ct,  c > 0.
In the case of the Cramér-Lundberg model this principle coincides with the expected value principle by setting c := (1 + ρ) λ EX_1. In the case of the renewal model it is asymptotically the expected value principle, as by the proposition in Section 3.2 we have ES(t) ≈ (λ EX_1) t. In Chapter 5 we introduce the net profit condition that gives the necessary range for c (or ρ).
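For the Cramér-Lundberg model the four classical premiums are elementary to compute once λ, EX_1 and EX_1² are fixed, since then ES(t) = λt EX_1 and var S(t) = λt EX_1². The numerical values below (parameters, safety loading ρ and weight α) are illustrative assumptions:

```python
# assumed Cramér-Lundberg parameters: claim intensity lam, EX_1 and EX_1^2
lam, ex1, ex2 = 2.0, 1.0, 2.0
t = 10.0

es = lam * t * ex1               # ES(t)    = 20
var_s = lam * t * ex2            # var S(t) = 40

rho, alpha = 0.1, 0.05           # safety loading and weight (illustrative)

p_net = es                                   # net principle
p_ev = (1.0 + rho) * es                      # expected value principle
p_var = es + alpha * var_s                   # variance principle
p_sd = es + alpha * var_s ** 0.5             # standard deviation principle
```

For these values the loaded premiums all lie above the net premium, which is the whole point of a safety loading.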

4.2 Modern premium calculation principles

In the following principles the expected value E g(S(t)) needs to be computed for certain functions g in order to compute ρ(t). This means it is not enough to know ES(t) and var(S(t)); the distribution of S(t) is needed as well.

The exponential principle

The exponential principle is defined as
    ρ_exp(t) := (1/δ) log Ee^{δS(t)}
for some δ > 0, where δ is the risk aversion constant. The function ρ_exp(t) is motivated by so-called utility theory. By Jensen's inequality one checks that
    ρ_exp(t) = (1/δ) log Ee^{δS(t)} ≥ ES(t),
as the function x ↦ e^x is convex.

The Esscher principle

The Esscher principle is defined as
    ρ_Ess(t) := E(S(t) e^{δS(t)}) / Ee^{δS(t)},  δ > 0.
As a homework we show that
    ρ_Ess(t) = E(S(t) e^{δS(t)}) / Ee^{δS(t)} ≥ ES(t).

The quantile principle

Denote by F_t(x) := P({ω ∈ Ω : S(t, ω) ≤ x}), x ∈ R, the distribution function of S(t). Given 0 < ε < 1, we let
    ρ_quant(t) := min{x : P({ω ∈ Ω : S(t, ω) ≤ x}) ≥ 1 − ε}.
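In contrast to the classical principles, the three modern principles require (an approximation of) the distribution of S(t). The following Monte Carlo sketch estimates them for the Cramér-Lundberg model with Exp(1) claim sizes; all parameter values (λ, t, δ, ε) are illustrative assumptions:

```python
import math
import random

random.seed(6)
lam, t = 1.0, 2.0                # assumed Cramér-Lundberg parameters
delta, eps = 0.1, 0.05           # risk aversion and quantile level (illustrative)
n_runs = 50_000

def total_claim_amount():
    # compound Poisson S(t) with Exp(1) claim sizes
    s, arrival = 0.0, 0.0
    while True:
        arrival += random.expovariate(lam)
        if arrival > t:
            return s
        s += random.expovariate(1.0)

vals = sorted(total_claim_amount() for _ in range(n_runs))

es = sum(vals) / n_runs
mgf = sum(math.exp(delta * v) for v in vals) / n_runs             # E exp(delta*S(t))
p_exp = math.log(mgf) / delta                                     # exponential principle
p_ess = sum(v * math.exp(delta * v) for v in vals) / (n_runs * mgf)  # Esscher principle
p_quant = vals[math.ceil((1.0 - eps) * n_runs) - 1]               # empirical (1-eps)-quantile
```

As predicted by Jensen's inequality, both loaded premiums dominate the empirical net premium, and the Esscher premium dominates the exponential one.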

This principle is called the (1 − ε)-quantile principle. Note that P(S(t) > ρ_quant(t)) ≤ ε. This setting is related to the theory of Value at Risk.

4.3 Reinsurance treaties

Reinsurance treaties are mutual agreements between different insurance companies to reduce the risk in a particular insurance portfolio. Reinsurance can be considered as insurance for the insurance company. Reinsurance is used if there is a risk of rare but huge claims. Examples of these usually involve a catastrophe such as an earthquake, a nuclear power station disaster, an industrial fire, war, a tanker accident, etc.

According to Wikipedia, the world's largest reinsurance company in 2009 was Munich Re, based in Germany, with gross written premiums worth over $31.4 billion, followed by Swiss Re (Switzerland), General Re (USA) and Hannover Re (Germany).

There are two different types of reinsurance:

Random walk type reinsurance

1. Proportional reinsurance: The reinsurer pays an agreed proportion p of the claims,
       R_prop(t) := p S(t).

2. Stop-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount K,
       R_SL(t) := (S(t) − K)_+,
   where x_+ = max{x, 0}.

3. Excess-of-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount D for each claim separately,
       R_ExL(t) := Σ_{i=1}^{N(t)} (X_i − D)_+,
   where D is the deductible.

Extreme value type reinsurance

Extreme value type reinsurances cover the largest claims in a portfolio. Mathematically, these contracts are investigated with extreme value theory techniques. The ordering of the claims X_1, ..., X_{N(t)} is denoted by X_(1) ≤ ... ≤ X_(N(t)).

1. Largest claims reinsurance: The largest claims reinsurance covers the k largest claims arriving within the time frame [0, t],
       R_LC(t) := Σ_{i=1}^k X_(N(t)−i+1).

2. ECOMOR reinsurance: (Excédent du coût moyen relatif means excess of the average cost.) Fix an integer k ≥ 2. Then
       R_ECOMOR(t) = Σ_{i=1}^{N(t)} (X_(N(t)−i+1) − X_(N(t)−k+1))_+
                   = Σ_{i=1}^{k−1} X_(N(t)−i+1) − (k − 1) X_(N(t)−k+1).

Treaties of random walk type can be handled like before. For example,
    P(R_SL(t) ≤ x) = P((S(t) − K)_+ ≤ x) = P(S(t) ≤ K) + P(K < S(t) ≤ x + K),  x ≥ 0,
so if F_{S(t)} is known, so is F_{R_SL(t)}.
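All five treaty payouts can be computed from one simulated portfolio of claims. In the sketch below the claim distribution (Exp(1)), the portfolio size and the treaty parameters p, K, D, k are all illustrative assumptions:

```python
import random

random.seed(7)

# one simulated portfolio of claim sizes X_1, ..., X_{N(t)} (Exp(1), illustrative)
claims = [random.expovariate(1.0) for _ in range(200)]
s_t = sum(claims)

p, K, D, k = 0.6, 150.0, 2.0, 5      # assumed treaty parameters

r_prop = p * s_t                                   # proportional reinsurance
r_sl = max(s_t - K, 0.0)                           # stop-loss
r_exl = sum(max(x - D, 0.0) for x in claims)       # excess-of-loss

ordered = sorted(claims)                           # X_(1) <= ... <= X_(N(t))
r_lc = sum(ordered[-k:])                           # largest claims: the k biggest
threshold = ordered[-k]                            # X_(N(t)-k+1)
r_ecomor = sum(ordered[-i] - threshold for i in range(1, k))   # ECOMOR
```

Note the identity R_ECOMOR(t) = R_LC(t) − k X_(N(t)−k+1), which follows directly from the second formula for R_ECOMOR and is a handy consistency check.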


Chapter 5
Probability of ruin: small claim sizes

5.1 The risk process

In this chapter, if not stated differently, we use the following assumptions and notation:

- The renewal model is assumed.
- Total claim amount process: S(t) := Σ_{i=1}^{N(t)} X_i, t ≥ 0.
- Premium income function: ρ(t) = ct, where c > 0 is the premium rate.
- The risk process or surplus process is given by
      U(t) := u + ρ(t) − S(t),  t ≥ 0,
  where U(t) is the insurer's capital balance at time t and u ≥ 0 is the initial capital.

[Figure: a sample path of the risk process U(t), starting at U(0) = u.]

Definition (ruin, ruin time, ruin probability). We let
    ruin(u) := {ω ∈ Ω : U(t, ω) < 0 for some t > 0}
be the event that U ever falls below zero, and
    T := inf{t > 0 : U(t) < 0}
the ruin time, i.e. the time when the process falls below zero for the first time. The ruin probability is given by
    ψ(u) = P(ruin(u)) = P(T < ∞).

Remark.

(1) T : Ω → R ∪ {∞} is an extended random variable (i.e. T can also take the value ∞).

(2) In the literature ψ(u) is often written as ψ(u) = P(ruin | U(0) = u) to indicate the dependence on the initial capital u.

(3) Ruin can only occur at the times t = T_n, n ≥ 1. This implies
    ruin(u) = {ω ∈ Ω : T(ω) < ∞} = {ω ∈ Ω : inf_{t>0} U(t, ω) < 0}

            = {ω ∈ Ω : inf_{n≥1} U(T_n(ω), ω) < 0}
            = {ω ∈ Ω : inf_{n≥1} (u + cT_n − S(T_n)) < 0},
where the last equation follows from the fact that U(t) = u + ct − S(t). Since in the renewal model it was assumed that W_i > 0, it follows that
    N(T_n) = #{i ≥ 1 : T_i ≤ T_n} = n
and
    S(T_n) = Σ_{i=1}^{N(T_n)} X_i = Σ_{i=1}^n X_i,
where T_n = W_1 + ... + W_n, which imply that
    {ω ∈ Ω : inf_{n≥1} (u + cT_n − S(T_n)) < 0}
      = {ω ∈ Ω : inf_{n≥1} (u + cT_n − Σ_{i=1}^n X_i) < 0}.
By setting
    Z_n := X_n − cW_n,  n ≥ 1,
and
    G_n := Z_1 + ... + Z_n,  n ≥ 1,  G_0 := 0,
it follows that
    {ω ∈ Ω : T(ω) < ∞} = {ω ∈ Ω : inf_{n≥1} (−G_n) < −u} = {ω ∈ Ω : sup_{n≥1} G_n > u},
and for the ruin probability the equality
    ψ(u) = P(sup_{n≥1} G_n > u)
holds.

First we state the theorem that justifies the net profit condition introduced below:

Theorem. If EW_1 < ∞, EX_1 < ∞, and EZ_1 = EX_1 − cEW_1 ≥ 0, then ψ(u) = 1 for all u > 0, i.e. ruin occurs with probability one, independently of the initial capital u.

Proof. (a) EZ_1 > 0: By the Strong Law of Large Numbers,
    G_n/n → EZ_1  almost surely as n → ∞.
Because we assumed EZ_1 > 0, one gets that G_n → ∞ a.s. as n → ∞, since G_n ≈ n EZ_1 for large n. This means that the ruin probability ψ(u) = 1 for all u > 0.

(b) The case EZ_1 = 0 we show under the additional assumption that EZ_1² < ∞. Let
    Ā_m := { (Z_1 + ... + Z_k)/√k ≥ m for infinitely many k }.
Notice that for fixed ω ∈ Ω and n_0 ≥ 1 we have
    (Z_1(ω) + ... + Z_k(ω))/√k ≥ m for infinitely many k
iff
    (Z_{n_0}(ω) + ... + Z_k(ω))/√k ≥ m for infinitely many k.
Hence
    Ā_m ∈ ∩_{n_0 ≥ 1} σ(Z_{n_0}, Z_{n_0+1}, ...).
The sequence (Z_n)_{n≥1} consists of independent random variables. By the 0-1 law of Kolmogorov (see [4, Proposition 2.1.6]) we conclude that P(Ā_m) ∈ {0, 1}. Since
    P( sup_n (Z_1 + ... + Z_n)/√n ≥ m ) ≥ P(Ā_m),

it suffices to show that P(Ā_m) > 0. We have
    Ā_m = ∩_{n=1}^∞ ∪_{k=n}^∞ { (Z_1 + ... + Z_k)/√k ≥ m }.
By Fatou's lemma and the central limit theorem,
    P(Ā_m) ≥ limsup_{k→∞} P( (Z_1 + ... + Z_k)/√k ≥ m ) ≥ ∫_m^∞ (1/√(2πσ²)) e^{−x²/(2σ²)} dx > 0,
where σ² = EZ_1². Hence P(Ā_m) = 1 for every m ≥ 1, so sup_{n≥1} G_n = ∞ a.s., which gives ψ(u) = 1 for all u > 0.   □

Definition (net profit condition). The renewal model satisfies the net profit condition (NPC) if and only if
    EZ_1 = EX_1 − cEW_1 < 0.   (NPC)

The consequence of (NPC) is that on average more premium flows into the portfolio of the company than claim sizes flow out: we have
    G_n = −ρ(T_n) + S(T_n) = −c(W_1 + ... + W_n) + X_1 + ... + X_n,
which implies EG_n = n EZ_1 < 0. The theorem above implies that any insurance company should choose the premium ρ(t) = ct in such a way that EZ_1 < 0. In that case there is hope that the ruin probability is less than 1.

5.2 Lundberg inequality and Cramér's ruin bound

In this section it is assumed that the renewal model is used and the net profit condition holds, that means EX_1 − cEW_1 < 0.
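Under (NPC) the ruin probability can be estimated by following the random walk G_n = Σ_{i≤n} (X_i − cW_i) over a long but finite horizon. The sketch below uses exponential claims and waiting times (all parameter values illustrative); for exponential claims a closed-form expression for ψ(u) is classical and is used here, without proof, only as a consistency check:

```python
import math
import random

random.seed(8)
lam, gamma, c = 1.0, 1.0, 1.5    # Exp(lam) waiting times, Exp(gamma) claims, premium rate
# NPC: EX_1 - c*EW_1 = 1 - 1.5 < 0 holds

u = 5.0                          # initial capital (illustrative)

def ruined(u, n_steps=2_000):
    # follow G_n = sum_{i<=n} (X_i - c W_i); ruin occurs iff G_n > u for some n
    g = 0.0
    for _ in range(n_steps):
        g += random.expovariate(gamma) - c * random.expovariate(lam)
        if g > u:
            return True
        if g < -40.0:            # heuristic cut-off: recovery from here is negligible
            return False
    return False

n_runs = 20_000
psi_hat = sum(ruined(u) for _ in range(n_runs)) / n_runs

# classical closed form for exponential claims in the Cramér-Lundberg model:
# psi(u) = (lam/(c*gamma)) * exp(-(gamma - lam/c) * u)
psi_exact = (lam / (c * gamma)) * math.exp(-(gamma - lam / c) * u)
```

The Monte Carlo estimate agrees with the closed-form value up to sampling error; the cut-off at −40 only introduces an error of order e^{−r·45}, far below the Monte Carlo noise.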

Definition. For a random variable f : Ω → R on some probability space (Ω, F, P) the function
    m_f(h) = Ee^{hf}
is called the moment-generating function if it is finite for h ∈ (−h_0, h_0) for some h_0 > 0.

Remark.

(1) The map h ↦ Ee^{hf} is also called the two-sided Laplace transform.

(2) If E|f|^k e^{hf} < ∞ on (−h_0, h_0) for all k = 0, ..., m, then m_f(h) exists, is m-times differentiable, and one has
    d^m/dh^m m_f(h) = E f^m e^{hf}.
Therefore
    d^m/dh^m m_f(0) = E f^m.

Definition (small claim size condition). We say that the small claim size condition with parameter h_0 > 0 is satisfied if
    m_{X_1}(h) = Ee^{hX_1}  and  m_{cW_1}(h) = Ee^{hcW_1}
exist for |h| < h_0.

Theorem (Lundberg inequality). Assume that the small claim size condition with parameter h_0 > 0 is satisfied. If there is an r ∈ (0, h_0) with
    m_{Z_1}(r) = Ee^{r(X_1 − cW_1)} = 1,
then
    ψ(u) ≤ e^{−ru}  for all u > 0.

Definition. The exponent r in the Lundberg inequality is called the Lundberg coefficient.

The result implies that if the small claim condition holds and the initial capital u is large, there is in principle no danger of ruin.
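The differentiation property in the remark above can be checked numerically for a concrete moment-generating function. For f ~ Exp(γ) one has m_f(h) = γ/(γ − h) for h < γ, so the first and second derivatives at 0 should be Ef = 1/γ and Ef² = 2/γ²; the value γ = 2 below is an illustrative choice:

```python
gamma = 2.0                      # f ~ Exp(gamma), illustrative

def m_f(h):
    # moment-generating function of Exp(gamma): finite for h < gamma
    return gamma / (gamma - h)

eps = 1e-5
# central finite differences approximate m_f'(0) = Ef and m_f''(0) = Ef^2
first_derivative = (m_f(eps) - m_f(-eps)) / (2.0 * eps)                   # Ef  = 1/gamma = 0.5
second_derivative = (m_f(eps) - 2.0 * m_f(0.0) + m_f(-eps)) / eps ** 2    # Ef^2 = 2/gamma^2 = 0.5
```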

Remark. (1) If m_{Z_1} exists on (−h_0, h_0) for some h_0 > 0, then

    P(Z_1 ≥ λ) = P(e^{εZ_1} ≥ e^{ελ}) ≤ e^{−ελ} m_{Z_1}(ε)

and

    P(−Z_1 ≥ λ) ≤ e^{−ελ} m_{−Z_1}(ε) = e^{−ελ} m_{Z_1}(−ε)

for all ε ∈ (0, h_0). This implies

    P(|Z_1| ≥ λ) ≤ e^{−ελ} [ m_{Z_1}(ε) + m_{Z_1}(−ε) ].

(2) If the Lundberg coefficient r exists, then it is unique. First we observe that m_{Z_1} is convex, which follows from the convexity of the exponential function: for θ ∈ [0, 1],

    e^{((1−θ)r_0 + θr_1)Z_1} ≤ (1−θ)e^{r_0 Z_1} + θe^{r_1 Z_1}.

Moreover, m_{Z_1}(0) = 1 and, by Jensen's inequality,

    m_{Z_1}(h) = Ee^{hZ_1} ≥ e^{h EZ_1},

such that (assuming (NPC), i.e. EZ_1 < 0) we get m_{Z_1}(h) ≥ e^{h EZ_1} → ∞ as h → −∞. If m_{Z_1} exists on (−ε_0, ε_0) and m_{Z_1}(h) = 1 for both h = r and h = s in (0, ε_0], then, by convexity and m_{Z_1}(0) = 1, we have m_{Z_1}(h) = 1 for all h ∈ [0, r ∨ s].

From (1) we have

    P(|Z_1| > λ) ≤ c_0 e^{−λ/c_0}   for some c_0 > 0,

and it holds

    E|Z_1|^n = ∫_0^∞ P(|Z_1|^n > λ) dλ = n ∫_0^∞ P(|Z_1| > λ) λ^{n−1} dλ
             ≤ n ∫_0^∞ c_0 e^{−λ/c_0} λ^{n−1} dλ = n c_0^n ∫_0^∞ e^{−λ/c_0} (λ/c_0)^{n−1} dλ = n c_0^{n+1} ∫_0^∞ e^{−λ} λ^{n−1} dλ = n! c_0^{n+1}.

Because of

    Σ_{n≥0} (h^n/n!) E|Z_1|^n ≤ Σ_{n≥0} (h^n/n!) n! c_0^{n+1} = c_0 Σ_{n≥0} (hc_0)^n < ∞

for hc_0 < 1, the function

    m_{Z_1}(h) = Ee^{hZ_1} = E Σ_{n≥0} (hZ_1)^n / n!

is infinitely often differentiable for |h| < 1/c_0. Moreover, the function is constant (equal to 1) on [0, r ∨ s], so that, for 0 < h < min{1/c_0, r ∨ s}, one has

    0 = (d²/dh²) m_{Z_1}(h) = EZ_1² e^{hZ_1}.

This implies Z_1 = 0 a.s., which contradicts EZ_1 < 0. Hence r = s, i.e. the Lundberg coefficient is unique.

(3) In practice, r is hard to compute from the distributions of X_1 and W_1. Therefore it is often approximated numerically or by Monte Carlo methods.

Proof of the Lundberg inequality. We use Z_n = X_n − cW_n and set G_k := Z_1 + ⋯ + Z_k. We consider

    ψ_n(u) := P( max_{1≤k≤n} G_k > u ),   u > 0.

Because of ψ_n(u) ↑ ψ(u) as n → ∞ it is sufficient to show

    ψ_n(u) ≤ e^{−ru},   n ≥ 1, u > 0.

For n = 1 we get the inequality by

    ψ_1(u) = P(Z_1 > u) = P(e^{rZ_1} > e^{ru}) ≤ e^{−ru} Ee^{rZ_1} = e^{−ru}.

Now we assume that the assertion holds for n. We have

    ψ_{n+1}(u) = P( max_{1≤k≤n+1} G_k > u )

    = P(Z_1 > u) + P( max_{1≤k≤n+1} G_k > u, Z_1 ≤ u )
    ≤ P(Z_1 > u) + P( max_{2≤k≤n+1} (G_k − Z_1) > u − Z_1, Z_1 ≤ u )
    = P(Z_1 > u) + ∫_{(−∞,u]} P( max_{1≤k≤n} G_k > u − x ) dF_{Z_1}(x),

where we have used for the last line that max_{2≤k≤n+1}(G_k − Z_1) and Z_1 are independent, and that max_{2≤k≤n+1}(G_k − Z_1) has the same distribution as max_{1≤k≤n} G_k. We estimate the first term,

    P(Z_1 > u) = ∫_{(u,∞)} dF_{Z_1}(x) ≤ ∫_{(u,∞)} e^{r(x−u)} dF_{Z_1}(x),

and proceed with the second term as follows:

    ∫_{(−∞,u]} P( max_{1≤k≤n} G_k > u − x ) dF_{Z_1}(x) = ∫_{(−∞,u]} ψ_n(u−x) dF_{Z_1}(x) ≤ ∫_{(−∞,u]} e^{−r(u−x)} dF_{Z_1}(x).

Consequently,

    ψ_{n+1}(u) ≤ ∫_{(u,∞)} e^{r(x−u)} dF_{Z_1}(x) + ∫_{(−∞,u]} e^{−r(u−x)} dF_{Z_1}(x) = e^{−ru} Ee^{rZ_1} = e^{−ru}. □

We consider an example where it is possible to compute the Lundberg coefficient explicitly:

Example. Let X_1, X_2, … ~ Exp(γ) and W_1, W_2, … ~ Exp(λ). Then

    m_{Z_1}(h) = Ee^{h(X_1 − cW_1)} = Ee^{hX_1} Ee^{−hcW_1} = (γ/(γ−h)) (λ/(λ+ch))   for −λ/c < h < γ,

since

    Ee^{hX_1} = ∫_0^∞ e^{hx} γe^{−γx} dx = γ/(γ−h)   for h < γ.

The (NPC) condition reads as

    0 > EZ_1 = EX_1 − cEW_1 = 1/γ − c/λ,   i.e.   cγ > λ.

Hence m_{Z_1} exists on (−λ/c, γ), and for r > 0 we get

    (γ/(γ−r)) (λ/(λ+cr)) = 1  ⟺  γλ = γλ + γcr − λr − cr²  ⟺  r = γ − λ/c.

Consequently,

    ψ(u) ≤ e^{−ru} = e^{−(γ−λ/c)u}.

Applying the expected value principle ρ(t) = (1+ρ)ES(t) = (1+ρ)λEX_1 t = ct we get

    c = (1+ρ)λ/γ.

This implies

    r = γ − λ/c = γ − γ/(1+ρ) = γρ/(1+ρ),

hence

    ψ(u) ≤ e^{−ru} = e^{−uγρ/(1+ρ)},

where one should notice that even letting ρ → ∞ does not change the ruin probability bound considerably: the exponent γρ/(1+ρ) never exceeds γ.

The following theorem considers the special case of the Cramér-Lundberg model:

Theorem (Cramér's ruin bound). Assume the Cramér-Lundberg model with the net profit condition (NPC) and with ρ(t) = ct = (1+ρ)ES(t). Suppose that the distribution of X_1 has a density, that m_{X_1}(h) exists in a neighborhood (−h_0, h_0) of the origin, and that r ∈ (0, h_0) is the Lundberg coefficient. Then one has

    lim_{u→∞} e^{ru} ψ(u) = c⁎   with   c⁎ := (ρ EX_1 / r) ( ∫_0^∞ y e^{ry} (1 − F_{X_1}(y)) dy )^{−1}.

To prove this theorem we introduce in the next section the fundamental integral equation for the survival probability.
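The exponential example above can be checked by a small Monte Carlo experiment: simulate the random walk G_n and compare the empirical ruin frequency with the Lundberg bound e^{−ru}. This is an illustrative sketch, not part of the notes; the parameters γ = 3, λ = 1, c = 1 (so r = 2) and the finite horizon are assumptions, and truncating the horizon makes the estimate a slight underestimate of ψ(u):

```python
# Empirical estimate of psi(u) = P(max_n G_n > u) for
# G_n = sum of (X_k - c*W_k), X ~ Exp(gamma), W ~ Exp(lam),
# compared with the Lundberg bound exp(-r*u), r = gamma - lam/c.
import math, random

def ruin_estimate(u, gamma, lam, c, n_paths=10000, n_steps=300, seed=1):
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        g = 0.0
        for _ in range(n_steps):
            g += rng.expovariate(gamma) - c * rng.expovariate(lam)
            if g > u:                  # ruin: risk process u - g drops below 0
                ruined += 1
                break
    return ruined / n_paths

u, gamma, lam, c = 1.0, 3.0, 1.0, 1.0
r = gamma - lam / c                    # Lundberg coefficient, here r = 2
est = ruin_estimate(u, gamma, lam, c)
print(est, math.exp(-r * u))          # estimate stays below the bound
```

The estimate lands well below e^{−2} ≈ 0.135, consistent with the bound; for exponential claims the bound overestimates the true ruin probability by a constant factor.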

5.3 Fundamental integral equation for the survival probability

We introduce the survival probability

    ϕ(u) := 1 − ψ(u).

Theorem (Fundamental integral equation for the survival probability). Assume that for the Cramér-Lundberg model (NPC) holds and EX_1 < ∞. Suppose that ρ(t) = ct = (1+ρ)ES(t). Then

    ϕ(u) = ϕ(0) + (1/((1+ρ)EX_1)) ∫_0^u F̄_{X_1}(y) ϕ(u−y) dy   (1)

with F̄_{X_1}(y) := P(X_1 > y).

Remark. Let the assumptions of the theorem hold.

(1) The assertion can be reformulated as follows. Let

    F_{X_1,I}(y) := (1/EX_1) ∫_0^y F̄_{X_1}(z) dz,   y ≥ 0.

The function F_{X_1,I} is a distribution function since

    lim_{y→∞} F_{X_1,I}(y) = (1/EX_1) ∫_0^∞ F̄_{X_1}(z) dz = (1/EX_1) ∫_0^∞ P(X_1 > z) dz = 1.

Hence equation (1) can be written as

    ϕ(u) = ϕ(0) + (1/(1+ρ)) ∫_0^u ϕ(u−y) dF_{X_1,I}(y).

(2) It holds lim_{u→∞} ϕ(u) = 1. This can be seen as follows:

    lim_{u→∞} ϕ(u) = lim_{u→∞} (1 − ψ(u)) = lim_{u→∞} ( 1 − P( sup_{k≥1} G_k > u ) )

    = lim_{u→∞} P( sup_{k≥1} G_k ≤ u ),

where G_k = Z_1 + ⋯ + Z_k. Since EZ_1 < 0, the SLLN implies G_k/k → EZ_1 < 0 a.s. Therefore we have sup_{k≥1} G_k < ∞ a.s. and

    lim_{u→∞} P( sup_{k≥1} G_k ≤ u ) = 1.

(3) It holds ϕ(0) = ρ/(1+ρ). Indeed, because of (1), (2), and the dominated convergence theorem we may conclude that

    1 = lim_{u→∞} ϕ(u) = ϕ(0) + (1/(1+ρ)) lim_{u→∞} ∫_0^∞ 1_{[0,u]}(y) ϕ(u−y) dF_{X_1,I}(y)
      = ϕ(0) + (1/(1+ρ)) ∫_0^∞ lim_{u→∞} ( 1_{[0,u]}(y) ϕ(u−y) ) dF_{X_1,I}(y)
      = ϕ(0) + (1/(1+ρ)) ∫_0^∞ dF_{X_1,I}(y) = ϕ(0) + 1/(1+ρ),

hence ϕ(0) = 1 − 1/(1+ρ) = ρ/(1+ρ). The interpretation of ϕ(0) is the survival probability when starting with zero initial capital.

Proof of the fundamental integral equation. (a) We first show that

    ϕ(u) = (λ/c) e^{λu/c} ∫_{[u,∞)} e^{−λy/c} ∫_{[0,y]} ϕ(y−x) dF_{X_1}(x) dy.   (2)

To do this, we consider

    ϕ(u) = P( sup_{n≥1} G_n ≤ u ) = P( Z_1 ≤ u, G_n − Z_1 ≤ u − Z_1 for n ≥ 2 )

    = ∫_{[0,∞)} ∫_{[0,u+cw]} P( G_n − Z_1 ≤ u − (x − cw) for n ≥ 2 ) dF_{X_1}(x) dF_{W_1}(w),

where we used for the last line that

    x − cw ≤ u   if and only if   x ≤ u + cw.

We use that G_n − Z_1 = Z_2 + ⋯ + Z_n is independent of (X_1, W_1) with (G_n − Z_1)_{n≥2} distributed as (G_n)_{n≥1}, and substitute y := u + cw (so dF_{W_1}(w) = λe^{−λw} dw = (λ/c) e^{−λ(y−u)/c} dy) in order to obtain

    ϕ(u) = ∫_{[0,∞)} ∫_{[0,u+cw]} P( G_n ≤ u − (x − cw) for n ≥ 1 ) dF_{X_1}(x) λe^{−λw} dw
         = ∫_{[0,∞)} ∫_{[0,u+cw]} ϕ(u − x + cw) dF_{X_1}(x) λe^{−λw} dw
         = (λ/c) e^{λu/c} ∫_{[u,∞)} e^{−λy/c} ∫_{[0,y]} ϕ(y − x) dF_{X_1}(x) dy.

(b) Differentiation of (2) leads to

    ϕ'(u) = (λ/c) ϕ(u) − (λ/c) ∫_{[0,u]} ϕ(u−x) dF_{X_1}(x),

so that

    ϕ(t) − ϕ(0) = (λ/c) ∫_0^t ϕ(u) du − (λ/c) ∫_0^t ∫_{[0,u]} ϕ(u−x) dF_{X_1}(x) du.

Integration by parts in the inner integral gives, using F_{X_1}(0) = 0,

    ∫_{[0,u]} ϕ(u−x) dF_{X_1}(x) = ϕ(0) F_{X_1}(u) − ϕ(u) F_{X_1}(0) + ∫_0^u ϕ'(u−x) F_{X_1}(x) dx
                                 = ϕ(0) F_{X_1}(u) + ∫_0^u ϕ'(u−x) F_{X_1}(x) dx,

and by Fubini's theorem

    ∫_0^t ∫_0^u ϕ'(u−x) F_{X_1}(x) dx du = ∫_0^t F_{X_1}(x) ∫_x^t ϕ'(u−x) du dx = ∫_0^t F_{X_1}(x) ( ϕ(t−x) − ϕ(0) ) dx.

Combining the terms, the ϕ(0)-terms cancel and we arrive at

    ϕ(t) − ϕ(0) = (λ/c) ∫_0^t ϕ(u) du − (λ/c) ∫_0^t ϕ(t−x) F_{X_1}(x) dx.

This implies

    ϕ(t) = ϕ(0) + (λ/c) ∫_0^t ϕ(t−x) (1 − F_{X_1}(x)) dx.

Using that

    λ/c = 1/((1+ρ)EX_1),

which follows from ES(t) = λtEX_1 and ct = (1+ρ)ES(t) = (1+ρ)λtEX_1, yields the assertion. □

Now we can deduce a representation of the survival probability:

Theorem. Assume the Cramér-Lundberg model with the (NPC) condition. Assume for the claim size distribution X_1 : Ω → (0, ∞) that EX_1 < ∞, let (X_{I,k})_{k≥1} be i.i.d. random variables with the distribution function F_{X_1,I}, and define

    ϕ(u) := (ρ/(1+ρ)) [ 1 + Σ_{n=1}^∞ (1+ρ)^{−n} P( X_{I,1} + ⋯ + X_{I,n} ≤ u ) ],   u ≥ 0.

Then ϕ is the unique solution to

    ϕ(u) = ϕ(0) + (1/(1+ρ)) ∫_0^u ϕ(u−y) dF_{X_1,I}(y)   for u ≥ 0

in the class

    G := { g : R → [0, ∞) non-decreasing, bounded, right-continuous with g(x) = 0 for x < 0 and g(0) = ρ/(1+ρ) }.

Proof. Uniqueness: Assume ϕ_1, ϕ_2 ∈ G are solutions and set ∆ := ϕ_1 − ϕ_2. Then

    ∆(u) = (1/(1+ρ)) ∫_0^u ∆(u−y) dF_{X_1,I}(y) = (1/((1+ρ)EX_1)) ∫_0^u ∆(u−y) F̄_{X_1}(y) dy

    = (1/((1+ρ)EX_1)) ∫_0^u ∆(y) F̄_{X_1}(u−y) dy,

and therefore, since F̄_{X_1} ≤ 1,

    |∆(u)| ≤ (1/((1+ρ)EX_1)) ∫_0^u |∆(y)| dy.

Gronwall's Lemma implies that ∆(u) = 0 for all u, i.e. ϕ_1 = ϕ_2.

Verification that ϕ is a solution: Since the X_{I,k} are positive, we get

    ϕ(0) = (ρ/(1+ρ)) [ 1 + Σ_{n≥1} (1+ρ)^{−n} P( X_{I,1} + ⋯ + X_{I,n} ≤ 0 ) ] = ρ/(1+ρ),

and

    ϕ(0) + (1/(1+ρ)) ∫_0^u ϕ(u−y) dF_{X_1,I}(y)
    = ρ/(1+ρ) + (1/(1+ρ)) ∫_0^u (ρ/(1+ρ)) [ 1 + Σ_{n≥1} (1+ρ)^{−n} P( X_{I,1} + ⋯ + X_{I,n} ≤ u − y ) ] dF_{X_1,I}(y)
    = ρ/(1+ρ) + (ρ/(1+ρ)) [ (1+ρ)^{−1} F_{X_1,I}(u) + Σ_{n≥1} (1+ρ)^{−(n+1)} ∫_0^u P( X_{I,1} + ⋯ + X_{I,n} + y ≤ u ) dF_{X_1,I}(y) ]
    = ρ/(1+ρ) + (ρ/(1+ρ)) [ (1+ρ)^{−1} P(X_{I,1} ≤ u) + Σ_{n≥1} (1+ρ)^{−(n+1)} P( X_{I,1} + ⋯ + X_{I,n+1} ≤ u ) ]
    = (ρ/(1+ρ)) [ 1 + Σ_{n≥1} (1+ρ)^{−n} P( X_{I,1} + ⋯ + X_{I,n} ≤ u ) ]
    = ϕ(u). □

5.4 Proof of Cramér's ruin bound

Lemma (Smith's renewal equation). It holds

    e^{ru} ψ(u) = q e^{ru} F̄_{X_1,I}(u) + ∫_0^u e^{r(u−x)} ψ(u−x) dF^{(r)}_{X_1}(x),

where q := 1/(1+ρ), r is the Lundberg coefficient, and

    F^{(r)}_{X_1}(x) := (q/EX_1) ∫_0^x e^{ry} F̄_{X_1}(y) dy.

Definition. The function F^{(r)}_{X_1} is called the Esscher transform of F_{X_1}.

Proof of Smith's renewal equation. (a) We first show that F^{(r)}_{X_1} is a distribution function. In fact, we have

    lim_{x→∞} F^{(r)}_{X_1}(x) = (q/EX_1) ∫_0^∞ e^{ry} F̄_{X_1}(y) dy = (q/EX_1) (1/r) ( Ee^{rX_1} − 1 ),

since

    Ee^{rX_1} = ∫_0^∞ P(e^{rX_1} > t) dt = 1 + ∫_0^∞ P(e^{rX_1} > e^{ry}) re^{ry} dy = 1 + r ∫_0^∞ F̄_{X_1}(y) e^{ry} dy,

where we substituted t = e^{ry} on (1, ∞) and used P(e^{rX_1} > t) = 1 for t < 1.

From Ee^{r(X_1 − cW_1)} = 1, i.e. Ee^{rX_1} = 1/Ee^{−rcW_1}, and Ee^{−rcW_1} = λ/(λ+rc) for W_1 ~ Exp(λ) we conclude

    lim_{x→∞} F^{(r)}_{X_1}(x) = (q/EX_1)(1/r) ( 1/Ee^{−rcW_1} − 1 ) = (q/EX_1)(1/r) ( (rc+λ)/λ − 1 ) = qc/(λ EX_1) = (1/((1+ρ)EX_1)) (c/λ) = 1,

where the last equality follows from c = (1+ρ)λEX_1.

(b) From the fundamental integral equation we conclude that

    ψ(u) = q F̄_{X_1,I}(u) + ∫_0^u ψ(u−y) d(qF_{X_1,I})(y).

Indeed, by equation (1) and the remark following it we have

    ϕ(u) = ρ/(1+ρ) + (1/(1+ρ)) ∫_0^u ϕ(u−y) ( F̄_{X_1}(y)/EX_1 ) dy.

Hence

    ψ(u) = 1 − ϕ(u) = 1 − ρ/(1+ρ) − (1/(1+ρ)) ∫_0^u ( 1 − ψ(u−y) ) ( F̄_{X_1}(y)/EX_1 ) dy
         = q − q ∫_0^u ( F̄_{X_1}(y)/EX_1 ) dy + q ∫_0^u ψ(u−y) ( F̄_{X_1}(y)/EX_1 ) dy
         = q F̄_{X_1,I}(u) + ∫_0^u ψ(u−y) d(qF_{X_1,I})(y).

This equation would have the structure of a renewal equation (see the renewal equation (3) below) only if q = 1. Therefore we consider

    e^{ru} ψ(u) = e^{ru} q F̄_{X_1,I}(u) + ∫_0^u e^{r(u−y)} ψ(u−y) e^{ry} d(qF_{X_1,I})(y)
                = q e^{ru} F̄_{X_1,I}(u) + ∫_0^u e^{r(u−y)} ψ(u−y) dF^{(r)}_{X_1}(y),

since e^{ry} d(qF_{X_1,I})(y) = (q/EX_1) e^{ry} F̄_{X_1}(y) dy = dF^{(r)}_{X_1}(y). Since F^{(r)}_{X_1} is a distribution function, we have indeed a renewal equation. □

The following assertion is a generalization of the Blackwell Renewal Lemma.

Lemma (Smith's key renewal lemma). Let H : R → R be a continuous distribution function such that H(x) = 0 for x ≤ 0 and 0 < ∫_R x dH(x) < ∞. Let k : R → [0, ∞) be Riemann integrable such that

(1) k(x) = 0 for x < 0,
(2) lim_{x→∞} k(x) = 0.

Then the renewal equation

    R(u) = k(u) + ∫_0^u R(u−y) dH(y)   (3)

has, in the class of all functions on (0, ∞) which are bounded on finite intervals, exactly one solution R, and it holds

    lim_{u→∞} R(u) = ( ∫_R x dH(x) )^{−1} ∫_0^∞ k(x) dx.

The proof can be found in [9, pp. 22].

Proof of Cramér's ruin bound. We apply Smith's key renewal lemma to

    R(u) := e^{ru} ψ(u),   k(u) := q e^{ru} F̄_{X_1,I}(u),   H(x) := F^{(r)}_{X_1}(x).

For α := ( ∫_R x dF^{(r)}_{X_1}(x) )^{−1} we get

    lim_{u→∞} e^{ru} ψ(u) = α ∫_0^∞ k(u) du = α ∫_0^∞ q e^{ry} F̄_{X_1,I}(y) dy
    = α ∫_0^∞ q e^{ry} [ 1 − (1/EX_1) ∫_0^y F̄_{X_1}(z) dz ] dy
    = α ∫_0^∞ q e^{ry} (1/EX_1) ∫_y^∞ F̄_{X_1}(z) dz dy

    = (αq/EX_1) ∫_0^∞ ( ∫_0^z e^{ry} dy ) F̄_{X_1}(z) dz
    = (αq/(rEX_1)) ∫_0^∞ ( e^{rz} − 1 ) F̄_{X_1}(z) dz
    = (α/r) [ (q/EX_1) ∫_0^∞ e^{rz} F̄_{X_1}(z) dz − (q/EX_1) ∫_0^∞ F̄_{X_1}(z) dz ]
    = (α/r) [ 1 − q ]
    = (α/r) ρ/(1+ρ),

where we used lim_{x→∞} F^{(r)}_{X_1}(x) = 1 and (q/EX_1) ∫_0^∞ F̄_{X_1}(z) dz = q. Finally,

    1/α = ∫_R x dF^{(r)}_{X_1}(x) = (q/EX_1) ∫_0^∞ x e^{rx} F̄_{X_1}(x) dx = (1/((1+ρ)EX_1)) ∫_0^∞ x e^{rx} F̄_{X_1}(x) dx

implies

    lim_{u→∞} e^{ru} ψ(u) = (ρ EX_1 / r) ( ∫_0^∞ x e^{rx} F̄_{X_1}(x) dx )^{−1}. □
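For the exponential example of Section 5.2, Cramér's constant can be evaluated numerically. A hedged sketch (the parameter values γ = 3, λ = 1, c = 1, hence ρ = 2 and r = 2, are illustrative; the comparison value 1/(1+ρ) is the known closed-form constant for exponential claims, not derived in these notes):

```python
# Numerical evaluation of Cramér's constant
#   (rho * EX_1 / r) / int_0^inf x e^{rx} (1 - F_{X_1}(x)) dx
# for Exp(gamma) claims, where 1 - F_{X_1}(x) = e^{-gamma x}.
import math

gamma, lam, c = 3.0, 1.0, 1.0
ex1 = 1.0 / gamma                      # EX_1
rho = c / (lam * ex1) - 1.0            # from c = (1+rho)*lam*EX_1
r = gamma - lam / c                    # Lundberg coefficient

# trapezoidal rule for int_0^60 x e^{-(gamma - r) x} dx (tail negligible)
h, x, prev, integral = 1e-4, 0.0, 0.0, 0.0
while x < 60.0:
    x += h
    cur = x * math.exp(-(gamma - r) * x)
    integral += 0.5 * h * (prev + cur)
    prev = cur

C = (rho * ex1 / r) / integral
print(C, 1.0 / (1.0 + rho))            # both close to 1/3
```

The numeric constant agrees with 1/(1+ρ), so Cramér's bound reproduces the exact exponential-claims asymptotics ψ(u) ~ e^{−ru}/(1+ρ).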


Chapter 6

Probability of ruin: large claim sizes

6.1 Tails of claim size distributions

So far we did not discuss how to separate small and large claim sizes, and how to choose the distributions to model the claim sizes (X_i). If one analyzes data of claim sizes that have occurred in the past, for example by a histogram or a QQ-plot (see Chapter 7), it turns out that the distribution is either light-tailed or heavy-tailed, where the latter is more often the case.

Definition. Let X : Ω → R be a random variable with X(ω) ≥ 0 for all ω ∈ Ω and with distribution function F, i.e.

    F(x) = P({ω ∈ Ω : X(ω) ≤ x}).

(1) The distribution function F, or the random variable X, is called light-tailed if and only if there is some λ > 0 such that one has

    sup_{x≥0} e^{λx} P(X > x) = sup_{x≥0} e^{λx} [1 − F(x)] < ∞.

(2) The distribution function F, or the random variable X, is called heavy-tailed if and only if for all λ > 0 one has

    inf_{x≥0} e^{λx} P(X > x) = inf_{x≥0} e^{λx} [1 − F(x)] > 0.

Example. (1) The exponential distribution Exp(α) is light-tailed for all α > 0, since the distribution function is F(x) = 1 − e^{−αx}, x ≥ 0, so that

    e^{λx} [1 − F(x)] = e^{λx} e^{−αx} = e^{(λ−α)x},

and by choosing 0 < λ < α we get

    sup_{x≥0} e^{(λ−α)x} = 1 < ∞;   in fact   sup_{x≥n} e^{(λ−α)x} = e^{(λ−α)n} → 0   as n → ∞.

(2) The Pareto distribution is heavy-tailed. The distribution function is

    F(x) = 1 − κ^α/(κ+x)^α   for x ≥ 0, α > 0, κ > 0,

or, in an alternative parametrization,

    F(x) = 1 − b^a x^{−a}   for x ≥ b > 0, a > 0.

Indeed, for every λ > 0 one has e^{λx} κ^α/(κ+x)^α → ∞ as x → ∞, since the exponential grows faster than any polynomial.

Proposition. If X : Ω → R is light-tailed, then there is an h_0 > 0 such that the moment generating function

    h ↦ Ee^{hX}

is finite on (−h_0, h_0).

Proof. Since X ≥ 0, we have Ee^{hX} ≤ 1 for h ≤ 0, so it is sufficient to find an h > 0 such that Ee^{hX} < ∞. Let λ > 0 and C > 0 be such that 1 − F(x) ≤ Ce^{−λx} for all x ≥ 0. We get

    Ee^{hX} = ∫_0^∞ P(e^{hX} > x) dx ≤ 1 + ∫_1^∞ P(e^{hX} > x) dx
            = 1 + ∫_1^∞ P( X > (1/h) log x ) dx
            = 1 + ∫_1^∞ [ 1 − F( (1/h) log x ) ] dx
            ≤ 1 + C ∫_1^∞ e^{−(λ/h) log x} dx
            = 1 + C ∫_1^∞ x^{−λ/h} dx < ∞

for 0 < h < λ, since then λ/h > 1. □
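The two examples can be contrasted numerically: for Exp(α) the quantity e^{λx}P(X > x) stays bounded whenever λ < α, while for the Pareto distribution it explodes for every λ > 0. A minimal sketch with illustrative parameter values (α = 2, Pareto exponent a = 2, κ = 1):

```python
# Comparison of e^{lam*x} * P(X > x) for a light tail (Exp) and a heavy
# tail (Pareto); the tail functions below are the exact closed forms.
import math

alpha = 2.0                            # Exp(alpha)
a, kappa = 2.0, 1.0                    # Pareto with exponent a
exp_tail = lambda x: math.exp(-alpha * x)
pareto_tail = lambda x: kappa**a / (kappa + x) ** a

lam = 1.0                              # some 0 < lam < alpha
for x in (1.0, 10.0, 50.0):
    print(x, math.exp(lam * x) * exp_tail(x),
             math.exp(lam * x) * pareto_tail(x))
```

Already at x = 50 the exponential column has decayed towards 0 while the Pareto column exceeds 10^18, which is the practical face of the heavy-tail definition above.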

6.2 Subexponential distributions

6.2.1 Definition and basic properties

For the asymptotics of the ruin probability below we need subexponential distributions:

Definition. A distribution function F : R → [0, 1] such that F(0) = 0 and F(x) < 1 for all x > 0 is called subexponential if and only if for i.i.d. (X_i)_{i≥1} with P(X_i ≤ u) = F(u), u ∈ R, it holds that

    lim_{x→∞} P(X_1 + ⋯ + X_n > x) / P( max_{1≤k≤n} X_k > x ) = 1   for all n ≥ 2.

We denote the class of subexponential distribution functions by S.

We start with an equivalence that can be used to define S as well.

Proposition (Equivalent conditions for S, part I). Assume i.i.d. (X_i)_{i≥1} with P(X_i ≤ u) = F(u), u ∈ R. Then F ∈ S if and only if

    lim_{x→∞} P(X_1 + ⋯ + X_n > x) / P(X_1 > x) = n   for all n ≥ 1.

Proof. It holds for S_n := X_1 + ⋯ + X_n that

    P( max_{1≤k≤n} X_k > x ) = 1 − P( max_{1≤k≤n} X_k ≤ x ) = 1 − P(X_1 ≤ x)^n = 1 − ( 1 − P(X_1 > x) )^n = P(X_1 > x) n (1 + o(1))

as x → ∞, so that P(S_n > x)/P(max_{1≤k≤n} X_k > x) → 1 if and only if P(S_n > x)/P(X_1 > x) → n. □

Next we continue with properties of subexponential distributions that motivate the name "subexponential":

Proposition. Assume i.i.d. random variables (X_i)_{i≥1} and a random variable X with P(X ≤ u) = P(X_i ≤ u) = F(u), u ∈ R. Then one has the following assertions:

(1) For F ∈ S it holds

    lim_{x→∞} F̄(x−y)/F̄(x) = 1   for all y > 0.

(2) If F ∈ S, then for all ε > 0 it holds

    e^{εx} P(X > x) → ∞   as x → ∞.

(3) If F ∈ S, then for all ε > 0 there exists a K > 0 such that

    P(S_n > x)/P(X_1 > x) ≤ K(1+ε)^n   for all n ≥ 2 and x ≥ 0.

For the proof we need the concept of slowly varying functions:

Definition (Slowly varying functions). A measurable function L : [0, ∞) → (0, ∞) is called slowly varying if

    lim_{ξ→∞} L(cξ)/L(ξ) = 1   for all c > 0.

Proposition (Karamata's representation). Any slowly varying function L can be represented as

    L(ξ) = c_0(ξ) exp( ∫_{ξ_0}^ξ (ε(t)/t) dt )   for all ξ ≥ ξ_0,

for some ξ_0 > 0, where c_0, ε : [ξ_0, ∞) → R are measurable functions with lim_{ξ→∞} c_0(ξ) = c_0 > 0 and lim_{t→∞} ε(t) = 0.

Corollary. For any slowly varying function L it holds

    lim_{ξ→∞} ξ^δ L(ξ) = ∞   for all δ > 0.

Proof. We can enlarge ξ_0 such that sup_{t≥ξ_0} |ε(t)| < δ. With this choice we get

    ξ^δ L(ξ) = c_0(ξ) exp( δ log ξ + ∫_{ξ_0}^ξ (ε(t)/t) dt )

    ≥ c_0(ξ) exp( δ log ξ − sup_{t≥ξ_0} |ε(t)| ∫_{ξ_0}^ξ dt/t )
    = c_0(ξ) exp( δ log ξ − (log ξ − log ξ_0) sup_{t≥ξ_0} |ε(t)| )
    → ∞   as ξ → ∞. □

Proof of the Proposition. (1) For 0 ≤ y ≤ x we have

    P(X_1 + X_2 > x) = ∫_R P(t + X > x) dF(t) = F̄(x) + ∫_{[0,x]} F̄(x−t) dF(t),

so that

    P(X_1 + X_2 > x)/P(X_1 > x) = 1 + ∫_{[0,y]} ( F̄(x−t)/F̄(x) ) dF(t) + ∫_{(y,x]} ( F̄(x−t)/F̄(x) ) dF(t)
                                ≥ 1 + F(y) + ( F̄(x−y)/F̄(x) ) ( F(x) − F(y) ).

We choose x large enough such that F(x) − F(y) > 0 and observe that

    1 ≤ F̄(x−y)/F̄(x) ≤ ( P(X_1 + X_2 > x)/P(X_1 > x) − 1 − F(y) ) / ( F(x) − F(y) ) → (2 − 1 − F(y))/(1 − F(y)) = 1

as x → ∞, since F ∈ S. Hence F̄(x−y)/F̄(x) → 1.

(2) Let L(ξ) := F̄(log ξ). It follows from (1) that for all c > 0 one has

    lim_{ξ→∞} L(cξ)/L(ξ) = lim_{ξ→∞} F̄(log c + log ξ)/F̄(log ξ) = 1.

By definition, L is slowly varying. Therefore,

    lim_{ξ→∞} ξ^δ F̄(log ξ) = ∞,   i.e.   lim_{x→∞} e^{δx} F̄(x) = ∞   for all δ > 0,

where we use the Corollary above.

(3) The proof can be found in [5, Lemma 1.3.5]. □
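The equivalent condition P(X_1 + ⋯ + X_n > x)/P(X_1 > x) → n can be illustrated by Monte Carlo for a Pareto distribution (shown below to be subexponential). The threshold x = 30 and the sample size are illustrative assumptions; for moderate x the ratio sits slightly above the limit 2:

```python
# Monte Carlo illustration: for Pareto claim sizes, P(X_1 + X_2 > x) is
# roughly 2 * P(X_1 > x) for large x -- the sum is large because a
# single summand is large ("single big jump").
import random

def pareto_sample(rng, a=2.0, kappa=1.0):
    # inverse transform for F(x) = 1 - (kappa/(kappa + x))**a,
    # using u = 1 - U so that u lies in (0, 1]
    u = 1.0 - rng.random()
    return kappa * (u ** (-1.0 / a) - 1.0)

rng = random.Random(7)
x, n, hits = 30.0, 1_000_000, 0
for _ in range(n):
    if pareto_sample(rng) + pareto_sample(rng) > x:
        hits += 1
tail = 1.0 / (1.0 + x) ** 2            # exact P(X > x) for a = 2, kappa = 1
ratio = (hits / n) / tail
print(ratio)                           # near the limit n = 2
```

Repeating the experiment with larger x drives the ratio towards 2, while for a light-tailed distribution such as Exp the corresponding ratio would explode, reflecting that Exp is not subexponential.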

6.2.2 Examples

In order to consider fundamental examples we need the next lemma:

Lemma. Let X_1, X_2 be independent positive random variables such that, for some α > 0,

    F̄_{X_i}(x) = L_i(x)/x^α,   i = 1, 2,

where L_1, L_2 are slowly varying. Then

    F̄_{X_1+X_2}(x) = x^{−α} ( L_1(x) + L_2(x) ) (1 + o(1))   as x → ∞.

Proof. For 0 < δ < 1/2 we have

    {X_1 + X_2 > x} ⊆ {X_1 > (1−δ)x} ∪ {X_2 > (1−δ)x} ∪ {X_1 > δx, X_2 > δx}

and hence

    P(X_1 + X_2 > x) ≤ F̄_{X_1}((1−δ)x) + F̄_{X_2}((1−δ)x) + F̄_{X_1}(δx) F̄_{X_2}(δx)
                     = [ F̄_{X_1}((1−δ)x) + F̄_{X_2}((1−δ)x) ] [1 + o(1)],

since, by the slow variation of L_1 and L_2, the product F̄_{X_1}(δx)F̄_{X_2}(δx) is of smaller order than F̄_{X_1}((1−δ)x) + F̄_{X_2}((1−δ)x). Consequently,

    P(X_1 + X_2 > x) ≤ [ L_1((1−δ)x)/((1−δ)x)^α + L_2((1−δ)x)/((1−δ)x)^α ] [1 + o(1)].

From this we get

    limsup_{x→∞} P(X_1 + X_2 > x) / ( F̄_{X_1}(x) + F̄_{X_2}(x) )
    ≤ (1−δ)^{−α} limsup_{x→∞} ( (L_1((1−δ)x)/L_1(x)) F̄_{X_1}(x) + (L_2((1−δ)x)/L_2(x)) F̄_{X_2}(x) ) / ( F̄_{X_1}(x) + F̄_{X_2}(x) )
    = (1−δ)^{−α}.

As this is true for all 0 < δ < 1/2, we may conclude

    limsup_{x→∞} P(X_1 + X_2 > x) / ( F̄_{X_1}(x) + F̄_{X_2}(x) ) ≤ 1.

On the other hand,

    P(X_1 + X_2 > x) ≥ P({X_1 > x} ∪ {X_2 > x}) = P(X_1 > x) + P(X_2 > x) − P(X_1 > x)P(X_2 > x)
                     ≥ [ F̄_{X_1}(x) + F̄_{X_2}(x) ] [ 1 − F̄_{X_1}(x) ],

and hence

    liminf_{x→∞} P(X_1 + X_2 > x) / ( F̄_{X_1}(x) + F̄_{X_2}(x) ) ≥ 1.

Consequently,

    lim_{x→∞} P(X_1 + X_2 > x) / ( F̄_{X_1}(x) + F̄_{X_2}(x) ) = 1. □

Definition. If there exists a slowly varying function L and some α > 0 such that for a positive random variable X it holds

    F̄_X(x) = L(x)/x^α,

then F_X is called regularly varying with index α, or of Pareto type with exponent α.

Proposition. If F_X is regularly varying with index α, then F_X is subexponential.

Proof. An iteration of the Lemma above implies

    F̄_{X_1+⋯+X_n}(x) / F̄_X(x) = ( (L(x) + ⋯ + L(x))/L(x) ) (1 + o(1)) → n   as x → ∞,

so that F_X ∈ S by the Proposition (Equivalent conditions for S, part I). □

Example. (1) The exponential distribution with parameter λ > 0 is not subexponential.


Asymptotics of random sums of heavy-tailed negatively dependent random variables with applications Asymptotics of random sums of heavy-tailed negatively dependent random variables with applications Remigijus Leipus (with Yang Yang, Yuebao Wang, Jonas Šiaulys) CIRM, Luminy, April 26-30, 2010 1. Preliminaries

More information

Ruin probabilities of the Parisian type for small claims

Ruin probabilities of the Parisian type for small claims Ruin probabilities of the Parisian type for small claims Angelos Dassios, Shanle Wu October 6, 28 Abstract In this paper, we extend the concept of ruin in risk theory to the Parisian type of ruin. For

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

An Introduction to Malliavin calculus and its applications

An Introduction to Malliavin calculus and its applications An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart

More information

On Finite-Time Ruin Probabilities in a Risk Model Under Quota Share Reinsurance

On Finite-Time Ruin Probabilities in a Risk Model Under Quota Share Reinsurance Applied Mathematical Sciences, Vol. 11, 217, no. 53, 269-2629 HIKARI Ltd, www.m-hikari.com https://doi.org/1.12988/ams.217.7824 On Finite-Time Ruin Probabilities in a Risk Model Under Quota Share Reinsurance

More information

A Dynamic Contagion Process with Applications to Finance & Insurance

A Dynamic Contagion Process with Applications to Finance & Insurance A Dynamic Contagion Process with Applications to Finance & Insurance Angelos Dassios Department of Statistics London School of Economics Angelos Dassios, Hongbiao Zhao (LSE) A Dynamic Contagion Process

More information

Lecture 21: Expectation of CRVs, Fatou s Lemma and DCT Integration of Continuous Random Variables

Lecture 21: Expectation of CRVs, Fatou s Lemma and DCT Integration of Continuous Random Variables EE50: Probability Foundations for Electrical Engineers July-November 205 Lecture 2: Expectation of CRVs, Fatou s Lemma and DCT Lecturer: Krishna Jagannathan Scribe: Jainam Doshi In the present lecture,

More information

18.175: Lecture 3 Integration

18.175: Lecture 3 Integration 18.175: Lecture 3 Scott Sheffield MIT Outline Outline Recall definitions Probability space is triple (Ω, F, P) where Ω is sample space, F is set of events (the σ-algebra) and P : F [0, 1] is the probability

More information

Upper Bounds for the Ruin Probability in a Risk Model With Interest WhosePremiumsDependonthe Backward Recurrence Time Process

Upper Bounds for the Ruin Probability in a Risk Model With Interest WhosePremiumsDependonthe Backward Recurrence Time Process Λ4flΛ4» ν ff ff χ Vol.4, No.4 211 8fl ADVANCES IN MATHEMATICS Aug., 211 Upper Bounds for the Ruin Probability in a Risk Model With Interest WhosePremiumsDependonthe Backward Recurrence Time Process HE

More information

VaR vs. Expected Shortfall

VaR vs. Expected Shortfall VaR vs. Expected Shortfall Risk Measures under Solvency II Dietmar Pfeifer (2004) Risk measures and premium principles a comparison VaR vs. Expected Shortfall Dependence and its implications for risk measures

More information

Feuille d exercices n 1 : Poisson processes. 1. Give a necessary and su cient condition for having

Feuille d exercices n 1 : Poisson processes. 1. Give a necessary and su cient condition for having Processus de Poisson et méthodes actuarielles (25-26) Feuille d exercices n : Poisson processes Exercise. Let ( n,n ) be a sequence of IID (independent and identically distributed) non-negative random

More information

SOLUTION FOR HOMEWORK 11, ACTS 4306

SOLUTION FOR HOMEWORK 11, ACTS 4306 SOLUTION FOR HOMEWORK, ACTS 36 Welcome to your th homework. This is a collection of transformation, Central Limit Theorem (CLT), and other topics.. Solution: By definition of Z, Var(Z) = Var(3X Y.5). We

More information

Chp 4. Expectation and Variance

Chp 4. Expectation and Variance Chp 4. Expectation and Variance 1 Expectation In this chapter, we will introduce two objectives to directly reflect the properties of a random variable or vector, which are the Expectation and Variance.

More information

Product measure and Fubini s theorem

Product measure and Fubini s theorem Chapter 7 Product measure and Fubini s theorem This is based on [Billingsley, Section 18]. 1. Product spaces Suppose (Ω 1, F 1 ) and (Ω 2, F 2 ) are two probability spaces. In a product space Ω = Ω 1 Ω

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Notes on Stochastic Calculus

Notes on Stochastic Calculus Notes on Stochastic Calculus David Nualart Kansas University nualart@math.ku.edu 1 Stochastic Processes 1.1 Probability Spaces and Random Variables In this section we recall the basic vocabulary and results

More information

Probability Theory. Richard F. Bass

Probability Theory. Richard F. Bass Probability Theory Richard F. Bass ii c Copyright 2014 Richard F. Bass Contents 1 Basic notions 1 1.1 A few definitions from measure theory............. 1 1.2 Definitions............................. 2

More information

Solution: The process is a compound Poisson Process with E[N (t)] = λt/p by Wald's equation.

Solution: The process is a compound Poisson Process with E[N (t)] = λt/p by Wald's equation. Solutions Stochastic Processes and Simulation II, May 18, 217 Problem 1: Poisson Processes Let {N(t), t } be a homogeneous Poisson Process on (, ) with rate λ. Let {S i, i = 1, 2, } be the points of the

More information

Math 576: Quantitative Risk Management

Math 576: Quantitative Risk Management Math 576: Quantitative Risk Management Haijun Li lih@math.wsu.edu Department of Mathematics Washington State University Week 11 Haijun Li Math 576: Quantitative Risk Management Week 11 1 / 21 Outline 1

More information

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes

STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes STAT331 Lebesgue-Stieltjes Integrals, Martingales, Counting Processes This section introduces Lebesgue-Stieltjes integrals, and defines two important stochastic processes: a martingale process and a counting

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

Rare Events in Random Walks and Queueing Networks in the Presence of Heavy-Tailed Distributions

Rare Events in Random Walks and Queueing Networks in the Presence of Heavy-Tailed Distributions Rare Events in Random Walks and Queueing Networks in the Presence of Heavy-Tailed Distributions Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk The University of

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

Stochastic Proceses Poisson Process

Stochastic Proceses Poisson Process Stochastic Proceses 2014 1. Poisson Process Giovanni Pistone Revised June 9, 2014 1 / 31 Plan 1. Poisson Process (Formal construction) 2. Wiener Process (Formal construction) 3. Infinitely divisible distributions

More information

Lecture Notes: Actuarial Risk Theory Summer Term 2013

Lecture Notes: Actuarial Risk Theory Summer Term 2013 Lecture Notes: Actuarial Risk Theory Summer Term 213 Prof. Dr. Nina Gantert 31st October 213 Produced by Benedikt Schamberger Please send errors to b.schamberger@tum.de Contents 1 Preface 1 2 The risk

More information

Worst-Case-Optimal Dynamic Reinsurance for Large Claims

Worst-Case-Optimal Dynamic Reinsurance for Large Claims Worst-Case-Optimal Dynamic Reinsurance for Large Claims by Olaf Menkens School of Mathematical Sciences Dublin City University (joint work with Ralf Korn and Mogens Steffensen) LUH-Kolloquium Versicherungs-

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan

Chapter 2. Poisson Processes. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Chapter 2. Poisson Processes Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Outline Introduction to Poisson Processes Definition of arrival process Definition

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Stat410 Probability and Statistics II (F16)

Stat410 Probability and Statistics II (F16) Stat4 Probability and Statistics II (F6 Exponential, Poisson and Gamma Suppose on average every /λ hours, a Stochastic train arrives at the Random station. Further we assume the waiting time between two

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

On the convergence of sequences of random variables: A primer

On the convergence of sequences of random variables: A primer BCAM May 2012 1 On the convergence of sequences of random variables: A primer Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM May 2012 2 A sequence a :

More information

Optimal Reinsurance Strategy with Bivariate Pareto Risks

Optimal Reinsurance Strategy with Bivariate Pareto Risks University of Wisconsin Milwaukee UWM Digital Commons Theses and Dissertations May 2014 Optimal Reinsurance Strategy with Bivariate Pareto Risks Evelyn Susanne Gaus University of Wisconsin-Milwaukee Follow

More information

ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata

ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA (krzysio@krzysio.net) Errata Effective July 5, 3, only the latest edition of this manual will have its errata

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca October 22nd, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

PROBABILITY THEORY II

PROBABILITY THEORY II Ruprecht-Karls-Universität Heidelberg Institut für Angewandte Mathematik Prof. Dr. Jan JOHANNES Outline of the lecture course PROBABILITY THEORY II Summer semester 2016 Preliminary version: April 21, 2016

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

Probability and Distributions

Probability and Distributions Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated

More information

Exam Stochastic Processes 2WB08 - March 10, 2009,

Exam Stochastic Processes 2WB08 - March 10, 2009, Exam Stochastic Processes WB8 - March, 9, 4.-7. The number of points that can be obtained per exercise is mentioned between square brackets. The maximum number of points is 4. Good luck!!. (a) [ pts.]

More information

3 Conditional Expectation

3 Conditional Expectation 3 Conditional Expectation 3.1 The Discrete case Recall that for any two events E and F, the conditional probability of E given F is defined, whenever P (F ) > 0, by P (E F ) P (E)P (F ). P (F ) Example.

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Actuarial Science Exam 1/P

Actuarial Science Exam 1/P Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis

More information

4 Expectation & the Lebesgue Theorems

4 Expectation & the Lebesgue Theorems STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does

More information

conditional cdf, conditional pdf, total probability theorem?

conditional cdf, conditional pdf, total probability theorem? 6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

Quantile-quantile plots and the method of peaksover-threshold

Quantile-quantile plots and the method of peaksover-threshold Problems in SF2980 2009-11-09 12 6 4 2 0 2 4 6 0.15 0.10 0.05 0.00 0.05 0.10 0.15 Figure 2: qqplot of log-returns (x-axis) against quantiles of a standard t-distribution with 4 degrees of freedom (y-axis).

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information