
Lecture Notes on Risk Theory February 2, 21

Contents

1  Introduction and basic definitions
2  Accumulated claims in a fixed time interval
3  Reinsurance
4  Risk processes in discrete time
5  The Adjustment Coefficient
6  Some risk processes in continuous time
   6.1  The Sparre Andersen model
   6.2  The compound Poisson model


Chapter 1  Introduction and basic definitions

An insurance company needs to pay claims from time to time, while collecting premiums from its customers continuously over time. We assume that it starts with an initial risk reserve $u \ge 0$ and that the premium income is linear with some slope $c > 0$. At times $T_n$, $n \in \mathbb{N}$, a claim occurs; thus we call $T_n$ the $n$th claim time. Naturally, we assume that $T_1 > 0$ and $T_{n+1} > T_n$ for all $n \in \mathbb{N}$. The sequence of claim times forms a counting process $N = (N_t : t \ge 0)$, defined by
$$N_t := \max\{n \in \mathbb{N} : T_n \le t\},$$
where we define $\max \emptyset := 0$. This process $N$ is called the claim process. The $n$th claim size is denoted by $U_n$, $n \in \mathbb{N}$. Then the insurance company's risk reserve at any time $t$ is given by
$$R_t = u + c\,t - \sum_{k=1}^{N_t} U_k, \tag{1.1}$$
where the empty sum is defined as zero, i.e. $\sum_{k=1}^{0} U_k = 0$. Since $N_t$ and the $U_k$ are random, $(R_t : t \ge 0)$ is a stochastic process. A typical path looks like figure 1. We see that at time $T_4$ something special happens: the risk reserve $R_{T_4}$ is negative, implying of course the ruin of the insurance company. Therefore, the stopping time
$$\tau(u) := \min\{t \ge 0 : R_t < 0\}$$
is called the time of ruin. The following questions arise:

1. The only random term on the right-hand side of equation (1.1) is the sum $\sum_{k=1}^{N_t} U_k$, representing the accumulated claims until time $t$. We shall study this in chapter 2.

2. An insurance company wants to insure itself against unusually high amounts of claims in any period. Some general results on this so-called reinsurance will be presented in chapter 3.

3. An obvious criterion for the viability of an insurance policy is the ruin probability
$$\psi(u) := P(\tau(u) < \infty).$$
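To make the definitions above concrete, here is a minimal simulation sketch of the risk reserve (1.1) and the ruin time $\tau(u)$. It assumes, purely for illustration, Poisson claim arrivals and exponential claim sizes (the compound Poisson special case treated in chapter 6); the parameter values and the horizon of 50 claims are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# One sample path of R_t = u + c*t - sum_{k=1}^{N_t} U_k, evaluated at the claim
# times T_1, T_2, ...  Since the premium income increases between claims, the
# reserve can only turn negative at a claim epoch, so checking R_{T_n} suffices.
u, c = 10.0, 1.2
lam, mu = 1.0, 1.0                          # illustrative claim intensity and claim-size rate
n_claims = 50

W = rng.exponential(1.0 / lam, n_claims)    # inter-claim times
U = rng.exponential(1.0 / mu, n_claims)     # claim sizes U_n
T = np.cumsum(W)                            # claim times T_n
R = u + c * T - np.cumsum(U)                # reserve immediately after each claim

ruin = np.flatnonzero(R < 0)
tau = T[ruin[0]] if ruin.size else np.inf   # time of ruin tau(u), or infinity if no ruin occurred
print("reserve after the first five claims:", np.round(R[:5], 3))
print("time of ruin within", n_claims, "claims:", tau)
```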

Chapter 2  Accumulated claims in a fixed time interval

Define $X := \sum_{k=1}^{N_t} U_k$ for some time $t > 0$ that shall be fixed throughout this chapter. We assume that the claim sizes $U_n$, $n \in \mathbb{N}$, are iid and further that $N_t$ and all $U_n$ are independent. Denote
$$p_k := P(N_t = k)$$
for $k \in \mathbb{N}_0$ and write
$$\delta_0(x) := \begin{cases} 1, & x \ge 0,\\ 0, & x < 0 \end{cases}$$
for the distribution function of the Dirac measure on zero. Let $Q$ denote the distribution function of $U_1$. Further denote the distribution function of the sum of a fixed number of claims by
$$R_k(x) := P(U_1 + \dots + U_k \le x)$$
for $k \in \mathbb{N}$ and $x \ge 0$. In order to obtain $R_k$ in terms of $Q$, we define $Q^{*0} := \delta_0$, $Q^{*1} := Q$, and
$$Q^{*(n+1)}(x) := \int_0^x Q^{*n}(x-y)\,dQ(y)$$
for $n \in \mathbb{N}$. The distribution function $Q^{*n}$ is called the $n$th convolutional power of $Q$.

Lemma 2.1  $R_k = Q^{*k}$ for all $k \in \mathbb{N}$.

Proof: For $k = 1$ the statement holds by definition. The induction step from $k$ to $k+1$ is straightforward:
$$R_{k+1}(x) = P(U_1 + \dots + U_{k+1} \le x) = \int_0^x P(U_1 + \dots + U_k \le x-y)\,dQ(y) = \int_0^x Q^{*k}(x-y)\,dQ(y) = Q^{*(k+1)}(x),$$
where the second equality holds by conditioning on $U_{k+1} = y$, the third by the induction hypothesis, and the last by definition.

Theorem 2.1  $P(X \le x) = \sum_{k=0}^{\infty} p_k\,Q^{*k}(x)$

Proof:
$$P(X \le x) = \sum_{k=0}^{\infty} P(X \le x,\; N_t = k) = \sum_{k=0}^{\infty} p_k\,P(X \le x \mid N_t = k) = \sum_{k=0}^{\infty} p_k\,Q^{*k}(x)$$
by definition of $X = \sum_{k=1}^{N_t} U_k$ and lemma 2.1.

With $p := 1 - p_0$ and $\tilde Q := \sum_{k=1}^{\infty} \frac{p_k}{p}\,Q^{*k}$ we can write
$$P(X \le x) = (1-p)\,\delta_0(x) + p\,\tilde Q(x)$$
for $x \ge 0$. The problem remains of course to determine a simple expression for $\tilde Q(x)$. This is possible in the following

Example 2.1  Assume that $N_t$ has a geometric distribution, i.e. $p_k = (1-p)p^k$ for all $k \in \mathbb{N}_0$, and $Q = \mathrm{Exp}(\mu)$. Then $Q^{*k} = \mathrm{Erl}_k(\mu)$, with density function
$$f_k(x) = \mu\,\frac{(\mu x)^{k-1}}{(k-1)!}\,e^{-\mu x}$$

for $x \ge 0$ (proof as an exercise). Further, $\tilde Q$ has the density function
$$f(x) = \sum_{k=1}^{\infty} \frac{p_k}{p}\,\mu\,\frac{(\mu x)^{k-1}}{(k-1)!}\,e^{-\mu x} = \sum_{k=1}^{\infty} (1-p)p^{k-1}\,\mu\,\frac{(\mu x)^{k-1}}{(k-1)!}\,e^{-\mu x} = (1-p)\mu\,e^{-\mu x}\sum_{k=1}^{\infty}\frac{(p\mu x)^{k-1}}{(k-1)!} = (1-p)\mu\,e^{-\mu x}\,e^{p\mu x} = (1-p)\mu\,e^{-(1-p)\mu x}$$
for all $x \ge 0$. Hence $\tilde Q = \mathrm{Exp}((1-p)\mu)$ and
$$P(X \le x) = (1-p)\,\delta_0(x) + p\,\big(1 - e^{-(1-p)\mu x}\big) = 1 - p\,e^{-(1-p)\mu x}$$
for $x \ge 0$.

Theorem 2.2  $E(X) = E(N_t)\,E(U_1)$

Proof:
$$E(X) = E\Big(\sum_{k=1}^{N_t} U_k\Big) = p_0 \cdot 0 + \sum_{k=1}^{\infty} p_k\,E(U_1 + \dots + U_k) = \sum_{k=1}^{\infty} p_k\,k\,E(U_1) = E(N_t)\,E(U_1)$$

Theorem 2.3  $\mathrm{Var}(X) = E(N_t)\,\mathrm{Var}(U_1) + \mathrm{Var}(N_t)\,E(U_1)^2 \le E(U_1^2)\,\max\big(E(N_t),\,\mathrm{Var}(N_t)\big)$

Proof:
$$\mathrm{Var}(X) = E(X^2) - E(X)^2 = \sum_{k=1}^{\infty} p_k\,E\Big(\Big(\sum_{n=1}^{k} U_n\Big)^2\Big) - E(N_t)^2\,E(U_1)^2$$
according to theorem 2.2. For the expectation in the first term, we obtain
$$E\Big(\Big(\sum_{n=1}^{k} U_n\Big)^2\Big) = E\Big(\sum_{i=1}^{k}\sum_{j=1}^{k} U_i U_j\Big) = \sum_{n=1}^{k} E(U_n^2) + \sum_{i \ne j} E(U_i U_j)$$

Since all $U_n$ are iid, we arrive at
$$E\Big(\Big(\sum_{n=1}^{k} U_n\Big)^2\Big) = k\,E(U_1^2) + k(k-1)\,E(U_1)^2.$$
Hence
$$\mathrm{Var}(X) = \sum_{k=1}^{\infty} p_k\,k\,\big(E(U_1^2) - E(U_1)^2\big) + \sum_{k=1}^{\infty} p_k\,k^2\,E(U_1)^2 - E(N_t)^2\,E(U_1)^2 = E(N_t)\,\mathrm{Var}(U_1) + \mathrm{Var}(N_t)\,E(U_1)^2.$$
The inequality follows from here via $E(U_1^2) = \mathrm{Var}(U_1) + E(U_1)^2$.

Example 2.2  If $N$ is a Poisson process, then $E(N_t) = \mathrm{Var}(N_t)$ and $\mathrm{Var}(X) = E(N_t)\,E(U_1^2)$.
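The closed form of example 2.1 and the moment formulas of theorems 2.2 and 2.3 are easy to check by simulation. The following sketch does this for illustrative parameter values; the geometric distribution is parameterised as in example 2.1, i.e. $P(N_t = k) = (1-p)p^k$ for $k \ge 0$.

```python
import numpy as np

rng = np.random.default_rng(2)
p, mu, n_samples = 0.6, 1.0, 200_000            # illustrative parameters

# X = U_1 + ... + U_N with P(N = k) = (1 - p) p^k, k = 0, 1, ..., and U_i ~ Exp(mu).
# numpy's geometric counts trials up to the first success, so shift it to start at 0.
N = rng.geometric(1 - p, size=n_samples) - 1
X = np.array([rng.exponential(1.0 / mu, size=k).sum() for k in N])

# Example 2.1: P(X <= x) = 1 - p * exp(-(1 - p) * mu * x)
x = 2.0
print("empirical  P(X <= x):", np.mean(X <= x))
print("analytical P(X <= x):", 1 - p * np.exp(-(1 - p) * mu * x))

# Theorems 2.2 and 2.3 with E(N) = p/(1-p), Var(N) = p/(1-p)^2, E(U) = 1/mu, Var(U) = 1/mu^2
EN, VarN = p / (1 - p), p / (1 - p) ** 2
print("mean:", X.mean(), "vs", EN / mu)
print("var :", X.var(), "vs", EN / mu**2 + VarN / mu**2)
```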

Chapter 3  Reinsurance

Let $X$ denote the random variable of the accumulated claims after some fixed time $t > 0$ (e.g. after one year). $X$ is also called the risk at time $t$. Let $F$ denote the distribution function of $X$. A reinsurance policy is a function $R : \mathbb{R}_+ \to \mathbb{R}_+$ such that $R(x) \le x$ for all $x \in \mathbb{R}_+$. In the event $\{X = x\}$ the insurance company pays $R(x)$ of the claims and the reinsurer pays the rest, $x - R(x)$. We assume first that the reinsurance premium is the fair net premium
$$P = P(R) = E\big(X - R(X)\big)$$
without safety loading.

A special reinsurance policy is the stop loss reinsurance at price $P$, defined as
$$R^*(x) = \begin{cases} x, & x \le M,\\ M, & x > M, \end{cases}$$
where $M$ is the unique solution of
$$P = \int_M^{\infty} (x - M)\,dF(x).$$
$M$ is called the deductible.

Example 3.1  Assume that $X \sim F := (1-p)\,\delta_0 + p\,\mathrm{Exp}((1-p)\mu)$ as in example 2.1. Set $\beta := (1-p)\mu$. Then $dF(x) = p\,\beta e^{-\beta x}\,dx$ for $x > 0$ and
$$P = \int_M^{\infty} (x - M)\,dF(x) = p\,\Big(\int_M^{\infty} x\,\beta e^{-\beta x}\,dx - M\int_M^{\infty} \beta e^{-\beta x}\,dx\Big).$$

Integration by parts yields for the first integral
$$\int_M^{\infty} x\,\beta e^{-\beta x}\,dx = \big[-x\,e^{-\beta x}\big]_M^{\infty} + \int_M^{\infty} e^{-\beta x}\,dx = M\,e^{-\beta M} + \frac{1}{\beta}\,e^{-\beta M}.$$
As the second integral equals $e^{-\beta M}$, we obtain
$$P = p\,\frac{1}{\beta}\,e^{-\beta M},$$
from which the deductible is determined as
$$M = -\frac{1}{\beta}\,\ln\Big(\frac{\beta P}{p}\Big).$$
Thus the deductible has a meaningful (i.e. positive) value if and only if the condition
$$P < \frac{p}{\beta} = E(X)$$
holds. This is reasonable, as for larger premiums $P > E(X)$ the insurance company would do better to cover the risk itself.

Theorem 3.1  For a fixed fair net premium $P$ the stop loss reinsurance yields the smallest variance $\mathrm{Var}(R(X))$ of all reinsurance policies.

Proof: Let $R$ denote any reinsurance policy with fair net premium $P$. Define $P_1 := E(X) - P = E(R(X))$ and note that $P_1 = E(R^*(X))$, too. Then, using $\int_0^\infty R(x)\,dF(x) = P_1$ and $\int_0^\infty dF(x) = 1$,
$$\mathrm{Var}(R(X)) + P_1^2 = \int_0^{\infty} R^2(x)\,dF(x) = \int_0^{\infty} \big(R(x) - M\big)^2\,dF(x) + 2MP_1 - M^2 \ge \int_0^{M} \big(R(x) - M\big)^2\,dF(x) + 2MP_1 - M^2.$$
Over the range $0 < x < M$ we obtain for the integrand
$$\big(R(x) - M\big)^2 = \big(M - R(x)\big)^2 \ge (M - x)^2 = \big(M - R^*(x)\big)^2 = \big(R^*(x) - M\big)^2.$$
As $R^*(x) = M$ for $x > M$, we further get
$$\mathrm{Var}(R(X)) + P_1^2 \ge \int_0^{\infty} \big(R^*(x) - M\big)^2\,dF(x) + 2MP_1 - M^2 = \mathrm{Var}(R^*(X)) + P_1^2,$$

which implies the statement.

Now assume that the reinsurance premium is of the form
$$P(R) = E\big(X - R(X)\big) + f\big(\mathrm{Var}(X - R(X))\big),$$
where $f$ is some positive increasing function. The term $f(\mathrm{Var}(X - R(X)))$ is called the safety loading.

Theorem 3.2  Given a fixed value $V > 0$ and the condition $\mathrm{Var}(R(X)) = V$, the cheapest reinsurance policy $R$ for the premium $P$ is given by
$$R_V(x) = \sqrt{\frac{V}{\mathrm{Var}(X)}}\;x$$
for all $x \ge 0$.

Remark 3.1  The term cheapest is to be understood in the following sense: After the time period $[0, t]$, the insurance company will have to pay the premium $P(R)$ to the reinsurer and $R(X)$ as its own contribution to cover the accumulated claims. Altogether, the expected amount of this is
$$E\big(X - R(X)\big) + f\big(\mathrm{Var}(X - R(X))\big) + E\big(R(X)\big) = E(X) + f\big(\mathrm{Var}(X - R(X))\big).$$
The goal is to minimise this expectation.

Proof: By the above remark it suffices to minimise $f(\mathrm{Var}(X - R(X)))$ and hence $\mathrm{Var}(X - R(X))$, as $f$ is increasing. For this we can write
$$\mathrm{Var}(X - R(X)) = \mathrm{Var}(X) + \mathrm{Var}(R(X)) - 2\,\mathrm{Cov}(X, R(X)),$$
which shows that we need to maximise the covariance $\mathrm{Cov}(X, R(X))$. By the Cauchy-Schwarz inequality, $\mathrm{Cov}(X, R(X)) \le \sqrt{\mathrm{Var}(X)\,V}$, with equality when $R(X)$ is a linear function of $X$; hence the maximum is achieved by the form $R(X) = a\,X$ for a constant $a > 0$. Now the condition $\mathrm{Var}(R(X)) = V$ yields $V = a^2\,\mathrm{Var}(X)$, i.e.
$$a = \sqrt{\frac{V}{\mathrm{Var}(X)}},$$
which is the statement.
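As a small numerical companion to example 3.1, the sketch below computes the stop-loss deductible $M = -\beta^{-1}\ln(\beta P/p)$ for illustrative parameter values and checks the defining equation $P = \int_M^\infty (x-M)\,dF(x)$ by Monte Carlo; the chosen premium $P$ is arbitrary but respects $P < E(X)$.

```python
import numpy as np

# Example 3.1: X ~ (1 - p) * delta_0 + p * Exp(beta) with beta = (1 - p) * mu.
p, mu = 0.6, 1.0
beta = (1 - p) * mu
P = 0.8                                    # chosen reinsurance premium; must satisfy P < p / beta = E(X)

M = -np.log(beta * P / p) / beta           # stop-loss deductible
print("E(X) =", p / beta, "  deductible M =", M)

# Sanity check of the fair-premium condition P = E[(X - M)^+] by Monte Carlo.
rng = np.random.default_rng(3)
n = 200_000
X = np.where(rng.random(n) < p, rng.exponential(1 / beta, n), 0.0)
print("E[(X - M)^+] =", np.maximum(X - M, 0).mean(), " (should be close to P =", P, ")")
```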

Chapter 4  Risk processes in discrete time

Let $X_n$ denote the accumulated claims in the time interval $]n-1, n]$, $n \in \mathbb{N}$ (e.g. the $n$th year). We assume that the random variables $X_n$, $n \in \mathbb{N}$, are iid. As in chapter 1, the initial reserve and the rate of premium income are denoted by $u \ge 0$ and $c > 0$. The random variable
$$K_n := u + c\,n - \sum_{k=1}^{n} X_k$$
is called the risk reserve at time $n \in \mathbb{N}_0$. Then the survival probability until time $m$ (with initial reserve $u$) is defined as
$$\bar\psi(u, m) := P(K_n \ge 0 \text{ for all } n \le m).$$

Remark 4.1  Let $X_1 \sim F$. Conditioning on $\{X_1 = x\}$ yields the recursive formula
$$\bar\psi(u, m+1) = \int_0^{u+c} \bar\psi(u + c - x,\, m)\,dF(x)$$
for all $u > 0$ and $m \in \mathbb{N}$. However, this is in general numerically intractable.

The survival probability over infinite time is defined as
$$\bar\psi(u) := \lim_{m \to \infty} \bar\psi(u, m).$$
Clearly, $\bar\psi(u, m)$ is decreasing in $m$ and bounded below by zero, such that the limit $\bar\psi(u)$ does exist.

If we define $Y_n := X_n - c$ as the net claim in $]n-1, n]$ and $S_n := \sum_{k=1}^{n} Y_k$ for all $n \in \mathbb{N}$, then the $Y_n$ are iid (because the $X_n$ are so and $c$ is constant) and

thus $S := (S_n : n \in \mathbb{N}_0)$ is a random walk. $Y_n$ is called the $n$th increment of the random walk. For technical reasons (needed in the third statement of theorem 4.1), we exclude the trivial case by assuming that $P(X_n = c) < 1$, i.e. $P(Y_n = 0) < 1$, for all $n \in \mathbb{N}$.

Theorem 4.1  Let $(S_n : n \in \mathbb{N}_0)$ be a random walk with increments $Y_n$. Then
$$E(Y_1) > 0 \;\Longrightarrow\; \lim_{n\to\infty} S_n = \infty,$$
$$E(Y_1) < 0 \;\Longrightarrow\; \lim_{n\to\infty} S_n = -\infty,$$
$$E(Y_1) = 0 \;\Longrightarrow\; \limsup_{n\to\infty} S_n = \infty,\; \liminf_{n\to\infty} S_n = -\infty,$$
where the implications hold almost certainly, i.e. with probability one.

Proof: The first two statements are immediate consequences of the strong law of large numbers. For a proof of the third statement, see Rolski et al. [3], p. 234f.

We say that the random walk has a positive (resp. negative) drift if $E(Y_1) > 0$ (resp. $E(Y_1) < 0$), and that it is oscillating if $E(Y_1) = 0$. For the remainder of this chapter we assume a negative drift, i.e. the net profit condition $E(X_1) < c$. Define
$$M := \sup\{S_n : n \in \mathbb{N}_0\}$$
and note that theorem 4.1 states that $M < \infty$ almost certainly. The crucial connection to our risk process is now the observation
$$\bar\psi(u) = P(M \le u).$$
Define the (first) strong ascending ladder epoch by
$$\nu^+ := \min\{n \in \mathbb{N} : S_n > 0\}$$
with the understanding that $\min \emptyset := \infty$. The random variable $\nu^+$ is a stopping time with respect to the sequence $Y = (Y_n : n \in \mathbb{N})$, i.e.
$$P(\nu^+ = n \mid Y) = P(\nu^+ = n \mid Y_1, \dots, Y_n)$$
for all $n \in \mathbb{N}$. This means we do not need any future information to determine the event $\{\nu^+ = n\}$.

The corresponding random variable
$$L^+ := \begin{cases} S_{\nu^+}, & \nu^+ < \infty,\\ \infty, & \text{otherwise} \end{cases}$$
is called the (first) strong ascending ladder height. Denote the distribution function of $L^+$ by $G^+(x) := P(L^+ \le x)$ and note that
$$G^+(\infty) := \lim_{x\to\infty} G^+(x) = P(L^+ < \infty)$$
may be strictly less than one.

Theorem 4.2  If $E(Y_1) < 0$, then $E(\nu^+) = \infty$.

Proof: Assume that $E(\nu^+) < \infty$. Then Wald's lemma (see lemma 4.1 below) applies to
$$E(L^+) = E\Big(\sum_{n=1}^{\nu^+} Y_n\Big) = E(\nu^+)\,E(Y_1).$$
But $E(L^+) > 0$, while $E(\nu^+) > 0$ and $E(Y_1) < 0$, which leads to a contradiction. Hence $E(\nu^+) = \infty$.

Lemma 4.1 (Wald's Lemma)  Let $Y = (Y_n : n \in \mathbb{N})$ be a sequence of iid random variables and assume that $E(|Y_1|) < \infty$. Let $S$ be a stopping time for the sequence $Y$ with $E(S) < \infty$. Then
$$E\Big(\sum_{n=1}^{S} Y_n\Big) = E(S)\,E(Y_1).$$

Proof: For all $n \in \mathbb{N}$ define the random variables $I_n := 1$ on the set $\{S \ge n\}$ and $I_n := 0$ else. Then
$$E\Big(\Big|\sum_{n=1}^{S} Y_n\Big|\Big) \le E\Big(\sum_{n=1}^{S} |Y_n|\Big) = E\Big(\sum_{n=1}^{\infty} I_n\,|Y_n|\Big) = \sum_{n=1}^{\infty} E(I_n\,|Y_n|)$$
by monotone convergence, as $I_n\,|Y_n| \ge 0$ for all $n \in \mathbb{N}$. Since $S$ is a stopping time for $Y$, we obtain $P(S \ge 1) = 1$ and further
$$P(S \ge n \mid Y) = 1 - P(S \le n-1 \mid Y) = 1 - P(S \le n-1 \mid Y_1, \dots, Y_{n-1})$$

for all $n \ge 2$. Since the $Y_n$ are independent, the equality above implies that $I_n$ and $Y_n$ are independent, too. This yields
$$E(I_n\,|Y_n|) = E(I_n)\,E(|Y_n|) = P(S \ge n)\,E(|Y_1|)$$
for all $n \in \mathbb{N}$. Now the relation $\sum_{n=1}^{\infty} P(S \ge n) = E(S)$ yields
$$E\Big(\sum_{n=1}^{S} |Y_n|\Big) \le \sum_{n=1}^{\infty} P(S \ge n)\,E(|Y_1|) = E(S)\,E(|Y_1|) < \infty.$$
Now we can use dominated convergence to obtain
$$E\Big(\sum_{n=1}^{S} Y_n\Big) = E\Big(\sum_{n=1}^{\infty} I_n\,Y_n\Big) = \sum_{n=1}^{\infty} E(I_n\,Y_n) = \sum_{n=1}^{\infty} E(I_n)\,E(Y_n) = \sum_{n=1}^{\infty} P(S \ge n)\,E(Y_1) = E(S)\,E(Y_1).$$

In the case $\nu^+ < \infty$ of a finite ladder epoch, we can define a new random walk starting at $(\nu^+, L^+)$ by $S' := (S_{\nu^+ + n} - L^+ : n \in \mathbb{N}_0)$. As the increments are iid, $S'$ is independent from $(S_1, \dots, S_{\nu^+})$ and the random walks $S$ and $S'$ have the same distribution. Doing this iteratively, we arrive at the following definitions: Let $\nu_0^+ := 0$ and
$$\nu_{n+1}^+ := \min\{k > \nu_n^+ : S_k > S_{\nu_n^+}\}$$
for all $n \in \mathbb{N}_0$, where $\min \emptyset := \infty$. The stopping time $\nu_n^+$ is called the $n$th (strong ascending) ladder epoch. Correspondingly, we define the $n$th (strong ascending) ladder height as
$$L_n^+ := \begin{cases} S_{\nu_n^+} - S_{\nu_{n-1}^+}, & \nu_n^+ < \infty,\\ \infty, & \text{otherwise} \end{cases}$$
for $n \in \mathbb{N}$. By construction the ladder heights are iid. Defining now
$$N := \max\{n \in \mathbb{N}_0 : \nu_n^+ < \infty\}$$
we obtain the representation
$$M = \sum_{k=1}^{N} L_k^+.$$

Remark 4.2  The net profit condition $E(Y_1) < 0$ implies that $M < \infty$ almost certainly, see theorem 4.1. Since the $(L_k^+ : k \in \mathbb{N})$ are iid, this entails $N < \infty$ almost certainly. Hence we can conclude that $G^+(\infty) < 1$.

Theorem 4.3  If $E(Y_1) < 0$, then
$$P(M \le u) = (1-p)\sum_{k=0}^{\infty} (G^+)^{*k}(u) = \sum_{k=0}^{\infty} (1-p)\,p^k\,G^{*k}(u)$$
with $p := G^+(\infty)$ and
$$G(u) := \frac{1}{p}\,G^+(u) = P(L^+ \le u \mid \nu^+ < \infty)$$
for all $u \ge 0$.

Proof: By definition, $p = P(\nu^+ < \infty)$. The construction of the ladder epochs yields
$$P(N = k) = P\big(\nu_n^+ < \infty \text{ for all } n \le k,\; \nu_{k+1}^+ = \infty\big) = p^k\,(1-p).$$
Hence
$$P(M \le u) = P\Big(\sum_{n=1}^{N} L_n^+ \le u\Big) = \sum_{k=0}^{\infty} (1-p)\,p^k\,P\Big(\sum_{n=1}^{k} L_n^+ \le u \;\Big|\; \nu_k^+ < \infty\Big)$$
by conditioning on $\{N = k\}$. The statement follows with the observation that
$$G^{*k}(u) = P\Big(\sum_{n=1}^{k} L_n^+ \le u \;\Big|\; \nu_k^+ < \infty\Big).$$

Likewise, the ruin probability is given as $\psi(u) = P(M > u)$ and thus

Corollary 4.1  If the net profit condition $E(Y_1) < 0$ holds, then
$$\psi(u) = \sum_{k=0}^{\infty} (1-p)\,p^k\,\bar G^{*k}(u),$$
where $\bar G^{*k}(u) = 1 - G^{*k}(u)$.

This corollary is known as Beekman's formula in risk theory and as the Pollaczek-Khinchin formula in queueing theory.

Exercise 4.1  Show that $\psi(0) = p$ and $\lim_{u\to\infty} \psi(u) = 0$.
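Corollary 4.1 can be evaluated numerically once $p$ and $G$ are known. The sketch below does this under the illustrative assumption that the conditional ladder-height distribution is $G = \mathrm{Exp}(\mu)$, so that $G^{*k} = \mathrm{Erl}_k(\mu)$ as in example 2.1 and the Erlang tail can be written as a Poisson sum; the truncated series is then compared with the closed form $p\,e^{-(1-p)\mu u}$, obtained exactly as in example 2.1.

```python
import numpy as np

# Beekman's formula psi(u) = sum_{k>=1} (1 - p) p^k (1 - G^{*k}(u)), assuming
# (purely for illustration) conditional ladder heights G = Exp(mu).  Then
# G^{*k} = Erl_k(mu) and 1 - Erl_k(mu)(u) = sum_{j<k} exp(-mu*u) (mu*u)^j / j!.
p, mu, u = 0.7, 1.0, 5.0                 # illustrative values
kmax = 200                               # truncation point of the geometric series

poisson_terms = np.exp(-mu * u) * np.cumprod(np.r_[1.0, mu * u / np.arange(1, kmax)])
erlang_tails = np.cumsum(poisson_terms)  # entry k-1 holds P(Erl_k(mu) > u), k = 1..kmax
k = np.arange(1, kmax + 1)
psi = np.sum((1 - p) * p**k * erlang_tails)

print("Beekman series psi(u):", psi)
print("closed form    psi(u):", p * np.exp(-(1 - p) * mu * u))
```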

Chapter 5  The Adjustment Coefficient

Let $P$ denote a probability measure with $E(P) := \int x\,dP(x) \in\; ]-\infty, 0[$. Assume that the moment generating function
$$m_P(\alpha) := \int e^{\alpha x}\,dP(x)$$
satisfies $m_P(\alpha) < \infty$ for some $\alpha > 0$ and $\lim_{\alpha \uparrow s} m_P(\alpha) > 1$ for $s := \sup\{\alpha > 0 : m_P(\alpha) < \infty\}$. Then the solution $\gamma > 0$ of
$$m_P(\gamma) = 1$$
is called the adjustment coefficient of $P$.

Remark 5.1  The adjustment coefficient does exist under the given assumptions, because $m_P(0) = 1$, $m_P'(0) = E(P) < 0$, and $m_P(\alpha)$ is strictly convex, as $m_P''(\alpha) > 0$ for all $\alpha < s$. The number $\gamma$ is the only positive number such that $e^{\gamma x}\,dP(x)$ is a probability measure.

Theorem 5.1 (Lundberg inequality)  Let $(Y_n : n \in \mathbb{N})$ denote a sequence of independent random variables with identical distribution $P$, and define
$$\psi(u) := P\Big(\sup\Big\{\sum_{i=1}^{m} Y_i : m \in \mathbb{N}\Big\} > u\Big)$$
for $u \ge 0$. If $E(Y_1) < 0$ and $\gamma$ is the adjustment coefficient of $P$, then
$$\psi(u) \le e^{-\gamma u}$$
for all $u \ge 0$.

Proof: Define
$$\psi_n(u) := \begin{cases} P\big(\sup\big\{\sum_{i=1}^{m} Y_i : m \le n\big\} > u\big), & u \ge 0,\\ 1, & u < 0 \end{cases}$$

for all $n \in \mathbb{N}$. We will show that $\psi_n(u) \le e^{-\gamma u}$ for all $u \in \mathbb{R}$ by induction on $n$. Since $\psi(u) = \lim_{n\to\infty} \psi_n(u)$, this will yield the statement. For $u \le 0$ and all $n \in \mathbb{N}$, the bound holds by definition. Let $n = 1$ and $u > 0$. Then (see the exercise below)
$$\psi_1(u) = P(Y_1 > u) = P(\gamma Y_1 > \gamma u) \le e^{-\gamma u}\,E\big(e^{\gamma Y_1}\big) = e^{-\gamma u}.$$
Independence of the $Y_n$, $n \in \mathbb{N}$, yields further
$$\psi_{n+1}(u) = 1 - \int_{-\infty}^{u} \big(1 - \psi_n(u-y)\big)\,dP(y) = 1 - P(]-\infty, u]) + \int_{-\infty}^{u} \psi_n(u-y)\,dP(y) = \int_{-\infty}^{\infty} \psi_n(u-y)\,dP(y) \le \int_{-\infty}^{\infty} e^{-\gamma(u-y)}\,dP(y) = e^{-\gamma u},$$
where the third equality holds by definition of $\psi_n$ (namely $\psi_n(u-y) = 1$ for $y > u$), the inequality by the induction hypothesis, and the last equality by the fact that $E(e^{\gamma Y_1}) = 1$.

Exercise 5.1  Show that for any real-valued random variable $X$, a real number $a \in \mathbb{R}$, and a positive and increasing function $f$ the inequality
$$P(X > a) \le \frac{E(f(X))}{f(a)}$$
holds.

Remark 5.2  A distribution $P$ is called heavy-tailed if its moment generating function does not exist, i.e. if $\int e^{\alpha x}\,dP(x)$ does not converge for any $\alpha > 0$. If $P$ is heavy-tailed, then it does not have an adjustment coefficient and Lundberg's inequality does not apply.
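For a concrete increment distribution the adjustment coefficient is easily computed numerically. The sketch below takes $Y = U - cW$ with $U \sim \mathrm{Exp}(\mu)$ and $W \sim \mathrm{Exp}(\lambda)$ (the per-claim increment of the compound Poisson model of chapter 6; all parameter values are illustrative), for which $m_P(\alpha) = \frac{\mu}{\mu-\alpha}\cdot\frac{\lambda}{\lambda+\alpha c}$ for $\alpha < \mu$, and solves $m_P(\gamma) = 1$ by bisection; for this particular distribution one can check by hand that $\gamma = \mu - \lambda/c$.

```python
import numpy as np

lam, mu, c = 1.0, 1.0, 1.2           # illustrative values; negative drift since c > lam / mu

def m(alpha):
    """mgf of Y = U - c*W with U ~ Exp(mu), W ~ Exp(lam); valid for 0 <= alpha < mu."""
    return (mu / (mu - alpha)) * (lam / (lam + alpha * c))

# m(alpha) - 1 is negative just to the right of 0 (negative drift) and tends to
# +infinity as alpha -> mu, so bisection on (0, mu) locates the positive root gamma.
lo, hi = 1e-9, mu - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if m(mid) < 1 else (lo, mid)
gamma = 0.5 * (lo + hi)

print("adjustment coefficient gamma :", gamma)
print("closed form mu - lam/c       :", mu - lam / c)
print("Lundberg bound exp(-gamma*u) at u = 10:", np.exp(-gamma * 10))
```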

Chapter 6  Some risk processes in continuous time

6.1  The Sparre Andersen model

Our basic assumptions for this chapter are:

1. The claim sizes $U_n$, $n \in \mathbb{N}$, are iid with common distribution function $F$.

2. The claim arrival process $N$ is an ordinary renewal process, which means that the inter-claim times $W_n := T_n - T_{n-1}$, $n \in \mathbb{N}$, with $T_0 := 0$, are iid with common distribution function $G$.

3. Claim sizes and inter-claim times are independent.

As in chapter 1, we denote the rate of premium income by $c > 0$. Define
$$Y_n := U_n - c\,W_n \quad\text{and}\quad S_n := \sum_{k=1}^{n} Y_k$$
for $n \in \mathbb{N}$. Then the process $S = (S_n : n \in \mathbb{N}_0)$ is a random walk (cf. chapter 4). Since $c > 0$, ruin can occur only at claim epochs $T_n$, whence we obtain for the ruin time
$$\tau(u) = \min\{t > 0 : R_t < 0\} = \min\{T_n : R_{T_n} = u - S_n < 0\}.$$
This implies for the ruin probability
$$\psi(u) = P\Big(M := \sup_{n \in \mathbb{N}_0} S_n > u\Big),$$

meaning that all results from chapter 4 apply to the Sparre Andersen model. Further results using a random walk analysis can be found in [2].

In order to determine the drift of the random walk $S$, we observe that
$$E(Y_1) = E(U_1) - c\,E(W_1) < 0 \;\Longleftrightarrow\; \rho := \frac{E(U_1)}{c\,E(W_1)} < 1 \;\Longleftrightarrow\; \eta := \frac{1}{\rho} - 1 = \frac{c\,E(W_1) - E(U_1)}{E(U_1)} > 0.$$
The number $\eta$ is called the relative safety loading, while $\rho$ is called the system load, a name that originated in queueing theory. The inequality $E(U_1) - c\,E(W_1) < 0$ is called the net profit condition. Clearly, the random walk $S$ has a negative drift if and only if the net profit condition holds.

6.2  The compound Poisson model

This is a special case of the Sparre Andersen model with $G(t) := 1 - e^{-\lambda t}$ for $t \ge 0$, where $\lambda > 0$. Thus the claim arrival process $N$ is a Poisson process with intensity $\lambda$.

Remark 6.1  It can be shown (see Doob [1]) that $N$ is necessarily a Poisson process if we postulate the following properties:

1. $N_0 = 0$ with probability one.

2. $N$ has independent increments, i.e. $N_t - N_s$ and $N_v - N_u$ are independent for all $s < t \le u < v$.

3. $N$ has stationary increments, i.e. $N_{s+t} - N_t$ and $N_s$ have the same distribution for all $s, t > 0$.

4. There are no double events. More exactly, we postulate that $P(N_h \ge 2) = o(h)$, i.e.
$$\frac{1}{h}\,P(N_h \ge 2) \to 0 \quad\text{as } h \to 0.$$
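Returning to the general Sparre Andersen setting for a moment: since ruin can only happen at claim epochs, $\psi(u)$ can be estimated by simulating the embedded random walk $S_n = \sum_{k \le n}(U_k - cW_k)$ alone. The sketch below does this for an illustrative model with Gamma-distributed inter-claim times and exponential claim sizes; the finite number of simulated claims per path is a truncation, which is harmless here because the negative drift makes late exceedances of $u$ extremely unlikely.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative Sparre Andersen model: W_n ~ Gamma(2, 1/2) (so E(W_1) = 1) and
# U_n ~ Exp(1); ruin is checked only along the embedded walk S_n = sum (U_k - c W_k).
c, u = 1.3, 10.0
E_U, E_W = 1.0, 1.0
rho = E_U / (c * E_W)                    # system load
eta = 1 / rho - 1                        # relative safety loading
print("rho =", rho, " eta =", round(eta, 4), " net profit condition:", rho < 1)

def ruined(u, n_claims=5_000):
    """True if sup_n S_n exceeds u within the first n_claims claims."""
    W = rng.gamma(shape=2.0, scale=0.5, size=n_claims)
    U = rng.exponential(1.0, size=n_claims)
    return np.cumsum(U - c * W).max() > u

estimates = [ruined(u) for _ in range(2_000)]
print("estimated psi(u):", np.mean(estimates))
```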

Now we derive a (defective) renewal equation for the ruin probability $\psi(u)$. Define the complementary distribution function $\bar F$ by $\bar F(x) := 1 - F(x)$ for all $x \ge 0$.

Theorem 6.1  For any initial reserve $u \ge 0$,
$$\psi(u) = \frac{\lambda}{c}\int_u^{\infty} \bar F(x)\,dx + \frac{\lambda}{c}\int_0^{u} \psi(u-x)\,\bar F(x)\,dx.$$

Proof: We condition on the time $T_1 = t$ of the first claim. As $T_1 \sim \mathrm{Exp}(\lambda)$, we obtain for the survival probability $\bar\psi(u) = 1 - \psi(u)$
$$\bar\psi(u) = \int_0^{\infty} \lambda e^{-\lambda t}\int_0^{u+ct} \bar\psi(u + ct - s)\,dF(s)\,dt = \frac{\lambda}{c}\int_u^{\infty} e^{-\frac{\lambda}{c}(y-u)}\int_0^{y} \bar\psi(y-s)\,dF(s)\,dy$$
after the substitution $y = u + ct$, which entails $dy = c\,dt$ and $t = 0 \Leftrightarrow y = u$. This shows that the function
$$\bar\psi(x) = \frac{\lambda}{c}\,e^{\frac{\lambda}{c}x}\int_x^{\infty} e^{-\frac{\lambda}{c}y}\int_0^{y} \bar\psi(y-s)\,dF(s)\,dy \tag{6.1}$$
is differentiable on $x \in\; ]0, \infty[$. Denote
$$g(x) := \int_x^{\infty} e^{-\frac{\lambda}{c}y}\int_0^{y} \bar\psi(y-s)\,dF(s)\,dy.$$
Then the derivative $\bar\psi'(x)$ is given by
$$\bar\psi'(x) = \frac{d}{dx}\Big(\frac{\lambda}{c}\,e^{\frac{\lambda}{c}x}\,g(x)\Big) = \frac{\lambda}{c}\Big(\frac{\lambda}{c}\,e^{\frac{\lambda}{c}x}\,g(x) + e^{\frac{\lambda}{c}x}\,g'(x)\Big).$$
As $g'(x) = -e^{-\frac{\lambda}{c}x}\int_0^{x} \bar\psi(x-s)\,dF(s)$, we obtain further
$$\bar\psi'(x) = \frac{\lambda}{c}\Big(\frac{\lambda}{c}\,e^{\frac{\lambda}{c}x}\int_x^{\infty} e^{-\frac{\lambda}{c}y}\int_0^{y} \bar\psi(y-s)\,dF(s)\,dy - \int_0^{x} \bar\psi(x-s)\,dF(s)\Big) = \frac{\lambda}{c}\Big(\bar\psi(x) - \int_0^{x} \bar\psi(x-s)\,dF(s)\Big)$$

using the representation (6.1). We will determine $\psi(u)$ via its derivative as
$$\bar\psi(u) - \bar\psi(0) = \int_0^{u} \bar\psi'(x)\,dx = \frac{\lambda}{c}\Big(\int_0^{u} \bar\psi(x)\,dx - \int_0^{u}\int_0^{x} \bar\psi(x-s)\,dF(s)\,dx\Big).$$
The second integral transforms as
$$\int_{x=0}^{u}\int_{s=0}^{x} \bar\psi(x-s)\,dF(s)\,dx = \int_{s=0}^{u}\int_{x=s}^{u} \bar\psi(x-s)\,dx\,dF(s) = \int_{s=0}^{u}\int_{x=0}^{u-s} \bar\psi(x)\,dx\,dF(s).$$
Thus we can write
$$\bar\psi(u) = \bar\psi(0) + \frac{\lambda}{c}\Big(\int_0^{u} \bar\psi(x)\,dx - \int_0^{u}\int_0^{u-s} \bar\psi(x)\,dx\,dF(s)\Big) = \bar\psi(0) + \frac{\lambda}{c}\Big(\int_0^{u} \bar\psi(x)\,dx - \int_0^{u} F(u-x)\,\bar\psi(x)\,dx\Big) = \bar\psi(0) + \frac{\lambda}{c}\int_0^{u} \big(1 - F(u-x)\big)\,\bar\psi(x)\,dx = \bar\psi(0) + \frac{\lambda}{c}\int_0^{u} \bar\psi(u-x)\,\bar F(x)\,dx.$$
We can determine $\bar\psi(0)$ if we let $u \to \infty$. Since $\bar\psi(u-x) \to 1$ and $\bar F(x) \to 0$ as $u \to \infty$ and $x \to \infty$, respectively, we obtain (see exercise 6.1)
$$1 = \bar\psi(0) + \frac{\lambda}{c}\int_0^{\infty} \bar F(x)\,dx.$$
This leads to
$$\bar\psi(0) = 1 - \frac{\lambda}{c}\int_0^{\infty} \bar F(x)\,dx = 1 - \frac{\lambda}{c}\,E(U_1).$$
Hence
$$\psi(u) = 1 - \bar\psi(u) = \frac{\lambda}{c}\int_0^{\infty} \bar F(x)\,dx - \frac{\lambda}{c}\int_0^{u} \big(1 - \psi(u-x)\big)\,\bar F(x)\,dx = \frac{\lambda}{c}\int_u^{\infty} \bar F(x)\,dx + \frac{\lambda}{c}\int_0^{u} \psi(u-x)\,\bar F(x)\,dx,$$
completing the proof.
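The renewal equation of theorem 6.1 also lends itself to direct numerical solution. The following is a crude discretisation sketch for $\mathrm{Exp}(\mu)$ claim sizes (a simple explicit quadrature rule on a regular grid; step size, horizon and all parameter values are arbitrary choices), compared with the closed form obtained in example 6.1 below.

```python
import numpy as np

lam, mu, c = 1.0, 1.0, 1.2                   # illustrative values; net profit: lam * E(U_1) < c
h, n = 0.01, 2000                            # grid step and number of steps (u up to 20)

Fbar = lambda x: np.exp(-mu * x)             # complementary claim-size distribution function
tail = lambda u: np.exp(-mu * u) / mu        # integral of Fbar over [u, infinity)

psi = np.empty(n + 1)
psi[0] = lam / c * tail(0.0)                 # psi(0) = lam * E(U_1) / c
for i in range(1, n + 1):
    u = i * h
    x = h * np.arange(1, i + 1)              # quadrature nodes in (0, u]
    conv = h * np.sum(psi[i - np.arange(1, i + 1)] * Fbar(x))
    psi[i] = lam / c * (tail(u) + conv)

u = 10.0
print("renewal equation psi(u):", psi[int(u / h)])
print("closed form      psi(u):", lam / (mu * c) * np.exp(-(mu - lam / c) * u))
```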

Exercise 6.1  Show that
$$\lim_{u\to\infty}\int_0^{u} \bar\psi(u-x)\,\bar F(x)\,dx = E(U_1).$$

Remark 6.2  The proof shows further that the system load
$$p = \psi(0) = \frac{\lambda}{c}\,E(U_1) = \rho$$
depends on $U_1$ only through its mean. This is called insensitivity with respect to the shape of $F$, the distribution of $U_1$.

Remark 6.3  If $c$ depends on $\lambda$ by $c = (1+\eta)\,\lambda\,E(U_1)$, then $\psi(u)$ is constant in $\lambda$.

Remark 6.4  The equation in the above theorem 6.1 is called a (defective) renewal equation for $\psi(u)$.

Theorem 6.2  The Laplace transform of $\psi(u)$ is given by
$$\mathcal{L}_\psi(s) := \int_0^{\infty} e^{-su}\,\psi(u)\,du = \frac{1}{s} - \frac{c - \lambda\,E(U_1)}{cs - \lambda\big(1 - \mathcal{L}_F(s)\big)},$$
where $\mathcal{L}_F(s) = \int_0^{\infty} e^{-su}\,dF(u)$ is the Laplace transform of the claim size distribution.

Proof: Using theorem 6.1, we obtain
$$\mathcal{L}_\psi(s) = \frac{\lambda}{c}\int_0^{\infty} e^{-su}\int_u^{\infty} \bar F(x)\,dx\,du + \frac{\lambda}{c}\int_0^{\infty} e^{-su}\int_0^{u} \psi(u-x)\,\bar F(x)\,dx\,du$$
$$= \frac{\lambda}{c}\int_0^{\infty} \bar F(x)\int_0^{x} e^{-su}\,du\,dx + \frac{\lambda}{c}\int_0^{\infty} e^{-sx}\,\bar F(x)\int_x^{\infty} e^{-s(u-x)}\,\psi(u-x)\,du\,dx$$
$$= \frac{\lambda}{c}\int_0^{\infty} \bar F(x)\,\frac{1}{s}\big(1 - e^{-sx}\big)\,dx + \frac{\lambda}{c}\int_0^{\infty} e^{-sx}\,\bar F(x)\,\mathcal{L}_\psi(s)\,dx.$$
Isolating $\mathcal{L}_\psi(s)$ and using $\int_0^{\infty}\bar F(x)\,dx = E(U_1)$ yields
$$\mathcal{L}_\psi(s) = \frac{\lambda}{cs}\Big(E(U_1) - \int_0^{\infty} e^{-sx}\,\bar F(x)\,dx\Big)\Big(1 - \frac{\lambda}{c}\int_0^{\infty} e^{-sx}\,\bar F(x)\,dx\Big)^{-1}.$$

Integration by parts yields
$$\int_0^{\infty} e^{-sx}\,\bar F(x)\,dx = \Big[-\frac{1}{s}\,e^{-sx}\,\bar F(x)\Big]_0^{\infty} - \frac{1}{s}\int_0^{\infty} e^{-sx}\,dF(x) = \frac{1}{s} - \frac{1}{s}\,\mathcal{L}_F(s) = \frac{1}{s}\big(1 - \mathcal{L}_F(s)\big),$$
whence we obtain
$$\mathcal{L}_\psi(s) = \frac{\lambda}{cs}\Big(E(U_1) - \frac{1}{s}\big(1 - \mathcal{L}_F(s)\big)\Big)\Big(1 - \frac{\lambda}{cs}\big(1 - \mathcal{L}_F(s)\big)\Big)^{-1} = \frac{\lambda\,E(U_1) - \frac{\lambda}{s}\big(1 - \mathcal{L}_F(s)\big)}{cs - \lambda\big(1 - \mathcal{L}_F(s)\big)} = \frac{1}{s} - \frac{c - \lambda\,E(U_1)}{cs - \lambda\big(1 - \mathcal{L}_F(s)\big)}.$$

Example 6.1  If $U_1 \sim \mathrm{Exp}(\mu)$, then $\mathcal{L}_F(s) = \mu(\mu+s)^{-1}$ and
$$\mathcal{L}_\psi(s) = \frac{\lambda\,E(U_1) - \frac{\lambda}{s}\big(1 - \frac{\mu}{\mu+s}\big)}{cs - \lambda\big(1 - \frac{\mu}{\mu+s}\big)} = \frac{\frac{\lambda}{\mu} - \frac{\lambda}{\mu+s}}{cs - \frac{\lambda s}{\mu+s}} = \frac{\lambda s}{\mu}\,\big(s\,(c\mu + cs - \lambda)\big)^{-1} = \frac{\lambda}{\mu c}\cdot\frac{1}{s + (\mu - \lambda/c)} = \frac{\lambda}{\mu c}\int_0^{\infty} e^{-su}\,e^{-(\mu - \lambda/c)u}\,du.$$
Now uniqueness of the Laplace transform yields
$$\psi(u) = \frac{\lambda}{\mu c}\,e^{-(\mu - \lambda/c)u}$$
for $u \ge 0$.
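The algebra of example 6.1 can be double-checked symbolically. The following sketch (assuming the sympy package is available) verifies that the Laplace transform of theorem 6.2 with $E(U_1) = 1/\mu$ and $\mathcal{L}_F(s) = \mu/(\mu+s)$ coincides with the transform of $\frac{\lambda}{\mu c}e^{-(\mu-\lambda/c)u}$.

```python
import sympy as sp

s, lam, mu, c = sp.symbols('s lambda mu c', positive=True)

LF = mu / (mu + s)                                           # Laplace transform of the Exp(mu) claim size
Lpsi = 1 / s - (c - lam / mu) / (c * s - lam * (1 - LF))     # theorem 6.2 with E(U_1) = 1/mu
target = lam / (mu * c) / (s + mu - lam / c)                 # transform of (lam/(mu*c)) * exp(-(mu - lam/c) * u)

print(sp.simplify(Lpsi - target))                            # prints 0, confirming example 6.1
```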

Bibliography

[1] J. Doob. Stochastic Processes. New York: Wiley, 1953.

[2] S. M. Pitts and K. Politis. The joint density of the surplus before and after ruin in the Sparre Andersen model. J. Appl. Prob., 44:695-712, 2007.

[3] T. Rolski, H. Schmidli, V. Schmidt, and J. Teugels. Stochastic Processes for Insurance and Finance. Wiley, Chichester, 1999.