
Definition 1.1 (Parametric family of distributions)
A parametric distribution is a set of distribution functions, each of which is determined by specifying one or more values called parameters. The number of parameters is fixed and finite.

A family of distributions can be quite simple, such as, for instance, the exponential $\mathrm{Exp}(\lambda)$ or the normal $N(\mu, \sigma^2)$. On the other hand, we can have $X \sim F(\theta_1, \theta_2, \ldots, \theta_n)$, which is much more complicated.

Definition 1.2 (Scale family of distributions)
A risk $X$ with cdf $F(x; \sigma)$, where $\sigma > 0$, is said to belong to a scale family of distributions if
$$F(x; \sigma) = F(x/\sigma; 1).$$
Here, $\sigma$ is called the scale parameter. Note that $F$ can have more parameters than just $\sigma$, but they are unchanged upon scaling.
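
To make the scale-family property concrete, here is a minimal numerical sketch, assuming NumPy and SciPy are available, that checks $F(x;\sigma) = F(x/\sigma;1)$ for the exponential distribution; SciPy's `scale` argument plays the role of $\sigma$, and the particular value of $\sigma$ is just an illustrative choice.

```python
# Numerical check of the scale-family property F(x; sigma) = F(x/sigma; 1)
# for the exponential distribution (SciPy's scale parameter acts as sigma).
import numpy as np
from scipy.stats import expon

sigma = 2.5                                # illustrative scale parameter
x = np.linspace(0.1, 10.0, 50)

lhs = expon.cdf(x, scale=sigma)            # F(x; sigma)
rhs = expon.cdf(x / sigma, scale=1.0)      # F(x/sigma; 1)

assert np.allclose(lhs, rhs), "scale-family property violated"
print("max abs difference:", np.max(np.abs(lhs - rhs)))
```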

Definition 1.3 (Location-scale family of distributions)
A risk $X$ with cdf $F(x; \mu, \sigma)$, where $-\infty < \mu < \infty$ and $\sigma > 0$, is said to belong to a location-scale family of distributions if
$$F(x; \mu, \sigma) = F((x - \mu)/\sigma;\, 0, 1).$$
Here, $\mu$ and $\sigma$ are the location and scale parameters, respectively.

Example 1.1
Let $X \sim N(\mu, \sigma^2)$. Then
$$F(x; \mu, \sigma) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{1}{2}\left(\frac{t - \mu}{\sigma}\right)^2\right\} dt
= \int_{-\infty}^{(x-\mu)/\sigma} \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{t^2}{2}\right\} dt
= F((x - \mu)/\sigma;\, 0, 1),$$
where the second equality follows from the substitution $t \mapsto (t - \mu)/\sigma$.
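
The same identity can be checked numerically. A short sketch, again assuming SciPy and with illustrative values of $\mu$ and $\sigma$, verifies that the normal cdf indeed satisfies $F(x;\mu,\sigma) = \Phi((x-\mu)/\sigma)$.

```python
# Numerical check that N(mu, sigma^2) is a location-scale family:
# F(x; mu, sigma) = Phi((x - mu)/sigma), with Phi the standard normal cdf.
import numpy as np
from scipy.stats import norm

mu, sigma = 1.5, 0.8                       # illustrative parameters
x = np.linspace(-3, 5, 100)

lhs = norm.cdf(x, loc=mu, scale=sigma)     # F(x; mu, sigma)
rhs = norm.cdf((x - mu) / sigma)           # F((x - mu)/sigma; 0, 1)

assert np.allclose(lhs, rhs)
print("location-scale property verified for N(mu, sigma^2)")
```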

Definition 1.4 (A family of parametric distributions)
A family of parametric distributions is a set of parametric distributions that are related in a meaningful way.

Example 1.2
Think of $X \sim \mathrm{Ga}(\gamma, \alpha)$. This can be seen as the family of gamma distributions. Setting $\gamma = 1$, for instance, we obtain the $\mathrm{Exp}(\alpha)$ distribution. Considering integer $\gamma$ only, we have the Erlang distribution. Also, setting $\alpha = 1/2$ and $\gamma = \nu/2$, we have the $\chi^2(\nu)$ distribution. When we look at the gamma family, we do not know the number of parameters to work with; when we concentrate on gamma distributions, we restrict our attention to the two-parameter case.
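
These special cases can be confirmed numerically. The sketch below assumes the slides' $\mathrm{Ga}(\gamma, \alpha)$ uses shape $\gamma$ and rate $\alpha$ (so SciPy's `scale` is $1/\alpha$); the parameter values are illustrative only.

```python
# Sketch: special cases of the gamma family Ga(gamma, alpha), assuming shape gamma
# and rate alpha (SciPy's scale = 1/alpha).
import numpy as np
from scipy.stats import gamma, expon, chi2

x = np.linspace(0.1, 20, 200)

# gamma = 1  ->  Exp(alpha)
alpha = 0.7
assert np.allclose(gamma.pdf(x, a=1, scale=1/alpha), expon.pdf(x, scale=1/alpha))

# alpha = 1/2, gamma = nu/2  ->  chi-square with nu degrees of freedom
nu = 5
assert np.allclose(gamma.pdf(x, a=nu/2, scale=2), chi2.pdf(x, df=nu))

print("Exp and chi-square recovered as special cases of the gamma family")
```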

Definition 1.5 (Mixed distributions)
A risk $Y$ is said to be an $n$-point mixture of the risks $X_1, X_2, \ldots, X_n$ if its cdf is
$$F_Y(y) = \sum_{k=1}^{n} \alpha_k F_{X_k}(y),$$
for $\alpha_1 + \cdots + \alpha_n = 1$ and $\alpha_k > 0$.

Definition 1.6 (Variable-component mixture distributions)
A variable-component mixture distribution has cdf
$$F_Y(y) = \sum_{k=1}^{N} \alpha_k F_{X_k}(y),$$
for $\alpha_1 + \cdots + \alpha_N = 1$ and $\alpha_k > 0$. Here $N$ is random.
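
As a small illustration of Definition 1.5, here is a minimal sketch of an $n$-point mixture cdf; the weights, the exponential and gamma components, and their parameters are purely illustrative choices.

```python
# Minimal sketch of an n-point mixture cdf as in Definition 1.5:
# F_Y(y) = sum_k alpha_k F_{X_k}(y), with weights alpha_k > 0 summing to 1.
import numpy as np
from scipy.stats import expon, gamma

def mixture_cdf(y, weights, components):
    """cdf of an n-point mixture; `components` are frozen scipy.stats distributions."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0) and np.all(weights > 0)
    return sum(w * c.cdf(y) for w, c in zip(weights, components))

# Example: 60/40 mixture of an exponential and a gamma risk (illustrative values).
y = np.array([1.0, 5.0, 10.0])
print(mixture_cdf(y, [0.6, 0.4], [expon(scale=2.0), gamma(a=3, scale=1.0)]))
```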

Example 1.3
Let $X_k \sim \mathrm{Exp}(\lambda_k)$, where $k = 1, 2, \ldots$. The $n$-point mixture of these risks has the cdf
$$F(x) = 1 - \alpha_1 e^{-\lambda_1 x} - \alpha_2 e^{-\lambda_2 x} - \cdots - \alpha_n e^{-\lambda_n x}
= (\alpha_1 + \cdots + \alpha_n) - \alpha_1 e^{-\lambda_1 x} - \alpha_2 e^{-\lambda_2 x} - \cdots - \alpha_n e^{-\lambda_n x}.$$
The pdf is then
$$f(x) = \alpha_1 \lambda_1 e^{-\lambda_1 x} + \alpha_2 \lambda_2 e^{-\lambda_2 x} + \cdots + \alpha_n \lambda_n e^{-\lambda_n x}.$$
The hazard function is
$$h(x) = \frac{\alpha_1 \lambda_1 e^{-\lambda_1 x} + \alpha_2 \lambda_2 e^{-\lambda_2 x} + \cdots + \alpha_n \lambda_n e^{-\lambda_n x}}
{\alpha_1 e^{-\lambda_1 x} + \alpha_2 e^{-\lambda_2 x} + \cdots + \alpha_n e^{-\lambda_n x}}.$$
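
A short numerical sketch of Example 1.3, with illustrative weights and rates, evaluates the mixture's survival function, cdf, pdf and hazard, and checks that the hazard equals $f(x)/S(x)$.

```python
# Sketch for Example 1.3: cdf, pdf and hazard of an n-point exponential mixture,
# with a check that h(x) = f(x) / S(x) (illustrative weights and rates).
import numpy as np

alphas = np.array([0.5, 0.3, 0.2])                    # mixing weights, sum to 1
lams   = np.array([0.5, 1.0, 2.0])                    # exponential rates lambda_k
x = np.linspace(0.01, 10, 200)[:, None]

S = (alphas * np.exp(-lams * x)).sum(axis=1)          # survival: sum_k alpha_k e^{-lambda_k x}
F = 1.0 - S                                           # cdf
f = (alphas * lams * np.exp(-lams * x)).sum(axis=1)   # pdf
h = f / S                                             # hazard as on the slide

assert np.allclose(h, f / (1.0 - F))
print("hazard at x=1:", np.interp(1.0, x.ravel(), h))
```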

A distribution need not be parametric.

Definition 1.7 (Empirical model)
An empirical model is a discrete distribution based on a sample of size $n$ that assigns probability $1/n$ to each data point.

Example 1.4
Let us say we have a sample of 8 claims with the values $\{3, 5, 6, 6, 6, 7, 7, 8\}$. Then the empirical model is
$$p(x) = \begin{cases} 0.125, & x = 3,\\ 0.125, & x = 5,\\ 0.375, & x = 6,\\ 0.250, & x = 7,\\ 0.125, & x = 8. \end{cases}$$
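
For completeness, here is a minimal sketch that builds the empirical model of Example 1.4 directly from the claim data, assigning mass $1/n$ to each observation.

```python
# Sketch of the empirical model of Example 1.4: each observation gets mass 1/n.
from collections import Counter

claims = [3, 5, 6, 6, 6, 7, 7, 8]
n = len(claims)
empirical_pmf = {x: count / n for x, count in sorted(Counter(claims).items())}
print(empirical_pmf)   # {3: 0.125, 5: 0.125, 6: 0.375, 7: 0.25, 8: 0.125}
```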

Proposition 1.1
Let $X$ be a continuous rv having pdf $f$ and cdf $F$. Let $Y = \theta X$ with $\theta > 0$. Then
$$F_Y(y) = F_X(y/\theta) \quad \text{and} \quad f_Y(y) = \frac{1}{\theta} f_X(y/\theta).$$

Proof.
$$F_Y(y) = P[Y \le y] = P[X \le y/\theta] = F_X(y/\theta).$$
Also,
$$f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{1}{\theta} f_X(y/\theta).$$
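
Proposition 1.1 can be sanity-checked numerically. The sketch below takes $X \sim \mathrm{Exp}(1)$ and an illustrative $\theta$, and uses the fact that in SciPy's parameterization $Y = \theta X$ is exponential with `scale`$= \theta$.

```python
# Numerical sketch of Proposition 1.1 with X ~ Exp(1) and Y = theta * X:
# F_Y(y) = F_X(y/theta) and f_Y(y) = f_X(y/theta) / theta.
import numpy as np
from scipy.stats import expon

theta = 3.0                                # illustrative scale factor
y = np.linspace(0.1, 15, 100)

# Y = theta * X is exponential with scale = theta in SciPy's parameterization.
assert np.allclose(expon.cdf(y, scale=theta), expon.cdf(y / theta))
assert np.allclose(expon.pdf(y, scale=theta), expon.pdf(y / theta) / theta)
print("Proposition 1.1 verified for the exponential distribution")
```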

Proposition 1.2
Let $X$ be a continuous rv having pdf $f$ and cdf $F$ with $F(0) = 0$. Let $Y = X^{1/\tau}$. Then, if $\tau > 0$,
$$F_Y(y) = F_X(y^\tau) \quad \text{and} \quad f_Y(y) = \tau y^{\tau - 1} f_X(y^\tau), \qquad y > 0,$$
and if $\tau < 0$, then
$$F_Y(y) = 1 - F_X(y^\tau) \quad \text{and} \quad f_Y(y) = -\tau y^{\tau - 1} f_X(y^\tau).$$

Proof.
If $\tau > 0$, then
$$F_Y(y) = P[Y \le y] = P[X \le y^\tau] = F_X(y^\tau),$$
and the pdf follows by differentiation. If $\tau < 0$, then
$$F_Y(y) = P[Y \le y] = P[X \ge y^\tau] = 1 - F_X(y^\tau),$$
and again the pdf follows by differentiation.
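
As a sketch of Proposition 1.2, take $X \sim \mathrm{Exp}(1)$: the formulas above then give a Weibull distribution for $\tau > 0$ and an inverse Weibull (Fréchet) distribution for $\tau < 0$, which can be checked against SciPy's `weibull_min` and `invweibull`; the values of $\tau$ are illustrative.

```python
# Sketch of Proposition 1.2 with X ~ Exp(1): Y = X^(1/tau) is Weibull for tau > 0
# and inverse Weibull for tau < 0, matching F_Y(y) = F_X(y^tau) resp. 1 - F_X(y^tau).
import numpy as np
from scipy.stats import expon, weibull_min, invweibull

y = np.linspace(0.1, 5, 100)

tau = 2.0                                   # tau > 0: F_Y(y) = F_X(y^tau)
assert np.allclose(expon.cdf(y**tau), weibull_min.cdf(y, c=tau))

tau = -2.0                                  # tau < 0: F_Y(y) = 1 - F_X(y^tau)
assert np.allclose(1 - expon.cdf(y**tau), invweibull.cdf(y, c=-tau))

print("power transforms of Exp(1) give Weibull / inverse Weibull, as expected")
```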

Proposition 1.3
Let $X$ be a continuous rv having pdf $f$ and cdf $F$. Let $Y = e^X$. Then, for $y > 0$,
$$F_Y(y) = F_X(\log y) \quad \text{and} \quad f_Y(y) = \frac{1}{y} f_X(\log y).$$

Proof.
We have that
$$P[Y \le y] = P[e^X \le y] = P[X \le \log y] = F_X(\log y).$$
The density follows by differentiation.

Example 1.5
Let $X \sim N(\mu, \sigma^2)$, and let $Y = e^X$. What is the distribution of $Y$?

Solution
For the cdf, we have that for positive $y$,
$$F_Y(y) = F_X(\log y) = \Phi\!\left(\frac{\log y - \mu}{\sigma}\right).$$
Also, for the pdf,
$$f_Y(y) = \frac{1}{y} f_X(\log y) = \frac{1}{\sigma y}\, \phi\!\left(\frac{\log y - \mu}{\sigma}\right),$$
which of course reduces to
$$f_Y(y) = \frac{1}{\sigma y \sqrt{2\pi}} \exp\left\{-\frac{1}{2}\left(\frac{\log y - \mu}{\sigma}\right)^2\right\},$$
that is, a log-normal distribution.
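
The derived cdf and pdf can be compared with SciPy's log-normal, which in its parameterization corresponds to `lognorm(s=sigma, scale=exp(mu))`; the values of $\mu$ and $\sigma$ below are illustrative.

```python
# Sketch for Example 1.5: if X ~ N(mu, sigma^2) and Y = e^X, the cdf and pdf derived
# above match SciPy's lognorm(s=sigma, scale=exp(mu)).
import numpy as np
from scipy.stats import norm, lognorm

mu, sigma = 0.5, 0.75                      # illustrative parameters
y = np.linspace(0.05, 10, 200)

F_derived = norm.cdf((np.log(y) - mu) / sigma)
f_derived = np.exp(-0.5 * ((np.log(y) - mu) / sigma) ** 2) / (sigma * y * np.sqrt(2 * np.pi))

assert np.allclose(F_derived, lognorm.cdf(y, s=sigma, scale=np.exp(mu)))
assert np.allclose(f_derived, lognorm.pdf(y, s=sigma, scale=np.exp(mu)))
print("Y = exp(X) is log-normal, as derived")
```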

Proposition 1.4
Let $X$ be a continuous risk having cdf $F_X$ and pdf $f_X$, and let $h : \mathbb{R} \to \mathbb{R}$ be a strictly monotone function. Also, let $Y = h(X)$. Then the cdf of $Y$, denoted by $F_Y$, is
$$F_Y(y) = \begin{cases} F_X(h^{-1}(y)), & \text{if } h \text{ is strictly increasing},\\ 1 - F_X(h^{-1}(y)), & \text{if } h \text{ is strictly decreasing}. \end{cases}$$
Moreover, if $x = h^{-1}(y)$ is differentiable, then
$$f_Y(y) = f_X(h^{-1}(y)) \left|\frac{d}{dy} h^{-1}(y)\right|.$$

Proof.
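
As an illustrative sketch of the decreasing branch of Proposition 1.4, take $X \sim \mathrm{Exp}(1)$ and the (hypothetical, strictly decreasing) transformation $h(x) = 1/x$, so $h^{-1}(y) = 1/y$ and $|d h^{-1}(y)/dy| = 1/y^2$; the resulting cdf is compared with a Monte Carlo estimate.

```python
# Sketch of Proposition 1.4 for a strictly decreasing transformation: X ~ Exp(1) and
# Y = h(X) = 1/X, so h^{-1}(y) = 1/y and |d h^{-1}(y)/dy| = 1/y^2.
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(0)
sample_y = 1.0 / expon.rvs(size=200_000, random_state=rng)   # Y = h(X) = 1/X

y = np.array([0.5, 1.0, 2.0, 5.0])
F_Y = 1.0 - expon.cdf(1.0 / y)             # 1 - F_X(h^{-1}(y)), h strictly decreasing
f_Y = expon.pdf(1.0 / y) / y**2            # f_X(h^{-1}(y)) * |d h^{-1}(y)/dy|

empirical_F = np.array([(sample_y <= v).mean() for v in y])
print("formula cdf:  ", F_Y)
print("empirical cdf:", empirical_F)       # should agree to about 2-3 decimals
```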