Exam C


1. Terms: cumulative distribution function (distribution function, cdf); survival function; probability density function (density function); probability function (probability mass function); hazard rate (force of mortality, failure rate); truncated; censored; k-th raw moment; empirical model; k-th central moment

2.
cumulative distribution function: $F(x) = \int_{-\infty}^{x} f(s)\,ds = \Pr(X \le x)$
survival function: $S_X(x) = \Pr(X > x) = 1 - F_X(x)$
probability density function: $f(x) = F'(x) = -S'(x)$
probability mass function: $p_X(x) = \Pr(X = x)$
hazard rate: $h_X(x) = \dfrac{f_X(x)}{S_X(x)}$
truncated: An observation is truncated at d if, when it is below d, it is not recorded, but when it is above d it is recorded at its observed value.
censored: An observation is censored at u if, when it is above u, it is recorded as being equal to u, but when it is below u it is recorded at its observed value.
k-th raw moment: the expected value of the kth power of the variable, provided it exists; denoted $E(X^k)$ or $\mu'_k$. The first raw moment is called the mean and is usually denoted $\mu$. $\mu'_k = E(X^k) = \int x^k f(x)\,dx = \sum_j x_j^k\, p(x_j)$; for a nonnegative variable, $E(X) = \int_0^\infty S(x)\,dx$.
k-th central moment: the expected value of the kth power of the deviation of the variable from its mean; denoted $E[(X-\mu)^k]$ or $\mu_k$. The second central moment is usually called the variance, denoted $\sigma^2$, and its square root $\sigma$ is the standard deviation. $\mu_k = E[(X-\mu)^k] = \int (x-\mu)^k f(x)\,dx = \sum_j (x_j-\mu)^k\, p(x_j)$
empirical model: a discrete distribution based on a sample of size n that assigns probability 1/n to each data point.
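A minimal Python sketch (not from the original cards) of the raw and central moments under the empirical model just defined; the loss amounts are made-up values.

```python
# Sketch: raw and central moments under the empirical model, which assigns
# probability 1/n to each data point. The loss amounts below are made up.
def raw_moment(xs, k):
    return sum(x ** k for x in xs) / len(xs)          # E(X^k)

def central_moment(xs, k):
    mu = raw_moment(xs, 1)
    return sum((x - mu) ** k for x in xs) / len(xs)   # E[(X - mu)^k]

losses = [100, 250, 250, 700, 1300]
print(raw_moment(losses, 1))      # mean (first raw moment)
print(central_moment(losses, 2))  # variance (second central moment)
```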

3. Terms: coefficient of variation; skewness; kurtosis; skewness of a symmetric distribution; limited loss (right censored) variable; limited expected value; left censored and shifted variable, and its expectation; mean residual life function (complete expectation of life); left truncated and shifted variable (mean excess loss function); percentile

4.
coefficient of variation: $\sigma/\mu$
skewness: $\gamma_1 = \mu_3/\sigma^3$; if a distribution is symmetric, then it has a skewness of $\mu_3/\sigma^3 = 0$.
kurtosis: $\gamma_2 = \mu_4/\sigma^4$
limited loss (right censored) variable: $Y = X \wedge u = \begin{cases} X, & X < u \\ u, & X \ge u \end{cases}$
limited expected value: $E[(X \wedge u)^k] = \int_{-\infty}^{u} x^k f(x)\,dx + u^k[1 - F(u)]$; for a nonnegative variable, $E[X \wedge u] = \int_0^u S(x)\,dx$.
left censored and shifted variable: $Y = (X-d)_+ = \begin{cases} 0, & X < d \\ X - d, & X \ge d \end{cases}$, with $E[(X-d)_+] = E(X) - E(X \wedge d) = \int_d^\infty S(x)\,dx$.
left truncated and shifted variable: $Y = X - d$ given that $X > d$.
mean residual life function (mean excess loss, complete expectation of life): $e_X(d) = e(d) = E(X - d \mid X > d) = \dfrac{E(X) - E(X \wedge d)}{1 - F(d)}$
percentile: the 100p-th percentile of a random variable is any value $\pi_p$ such that $F(\pi_p{-}) \le p \le F(\pi_p)$.

5. Terms: median; distribution of $\lim_{k\to\infty} \dfrac{S_k - E(S_k)}{\sqrt{\operatorname{Var}(S_k)}}$ for $S_k = X_1 + \cdots + X_k$; moment generating function; $E(X^n)$ from the mgf; probability generating function; $p_k$ from the probability generating function; mean and variance using the probability generating function; pgf/mgf of $\sum_{j=1}^{n} X_j$; parametric distribution; scale distribution

6.
median: the 50th percentile, $\pi_{0.5}$, is called the median.
distribution of $\lim_{k\to\infty} (S_k - E(S_k))/\sqrt{\operatorname{Var}(S_k)}$: Normal distribution with mean 0 and variance 1 (the central limit theorem).
moment generating function: $M_X(t) = E(e^{tX})$, with $E(X^n) = M_X^{(n)}(0)$.
probability generating function: $P_X(z) = E(z^X) = \sum_{k=0}^{\infty} p_k z^k$; $p_m = \dfrac{P_X^{(m)}(0)}{m!}$, the mth derivative of the pgf evaluated at 0 divided by m factorial.
pgf/mgf of a sum $S_k = X_1 + \cdots + X_k$ of independent variables: $M_{S_k}(t) = \prod_{j=1}^{k} M_{X_j}(t)$ and $P_{S_k}(z) = \prod_{j=1}^{k} P_{X_j}(z)$.
mean and variance using the pgf: $E(X) = P_X'(1)$, $E(X^2) = P_X''(1) + P_X'(1)$, so $\operatorname{Var}(X) = P_X''(1) + P_X'(1) - [P_X'(1)]^2$.
parametric distribution: a set of distribution functions, each member of which is determined by specifying one or more values called parameters. The number of parameters is fixed and finite.
scale distribution: a parametric distribution is a scale distribution if, when a random variable from that set of distributions is multiplied by a positive constant, the resulting random variable is also in that set of distributions.
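A quick numerical check (a sketch, not part of the cards) that $E(X) = P_X'(1)$ and $\operatorname{Var}(X) = P_X''(1) + P_X'(1) - [P_X'(1)]^2$, using the Poisson pgf $P_X(z) = e^{\lambda(z-1)}$ and finite differences; $\lambda$ is a made-up value.

```python
import math

# Sketch: verify the pgf mean/variance identities numerically for a Poisson
# pgf P(z) = exp(lam * (z - 1)); both results should come out close to lam.
lam = 3.0
P = lambda z: math.exp(lam * (z - 1.0))

h = 1e-5
P1 = (P(1 + h) - P(1 - h)) / (2 * h)             # P'(1)
P2 = (P(1 + h) - 2 * P(1) + P(1 - h)) / (h * h)  # P''(1)

print(P1)                 # ~ lam = E(X)
print(P2 + P1 - P1 ** 2)  # ~ lam = Var(X)
```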

7. Terms: scale parameter; parametric distribution family; k-point mixture; variable-component mixture distribution; data-dependent distribution; equilibrium distribution; survival function and hazard rate of the equilibrium distribution; survival function based on the equilibrium distribution (in terms of the mean excess loss); coherent risk measure; Value-at-Risk

8.
parametric distribution family: a set of parametric distributions that are related in some meaningful way.
scale parameter: for random variables with nonnegative support, a scale parameter is a parameter of a scale distribution that meets two conditions. First, when a member of the scale distribution is multiplied by a positive constant, the scale parameter is multiplied by the same constant. Second, when a member of the scale distribution is multiplied by a positive constant, all other parameters are unchanged.
k-point mixture: a random variable $Y$ is a k-point mixture of the random variables $X_1, X_2, \ldots, X_k$ if its cdf is given by $F_Y(y) = a_1 F_{X_1}(y) + a_2 F_{X_2}(y) + \cdots + a_k F_{X_k}(y)$, where all $a_j > 0$ and $a_1 + a_2 + \cdots + a_k = 1$.
variable-component mixture distribution: a distribution function that can be written as $F(x) = \sum_{j=1}^{K} a_j F_j(x)$, with $\sum_{j=1}^{K} a_j = 1$ and $a_j > 0$, where the number of components $K$ is not fixed in advance.
data-dependent distribution: a distribution that is at least as complex as the data or knowledge that produced it; the number of "parameters" increases as the number of data points or amount of knowledge increases.
equilibrium distribution: assume $X$ is continuous with density $f(x)$, survival function $S(x)$, and mean $E(X)$. The equilibrium distribution has density $f_e(x) = \dfrac{S(x)}{E(X)}$, $x \ge 0$.
survival function and hazard rate of the equilibrium distribution: $S_e(x) = \int_x^\infty f_e(t)\,dt = \dfrac{\int_x^\infty S(t)\,dt}{E(X)}$ and $h_e(x) = \dfrac{f_e(x)}{S_e(x)} = \dfrac{S(x)}{\int_x^\infty S(t)\,dt} = \dfrac{1}{e(x)}$, $x \ge 0$.
survival function based on the equilibrium distribution: $S(x) = \dfrac{e(0)}{e(x)} \exp\!\left[-\int_0^x \dfrac{dt}{e(t)}\right]$.
Value-at-Risk: let $X$ denote a loss random variable. The Value-at-Risk of $X$ at the 100p% level, denoted $\operatorname{VaR}_p(X)$ or $\pi_p$, is the 100p percentile (or quantile) of the distribution of $X$; for continuous distributions it is the value $\pi_p$ satisfying $\Pr(X > \pi_p) = 1 - p$.
coherent risk measure: a risk measure $\rho$ satisfying
1. Subadditivity: $\rho(X+Y) \le \rho(X) + \rho(Y)$.
2. Monotonicity: if $X \le Y$ for all possible outcomes, then $\rho(X) \le \rho(Y)$.
3. Positive homogeneity: for any positive constant $c$, $\rho(cX) = c\,\rho(X)$.
4. Translation invariance: for any positive constant $c$, $\rho(X+c) = \rho(X) + c$.
VaR is not coherent, as it does not meet the subadditivity requirement in some cases.

9. Terms: Tail-Value-at-Risk; given $F(x)$ and $f(x)$, find $F_Y(y)$ when $Y = \theta X$; when $Y = X^{1/\tau}$ (transformed, inverse, inverse transformed); when $Y = e^X$; when $Y = g(X)$; mixture distribution $F_X(x)$; raw moments of a mixture

10.
Tail-Value-at-Risk: let $X$ denote a loss random variable. The Tail-Value-at-Risk of $X$ at the 100p% security level, denoted $\operatorname{TVaR}_p(X)$, is the expected loss given that the loss exceeds the 100p percentile (or quantile) of the distribution of $X$. For continuous distributions it can be expressed as $\operatorname{TVaR}_p(X) = E(X \mid X > \pi_p) = \dfrac{\int_{\pi_p}^{\infty} x f(x)\,dx}{1 - F(\pi_p)}$.
$Y = \theta X$: $F_Y(y) = F_X\!\left(\dfrac{y}{\theta}\right)$, $f_Y(y) = \dfrac{1}{\theta} f_X\!\left(\dfrac{y}{\theta}\right)$.
$Y = X^{1/\tau}$ with $\tau > 0$: the transformed distribution, $F_Y(y) = F_X(y^\tau)$, $f_Y(y) = \tau y^{\tau-1} f_X(y^\tau)$.
$Y = X^{1/\tau}$ with $\tau = -1$: the inverse distribution; with $\tau < 0$ and $\tau \ne -1$: the inverse transformed distribution.
$Y = e^X$: $F_Y(y) = F_X(\ln y)$, $f_Y(y) = \dfrac{1}{y} f_X(\ln y)$.
$Y = g(X)$ with $g$ increasing and $h(y) = g^{-1}(y)$: $F_Y(y) = F_X[h(y)]$, $f_Y(y) = |h'(y)|\, f_X[h(y)]$.
mixture distribution: $F_X(x) = \int F_{X\mid\Lambda}(x \mid \lambda)\, f_\Lambda(\lambda)\,d\lambda$.
raw moments of a mixture: $E(X^k) = E[E(X^k \mid \Lambda)]$.
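A sketch (assumed exponential losses, made-up $\theta$ and $p$) computing $\operatorname{VaR}_p$ in closed form and $\operatorname{TVaR}_p$ by crude numerical integration of the formula above; for the exponential the exact answer is $\pi_p + \theta$.

```python
import math

# Sketch: VaR and TVaR for an exponential loss with F(x) = 1 - exp(-x/theta).
theta, p = 1000.0, 0.95
var_p = -theta * math.log(1.0 - p)   # pi_p solves Pr(X > pi_p) = 1 - p

# TVaR_p = (integral of x f(x) from pi_p to infinity) / (1 - F(pi_p)),
# approximated here by a left Riemann sum.
f = lambda x: math.exp(-x / theta) / theta
dx = 0.5
num = sum((var_p + i * dx) * f(var_p + i * dx) * dx for i in range(200000))
print(var_p, num / (1.0 - p))        # TVaR ~ var_p + theta
```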

11. Terms: variance of a mixture; important mixtures: $Y \mid \Lambda \sim \text{Poisson}(\Lambda)$ with $\Lambda \sim \text{Gamma}(\alpha, \theta)$; $Y \mid \Lambda \sim \text{Exponential}(\Lambda)$ with $\Lambda \sim \text{Inv.Gamma}(\alpha, \theta)$; $Y \mid \Lambda \sim \text{Inv.Exponential}(\Lambda)$ with $\Lambda \sim \text{Gamma}(\alpha, \theta)$; $Y \mid \Lambda \sim \text{Normal}(\Lambda, \sigma_c^2)$ with $\Lambda \sim \text{Normal}(\mu, \sigma_d^2)$; k-component spliced distribution; linear exponential family; normalizing constant; canonical parameter; linear exponential family: mean

12.
variance of a mixture: $\operatorname{Var}(X) = E[\operatorname{Var}(X \mid \Lambda)] + \operatorname{Var}[E(X \mid \Lambda)]$.
Poisson$(\Lambda)$ with $\Lambda \sim \text{Gamma}(\alpha, \theta)$: Neg.Bin$(r = \alpha,\ \beta = \theta)$.
Exponential$(\Lambda)$ with $\Lambda \sim \text{Inv.Gamma}(\alpha, \theta)$: Pareto$(\alpha, \theta)$.
Inv.Exponential$(\Lambda)$ with $\Lambda \sim \text{Gamma}(\alpha, \theta)$: Inv.Pareto$(\tau = \alpha,\ \theta)$.
Normal$(\Lambda, \sigma_c^2)$ with $\Lambda \sim \text{Normal}(\mu, \sigma_d^2)$: Normal$(\mu,\ \sigma_c^2 + \sigma_d^2)$.
k-component spliced distribution: a density function that can be expressed as
$f_X(x) = \begin{cases} a_1 f_1(x), & c_0 < x < c_1 \\ a_2 f_2(x), & c_1 < x < c_2 \\ \quad\vdots \\ a_k f_k(x), & c_{k-1} < x < c_k \end{cases}$
linear exponential family: $f(x; \theta) = \dfrac{p(x)\, e^{r(\theta)x}}{q(\theta)}$, where $q(\theta)$ is the normalizing constant and $r(\theta)$ is the canonical parameter.
linear exponential family, mean: $E(X) = \mu(\theta) = \dfrac{q'(\theta)}{r'(\theta)\, q(\theta)}$.
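A simulation sketch of the first mixture above: draw $\Lambda \sim \text{Gamma}(\alpha, \theta)$, then $Y \sim \text{Poisson}(\Lambda)$, and compare with the Negative Binomial moments $r\beta$ and $r\beta(1+\beta)$. The parameter values and the sampler are assumptions made for illustration.

```python
import math, random

# Sketch: Poisson(Lambda) with Lambda ~ Gamma(alpha, theta) should match
# Negative Binomial(r = alpha, beta = theta) in mean and variance.
random.seed(1)
alpha, theta = 2.0, 3.0

def poisson(mu):                      # simple Poisson sampler (product method)
    l, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

draws = [poisson(random.gammavariate(alpha, theta)) for _ in range(100000)]
m = sum(draws) / len(draws)
v = sum((d - m) ** 2 for d in draws) / len(draws)
print(m, alpha * theta)                # ~ r * beta
print(v, alpha * theta * (1 + theta))  # ~ r * beta * (1 + beta)
```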

13. Terms: linear exponential family: variance; (a,b,0) class of distributions; (a,b,1) class of distributions; Poisson (a,b,0) specification; Binomial (a,b,0) specification; Negative Binomial (a,b,0) specification; Geometric (a,b,0) specification; Geometric relation to the Negative Binomial; memoryless; $E[(1+r)X \wedge c]$

14.
linear exponential family, variance: $\operatorname{Var}(X) = v(\theta) = \dfrac{\mu'(\theta)}{r'(\theta)}$.
(a,b,0) class: let $p_k$ be the pf of a discrete random variable. It is a member of the (a,b,0) class of distributions provided there exist constants $a$ and $b$ such that $\dfrac{p_k}{p_{k-1}} = a + \dfrac{b}{k}$, $k = 1, 2, 3, \ldots$
(a,b,1) class: let $p_k$ be the pf of a discrete random variable. It is a member of the (a,b,1) class provided the same recursion holds for $k = 2, 3, 4, \ldots$, so that $p_0$ is free. It is called zero-truncated if $p_0 = 0$, and zero-modified if $p_0 > 0$; a zero-modified distribution is a mixture of an (a,b,0) distribution and a degenerate distribution with $p_0 = 1$ (a.k.a. the truncated distribution with zeros added).
Poisson: $a = 0$, $b = \lambda$, $p_0 = e^{-\lambda}$.
Binomial: $a = -\dfrac{q}{1-q}$, $b = (m+1)\dfrac{q}{1-q}$, $p_0 = (1-q)^m$.
Negative Binomial: $a = \dfrac{\beta}{1+\beta}$, $b = (r-1)\dfrac{\beta}{1+\beta}$, $p_0 = (1+\beta)^{-r}$.
Geometric: $a = \dfrac{\beta}{1+\beta}$, $b = 0$, $p_0 = (1+\beta)^{-1}$; the Geometric is the Negative Binomial with parameter $r = 1$.
memoryless: $\Pr(X > x + y \mid X > x) = \Pr(X > y)$. The Geometric and Exponential distributions are both examples of memoryless distributions.
$E[(1+r)X \wedge c] = (1+r)\,E[X \wedge c^*]$, where $c^* = \dfrac{c}{1+r}$.
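A sketch generating (a,b,0) probabilities from the recursion above; shown for the Poisson specification ($a = 0$, $b = \lambda$), with the other members differing only in $a$, $b$, and $p_0$. $\lambda$ is a made-up value.

```python
import math

# Sketch: build p_0, p_1, ..., p_9 from p_k / p_{k-1} = a + b/k.
lam = 2.0
a, b, p = 0.0, lam, math.exp(-lam)   # Poisson: a = 0, b = lambda, p_0 = e^{-lambda}
probs = [p]
for k in range(1, 10):
    p *= a + b / k
    probs.append(p)
print(probs)       # matches exp(-lam) * lam^k / k!
print(sum(probs))  # close to 1 for lam = 2
```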

15. Terms: per-loss variable; per-payment variable; relationship between per-loss and per-payment; relationship between per-loss and per-payment: variance; franchise deductible expectation; ordinary deductible expectation; the loss elimination ratio; coinsurance; coinsurance, deductible and limits variable; coinsurance, deductible and limits variable: expectation

16.
per-payment variable: $Y^P = \begin{cases} \text{undefined}, & X \le d \\ X - d, & X > d \end{cases}$
per-loss variable: $Y^L = \begin{cases} 0, & X \le d \\ X - d, & X > d \end{cases}$
relationship between per-loss and per-payment: for any function $g$ with $g(0) = 0$, $E[g(Y^P)] = \dfrac{E[g(Y^L)]}{S_X(d)}$; in particular $E(Y^P) = \dfrac{E(Y^L)}{S_X(d)}$.
variance: $\operatorname{Var}(Y^P) = \dfrac{E[(Y^L)^2]}{S_X(d)} - \left(\dfrac{E(Y^L)}{S_X(d)}\right)^2$. Note $\operatorname{Var}(Y^P) \ne \dfrac{\operatorname{Var}(Y^L)}{S_X(d)}$, since the variance is not of the form $E[g(Y)]$ with $g(0) = 0$.
ordinary deductible, expectation: $E(Y^L) = E(X) - E(X \wedge d)$.
franchise deductible: modifies the ordinary deductible by adding the deductible whenever there is a positive amount paid. For a franchise deductible the expected cost per loss is $E(X) - E(X \wedge d) + d[1 - F(d)]$.
loss elimination ratio: $\operatorname{LER} = \dfrac{E(X) - [E(X) - E(X \wedge d)]}{E(X)} = \dfrac{E(X \wedge d)}{E(X)}$.
coinsurance: if coinsurance is the only modification, the loss variable $X$ becomes the payment variable $Y = \alpha X$.
coinsurance, deductible and limits variable (coinsurance $\alpha$, deductible $d$, limit $u$, inflation rate $r$; write $d^* = \dfrac{d}{1+r}$, $u^* = \dfrac{u}{1+r}$):
$Y^L = \begin{cases} 0, & X < d^* \\ \alpha[(1+r)X - d], & d^* \le X < u^* \\ \alpha(u - d), & X \ge u^* \end{cases}$
expectation: $E(Y^L) = \alpha(1+r)\,[E(X \wedge u^*) - E(X \wedge d^*)]$.
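A sketch of the limited expected value and the loss elimination ratio under the empirical model; the losses and deductible are made-up values.

```python
# Sketch: empirical E(X ^ d) and LER = E(X ^ d) / E(X) for a deductible d.
losses = [120, 300, 450, 800, 2500, 5000]
d = 500

def lev(xs, u):
    return sum(min(x, u) for x in xs) / len(xs)  # E(X ^ u), empirical

mean = sum(losses) / len(losses)
print(lev(losses, d))         # limited expected value E(X ^ d)
print(lev(losses, d) / mean)  # loss elimination ratio
```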

17. Terms: coinsurance, deductible and limits variable: 2nd raw moment; individual risk model; collective risk model (compound model); compound model mean; compound model variance; compound model with $N = \text{Poisson}(\lambda)$; stop-loss insurance; $E[(S-d)_+]$ when $\Pr(a < S < b) = 0$ and $a \le d \le b$; theorem to calculate $E[(S-d)_+]$ using a discrete probability function with equally spaced nodes of span $h$, $\Pr(S = kh) = f_k$; convolution method ($X_1 + X_2$)

18.
2nd raw moment (coinsurance, deductible, limits, inflation; $d^*$ and $u^*$ as on the previous card): $E[(Y^L)^2] = \alpha^2(1+r)^2\left\{E[(X \wedge u^*)^2] - E[(X \wedge d^*)^2] - 2d^*E(X \wedge u^*) + 2d^*E(X \wedge d^*)\right\}$.
individual risk model: represents the aggregate loss as a sum, $S = X_1 + \cdots + X_n$, of a fixed number $n$ of insurance contracts. The loss amounts $(X_1, \ldots, X_n)$ are assumed to be independent but are not assumed to be identically distributed. The distribution of the $X_j$'s usually has a probability mass at zero, corresponding to the probability of no loss or payment.
collective risk model (compound model): $S = X_1 + \cdots + X_N$ with:
1. Conditional on $N = n$, the random variables $X_1, X_2, \ldots, X_n$ are i.i.d.
2. Conditional on $N = n$, the common distribution of $X_1, X_2, \ldots, X_n$ does not depend on $n$.
3. The distribution of $N$ does not depend in any way on the values of $X_1, X_2, \ldots$
compound model mean: $E[S] = E[E(S \mid N)] = E(N)E(X) = \mu_N \mu_X$.
compound model variance: $\operatorname{Var}[S] = \operatorname{Var}[E(S \mid N)] + E[\operatorname{Var}(S \mid N)] = \operatorname{Var}(N)E(X)^2 + E(N)\operatorname{Var}(X) = \sigma_N^2\mu_X^2 + \mu_N\sigma_X^2$.
$N = \text{Poisson}(\lambda)$: $E[S] = \mu_N\mu_X = \lambda\mu_X$ and $\operatorname{Var}[S] = \lambda(\sigma_X^2 + \mu_X^2) = \lambda E(X^2)$.
stop-loss insurance: insurance on the aggregate losses, subject to a deductible. The expected cost of this insurance is called the net stop-loss premium and can be computed as $E[(S-d)_+]$, where $d$ is the deductible and $(\cdot)_+$ means the value in parentheses if it is positive and zero otherwise.
if $\Pr(a < S < b) = 0$, then for $a \le d \le b$: $E[(S-d)_+] = \dfrac{b-d}{b-a}E[(S-a)_+] + \dfrac{d-a}{b-a}E[(S-b)_+]$.
discrete with equally spaced nodes ($\Pr(S = kh) = f_k$, $k = 0, 1, \ldots$): provided $d = jh$ with $j$ a nonnegative integer, $E[(S-jh)_+] = h\sum_{m=0}^{\infty}\{1 - F_S[(m+j)h]\}$ and $E\{[S-(j+1)h]_+\} = E[(S-jh)_+] - h[1 - F_S(jh)]$.
convolution method ($X_1 + X_2$): discrete, $f_S(k) = \sum_{j} f_{X_1}(j)\, f_{X_2}(k-j)$; continuous, $f_S(s) = \int f_{X_1}(t)\, f_{X_2}(s-t)\,dt$ and $F_S(s) = \int F_{X_1}(s-t)\, f_{X_2}(t)\,dt$.
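A simulation sketch of the compound Poisson results above, with exponential severities assumed for illustration (so $E(X^2) = 2\mu_X^2$); all parameter values are made up.

```python
import math, random

# Sketch: S = X_1 + ... + X_N, N ~ Poisson(lam), X ~ exponential(mean mu_x);
# check E[S] = lam * mu_x and Var[S] = lam * E(X^2).
random.seed(7)
lam, mu_x = 4.0, 250.0

def poisson(mu):
    l, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

sims = []
for _ in range(50000):
    n = poisson(lam)
    sims.append(sum(random.expovariate(1.0 / mu_x) for _ in range(n)))

m = sum(sims) / len(sims)
v = sum((s - m) ** 2 for s in sims) / len(sims)
print(m, lam * mu_x)             # E[S]
print(v, lam * 2.0 * mu_x ** 2)  # Var[S] = lam * E(X^2), E(X^2) = 2 mu_x^2
```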

19. Terms: convolution method ($X_1 + \cdots + X_n$); bias; asymptotically unbiased; consistent; mean-squared error (MSE); uniformly minimum variance unbiased estimator (UMVUE); confidence interval; significance level; uniformly most powerful; p-value

20.
convolution method ($X_1 + \cdots + X_n$): $F_X^{*n}(x) = \int_0^x F_X^{*(n-1)}(x-t)\, f_X(t)\,dt$ and $f_X^{*n}(x) = \int_0^x f_X^{*(n-1)}(x-t)\, f_X(t)\,dt$.
bias: $\operatorname{bias}_{\hat\theta}(\theta) = E(\hat\theta \mid \theta) - \theta$.
asymptotically unbiased: $\lim_{n\to\infty} E(\hat\theta_n \mid \theta) = \theta$.
consistent: $\lim_{n\to\infty} \Pr(|\hat\theta_n - \theta| > \delta) = 0$ for all $\delta > 0$.
mean-squared error: $\operatorname{MSE}_{\hat\theta}(\theta) = E[(\hat\theta - \theta)^2 \mid \theta] = \operatorname{Var}(\hat\theta \mid \theta) + [\operatorname{bias}_{\hat\theta}(\theta)]^2$.
UMVUE: an estimator $\hat\theta$ is called a uniformly minimum variance unbiased estimator if it is unbiased and, for any true value of $\theta$, there is no other unbiased estimator that has a smaller variance.
confidence interval: a $100(1-\alpha)\%$ confidence interval for a parameter $\theta$ is a pair of random values, $L$ and $U$, computed from a random sample such that $\Pr(L \le \theta \le U) \ge 1 - \alpha$ for all $\theta$.
significance level: the significance level of a hypothesis test is the probability of making a Type I error given that the null hypothesis is true. If it can be true in more than one way, the level of significance is the maximum of such probabilities. The significance level is usually denoted by the letter $\alpha$.
uniformly most powerful: a hypothesis test is uniformly most powerful if no other test exists that has the same or lower significance level and, for a particular value within the alternative hypothesis, a smaller probability of making a Type II error.
p-value: the smallest level of significance at which $H_0$ would be rejected when a specified test procedure is used on a given data set. Once the p-value has been determined, the conclusion at any particular level $\alpha$ results from comparing the p-value to $\alpha$:
1. p-value $\le \alpha$: reject $H_0$ at level $\alpha$.
2. p-value $> \alpha$: do not reject $H_0$ at level $\alpha$.
[Probability and Statistics, Devore, 2000]

21. Terms: log-transformed confidence interval; empirical distribution; $F_n(x)$ and $S_n(x)$; kernel smoothed distribution; data set: variables; data summary; $E[S_n(x)]$; $\operatorname{Var}[S_n(x)]$; sample variance; empirical estimate of the variance; empirical estimate of $E[(X \wedge u)^k]$

22.
empirical distribution: obtained by assigning probability $1/n$ to each data point. $F_n(x) = \dfrac{\text{number of observations} \le x}{n}$ and $S_n(x) = 1 - F_n(x)$.
log-transformed confidence interval: the $100(1-\alpha)\%$ log-transformed confidence interval for $S_n(t)$ is $\left(S_n(t)^{U},\ S_n(t)^{1/U}\right)$, where $U = \exp\!\left[\dfrac{z_{\alpha/2}\sqrt{\operatorname{Var}[S_n(t)]}}{S_n(t)\,\ln S_n(t)}\right]$.
kernel smoothed distribution: obtained by replacing each data point with a continuous random variable and then assigning probability $1/n$ to each such random variable. The random variables used must be identical except for a location or scale change that is related to the associated data point.
data set variables: 1. $n$: number of insureds; 2. $d_i$: entry (truncation) time; 3. $x_i$: death time; 4. $u_i$: censoring time.
data summary: 1. $m$: number of death points; 2. $y_j$: death point time; 3. $s_j$: number of deaths at time $y_j$; 4. $r_j$: number in the risk set at time $y_j$.
$E[S_n(x)]$ and $\operatorname{Var}[S_n(x)]$: with $Y$ = number of sample observations greater than $x$, $S_n(x) = \dfrac{Y}{n}$, so $E[S_n(x)] = S(x)$ and $\operatorname{Var}[S_n(x)] = \dfrac{S(x)[1 - S(x)]}{n}$.
sample variance: $\dfrac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar x)^2$; empirical estimate of the variance: $\dfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar x)^2$.
empirical estimate of $E[(X \wedge u)^k]$: $\dfrac{1}{n}\left[\sum_{x_i \le u} x_i^k + u^k \times (\text{number of } x_i\text{'s} > u)\right]$.
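A sketch of the empirical distribution and survival functions as defined above; the sample is made up.

```python
# Sketch: F_n(x) = (# observations <= x) / n and S_n(x) = 1 - F_n(x).
data = [3, 5, 5, 8, 12, 15]
n = len(data)

def F_n(x):
    return sum(1 for d in data if d <= x) / n

def S_n(x):
    return 1.0 - F_n(x)

print(F_n(5), S_n(5))  # 0.5, 0.5 for this sample
```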

23. Terms: cumulative hazard rate function; Nelson-Åalen estimate; variance of the Nelson-Åalen estimate; Kaplan-Meier product-limit estimator; Greenwood approximation to the variance of the Kaplan-Meier product-limit estimator; log-transformed interval for the Nelson-Åalen estimate; method of moments; percentile matching; smoothed empirical estimate; likelihood function; log-likelihood function

24.
cumulative hazard rate function: $H(x) = -\ln S(x)$.
Nelson-Åalen estimate: $\hat H(x) = \begin{cases} 0, & x < y_1 \\ \sum_{i=1}^{j-1} \dfrac{s_i}{r_i}, & y_{j-1} \le x < y_j,\ j = 2, \ldots, k \\ \sum_{i=1}^{k} \dfrac{s_i}{r_i}, & x \ge y_k \end{cases}$
variance of the Nelson-Åalen estimate: $\widehat{\operatorname{Var}}[\hat H(y_j)] = \sum_{i=1}^{j} \dfrac{s_i}{r_i^2}$.
Kaplan-Meier product-limit estimator: $S_n(t) = \begin{cases} 1, & 0 \le t < y_1 \\ \prod_{i=1}^{j-1} \left(\dfrac{r_i - s_i}{r_i}\right), & y_{j-1} \le t < y_j,\ j = 2, \ldots, k \\ \prod_{i=1}^{k} \left(\dfrac{r_i - s_i}{r_i}\right), & t \ge y_k \end{cases}$
Greenwood approximation: $\widehat{\operatorname{Var}}[S_n(y_j)] \doteq [S_n(y_j)]^2 \sum_{i=1}^{j} \dfrac{s_i}{r_i(r_i - s_i)}$.
log-transformed interval for the Nelson-Åalen estimate: $\left(\dfrac{\hat H(t)}{U},\ \hat H(t)\,U\right)$, where $U = \exp\!\left[\dfrac{z_{\alpha/2}\sqrt{\widehat{\operatorname{Var}}[\hat H(t)]}}{\hat H(t)}\right]$.
method of moments: solve $\mu'_k(\theta) = \hat\mu'_k$, $k = 1, 2, \ldots, p$.
percentile matching: solve $F(\hat\pi_{g_k} \mid \theta) = g_k$, $k = 1, 2, \ldots, p$.
smoothed empirical estimate: $\hat\pi_g = (1-h)x_{(j)} + h\,x_{(j+1)}$, where $j = \lfloor (n+1)g \rfloor$ and $h = (n+1)g - j$. Here $\lfloor \cdot \rfloor$ indicates the greatest integer function and $x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)}$ are the order statistics from the sample.
likelihood function: $L(\theta) = \prod_{j=1}^{n} \Pr(X_j \in A_j \mid \theta)$; log-likelihood function: $l(\theta) = \ln L(\theta)$.
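A sketch computing the Kaplan-Meier and Nelson-Åalen estimates from a made-up $(y_j, s_j, r_j)$ summary, following the product and sum formulas above; note that $e^{-\hat H}$ tracks the product-limit estimate.

```python
import math

# Sketch: product-limit and cumulative-hazard estimates at each death time.
summary = [(1.0, 1, 30), (3.0, 2, 27), (5.0, 1, 20)]  # (y_j, s_j, r_j), made up

S, H = 1.0, 0.0
for y, s, r in summary:
    S *= (r - s) / r   # Kaplan-Meier factor (r_j - s_j) / r_j
    H += s / r         # Nelson-Aalen increment s_j / r_j
    print(y, S, H, math.exp(-H))  # S_KM(y_j) vs exp(-H_NA(y_j))
```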

25. Terms: information function; variance of the maximum likelihood estimate; delta method (single variable); delta method (general); non-normal confidence interval; p-p plot; D(x) plot; Kolmogorov-Smirnov test; Anderson-Darling test; chi-square goodness-of-fit

26.
information function: $I(\theta) = -E\!\left[\dfrac{\partial^2}{\partial\theta^2}\ln L(\theta)\right] = E\!\left[\left(\dfrac{\partial}{\partial\theta}\ln L(\theta)\right)^{\!2}\right] = -n\int f(x;\theta)\,\dfrac{\partial^2}{\partial\theta^2}\ln f(x;\theta)\,dx$.
variance of the maximum likelihood estimate: $\operatorname{Var}(\hat\theta) = [I(\theta)]^{-1}$.
delta method (single variable): let $\hat\theta_n$ be a parameter estimated using a sample of size $n$, asymptotically normal with mean $\theta$ and variance $\sigma^2/n$. Then $g(\hat\theta_n)$ is asymptotically normal with $E[g(\hat\theta_n)] \approx g(\theta)$ and $\operatorname{Var}[g(\hat\theta_n)] \approx [g'(\theta)]^2\,\dfrac{\sigma^2}{n}$.
delta method (general): let $\hat\theta_n = (\hat\theta_{1n}, \ldots, \hat\theta_{kn})^T$ be a multivariate parameter estimate of dimension $k$ based on a sample of size $n$, asymptotically normal with mean $\theta$ and covariance matrix $\Omega/n$. Then $g(\hat\theta_n)$ is asymptotically normal with $E[g(\hat\theta_n)] \approx g(\theta)$ and $\operatorname{Var}[g(\hat\theta_n)] \approx (\nabla g)^T\,\dfrac{\Omega}{n}\,\nabla g$.
non-normal confidence interval: $\left\{\theta : l(\theta) \ge l(\hat\theta) - \dfrac{c}{2}\right\}$, where the first term is the loglikelihood value at the maximum likelihood estimate and $c$ is the $1-\alpha$ percentile from the chi-square distribution with degrees of freedom equal to the number of estimated parameters.
p-p plot: plot the points $(F_n(x_j), F(x_j))$, where $F_n(x_j) = \dfrac{j}{n+1}$.
D(x) plot: $D(x) = F_n(x) - F(x)$, where $F_n(x_j) = \dfrac{j}{n}$.
Kolmogorov-Smirnov test: $D = \max_{t \le x \le u} |F_n(x) - F(x)|$.
Anderson-Darling test: $A^2 = n\int_t^u \dfrac{[F_n(x) - F(x)]^2}{F(x)[1 - F(x)]}\, f(x)\,dx = -nF(u) + n\sum_{j=0}^{k} [1 - F_n(y_j)]^2\big(\ln[1 - F(y_j)] - \ln[1 - F(y_{j+1})]\big) + n\sum_{j=1}^{k} F_n(y_j)^2\big(\ln F(y_{j+1}) - \ln F(y_j)\big)$, with $y_0 = t$ and $y_{k+1} = u$.
chi-square goodness-of-fit: $\chi^2 = \sum_{j=1}^{k} \dfrac{n(\hat p_j - p_{nj})^2}{\hat p_j} = \sum_{j=1}^{k} \dfrac{(E_j - O_j)^2}{E_j}$. The critical values for this test come from the chi-square distribution with $k - 1 - r$ degrees of freedom, where $k$ is the number of terms in the sum and $r$ is the number of estimated parameter values.

27. Terms: likelihood ratio test; Schwarz Bayesian Criterion; full credibility (single variable case); full credibility (Poisson); full credibility for compound distributions: number of observations of $S_i$, expected total losses $E[\sum_{i=1}^{n} S_i]$, and expected number of claims $E[\sum_{i=1}^{n} S_i]/\mu_Y$; the same three standards when $N = \text{Poisson}(\lambda)$

28.
likelihood ratio test: $T = 2\ln\!\left(\dfrac{L_1}{L_0}\right) = 2(\ln L_1 - \ln L_0)$, where $L_0 = L(\theta_0)$ and $L_1 = L(\theta_1)$. The critical values come from a chi-square distribution with degrees of freedom equal to the number of free parameters in $L_1$ less the number of free parameters in $L_0$.
Schwarz Bayesian Criterion: recommends that when ranking models a deduction of $(r/2)\ln n$ should be made from the loglikelihood value, where $r$ is the number of estimated parameters and $n$ is the sample size.
full credibility (single variable case): $n \ge n_0\left(\dfrac{\sigma}{\mu}\right)^{\!2} = n_0\,\dfrac{\operatorname{Var}[W]}{(E[W])^2} = n_0\,CV_W^2$, where $n_0 = \left(\dfrac{z_{(1+p)/2}}{k}\right)^{\!2}$.
full credibility (Poisson): $n \ge \dfrac{n_0}{\lambda}$, i.e. the expected number of claims satisfies $n\lambda \ge n_0$.
full credibility, compound distributions (frequency $N$, severity $Y$): number of observations of $S_i$: $n \ge n_0\,\dfrac{\sigma_N^2\mu_Y^2 + \mu_N\sigma_Y^2}{(\mu_N\mu_Y)^2}$; expected total losses: $n\mu_N\mu_Y \ge n_0\,\dfrac{\sigma_N^2\mu_Y^2 + \mu_N\sigma_Y^2}{\mu_N\mu_Y}$; expected number of claims: $n\mu_N \ge n_0\,\dfrac{\sigma_N^2\mu_Y^2 + \mu_N\sigma_Y^2}{\mu_N\mu_Y^2}$.
full credibility, compound distributions with $N = \text{Poisson}(\lambda)$: number of observations: $n \ge \dfrac{n_0}{\lambda}\left[1 + \dfrac{\sigma_Y^2}{\mu_Y^2}\right]$; expected number of claims: $n\lambda \ge n_0\left[1 + \dfrac{\sigma_Y^2}{\mu_Y^2}\right]$; expected total losses: $n\lambda\mu_Y \ge n_0\left[\mu_Y + \dfrac{\sigma_Y^2}{\mu_Y}\right]$.
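A sketch of the Poisson full-credibility standard with the common choices $p = 0.90$ and $k = 0.05$ (assumptions, not from the cards), which give the familiar $n_0 \approx 1082$ expected claims; the severity adjustment multiplies by $1 + CV_Y^2$ as above.

```python
# Sketch: full-credibility standards built from n_0 = (z_{(1+p)/2} / k)^2.
z = 1.645    # 95th percentile of the standard normal, for p = 0.90
k = 0.05
n0 = (z / k) ** 2
print(n0)    # ~ 1082 expected claims for full credibility of claim counts

cv_y2 = 1.0  # CV_Y^2 = 1, e.g. exponential severities (an assumption)
print(n0 * (1 + cv_y2))  # expected-claims standard for aggregate losses
```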

29. Terms: partial credibility; model distribution; joint distribution; prior distribution; marginal distribution; posterior distribution; predictive distribution; Bayes premium $E(x_{n+1} \mid x)$; Bayes premium when $E[X \mid \theta] = \theta$ (e.g. $X$ Poisson); $\int_0^\infty x^a e^{-cx}\,dx$

30.
partial credibility: $Q = ZW + (1-Z)M$, where $Z = \min\!\left(\sqrt{\dfrac{\text{information available}}{\text{information required for full credibility}}},\ 1\right)$.
model distribution: the probability distribution for the data as collected, given a particular value for the parameter. Its pdf is $m_{x\mid\theta} = f_{X\mid\Theta}(x \mid \theta) = \prod_{i=1}^{n} f_{X\mid\Theta}(x_i \mid \theta)$.
joint distribution: $j_{x,\theta} = f_{X,\Theta}(x, \theta) = m_{x\mid\theta}\,\pi(\theta) = f_{X\mid\Theta}(x \mid \theta)\,\pi(\theta)$.
prior distribution: a probability distribution over the space of possible parameter values, denoted $\pi(\theta)$; it represents our opinion concerning the relative chances that various values of $\theta$ are the true value of the parameter.
marginal distribution: $g_x = f_X(x) = \int j_{x,\theta}\,d\theta = \int f_{X\mid\Theta}(x \mid \theta)\,\pi(\theta)\,d\theta$.
posterior distribution: the conditional probability distribution of the parameters given the observed data. Its pdf is $p_{\theta\mid x} = \pi_{\Theta\mid X}(\theta \mid x) = \dfrac{j_{x,\theta}}{g_x} = \dfrac{f_{X\mid\Theta}(x \mid \theta)\,\pi(\theta)}{\int f_{X\mid\Theta}(x \mid \theta)\,\pi(\theta)\,d\theta}$.
predictive distribution: the conditional probability distribution of a new observation $y$ given the data $x = x_1, \ldots, x_n$. Its pdf is $f_{Y\mid X}(y \mid x) = \int f_{Y\mid\Theta}(y \mid \theta)\,\pi_{\Theta\mid X}(\theta \mid x)\,d\theta$.
Bayes premium: $E(x_{n+1} \mid x) = \int E(x_{n+1} \mid \theta)\,p_{\theta\mid x}\,d\theta$.
Bayes premium when $E[X \mid \theta] = \theta$ (e.g. $X$ Poisson): $E(x_{n+1} \mid x) = \int \theta\,p_{\theta\mid x}\,d\theta$, i.e. the mean of the posterior distribution.
$\int_0^\infty x^a e^{-cx}\,dx = \dfrac{\Gamma(a+1)}{c^{a+1}} = \dfrac{a!}{c^{a+1}}$ when $a$ is a nonnegative integer.
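A sketch of the Bayes premium for the Poisson-Gamma conjugate pair from the next two pages: the posterior is $\text{gamma}\!\left(\alpha + \sum x_i,\ \theta/(n\theta+1)\right)$ and the premium is its mean. The prior parameters and data are made up.

```python
# Sketch: Poisson data with a gamma prior; the Bayes premium E(x_{n+1} | x)
# is the posterior mean because E[X | lambda] = lambda for the Poisson.
alpha, theta = 3.0, 0.5
x = [2, 0, 1, 3]  # observed claim counts (made up)
n = len(x)

alpha_post = alpha + sum(x)
theta_post = theta / (n * theta + 1.0)
print(alpha_post, theta_post)   # posterior gamma parameters
print(alpha_post * theta_post)  # Bayes premium = posterior mean
```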

31. Terms: $\int_0^\infty \dfrac{e^{-c/x}}{x^k}\,dx$; conjugate prior Poisson-Gamma ($\lambda \sim \text{gamma}(\alpha, \theta)$, $x \sim \text{Poisson}(\lambda)$); conjugate prior Exponential-Inverse Gamma ($\lambda \sim \text{gamma}^{-1}(\alpha, \theta)$, $x \sim \exp(\lambda)$); conjugate prior Binomial-Beta ($q \sim \text{beta}(a, b, 1)$, $x \sim \text{bin}(m, q)$); conjugate prior Inverse Exponential-Gamma ($\lambda \sim \text{gamma}(\alpha, \theta)$, $x \sim \exp^{-1}(\lambda)$); conjugate prior Normal-Normal ($\lambda \sim \text{normal}(\mu, a^2)$, $x \sim \text{normal}(\lambda, \sigma^2)$); conjugate prior Uniform-Pareto ($\lambda \sim \text{single.pareto}(\alpha, \theta)$, $x \sim \text{uniform}(0, \lambda)$); hypothetical mean (collective premium); process variance; expected value of the hypothetical means (EVHM)

32.
$\int_0^\infty \dfrac{e^{-c/x}}{x^k}\,dx = \dfrac{\Gamma(k-1)}{c^{k-1}} = \dfrac{(k-2)!}{c^{k-1}}$ for $k > 1$ (integer $k \ge 2$ for the factorial form).
Poisson-Gamma: posterior is $\text{gamma}\!\left(\alpha + \sum x_i,\ \dfrac{\theta}{n\theta + 1}\right)$.
Exponential-Inverse Gamma: posterior is $\text{gamma}^{-1}\!\left(\alpha + n,\ \theta + \sum x_i\right)$.
Binomial-Beta: posterior is $\text{beta}\!\left(a + \sum x_i,\ b + km - \sum x_i,\ 1\right)$, where $k$ is the number of observations of $\text{bin}(m, q)$.
Inverse Exponential-Gamma: posterior is $\text{gamma}\!\left(\alpha + n,\ \left[\dfrac{1}{\theta} + \sum \dfrac{1}{x_i}\right]^{-1}\right)$.
Normal-Normal: posterior is $\text{normal}\!\left(\left[\dfrac{\sum x_i}{\sigma^2} + \dfrac{\mu}{a^2}\right]\Big/\left[\dfrac{n}{\sigma^2} + \dfrac{1}{a^2}\right],\ \left[\dfrac{n}{\sigma^2} + \dfrac{1}{a^2}\right]^{-1}\right)$.
Uniform-Pareto: posterior is $\text{single.pareto}\!\left(\alpha + n,\ \max(x_1, \ldots, x_n, \theta)\right)$.
hypothetical mean (collective premium): $\mu(\theta) = E(X_{ij} \mid \Theta_i = \theta)$; expected value of the hypothetical means (EVHM): $\mu = E[\mu(\Theta)]$.
process variance: $v(\theta) = m_{ij}\operatorname{Var}(X_{ij} \mid \Theta_i = \theta)$ (with exposure $m_{ij}$; for unit exposures, $v(\theta) = \operatorname{Var}(X_{ij} \mid \Theta_i = \theta)$).

33. Terms: expected value of the process variance (EVPV); variance of the hypothetical means (VHM); Bühlmann's k (credibility coefficient); Bühlmann credibility factor; Bühlmann credibility premium; $\operatorname{Var}(X)$ as a function of EVPV and VHM; non-parametric estimation: $\mu$; non-parametric estimation: $v$; non-parametric estimation: $a$ (method using $c$); non-parametric estimation: $a$ (Loss Models technique)

34.
expected value of the process variance (EVPV): $v = E[v(\Theta)]$.
variance of the hypothetical means (VHM): $a = \operatorname{Var}[\mu(\Theta)]$.
Bühlmann's k (credibility coefficient): $k = \dfrac{v}{a}$.
Bühlmann credibility factor: $Z_i = \dfrac{m_i}{m_i + v/a}$.
Bühlmann credibility premium: $Z_i \bar X_i + (1 - Z_i)\mu$.
$\operatorname{Var}(X) = a + v = \text{VHM} + \text{EVPV}$.
non-parametric estimation, $\mu$: $\hat\mu = \bar X$.
non-parametric estimation, $v$: $\hat v = \dfrac{\sum_{i=1}^{r}\sum_{j=1}^{n_i} m_{ij}\left(X_{ij} - \bar X_i\right)^2}{\sum_{i=1}^{r}(n_i - 1)}$.
non-parametric estimation, $a$ (Loss Models technique): $\hat a = \left(m - \dfrac{1}{m}\sum_{i=1}^{r} m_i^2\right)^{-1}\left[\sum_{i=1}^{r} m_i\left(\bar X_i - \bar X\right)^2 - \hat v(r-1)\right]$.
non-parametric estimation, $a$ (method using $c$): $c = \left[\dfrac{r}{r-1}\sum_{i=1}^{r}\dfrac{m_i}{m}\left(1 - \dfrac{m_i}{m}\right)\right]^{-1}$ and $\hat a = c\left[\dfrac{r}{r-1}\sum_{i=1}^{r}\dfrac{m_i}{m}\left(\bar X_i - \bar X\right)^2 - \dfrac{\hat v\,r}{m}\right]$.
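A sketch of non-parametric Bühlmann estimation in the equal-exposure case (all $m_{ij} = 1$, so $m_i = n$), using the $\hat\mu$, $\hat v$, $\hat a$ formulas above; the claim history is made up.

```python
# Sketch: Buhlmann credibility with r policyholders observed for n years each.
data = [[2, 3, 1, 2], [0, 1, 0, 1], [4, 5, 3, 4]]  # made-up claim counts
r, n = len(data), len(data[0])

xbar = [sum(row) / n for row in data]
mu = sum(xbar) / r                                        # grand mean
v = sum(sum((x - xb) ** 2 for x in row)
        for row, xb in zip(data, xbar)) / (r * (n - 1))   # EVPV estimate
a = sum((xb - mu) ** 2 for xb in xbar) / (r - 1) - v / n  # VHM estimate
Z = n / (n + v / a)                                       # credibility factor
print([Z * xb + (1 - Z) * mu for xb in xbar])             # credibility premiums
```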

35. Terms: non-parametric estimation: $a$ when $\mu$ is given; non-parametric estimation: $a$ when $\mu$ is given and the only data available are for policyholder $i$; inverse transform method; bootstrap estimate of the mean squared error; chi-square test when the number of claims is the sum of a number $n$ of i.i.d. random variables $x$

36.
non-parametric estimation, $a$ when $\mu$ is given: $\hat a = \sum_{i=1}^{r}\dfrac{m_i}{m}\left(\bar X_i - \mu\right)^2 - \dfrac{r}{m}\hat v$.
non-parametric estimation, $a$ when $\mu$ is given and only policyholder $i$'s data are available: $\hat v_i = \dfrac{\sum_{j=1}^{n_i} m_{ij}\left(X_{ij} - \bar X_i\right)^2}{n_i - 1}$ and $\hat a_i = \left(\bar X_i - \mu\right)^2 - \dfrac{\hat v_i}{m_i}$.
inverse transform method: $x = F_X^{-1}(\text{rand}(0,1))$ for a continuous distribution; for a discrete distribution, choose $x_j$ with $F(x_{j-1}) \le \text{rand}(0,1) < F(x_j)$.
bootstrap estimate of the mean squared error: with data $y = \{y_1, \ldots, y_n\}$ and a statistic $\theta$ from the empirical distribution function, draw resamples $x_{ij} = y_{\text{rand}_i(1,n)}$, $i = 1, \ldots, m$, $j = 1, \ldots, n$; compute $\hat\theta_i = g(x_i)$; then $\operatorname{MSE}(\hat\theta) = \dfrac{1}{m}\sum_{i=1}^{m}\left(\hat\theta_i - \theta\right)^2 = \operatorname{Var}(\hat\theta) + \operatorname{bias}_{\hat\theta}^2$.
chi-square test when each observed count is the sum of $n$ i.i.d. random variables $x$: $\chi^2 = \sum_{j=1}^{k}\dfrac{(E_j - O_j)^2}{V_j}$, where $E_j = nE(x)$ and $V_j = n\operatorname{Var}(x)$.
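A sketch of the bootstrap MSE estimate from the card above, applied to the sample mean; the data and resample count are made up.

```python
import random

# Sketch: bootstrap estimate of MSE(theta_hat) for the sample mean.
random.seed(0)
y = [1, 3, 4, 7, 10]
theta = sum(y) / len(y)   # statistic from the empirical distribution

m = 10000
boot = []
for _ in range(m):
    resample = [random.choice(y) for _ in range(len(y))]
    boot.append(sum(resample) / len(resample))
mse = sum((t - theta) ** 2 for t in boot) / m
print(mse)  # ~ empirical variance / n = 2.0 for this sample
```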
