
1 Conditional Probability

Let $A$ and $B$ be two events such that $P(B) > 0$; then $P(A \mid B) = P(A \cap B)/P(B)$.

Bayes' Theorem. Let $A$ and $B$ be two events such that $P(B) > 0$; then $P(A \mid B) = P(B \mid A)\,P(A)/P(B)$.

Theorem of total probability. Let $A_1, A_2, \ldots$ be a countable collection of mutually exclusive and exhaustive events, so that $A_i \cap A_j = \emptyset$ for $i \neq j$ and $\bigcup_i A_i = \Omega$; then
$$P(B) = \sum_i P(B \mid A_i)\, P(A_i).$$

2 Conditional Distributions

The conditional probability function of $X$ given $Y = y$ is $f_{X\mid Y}(x \mid y) = f_{X,Y}(x,y)/f_Y(y)$. If $X$ and $Y$ are independent, then $f_{X,Y}(x,y) = f_X(x)\,f_Y(y)$, which implies $f_{X\mid Y}(x \mid y) = f_X(x)$.

Rewriting $f_{X,Y}(x,y) = f_{X\mid Y}(x \mid y)\, f_Y(y)$, we obtain
$$f_X(x) = \int f_{X,Y}(x,y)\,dy = \int f_{X\mid Y}(x \mid y)\, f_Y(y)\,dy.$$
Application: $f_X(x) = \int f_{X\mid\Theta}(x \mid \theta)\, f_\Theta(\theta)\,d\theta$; $F_X(x) = \int F_{X\mid\Theta}(x \mid \theta)\, f_\Theta(\theta)\,d\theta$.

Note that $f_{X\mid Y}(x \mid y)\, f_Y(y) = f_{X,Y}(x,y) = f_{Y\mid X}(y \mid x)\, f_X(x)$, implying that
$$f_{X\mid Y}(x \mid y) = \frac{f_{Y\mid X}(y \mid x)\, f_X(x)}{f_Y(y)} \propto f_{Y\mid X}(y \mid x)\, f_X(x).$$

3 Conditional Expectation

Let $X$ be a discrete random variable such that $x_1, x_2, \ldots$ are the only values that $X$ takes on with positive probability. Define the conditional expectation of $X$ given that event $A$ has occurred, denoted by $E[X \mid A]$, as
$$E[X \mid A] = \sum_i x_i\, P[X = x_i \mid A].$$
Continuous case: $E[X \mid Y = y] = \int x\, f_{X\mid Y}(x \mid y)\,dx$.

Let $A_1, A_2, \ldots$ be a countable collection of mutually exclusive and exhaustive events, and let $X$ be a discrete random variable for which $E[X]$ exists. Then
$$E[X] = \sum_i E[X \mid A_i]\, P[A_i].$$
Continuous case: $E[X] = \int E[X \mid Y = y]\, f_Y(y)\,dy = E_Y[E_X(X \mid Y)]$.
$$E[h(X,Y) \mid Y = y] = \int h(x,y)\, f_{X\mid Y}(x \mid y)\,dx, \qquad E_Y\{E_X[h(X,Y) \mid Y]\} = E[h(X,Y)].$$
$$\mathrm{Var}[X \mid Y] = E\{[X - E(X \mid Y)]^2 \mid Y\} = E[X^2 \mid Y] - [E(X \mid Y)]^2,$$
$$E[\mathrm{Var}[X \mid Y]] = E[E(X^2 \mid Y)] - E\{[E(X \mid Y)]^2\} = E[X^2] - E\{[E(X \mid Y)]^2\},$$
$$\mathrm{Var}[E(X \mid Y)] = E\{[E(X \mid Y)]^2\} - \{E[E(X \mid Y)]\}^2 = E\{[E(X \mid Y)]^2\} - [E(X)]^2,$$
so that
$$\mathrm{Var}[X] = E[\mathrm{Var}(X \mid Y)] + \mathrm{Var}[E(X \mid Y)].$$
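The iterated-expectation and variance-decomposition identities above are easy to check by simulation. Below is a minimal Python sketch; the Gamma mixing distribution and the Poisson conditional model are illustrative choices, not taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mixing model: Y ~ Gamma(shape=2, scale=3), X | Y ~ Poisson(Y).
n = 1_000_000
y = rng.gamma(shape=2.0, scale=3.0, size=n)
x = rng.poisson(y)

# For this model the conditional moments are known in closed form:
#   E[X | Y] = Y  and  Var[X | Y] = Y.
e_x_given_y = y
var_x_given_y = y

print("E[X]                      ", x.mean())
print("E[E(X|Y)]                 ", e_x_given_y.mean())
print("Var[X]                    ", x.var())
print("E[Var(X|Y)] + Var[E(X|Y)] ", var_x_given_y.mean() + e_x_given_y.var())
```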

2 V ar[e(x Y )] E{[E(X Y )] 2 } {E[E(X Y )]} 2 E{[E(X Y )] 2 } [E(X)] 2 V ar[x] E[V ar(x Y )] + V ar[e(x Y )] 4 Noparametric Ubiased Estimators If X 1, X 2,..., X are idepedet but ot ecessarily idetical with commo mea µ E[X j ] ad commo variace σ 2 V ar[x j ], the ˆµ X 1 X j is a ubiased estimator of µ, ad ˆσ (X j X) 2 is a ubiased estimator of v 5 Full ad Partial Credibilities The followig Zs subject to Z 1; if Z 1 the a full credibility is assiged. The stadard for NO full credibility i terms of (λ 0 [y p /r] 2 ) (1) (frequecy case) the umber of policies is (use observed umber of policies for ) ( ) σx 2XN ( ) σn 2P oisso F λ 0 λ 0 λ 0 θ X θ N λ ( )1/2 ( ) 1/2 ( ) ˆθX XN 1/2 ( ˆθN P oisso ˆλ Z F λ 0 ˆσ X λ0 ˆσ N λ 0 ) 1/2; (2) (frequecy case) the expected umber of claims is (use observed umber of claims N j ˆλ for λ E[N j ] θ N ) λ λ 0 λ ( σx θ X ) 2XN ( ) σn 2 λ 0 λ 0 λ θ N λ σ2 N ( ) 1/2 { ˆθX ˆλ Z λ0 ˆσ X (λ 0 /ˆλ)ˆσ N 2 } 1/2P oisso P oisso λ 0 ( ˆλ λ 0 ) 1/2; (3) Severity case (based o compoud Poisso assumptio) the umber of policies is (use observed umber of policies for ) ( ) σx 2 σ λ 0 λ0 Nθ 2 Y 2 + θ N σy 2 θ X θnθ 2 Y 2 λ [ ( ) 0 σy 2 ] { 1+ Z λ θ Y (λ 0 /ˆλ)[1 + (ˆσ Y /ˆθ Y ) 2 ] P oisso (4) Severity case (based o compoud Poisso assumptio) the expected umber of claims is (use observed umber of claims N j ˆλ for λ E[N j ]) E[N j ] λ λ 0 [1 + ( σy θ Y ) 2 ] { ˆλ Z λ 0 [1 + (ˆσ Y /ˆθ Y ) 2 ] 2 } 1/2; } 1/2;

6 Predictive and Posterior Distributions

Assume we have observed $\vec X = \vec x$, where $\vec X = (X_1, \ldots, X_n)$ and $\vec x = (x_1, \ldots, x_n)$, and we want to set a rate to cover $X_{n+1}$. Let $\theta$ be the associated risk parameter ($\theta$ is unknown and comes from a random variable $\Theta$), and let $X_j$ have conditional probability density function $f_{X_j\mid\Theta}(x_j\mid\theta)$, $j = 1, \ldots, n, n+1$. We are interested in (1) $f_{X_{n+1}\mid\vec X}(x_{n+1}\mid\vec x)$, the predictive probability density, and (2) $\pi_{\Theta\mid\vec X}(\theta\mid\vec x)$, the posterior probability density.

The joint probability density of $\vec X$ and $\Theta$ is
$$f_{\vec X,\Theta}(\vec x,\theta) = f(x_1,\ldots,x_n\mid\theta)\,\pi(\theta) \;\overset{\text{ind.}}{=}\; \Big[\prod_{j=1}^n f_{X_j\mid\Theta}(x_j\mid\theta)\Big]\pi(\theta),$$
where $\pi(\theta)$ is called the prior probability density, and the probability density of $\vec X$ is
$$f_{\vec X}(\vec x) = \int \Big[\prod_{j=1}^n f_{X_j\mid\Theta}(x_j\mid\theta)\Big]\pi(\theta)\,d\theta,$$
implying that the posterior probability density is
$$\pi_{\Theta\mid\vec X}(\theta\mid\vec x) = \frac{f_{\vec X,\Theta}(\vec x,\theta)}{f_{\vec X}(\vec x)} = \frac{\big[\prod_{j=1}^n f_{X_j\mid\Theta}(x_j\mid\theta)\big]\pi(\theta)}{f_{\vec X}(\vec x)}.$$
The predictive probability density is
$$f_{X_{n+1}\mid\vec X}(x_{n+1}\mid\vec x) = \frac{f_{\vec X,X_{n+1}}(\vec x,x_{n+1})}{f_{\vec X}(\vec x)} = \int f_{X_{n+1}\mid\Theta}(x_{n+1}\mid\theta)\,\pi_{\Theta\mid\vec X}(\theta\mid\vec x)\,d\theta.$$

If $\theta$ is known, the hypothetical mean (the premium charged dependent on $\theta$) is
$$\mu_{n+1}(\theta) = E[X_{n+1}\mid\Theta=\theta] = \int x_{n+1}\, f_{X_{n+1}\mid\Theta}(x_{n+1}\mid\theta)\,dx_{n+1};$$
if $\theta$ is unknown, the mean of the hypothetical means, or the pure premium (the premium charged independent of $\theta$), is
$$\mu_{n+1} = E[X_{n+1}] = E[E(X_{n+1}\mid\Theta)] = E[\mu_{n+1}(\Theta)].$$
The mean of the predictive distribution (the Bayesian premium) is
$$E[X_{n+1}\mid\vec X=\vec x] = \int x_{n+1}\,f_{X_{n+1}\mid\vec X}(x_{n+1}\mid\vec x)\,dx_{n+1} = \int \mu_{n+1}(\theta)\,\pi_{\Theta\mid\vec X}(\theta\mid\vec x)\,d\theta = E[\mu_{n+1}(\Theta)\mid\vec X=\vec x],$$
that is, (the mean of the predictive distribution) $=$ (the mean of the hypothetical means taken over the posterior distribution).
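The posterior and predictive formulas above can be evaluated numerically when no closed form is available. The sketch below assumes a Poisson likelihood with a Gamma(2, scale 1) prior and made-up data (all illustrative choices), computes the posterior on a grid, and checks the Bayesian premium against the conjugate closed form tabulated in Section 16.

```python
import numpy as np
from scipy.stats import poisson, gamma

x = np.array([1, 0, 2, 1, 3])                  # observed x_1, ..., x_n
theta = np.linspace(1e-4, 20, 8000)            # grid over the risk parameter
dtheta = theta[1] - theta[0]
prior = gamma.pdf(theta, a=2.0, scale=1.0)     # pi(theta)

# likelihood prod_j f(x_j | theta) evaluated on the grid
like = poisson.pmf(x[:, None], theta[None, :]).prod(axis=0)

joint = like * prior                           # f(x, theta)
marginal = (joint * dtheta).sum()              # f(x)
posterior = joint / marginal                   # pi(theta | x)

# Bayesian premium E[X_{n+1} | x] = integral of mu(theta) * posterior d(theta),
# with mu(theta) = theta for the Poisson model.
print("numerical Bayesian premium:", (theta * posterior * dtheta).sum())

# Closed-form check for this conjugate pair (see the table in Section 16):
n, a, b = len(x), 2.0, 1.0
print("closed-form value:         ", (x.sum() + a) * b / (n * b + 1))
```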

7 The Credibility Premium

We would like to approximate $\mu_{n+1}(\Theta)$ by a linear function of the past data. That is, we choose $\alpha_0, \alpha_1, \ldots, \alpha_n$ to minimize the squared error loss
$$Q = E\Big\{\Big[\mu_{n+1}(\Theta) - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big]^2\Big\}.$$
The minimizers satisfy
$$E[X_{n+1}] = E[\mu_{n+1}(\Theta)] = \hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j\, E[X_j], \qquad (1)$$
and, for $i = 1, 2, \ldots, n$, we have $E[\mu_{n+1}(\Theta)\,X_i] = E[X_{n+1}X_i]$ and
$$\mathrm{Cov}(X_i, X_{n+1}) = \sum_{j=1}^n \hat\alpha_j\,\mathrm{Cov}(X_i, X_j). \qquad (2)$$
$\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j$ is called the credibility premium. Equation (1) and the $n$ equations (2) together are called the normal equations, which can be expressed in matrix form as
$$\begin{pmatrix} \mu_{n+1} \\ \sigma^2_{1,n+1} \\ \vdots \\ \sigma^2_{n,n+1} \end{pmatrix} = \begin{pmatrix} 1 & \mu_1 & \mu_2 & \cdots & \mu_n \\ 0 & \sigma^2_{1,1} & \sigma^2_{1,2} & \cdots & \sigma^2_{1,n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & \sigma^2_{n,1} & \sigma^2_{n,2} & \cdots & \sigma^2_{n,n} \end{pmatrix} \begin{pmatrix} \hat\alpha_0 \\ \hat\alpha_1 \\ \vdots \\ \hat\alpha_n \end{pmatrix},$$
where $\mu_j = E[X_j]$ and $\sigma^2_{i,j} = \mathrm{Cov}(X_i, X_j)$, $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, n+1$.

Note that the values $\hat\alpha_0, \hat\alpha_1, \ldots, \hat\alpha_n$ also minimize
$$Q_1 = E\Big\{\Big[E(X_{n+1}\mid\vec X) - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big]^2\Big\} \quad\text{and}\quad Q_2 = E\Big\{\Big[X_{n+1} - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big]^2\Big\}.$$
That is, the credibility premium $\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j$ is the best linear estimator of each of the hypothetical mean $\mu_{n+1}(\Theta) = E[X_{n+1}\mid\Theta]$, the Bayesian premium $E[X_{n+1}\mid\vec X]$, and $X_{n+1}$ (all of $\mu_{n+1}(\Theta)$, $E[X_{n+1}\mid\vec X]$ and $X_{n+1}$ have the same expectation).

Special case: Suppose $\mu_j = E[X_j] = \mu$, $\sigma^2_{j,j} = \mathrm{Var}(X_j) = \sigma^2$, and $\sigma^2_{i,j} = \mathrm{Cov}(X_i, X_j) = \rho\sigma^2$ for $i \neq j$, where the correlation coefficient $\rho$ satisfies $-1 < \rho < 1$. Then
$$\hat\alpha_0 = \frac{(1-\rho)\,\mu}{1 - \rho + n\rho} \qquad\text{and}\qquad \hat\alpha_j = \frac{\rho}{1 - \rho + n\rho},\quad j = 1, \ldots, n.$$
The credibility premium is then
$$\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j = \frac{1-\rho}{1 - \rho + n\rho}\,\mu + \frac{\rho}{1 - \rho + n\rho}\sum_{j=1}^n X_j = (1 - Z)\,\mu + Z\,\bar X,$$
where $Z = n\rho/(1 - \rho + n\rho)$. Thus, if $0 < \rho < 1$, then $0 < Z < 1$ and the credibility premium is a weighted average of the sample mean $\bar X$ and the pure premium $\mu_{n+1} = E[X_{n+1}] = \mu$.
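The normal equations can be solved numerically for any covariance structure. The following sketch builds the system for the exchangeable special case above (the values of $n$, $\mu$, $\sigma^2$, $\rho$ are arbitrary illustrative inputs) and confirms that the solved coefficients match $\hat\alpha_0 = (1-Z)\mu$ and $\hat\alpha_j = Z/n$.

```python
import numpy as np

n, mu, sigma2, rho = 5, 100.0, 400.0, 0.3

# Build the (n+1) x (n+1) normal-equation system from Section 7's matrix form.
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0], A[0, 1:] = 1.0, mu                        # equation (1)
cov = sigma2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))
A[1:, 1:] = cov                                    # equations (2)
b[0] = mu                                          # E[X_{n+1}] = mu
b[1:] = rho * sigma2                               # Cov(X_i, X_{n+1})

alpha = np.linalg.solve(A, b)
Z = n * rho / (1 - rho + n * rho)
print("alpha_0 from normal equations:", alpha[0], " vs (1 - Z) * mu:", (1 - Z) * mu)
print("alpha_j from normal equations:", alpha[1], " vs Z / n:       ", Z / n)
```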

8 The Loss Functions

For a loss function $L(\Theta, \hat\Theta)$, the estimator $\hat\Theta$ that minimizes $E_\Theta[L(\Theta,\hat\Theta)]$ (respectively $E_{\Theta\mid\vec X}[L(\Theta,\hat\Theta)\mid\vec X]$) is as follows:

- Squared error loss $L(\Theta,\hat\Theta) = (\Theta - \hat\Theta)^2$: the mean, $E_\Theta[\Theta]$ (respectively $E_{\Theta\mid\vec X}[\Theta\mid\vec X]$).
- Absolute loss $L(\Theta,\hat\Theta) = |\Theta - \hat\Theta|$: the median, $\Pi_\Theta^{-1}(1/2)$ (respectively $\Pi_{\Theta\mid\vec X}^{-1}(1/2)$).
- Zero-one loss $L(\Theta,\hat\Theta) = c$ if $\hat\Theta \neq \Theta$ and $0$ if $\hat\Theta = \Theta$: the mode, $\arg\max_\theta P_\Theta[\Theta = \theta]$ (respectively $\arg\max_\theta P_{\Theta\mid\vec X}[\Theta = \theta\mid\vec X]$).

9 The Parametric Bühlmann Model

Assume $X_1\mid\Theta, X_2\mid\Theta, \ldots, X_n\mid\Theta$ are independent and identically distributed. Denote the hypothetical mean $\mu(\theta) = E[X_j\mid\Theta=\theta]$; the process variance $v(\theta) = \mathrm{Var}[X_j\mid\Theta=\theta]$; the expected value of the hypothetical means (the collective premium) $\mu = E[\mu(\Theta)]$; the expected value of the process variances $v = E[v(\Theta)] = E[\mathrm{Var}(X_j\mid\Theta)]$; and the variance of the hypothetical means $a = \mathrm{Var}[\mu(\Theta)] = \mathrm{Var}[E(X_j\mid\Theta)]$. Then
$$E[X_j] = E[E(X_j\mid\Theta)] = E[\mu(\Theta)] = \mu,$$
$$\mathrm{Var}[X_j] = E[\mathrm{Var}(X_j\mid\Theta)] + \mathrm{Var}[E(X_j\mid\Theta)] = E[v(\Theta)] + \mathrm{Var}[\mu(\Theta)] = v + a,$$
for $j = 1, 2, \ldots, n$. Also, for $i \neq j$, $\mathrm{Cov}[X_i, X_j] = \mathrm{Var}[\mu(\Theta)] = a$. From the special case of Section 7, we have $\sigma^2 = v + a$ and $\rho = a/(v+a)$. The credibility premium is
$$\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j = Z\,\bar X + (1 - Z)\,\mu,$$
a linear function of $\bar X$ with slope $Z$ and intercept $(1-Z)\mu$, where
$$Z = \frac{n\rho}{1 - \rho + n\rho} = \frac{n}{n + v/a} = \frac{n}{n + k}$$
is called the Bühlmann credibility factor, and
$$k = \frac{v}{a} = \frac{E[v(\Theta)]}{\mathrm{Var}[\mu(\Theta)]} = \frac{E[\mathrm{Var}(X_j\mid\Theta)]}{\mathrm{Var}[E(X_j\mid\Theta)]}.$$
Note that $Z$ is increasing in $n$ and in $a = \mathrm{Var}[\mu(\Theta)] = \mathrm{Var}[E(X_j\mid\Theta)]$, but decreasing in $k$ and in $v = E[v(\Theta)] = E[\mathrm{Var}(X_j\mid\Theta)]$.
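As a parametric illustration of these structural quantities, the sketch below assumes $X_j\mid\Theta \sim \text{Poisson}(\Theta)$ with $\Theta \sim \text{Gamma}(\alpha, \text{scale }\beta)$ (an illustrative choice; this pair reappears in Section 16), computes $\mu$, $v$, $a$, $k$ and $Z$, and checks $\mathrm{Cov}(X_i, X_j) = a$ and $\mathrm{Var}(X_j) = v + a$ by simulation.

```python
import numpy as np

alpha, beta, n = 3.0, 0.5, 4         # illustrative prior parameters and years of data

mu = alpha * beta                    # E[mu(Theta)] = E[Theta]
v = alpha * beta                     # E[v(Theta)]  = E[Theta]  (Poisson: variance = mean)
a = alpha * beta ** 2                # Var[mu(Theta)] = Var[Theta]
k = v / a
Z = n / (n + k)
print(f"mu={mu}, v={v}, a={a}, k={k}, Z={Z:.3f}")

# Monte Carlo check: Cov(X_i, X_j) = a for i != j and Var(X_j) = v + a.
rng = np.random.default_rng(1)
theta = rng.gamma(alpha, beta, size=1_000_000)
x1, x2 = rng.poisson(theta), rng.poisson(theta)
print("Cov(X_1, X_2) ~", np.cov(x1, x2)[0, 1], " vs a =", a)
print("Var(X_1)      ~", x1.var(), " vs v + a =", v + a)
```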

Theorem: Suppose that (1) in a single period of observation there are $n$ independent trials $X_1, \ldots, X_n$, each with probability distribution function $F_{X\mid\Theta}(x\mid\theta)$, and (2) in each of $n$ periods of observation there is a single trial $X_i$, $i = 1, 2, \ldots, n$, where the $X_i$'s are independent and identically distributed with probability distribution function $F_{X\mid\Theta}(x\mid\theta)$. Then $Z_1 = Z_2$ (the credibility factor of case (1) equals the credibility factor of case (2)) and $k_2 = n\,k_1$. Indeed, since $\mathrm{Var}(X_1+\cdots+X_n\mid\Theta) = n\,\mathrm{Var}(X_1\mid\Theta)$ and $E(X_1+\cdots+X_n\mid\Theta) = n\,E(X_1\mid\Theta)$,
$$Z_1 = \frac{1}{1 + k_1} = \frac{1}{1 + \dfrac{E_\Theta[\mathrm{Var}(X_1+\cdots+X_n\mid\Theta)]}{\mathrm{Var}_\Theta[E(X_1+\cdots+X_n\mid\Theta)]}} = \frac{n}{n + \dfrac{E_\Theta[\mathrm{Var}(X_1\mid\Theta)]}{\mathrm{Var}_\Theta[E(X_1\mid\Theta)]}} = \frac{n}{n + k_2} = Z_2.$$

Note that the mode is not necessarily unique.

10 The Non-parametric Bühlmann Model

Given $n$ policy years of experience data on $r$ group policyholders, $n \ge 2$ and $r \ge 2$, let $X_{i,j}$ denote the random variable representing the aggregate loss amount of the $i$-th policyholder during the $j$-th policy year, for $i = 1, \ldots, r$ and $j = 1, \ldots, n, n+1$. We would like to estimate $E[X_{i,n+1}\mid X_{i,1}, \ldots, X_{i,n}]$ for $i = 1, \ldots, r$. Let $\vec X_i = (X_{i,1}, \ldots, X_{i,n})$ denote the random vector of aggregate claim amounts for the $i$-th policyholder, $i = 1, \ldots, r$. Furthermore, we assume (1) $\vec X_1, \ldots, \vec X_r$ are independent; (2) for $i = 1, \ldots, r$, the distribution of each element $X_{i,j}$ ($j = 1, \ldots, n$) of $\vec X_i$ depends on an (unknown) risk parameter $\Theta_i = \theta_i$; (3) $\Theta_1, \ldots, \Theta_r$ are independent and identically distributed random variables; (4) given $i$, $X_{i,1}\mid\Theta_i, \ldots, X_{i,n}\mid\Theta_i$ are independent; and (5) each combination of policy year and policyholder has an equal number of underlying exposure units.

For $i = 1, \ldots, r$ and $j = 1, \ldots, n$, define $\mu(\theta_i) = E[X_{i,j}\mid\Theta_i = \theta_i]$, $v(\theta_i) = \mathrm{Var}[X_{i,j}\mid\Theta_i = \theta_i]$, $\mu = E[\mu(\Theta_i)]$ the expected value of the hypothetical means, $a = \mathrm{Var}[\mu(\Theta_i)] = \mathrm{Var}[E(X_{i,j}\mid\Theta_i)]$ the variance of the hypothetical means, and $v = E[v(\Theta_i)] = E[\mathrm{Var}(X_{i,j}\mid\Theta_i)]$ the expected value of the process variances. The Bühlmann estimate of $E[X_{i,n+1}\mid X_{i,1}, \ldots, X_{i,n}]$ for the $i$-th policyholder and the $(n+1)$-th policy year is
$$\hat Z\,\bar X_i + (1 - \hat Z)\,\hat\mu, \qquad i = 1, \ldots, r,$$
where $\hat Z = n/(n + \hat k)$ and $\hat k = \hat v/\hat a$.

For $i = 1, \ldots, r$, with $\vec X_i = (X_{i,1}, X_{i,2}, \ldots, X_{i,n})$,
$$\bar X_i = \frac{1}{n}\sum_{j=1}^n X_{i,j}, \qquad \hat v_i = \frac{1}{n-1}\sum_{j=1}^n (X_{i,j} - \bar X_i)^2,$$
and
$$\hat\mu = \bar X = \frac{1}{r}\sum_{i=1}^r \bar X_i, \qquad \hat v = \frac{1}{r}\sum_{i=1}^r \hat v_i = \frac{1}{r(n-1)}\sum_{i=1}^r\sum_{j=1}^n (X_{i,j} - \bar X_i)^2, \qquad \hat a = \frac{1}{r-1}\sum_{i=1}^r (\bar X_i - \bar X)^2 - \frac{\hat v}{n}.$$
Note that (1) $\hat Z$ and $(1-\hat Z)\hat\mu$ are independent of $i$; (2) it is possible that $\hat a$ is negative because of the subtraction. When that happens, it is customary to set $\hat a = \hat Z = 0$, and the Bühlmann estimate becomes $\hat\mu = \bar X$. A numerical sketch of these estimators is given below.
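The following Python sketch implements the non-parametric estimators above on a small made-up loss array ($r = 3$ policyholders, $n = 4$ years) and produces the Bühlmann estimates for the next policy year.

```python
import numpy as np

# Made-up aggregate losses X[i, j] for r = 3 policyholders over n = 4 years.
X = np.array([[625., 675., 600., 700.],
              [750., 800., 650., 800.],
              [900., 700., 850., 950.]])
r, n = X.shape

xbar_i = X.mean(axis=1)                               # policyholder means
mu_hat = xbar_i.mean()                                # overall mean
v_hat = ((X - xbar_i[:, None]) ** 2).sum() / (r * (n - 1))
a_hat = ((xbar_i - mu_hat) ** 2).sum() / (r - 1) - v_hat / n
a_hat = max(a_hat, 0.0)                               # set to 0 if negative

if a_hat == 0.0:
    premiums = np.full(r, mu_hat)                     # Z = 0: charge mu_hat
else:
    k_hat = v_hat / a_hat
    Z = n / (n + k_hat)
    premiums = Z * xbar_i + (1 - Z) * mu_hat          # Buhlmann estimates

print("mu_hat:", mu_hat, " v_hat:", v_hat, " a_hat:", a_hat)
print("next-year estimates per policyholder:", premiums)
```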

11 The Die-Spinner Model

$$X_k\mid(A_i\cap B_j) = I_k\,S_k\mid(A_i\cap B_j) = (I_k\mid A_i)\,(S_k\mid B_j),$$
where the frequency of claims satisfies $I_k\mid A_i \sim \text{Bernoulli}(p_i)$ and $S_k\mid B_j$ is the severity of claims, with $P[A_i\cap B_j] = P[A_i]\,P[B_j]$.

Bayesian Estimate.
$$P[\vec X = \vec x \mid A_i\cap B_j] \;\overset{\text{ind.}}{=}\; \prod_{k=1}^n P[X_k = x_k\mid A_i\cap B_j].$$
For $x_k > 0$, $P[X_k = x_k\mid A_i\cap B_j] = P[I_k = 1\mid A_i]\,P[S_k = x_k\mid B_j] = p_i\,P[S_k = x_k\mid B_j]$, and $P[X_k = 0\mid A_i\cap B_j] = P[I_k = 0\mid A_i] = 1 - p_i$.
$$P[A_i\cap B_j\mid\vec X = \vec x] = \frac{P[\vec X = \vec x\mid A_i\cap B_j]\,P[A_i\cap B_j]}{P[\vec X = \vec x]}, \qquad P[\vec X = \vec x] = \sum_{i,j} P[\vec X = \vec x\mid A_i\cap B_j]\,P[A_i\cap B_j].$$
$$P[X_{n+1} = x_{n+1}\mid\vec X = \vec x] = \sum_{i,j} P[X_{n+1} = x_{n+1}\mid A_i\cap B_j]\,P[A_i\cap B_j\mid\vec X = \vec x].$$
$$E[X_k\mid A_i\cap B_j] = E[I_k\mid A_i]\,E[S_k\mid B_j] = p_i\,E[S_k\mid B_j].$$
$$E[X_{n+1}\mid\vec X = \vec x] = \sum_k m_k\,P[X_{n+1} = m_k\mid\vec X = \vec x] = \sum_{i,j} E[X_{n+1}\mid A_i\cap B_j]\,P[A_i\cap B_j\mid\vec X = \vec x].$$

Credibility Estimate (combined).
$$\mu = E[X_k] = E[E(X_k\mid\Theta)] = \sum_{i,j} E[X_k\mid A_i\cap B_j]\,P[A_i\cap B_j],$$
$$v = E\{\mathrm{Var}[X_k\mid\Theta]\} = \sum_{i,j} \mathrm{Var}[X_k\mid A_i\cap B_j]\,P[A_i\cap B_j],$$
where
$$\mathrm{Var}[X_k\mid A_i\cap B_j] = \mathrm{Var}[I_kS_k\mid A_i\cap B_j] = E[I_k^2\mid A_i]\,E[S_k^2\mid B_j] - \{E[I_k\mid A_i]\,E[S_k\mid B_j]\}^2$$
$$= E[I_k\mid A_i]\,\mathrm{Var}[S_k\mid B_j] + \mathrm{Var}[I_k\mid A_i]\,\{E[S_k\mid B_j]\}^2 = p_i\,\mathrm{Var}[S_k\mid B_j] + p_i(1-p_i)\,\{E[S_k\mid B_j]\}^2;$$
$$a = \mathrm{Var}\{E[X_k\mid\Theta]\} = E\{[E(X_k\mid\Theta)]^2\} - \{E[E(X_k\mid\Theta)]\}^2 = E\{[E(X_k\mid\Theta)]^2\} - \{E[X_k]\}^2$$
$$= \sum_{i,j}\{E[X_k\mid A_i\cap B_j] - E[X_k]\}^2\,P[A_i\cap B_j] = \sum_{i,j}\{E[X_k\mid A_i\cap B_j] - \mu\}^2\,P[A_i\cap B_j].$$
$$P_C = Z\,\bar X + (1 - Z)\,\mu, \quad\text{where } Z = n/(n+k) \text{ and } k = v/a.$$

Credibility Estimate (separated).
Frequency:
$$\mu_F = E[I_k] = E[E(I_k\mid\Theta_A)] = \sum_i E[I_k\mid A_i]\,P[A_i] = \sum_i p_i\,P[A_i],$$
$$v_F = E\{\mathrm{Var}[I_k\mid\Theta_A]\} = \sum_i \mathrm{Var}[I_k\mid A_i]\,P[A_i] = \sum_i p_i(1-p_i)\,P[A_i],$$
$$a_F = \mathrm{Var}\{E[I_k\mid\Theta_A]\} = E\{[E(I_k\mid\Theta_A)]^2\} - \{E[I_k]\}^2 = \sum_i \{E[I_k\mid A_i] - \mu_F\}^2\,P[A_i] = \sum_i [p_i - \mu_F]^2\,P[A_i],$$
$$P_F = Z_F\,\bar I + (1 - Z_F)\,\mu_F, \quad\text{where } Z_F = n/(n+k_F) \text{ and } k_F = v_F/a_F.$$
Severity:
$$\mu_S = E[S_k] = E[E(S_k\mid\Theta_B)] = \sum_j E[S_k\mid B_j]\,P[B_j], \qquad v_S = E\{\mathrm{Var}[S_k\mid\Theta_B]\} = \sum_j \mathrm{Var}[S_k\mid B_j]\,P[B_j],$$
$$a_S = \mathrm{Var}\{E[S_k\mid\Theta_B]\} = E\{[E(S_k\mid\Theta_B)]^2\} - \{E[S_k]\}^2 = \sum_j \{E[S_k\mid B_j] - \mu_S\}^2\,P[B_j],$$
$$P_S = Z_S\,\bar S + (1 - Z_S)\,\mu_S, \quad\text{where } Z_S = n_S/(n_S+k_S),\ k_S = v_S/a_S, \text{ and } n_S \text{ is the number of non-zero claims.}$$
$$P_C \approx P_F \cdot P_S.$$
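A small numerical die-spinner example may help. In the sketch below the two "dice" (claim probabilities), the two "spinners" (severity distributions), the prior probabilities, and the observed history are all invented inputs used to evaluate the Bayesian premium formulas above.

```python
import numpy as np
from itertools import product

p = {1: 0.3, 2: 0.6}                         # P[claim] for die A_1, A_2
sev = {1: {1: 0.7, 2: 0.3},                  # P[S = s | B_1]
       2: {1: 0.2, 2: 0.8}}                  # P[S = s | B_2]
prior = {(i, j): 0.25 for i, j in product((1, 2), (1, 2))}   # P[A_i and B_j]
x = [0, 2, 0]                                # observed X_1, ..., X_n

def lik(i, j):
    """P[X = x | A_i, B_j]: 1 - p_i for no claim, p_i * P[S = x_k | B_j] otherwise."""
    out = 1.0
    for xk in x:
        out *= (1 - p[i]) if xk == 0 else p[i] * sev[j].get(xk, 0.0)
    return out

joint = {ij: lik(*ij) * prior[ij] for ij in prior}        # numerator terms
px = sum(joint.values())                                  # P[X = x]
post = {ij: joint[ij] / px for ij in joint}               # P[A_i, B_j | X = x]

# Bayesian premium: sum of p_i * E[S | B_j] weighted by the posterior.
mean_sev = {j: sum(s * q for s, q in sev[j].items()) for j in sev}
premium = sum(post[(i, j)] * p[i] * mean_sev[j] for (i, j) in post)
print("posterior:", post)
print("Bayesian premium E[X_{n+1} | X = x]:", premium)
```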

12 The Parametric Bühlmann-Straub Model

Assume $X_1\mid\Theta, X_2\mid\Theta, \ldots, X_n\mid\Theta$ are independently distributed with a common hypothetical mean but different process variances:
$$\mu(\theta) = E[X_j\mid\Theta=\theta], \qquad \mathrm{Var}[X_j\mid\Theta=\theta] = \frac{v(\theta)}{m_j},$$
where $m_j$ is a known constant measuring exposure, $j = 1, \ldots, n$ ($m_j$ could be the number of months the policy was in force in past year $j$, or the number of individuals in the group in past year $j$, or the amount of premium income for the policy in past year $j$). Let the expected value of the hypothetical means (the collective premium) be $\mu = E[\mu(\Theta)]$, the expected value of the process variances be $v = E[v(\Theta)]$ (so that $E[\mathrm{Var}(X_j\mid\Theta)] = E[v(\Theta)]/m_j = v/m_j$), and the variance of the hypothetical means be $a = \mathrm{Var}[\mu(\Theta)] = \mathrm{Var}[E(X_j\mid\Theta)]$. Then
$$E[X_j] = E[E(X_j\mid\Theta)] = E[\mu(\Theta)] = \mu$$
and
$$\mathrm{Var}[X_j] = E[\mathrm{Var}(X_j\mid\Theta)] + \mathrm{Var}[E(X_j\mid\Theta)] = \frac{E[v(\Theta)]}{m_j} + \mathrm{Var}[\mu(\Theta)] = \frac{v}{m_j} + a,$$
for $j = 1, 2, \ldots, n$. Also, for $i \neq j$, $\mathrm{Cov}[X_i, X_j] = a$.

The credibility premium $\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j$ minimizes
$$Q = E\Big\{\Big[\mu_{n+1}(\Theta) - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big]^2\Big\},$$
where, with $m = \sum_{j=1}^n m_j$ and $k = v/a$,
$$\hat\alpha_0 = \frac{\mu}{1 + a\,m/v} = \frac{v/a}{m + v/a}\,\mu = \frac{k}{m+k}\,\mu \qquad\text{and}\qquad \hat\alpha_j = \frac{a}{v}\,\frac{\hat\alpha_0}{\mu}\,m_j = \frac{m_j}{m + v/a}.$$
The credibility premium is then
$$\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j = \frac{k}{m+k}\,\mu + \frac{m}{m+k}\,\frac{\sum_{j=1}^n m_j X_j}{m} = (1 - Z)\,\mu + Z\,\bar X,$$
where $Z = m/(m+k)$ and $\bar X = \big(\sum_{j=1}^n m_j X_j\big)\big/\big(\sum_{j=1}^n m_j\big)$. Note that if $X_j$ is the average loss per individual for the $m_j$ group members in year $j$, then $m_j X_j$ is the total loss for the $m_j$ group members in year $j$, and $\bar X$ is the overall average loss per group member over the $n$ years. The credibility premium to be charged to the group in year $n+1$ would be $m_{n+1}\,[Z\,\bar X + (1-Z)\,\mu]$ for $m_{n+1}$ members.

For the single observation $\bar X$, the process variance is
$$\mathrm{Var}[\bar X\mid\theta] = \mathrm{Var}\Big[\frac{\sum_{j} m_j X_j}{m}\;\Big|\;\theta\Big] = \frac{1}{m^2}\sum_{j=1}^n m_j^2\,\mathrm{Var}[X_j\mid\theta] = \frac{1}{m^2}\sum_{j=1}^n m_j^2\,\frac{v(\theta)}{m_j} = \frac{v(\theta)}{m},$$
and the expected process variance is $E[\mathrm{Var}(\bar X\mid\Theta)] = E[v(\Theta)]/m = v/m$. The hypothetical mean is
$$E[\bar X\mid\theta] = \frac{1}{m}\sum_{j=1}^n m_j\,E[X_j\mid\theta] = \mu(\theta),$$
the variance of the hypothetical means is $\mathrm{Var}[E(\bar X\mid\Theta)] = \mathrm{Var}[\mu(\Theta)] = a$, and the expected value of the hypothetical means is $E[E(\bar X\mid\Theta)] = E[\mu(\Theta)] = \mu$. Therefore the credibility factor is
$$Z = \frac{1}{1 + \dfrac{v}{am}} = \frac{m}{m + v/a}.$$

13 The Non-parametric Bühlmann-Straub Model

Let $r$ group policyholders be such that the $i$-th policyholder has $n_i$ years of experience data, $i = 1, 2, \ldots, r$ ($r \ge 2$). Let $m_{i,j}$ denote the number of exposure units and $X_{i,j}$ the random variable representing the average claim amount per exposure unit of the $i$-th policyholder during the $j$-th policy year, for $j = 1, \ldots, n_i, n_i+1$ ($n_i \ge 2$) and $i = 1, \ldots, r$. We would like to estimate $E[X_{i,n_i+1}\mid X_{i,1}, \ldots, X_{i,n_i}]$ for $i = 1, \ldots, r$. Let $\vec X_i = (X_{i,1}, \ldots, X_{i,n_i})$ denote the random vector of average claim amounts and $\vec m_i = (m_{i,1}, \ldots, m_{i,n_i})$ the vector of the numbers of exposure units for the $i$-th policyholder, $i = 1, \ldots, r$. Furthermore, we assume (1) $\vec X_1, \ldots, \vec X_r$ are independent; (2) for $i = 1, \ldots, r$, the distribution of each element $X_{i,j}$ ($j = 1, \ldots, n_i$) of $\vec X_i$ depends on an (unknown) risk parameter $\Theta_i = \theta_i$; (3) $\Theta_1, \ldots, \Theta_r$ are independent and identically distributed random variables; and (4) given $i$, $X_{i,1}\mid\Theta_i, \ldots, X_{i,n_i}\mid\Theta_i$ are independent. For $i = 1, \ldots, r$ and $j = 1, \ldots, n_i$ ($n_i \ge 2$), define
$$E[X_{i,j}\mid\Theta_i=\theta_i] = \mu(\theta_i) \qquad\text{and}\qquad \mathrm{Var}[X_{i,j}\mid\Theta_i=\theta_i] = \frac{v(\theta_i)}{m_{i,j}}.$$

Let $\mu = E[\mu(\Theta_i)]$ denote the expected value of the hypothetical means, $a = \mathrm{Var}[\mu(\Theta_i)] = \mathrm{Var}[E(X_{i,j}\mid\Theta_i)]$ the variance of the hypothetical means, and $v = E[v(\Theta_i)] = m_{i,j}\,E[\mathrm{Var}(X_{i,j}\mid\Theta_i)]$ the expected value of the process variances.

For $i = 1, \ldots, r$, define
$$m_i = \sum_{j=1}^{n_i} m_{i,j}, \qquad \bar X_i = \frac{1}{m_i}\sum_{j=1}^{n_i} m_{i,j}\,X_{i,j}, \qquad \hat v_i = \frac{1}{n_i - 1}\sum_{j=1}^{n_i} m_{i,j}\,(X_{i,j} - \bar X_i)^2,$$
and
$$m = \sum_{i=1}^r m_i, \qquad \hat\mu = \bar X = \frac{1}{m}\sum_{i=1}^r m_i\,\bar X_i, \qquad \hat v = \frac{\sum_{i=1}^r (n_i - 1)\,\hat v_i}{\sum_{i=1}^r (n_i - 1)} = \frac{\sum_{i=1}^r \sum_{j=1}^{n_i} m_{i,j}\,(X_{i,j} - \bar X_i)^2}{\sum_{i=1}^r (n_i - 1)},$$
$$\hat a = \frac{\sum_{i=1}^r m_i\,(\bar X_i - \bar X)^2 - (r-1)\,\hat v}{m - \dfrac{1}{m}\sum_{i=1}^r m_i^2}.$$
In this case, $\bar X_i = \frac{1}{m_i}\sum_{j=1}^{n_i} m_{i,j} X_{i,j}$ is the average claim amount per exposure unit and $m_i = \sum_{j=1}^{n_i} m_{i,j}$ is the total number of exposure units for the $i$-th policyholder during the first $n_i$ policy years, $i = 1, \ldots, r$, and
$$\bar X = \frac{1}{m}\sum_{i=1}^r m_i\,\bar X_i = \frac{1}{m}\sum_{i=1}^r \sum_{j=1}^{n_i} m_{i,j}\,X_{i,j}$$
is the overall past average claim amount per exposure unit of the $r$ policyholders, where $m = \sum_{i=1}^r m_i = \sum_{i=1}^r \sum_{j=1}^{n_i} m_{i,j}$ is the total number of exposure units. The Bühlmann-Straub estimate of $E[X_{i,n_i+1}\mid X_{i,1}, \ldots, X_{i,n_i}]$ for the $i$-th policyholder and the $(n_i+1)$-th policy year is
$$\hat Z_i\,\bar X_i + (1 - \hat Z_i)\,\hat\mu, \qquad i = 1, \ldots, r,$$
where $\hat Z_i = m_i/(m_i + \hat k)$ and $\hat k = \hat v/\hat a$, and the credibility premium to cover all $m_{i,n_i+1}$ exposure units for policyholder $i$ in the $(n_i+1)$-th policy year is $m_{i,n_i+1}$ times this estimate.

Note that (1) $\hat Z_i$ depends on $i$; (2) it is possible that $\hat a$ is negative because of the subtraction; when that happens, it is customary to set $\hat a = \hat Z_i = 0$, and the Bühlmann-Straub estimate becomes $\hat\mu = \bar X$; (3) if $m_{i,j} = 1$ and $n_i = n$ for all $i$ and $j$, then $m_i = n$, $m = rn$, and the ordinary Bühlmann estimators are recovered.

The method that preserves total losses (TPTL): the total losses on all policyholders are $TL = \sum_{i=1}^r m_i\,\bar X_i$, and the total premium is $TP = \sum_{i=1}^r m_i\,[\hat Z_i\,\bar X_i + (1 - \hat Z_i)\,\hat\mu]$. Setting $TP = TL$ gives
$$\tilde\mu = \sum_{i=1}^r \frac{\hat Z_i}{\sum_{j=1}^r \hat Z_j}\,\bar X_i,$$
which is a credibility-factor-weighted average of the $\bar X_i$'s with weights $w_i = \hat Z_i\big/\sum_{j=1}^r \hat Z_j$, $i = 1, \ldots, r$. Compare the alternative $\tilde\mu$ with the original
$$\hat\mu = \bar X = \sum_{i=1}^r \frac{m_i}{\sum_{j=1}^r m_j}\,\bar X_i,$$
which is an exposure-unit-weighted average of the $\bar X_i$'s with weights $w_i = m_i\big/\sum_{j=1}^r m_j$, $i = 1, \ldots, r$. Note that (1) one still uses $\bar X = \sum_i m_i\bar X_i\big/\sum_j m_j$ in $\hat a$, and hence in $\hat k$, when computing $\tilde\mu$; (2) the difference between the two credibility premiums for policyholder $i$, based on the two estimators of $\mu$, is $(1 - \hat Z_i)\,(\tilde\mu - \hat\mu)$.

The above analysis assumes that the parameters $\mu$, $v$ and $a$ are all unknown and need to be estimated. If $\mu$ is known, then $\hat v$ given above can still be used to estimate $v$, as it is unbiased whether or not $\mu$ is known, and an alternative and simpler unbiased estimator of $a$ is
$$\tilde a = \sum_{i=1}^r \frac{m_i}{m}\,(\bar X_i - \mu)^2 - \frac{r}{m}\,\hat v.$$
If there are data on only one policyholder (say policyholder $i$), $\tilde a$ with $r = 1$ becomes
$$\tilde a_i = \frac{m_i}{m_i}\,(\bar X_i - \mu)^2 - \frac{1}{m_i}\,\hat v_i = (\bar X_i - \mu)^2 - \frac{\hat v_i}{m_i} = (\bar X_i - \mu)^2 - \frac{\sum_{j=1}^{n_i} m_{i,j}\,(X_{i,j} - \bar X_i)^2}{m_i\,(n_i - 1)}.$$
A numerical sketch of the non-parametric Bühlmann-Straub estimators follows.
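The sketch below implements the non-parametric Bühlmann-Straub estimators on made-up data with unequal exposures and unequal numbers of years ($r = 2$ policyholders).

```python
import numpy as np

# Made-up per-year average losses X and exposures M for two policyholders.
X = [np.array([10.0, 12.0, 11.0]),        # policyholder 1: n_1 = 3 years
     np.array([15.0, 13.0])]              # policyholder 2: n_2 = 2 years
M = [np.array([40.0, 50.0, 60.0]),        # matching exposure units m_{i,j}
     np.array([25.0, 30.0])]

m_i = np.array([m.sum() for m in M])                       # exposure per policyholder
xbar_i = np.array([(m * x).sum() / m.sum() for x, m in zip(X, M)])
m = m_i.sum()
mu_hat = (m_i * xbar_i).sum() / m                          # overall mean

num_v = sum((mi * (xi - xb) ** 2).sum() for xi, mi, xb in zip(X, M, xbar_i))
den_v = sum(len(xi) - 1 for xi in X)
v_hat = num_v / den_v

r = len(X)
a_hat = ((m_i * (xbar_i - mu_hat) ** 2).sum() - (r - 1) * v_hat) / (m - (m_i ** 2).sum() / m)
a_hat = max(a_hat, 0.0)

if a_hat == 0.0:
    est = np.full(r, mu_hat)
else:
    Z_i = m_i / (m_i + v_hat / a_hat)
    est = Z_i * xbar_i + (1 - Z_i) * mu_hat
print("mu_hat:", mu_hat, " v_hat:", v_hat, " a_hat:", a_hat)
print("per-exposure-unit estimates for next year:", est)
```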

14 Semi-parametric Estimation

Assume the number of claims $N_{i,j} = m_{i,j}X_{i,j}$ for policyholder $i$ in year $j$ is Poisson distributed with mean $m_{i,j}\theta_i$ given $\Theta_i = \theta_i$, that is, $m_{i,j}X_{i,j}\mid\Theta_i = \theta_i \sim \text{Poisson}(m_{i,j}\theta_i)$. Then
$$E[m_{i,j}X_{i,j}\mid\Theta_i] = \mathrm{Var}[m_{i,j}X_{i,j}\mid\Theta_i] = m_{i,j}\Theta_i,$$
or
$$\mu(\Theta_i) = E[X_{i,j}\mid\Theta_i] = \Theta_i \qquad\text{and}\qquad v(\Theta_i) = m_{i,j}\,\mathrm{Var}[X_{i,j}\mid\Theta_i] = \Theta_i.$$
Therefore $\mu = E[\mu(\Theta_i)] = E[\Theta_i] = E[v(\Theta_i)] = v$. In this case, we could use $\hat\mu = \bar X$ to estimate $v$.

Example: Assume a (conditional) Poisson distribution for the number of claims per policyholder, and estimate the Bühlmann credibility premium for the number of claims next year, given the data: number of claims $0, 1, 2, \ldots, k$ with corresponding total numbers of insureds $r_0, r_1, r_2, \ldots, r_k$, where $r_0 + r_1 + \cdots + r_k = r$.

Assume that we have $r$ policyholders, with $n_i = 1$ and $m_{i,j} = 1$ for $i = 1, \ldots, r$. Since $X_{i,1}\mid\Theta_i \sim \text{Poisson}(\Theta_i)$, we have $E[X_{i,1}\mid\Theta_i] = \mathrm{Var}[X_{i,1}\mid\Theta_i] = \Theta_i$, so $\mu(\Theta_i) = v(\Theta_i) = \Theta_i$ and $\mu = E[\mu(\Theta_i)] = E[\Theta_i] = E[v(\Theta_i)] = v$. Moreover,
$$\hat\mu = \bar X = \frac{1}{r}\sum_{i=1}^r X_{i,1} = \frac{1}{r}\sum_{j=0}^k j\,r_j.$$
Since
$$\mathrm{Var}[X_{i,1}] = \mathrm{Var}[E(X_{i,1}\mid\Theta_i)] + E[\mathrm{Var}(X_{i,1}\mid\Theta_i)] = \mathrm{Var}[\mu(\Theta_i)] + E[v(\Theta_i)] = a + v = a + \mu$$
and $E[X_{i,1}] = E[E(X_{i,1}\mid\Theta_i)] = E[\mu(\Theta_i)] = \mu$, it follows that $X_{1,1}, X_{2,1}, \ldots, X_{r,1}$ are independent random variables with common mean $\mu$ and variance $a + v = a + \mu$. Since the sample variance of $X_{1,1}, X_{2,1}, \ldots, X_{r,1}$ is an unbiased estimator of $\mathrm{Var}[X_{i,1}] = a + v = a + \mu$, we have
$$\frac{1}{r-1}\sum_{i=1}^r [X_{i,1} - \bar X]^2 = \widehat{\mathrm{Var}}[X_{i,1}] = \hat a + \hat v = \hat a + \hat\mu, \qquad\text{or}\qquad \hat a = \frac{1}{r-1}\sum_{i=1}^r [X_{i,1} - \bar X]^2 - \hat\mu.$$
Then $\hat k = \hat v/\hat a = \hat\mu/\hat a$, $\hat Z = 1/(1 + \hat k)$, and the Bühlmann credibility premium for the number of claims $X_{i,1}$ next year is
$$\hat X_{i,2} = \hat Z\,X_{i,1} + (1 - \hat Z)\,\hat\mu, \qquad\text{for } X_{i,1} = 0, 1, \ldots, k.$$
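The example above translates directly into code. In the sketch below the claim-count table (how many insureds had 0, 1, 2, ... claims last year) is invented; the script computes $\hat\mu$, $\hat a$, $\hat Z$ and the credibility estimate for each observed claim count.

```python
import numpy as np

counts = {0: 900, 1: 80, 2: 15, 3: 5}        # k -> number of insureds r_k (made up)
r = sum(counts.values())

# Expand to one observation X_{i,1} per insured and estimate mu, v, a.
x = np.repeat(list(counts.keys()), list(counts.values())).astype(float)
mu_hat = x.mean()                            # also the estimate of v (Poisson)
a_hat = max(x.var(ddof=1) - mu_hat, 0.0)     # sample variance minus mu_hat

if a_hat == 0.0:
    Z_hat = 0.0
else:
    Z_hat = 1.0 / (1.0 + mu_hat / a_hat)     # n = 1 year of data per insured

print(f"mu_hat = v_hat = {mu_hat:.4f}, a_hat = {a_hat:.4f}, Z = {Z_hat:.4f}")
for k in counts:                             # premium for an insured with k claims
    print(f"claims last year = {k}: next-year estimate = {Z_hat * k + (1 - Z_hat) * mu_hat:.4f}")
```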

15 Parametric Estimator

Assume that, given $i$, the $X_{i,j}\mid\Theta_i$ are identically and independently distributed with probability density function $f_{X_{i,j}\mid\Theta_i}(x_{i,j}\mid\theta_i)$, and that $\Theta_1, \Theta_2, \ldots, \Theta_r$ are also identically and independently distributed with probability density function $\pi_\Theta(\theta)$. Let $\vec X_i = (X_{i,1}, X_{i,2}, \ldots, X_{i,n_i})$; then the unconditional joint density of $\vec X_i$ is
$$f_{\vec X_i}(\vec x_i) = \int f_{\vec X_i,\Theta_i}(\vec x_i, \theta_i)\,d\theta_i = \int f_{\vec X_i\mid\Theta_i}(\vec x_i\mid\theta_i)\,\pi_{\Theta_i}(\theta_i)\,d\theta_i \;\overset{\text{ind.}}{=}\; \int \Big[\prod_{j=1}^{n_i} f_{X_{i,j}\mid\Theta_i}(x_{i,j}\mid\theta_i)\Big]\pi_{\Theta_i}(\theta_i)\,d\theta_i.$$
The likelihood function is given by
$$L = \prod_{i=1}^r f_{\vec X_i}(\vec x_i) = \prod_{i=1}^r \bigg\{\int \Big[\prod_{j=1}^{n_i} f_{X_{i,j}\mid\Theta_i}(x_{i,j}\mid\theta_i)\Big]\pi_{\Theta_i}(\theta_i)\,d\theta_i\bigg\}.$$
Maximum likelihood estimators of the associated parameters are chosen to maximize $L$ or $\log L$.
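As one concrete instance of this marginal likelihood, the sketch below assumes the Poisson($\Theta_i$)/Gamma($\alpha$, scale $\beta$) pair, for which the integral over $\theta_i$ has a closed form, and maximizes the resulting log-likelihood on simulated data; the model choice, data, and starting values are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(2)
true_alpha, true_beta, r, n = 2.0, 1.5, 400, 3
theta = rng.gamma(true_alpha, true_beta, size=r)          # one Theta_i per policyholder
counts = rng.poisson(theta[:, None], size=(r, n))         # n years of counts each
s = counts.sum(axis=1)                                    # total claims per policyholder

def neg_loglik(params):
    alpha, beta = np.exp(params)                          # keep parameters positive
    # log of the integral over theta_i of prod_j Poisson(x_ij | theta_i) times the
    # Gamma prior, dropping sum_j log(x_ij!) which does not involve the parameters
    ll = (gammaln(s + alpha) - gammaln(alpha) - alpha * np.log(beta)
          - (s + alpha) * np.log(n + 1.0 / beta))
    return -ll.sum()

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE of (alpha, beta):", np.exp(fit.x))
```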

16 Exact Credibility

When the Bayesian premium equals the credibility premium, we say the credibility is exact. Recall that the solutions of the normal equations, $\hat\alpha_0, \hat\alpha_1, \ldots, \hat\alpha_n$, yield the credibility premium $\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j$, which minimizes
$$Q = E\Big\{\Big[\mu_{n+1}(\Theta) - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big]^2\Big\},$$
$$Q_1 = E\Big\{\Big[E(X_{n+1}\mid\vec X) - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big]^2\Big\}, \quad\text{and}\quad Q_2 = E\Big\{\Big[X_{n+1} - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big]^2\Big\}.$$
If the Bayesian premium $E(X_{n+1}\mid\vec X)$ is a linear function of $X_1, X_2, \ldots, X_n$ (in general, it is NOT), that is, $E(X_{n+1}\mid\vec X) = a_0 + \sum_{j=1}^n a_j X_j$, then $\hat\alpha_j = a_j$ for $j = 0, 1, \ldots, n$, and therefore $Q_1 = 0$. Thus the credibility premium $\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j = a_0 + \sum_{j=1}^n a_j X_j = E(X_{n+1}\mid\vec X)$, and the credibility is exact. In summary, the Bühlmann estimator $\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j$ is the best linear approximation to the Bayesian estimate $E(X_{n+1}\mid\vec X)$ under the squared error loss function $Q_1$.

Recall that in linear regression, $Y_i = \alpha_0 + \sum_{j=1}^n \alpha_j X_j + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma^2)$, for $i = 1, 2, \ldots, m$, the coefficients $\hat\alpha_0, \hat\alpha_1, \ldots, \hat\alpha_n$ are chosen to minimize
$$Q = E\Big[\sum_{i=1}^m \epsilon_i^2\Big] = E\Big[\sum_{i=1}^m \Big(Y_i - \alpha_0 - \sum_{j=1}^n \alpha_j X_j\Big)^2\Big].$$
Therefore the regression line $\hat Y_i = \hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j$ corresponds to the credibility premium $\hat\alpha_0 + \sum_{j=1}^n \hat\alpha_j X_j$, and the observation $Y_i$ corresponds to the Bayesian premium $E[X_{n+1}\mid\vec X = \vec x_i]$, $i = 1, 2, \ldots, m$.

Condition for exact credibility. Suppose that $X_j\mid\Theta = \theta$ is independently distributed and is from the linear exponential family ("linear" means the power of the exponential is a linear function of $x_j$) with probability function
$$f_{X_j\mid\Theta}(x_j\mid\theta) = \frac{p(x_j)\,e^{\theta x_j}}{q(\theta)},$$
for $j = 1, 2, \ldots, n+1$, and that $\Theta$ has probability function
$$\pi_\Theta(\theta) = \frac{[q(\theta)]^{-k}\,e^{\mu k\theta}}{c(\mu, k)}, \qquad \theta \in (\theta_0, \theta_1),$$
with $\pi_\Theta(\theta_0) = \pi_\Theta(\theta_1) = 0$ and $\theta_0 < \theta_1$. Then $\mu = E_\Theta[\mu(\Theta)] = E[E(X_j\mid\Theta)] = E[X_j]$, and the posterior distribution is
$$\pi_{\Theta\mid\vec X}(\theta\mid\vec x) = \frac{[q(\theta)]^{-k_*}\,e^{\mu_* k_*\theta}}{c(\mu_*, k_*)}, \qquad \theta \in (\theta_0, \theta_1),$$
where $k_* = n + k$ and
$$\mu_* = \frac{n\,\bar X + \mu\,k}{n + k} = \frac{n}{n+k}\,\bar X + \frac{k}{n+k}\,\mu.$$
The Bayesian premium is
$$E[X_{n+1}\mid\vec X] = E_{\Theta\mid\vec X}[\mu_{n+1}(\Theta)] = \int_{\theta_0}^{\theta_1} \mu(\theta)\,\pi_{\Theta\mid\vec X}(\theta\mid\vec x)\,d\theta = \mu_* = Z\,\bar X + (1 - Z)\,\mu,$$
where $Z = n/(n+k)$ and
$$k = \frac{v}{a} = \frac{E[\mathrm{Var}(X_j\mid\Theta)]}{\mathrm{Var}[E(X_j\mid\Theta)]}.$$
Note that the prior distribution $\pi_\Theta(\theta)$ is a conjugate prior distribution (a prior distribution is a conjugate prior distribution if the resulting posterior distribution is of the same type as the prior one, but perhaps with different parameters).

Theorem: If $f_{X_j\mid\Theta}(x_j\mid\theta)$ is a member of a linear exponential family, and the prior distribution $\pi_\Theta(\theta)$ is the conjugate prior distribution, then the Bühlmann credibility estimator is equal to the Bayesian estimator (i.e., exact credibility), assuming a squared error loss function.

For the conditional distribution $X_j\mid\Theta \sim$ Bernoulli($\Theta$) with prior $\Theta \sim$ Beta($\alpha,\beta$), and for $X_j\mid\Theta \sim$ Poisson($\Theta$) with prior $\Theta \sim$ Gamma($\alpha,\beta$) (scale $\beta$), the quantities are (Bernoulli-Beta / Poisson-Gamma):

- $\mu(\Theta) = E[X_j\mid\Theta]$: $\Theta$ / $\Theta$
- $\mu = E[E(X_j\mid\Theta)]$: $\dfrac{\alpha}{\alpha+\beta}$ / $\alpha\beta$
- $v(\Theta) = \mathrm{Var}[X_j\mid\Theta]$: $\Theta(1-\Theta)$ / $\Theta$
- $v = E[\mathrm{Var}(X_j\mid\Theta)]$: $\dfrac{\alpha\beta}{(\alpha+\beta+1)(\alpha+\beta)}$ / $\alpha\beta$
- $a = \mathrm{Var}[E(X_j\mid\Theta)]$: $\dfrac{\alpha\beta}{(\alpha+\beta+1)(\alpha+\beta)^2}$ / $\alpha\beta^2$
- $k = v/a$: $\alpha+\beta$ / $1/\beta$
- $Z = \dfrac{n}{n+k}$: $\dfrac{n}{n+\alpha+\beta}$ / $\dfrac{n\beta}{n\beta+1}$
- credibility premium $Z\bar X + (1-Z)\mu$: $Z\bar X + (1-Z)\dfrac{\alpha}{\alpha+\beta}$ / $Z\bar X + (1-Z)\alpha\beta$
- posterior distribution $\Theta\mid\vec X$: Beta$\Big(\sum X_i+\alpha,\ n-\sum X_i+\beta\Big)$ / Gamma$\Big(\sum X_i+\alpha,\ \dfrac{\beta}{n\beta+1}\Big)$
- posterior mean $E[\Theta\mid\vec X]$: $\dfrac{\sum X_i+\alpha}{n+\alpha+\beta}$ / $\dfrac{(\sum X_i+\alpha)\,\beta}{n\beta+1}$
- predictive distribution $X_{n+1}\mid\vec X$: Bernoulli$\Big(\dfrac{\sum X_i+\alpha}{n+\alpha+\beta}\Big)$ / NB$\Big(\sum X_i+\alpha,\ \dfrac{\beta}{n\beta+1}\Big)$
- predictive mean $E[X_{n+1}\mid\vec X]$: $\dfrac{\sum X_i+\alpha}{n+\alpha+\beta}$ / $\dfrac{(\sum X_i+\alpha)\,\beta}{n\beta+1}$

Note that the predictive mean (the Bayesian premium) $E[X_{n+1}\mid\vec X]$ equals the posterior mean $E[\Theta\mid\vec X]$ ($= E[\mu_{n+1}(\Theta)\mid\vec X]$, since $\mu_{n+1}(\Theta) = E[X_{n+1}\mid\Theta] = \Theta$), which equals the credibility premium $Z\bar X + (1-Z)\mu$.

In some situations, we may want to work with the logarithm of the data ($W_j = \log X_j$, $j = 1, 2, \ldots, n$). Then $\mu_{\log}(\Theta) = E[W_j\mid\Theta] = E[\log X_j\mid\Theta]$, $v_{\log}(\Theta) = \mathrm{Var}[W_j\mid\Theta] = \mathrm{Var}[\log X_j\mid\Theta]$, $\mu_{\log} = E[\mu_{\log}(\Theta)] = E[W_j] = E[\log X_j]$, $v_{\log} = E[v_{\log}(\Theta)] = E[\mathrm{Var}(W_j\mid\Theta)] = E[\mathrm{Var}(\log X_j\mid\Theta)]$, $a_{\log} = \mathrm{Var}[\mu_{\log}(\Theta)] = \mathrm{Var}[E(W_j\mid\Theta)] = \mathrm{Var}[E(\log X_j\mid\Theta)]$, and $Z_{\log} = n/(n + v_{\log}/a_{\log})$. Thus
$$\log C = Z_{\log}\,\bar W + (1 - Z_{\log})\,\mu_{\log}, \qquad\text{or}\qquad C = e^{Z_{\log}\bar W + (1-Z_{\log})\mu_{\log}},$$
which is denoted by $C_{\log}$ and is not unbiased, because we use linear credibility to estimate the mean of the distribution of the logarithms. Recall that $\hat C = Z\bar X + (1-Z)\mu$, where $\mu = E[\mu(\Theta)] = E[E(X_j\mid\Theta)] = E[X_j]$, so $E[\hat C] = Z\,E[\bar X] + (1-Z)\,E[X_j] = \mu = E[X_j]$. To make $C_{\log}$ unbiased, let $C_{\log} = c\,e^{Z_{\log}\bar W + (1-Z_{\log})\mu_{\log}}$, where $c$ is determined by
$$E[e^{W_j}] = E[X_j] = E[C_{\log}] = c\,E\big[e^{Z_{\log}\bar W + (1-Z_{\log})\mu_{\log}}\big].$$
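A quick numerical confirmation of exact credibility for the Poisson-Gamma row of the list above: the sketch below (with arbitrary $\alpha$, $\beta$ and made-up data) shows that the Bühlmann credibility premium and the Bayesian (predictive mean) premium coincide.

```python
import numpy as np

alpha, beta = 3.0, 0.4                     # Gamma prior, scale parametrization
x = np.array([0, 2, 1, 1, 3])              # observed claim counts x_1, ..., x_n
n, xbar = len(x), x.mean()

# Buhlmann credibility premium: mu = alpha * beta, k = 1/beta, Z = n/(n + k).
mu, k = alpha * beta, 1.0 / beta
Z = n / (n + k)
credibility = Z * xbar + (1 - Z) * mu

# Bayesian premium: posterior is Gamma(sum(x) + alpha, beta/(n*beta + 1)),
# and the predictive mean equals the posterior mean.
bayesian = (x.sum() + alpha) * beta / (n * beta + 1)

print("credibility premium:", credibility)
print("Bayesian premium:   ", bayesian)    # identical: exact credibility
```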
