A HOMOTOPY CLASS OF SEMI-RECURSIVE CHAIN LADDER MODELS
Greg Taylor

Taylor Fry Consulting Actuaries
Level, 55 Clarence Street
Sydney NSW 2000
Australia

Professorial Associate
Centre for Actuarial Studies
Faculty of Economics and Commerce
University of Melbourne
Parkville VIC 3052
Australia

Phone:
Fax:
greg.taylor@taylorfry.com.au

September 2011
Abstract

The chain ladder algorithm is known to produce maximum likelihood estimates of the parameters of certain recursive and non-recursive models. These types of models represent two extremes of dependency within rows of a data array. Whereas observations within a row of a non-recursive model are stochastically independent, each observation of a recursive model is, in expectation, directly proportional to the immediately preceding observation from the same row. The correlation structures of forecasts also differ as between recursive and non-recursive models.

The present paper constructs a family of models that forms a bridge between recursive and non-recursive models and so provides a continuum of intermediate cases in terms of dependency structure. The intermediate models are called semi-recursive.

The statistical inference properties of semi-recursive models are investigated. It is found (Section 5.4) that the chain ladder algorithm is also maximum likelihood for semi-recursive models. Sufficient, and minimally sufficient, statistics are found for the semi-recursive model (Section 6). They are found to be the same as for non-recursive models. The minimally sufficient statistic is complete, leading to minimum variance unbiased estimation (Section 7).

Keywords: chain ladder, correlation, non-recursive model, recursive model, minimally sufficient statistic, minimum variance unbiased estimation, ODP cross-classified model, ODP Mack model, semi-recursive model, sufficient statistic.

1. Introduction

The actuarial literature identifies two families of chain ladder models, categorised by Verrall (2000) as recursive and non-recursive models respectively. Although the model formulations are fundamentally different, both are found to yield the same maximum likelihood estimators of age-to-age factors and the same forecasts of loss reserve. The properties of these models are studied by Taylor (2011a).
Whereas observations within a row of a non-recursive model are stochastically independent, each observation of a recursive model is, in expectation, directly proportional to the immediately preceding observation from the same row. It would be useful to define a family of models that forms a bridge between these two extreme cases of dependency, i.e. where a relation between consecutive observations in a row exists but is less than linear (in expectation). Further, distinct forecasts within a row of a run-off array are known to be correlated differently under recursive and non-recursive models (Taylor, 2011b). It would be useful to define a family of models displaying intermediate correlations.
The purpose of the present paper is to define just such a family and explore its statistical inference properties.

2. Framework and notation

2.1 Claims data

Consider a K x J rectangle of claims observations Y_{kj} with:
- accident periods represented by rows and labelled k = 1, 2, ..., K;
- development periods represented by columns and labelled j = 1, 2, ..., J.

Within the rectangle, identify a development trapezoid of past observations

D = {Y_{kj} : 1 <= k <= K and 1 <= j <= min(J, K - k + 1)}

The complement of this subset, representing future observations, is

D^c = {Y_{kj} : 1 <= k <= K and min(J, K - k + 1) < j <= J}

Also let D^+ = D ∪ D^c.

In general, the problem is to predict D^c on the basis of observed D. The usual case in the literature (though often not in practice) is that in which K = J, so that the trapezoid becomes a triangle. The more general trapezoid will be retained throughout the present paper.

Define the cumulative row sums

X_{kj} = Σ_{i=1}^{j} Y_{ki}    (2.1)

and the full row and column sums (or horizontal and vertical sums)

H_k = Σ_{j=1}^{min(J, K-k+1)} Y_{kj},  V_j = Σ_{k=1}^{K-j+1} Y_{kj}    (2.2)

Also define, for k = K - J + 2, ..., K,

R_k = Σ_{j=K-k+2}^{J} Y_{kj} = X_{kJ} - X_{k,K-k+1}    (2.3)
R = Σ_{k=K-J+2}^{K} R_k    (2.4)

Note that R is the sum of the (future) observations in D^c. It will be referred to as the total amount of outstanding losses. Likewise, R_k denotes the amount of outstanding losses in respect of accident period k. The objective stated earlier is to forecast the R_k and R.

Let Σ_R^{(k)} denote summation over the entire row k of D, i.e. Σ_{j=1}^{min(J, K-k+1)} for fixed k. Similarly, let Σ_C^{(j)} denote summation over the entire column j of D, i.e. Σ_{k=1}^{K-j+1} for fixed j. For example, (2.2) may be expressed as V_j = Σ_C^{(j)} Y_{kj}. Finally, let Σ_T denote summation over the entire trapezoid of (k, j) cells, i.e.

Σ_T A_{kj} = Σ_{k=1}^{K} Σ_{j=1}^{min(J, K-k+1)} A_{kj} = Σ_k Σ_R^{(k)} A_{kj} = Σ_j Σ_C^{(j)} A_{kj}

for any array A = {A_{kj} : Y_{kj} ∈ D}. For a random variable A_{kj}, Y_{kj} ∈ D, A will denote the entire array of the A_{kj}. The first column of A will be denoted by A_1.

2.2 Families of distributions

2.2.1 Exponential dispersion family

The exponential dispersion family (EDF) (Nelder & Wedderburn, 1972) consists of those variables Y with log-likelihoods of the form

ℓ(y; θ, φ) = [yθ - b(θ)]/a(φ) + c(y, φ)    (2.5)

for parameters θ (canonical parameter) and φ (scale parameter) and suitable functions a, b and c, with a continuous, b differentiable and one-one, and c such as to produce a total probability mass of unity. For Y so distributed,

E[Y] = b'(θ)    (2.6)

Var[Y] = a(φ) b''(θ)    (2.7)
If μ denotes E[Y], then (2.6) establishes a relation between μ and θ, and so (2.7) may be expressed in the form

Var[Y] = a(φ) V(μ)    (2.8)

for some function V, referred to as the variance function. The notation Y ~ EDF(θ, φ; a, b, c) will be used to mean that a random variable Y is subject to the EDF likelihood (2.5).

2.2.2 Tweedie family

The Tweedie family (Tweedie, 1984) is the sub-family of the EDF for which

a(φ) = φ    (2.9)

V(μ) = μ^p, p <= 0 or p >= 1    (2.10)

For this family,

b(θ) = (2 - p)^{-1} [(1 - p)θ]^{(2-p)/(1-p)}    (2.11)

μ = [(1 - p)θ]^{1/(1-p)}    (2.12)

θ = μ^{1-p}/(1 - p)    (2.13)

ℓ(y; μ, φ) = φ^{-1} [y μ^{1-p}/(1 - p) - μ^{2-p}/(2 - p)] + c(y, φ)    (2.14)

The notation Y ~ Tw(μ, φ, p) will be used to mean that a random variable Y is subject to the Tweedie likelihood with parameters μ, φ, p. The abbreviated form Y ~ Tw_p will mean that Y is a member of the sub-family with specific parameter p.

2.2.3 Over-dispersed Poisson family

The over-dispersed Poisson (ODP) family is the Tweedie sub-family with p = 1. The limit of (2.12) as p → 1 gives

E[Y] = exp θ    (2.15)

By (2.8)-(2.10),

Var[Y] = φμ    (2.16)

By (2.14),

ℓ(y; μ, φ) = φ^{-1} [y ln μ - μ] + c(y, φ)    (2.17)

The notation Y ~ ODP(μ, φ) means Y ~ Tw(μ, φ, 1).
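As a numerical aside (not from the paper), the ODP mean and variance above can be checked through the familiar scaled-Poisson construction: if Z ~ Poisson(μ/φ), then Y = φZ satisfies E[Y] = μ and Var[Y] = φμ. A minimal Python sketch, with illustrative values of μ and φ assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, phi = 40.0, 2.5  # illustrative mean and dispersion (assumed values)

# Scaled-Poisson construction of an over-dispersed Poisson variable:
# Y = phi * Z with Z ~ Poisson(mu / phi), so E[Y] = mu and Var[Y] = phi * mu.
Z = rng.poisson(mu / phi, size=1_000_000)
Y = phi * Z

print(Y.mean())  # close to mu = 40
print(Y.var())   # close to phi * mu = 100
```

For non-integer φ this construction places mass on multiples of φ rather than on the integers, but it reproduces the mean-variance relation (over-dispersion by the factor φ) that is all the ODP arguments below rely on.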
3. Chain ladder models

3.1 Heuristic chain ladder

The chain ladder was originally (pre-1975) devised as a heuristic algorithm for forecasting outstanding losses. It had no statistical foundation. The algorithm is as follows.

Define the following factors:

f̂_j = Σ_{k=1}^{K-j} X_{k,j+1} / Σ_{k=1}^{K-j} X_{kj},  j = 1, 2, ..., J - 1    (3.1)

Note that f̂_j can be expressed in the form

f̂_j = Σ_{k=1}^{K-j} w_{kj} X_{k,j+1}/X_{kj}    (3.2)

with

w_{kj} = X_{kj} / Σ_{i=1}^{K-j} X_{ij}    (3.3)

i.e. as a weighted average of the factors X_{k,j+1}/X_{kj} for fixed j. Then define the following forecasts of Y_{kj} ∈ D^c:

Ŷ_{kj} = X_{k,K-k+1} f̂_{K-k+1} f̂_{K-k+2} ... f̂_{j-2} (f̂_{j-1} - 1)    (3.4)

Call these chain ladder forecasts. They yield the additional chain ladder forecasts:

X̂_{kj} = X_{k,K-k+1} f̂_{K-k+1} ... f̂_{j-1}    (3.5)

R̂_k = X̂_{kJ} - X_{k,K-k+1}    (3.6)

R̂ = Σ_{k=K-J+2}^{K} R̂_k    (3.7)

3.2 Recursive models

A recursive model takes the general form

E[X_{k,j+1} | D_{kj}] = some function of D_{kj} and some parameters    (3.8)

where D_{kj} is the data sub-array of D obtained by deleting diagonals from the right side of D until X_{kj} is contained in its right-most diagonal.
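As a concrete illustration (the data are invented for the purpose), the heuristic algorithm above can be sketched in a few lines of Python: compute the age-to-age factors, complete each row by successive multiplication, and difference against the latest observed cumulative to obtain outstanding losses.

```python
import numpy as np

# Hypothetical cumulative triangle X (rows = accident periods,
# columns = development periods); NaN marks unobserved future cells.
X = np.array([
    [100., 160., 190., 200.],
    [110., 175., 210., np.nan],
    [120., 195., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
K = X.shape[0]

# Age-to-age factors: f[j] = sum_k X[k, j+1] / sum_k X[k, j],
# summed over the rows in which both cells are observed.
f = np.empty(K - 1)
for j in range(K - 1):
    rows = slice(0, K - j - 1)
    f[j] = np.nansum(X[rows, j + 1]) / np.nansum(X[rows, j])

# Complete each row by successive multiplication by the factors.
X_hat = X.copy()
for k in range(1, K):
    for j in range(K - k, K):
        X_hat[k, j] = X_hat[k, j - 1] * f[j - 1]

# Outstanding losses per accident period, and in total.
latest = np.array([X[k, K - k - 1] for k in range(K)])
R_k = X_hat[:, -1] - latest
R = R_k.sum()
```

For this triangle the factors come out as 530/330, 400/335 and 200/190 (roughly 1.606, 1.194 and 1.053), and each R_k is the forecast ultimate less the latest observed cumulative, zero for the fully developed first row.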
3.2.1 Mack model

The Mack model (Mack, 1993) is defined by the following assumptions.

(M1) Accident periods are stochastically independent, i.e. Y_{k1,j1}, Y_{k2,j2} are stochastically independent if k1 ≠ k2.

(M2) For each k = 1, 2, ..., K, the X_{kj} (j varying) form a Markov chain.

(M3) For each k = 1, 2, ..., K and j = 1, 2, ..., J - 1,
(a) E[X_{k,j+1} | X_{kj}] = f_j X_{kj} for some parameters f_j > 0; and
(b) Var[X_{k,j+1} | X_{kj}] = σ_j² X_{kj} for some parameters σ_j².

3.2.2 ODP Mack model

Taylor (2011a) defined the over-dispersed Poisson (ODP) Mack model as that satisfying assumptions (M1), (M2) and

(ODPM3) For each k = 1, 2, ..., K and j = 1, 2, ..., J - 1, Y_{k,j+1} | X_{kj} ~ ODP((f_j - 1) X_{kj}, φ_{k,j+1}), where now f_j >= 1.

Assumption (ODPM3) implies (M3a). Moreover, in the special case φ_{kj} = φ_j, independent of k, (ODPM3) also implies (M3b) with σ_j² = φ_{j+1}(f_j - 1).

It is evident that, for this model to be valid, it is necessary that all Y_{kj} >= 0. Note, also, that, under (ODPM3), X_{kj} = 0 implies that X_{k,j+m} = 0 for all m >= 0. This means that, for each k, either Y_{k1} > 0 or X_{kj} = 0 for all j. A summary of these requirements in terms of the data array D is as follows.

(R1) Y_{kj} >= 0 for all Y_{kj} ∈ D

(R2) For each k = 1, 2, ..., K, either:
(a) Y_{k1} > 0; or
(b) Y_{kj} = 0 for all j = 1, ..., min(J, K - k + 1)

A data array satisfying these requirements will be called ODPM-regular.

Assumption (ODPM3) may be expressed in the following form, suitable for GLM implementation of the ODP Mack model:

Y_{k,j+1} | X_{kj} ~ ODP(exp[ln X_{kj} + ln(f_j - 1)], φ / w_{k,j+1})    (3.9)

where

w_{k,j+1} = φ / φ_{k,j+1}    (3.10)

In this form, the GLM of the Y_{k,j+1} has log link, offsets ln X_{kj}, parameters ln(f_j - 1), and weights w_{k,j+1}.

3.3 Non-recursive models

Taylor (2011a) also defined the ODP cross-classified model as that satisfying the following assumptions:

(ODPCC1) The random variables Y_{kj} ∈ D^+ are stochastically independent.

(ODPCC2) For each k = 1, 2, ..., K and j = 1, 2, ..., J,
(a) Y_{kj} ~ ODP(α_k β_j, φ_{kj}) for some parameters α_k, β_j > 0; and
(b) Σ_{j=1}^{J} β_j = 1.

Assumption (ODPCC2a) may be expressed in the following form, suitable for GLM implementation of the ODP cross-classified model:

Y_{kj} ~ ODP(exp[ln α_k + ln β_j], φ / w_{kj})    (3.11)

In this form, the GLM of the Y_{kj} has log link, parameters ln α_k and ln β_j, and weights

w_{kj} = φ / φ_{kj}    (3.12)

Assumption (ODPCC2b) removes one degree of redundancy from the parameter set, and would be reflected by the aliasing of one parameter in the GLM.
3.4 Semi-recursive models

First, a definition of homotopy is given. Let A and B be topological spaces and let f : A → B and g : A → B be continuous. A homotopy is a continuous function H : A x [0, 1] → B such that, for a ∈ A, H(a, 0) = f(a) and H(a, 1) = g(a). The collection of functions {H(·, t) : t ∈ [0, 1]} will be referred to as the homotopy class associated with the homotopy just defined.

Consider a model that satisfies assumptions (M1), (M2) and the following:

(ODPSR3a) For each k = 1, 2, ..., K, and for some λ independent of k and j, subject to 0 <= λ <= 1, Y_{k1} ~ ODP((α_k β_1)^{1-λ}, φ_{k1}) for some parameters α_k > 0, β_1 >= 0.

(ODPSR3b) For each k = 1, 2, ..., K and j = 1, 2, ..., J - 1, Y_{k,j+1} | X_{kj} ~ ODP(α_k^{1-λ} γ_{j+1} X_{kj}^{λ}, φ_{k,j+1}), for the same λ as in (ODPSR3a), and where the γ_j, j = 2, 3, ..., J, are parameters subject to γ_j >= 0. By convention, X_{kj}^{λ} = 1 when λ = 0, and X_{kj}^{λ} = 0 when X_{kj} = 0 and λ > 0.

Such a model will be called ODP semi-recursive. It is valid only for non-negative data arrays D and, in the case λ > 0, for ODPM-regular arrays. It will be assumed henceforth that all D satisfy these requirements.

Assumptions (ODPSR3a-b) may be expressed in the following form, suitable for GLM implementation of the ODP semi-recursive model:

Y_{k1} ~ ODP(exp[(1 - λ) ln α_k + (1 - λ) ln β_1], φ / w_{k1})    (3.13)

Y_{k,j+1} | X_{kj} ~ ODP(exp[(1 - λ) ln α_k + λ ln X_{kj} + ln γ_{j+1}], φ / w_{k,j+1})    (3.14)

with weights

w_{kj} = φ / φ_{kj}    (3.15)

In this formulation, the terms λ ln X_{kj}, known quantities, are offsets, and the ln α_k, ln β_1 and ln γ_{j+1} are unknown parameters requiring estimation.

ODP semi-recursive models are subject to the following representation lemma.
Lemma 3.1. The mean ODP parameter in (ODPSR3b) may be expressed in the form

α_k^{1-λ} γ_{j+1} X_{kj}^{λ} = [α_k β_{j+1}]^{1-λ} [(f_j - 1) X_{kj}]^{λ}    (3.16)

i.e. as a geometric interpolation between the cross-classified mean α_k β_{j+1} and the Mack mean (f_j - 1) X_{kj}, where, for j = 2, 3, ..., J, β_j is the unique non-negative solution of

γ_j = β_j [1 - Σ_{i=j}^{J} β_i]^{-λ}    (3.17)

β_1 is given by the constraint

Σ_{i=1}^{J} β_i = 1    (3.18)

and

f_j = Σ_{i=1}^{j+1} β_i / Σ_{i=1}^{j} β_i,  j = 1, 2, ..., J - 1    (3.19)

The β_j are calculated recursively from (3.17) in the order j = J, J-1, ..., 2. In the event that γ_{j+1} = 0 in (3.16),

β_{j+1} = f_j - 1 = 0    (3.20)

Proof. The uniqueness of β_2, ..., β_J is first proven for given γ_2, γ_3, ..., γ_J. Consider relation (3.17) and note that

d/dβ_j {β_j [1 - Σ_{i=j}^{J} β_i]^{-λ}} > 0    (3.21)

in the case

Σ_{i=j+1}^{J} β_i <= 1    (3.22)

Hence (3.17) has at most one solution in β_j in the case that (3.22) holds. Note also that the right side of (3.17) varies from 0 to ∞ as β_j varies from 0 to 1 - Σ_{i=j+1}^{J} β_i. Hence (3.17) has a non-negative solution in β_j, which is therefore unique.

Note also that the existence of a solution to (3.17) implies that

Σ_{i=j}^{J} β_i <= 1    (3.23)

Thus (3.22) implies (3.23), and so the required recursive calculation of the β_j can proceed over j = J, J-1, ..., 2.
Relation (3.23) holds for j = 2, i.e. Σ_{i=2}^{J} β_i <= 1, so define

β_1 = 1 - Σ_{i=2}^{J} β_i    (3.24)

Then β_1 >= 0, as required by (ODPSR3a), and (3.18) is satisfied.

Substitution of (3.17) and (3.19) into the right side of (3.16) now yields

α_k^{1-λ} X_{kj}^{λ} β_{j+1} [Σ_{i=1}^{j} β_i]^{-λ} = α_k^{1-λ} X_{kj}^{λ} β_{j+1} [1 - Σ_{i=j+1}^{J} β_i]^{-λ}

by (3.18). By (3.17), this is equal to the left side of (3.16), and so (3.16) holds.

Note that the semi-recursive models form a homotopy class. Let A be the set each of whose members a consists of a data array D and a parameter set {α_k, k = 1, 2, ..., K; β_j, j = 1, 2, ..., J}. Define H to be the mapping that sends (a, λ) to the distributions of Y_{k1} and Y_{k,j+1} | X_{kj} defined by (ODPSR3a-b), with the γ_{j+1} and f_j, j = 1, 2, ..., J - 1, given by Lemma 3.1. Let a be a specific member of A and let 0 <= λ <= 1. Convert the range of H to a metric space by imposing the metric

d(H(a^{(1)}, λ^{(1)}), H(a^{(2)}, λ^{(2)})) = Σ_{k=1}^{K} (α_k^{(1)} - α_k^{(2)})² + Σ_{j=1}^{J} (β_j^{(1)} - β_j^{(2)})² + (λ^{(1)} - λ^{(2)})²

where the superscripts (1) and (2) on the right designate the parameters α_k, β_j and λ associated with the respective members (a^{(1)}, λ^{(1)}), (a^{(2)}, λ^{(2)}) of A x [0, 1]. The defined metric is continuous in λ, as required for homotopy. Moreover, it is evident from Lemma 3.1 that H(a, 0) generates an ODP cross-classified model and H(a, 1) an ODP Mack model. The homotopy class includes all the intermediate models between these two.
4. Correlation between observations

4.1 Semi-recursive models

Consider the model defined in Section 3.4, and specifically the conditional covariance Cov[X_{k1,j+m}, X_{k2,j+m+n} | X_{k1,j}, X_{k2,j}] with m > 0, n >= 0. The following lemma is immediate from assumption (M1).

Lemma 4.1. In the semi-recursive model defined in Section 3.4,

Cov[X_{k1,j+m}, X_{k2,j+m+n} | X_{k1,j}, X_{k2,j}] = 0 for k1 ≠ k2

In view of this result, attention will be focused on within-row covariances Cov[X_{k,j+m}, X_{k,j+m+n} | X_{kj}] and correlations Corr[X_{k,j+m}, X_{k,j+m+n} | X_{kj}]. Let the latter be denoted ρ^λ_{j,m,m+n}, with ρ⁰_{j,m,m+n} and ρ¹_{j,m,m+n} representing the boundary cases of the ODP cross-classified and ODP Mack models respectively.

These boundary correlations are evaluated by Taylor (2011b) with the following results:

ρ^λ_{j,m,m+n} = [1 + B^λ_{j,m,m+n}]^{-1/2} for λ = 0, 1    (4.1)

with B⁰_{j,m,m+n} given by (4.2) as a ratio involving the parameters β_i, and B¹_{j,m,m+n} given by (4.3) as a ratio involving products of the factors f_i; the detailed expressions are those of Taylor (2011b).

The properties of ρ⁰_{j,m,m+n} and ρ¹_{j,m,m+n} and the relation between them are discussed by Taylor (2011b). It is established that, while there are distinct similarities between the two, there are also distinct differences. Certainly ρ⁰_{j,m,m+n} and ρ¹_{j,m,m+n} are numerically different. One might therefore wish to formulate a semi-recursive model with correlation structure intermediate between these two cases. It is evident from (3.16) that the homotopy class of semi-recursive models provides a continuum of correlation values between ρ⁰_{j,m,m+n} and ρ¹_{j,m,m+n}.
4.2 Evaluation of semi-recursive correlation structures

However, care may be required in the selection of a semi-recursive model as ρ^λ_{j,m,m+n} does not appear to be related to ρ⁰_{j,m,m+n} and ρ¹_{j,m,m+n} in any simple way.

Consider the evaluation of ρ^λ_{j,m,m+n}. Let c_{j,m,m+n} denote Cov[X_{k,j+m}, X_{k,j+m+n} | X_{kj}]. Then

c_{j,m,m+n} = E[X_{k,j+m} X_{k,j+m+n} | X_{kj}] - E[X_{k,j+m} | X_{kj}] E[X_{k,j+m+n} | X_{kj}]
= E[X_{k,j+m} E[X_{k,j+m+n} | X_{k,j+m}] | X_{kj}] - E[X_{k,j+m} | X_{kj}] E[E[X_{k,j+m+n} | X_{k,j+m}] | X_{kj}]    (4.4)

Difficulty arises in the evaluation of the terms E[X_{k,j+m} | X_{kj}]. In the case of the recursive ODP Mack model, these are evaluated recursively, thus:

E[X_{k,j+m} | X_{kj}] = E[E[X_{k,j+m} | X_{k,j+m-1}] | X_{kj}] = f_{j+m-1} E[X_{k,j+m-1} | X_{kj}], etc.

where (ODPM3) has been used. If, however, the same procedure is attempted for the semi-recursive model, then, by (3.16),

E[X_{k,j+m} | X_{kj}] = E[E[X_{k,j+m} | X_{k,j+m-1}] | X_{kj}] = E[X_{k,j+m-1} | X_{kj}] + α_k^{1-λ} γ_{j+m} E[X_{k,j+m-1}^{λ} | X_{kj}]

and difficulty arises in the evaluation of the last expectation.

4.3 Non-monotonicity of semi-recursive correlations

Care would also be necessary in the selection of semi-recursive correlation structures because, while the relation between ρ⁰_{j,m,m+n} and ρ¹_{j,m,m+n} is known from Theorem 4.4 of Taylor (2011b), it cannot be assumed that ρ^λ_{j,m,m+n} changes monotonically between these extremes. Indeed, my colleague, Hugh Miller, provides the following counter-example.

Example. Consider a semi-recursive model in the representation of Lemma 3.1, with the following parameters: α_1 = 100, β_1 = β_2 = β_3 = β_4 = 1/4, φ = 1 (Poisson case). By (3.19), f_1 = 2, f_2 = 1.5, f_3 = 1.33. Now consider the (unlikely) case of an observed X_{11} far from its expectation E[X_{11}]. Then simulation yields the values of ρ^λ_{1,2,4} for various λ shown in Table 4.1.
Table 4.1
Values of ρ^λ_{1,2,4} for varying λ

The values of ρ for λ = 0, 1 may be verified by the formulas (4.5) and (4.6) (λ = 0) and (4.6) and (4.7) (λ = 1) of Taylor (2011b), but note that ρ does not proceed monotonically between its values at λ = 0 and λ = 1. If a less eccentric value of X_{11} is chosen, say X_{11} = 50, the results are as in Table 4.2.

Table 4.2
Values of ρ^λ_{1,2,4} for varying λ, X_{11} = 50

Evidence of slight sampling error is apparent from a comparison of Tables 4.1 and 4.2 at λ = 1. Nonetheless, monotonicity of ρ^λ_{j,m,m+n} as a function of λ appears to have been achieved in this second example. Note also, by comparison of Tables 4.1 and 4.2 at λ = 0.1, 0.2, that ρ^λ_{j,m,m+n} depends on the observed value of X_{11} for 0 < λ < 1, whereas it is independent of this observation for λ = 0, 1.

More detail on the relation between λ and ρ^λ_{j,m,m+n} might be a fruitful area for future research.
5. Parameter estimation and forecasts

5.1 Recursive models

Consider MLE of parameters in the ODP Mack model defined in Section 3.2.2. The conditional log-likelihood of a single observation in D is (terms extraneous to MLE omitted)

ℓ(Y_{k,j+1} | X_{kj}) = φ_{k,j+1}^{-1} {Y_{k,j+1} ln[(f_j - 1) X_{kj}] - (f_j - 1) X_{kj}}    (5.1)

The conditional log-likelihood of the entire row k of D is

ℓ(Y_{k2}, Y_{k3}, ..., Y_{kJ} | Y_{k1}) = ℓ(Y_{k2} | Y_{k1}) + ℓ(Y_{k3} | Y_{k1}, Y_{k2}) + ... + ℓ(Y_{kJ} | Y_{k1}, ..., Y_{k,J-1})

by assumption (M2). By extension of this argument,

ℓ(Y_{k2}, ..., Y_{kJ} | Y_{k1}) = Σ_{j=1}^{J-1} ℓ(Y_{k,j+1} | X_{kj}) for k <= K - J + 1    (5.2)

The reasoning for k > K - J + 1 is similar but with the upper limit of summation replaced by K - k. Then, by assumption (M1),

ℓ(Y | Y_{11}, ..., Y_{K1}) = Σ_{k=1}^{K} Σ_{j=2}^{min(J, K-k+1)} ℓ(Y_{kj} | X_{k,j-1})    (5.3)

Substitution of (5.1) into (5.3) and differentiation with respect to f_j for a particular value of j yields

∂ℓ/∂f_j = Σ_{k=1}^{K-j} φ_{k,j+1}^{-1} [Y_{k,j+1}/(f_j - 1) - X_{kj}]    (5.4)

Setting this to zero and rearranging gives the following MLE of f_j:

f̂_j = 1 + [Σ_{k=1}^{K-j} Y_{k,j+1}/φ_{k,j+1}] / [Σ_{k=1}^{K-j} X_{kj}/φ_{k,j+1}]    (5.5)

In the special case in which weights are column dependent only,

φ_{kj} = φ_j, independent of k    (5.6)
the estimator (5.5) reduces to the usual chain ladder estimator

f̂_j = Σ_{k=1}^{K-j} X_{k,j+1} / Σ_{k=1}^{K-j} X_{kj}    (5.7)

The forecast of a future (i.e. k + j > K + 1) value of X_{kj} is

X̂^R_{kj} = X_{k,K-k+1} f̂_{K-k+1} f̂_{K-k+2} ... f̂_{j-1},  k = K - J + 2, ..., K    (5.8)

The estimation and forecast algorithm consisting of (5.7) and (5.8) constitutes the chain ladder algorithm described in Section 3.1.

5.2 Non-recursive models

Consider MLE of parameters in the ODP cross-classified model defined in Section 3.3. The log-likelihood of a single observation in D is

ℓ(Y_{kj}) = φ_{kj}^{-1} [Y_{kj} ln(α_k β_j) - α_k β_j]    (5.9)

The log-likelihood for the entire D is

ℓ(Y) = Σ_{k=1}^{K} Σ_{j=1}^{min(J, K-k+1)} ℓ(Y_{kj})    (5.10)

Substitution of (5.9) into (5.10) and differentiation with respect to α_k for a particular value of k yields

∂ℓ/∂α_k = Σ_R^{(k)} φ_{kj}^{-1} [Y_{kj}/α_k - β_j]    (5.11)

Differentiation with respect to β_j yields

∂ℓ/∂β_j = Σ_C^{(j)} φ_{kj}^{-1} [Y_{kj}/β_j - α_k]    (5.12)

Setting (5.11) and (5.12) to zero gives the following MLEs of α_k, β_j:

α̂_k = Σ_R^{(k)} (Y_{kj}/φ_{kj}) / Σ_R^{(k)} (β̂_j/φ_{kj})    (5.13)

β̂_j = Σ_C^{(j)} (Y_{kj}/φ_{kj}) / Σ_C^{(j)} (α̂_k/φ_{kj})    (5.14)

In the special case in which weights are column dependent only, i.e. (5.6) holds, relations (5.13) and (5.14) reduce to the following:

α̂_k = Σ_R^{(k)} (Y_{kj}/φ_j) / Σ_R^{(k)} (β̂_j/φ_j)    (5.13a)

β̂_j = Σ_C^{(j)} Y_{kj} / Σ_C^{(j)} α̂_k    (5.14a)

In the alternative special case in which weights are row dependent only,
φ_{kj} = φ_k, independent of j    (5.15)

relations (5.13) and (5.14) reduce to the following:

α̂_k = Σ_R^{(k)} Y_{kj} / Σ_R^{(k)} β̂_j    (5.13b)

β̂_j = Σ_C^{(j)} (Y_{kj}/φ_k) / Σ_C^{(j)} (α̂_k/φ_k)    (5.14b)

In the even more specialised case in which weights are uniform across all cells, the relations simplify further, as follows:

α̂_k = Σ_R^{(k)} Y_{kj} / Σ_R^{(k)} β̂_j    (5.13c)

β̂_j = Σ_C^{(j)} Y_{kj} / Σ_C^{(j)} α̂_k    (5.14c)

The last case includes the case φ_{kj} = 1, i.e. the ODP distribution reduces to Poisson. This is a case where MLEs have been studied in detail by Hachemeister & Stanard (1975), Renshaw & Verrall (1998) and Taylor (2000), among others, where it is shown that (5.13c) and (5.14c) are equivalent to the chain ladder estimates (5.7) when the f̂_j and β̂_j are related by f̂_j = Σ_{i=1}^{j+1} β̂_i / Σ_{i=1}^{j} β̂_i. It is shown by England & Verrall (2002) that this result continues to hold in the more general case φ_{kj} = φ.

The forecast of a future value of Y_{kj} is

Ŷ^{NR}_{kj} = α̂_k β̂_j,  k + j > K + 1    (5.16)

5.3 Relation between recursive and non-recursive cases

5.3.1 Poisson distribution

Taylor (2000, Chapter 2) studies the ODP cross-classified model subject to φ_{kj} = 1 (i.e. Poisson distribution in each cell of D), with MLEs given by (5.13c) and (5.14c). It is shown there (equation (2.47)) that

Σ_{k=1}^{K-j} X_{k,j+1} / Σ_{k=1}^{K-j} X_{kj} = Σ_{i=1}^{j+1} β̂_i / Σ_{i=1}^{j} β̂_i    (5.17)

Comparison of this result with (5.7) shows that

f̂_j = Σ_{i=1}^{j+1} β̂_i / Σ_{i=1}^{j} β̂_i    (5.18)

establishing the relation between the MLEs of the recursive and non-recursive models.
Verrall (2000, p.93) shows that α̂_k ẑ_j is the MLE of E[X_{kj}], where z_j is defined by

z_j = Σ_{i=1}^{j} β_i    (5.19)

It follows that

α̂_k = X_{k,K-k+1} / Σ_{i=1}^{K-k+1} β̂_i    (5.20)

Substitution of (5.18) and (5.20) into (5.8) yields

X̂^R_{kj} = α̂_k Σ_{i=1}^{j} β̂_i

in which case the forecast of Y_{kj} in the recursive model is

Ŷ^R_{kj} = X̂^R_{kj} - X̂^R_{k,j-1} = α̂_k β̂_j = Ŷ^{NR}_{kj}    (5.21)

by (5.16).

Thus, the recursive (Poisson Mack) and non-recursive (Poisson cross-classified) models produce the same forecasts when parameters are estimated by MLE in the case φ_{kj} = 1. It then follows from Section 5.1 that those forecasts are obtainable from the chain ladder algorithm.

5.3.2 ODP distribution

Now consider the more general case in which φ_{kj} = φ, independent of k and j but not necessarily equal to unity. In both ODP Mack and ODP cross-classified models, Y_{kj} ~ ODP(μ_{kj}, φ) for some mean μ_{kj}. The meaning of this is

Y_{kj}/φ ~ Poiss(μ_{kj}/φ)    (5.22)

Application of (5.22) to (ODPM3) yields

Y_{k,j+1}/φ | X_{kj} ~ Poiss((f_j - 1) X_{kj}/φ)    (5.23)

For any fixed value of φ, this last relation indicates that the MLE of f_j is obtained by application of the Poisson theory of Section 5.3.1 (i.e. with φ = 1) but with each Y_{kj} replaced by Y_{kj}/φ. This leaves the estimator (5.7) unchanged.

On the other hand, application of (5.22) to (ODPCC2) yields
Y_{kj}/φ ~ Poiss(α_k β_j/φ)    (5.24)

This last relation indicates that the MLEs of the α_k/φ and β_j are again obtained by application of the Poisson theory of Section 5.3.1 but with Y_{kj} replaced by Y_{kj}/φ. Equations (5.13c) and (5.14c) are unchanged by these substitutions, indicating that the MLEs and forecasts of the ODP cross-classified model are the same as in the Poisson case. This fact was noted by England & Verrall (2002, p.449).

This leads to the following result.

Lemma 5.1. For a given data array D, the ODP Mack and ODP cross-classified models with dispersion parameters uniform across D (φ_{kj} = φ) generate the same forecasts of D^c on the basis of ML. The ML parameter estimates for the two models are related through (5.18) and (5.20).

Proof. Section 5.3.1 gives the proof for the case φ = 1. The present sub-section shows that all of the forecasts and parameter estimates discussed in Section 5.3.1 are unaffected by a change in φ to a value not equal to unity.

5.4 Semi-recursive models

Consider MLE of parameters in the semi-recursive model defined in Section 3.4. The log-likelihood of the data array is

ℓ(Y) = Σ_{k=1}^{K} ℓ(Y_{k1}) + Σ_{k=1}^{K} Σ_{j=2}^{min(J, K-k+1)} ℓ(Y_{kj} | X_{k,j-1})    (5.25)

by the same argument as led to (5.3). The partial log-likelihood for Y_{k1} can be obtained from assumption (ODPSR3a) and that for Y_{k,j+1} from Lemma 3.1. These give

ℓ(Y_{k1}) = (1 - λ) ℓ^{NR}(Y_{k1})    (5.26)

ℓ(Y_{k,j+1} | X_{kj}) = λ ℓ^{R}(Y_{k,j+1} | X_{kj}) + (1 - λ) ℓ^{NR}(Y_{k,j+1})    (5.27)

where ℓ^R denotes a log-likelihood within the recursive model of Section 5.1 and ℓ^NR within the non-recursive model of Section 5.2. Substitution of (5.26) and (5.27) into (5.25) gives

ℓ(Y) = λ ℓ^{R}(Y) + (1 - λ) ℓ^{NR}(Y)    (5.28)
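The forecast equivalence of the ODP Mack and ODP cross-classified models established above can be checked numerically. In the sketch below (illustrative data, uniform dispersion), the Poisson cross-classified MLEs are obtained by iterative proportional fitting of the marginal-total (ML) equations of Section 5.2, and the resulting forecast of total outstanding losses is compared with the chain ladder forecast:

```python
import numpy as np

# Hypothetical incremental trapezoid Y; NaN marks future cells.
Y = np.array([
    [100., 60., 30., 10.],
    [110., 70., 35., np.nan],
    [120., 80., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
K = Y.shape[0]
obs = ~np.isnan(Y)

# Solve the Poisson cross-classified ML (marginal-total) equations by
# iterative proportional fitting: fitted row and column sums over the
# observed cells must match the observed row and column sums.
alpha = np.nansum(Y, axis=1)          # starting row effects
beta = np.ones(K)                     # starting column effects
for _ in range(500):
    fit = np.where(obs, np.outer(alpha, beta), 0.0)
    beta *= np.nansum(Y, axis=0) / fit.sum(axis=0)
    fit = np.where(obs, np.outer(alpha, beta), 0.0)
    alpha *= np.nansum(Y, axis=1) / fit.sum(axis=1)

# Cross-classified forecast of total outstanding losses.
R_cc = np.outer(alpha, beta)[~obs].sum()

# Chain ladder forecast of the same total, via cumulative sums.
X = np.nancumsum(Y, axis=1)           # latest cumulative persists past the diagonal
f = [np.nansum(X[: K - j - 1, j + 1]) / np.nansum(X[: K - j - 1, j])
     for j in range(K - 1)]
R_cl = 0.0
for k in range(1, K):
    x = X[k, K - k - 1]
    for j in range(K - k - 1, K - 1):
        x *= f[j]
    R_cl += x - X[k, K - k - 1]

print(R_cc, R_cl)  # the two totals agree
```

Iterative proportional fitting is used here purely as a convenient solver for the marginal-total equations; any other ML fitting method (e.g. a log-link Poisson GLM) would give the same fitted values.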
by (5.3) and (5.10).

It is shown in Section 5.1 that the chain ladder estimates (5.7) of the parameters f_j set the derivatives of the log-likelihood component ℓ^R to zero in the case of column dependent dispersion parameters (5.6). Likewise, it is shown in Section 5.2 that the chain ladder estimates (5.7) set the derivatives of the log-likelihood component ℓ^NR to zero in the case of uniform dispersion parameters when the f̂_j and β̂_j are related as in (5.18). It follows that the chain ladder estimates (5.7) of the parameters f_j set the derivatives of the log-likelihood (5.28) to zero under the same conditions. These results may be summarised as follows.

Theorem 5.2. Suppose that the data array D is subject to a semi-recursive model as represented in Lemma 3.1 with φ_{kj} = φ, independent of k and j. Then
(a) the MLEs of its parameters f_j are obtained by treating the data array D as if subject to the (recursive) ODP Mack model;
(b) the MLEs of parameters α_k, β_j are obtained by treating the data array as if subject to the (non-recursive) ODP cross-classified model;
(c) these parameter estimates are related by (5.18) and (5.20), and the ODP Mack, ODP cross-classified, and semi-recursive forecasts of any particular future Y_{kj} are all identical. The forecasts are obtainable by application of the chain ladder algorithm.

The theorem shows that the chain ladder algorithm provides ML parameter estimates and forecasts for the entire homotopy class of semi-recursive models defined in Section 3.4.

6. Sufficient statistics

The following results are special cases of more general results appearing in Taylor (2011a).

Lemma 6.1.
(a) For an ODP Mack model, Σ_C^{(j+1)} Y_{k,j+1} is a sufficient statistic for f_j.
(b) For an ODP cross-classified model, Σ_R^{(k)} Y_{kj} is a sufficient statistic for α_k, and Σ_C^{(j)} Y_{kj} is a sufficient statistic for β_j.
(c) In case (b), the sufficient statistic for the full parameter set {α_k, β_j} consists of the row sums and column sums. This sufficient statistic is not minimal.
A minimal sufficient statistic is obtained by deletion of an arbitrary single component.
Proof.
(a) See Theorem 5.1 of Taylor (2011a).
(b) See Theorem 5.2 of Taylor (2011a).
(c) See Theorem 5.3 of Taylor (2011a).

Remark. The minimal statistic defined in part (c) of the lemma is not unique. For full detail on the construction of alternative minimal sufficient statistics, see Theorem 5.3 of Taylor (2011a).

Theorem 6.2. For the semi-recursive model defined in Section 3.4,
(a) The vector s = (Σ_R^{(1)} Y_{1j}, ..., Σ_R^{(K)} Y_{Kj}, Σ_C^{(1)} Y_{k1}, ..., Σ_C^{(J)} Y_{kJ}), i.e. the vector of row and column sums (H_1, ..., H_K, V_1, ..., V_J), is a sufficient statistic for the parameter set {f_1, ..., f_{J-1}, α_1, ..., α_K, β_1, ..., β_J}.
(b) A minimal sufficient statistic can be obtained by the deletion of an arbitrary single component of s. This statistic is complete.

Proof.
(a) Recall the form (5.28) for the log-likelihood ℓ(Y). Theorem 5.1 of Taylor (2011a) shows that ℓ^R satisfies Fisher-Neyman factorisation with respect to the parameter set {f_1, ..., f_{J-1}} and the statistic s. Similarly, ℓ^NR with respect to the set {α_1, ..., α_K, β_1, ..., β_J}. Thus ℓ(Y) satisfies Fisher-Neyman factorisation with respect to the entire parameter set {f_1, ..., f_{J-1}, α_1, ..., α_K, β_1, ..., β_J}, and it follows that s is a sufficient statistic for that parameter set.

(b) Let the components of s be denoted s_1, ..., s_K, s_{K+1}, ..., s_{K+J}. By the relations at the end of Section 2.1,

s_1 + ... + s_K = s_{K+1} + ... + s_{K+J}    (6.1)

whence any component of s can be expressed in terms of the other components. This means that s_min, obtained from s by the deletion of an arbitrary component, contains the same information as s and is therefore sufficient for {f_1, ..., f_{J-1}, α_1, ..., α_K, β_1, ..., β_J}.

Now note that this last parameter set can be reduced in dimension. By (3.19), each f_j may be expressed in terms of β_1, ..., β_J, and so {f_1, ..., f_{J-1}} may be expressed in terms of {β_1, ..., β_J}. Further, by (3.17), this last set may be reduced to {γ_2, ..., γ_J}. Thus the parameter set for the semi-recursive model may be taken as {α_1, ..., α_K, γ_2, ..., γ_J}, of dimension K + J - 1.
Now s_min is of the same dimension and, for a regression model such as the semi-recursive model, with error terms distributed as a member of the EDF, this equality of dimensions is a necessary and sufficient condition for a sufficient statistic to be complete (Cox & Hinkley, 1974, p.3).
Finally, since s_min is a complete sufficient statistic, it is immediately minimally sufficient (Lehmann & Casella, 1998).

7. Minimum variance estimation

The Mack model is known to generate unbiased MLEs of loss reserve (Mack, 1993). The ODP Mack model with column dependent dispersion parameters contains the same expectations and leads to the same MLEs (5.7) and (5.8). The same is not true, however, of the ODP cross-classified model. Since the semi-recursive model is a mixture of these two, one can expect that its MLEs will, in general, be biased. However, any bias can be corrected as follows.

Let Z : D^c → R be some predictand and let Ẑ : D → R be a predictor of Z. Define

Z* = Ẑ E[Z]/E[Ẑ]    (7.1)

Then

E[Z*] = E[Z]    (7.2)

and so Z* is a bias corrected form of the predictor Ẑ.

Theorem 7.1. Let D be subject to the semi-recursive model of Section 3.4, with φ_{kj} = φ, constant. Then the bias corrected chain ladder estimates X*_{kj}, R*_k and R* (derived from X̂_{kj}, R̂_k and R̂ defined by (3.5)-(3.7) respectively) are minimum variance unbiased estimators (MVUEs) of E[X_{kj}], E[R_k] and E[R].

Proof. Lemma 4.3 of Taylor (2011a) shows that the estimators X̂_{kj}, R̂_k and R̂ are MLEs for the ODP cross-classified model with φ_{kj} = φ. The result also appears in England & Verrall (2002).

These estimators are expressible in terms of the statistic s defined in Theorem 6.2. It is apparent from the proof of that theorem that s is expressible in terms of s_min which, by the same theorem, is a complete sufficient (in fact, minimally sufficient) statistic for the semi-recursive model parameter set {f_1, ..., f_{J-1}, α_1, ..., α_K, β_1, ..., β_J}. Thus, X*_{kj}, R*_k and R* are unbiased estimators that are functions of a complete sufficient statistic. By the Lehmann-Scheffé theorem, they are MVUEs.
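The single linear relation among the components of s used in the proof of Theorem 6.2, namely that the row sums and the column sums of the trapezoid share a common total, is easily illustrated (hypothetical data):

```python
import numpy as np

# Hypothetical trapezoid of observations; NaN marks future cells.
Y = np.array([
    [100., 60., 30., 10.],
    [110., 70., 35., np.nan],
    [120., 80., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])

H = np.nansum(Y, axis=1)  # row (horizontal) sums
V = np.nansum(Y, axis=0)  # column (vertical) sums

# Both sets of sums total the whole trapezoid, so any one component of
# the vector (H, V) is determined by the others.
print(H.sum(), V.sum())  # equal
```

This is why deleting an arbitrary single component of (H, V) loses no information: the deleted component is recoverable from those remaining.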
The application of Theorem 7.1 is limited by the fact that the bias correction factors in (7.1), etc. would rarely be known in practice. On the other hand, however, the biases contained in chain ladder estimates are tolerated in practice and, in this context, the theorem shows that the chain ladder provides a minimum variance estimate of whatever it is estimating. When the chain ladder bias is small, it provides minimum variance almost unbiased estimators.

References

Cox D R & Hinkley D V (1974). Theoretical Statistics. Chapman and Hall, London.

England P D & Verrall R J (2002). Stochastic claims reserving in general insurance. British Actuarial Journal, 8(iii).

Hachemeister C A & Stanard J N (1975). IBNR claims count estimation with static lag functions. Spring meeting of the Casualty Actuarial Society.

Lehmann E L & Casella G (1998). Theory of Point Estimation (2nd edition). Springer.

Mack T (1993). Distribution-free calculation of the standard error of chain ladder reserve estimates. Astin Bulletin, 23(2).

Renshaw A E & Verrall R J (1998). A stochastic model underlying the chain-ladder technique. British Actuarial Journal, 4(iv).

Taylor G (2000). Loss Reserving: An Actuarial Perspective. Kluwer Academic Publishers, Boston.

Taylor G (2011a). Maximum likelihood and estimation efficiency of the chain ladder. Astin Bulletin, 41(1).

Taylor G (2011b). Chain ladder correlations. Research paper No. 220 at mics.unimelb.edu.au/actwww/wps20.shtml.

Verrall R J (2000). An investigation into stochastic claims reserving models and the chain-ladder technique. Insurance: Mathematics and Economics, 26(1), 91-99.
More information