A few basics of credibility theory
1 A few basics of credibility theory
Greg Taylor
Director, Taylor Fry Consulting Actuaries
Professorial Associate, University of Melbourne
Adjunct Professor, University of New South Wales
2 General credibility formula
Consider a random variable X with E[X] = µ.
Suppose we have an observation x of X, and some collateral information leading to an independent estimate m of µ.
A credibility estimator is an estimator of the form (1 − z)m + zx, where z is called the credibility (coefficient) associated with X.
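The blend on this slide can be sketched in a few lines of Python (the function name is illustrative, not from the slides):

```python
# Minimal sketch of a credibility estimator: blend a collateral estimate m
# with an observation x using a credibility coefficient z in [0, 1].

def credibility_estimate(m, x, z):
    """Return the credibility-weighted estimate (1 - z) * m + z * x."""
    if not 0.0 <= z <= 1.0:
        raise ValueError("credibility z must lie in [0, 1]")
    return (1.0 - z) * m + z * x

print(credibility_estimate(m=100.0, x=120.0, z=0.25))  # 105.0
```

With z = 0 the estimate ignores the observation entirely; with z = 1 it ignores the collateral estimate.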
3 American credibility
Origins in workers compensation rating.
Mowbray A H (1914). How extensive a payroll exposure is necessary to give a dependable pure premium? PCAS, 1.
Asks the question: how large must the claims experience be in order to be assigned full credibility?
The answer takes the form: sufficiently large that Prob[|X − µ| > qµ] < p, where p, q are selected constants.
4 American credibility
Prob[|X − µ| > qµ] < p
American credibility is also called limited fluctuation credibility.
Example: X ~ Poisson
Prob[|X − µ|/µ^½ > qµ^½] < p
qµ^½ > z_{1−p/2} [standard normal quantile]
µ > [z_{1−p/2}/q]²
For p = 10%, q = 5%: µ > (1.645/0.05)² ≈ 1082.
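The full-credibility standard above can be computed with the standard library alone (a sketch; the function name is illustrative):

```python
# Sketch of the limited-fluctuation full-credibility standard for Poisson
# claim counts: mu > (z_{1-p/2} / q)^2.  Requires Python 3.8+ for NormalDist.
from statistics import NormalDist

def full_credibility_standard(p, q):
    """Smallest expected claim count giving full credibility at tolerance (p, q)."""
    z = NormalDist().inv_cdf(1.0 - p / 2.0)  # standard normal quantile z_{1-p/2}
    return (z / q) ** 2

mu_full = full_credibility_standard(p=0.10, q=0.05)
print(round(mu_full))  # roughly 1082 expected claims
```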
5 Problems with American credibility
Prob[|X − µ| > qµ] < p
1. What if X is not Poisson? There is then a need to estimate V[X] and include it in the treatment of full credibility.
2. The theory gives the sample size for full credibility. What treatment of smaller sample sizes? Ad hoc solutions, e.g. partial credibility: z = [n/n_full]^½, where n is the actual sample size and n_full is the sample size required for full credibility.
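The square-root rule for partial credibility can be sketched as follows (the function name is illustrative; n_full would come from the full-credibility standard, e.g. about 1082 for the Poisson case with p = 10%, q = 5%):

```python
# Sketch of the ad hoc partial-credibility (square-root) rule:
# z = min(1, sqrt(n / n_full)) for an actual sample size n.
import math

def partial_credibility(n, n_full):
    """Credibility assigned to a sample of size n, capped at full credibility."""
    return min(1.0, math.sqrt(n / n_full))

print(partial_credibility(270, 1082))   # about 0.5
print(partial_credibility(5000, 1082))  # 1.0 (full credibility)
```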
6 European credibility
Consider a collective of risks 1, 2, …
Risks are labelled by some unobservable θ = θ_1, θ_2, … [latent parameter].
Let the frequency of occurrence of a value θ in the collective be represented by d.f. U(θ) [structure function].
Let X_i = claims experience of risk i, and µ(θ_i) = E[X_i | θ_i].
Take a single observation X_i. How should µ(θ_i) be estimated?
Remember that θ_i determines µ(θ_i) but we cannot observe it.
7–10 European credibility (cont'd) [one slide built up progressively across four slides]
Form a measure of the error in any candidate estimator µ*(X_i) of µ(θ_i), and then select the estimator with the least error.
Choose error measure
R(µ*) = ∫ E[(µ*(X_i) − µ(θ))² | θ] dU(θ)
where the conditional expectation is the error for given θ, and integration against dU(θ) is the allowance for the unknown θ.
[R(µ*) is the risk associated with estimator µ*]
11 Derivation of European credibility
R(µ*) = ∫ E[(µ*(X_i) − µ(θ))² | θ] dU(θ)
Assume that µ*(x_i) is to be linear in x_i:
µ*(x_i) = a + z x_i
Choose constants a, z so as to minimise the risk R(µ*).
Differentiate R(µ*) with respect to a and set the result to zero:
∫ E[2(µ*(X_i) − µ(θ)) | θ] dU(θ) = 0
∫ E[a + z X_i − µ(θ) | θ] dU(θ) = 0   [µ*(X_i) unbiased]
∫ [a + z µ(θ) − µ(θ)] dU(θ) = 0
a = (1 − z)m, where m = ∫ µ(θ) dU(θ) = portfolio-wide mean claims experience
µ*(x_i) = (1 − z)m + z x_i
12 Derivation of European credibility (cont'd)
R(µ*) = ∫ E[(µ*(X_i) − µ(θ))² | θ] dU(θ), with µ*(x_i) = (1 − z)m + z x_i
R(µ*) = ∫ E[((1 − z)m + z X_i − µ(θ))² | θ] dU(θ)
      = ∫ E[((1 − z)(m − µ(θ)) + z(X_i − µ(θ)))² | θ] dU(θ)
Differentiate R(µ*) with respect to z and set the result to zero (the mathematics can be found on the next slide):
z = [1 + ∫ E{(X_i − µ(θ))² | θ} dU(θ) / ∫ (µ(θ) − m)² dU(θ)]^{-1}
z = [1 + E_θ V[X_i | θ] / V_θ E[X_i | θ]]^{-1}
This is the classical credibility formula (Bühlmann, 1967). European credibility is also called greatest accuracy credibility.
13 Derivation of European credibility: mathematics
R(µ*) = ∫ E[((1 − z)(m − µ(θ)) + z(X_i − µ(θ)))² | θ] dU(θ)
Differentiate R(µ*) with respect to z and set the result to zero:
∫ E{2[(X_i − µ(θ)) − (m − µ(θ))] [(1 − z)(m − µ(θ)) + z(X_i − µ(θ))] | θ} dU(θ) = 0
∫ E{z(X_i − µ(θ))² − (1 − z)(µ(θ) − m)² − (1 − 2z)(X_i − µ(θ))(µ(θ) − m) | θ} dU(θ) = 0
Note that µ(θ) and m are constants for given θ. So
∫ E{(µ(θ) − m)² | θ} dU(θ) = ∫ (µ(θ) − m)² dU(θ)
∫ E{(X_i − µ(θ))(µ(θ) − m) | θ} dU(θ) = ∫ (µ(θ) − m) E{X_i − µ(θ) | θ} dU(θ) = 0  [since the inner expectation is zero]
Then
z ∫ E{(X_i − µ(θ))² | θ} dU(θ) − (1 − z) ∫ (µ(θ) − m)² dU(θ) = 0
z = [1 + ∫ E{(X_i − µ(θ))² | θ} dU(θ) / ∫ (µ(θ) − m)² dU(θ)]^{-1}
14 Summary
µ*(x_i) = (1 − z)m + z x_i
m = ∫ µ(θ) dU(θ) = E_θ E[X_i | θ]
z = [1 + E_θ V[X_i | θ] / V_θ E[X_i | θ]]^{-1}
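An illustrative Monte Carlo check (not from the slides) that the Bühlmann z minimises quadratic risk among linear estimators (1 − z)m + zx. The structure assumed here is θ ~ Gamma(shape 2, rate 1) with X | θ ~ Poisson(θ), so E_θ V[X | θ] = 2, V_θ E[X | θ] = 2 and the theoretical z is 0.5:

```python
import math
import random

random.seed(1)
ALPHA, RATE = 2.0, 1.0                        # assumed gamma structure parameters
m = ALPHA / RATE                              # portfolio mean E_theta E[X | theta]
E_V = ALPHA / RATE                            # E_theta V[X | theta] = E[theta]
V_E = ALPHA / RATE ** 2                       # V_theta E[X | theta] = V[theta]
z_theory = 1.0 / (1.0 + E_V / V_E)            # Buhlmann credibility (= 0.5 here)

def poisson_sample(lam):
    # Knuth's multiplication method; adequate for small lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Draw one shared sample of (theta, x) pairs, then evaluate the empirical
# risk of the linear estimator (1 - z)m + zx over a grid of z values.
sample = []
for _ in range(100_000):
    theta = random.gammavariate(ALPHA, 1.0 / RATE)
    sample.append((theta, poisson_sample(theta)))

def empirical_risk(z):
    return sum(((1 - z) * m + z * x - theta) ** 2 for theta, x in sample) / len(sample)

grid = [i / 20 for i in range(21)]
z_best = min(grid, key=empirical_risk)
print(z_theory, z_best)  # the empirical minimiser should land near z_theory
```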
15 Interpretation of credibility coefficient
z = [1 + E_θ V[X_i | θ] / V_θ E[X_i | θ]]^{-1}
E_θ V[X_i | θ] = average across risk groups of within-group variances (within-group variation).
V_θ E[X_i | θ] = between-group variance of within-group means (between-group variation).
Limiting behaviour:
Within-group variation → ∞, between-group variation fixed and finite: z → 0
Within-group variation → 0, between-group variation fixed and finite: z → 1
Within-group variation fixed and finite, between-group variation → 0: z → 0
Within-group variation fixed and finite, between-group variation → ∞: z → 1
16 Bayesian and non-Bayesian approaches
Recall the error measure R(µ*) = ∫ E[(µ*(X_i) − µ(θ))² | θ] dU(θ).
U(.) was the d.f. of risks in the collective under consideration.
Alternatively, U(.) might be a Bayesian prior. Then R(µ*) is the risk integrated over the prior; it may apply to a single risk whose θ is a single drawing from the prior.
Now R(µ*) is called the Bayes risk, and the credibility estimator is the linear Bayes estimator of µ(θ).
The mathematics all works exactly as before, just interpreted differently.
17 Exact credibility
Consider the special case in which θ is a drawing from Θ ~ Gamma(α, β) and X_i ~ Poisson(θ), i.e.
u(θ) = U′(θ) = const × θ^{α−1} exp(−βθ), θ > 0
Prob[X_i = x | θ] = const × θ^x exp(−θ)
µ(θ) = E[X_i | θ] = θ, and m = E_θ E[X_i | θ] = α/β
18 Exact credibility (cont'd)
u(θ) = U′(θ) = const × θ^{α−1} exp(−βθ)
Prob[X_i = x | θ] = const × θ^x exp(−θ)
Recall Bayes' theorem:
p(θ | x) = p(x | θ) p(θ) / p(x) = p(x | θ) p(θ) × normalising constant
In our case
p(θ | x) = const × Prob[X_i = x | θ] × u(θ) = const × θ^{x+α−1} exp(−(1 + β)θ)
The posterior p(θ | x) is gamma, just as the prior p(θ) was.
The prior is then called the natural conjugate prior of the Poisson conditional likelihood p(x | θ), and the gamma family of priors is said to be closed under Bayesian revision (of the Poisson).
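The conjugate update on this slide reduces to simple parameter arithmetic: a Gamma(α, β) prior on θ combined with one Poisson observation x gives a Gamma(α + x, β + 1) posterior. A minimal sketch (function name illustrative):

```python
# Conjugate Gamma-Poisson update: Gamma(alpha, beta) prior + Poisson count x
# gives a Gamma(alpha + x, beta + 1) posterior (closure under Bayesian revision).

def gamma_poisson_update(alpha, beta, x):
    """Posterior (shape, rate) after observing a single Poisson count x."""
    return alpha + x, beta + 1.0

post_shape, post_rate = gamma_poisson_update(alpha=2.0, beta=1.0, x=3)
print(post_shape, post_rate)          # 5.0 2.0
print(post_shape / post_rate)         # posterior mean (x + alpha)/(1 + beta) = 2.5
```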
19 Exact credibility (cont'd)
p(θ | x) = const × θ^{x+α−1} exp(−(1 + β)θ)
E(µ(θ) | x) = E(θ | x) = (x + α)/(1 + β), which is linear in x.
Recall that the credibility estimator was the best linear approximation to µ(θ).
So the linear approximation is exact in this case: the credibility estimator is exact for gamma-Poisson.
20 Exact credibility (cont'd)
E(µ(θ) | x) = E(θ | x) = (x + α)/(1 + β) = (1 − z)m + zx, with z = 1/(1 + β) [since m = α/β]
Can check that this agrees with the earlier credibility coefficient z = [1 + E_θ V[X | θ] / V_θ E[X | θ]]^{-1}:
X | θ ~ Poisson(θ), so E[X | θ] = V[X | θ] = θ
Θ ~ Gamma(α, β), so E_θ V[X | θ] = α/β and V_θ E[X | θ] = α/β²
z = [1 + β]^{-1} = 1/(1 + β)
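A quick numerical check (a sketch; the parameter values are arbitrary) that the gamma-Poisson posterior mean is exactly the credibility blend, i.e. (x + α)/(1 + β) = (1 − z)m + zx with z = 1/(1 + β):

```python
# Verify exact credibility for the Gamma-Poisson pair over a range of counts x.
alpha, beta = 2.0, 3.0           # arbitrary illustrative prior parameters
m = alpha / beta                 # prior (portfolio) mean
z = 1.0 / (1.0 + beta)           # credibility coefficient

for x in range(10):
    posterior_mean = (x + alpha) / (1.0 + beta)
    blend = (1.0 - z) * m + z * x
    assert abs(posterior_mean - blend) < 1e-12

print("posterior mean equals credibility estimate for all x checked")
```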
21 Exact credibility (cont'd)
The credibility estimator is exact for gamma-Poisson.
This result may also be checked for certain other conjugate pairs, e.g.
gamma-gamma
normal-normal
22 Relation between credibility and GLMs
In fact the credibility estimator is exact (with a minor regularity condition) for all conditional likelihoods from the exponential dispersion family (EDF) with natural conjugate priors (Jewell, 1974, 1975), i.e.
p(x | θ) = const × exp{[xθ − b(θ)] / φ}
p(θ) = const × exp{[nθ − b(θ)] / ψ}
It is well known that the EDF includes the Poisson, gamma and normal.
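As a concrete instance of the EDF form, the Poisson likelihood fits it with natural parameter θ = log λ, cumulant function b(θ) = e^θ and dispersion φ = 1 (the 1/x! factor is the base measure). A quick numerical check of this identity, a sketch not taken from the slides:

```python
# Check that the Poisson pmf matches its exponential dispersion family form
# exp{x*theta - b(theta)} / x!  with theta = log(lam) and b(theta) = exp(theta).
import math

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)

def edf_form(x, lam):
    theta = math.log(lam)        # natural parameter
    b = math.exp(theta)          # cumulant function b(theta) = e^theta
    return math.exp(x * theta - b) / math.factorial(x)  # dispersion phi = 1

for x in range(6):
    assert abs(poisson_pmf(x, 2.5) - edf_form(x, 2.5)) < 1e-12

print("Poisson pmf matches its EDF form")
```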
23 Relation between credibility and GLMs (cont'd)
A GLM is a model of the form
Y = [Y_1, Y_2, …, Y_n]^T
Y_i ~ EDF
E[Y] = h^{-1}(Xβ)  [h is the link function]
The error term is such as to produce exact credibility if a natural conjugate prior is associated with each Y_i.
24 Estimation of credibility coefficient
z = [1 + E_θ V[X_i | θ] / V_θ E[X_i | θ]]^{-1}
where E_θ V[X_i | θ] is the average across risk groups of the within-group variances, and V_θ E[X_i | θ] is the between-group variance of the within-group means.
Consider the case in which there are n risk groups, each observed over t time intervals, and let X_ij = claims experience of the i-th group in interval j.
The above description of the credibility coefficient suggests an analysis of variance. In fact, estimate z by [1 + 1/F]^{-1}, where F is the ANOVA F-statistic for the array {X_ij} (Zehnwirth).
25 Multi-dimensional credibility
Consider the same data array {Y_ij, i = 1, …, n; j = 1, …, t} [now Y instead of X].
Assume:
For given i, the Y_ij are iid.
The d.f. of Y_ij is characterised by a latent parameter θ_i.
{θ_i, i = 1, …, n} is an iid sample from d.f. U(.).
µ(θ) = [µ(θ_1), …, µ(θ_n)]^T = Xβ  [regression structure; dimensions n×1 = (n×q)(q×1)]
Find the credibility estimator µ*(Y) of µ(θ).
26 Multi-dimensional credibility (cont'd)
Earlier 1-dimensional error measure (Bayes risk):
R(µ*) = ∫ E[(µ*(X_i) − µ(θ))² | θ] dU(θ)
Multi-dimensional version:
R(µ*) = ∫ E[(µ*(Y) − µ(θ))^T (µ*(Y) − µ(θ)) | θ] dU(θ) = ∫ E[(µ*(Y) − Xβ)^T (µ*(Y) − Xβ) | θ] dU(θ)
Result:
µ*(Y) = (I − Z)m + ZY
with m = E_θ[µ(θ)] as before, and Z an n×n credibility matrix with a form dependent on between- and within-group dispersions, as before.
Completeness A minimal sufficient statistic achieves the maximum amount of data reduction while retaining all the information the sample has concerning θ. On the other hand, the distribution of an ancillary
More informationSome slides from Carlos Guestrin, Luke Zettlemoyer & K Gajos 2
Logistics CSE 446: Point Estimation Winter 2012 PS2 out shortly Dan Weld Some slides from Carlos Guestrin, Luke Zettlemoyer & K Gajos 2 Last Time Random variables, distributions Marginal, joint & conditional
More informationEcon 2148, spring 2019 Statistical decision theory
Econ 2148, spring 2019 Statistical decision theory Maximilian Kasy Department of Economics, Harvard University 1 / 53 Takeaways for this part of class 1. A general framework to think about what makes a
More informationEXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY
EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY HIGHER CERTIFICATE IN STATISTICS, 2013 MODULE 5 : Further probability and inference Time allowed: One and a half hours Candidates should answer THREE questions.
More informationST5215: Advanced Statistical Theory
Department of Statistics & Applied Probability Wednesday, October 19, 2011 Lecture 17: UMVUE and the first method of derivation Estimable parameters Let ϑ be a parameter in the family P. If there exists
More informationMaster s Written Examination
Master s Written Examination Option: Statistics and Probability Spring 05 Full points may be obtained for correct answers to eight questions Each numbered question (which may have several parts) is worth
More information1. (Regular) Exponential Family
1. (Regular) Exponential Family The density function of a regular exponential family is: [ ] Example. Poisson(θ) [ ] Example. Normal. (both unknown). ) [ ] [ ] [ ] [ ] 2. Theorem (Exponential family &
More informationSPRING 2007 EXAM C SOLUTIONS
SPRING 007 EXAM C SOLUTIONS Question #1 The data are already shifted (have had the policy limit and the deductible of 50 applied). The two 350 payments are censored. Thus the likelihood function is L =
More informationConjugate Predictive Distributions and Generalized Entropies
Conjugate Predictive Distributions and Generalized Entropies Eduardo Gutiérrez-Peña Department of Probability and Statistics IIMAS-UNAM, Mexico Padova, Italy. 21-23 March, 2013 Menu 1 Antipasto/Appetizer
More informationBayesian Regression (1/31/13)
STA613/CBB540: Statistical methods in computational biology Bayesian Regression (1/31/13) Lecturer: Barbara Engelhardt Scribe: Amanda Lea 1 Bayesian Paradigm Bayesian methods ask: given that I have observed
More information