Neutral Bayesian reference models for incidence rates of (rare) clinical events
1 Neutral Bayesian reference models for incidence rates of (rare) clinical events
Jouni Kerman, Statistical Methodology, Novartis Pharma AG, Basel
BAYES2012, May 10, Aachen

2 Outline
Motivation: why reference (default) models?
Selection criteria for the reference models
Investigating candidates for reference models
A proposal for Neutral reference models
Augmenting the proposed reference analysis with historical data
3 Motivation

4 Reference analyses for comparison
We do more and more complex analyses, e.g., meta-analyses.
Reality check: are the results reasonable?

5 Reference analyses for comparison
Comparing with point estimates to reveal discrepancies:
Are the results reasonable? Any excessive shrinkage?

6 Reference analyses for comparison
Plotting just the data points is not enough; we must visualize the uncertainty around the point estimates.
We need simple Bayesian models to produce point estimates and reference uncertainty intervals.
7 Reference analyses for comparison
Stratified analyses: model the rate within a single treatment (sub)group, or model a rate difference (e.g., LOR, RR) for two (sub)groups.
Pooled analyses: analyses with pooled studies/subgroups (i.e., assuming identical rates between studies or groups).

8 Stratified and pooled reference analyses
Looking at the raw data.

9 Stratified and pooled reference analyses
Looking at the differences.
10 Reference ("default") analyses - Example: Safety
Example: kidney transplantation; one single study.
Treatment   Deaths at 12 months
A           7 / 251
B           9 / 274
C           6 /

11 Considering selection criteria for the reference models
12 Binomial/Poisson models and shrinkage
Shrinkage is unavoidable! Consider y = 0.
Illustration: binomial-beta conjugate model with prior Beta(a, a).
The point estimate and the length of the posterior intervals (with respect to the scale n) are determined completely by the prior.
(Recall: there are no uninformative models...)

13 Binomial/Poisson models and shrinkage
Shrinkage is unavoidable! Consider y = 1.
The point estimate and the posterior intervals are strongly influenced by the prior: is Pr(θ > y/n | y) > 0.74 or Pr(θ > y/n | y) > 0.37?
As y increases, the influence of the prior diminishes, but n can be arbitrarily large.
Illustration: binomial-beta conjugate model with prior Beta(a, a).
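The prior sensitivity for y = 1 can be checked directly with the conjugate update; a minimal sketch using scipy (not part of the slides; the shapes 0.01, 1/3, and 1 are the Beta(a, a) priors compared later in the talk):

```python
from scipy.stats import beta

def prob_above_mle(y, n, a):
    """Pr(theta > y/n | y) under a Beta(a, a) prior;
    the conjugate posterior is Beta(a + y, a + n - y)."""
    return float(beta.sf(y / n, a + y, a + n - y))

# With y = 1 event, the tail probability beyond the MLE depends
# almost entirely on the prior shape a
for a in (0.01, 1/3, 1.0):
    p = prob_above_mle(1, 1000, a)
    print(f"a = {a:>5}: Pr(theta > 1/n | y=1, n=1000) = {p:.3f}")
```

For a = 1 the probability is near 0.74 and for a near 0 it is near 0.37, matching the two values quoted on the slide; a = 1/3 sits close to 0.5.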
14 Choosing a reference model
The choice of shrinkage... is yours.
By choosing a reference model, we are in fact deciding on the amount of shrinkage.
What is an acceptable default amount of shrinkage?

15 Neutrality as a criterion
A neutral model for rates and proportions: Pr(θ > MLE | y) ≈ 50% consistently, for all possible outcomes and sample sizes, whenever the MLE is not at the boundary of the parameter space.
A priori, the model doesn't favor high or low values relative to the MLE (sample mean).
Exact neutrality cannot be achieved, but some priors are more neutral than others.
(Figure: MLE = 0.2; posterior median shown as dotted line; Pr(θ > MLE | y) = 50.2%.)
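The neutrality of the Beta(1/3, 1/3) prior can be probed over a grid of outcomes and sample sizes; a small scipy sketch (the grid values are my own, not from the slides):

```python
import numpy as np
from scipy.stats import beta

# How far from 50% does Pr(theta > MLE | y) stray under a Beta(1/3, 1/3) prior?
a = 1/3
worst = 0.0
for n in (10, 50, 200, 1000):
    y = np.arange(1, n)                      # interior MLEs only
    p = beta.sf(y / n, a + y, a + n - y)     # posterior tail beyond the MLE
    worst = max(worst, float(np.max(np.abs(p - 0.5))))
print(f"max |Pr(theta > MLE | y) - 0.5| over the grid: {worst:.3f}")
```

The deviation stays within a few percentage points of 50% across the whole grid, illustrating the "approximate neutrality" claimed for this prior.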
16 Neutrality for the differences
A neutral default model: Pr(θ1 - θ2 > d | y) ≈ 50%, where d is the observed difference on some scale (e.g., log, logit, or original scale).
Equivalently, d should be as close to the posterior median as possible.
A reference model should provide neutral inferences for both rates and differences.

17 Investigating candidates for reference models

18 Candidates for reference models (binomial)
Conjugate models: y_i ~ Binomial(n_i, θ_i), i = 1, 2; θ_i ~ Beta(a, a), a in (0, 1).
Logistic regression with different parameterizations and different vague prior distributions (normal or scaled Student-t): 116 models in total.
             Model A   Model B   Model C
logit(θ1) =  µ1        µ         µ - Δ/2
logit(θ2) =  µ2        µ + Δ     µ + Δ/2
19 Candidates for reference models (Poisson)
Conjugate models: y_i ~ Poisson(X_i θ_i), i = 1, 2; θ_i ~ Gamma(a, 0), a in (0, 1).
Poisson regression (log link) with different parameterizations and different vague prior distributions (normal or scaled Student-t): 116 models in total.
           Model A   Model B   Model C
log(θ1) =  µ1        µ         µ - Δ/2
log(θ2) =  µ2        µ + Δ     µ + Δ/2

20 An apparent bias in rate estimates
An example: a "noninformative" analysis? y = 1 event out of n = 1000.
Statisticians (a), (b), and (c) use different noninformative models:
     Model             Median estimate   Pr(θ > y/n | y)
(a)  Beta(0.01, 0.01)  0.7 / 1000        …%
(b)  Beta(1/3, 1/3)    1.0 / 1000        …%
(c)  Beta(1, 1)        1.7 / 1000        …%
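The three median estimates can be reproduced from the conjugate posteriors; a scipy sketch (assuming the medians in the table are quoted per 1000, consistent with n = 1000):

```python
from scipy.stats import beta

y, n = 1, 1000
# Posterior medians under three "noninformative" Beta(a, a) priors;
# conjugate posterior is Beta(a + y, a + n - y)
for a in (0.01, 1/3, 1.0):
    med = beta.median(a + y, a + n - y)
    print(f"Beta({a}, {a}): posterior median = {1000 * med:.1f} / 1000")
```

Only the a = 1/3 prior puts the posterior median at the sample mean 1/1000; the others pull it down to about 0.7/1000 or up to about 1.7/1000.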
21 An apparent bias in log-risk ratio estimates
An example: a "noninformative" analysis? Experimental: y = 3 events out of n = 1000; placebo: y = 1 event out of n = 1000.
Statisticians (a), (b), and (c) use different noninformative models:
     Logistic model   Priors                           Median odds   Pr(odds > 3 | y)
(a)  C                µ ~ N(0, 100²), Δ ~ N(0, 10²)    …             …%
(b)  A                µ1 ~ N(0, 5²), µ2 ~ N(0, 5²)     …             …%
(c)  B                µ ~ N(0, 5²), Δ ~ N(0, 2.5²)     …             …%

22 Asymmetric estimates in log-risk ratio estimates
An example: a "noninformative" analysis? Experimental: y = 1 event out of n = 1000; placebo: y = 1 event out of n = 1000.
Statisticians (a), (b), and (c) use different noninformative models. What is your point estimate?
     Logistic model   Priors                           Median odds   Pr(odds > 3 | y)
(a)  B                µ ~ N(0, 5²), Δ ~ N(0, 5²)       …             …%
(b)  B                µ ~ t(0, 10, 5), Δ ~ t(0, 5, 5)  …             …%
(c)  B                µ ~ N(0, 100²), Δ ~ N(0, 5²)     …             …%
23 A proposal for default models

24 Neutral models for proportions and probabilities
The binomial-beta conjugate model with shape parameter 1/3:
y ~ Binomial(n, θ)
θ ~ Beta(1/3, 1/3)
Behaves consistently for all sample sizes n and outcomes y.

25 Neutral models for rates
The Poisson-gamma conjugate model with shape parameter 1/3:
y ~ Poisson(λX), where X = exposure
λ ~ Gamma(1/3, 0)
Behaves consistently for all exposures X and outcomes y.
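Why 1/3? The median of a Gamma(k) distribution is approximately k - 1/3, so the Gamma(1/3 + y, X) posterior has median ≈ y/X, the classical point estimate. A quick check (the event count and exposure below are made up for illustration):

```python
from scipy.stats import gamma

# Hypothetical data: y events over exposure X (values are illustrative only)
y, X = 7, 1234.0
# Gamma(1/3 + y, rate X) posterior; scipy parameterizes by scale = 1/rate
post = gamma(a=1/3 + y, scale=1.0 / X)
print(f"posterior median = {post.median():.6f}, MLE y/X = {y / X:.6f}")
```

The two numbers agree to several decimal places, which is the neutrality property: the posterior median sits at the sample rate.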
26 Neutral models for differences and ratios
Treatment groups are estimated separately; differences are then computed.
E.g., for the binomial-beta model:
(θ1 | y) ~ Beta(1/3 + y1, 1/3 + n1 - y1)
(θ2 | y) ~ Beta(1/3 + y2, 1/3 + n2 - y2)
Compute δ = θ2 - θ1 and Δ = logit(θ2) - logit(θ1), e.g., by simulation.
Δ and δ are neutral: approximately centered at the point estimate, consistently.
Δ and δ are symmetric when y and n are equal in both groups.
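The simulation step above can be sketched as follows (the sample size, seed, and the symmetric y = 1 data are my own choices):

```python
import numpy as np

rng = np.random.default_rng(2012)
sims = 200_000
y1, n1, y2, n2 = 1, 1000, 1, 1000   # symmetric case: y and n equal in both arms

# Draw from the two independent Beta(1/3 + y, 1/3 + n - y) posteriors
theta1 = rng.beta(1/3 + y1, 1/3 + n1 - y1, size=sims)
theta2 = rng.beta(1/3 + y2, 1/3 + n2 - y2, size=sims)

delta = theta2 - theta1                                                 # risk difference
Delta = np.log(theta2 / (1 - theta2)) - np.log(theta1 / (1 - theta1))   # log odds ratio
print(f"median delta = {np.median(delta):.2e}, median Delta = {np.median(Delta):.4f}")
```

With equal data in both arms, both posterior medians land at (essentially) zero, illustrating the symmetry claim.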
27 Behavior of the binomial models
The Beta(1/3, 1/3) conjugate model behaves the most consistently.
Displayed: maximum absolute bias (%) of the estimated rates or odds across all models (worst-case scenario: y = 1 for one of the arms).

28 Behavior of the Poisson models
The Gamma(1/3, 0) conjugate model behaves the most consistently.
Displayed: maximum absolute bias (%) of the estimated rate or rate ratio across all models (worst-case scenario: y = 1 for one of the arms).

29 Neutral models for differences and ratios
Examples of worst cases (one group has y = 1):
Data 1   Data 2   Median estimate θ1   Median estimate θ2   Median odds estimate   Pr(odds > obs | y)
1/1000   2/1000   …                    …                    …                      …%
1/1000   3/1000   …                    …                    …                      …%
1/1000   4/1000   …                    …                    …                      …%
1/1000   5/1000   …                    …                    …                      …%
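These worst cases can be recomputed by simulation; a sketch assuming denominators of 1000 throughout (the printed values are Monte Carlo estimates under the Beta(1/3, 1/3) model, not numbers taken from the slide):

```python
import numpy as np

rng = np.random.default_rng(7)

def odds_ratio_summary(y1, n1, y2, n2, sims=200_000):
    """Posterior median odds ratio and Pr(OR > observed OR) under
    independent Beta(1/3, 1/3) priors (Monte Carlo sketch)."""
    t1 = rng.beta(1/3 + y1, 1/3 + n1 - y1, size=sims)
    t2 = rng.beta(1/3 + y2, 1/3 + n2 - y2, size=sims)
    post_or = (t2 / (1 - t2)) / (t1 / (1 - t1))
    obs_or = (y2 / (n2 - y2)) / (y1 / (n1 - y1))
    return float(np.median(post_or)), float((post_or > obs_or).mean())

for y2 in (2, 3, 4, 5):
    med, tail = odds_ratio_summary(1, 1000, y2, 1000)
    print(f"1/1000 vs {y2}/1000: median OR = {med:.2f}, Pr(OR > obs) = {tail:.3f}")
```

Even in these worst cases the tail probability stays near 50%, and the median odds ratio stays near the observed one.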
30 Example: Meta-analysis
Viewing posterior intervals from many multilevel models at once.
Green: pooled; gray: fully stratified reference intervals.
Statistical Methodology Science VC, Jouni Kerman, Nov 9, 2010: Analyzing Proportions and Rates using Neutral Priors

31 Augmenting the default analysis with external information
32 Augmenting the default reference analysis: binomial model
A family of informative beta priors: Beta(1/3 + mp, 1/3 + m(1-p)).
Fix p (the a priori observed point estimate); use m to adjust the prior precision.
Beta(1/3, 1/3) is the "prior of all priors"; neither shape parameter is ever < 1/3.
The posterior median is approximately the precision-weighted average
  (m / (m + n)) p + (n / (m + n)) (y/n),
a compromise between the prior point estimate p and the sample mean y/n.
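A quick numeric check of this weighting (all numbers hypothetical). Since the posterior is Beta(1/3 + mp + y, 1/3 + m(1-p) + n - y) and the median of Beta(α, β) is approximately (α - 1/3)/(α + β - 2/3), the posterior median works out to roughly (mp + y)/(m + n), i.e., the weighted average above:

```python
from scipy.stats import beta

# Hypothetical numbers: prior point estimate p with prior weight m,
# combined with observed data y out of n (all values illustrative)
m, p = 50, 0.02
y, n = 3, 200

post = beta(1/3 + m * p + y, 1/3 + m * (1 - p) + n - y)
weighted = (m / (m + n)) * p + (n / (m + n)) * (y / n)
print(f"posterior median = {post.median():.5f}, weighted average = {weighted:.5f}")
```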
33 Augmenting the default reference analysis: Poisson model
A family of informative gamma conjugate priors: Gamma(1/3 + ky, kX).
Fix y/X (the a priori observed point estimate); use k within (0, 1) to adjust the prior precision.
Gamma(1/3, 0) is the "prior of all priors".
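The same check for the gamma family (all numbers hypothetical): with prior Gamma(1/3 + ky0, kX0) and new data y events over exposure X, the conjugate posterior is Gamma(1/3 + ky0 + y, kX0 + X), whose median sits approximately at the discounted pooled rate (ky0 + y)/(kX0 + X):

```python
from scipy.stats import gamma

# Hypothetical numbers: historical y0 events over exposure X0, discounted by k,
# combined with new data y events over exposure X (all values illustrative)
k = 0.5
y0, X0 = 10, 4000.0
y, X = 2, 1500.0

post = gamma(a=1/3 + k * y0 + y, scale=1.0 / (k * X0 + X))
pooled_rate = (k * y0 + y) / (k * X0 + X)
print(f"posterior median = {post.median():.6f}, discounted pooled rate = {pooled_rate:.6f}")
```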
34 Conclusion
The classical point estimates (sample means and their differences) remain the reference points to which model-based inferences are inevitably compared.
Recognizing that shrinkage is unavoidable in these count-data models, we propose (approximate) neutrality as a criterion for reference models.
The proposed conjugate models perform consistently for all outcomes and sample sizes:
- symmetry and minimal bias
- easily computable without MCMC
- intuitively augmentable with external information

35 References
Kerman, J. (2011). Neutral noninformative and informative conjugate beta and gamma prior distributions. Electronic Journal of Statistics, 5.
Kerman, J. (2012). Neutral Bayesian reference models for incidence rates of clinical events. (Working paper)
36 A look at the neutral beta prior (log-odds scale)
Beta(1, 1): uniform
Beta(1/2, 1/2): Jeffreys
Beta(1/3, 1/3): neutral
Beta(0.001, 0.001): approximate Haldane

37 Reference model candidates investigated
Binomial and Poisson regression models.
Normal model: for µ, σ = 3.3, 5, 10, 100; for Δ, σ = 2.5, 5, 10.
Student-t model: for µ, scale = 3.3, 5, 10, 100 and df = 2, 5, 10; for Δ, scale = 2.5, 3.3, 5, 10 and df same as for µ.
38 Possible reference models (binomial)
y_i ~ Binomial(n_i, θ_i), i = 1, 2

Model A (independent groups):
  Beta:     θ_i ~ Beta(a, a); δ = θ2 - θ1
  Normal:   logit(θ_i) ~ N(0, σ²); δ = logit(θ2) - logit(θ1)
  Scaled t: logit(θ_i) ~ t(0, σ, df); δ = logit(θ2) - logit(θ1)

Model B (baseline plus difference):
  Normal:   logit(θ1) ~ N(0, σ1²); δ ~ N(0, σ2²); logit(θ2) = logit(θ1) + δ
  Scaled t: logit(θ1) ~ t(0, σ1, df1); δ ~ t(0, σ2, df2); logit(θ2) = logit(θ1) + δ

Model C (symmetric):
  Normal:   µ ~ N(0, σ1²); δ ~ N(0, σ2²); logit(θ1) = µ - δ/2; logit(θ2) = µ + δ/2
  Scaled t: µ ~ t(0, σ1, df1); δ ~ t(0, σ2, df2); logit(θ1) = µ - δ/2; logit(θ2) = µ + δ/2

39 Possible reference models (Poisson)
y_i ~ Poisson(X_i θ_i), i = 1, 2

Model A (independent groups):
  Gamma:    θ_i ~ Gamma(a, ε); δ = θ2 - θ1
  Normal:   log(θ_i) ~ N(0, σ²); δ = log(θ2) - log(θ1)
  Scaled t: log(θ_i) ~ t(0, σ, df); δ = log(θ2) - log(θ1)

Model B (baseline plus difference):
  Normal:   log(θ1) ~ N(0, σ1²); δ ~ N(0, σ2²); log(θ2) = log(θ1) + δ
  Scaled t: log(θ1) ~ t(0, σ1, df1); δ ~ t(0, σ2, df2); log(θ2) = log(θ1) + δ

Model C (symmetric):
  Normal:   µ ~ N(0, σ1²); δ ~ N(0, σ2²); log(θ1) = µ - δ/2; log(θ2) = µ + δ/2
  Scaled t: µ ~ t(0, σ1, df1); δ ~ t(0, σ2, df2); log(θ1) = µ - δ/2; log(θ2) = µ + δ/2
Institute of Actuaries of India Subject CS1 Actuarial Statistics 1 Core Principles For 2019 Examinations Aim The aim of the Actuarial Statistics 1 subject is to provide a grounding in mathematical and
More informationReadings: K&F: 16.3, 16.4, Graphical Models Carlos Guestrin Carnegie Mellon University October 6 th, 2008
Readings: K&F: 16.3, 16.4, 17.3 Bayesian Param. Learning Bayesian Structure Learning Graphical Models 10708 Carlos Guestrin Carnegie Mellon University October 6 th, 2008 10-708 Carlos Guestrin 2006-2008
More informationSequential Experimental Designs for Generalized Linear Models
Sequential Experimental Designs for Generalized Linear Models Hovav A. Dror and David M. Steinberg, JASA (2008) Bob A. Salim May 14, 2013 Bob A. Salim Sequential Experimental Designs for Generalized Linear
More informationModel Selection in GLMs. (should be able to implement frequentist GLM analyses!) Today: standard frequentist methods for model selection
Model Selection in GLMs Last class: estimability/identifiability, analysis of deviance, standard errors & confidence intervals (should be able to implement frequentist GLM analyses!) Today: standard frequentist
More informationReview: Statistical Model
Review: Statistical Model { f θ :θ Ω} A statistical model for some data is a set of distributions, one of which corresponds to the true unknown distribution that produced the data. The statistical model
More informationComputational methods are invaluable for typology, but the models must match the questions: Commentary on Dunn et al. (2011)
Computational methods are invaluable for typology, but the models must match the questions: Commentary on Dunn et al. (2011) Roger Levy and Hal Daumé III August 1, 2011 The primary goal of Dunn et al.
More informationBayesian Prediction of Code Output. ASA Albuquerque Chapter Short Course October 2014
Bayesian Prediction of Code Output ASA Albuquerque Chapter Short Course October 2014 Abstract This presentation summarizes Bayesian prediction methodology for the Gaussian process (GP) surrogate representation
More informationBayesian Statistics Adrian Raftery and Jeff Gill One-day course for the American Sociological Association August 15, 2002
Bayesian Statistics Adrian Raftery and Jeff Gill One-day course for the American Sociological Association August 15, 2002 Bayes Course, ASA Meeting, August 2002 c Adrian E. Raftery 2002 1 Outline 1. Bayes
More informationCOMPOSITIONAL IDEAS IN THE BAYESIAN ANALYSIS OF CATEGORICAL DATA WITH APPLICATION TO DOSE FINDING CLINICAL TRIALS
COMPOSITIONAL IDEAS IN THE BAYESIAN ANALYSIS OF CATEGORICAL DATA WITH APPLICATION TO DOSE FINDING CLINICAL TRIALS M. Gasparini and J. Eisele 2 Politecnico di Torino, Torino, Italy; mauro.gasparini@polito.it
More informationIntroduction: MLE, MAP, Bayesian reasoning (28/8/13)
STA561: Probabilistic machine learning Introduction: MLE, MAP, Bayesian reasoning (28/8/13) Lecturer: Barbara Engelhardt Scribes: K. Ulrich, J. Subramanian, N. Raval, J. O Hollaren 1 Classifiers In this
More informationMACHINE LEARNING INTRODUCTION: STRING CLASSIFICATION
MACHINE LEARNING INTRODUCTION: STRING CLASSIFICATION THOMAS MAILUND Machine learning means different things to different people, and there is no general agreed upon core set of algorithms that must be
More informationBTRY 4830/6830: Quantitative Genomics and Genetics
BTRY 4830/6830: Quantitative Genomics and Genetics Lecture 23: Alternative tests in GWAS / (Brief) Introduction to Bayesian Inference Jason Mezey jgm45@cornell.edu Nov. 13, 2014 (Th) 8:40-9:55 Announcements
More informationGroup Sequential Designs: Theory, Computation and Optimisation
Group Sequential Designs: Theory, Computation and Optimisation Christopher Jennison Department of Mathematical Sciences, University of Bath, UK http://people.bath.ac.uk/mascj 8th International Conference
More informationBayesian inference for sample surveys. Roderick Little Module 2: Bayesian models for simple random samples
Bayesian inference for sample surveys Roderick Little Module : Bayesian models for simple random samples Superpopulation Modeling: Estimating parameters Various principles: least squares, method of moments,
More informationAdaptive Prediction of Event Times in Clinical Trials
Adaptive Prediction of Event Times in Clinical Trials Yu Lan Southern Methodist University Advisor: Daniel F. Heitjan May 8, 2017 Yu Lan (SMU) May 8, 2017 1 / 19 Clinical Trial Prediction Event-based trials:
More informationA Very Brief Summary of Bayesian Inference, and Examples
A Very Brief Summary of Bayesian Inference, and Examples Trinity Term 009 Prof Gesine Reinert Our starting point are data x = x 1, x,, x n, which we view as realisations of random variables X 1, X,, X
More informationSTA 250: Statistics. Notes 7. Bayesian Approach to Statistics. Book chapters: 7.2
STA 25: Statistics Notes 7. Bayesian Aroach to Statistics Book chaters: 7.2 1 From calibrating a rocedure to quantifying uncertainty We saw that the central idea of classical testing is to rovide a rigorous
More informationSequential Importance Sampling for Rare Event Estimation with Computer Experiments
Sequential Importance Sampling for Rare Event Estimation with Computer Experiments Brian Williams and Rick Picard LA-UR-12-22467 Statistical Sciences Group, Los Alamos National Laboratory Abstract Importance
More informationPeter Hoff Minimax estimation November 12, Motivation and definition. 2 Least favorable prior 3. 3 Least favorable prior sequence 11
Contents 1 Motivation and definition 1 2 Least favorable prior 3 3 Least favorable prior sequence 11 4 Nonparametric problems 15 5 Minimax and admissibility 18 6 Superefficiency and sparsity 19 Most of
More informationBayesian linear regression
Bayesian linear regression Linear regression is the basis of most statistical modeling. The model is Y i = X T i β + ε i, where Y i is the continuous response X i = (X i1,..., X ip ) T is the corresponding
More informationLecture 2: Poisson and logistic regression
Dankmar Böhning Southampton Statistical Sciences Research Institute University of Southampton, UK S 3 RI, 11-12 December 2014 introduction to Poisson regression application to the BELCAP study introduction
More informationMethods and Criteria for Model Selection. CS57300 Data Mining Fall Instructor: Bruno Ribeiro
Methods and Criteria for Model Selection CS57300 Data Mining Fall 2016 Instructor: Bruno Ribeiro Goal } Introduce classifier evaluation criteria } Introduce Bias x Variance duality } Model Assessment }
More informationGeneral Bayesian Inference I
General Bayesian Inference I Outline: Basic concepts, One-parameter models, Noninformative priors. Reading: Chapters 10 and 11 in Kay-I. (Occasional) Simplified Notation. When there is no potential for
More informationMulti-level Models: Idea
Review of 140.656 Review Introduction to multi-level models The two-stage normal-normal model Two-stage linear models with random effects Three-stage linear models Two-stage logistic regression with random
More information10. Exchangeability and hierarchical models Objective. Recommended reading
10. Exchangeability and hierarchical models Objective Introduce exchangeability and its relation to Bayesian hierarchical models. Show how to fit such models using fully and empirical Bayesian methods.
More information