Frequentist Accuracy of Bayesian Estimates
1 Frequentist Accuracy of Bayesian Estimates. Bradley Efron, Stanford University. RSS Journal Webinar.
2 Objective Bayesian Inference

Probability family $\mathcal{F} = \{ f_\mu(x),\ \mu \in \Omega \}$; parameter of interest $\theta = t(\mu)$; prior $\pi(\mu)$.

$$\hat\theta = E\{\theta \mid x\} = \int_\Omega t(\mu) f_\mu(x) \pi(\mu)\, d\mu \Big/ \int_\Omega f_\mu(x) \pi(\mu)\, d\mu$$

Objective Bayes: $\pi(\mu)$ uninformative, e.g., Jeffreys $\pi(\mu) = |I(\mu)|^{1/2}$.

How accurate is $\hat\theta$?
3 General Accuracy Formula

$x \sim (\mu, V_\mu)$, with $\mu, x \in \mathbb{R}^p$, and score vector

$$\alpha_x(\mu) = \nabla_x \log f_\mu(x) = \Bigl( \dots, \frac{\partial \log f_\mu(x)}{\partial x_i}, \dots \Bigr)^\top$$

Lemma. $\hat\theta = E\{ t(\mu) \mid x \}$ has gradient $\nabla_x \hat\theta = \mathrm{cov}\{ t(\mu), \alpha_x(\mu) \mid x \} \equiv \widehat{\mathrm{cov}}$.

Theorem. The delta-method standard deviation of $\hat\theta$ is $\mathrm{sd}(\hat\theta) = \bigl[ \widehat{\mathrm{cov}}^\top V_x\, \widehat{\mathrm{cov}} \bigr]^{1/2}$.
4 Implementation

Posterior sample $\mu_1, \mu_2, \dots, \mu_B$ (MCMC); set $t_i = t(\mu_i)$ and $\alpha_i = \alpha_x(\mu_i)$. Then

$$\hat\theta = \sum_{i=1}^B t_i / B, \qquad \widehat{\mathrm{cov}} = \sum_{i=1}^B (\alpha_i - \bar\alpha)(t_i - \bar t) / B, \qquad \widehat{\mathrm{sd}}(\hat\theta) = \bigl[ \widehat{\mathrm{cov}}^\top V_x\, \widehat{\mathrm{cov}} \bigr]^{1/2}$$
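The three estimates above translate directly into code. The following Python sketch is illustrative only, not Efron's implementation; the function name `freq_sd` is hypothetical, and it assumes the posterior draws $t_i$, the score vectors $\alpha_i$, and an estimate of $V_x$ have already been computed.

```python
import numpy as np

def freq_sd(t, alpha, V_x):
    """Posterior mean and its delta-method frequentist sd (illustrative sketch).

    t     : (B,) array of t(mu_i) from the MCMC sample
    alpha : (B, p) array of score vectors alpha_x(mu_i)
    V_x   : (p, p) covariance matrix of the data vector x
    """
    theta_hat = t.mean()
    # cov-hat: empirical covariance between alpha_i and t_i (divisor B)
    cov_hat = (alpha - alpha.mean(axis=0)).T @ (t - t.mean()) / len(t)
    sd = float(np.sqrt(cov_hat @ V_x @ cov_hat))
    return theta_hat, sd
```

The only extra cost beyond ordinary MCMC is storing the score vectors $\alpha_i$ alongside the draws.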
5 Diabetes Data

Efron et al. (2004); Park and Casella (2008). $n = 442$ subjects, $p = 10$ predictors: age, sex, bmi, glu, ... Response $y$ = disease progression at one year.

Model: $y = X\alpha + e$, $e \sim N_n(0, I)$, with dimensions $n \times 1 = (n \times p)(p \times 1) + n \times 1$.

Prior: $\pi(\alpha) = e^{-\lambda \lVert \alpha \rVert_1}$ ($\lambda = 0.37$).
6 Estimating $\theta_j = x_j^\top \alpha$

The predicted value for patient $j$. MCMC posterior sample $\{\alpha_1, \alpha_2, \dots, \alpha_B\}$, $B = 10{,}000$; $t_{ji} = x_j^\top \alpha_i$ gives $\hat\theta_j = \sum_{i=1}^B t_{ji} / B$. For patient 125 this yields $\hat\theta_{125}$ and its Bayes and frequentist standard deviations.

$$\frac{\mathrm{sd}_{\mathrm{freq}}}{\mathrm{sd}_{\mathrm{Bayes}}} = \Bigl( \frac{x^\top \hat\Sigma G \hat\Sigma x}{x^\top \hat\Sigma x} \Bigr)^{1/2}, \qquad G = X^\top X, \quad \hat\Sigma = \text{empirical covariance matrix of } \{\alpha_i\}$$

Eigenvalue range of $(\hat\Sigma^{1/2} G \hat\Sigma^{1/2})^{1/2}$: $(1.007, 0.313)$.
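The ratio formula can be sketched in a few lines. This hypothetical helper `sd_ratio` is not from the paper; it assumes a posterior sample of $\alpha$ and the design matrix $X$. Note that `np.cov` uses divisor $B-1$ rather than $B$, which is immaterial for large $B$.

```python
import numpy as np

def sd_ratio(x_j, alpha_sample, X):
    """Freq/Bayes sd ratio for theta_j = x_j' alpha (illustrative sketch)."""
    Sigma = np.cov(alpha_sample, rowvar=False)   # empirical covariance of {alpha_i}
    G = X.T @ X
    sd_bayes = np.sqrt(x_j @ Sigma @ x_j)                # posterior sd of x_j' alpha
    sd_freq = np.sqrt(x_j @ Sigma @ G @ Sigma @ x_j)     # delta-method frequentist sd
    return sd_freq / sd_bayes
```

The quoted eigenvalue range of $(\hat\Sigma^{1/2} G \hat\Sigma^{1/2})^{1/2}$ bounds this ratio over all choices of $x_j$.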
7 [Figure: histogram of the ratio of frequentist to Bayesian standard errors for the 442 diabetes cases.]
8 Posterior CDF of $\theta_{125}$

Apply the accuracy formula to $s_i = I\{ t_i < c \}$:

$$E\{ s \mid \text{data} \} = \Pr\{ \theta_{125} \le c \mid \text{data} \} = \widehat{\mathrm{cdf}}(c)$$

$\widehat{\mathrm{cdf}}(0.3)$: Bayes estimate $\pm$ frequentist sd. Upper 95% credible limit $= 0.34 \pm 0.07$.
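The indicator trick reuses the same machinery as the point estimate. This hypothetical sketch (`cdf_with_sd`, inputs as on the implementation slide) returns the posterior cdf at $c$ together with its frequentist standard deviation.

```python
import numpy as np

def cdf_with_sd(c, t, alpha, V_x):
    """Posterior cdf estimate at c, with its frequentist delta-method sd."""
    s = (t < c).astype(float)            # indicator sample s_i = I{t_i < c}
    cdf_hat = s.mean()                   # Bayes posterior Pr{theta < c | data}
    cov_hat = (alpha - alpha.mean(axis=0)).T @ (s - s.mean()) / len(s)
    sd = float(np.sqrt(cov_hat @ V_x @ cov_hat))
    return cdf_hat, sd
```

Any posterior functional of the form $E\{h(t(\mu)) \mid x\}$ can be handled the same way, by substituting $h(t_i)$ for $s_i$.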
9 [Figure: MCMC posterior cdf for patient 125, plus one frequentist standard error; the point marks the upper 95% credible limit.]
10 Exponential Families

$$f_\alpha(\hat\beta) = e^{\alpha^\top \hat\beta - \psi(\alpha)} f_0(\hat\beta)$$

with natural parameter $\alpha$ and sufficient statistic $\hat\beta$. (Now $\alpha = \mu$ and $\hat\beta = x$.) Here $\alpha_x(\mu) = \alpha$, so

$$\widehat{\mathrm{sd}}(\hat\theta) = \bigl[ \mathrm{cov}(t, \alpha \mid \hat\beta)^\top V_{\hat\alpha}\, \mathrm{cov}(t, \alpha \mid \hat\beta) \bigr]^{1/2}, \qquad V_{\hat\alpha} = \ddot\psi(\hat\alpha) = \mathrm{cov}_{\alpha = \hat\alpha}\{ \hat\beta \}$$
11 MCMC sample $\{\alpha_1, \alpha_2, \dots, \alpha_B\}$ gives $\hat g(\alpha \mid \beta = \hat\beta)$: probability $1/B$ on each $\alpha_i$.

Reweighting to $\hat g(\alpha \mid \beta = b)$: probability $w_i(b) = c\, e^{(b - \hat\beta)^\top \alpha_i}$ on $\alpha_i$ (an exponential family in $b$).

Obtain bootstrap confidence intervals for $t(\alpha)$ without further simulation (BCa, ABC).
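A minimal sketch of the reweighting step, under the assumption that the draws `alpha_sample` and the sufficient-statistic vectors `beta_hat` and `b` are available; the normalizing constant $c$ is absorbed by normalizing the weights, and `reweight` is a hypothetical name.

```python
import numpy as np

def reweight(alpha_sample, beta_hat, b):
    """Weights w_i(b) proportional to exp((b - beta_hat)' alpha_i) on the draws."""
    alpha_sample = np.asarray(alpha_sample)
    shift = np.asarray(b) - np.asarray(beta_hat)
    logw = alpha_sample @ shift
    logw -= logw.max()                   # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()                   # normalized weights on each alpha_i
```

Moving $b$ across the bootstrap replications of $\hat\beta$ reweights the single MCMC sample rather than rerunning it, which is what makes BCa/ABC intervals available without further simulation.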
12 [Figure: frequentist 95% ABC confidence limits for patient 125, shown on the posterior cdf $\Pr\{\mu_{125} < c \mid \text{data}\}$ against $c$.]
13 Bootstrap Estimates of a Posterior Distribution

$\mathcal{F} = \{ f_\mu(x),\ \mu \in \Omega \}$.

MCMC sample $\{\mu_1, \mu_2, \dots, \mu_B\}$: approximate $g(\mu \mid x)$ with uniform weight $1/B$ on each $\mu_i$.

Bootstrap: from the MLE $\hat\mu$, bootstrap-sample $\hat\mu_i \sim f_{\hat\mu}$, $i = 1, 2, \dots, B$; approximate $g(\mu \mid x)$ with weight $p_i$ on $\hat\mu_i$, where $p_i = \pi(\hat\mu_i) R_i$ and $R_i$ is a conversion factor (easy in exponential families).
14 Simulation Accuracy

How big should $B$ be? With bootstrap sampling, let $q_i = t_i p_i$ and $d_i = q_i/\bar q - p_i/\bar p$, where $t_i = t(\hat\mu_i)$. Then $\hat\theta = \sum_{i=1}^B p_i t_i \big/ \sum_{i=1}^B p_i$ has simulation coefficient of variation

$$\mathrm{cv} = \Bigl[ \sum_{i=1}^B d_i^2 \Bigr]^{1/2} \Big/ B$$

(about 0.01 for $\hat\theta_{125}$ with $B = 10{,}000$).
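The coefficient-of-variation check is a one-liner; this hypothetical `sim_cv` assumes vectors of values $t_i$ and weights $p_i$ as defined above.

```python
import numpy as np

def sim_cv(t, p):
    """Internal cv estimate for theta-hat = sum(p_i t_i) / sum(p_i)."""
    q = t * p
    d = q / q.mean() - p / p.mean()      # d_i = q_i/q-bar - p_i/p-bar
    return float(np.sqrt(np.sum(d ** 2)) / len(t))
```

One can increase $B$ until the reported cv drops below a target such as 0.01; with equal weights and constant $t_i$ the cv is exactly zero, as the formula requires.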
15 How Uninformative is a Prior?

A measure of the prior's effect on the estimate of $\theta = t(\mu)$:

$$\delta = \frac{\text{Bayes} - \text{MLE}}{\widehat{\mathrm{sd}}_{\mathrm{MLE}}} = \frac{E\{\theta \mid x\} - t(\hat\mu)}{\widehat{\mathrm{sd}}_{\mathrm{MLE}}}$$

For patient 125, $\delta_{125} = 0.90$.
16 [Figure: histogram of $(\text{Bayes} - \text{MLE})/\widehat{\mathrm{sd}}$ across the patient estimates, showing the effect of the Park/Casella prior; patient 125 falls at 0.90.]
17 [Figure: Bayes estimates versus MLEs for the 442 diabetes patients; the linear regression fit is Bayes $= 0.968 \times$ MLE, with patient 125 marked.]
18 References

Park, T. and Casella, G. (2008). The Bayesian lasso. JASA.
Singh, D. et al. (2002). Gene expression correlates of clinical prostate cancer behavior. Cancer Cell.
Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Ann. Statist.
Efron, B. (2012). Bayesian inference and the parametric bootstrap. Ann. Appl. Statist.
Efron, B. (2010). Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. IMS Monographs, Cambridge University Press.
More information