Introductory Bayesian Analysis


1 Introductory Bayesian Analysis
Jaya M. Satagopan
Memorial Sloan-Kettering Cancer Center
Weill Cornell Medical College (Affiliate)
March 14, 2013

2 Bayesian Analysis
- Fit probability models to observed data
- Unknown parameters: summarize using a probability distribution
- For example, P(mutation increases risk by 10% | data) -- the posterior distribution
- Prior information: external data, or elicited from available data

3 This lecture
- Bayes theorem: prior from an external source
- Loss function, expected loss
- Bayesian analysis with a data-adaptive prior: minimize squared error loss
- Bayesian penalized estimation: priors to minimize other loss functions
- Software packages: WinBUGS, SAS

4 Part 1. Bayes Theorem

5 Bayes Theorem
- Random variables: Y and θ
- Prior distributions: P(Y), P(θ); conditional distributions: P(Y | θ) and P(θ | Y)
- Know P(θ | Y), P(Y), and P(θ); need P(Y | θ) [posterior distribution]

  P(Y | θ) = P(θ | Y) P(Y) / P(θ) = P(θ | Y) P(Y) / ∫ P(θ | Y) P(Y) dY

6 Example
Say 5% of the population has a certain disease. When a person is sick, a particular test is used to determine whether (s)he has this disease. The test gives a positive result 2% of the time when the person does not actually have the disease, and 95% of the time when the person does have the disease. Now, one person gets a positive test result. What is the probability that this person has the disease?

7 Example continued
- Y = 1 (disease) or 0 (no disease); θ = 1 (positive test) or 0 (negative test)
- KNOWN: P(Y = 1) = 0.05, P(Y = 0) = 1 − P(Y = 1) = 0.95, P(θ = 1 | Y = 0) = 0.02, P(θ = 1 | Y = 1) = 0.95
- NEED: P(Y = 1 | θ = 1)

  P(Y = 1 | θ = 1) = P(θ = 1 | Y = 1) P(Y = 1) / [P(θ = 1 | Y = 1) P(Y = 1) + P(θ = 1 | Y = 0) P(Y = 0)]
                   = (0.95 × 0.05) / (0.95 × 0.05 + 0.02 × 0.95) ≈ 0.71
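
The calculation on this slide is a one-line application of Bayes theorem; a minimal Python sketch, using only the numbers given above:

```python
# Posterior probability of disease given a positive test (slide 7 numbers).
prior = 0.05        # P(Y = 1), disease prevalence
sens = 0.95         # P(theta = 1 | Y = 1), true positive rate
false_pos = 0.02    # P(theta = 1 | Y = 0), false positive rate

evidence = sens * prior + false_pos * (1 - prior)   # P(theta = 1)
posterior = sens * prior / evidence                 # P(Y = 1 | theta = 1)
print(round(posterior, 3))                          # about 0.714
```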

8 Example: Breast Cancer Risk
- Case-control sampling: cases (Y = 1) have breast cancer, controls (Y = 0) do not
- Record BRCA1/2 mutation status: mutation present (θ = 1) or absent (θ = 0)
- Observe P(θ = 1 | Y = 1) and P(θ = 1 | Y = 0): mutation frequency in cases and controls
- Need P(Y = 1 | θ = 1): disease risk among mutation carriers
- Satagopan et al (2001), CEBP, 10:

9 Breast cancer risk (continued)
Use Bayes theorem:

  P(Y = 1 | θ = 1) = P(θ = 1 | Y = 1) P(Y = 1) / [P(θ = 1 | Y = 1) P(Y = 1) + P(θ = 1 | Y = 0) P(Y = 0)]

- P(θ = 1 | Y = 1) = mutation frequency in cases
- P(θ = 1 | Y = 0) = mutation frequency in controls
- P(Y = 1) = 1 − P(Y = 0) = prior information
- Get the prior from an external source (SEER Registry)

10 Breast cancer risk (continued)
Data for one age group:

  BRCA mutation    Case    Control
  Present            25         23
  Absent            179       1090

- P(θ = 1 | Y = 1) = 25/204
- P(θ = 1 | Y = 0) = 23/1113
- P(Y = 1) = disease risk in the age group (SEER registry)
- Result: P(Y = 1 | θ = 1) = 7.6%
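
A small sketch of the same calculation with the case and control mutation frequencies from this slide. The slide's 7.6% uses the age-specific SEER risk as the prior; that value is not shown here, so the prior below is a placeholder assumption.

```python
# Disease risk among mutation carriers from case-control mutation frequencies
# plus an external prior (slide 10). The SEER age-specific risk is not given
# on the slide, so `prior_risk` is a hypothetical placeholder.
p_mut_cases = 25 / 204      # P(theta = 1 | Y = 1)
p_mut_controls = 23 / 1113  # P(theta = 1 | Y = 0)
prior_risk = 0.002          # P(Y = 1): assumed value, replace with the SEER estimate

num = p_mut_cases * prior_risk
den = num + p_mut_controls * (1 - prior_risk)
print(num / den)            # P(Y = 1 | theta = 1) under the assumed prior
```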

11 Part 2. Loss function, Bayes estimate

12 Loss Function and Expected Loss
- Parameter θ; decision (estimate) d(Y) based on data Y
- Loss incurred: L(d(Y), θ) ≥ 0
- Squared error loss: L(d(Y), θ) = [d(Y) − θ]²
- Absolute deviation: L(d(Y), θ) = |d(Y) − θ|
- Expected loss = risk = R(d, θ) = E{L(d(Y), θ)}

  R(d, θ) = ∫ L(d(Y), θ) f(Y | θ) dY

13 Bayes Estimation
- There is no single d that has small R(d, θ) for all θ: no uniformly best d
- Bayes approach: get the d that minimizes the average risk W(d), also known as the Bayes risk

  W(d) = ∫∫ L(d(Y), θ) f(Y | θ) dY dG(θ)

- Bayes estimate d_B of d: W(d_B) ≤ W(d) for all d
- For squared error loss, d_B is the posterior mean of θ: d_B(Y) = E(θ | Y)
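
To make the loss/estimate connection concrete, here is a small Monte Carlo sketch (not from the slides) showing that, over draws from a posterior, the posterior mean minimizes expected squared error while the posterior median minimizes expected absolute deviation; the Gamma "posterior" is just an arbitrary skewed example.

```python
# Posterior mean vs posterior median under two loss functions.
# The Gamma(2, 1) "posterior" is an arbitrary skewed example.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # draws from the posterior

grid = np.linspace(0.5, 3.5, 301)                       # candidate estimates d
sq_risk = [np.mean((d - theta) ** 2) for d in grid]     # expected squared error
abs_risk = [np.mean(np.abs(d - theta)) for d in grid]   # expected absolute deviation

print(grid[np.argmin(sq_risk)], theta.mean())           # close to the posterior mean (2.0)
print(grid[np.argmin(abs_risk)], np.median(theta))      # close to the posterior median (~1.68)
```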

14 Part 3. Bayesian analysis with data-adaptive prior parameters: GxE example

15 Bayesian analysis of GxE interactions
- Case-control study: Y = 1 (case), Y = 0 (control)
- Binary risk factors (say): genetic factor G = 0, 1; environmental exposure E = 0, 1
- Mukherjee and Chatterjee (2008), Biometrics, 64:
- Is there a significant interaction between G and E?
- Estimate the interaction odds ratio and its standard error
- Test: is this odds ratio = 1? Is log(odds ratio) = 0?

16 Interaction odds ratio (OR_GE)

  Y = 0 (control data):         E = 1    E = 0
                        G = 1   N_011    N_010
                        G = 0   N_001    N_000

  Y = 1 (case data):            E = 1    E = 0
                        G = 1   N_111    N_110
                        G = 0   N_101    N_100

- OR_0 = odds of E associated with G among controls: OR_0 = (N_011 N_000) / (N_001 N_010)
- OR_1 = odds of E associated with G among cases: OR_1 = (N_111 N_100) / (N_101 N_110)
- OR_GE = OR_1 / OR_0
- β̂_GE = log(OR_GE) = log(OR_1) − log(OR_0) = β̂_case − β̂_control
- Var(β̂_GE) = Var(β̂_case) + Var(β̂_control)
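
A small sketch of the two-by-two arithmetic above, using made-up counts; the log-odds-ratio variances use the standard Woolf 1/N approximation, which is what the Var(β̂_case) + Var(β̂_control) line amounts to.

```python
# Interaction odds ratio from case and control 2x2 tables (slide 16),
# with hypothetical counts. log-OR variances use Woolf's 1/N approximation.
import numpy as np

controls = {"N011": 40, "N010": 60, "N001": 80, "N000": 120}   # hypothetical counts
cases = {"N111": 70, "N110": 50, "N101": 55, "N100": 75}       # hypothetical counts

OR0 = (controls["N011"] * controls["N000"]) / (controls["N001"] * controls["N010"])
OR1 = (cases["N111"] * cases["N100"]) / (cases["N101"] * cases["N110"])
OR_GE = OR1 / OR0
beta_GE = np.log(OR_GE)                                  # beta_case - beta_control

var_control = sum(1.0 / n for n in controls.values())    # Var(beta_control)
var_case = sum(1.0 / n for n in cases.values())          # Var(beta_case)
se_GE = np.sqrt(var_case + var_control)                  # case-control standard error

print(OR_GE, beta_GE, se_GE)
```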

17 Gene-Environment independence in controls

  Y = 0 (control data):         E = 1    E = 0
                        G = 1   N_011    N_010
                        G = 0   N_001    N_000

- Under independence, OR_0 = (N_011 N_000) / (N_001 N_010) = 1, so OR_GE = OR_1
- Then Var(β̂_GE) = Var(β̂_case) < Var(β̂_case) + Var(β̂_control)
- Independence of G and E in controls is unknown. So test β_control = 0:
  if the hypothesis is rejected, estimate the interaction as β̂_GE = β̂_case − β̂_control; otherwise estimate it as β̂_GE = β̂_case
- Then test whether β_GE = 0 for interaction
- Not a good idea!!

18 Weighted estimate
- Estimate based on a preliminary test T for β_0 = 0:

  β̂_GE,PT = I(T < c) β̂_case + I(T > c) β̂_GE

- A weighted average of the case-only and case-control estimates, where the weights are indicator functions
- Can do better without requiring a preliminary test!!

  β̂_GE,w = w β̂_case + (1 − w) β̂_GE

- Choose w to minimize squared error loss
- Bayes risk: E_data { E_{β_GE | data} ( β̂_GE,w − β_GE )² }

19 Bayes estimate
- w is a function of Var(β̂_GE) and t² = Var(β̂_GE − β̂_case)
- Shrinkage estimation: β̂_GE,B = β̂_case + e
- e is the error due to assuming G and E independence in controls
- Alternative explanation: an estimate of e is ê = β̂_GE − β̂_case, with ê ~ N(e, t²)
- Prior for e: N(0, σ²). The Bayes estimate of e is

  E(e | ê) = ê σ² / (σ² + t²)

- M & C (2008) suggest estimating σ² as Var(β̂_GE)
- Empirical Bayes estimate: β̂_GE,B = β̂_case + E(e | ê)
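
A minimal sketch of the shrinkage combination on this slide: it takes the case-only and case-control estimates together with plug-in values of σ² and t² (both of which the slide suggests estimating from the data) and returns the empirical Bayes estimate. All numbers are hypothetical.

```python
# Empirical Bayes shrinkage between case-only and case-control estimates
# (slide 19). The inputs below are hypothetical.
def eb_interaction(beta_case, beta_cc, sigma2, t2):
    """beta_cc is the case-control estimate beta_GE; sigma2 and t2 are
    plug-in values for the prior and sampling variances of e."""
    e_hat = beta_cc - beta_case          # estimate of the independence error e
    shrink = sigma2 / (sigma2 + t2)      # E(e | e_hat) = e_hat * sigma2 / (sigma2 + t2)
    return beta_case + shrink * e_hat    # weighted average of the two estimates

print(eb_interaction(beta_case=0.40, beta_cc=0.25, sigma2=0.04, t2=0.09))
```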

20 Advanced Colorectal Adenoma Example
- 610 cases and 605 controls
- G = NAT2 acetylation (yes, no); E = smoking (never, past, current)
- Note: lack of G and E independence in controls, so the case-control estimate is needed
- EB estimate and credible interval: is 0 in the interval?

21 Summary
- Uncertainty about an underlying assumption; two possible estimates
- Bayes estimate: a weighted average of the two (shrinkage estimation)
- Data-adaptive estimation of prior parameters
- Minimize squared error loss

22 Part 4. Bayesian penalized estimation Prior to minimize various loss functions

23 Part 4a. Bayesian Ridge Regression Minimize Squared Error Loss Normal Prior

24 GWAS data (Chen and Witte 2007, AJHG, 81: )
- 57 unrelated individuals of European ancestry (CEU), HapMap project
- Outcome = expression of the CHI3L2 gene (Cheung et al 2005, Nature, 437: )
- Risk factors = 39,186 SNPs from chromosome 1 (Illumina 550K array from HapMap)
- SNP rs deemed causal for CHI3L2 expression
- Goal: how well are the neighboring SNPs ranked?

25 Application to GWAS
- Y = continuous (or binary) outcome, length N (subjects)
- X_m = m-th SNP, m = 1, 2, ..., M (= 500K, say)
- For each SNP, fit the model Y = µ_m + X_m β_m + error, where β_m is the effect of SNP m
- MLE, standard error, p-value
- Find the significant SNPs: the SNPs having the 500 smallest p-values
- Chen and Witte, AJHG, 81:
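
A minimal sketch of the one-SNP-at-a-time scan this slide describes, on simulated data; scipy's linregress returns the per-SNP slope, standard error, and p-value.

```python
# One-SNP-at-a-time regression scan (slide 25), on simulated data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n, m = 57, 1000                              # subjects, SNPs (small stand-in for 39,186)
X = rng.binomial(2, 0.3, size=(n, m))        # genotype counts 0/1/2
y = 0.8 * X[:, 10] + rng.standard_normal(n)  # SNP 10 is truly associated

pvals = np.array([linregress(X[:, j], y).pvalue for j in range(m)])
top = np.argsort(pvals)[:20]                 # SNPs with the smallest p-values
print(top[:5], pvals[top[:5]])
```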

26 Hierarchical modeling
- Incorporate external information about the SNPs
- Bioinformatics data (Z matrix, user-specified): conservation, various functional categories
- Second-stage model: β = Zπ + U, where β has length G, Z is G × K, π is K × 1, and U ~ N(0, t²T) with T specified
- Improved estimation via the second-stage model
- Prior for β is N(Zπ, t²T)
- Need (β − Zπ)ᵀ T⁻¹ (β − Zπ) / t² to be small: penalization

27 Posterior inference via MCMC
- Markov chain Monte Carlo approach to get the βs
- Specify priors for β, π, σ²: π ~ N(0, *), 1/σ² ~ Gamma(**, $$)
- Specify a prior for t², or fix t²
- Generate samples from the full conditional distributions: β | Y, π, σ², t²; π | Y, β, σ², t²; σ² | Y, β, π, t²; etc.

  Iteration    β parameters
  1            β_1  β_2  ...  β_G
  2            β_1  β_2  ...  β_G
  ...          ...

  Posterior summaries: Avg(β_1), Stdev(β_1); Avg(β_2), Stdev(β_2); ...; Avg(β_G), Stdev(β_G)

28 Chen and Witte GWAS Example Plot p-values of top 500 SNPs

29 So, what is going on?
- Y = µ_m + X_m β_m + error; MLE of the βs: β̂ = (β̂_1, β̂_2, ..., β̂_G), with variance V̂
- Second stage: β = Zπ + U, U ~ N(0, t²T)
- MLE of the πs: π̂ = (Zᵀ S⁻¹ Z)⁻¹ Zᵀ S⁻¹ β̂, where S = V̂ + t²T
- Bayes estimate of the βs: β̃ = (I − W) β̂ + W Z π̂, where W = V̂ S⁻¹
- Large t²: W ≈ 0 and β̃ ≈ β̂
- Small t²: W ≈ I and β̃ ≈ Z π̂
- Shrinkage estimation
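
A minimal numerical sketch of the closed-form shrinkage estimate above, with small synthetic inputs; V̂ and T are taken to be diagonal here, as is typical in this setting, and the annotation matrix Z is made up.

```python
# Closed-form hierarchical (ridge-type) shrinkage of per-SNP estimates
# toward a second-stage regression on annotation Z (slide 29).
import numpy as np

rng = np.random.default_rng(0)
G, K = 8, 2
beta_hat = rng.normal(0.0, 1.0, size=G)                  # first-stage MLEs
V = np.diag(np.full(G, 0.25))                            # their estimated variances
Z = np.column_stack([np.ones(G), np.arange(G) % 2])      # hypothetical annotation matrix
T = np.eye(G)                                            # specified second-stage covariance
t2 = 0.5                                                 # second-stage variance scale

S = V + t2 * T
S_inv = np.linalg.inv(S)
pi_hat = np.linalg.solve(Z.T @ S_inv @ Z, Z.T @ S_inv @ beta_hat)   # GLS fit for pi
W = V @ S_inv                                            # shrinkage weight matrix
beta_tilde = (np.eye(G) - W) @ beta_hat + W @ (Z @ pi_hat)

print(np.round(beta_hat, 2))
print(np.round(beta_tilde, 2))                           # pulled toward Z @ pi_hat
```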

30 Some Remarks
- Sensitivity to the choice of prior parameters
- Instead of a p-value, report P(β_m > 0), m = 1, ..., G
- The Bayes estimate β̃ should ideally not be too sensitive to the choice of Z
- The estimated value of π will depend on Z, but ideally the Bayes estimate should not

31 Part 4b. Bayesian LASSO Minimize Absolute deviation Laplace prior

32 Diabetes data (Efron et al 2004, The Annals of Statistics, 32: )

33 Application to the diabetes study
- Y = continuous (or other type of) outcome (N × 1)
- X = N × p matrix of risk factors
- β = p × 1 vector of effects (parameters of interest)
- Find the significant risk factors: Y = Xβ + error
- Park and Casella (2008), J Am Stat Assoc, 103:
- Many p, potentially correlated risk factors, etc.
- Estimate β to minimize a criterion with the L1 penalty |β − β_0| for some β_0 (LASSO)
- β_0 = 0, or β_0 = Zπ with Z given and π to be estimated

34 Bayesian LASSO
- The L1 penalty |β − β_0| enters the prior through exp{ −|β − β_0| }, which takes the form of a Laplace distribution
- Y = Xβ + error, error ~ N(0, σ²I)
- Laplace prior for β_j with mean β_0j:

  f(β_j) = (λ / 2σ) exp{ −λ |β_j − β_0j| / σ }
         = ∫_0^∞ (2πσ²t)^(−1/2) exp{ −(β_j − β_0j)² / (2σ²t) } · (λ²/2) exp{ −λ²t / 2 } dt

- A mixture of a normal prior for β and an exponential prior for its variance
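
The scale-mixture identity on this slide can be checked numerically; a small sketch using the parametrization written above (a normal with variance σ²t mixed over an exponential density for t):

```python
# Check numerically that the normal-exponential scale mixture integrates
# to the Laplace density (lambda / 2 sigma) exp(-lambda |b| / sigma).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

lam, sigma, b = 1.5, 2.0, 0.7     # arbitrary values

def integrand(t):
    normal_part = norm.pdf(b, loc=0.0, scale=np.sqrt(sigma**2 * t))
    exp_part = (lam**2 / 2) * np.exp(-lam**2 * t / 2)
    return normal_part * exp_part

mixture, _ = quad(integrand, 0, np.inf)
laplace = lam / (2 * sigma) * np.exp(-lam * np.abs(b) / sigma)
print(mixture, laplace)           # the two values agree
```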

35 Bayesian LASSO setup
- Y | β, σ² ~ N(Xβ, σ²I)
- β_j | σ², t_j² ~ N(0, σ² t_j²), j = 1, ..., p, independent
- t_j² ~ Exponential(λ²), j = 1, ..., p, independent
- σ² ~ Inverse Gamma(a_1, a_2)
- The t_j² are latent variables to facilitate the MCMC steps
- a_1 and a_2 are specified (check for sensitivity)
- λ²: empirical estimation from the data, or specify a prior; generally a Gamma(c_1, c_2) prior

36 Parameter Estimation
- Get the full conditionals, apply MCMC
- Bayes estimate of β: the posterior median
- Original LASSO: quadratic programming methods
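
To make "get the full conditionals, apply MCMC" concrete, here is a compact Gibbs-sampler sketch for the hierarchy on slide 35, following the standard Park and Casella full conditionals, with β_0 = 0 and λ held fixed (the Gamma(c_1, c_2) prior for λ² mentioned above would add one more conjugate update); the data are simulated.

```python
# A minimal Gibbs-sampler sketch for the Bayesian LASSO hierarchy (slide 35).
import numpy as np

def bayes_lasso_gibbs(X, y, lam=1.0, a1=1.0, a2=1.0, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sig2, tau2 = np.zeros(p), 1.0, np.ones(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sig2 * A^{-1}),  A = X'X + diag(1/tau2)
        A = X.T @ X + np.diag(1.0 / tau2)
        L = np.linalg.cholesky(A)
        mean = np.linalg.solve(A, X.T @ y)
        beta = mean + np.sqrt(sig2) * np.linalg.solve(L.T, rng.standard_normal(p))
        # sig2 | rest ~ Inverse-Gamma(a1 + n/2 + p/2, a2 + RSS/2 + beta' D^{-1} beta / 2)
        resid = y - X @ beta
        shape = a1 + 0.5 * (n + p)
        scale = a2 + 0.5 * (resid @ resid + np.sum(beta**2 / tau2))
        sig2 = 1.0 / rng.gamma(shape, 1.0 / scale)
        # 1/tau2_j | rest ~ Inverse-Gaussian(sqrt(lam^2 sig2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sig2 / np.maximum(beta**2, 1e-12))
        tau2 = 1.0 / rng.wald(mu, lam**2)
        draws[it] = beta
    return draws

# Posterior median of beta: the Bayes estimate under absolute-error loss.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
true_beta = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ true_beta + rng.standard_normal(100)
draws = bayes_lasso_gibbs(X, y, lam=1.0)
print(np.median(draws[500:], axis=0))   # discard burn-in, report posterior medians
```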

37 Part 4c. Other Bayesian Penalization Methods Brief survey

38 Bridge Regression
- Estimate β by minimizing a criterion with penalty ∑_{j=1}^p |β_j − Z_j π|^γ
- γ is pre-specified
- γ = 1 is (Bayesian) LASSO; γ = 2 is (Bayesian) ridge
- Fu 1998, JCGS, 7:
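
A tiny sketch of the bridge penalty as written on this slide, with hypothetical Z and π, just to show how γ interpolates between the LASSO and ridge penalties:

```python
# Bridge penalty sum_j |beta_j - Z_j' pi|^gamma (slide 38).
import numpy as np

def bridge_penalty(beta, Z, pi, gamma):
    """gamma = 1 gives the LASSO penalty, gamma = 2 the ridge penalty."""
    return np.sum(np.abs(beta - Z @ pi) ** gamma)

beta = np.array([0.5, -1.2, 0.0])
Z = np.eye(3)                            # hypothetical second-stage design
pi = np.zeros(3)                         # hypothetical second-stage coefficients
print(bridge_penalty(beta, Z, pi, 1.0))  # LASSO-type penalty
print(bridge_penalty(beta, Z, pi, 2.0))  # ridge-type penalty
```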

39 Bayesian Elasticnet
- Estimate β by minimizing a criterion with penalty λ ∑_{j=1}^p |β_j − Z_j π| + (1 − λ) ∑_{j=1}^p (β_j − Z_j π)²
- A compromise between the LASSO and ridge penalties
- Normal prior constrained within certain bounds
- Hans (2011), J Am Stat Assoc, 106:

40 Software Packages
- WinBUGS: specify a model for the outcome, specify priors, output estimated values of β and other parameters; uses MCMC methods; diagnostic plots
  contents.shtml
- SAS: Proc MCMC
  HTML/default/viewer.htm#mcmc_toc.htm

41 References: Textbooks
JS Maritz and T Lwin (1989). Empirical Bayes Methods. Chapman and Hall.
JM Bernardo and AFM Smith (1993). Bayesian Theory. Wiley.
BP Carlin and TA Louis (1996). Bayes and Empirical Bayes Methods for Data Analysis. Chapman and Hall.
A Gelman, JB Carlin, HS Stern, DB Rubin (1996). Bayesian Data Analysis. Chapman and Hall.
WR Gilks, S Richardson, DJ Spiegelhalter (1996). Markov Chain Monte Carlo in Practice. Chapman and Hall.
T Hastie, R Tibshirani, J Friedman (2001). The Elements of Statistical Learning. Springer.

42 References: Some papers
R Tibshirani (1996). Regression shrinkage and selection via the Lasso. JRSS Series B, 58:
J Fu (1998). Penalized regression: The Bridge versus the Lasso. JCGS, 7:
MA Newton and Y Lee (2000). Inferring the location and effect of tumor suppressor genes by instability-selection modeling of allelic-loss data. Biometrics, 56:
JM Satagopan, K Offit, W Foulkes, ME Robson, S Wacholder, CM Eng, SE Karp, CB Begg (2001). The lifetime risks of breast cancer in Ashkenazi Jewish carriers of BRCA1 and BRCA2 mutations. Cancer Epidemiology, Biomarkers and Prevention, 10:

43 References: Some papers
CM Kendziorski, MA Newton, H Lan, MN Gould (2003). On parametric empirical Bayes methods for comparing multiple groups using replicated gene expression profiles. Statistics in Medicine, 22:
D Conti, V Cortessis, J Molitor, DC Thomas (2003). Bayesian modeling of complex metabolic pathways. Human Heredity, 56:
B Efron, T Hastie, I Johnstone, R Tibshirani (2004). Least angle regression. The Annals of Statistics, 32:
B Mukherjee, N Chatterjee (2008). Exploiting gene-environment independence for analysis of case-control studies: An empirical Bayes-type shrinkage estimator to trade-off between bias and efficiency. Biometrics, 64:

44 References: Some papers
GK Chen, JS Witte (2007). Enriching the analysis of genome-wide association studies with hierarchical modeling. AJHG, 81:
T Park, G Casella (2008). The Bayesian Lasso. JASA, 103:
M Park, T Hastie (2008). Penalized logistic regression for detecting gene interactions. Biostatistics, 9:
C Hans (2011). Elastic net regression modeling with the orthant normal prior. JASA, 106:
Many more: Bioinformatics, Genetic Epidemiology, JASA, JRSS Series B and C, PLoS One, ...
