Some Curiosities Arising in Objective Bayesian Analysis


1 Some Curiosities Arising in Objective Bayesian Analysis. Jim Berger, Duke University and the Statistical and Applied Mathematical Sciences Institute. Yale University, May 15.

2 Three vignettes related to John's work:
- Some puzzling things concerning invariant priors
- Some puzzling things concerning multiplicity
- What is the effective sample size?

3 I. Objective Priors: Why Can't We Have It All? Figure 1: John at Princeton.

4 An Example: Inference for the Correlation Coefficient
The bivariate normal distribution of $(x_1, x_2)$ has mean $(\mu_1, \mu_2)$ and covariance matrix
$$\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix},$$
where $\rho$ is the correlation between $x_1$ and $x_2$. For a sample $(x_{11}, x_{21}), (x_{12}, x_{22}), \ldots, (x_{1n}, x_{2n})$, the sufficient statistics are $\bar{x} = (\bar{x}_1, \bar{x}_2)$, where $\bar{x}_i = n^{-1}\sum_{j=1}^n x_{ij}$, and
$$S = \sum_{j=1}^n (x_j - \bar{x})(x_j - \bar{x})^t = \begin{pmatrix} s_{11} & r\sqrt{s_{11}s_{22}} \\ r\sqrt{s_{11}s_{22}} & s_{22} \end{pmatrix},$$
where $s_{ij} = \sum_{k=1}^n (x_{ik} - \bar{x}_i)(x_{jk} - \bar{x}_j)$ and $r = s_{12}/\sqrt{s_{11}s_{22}}$.
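As a concrete illustration (a minimal sketch of my own, not from the talk), the sufficient statistics and $r$ can be computed directly:

```python
import numpy as np

def bivariate_sufficient_stats(x):
    """x: (n, 2) array of bivariate observations.
    Returns the sample mean, the scatter matrix S, and the sample correlation r."""
    xbar = x.mean(axis=0)
    centered = x - xbar
    S = centered.T @ centered           # S = sum_j (x_j - xbar)(x_j - xbar)^t
    r = S[0, 1] / np.sqrt(S[0, 0] * S[1, 1])
    return xbar, S, r

# Example: simulate data with true correlation rho = 0.5
rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], Sigma, size=200)
xbar, S, r = bivariate_sufficient_stats(x)
print(xbar, r)
```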

5 Three interesting priors for inference concerning $\rho$:
- The reference prior (Lindley and Bayarri): $\pi_R(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) = \dfrac{1}{\sigma_1\sigma_2(1-\rho^2)}$.
- The right-Haar prior $\pi_{RH_1}(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) = \dfrac{1}{\sigma_2^2(1-\rho^2)}$, which is right-Haar w.r.t. the lower-triangular matrix group action $(a_1, a_2, T_l) \circ (x_1, x_2) = T_l(x_1, x_2) + (a_1, a_2)$.
- The right-Haar prior $\pi_{RH_2}(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) = \dfrac{1}{\sigma_1^2(1-\rho^2)}$, which is right-Haar w.r.t. the upper-triangular matrix, $T_u$, group action.

6 Credible intervals for $\rho$, under either right-Haar prior, can be approximated by
- drawing independent $Z \sim N(0,1)$, $\chi^2_{n-1}$ and $\chi^2_{n-2}$;
- setting $\rho^* = \dfrac{Y}{\sqrt{1+Y^2}}$, where $Y = \dfrac{Z}{\sqrt{\chi^2_{n-1}}} + \sqrt{\dfrac{\chi^2_{n-2}}{\chi^2_{n-1}}}\,\dfrac{r}{\sqrt{1-r^2}}$;
- repeating this process 10,000 times;
- using the $\alpha/2$ upper and lower percentiles of these generated $\rho^*$ to form the desired confidence limits.
Credible intervals for $\rho$, under the reference prior, can be found by replacing the first two steps above by:
- Generate $\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix} \sim \text{Inverse Wishart}(S^{-1}, n-1)$.
- Generate $u \sim \text{Unif}(0,1)$. If $u \le \sqrt{1-\rho^2}$, record $\rho$; otherwise repeat.
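The right-Haar construction above is straightforward to implement; here is a minimal Monte Carlo sketch (my own code, assuming the reconstruction of $Y$ given above; the function name is mine):

```python
import numpy as np

def rho_credible_interval(r, n, alpha=0.05, n_draws=10_000, seed=0):
    """Approximate equal-tailed credible interval for rho under a
    right-Haar prior, given sample correlation r and sample size n."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_draws)
    chi2_n1 = rng.chisquare(n - 1, n_draws)
    chi2_n2 = rng.chisquare(n - 2, n_draws)
    # Y = Z/sqrt(chi2_{n-1}) + sqrt(chi2_{n-2}/chi2_{n-1}) * r/sqrt(1-r^2)
    y = z / np.sqrt(chi2_n1) + np.sqrt(chi2_n2 / chi2_n1) * r / np.sqrt(1 - r**2)
    rho = y / np.sqrt(1 + y**2)
    return np.quantile(rho, [alpha / 2, 1 - alpha / 2])

print(rho_credible_interval(r=0.6, n=20))
```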

7 Lemma 1.
1. The Bayesian credible set for $\rho$, from either right-Haar prior, has exact frequentist coverage $1-\alpha$.
2. This credible set is the fiducial confidence interval for $\rho$ of Fisher.
So we have it all: a simultaneously objective-Bayesian, fiducial, and exact frequentist confidence set. Or do we?
- In the 1960s, Brillinger showed that, if one starts with the density $f(r \mid \rho)$ of $r$, there is no prior distribution for $\rho$ whose posterior equals the fiducial distribution. Geisser and Cornfield (1963) thus conjectured that fiducial and Bayesian inference could not agree here. (They do.)
- But, since $\pi_{RH}(\rho \mid x)$ can be shown to depend on the data only through $r$, we have a marginalization paradox (Dawid, Stone and Zidek, 1973): $\pi_{RH}(\rho \mid x) = g(\rho, r) \not\propto f(r \mid \rho)\,\pi(\rho)$ for any $\pi(\cdot)$.

8 Can We Trust Bayesian Truisms with Improper Priors?
Two truisms: if considering various priors, either average them (or go hierarchical), or choose the empirical Bayes prior that maximizes the marginal likelihood.
1. Consider the symmetrized right-Haar prior
$$\pi^S(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) = \pi_{RH_1}(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) + \pi_{RH_2}(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) = \frac{1}{\sigma_1^2(1-\rho^2)} + \frac{1}{\sigma_2^2(1-\rho^2)}.$$
2. Any rotation $\Gamma$ of coordinates yields a new right-Haar prior. The empirical Bayes prior, $\pi^{EB}$, is the right-Haar prior for the rotation for which $\tilde{s}_{12} = 0$, where
$$\tilde{S} = \begin{pmatrix} \tilde{s}_{11} & \tilde{s}_{12} \\ \tilde{s}_{12} & \tilde{s}_{22} \end{pmatrix} = \Gamma S \Gamma^t.$$
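The rotation that makes $\tilde{s}_{12} = 0$ is the one that diagonalizes $S$, so it can be obtained from an eigendecomposition; a small numpy sketch of my own (not from the talk):

```python
import numpy as np

def eb_rotation(S):
    """Return an orthogonal Gamma such that Gamma @ S @ Gamma.T is diagonal,
    i.e. the rotated scatter matrix has zero off-diagonal entry."""
    eigvals, eigvecs = np.linalg.eigh(S)   # S is symmetric, so eigh applies
    Gamma = eigvecs.T                      # rows of Gamma are eigenvectors of S
    return Gamma

S = np.array([[2.0, 0.8], [0.8, 1.0]])
Gamma = eb_rotation(S)
print(np.round(Gamma @ S @ Gamma.T, 10))  # off-diagonal entries are 0
```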

9 Table 1 (numerical entries omitted): Estimated frequentist risks $R(\hat\Sigma)$ of various estimates of $\Sigma$, under Stein's loss and when $n = 10$, for $(\sigma_1, \sigma_2, \rho) = (1,1,0)$, $(1,2,0)$, $(1,5,0)$, $(1,50,0)$, $(1,1,.1)$, $(1,1,.5)$, $(1,1,.9)$, $(1,1,-.9)$; the $\hat\Sigma_i$ are the right-Haar estimates, $\hat\Sigma_S$ is the symmetrized-prior estimate, and $\hat\Sigma_{EB}$ is the empirical Bayes estimate.

10 II. Bayesian Multiplicity Issues in Variable Selection. Figure 2: John Hartigan with one of his multiple grandchildren.

11 Problem: Data $X$ arises from a normal linear regression model, with $m$ possible regressors having associated unknown regression coefficients $\beta_i$, $i = 1, \ldots, m$, and unknown variance $\sigma^2$.
Models: Consider selection from among the submodels $M_i$, $i = 1, \ldots, 2^m$, each having only $k_i$ regressors with coefficients $\beta^{(i)}$ (a subset of $(\beta_1, \ldots, \beta_m)$) and resulting density $f_i(x \mid \beta^{(i)}, \sigma^2)$.
Prior density under $M_i$: Zellner-Siow priors $\pi_i(\beta^{(i)}, \sigma^2)$.
Marginal likelihood of $M_i$: $m_i(x) = \int f_i(x \mid \beta^{(i)}, \sigma^2)\, \pi_i(\beta^{(i)}, \sigma^2)\, d\beta^{(i)}\, d\sigma^2$.
Prior probability of $M_i$: $P(M_i)$.
Posterior probability of $M_i$: $P(M_i \mid x) = \dfrac{P(M_i)\, m_i(x)}{\sum_j P(M_j)\, m_j(x)}$.

12 Common Choices of the $P(M_i)$
- Equal prior probabilities: $P(M_i) = 2^{-m}$.
- Bayes exchangeable variable inclusion: Each variable, $\beta_i$, is independently in the model with unknown probability $p$ (called the prior inclusion probability), and $p$ has a $\mathrm{Beta}(p \mid a, b)$ distribution. (We use $a = b = 1$, the uniform distribution, as did Jeffreys (1961), who also suggested alternative choices of the $P(M_i)$. Probably $a = b = 1/2$ is better.) Then, since $k_i$ is the number of variables in model $M_i$,
$$P(M_i) = \int_0^1 p^{k_i}(1-p)^{m-k_i}\, \mathrm{Beta}(p \mid a, b)\, dp = \frac{\mathrm{Beta}(a + k_i,\, b + m - k_i)}{\mathrm{Beta}(a, b)}.$$
- Empirical Bayes exchangeable variable inclusion: Find the MLE $\hat{p}$ by maximizing the marginal likelihood of $p$, $\sum_j p^{k_j}(1-p)^{m-k_j}\, m_j(x)$, and use $P(M_i) = \hat{p}^{k_i}(1-\hat{p})^{m-k_i}$ as the prior model probabilities.
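A small sketch of my own (not from the talk, using scipy's log-Beta function) showing how these Beta-binomial prior probabilities, and the resulting posterior model probabilities, could be computed:

```python
import numpy as np
from scipy.special import betaln

def prior_model_prob(k, m, a=1.0, b=1.0):
    """Beta-binomial prior probability of a model with k of m variables:
    P(M) = Beta(a + k, b + m - k) / Beta(a, b)."""
    return np.exp(betaln(a + k, b + m - k) - betaln(a, b))

def posterior_model_probs(ks, log_marglik, m, a=1.0, b=1.0):
    """P(M_i | x) proportional to P(M_i) * m_i(x); done on the log scale for stability."""
    log_post = np.array([np.log(prior_model_prob(k, m, a, b)) + lm
                         for k, lm in zip(ks, log_marglik)])
    log_post -= log_post.max()
    w = np.exp(log_post)
    return w / w.sum()

# Toy example: models with 0, 1, 1, and 2 of m = 2 variables
print(posterior_model_probs(ks=[0, 1, 1, 2], log_marglik=[-10.0, -8.5, -9.0, -9.2], m=2))
```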

13 Controlling for multiplicity in variable selection
- Equal prior probabilities: $P(M_i) = 2^{-m}$ does not control for multiplicity; it corresponds to a fixed prior inclusion probability $p = 1/2$ for each variable.
- Empirical Bayes exchangeable variable inclusion does control for multiplicity, in that $\hat{p}$ will be small if there are many $\beta_i$ that are zero.
- Bayes exchangeable variable inclusion also controls for multiplicity (see Scott and Berger, 2008), although the $P(M_i)$ are fixed.
Note: The control of multiplicity by Bayes and EB variable inclusion usually reduces model complexity, but it is different from the usual Bayesian Ockham's-razor effect that reduces model complexity. The Bayesian Ockham's razor operates through the effect of the model priors $\pi_i(\beta^{(i)}, \sigma^2)$ on $m_i(x)$, penalizing models with more parameters; multiplicity correction occurs through the choice of the $P(M_i)$.

14 Table 2 (numerical entries omitted): Posterior inclusion probabilities for the 10 real variables $\beta_1, \ldots, \beta_{10}$ in a simulated data set, under equal model probabilities and under Bayes variable inclusion, as the number of added noise variables increases; a final row counts the false positives.

15 Comparison of Bayes and Empirical Bayes Approaches
Theorem 1. In the variable-selection problem, if the null model (or full model) has the largest marginal likelihood $m(x)$ among all models, then the MLE of $p$ is $\hat{p} = 0$ (or $\hat{p} = 1$). (The naive EB approach, which assigns $P(M_i) = \hat{p}^{k_i}(1-\hat{p})^{m-k_i}$, then concludes that the null (full) model has probability 1.)
A simulation with 10,000 repetitions to gauge the severity of the problem:
- $m = 14$ covariates, orthogonal design matrix;
- $p$ drawn from $U(0,1)$; regression coefficients are 0 with probability $p$ and drawn from a Zellner-Siow prior with probability $(1-p)$;
- $n = 16$, 60, and 120 observations drawn from the given regression model.
The results table (numerical entries omitted) gave, for each of $n = 16$, 60, and 120, the fraction of cases in which $\hat{p} = 0$ and in which $\hat{p} = 1$.
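A toy sketch of the EB step (my own construction; the marginal-likelihood values are hypothetical): maximize $\sum_j p^{k_j}(1-p)^{m-k_j} m_j(x)$ over a grid in $[0,1]$. When the null model dominates the marginal likelihoods, the maximizer lands on the boundary $\hat{p} = 0$, as the theorem states:

```python
import numpy as np

def eb_inclusion_mle(ks, marglik, m, grid_size=10_001):
    """Grid-maximize the marginal likelihood of p over [0, 1]."""
    p = np.linspace(0.0, 1.0, grid_size)
    # likelihood of p: sum_j p^{k_j} (1-p)^{m-k_j} m_j(x)
    lik = sum(mj * p**kj * (1 - p)**(m - kj) for kj, mj in zip(ks, marglik))
    return p[np.argmax(lik)]

# Hypothetical marginal likelihoods in which the null model (k = 0) dominates:
m = 3
ks = [0, 1, 1, 1, 2, 2, 2, 3]
marglik = [1.0, 0.3, 0.2, 0.25, 0.1, 0.08, 0.09, 0.05]
print(eb_inclusion_mle(ks, marglik, m))  # prints 0.0
```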

16 Is empirical Bayes at least accurate asymptotically, as $m \to \infty$?
Posterior model probabilities, given $p$: $P(M_i \mid x, p) = \dfrac{p^{k_i}(1-p)^{m-k_i}\, m_i(x)}{\sum_j p^{k_j}(1-p)^{m-k_j}\, m_j(x)}$.
Posterior distribution of $p$: $\pi(p \mid x) = K \sum_j p^{k_j}(1-p)^{m-k_j}\, m_j(x)$.
This does concentrate about the true $p$ as $m \to \infty$, so one might expect that
$$P(M_i \mid x) = \int_0^1 P(M_i \mid x, p)\, \pi(p \mid x)\, dp \approx P(M_i \mid x, \hat{p}) \propto m_i(x)\, \hat{p}^{k_i}(1-\hat{p})^{m-k_i}.$$
This is not necessarily true; indeed
$$\int_0^1 P(M_i \mid x, p)\, \pi(p \mid x)\, dp = \int_0^1 \frac{m_i(x)\, p^{k_i}(1-p)^{m-k_i}}{\pi(p \mid x)/K}\, \pi(p \mid x)\, dp = K\, m_i(x) \int_0^1 p^{k_i}(1-p)^{m-k_i}\, dp \propto m_i(x)\, P(M_i).$$
Caveat: Some EB techniques have been justified; see Efron and Tibshirani (2001), Johnstone and Silverman (2004), Cui and George (2006), and Bogdan et al. (2008).

17 III. What is the Effective Sample Size in Generalized BIC? Figure 3: John Hartigan in Sin City.

18 Data: Independent vectors $x_i \sim g_i(x_i \mid \theta)$, for $i = 1, \ldots, n$.
Unknown: $\theta = (\theta_1, \ldots, \theta_p)$; $\hat{\theta}$ is the MLE.
Log-likelihood function: $l(\theta) = \log f(x \mid \theta) = \log\big(\prod_{i=1}^n g_i(x_i \mid \theta)\big)$, where $x = (x_1, \ldots, x_n)$.
Usual BIC: $\mathrm{BIC} = 2\, l(\hat{\theta}) - p \log n$ (Schwarz, 1978).
Generalization of BIC:
$$2\, l(\hat{\theta}) - \sum_{i=1}^p \log(1 + n_i) + 2 \sum_{i=1}^p \log\!\Big(\frac{1 - e^{-v_i}}{2 v_i}\Big), \qquad v_i = \frac{\hat{\xi}_i^2}{d_i(1 + n_i)},$$
where the $d_i^{-1}$ are the eigenvalues of the observed information matrix, and the $\xi_i$ are the coordinates of an orthogonally transformed $\theta$. Here $n_i$ is the effective sample size corresponding to $\xi_i$. What should these be?
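A sketch of how this generalized criterion could be evaluated, assuming the reconstruction of the formula above is correct (the function and argument names are mine, not from the talk):

```python
import numpy as np

def generalized_bic(loglik_max, xi_hat, d, n_eff):
    """Generalized BIC as reconstructed above:
    2*l(theta_hat) - sum_i log(1 + n_i) + 2*sum_i log((1 - exp(-v_i)) / (2*v_i)),
    with v_i = xi_hat_i^2 / (d_i * (1 + n_i))."""
    xi_hat, d, n_eff = map(np.asarray, (xi_hat, d, n_eff))
    v = xi_hat**2 / (d * (1.0 + n_eff))
    penalty = np.sum(np.log1p(n_eff))
    adjustment = 2.0 * np.sum(np.log((1.0 - np.exp(-v)) / (2.0 * v)))
    return 2.0 * loglik_max - penalty + adjustment

# Hypothetical values for a 2-parameter model:
print(generalized_bic(loglik_max=-50.0, xi_hat=[1.2, 0.3],
                      d=[0.05, 0.1], n_eff=[20, 20]))
```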

19 Ex. Group means: For $i = 1, \ldots, p$ and $l = 1, \ldots, r$, $X_{il} = \mu_i + \epsilon_{il}$, where $\epsilon_{il} \sim N(0, \sigma^2)$.
It might seem that $n = pr$, but, if one followed Schwarz, one would have (defining $\mu = (\mu_1, \ldots, \mu_p)^t$) that $X_l = (X_{1l}, \ldots, X_{pl})^t \sim N_p(\mu, \sigma^2 I)$, $l = 1, \ldots, r$, so that the sample size appearing in BIC should be $r$.
The effective sample size for each $\mu_i$ is $r$, but the effective sample size for $\sigma^2$ is $pr$; so effective sample size is parameter-dependent.
One could easily be in the situation where $p \to \infty$ but the effective sample size $r$ is fixed.

20 Ex. Random effects group means: $\mu_i \sim N(\xi, \tau^2)$, with $\xi$ and $\tau^2$ being unknown. What is the number of parameters (see also Pauler (1998))?
(1) If $\tau^2 = 0$, there is only one parameter, $\xi$.
(2) If $\tau^2$ is huge, is the number of parameters $p + 2$?
(3) But, if one integrates out $\mu = (\mu_1, \ldots, \mu_p)$, then
$$f(x \mid \sigma^2, \xi, \tau^2) = \int f(x \mid \mu, \sigma^2)\, \pi(\mu \mid \xi, \tau^2)\, d\mu \;\propto\; \frac{1}{\sigma^{p(r-1)}} \exp\Big\{-\frac{\hat{\sigma}^2}{2\sigma^2}\Big\} \prod_{i=1}^p \frac{1}{\sqrt{\sigma^2/r + \tau^2}} \exp\Big\{-\frac{(\bar{x}_i - \xi)^2}{2(\sigma^2/r + \tau^2)}\Big\},$$
so the parameter count is 3 if one can work directly with $f(x \mid \sigma^2, \xi, \tau^2)$.
Note: In this example the effective sample sizes should be $pr$ for $\sigma^2$, $p$ for $\xi$ and $\tau^2$, and $r$ for the $\mu_i$'s.
Ex. Common mean, differing variances: Suppose $n/2$ of the $Y_i$ are $N(\theta, 1)$, while $n/2$ are $N(\theta, 1000)$. Clearly the effective sample size is roughly $n/2$.

21 Ex. ANOVA: $Y = (Y_1, \ldots, Y_n)^t \sim N_n(X\beta, \sigma^2 I)$, where $X$ is a given $n \times p$ matrix of 1's and $-1$'s with orthogonal columns, and where $\beta = (\beta_1, \ldots, \beta_p)^t$ and $\sigma^2$ are unknown. Then the information matrix for $\theta = (\beta, \sigma^2)$ is
$$\hat{I} = \begin{pmatrix} \frac{n}{\hat{\sigma}^2} I_{p \times p} & 0 \\ 0 & \frac{n}{2\hat{\sigma}^4} \end{pmatrix},$$
so that now the effective sample size appears to be $n$ for all parameters.
Note: The group-means problem and ANOVA are both linear models, so one can have effective sample sizes from $r = 1$ to $n$ for parameters in the linear model.

22 Defining the effective sample size $n_j$ for $\xi_j$: For the case where no variables are integrated out, a possible general definition for the effective sample size follows from considering the information associated with observation $x_i$, arising from the single-observation expected information matrix $I_i$, with entries
$$I_{i,jk} = E\Big[-\frac{\partial^2}{\partial\theta_j\,\partial\theta_k} \log f_i(x_i \mid \theta)\Big]\bigg|_{\theta = \hat{\theta}},$$
orthogonally transformed to $\tilde{I}_i = O\,(I_{i,jk})\,O^t$.

23 Since $\tilde{I}_{jj} = \sum_{i=1}^n \tilde{I}_{i,jj}$ is the expected information about $\xi_j$, a reasonable way to define $n_j$ is:
- define information weights $w_{ij} = \tilde{I}_{i,jj} \big/ \sum_{k=1}^n \tilde{I}_{k,jj}$;
- define the effective sample size for $\xi_j$ as
$$n_j = \frac{\tilde{I}_{jj}}{\sum_{i=1}^n w_{ij}\, \tilde{I}_{i,jj}} = \frac{\big(\sum_{i=1}^n \tilde{I}_{i,jj}\big)^2}{\sum_{i=1}^n \big(\tilde{I}_{i,jj}\big)^2}.$$
Intuitively, $\sum_i w_{ij}\, \tilde{I}_{i,jj}$ is a weighted measure of the information per observation, and dividing the total information about $\xi_j$ by this information-per-case seems plausible as an effective sample size.
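This definition is easy to compute; a small sketch of my own checks it on the common-mean example above, where half the observations have variance 1 and half have variance 1000, so $n_j$ should come out close to $n/2$:

```python
import numpy as np

def effective_sample_size(info_per_obs):
    """n_j = (sum_i I_{i,jj})^2 / sum_i (I_{i,jj})^2, given the
    per-observation information values I_{i,jj} for one coordinate."""
    info_per_obs = np.asarray(info_per_obs)
    return info_per_obs.sum()**2 / (info_per_obs**2).sum()

# Common mean theta: n/2 observations with variance 1, n/2 with variance 1000;
# the information about theta from one observation is 1/sigma_i^2.
n = 100
info = np.concatenate([np.full(n // 2, 1.0), np.full(n // 2, 1.0 / 1000.0)])
print(effective_sample_size(info))  # about 50.1, i.e. roughly n/2
```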

24 THANKS ALL

25 THANKS JOHN
