POSTERIOR PROPRIETY IN SOME HIERARCHICAL EXPONENTIAL FAMILY MODELS


EDWARD I. GEORGE and ZUOSHUN ZHANG

The University of Texas at Austin and Quintiles Inc.

June 2

Edward I. George holds the Ed and Molly Smith Chair and is Professor of Statistics, Department of MSIS, University of Texas, Austin, TX, egeorge@mail.utexas.edu. Zuoshun Zhang is a Biostatistician at Quintiles, Inc., 8 Rockville Pike, Suite 3, Rockville, MD, zzhang@qrrg.quintiles.com.

SUMMARY

For Bayesian analysis of hierarchical models, it may be of interest to use improper priors to represent ignorance. In this article, we consider hierarchical exponential family models and obtain conditions on prior tail behavior that guarantee posterior propriety. In particular, we study three special cases: the Poisson-Gamma, the Binomial-Beta, and the Multinomial-Dirichlet models. Interestingly, in all three cases, flat priors produce improper posteriors. Although some data sets can yield proper posteriors under improper priors with the Poisson-Gamma model, we show that this cannot happen with the Binomial-Beta and Multinomial-Dirichlet models.

Keywords: Hierarchical Bayes Models, Improper Priors, Poisson-Gamma, Binomial-Beta, Multinomial-Dirichlet.

1 INTRODUCTION

Hierarchical models provide a useful approach to combining data from several similar populations. A typical setup for such models consists of pairs (x_1, θ_1), …, (x_p, θ_p), where x_i is the observed data from population i, and θ_i is a parameter which identifies F(x_i | θ_i), the distribution of x_i given

θ_i. A hierarchical model is then obtained by using another model to describe the variation across the parameters θ_1, …, θ_p. Such hierarchical models enable a statistical analysis that simultaneously uses all the data x_1, …, x_p to make inferences about all the parameters θ_1, …, θ_p. In this paper, we focus on the following conditionally independent, three-stage hierarchical model (Lindley and Smith 1972, Kass and Steffey 1989, George, Makov and Smith 1993, 1994):

1. Conditionally on θ_1, …, θ_p, each x_i has distribution F(x_i | θ_i), and the data x_1, …, x_p are independent of each other and are independent of λ.

2. Conditionally on λ, the parameters θ_1, …, θ_p are iid with density π(θ | λ).

3. The hyperparameter λ has a density π(λ), which may or may not be proper.

In particular, we consider setups for which F(x | θ) belongs to an exponential family (Brown 1986),

\[ dF(x \mid \theta) = \exp[\,x'\theta - \psi(\theta)\,]\, d\nu(x), \quad \text{for } \theta \in \Theta, \]

where dν(x) is a fixed σ-finite measure on the Borel sets of R^k, Θ is a subset of the natural parameter space

\[ N = \Bigl\{\theta \in R^k : \int \exp(x'\theta)\, d\nu(x) < \infty\Bigr\}, \]

and ψ(θ) = ln ∫ exp(x'θ) dν(x) is the cumulant generating function. For such F(x | θ), we consider conjugate priors on θ (Diaconis and Ylvisaker 1979), namely

\[ \pi(\theta \mid x_0, n_0) = \exp[\,x_0'\theta - n_0\,\psi(\theta) - \phi(x_0, n_0)\,]\, I_\Theta(\theta), \]

where I_Θ(θ) is the indicator function of Θ and

\[ \phi(x_0, n_0) = \ln \int_\Theta \exp[\,x_0'\theta - n_0\,\psi(\theta)\,]\, d\theta, \]

with the range of (x_0, n_0) properly chosen to make π(θ | x_0, n_0) a proper density. Finally, the hyperparameters (x_0, n_0) are given some prior π(x_0, n_0), which may be proper or improper.

For any of these hierarchical exponential family setups, the only remaining specification issue is the choice of the hyperparameter prior π(x_0, n_0). In many problems, there will be little available prior information about the hyperparameters, and it will be desirable to use a prior which is noninformative in some sense. Because improper priors are often considered for this purpose, it is of interest to know whether the resulting posterior π(θ_1, …, θ_p, x_0, n_0 | x_1, …, x_p) is proper.
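Before specializing, it may help to see this conjugate structure in action. The following is a numerical sketch, not part of the paper: for Poisson data with a Gamma prior on the mean (the form of conjugacy used in Section 2, equivalent to the Diaconis-Ylvisaker form above), the prior parameters are simply incremented by the data total and the sample size, which a brute-force grid integration confirms. The prior parameters and data below are made up.

```python
import math

def gamma_pdf(t, a, b):
    # density of a Gamma(a, b) distribution (shape a, rate b) at t > 0
    return math.exp(a * math.log(b) - math.lgamma(a) + (a - 1) * math.log(t) - b * t)

def poisson_pmf(x, t):
    return math.exp(-t + x * math.log(t) - math.lgamma(x + 1))

a0, b0 = 2.0, 1.5          # hypothetical Gamma prior on the Poisson mean
data = [3, 0, 4, 2]        # hypothetical counts from one population

# Unnormalized posterior on a grid, normalized by a midpoint-rule sum.
h = 0.01
grid = [(i + 0.5) * h for i in range(3000)]
unnorm = [gamma_pdf(t, a0, b0) * math.prod(poisson_pmf(x, t) for x in data)
          for t in grid]
Z = sum(unnorm) * h

# Conjugate update: the posterior is Gamma(a0 + sum(data), b0 + len(data)).
a1, b1 = a0 + sum(data), b0 + len(data)
for t, u in zip(grid[::150], unnorm[::150]):
    assert abs(u / Z - gamma_pdf(t, a1, b1)) < 1e-3
print("grid posterior matches Gamma(%g, %g)" % (a1, b1))
```

In the natural-parameter notation above, the same update reads x_0 → x_0 + Σ x_i and n_0 → n_0 + p.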
This issue has increased in importance with the advent of Markov chain Monte Carlo posterior computation, which relies on such propriety to guarantee overall convergence of the simulated Markov chain; see Casella and George (1992), Hobert and Casella (1996, 1998) and Casella (1996). Berger and Strawderman (1996) and Hobert and Casella (1996) contain results for proper posteriors related to normal models. Natarajan and McCulloch (1995) contains posterior propriety results for a class of mixed models for Binomial data.

In this article, we obtain posterior propriety results for some special cases of the above hierarchical exponential family model, namely the Poisson-Gamma, the Binomial-Beta and the Multinomial-Dirichlet hierarchical models. These results are conditions on the hyperparameter prior tail behavior

that guarantee posterior propriety. For the Poisson-Gamma hierarchical model, these conditions allow for some improper priors to guarantee posterior propriety. However, for the Binomial-Beta and Multinomial-Dirichlet hierarchical models, it turns out that no improper prior can guarantee a proper posterior. In particular, under the traditional parameterizations, flat priors can never guarantee posterior propriety in any of our three hierarchical setups.

2 THE POISSON-GAMMA MODEL

We first consider the Poisson-Gamma model, where the data have Poisson distributions, and the conjugate prior on the parameters is the Gamma distribution. The three stages for the conditionally independent hierarchical model are then the following. For i = 1, …, p,

\[ f(x_i \mid \theta_i) = \frac{e^{-\theta_i}\, \theta_i^{x_i}}{x_i!}, \quad x_i = 0, 1, \ldots, \]

\[ \pi(\theta_i \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, \theta_i^{\alpha-1} e^{-\beta\theta_i}, \quad \theta_i > 0, \]

\[ (\alpha, \beta) \sim \pi(\alpha, \beta), \quad \alpha > 0,\ \beta > 0, \]

where π(α, β) is a (possibly improper) prior on the hyperparameters α and β. The joint posterior for all the parameters is

\[ \pi(\theta_1, \ldots, \theta_p, \alpha, \beta \mid x_1, \ldots, x_p) \propto \left[\frac{\beta^\alpha}{\Gamma(\alpha)}\right]^p \prod_{i=1}^p \theta_i^{\,x_i+\alpha-1} \exp\Bigl(-(\beta+1)\sum_{i=1}^p \theta_i\Bigr)\, \pi(\alpha, \beta). \]

Integrating out θ_1, …, θ_p yields

\[ \pi(\alpha, \beta \mid x_1, \ldots, x_p) \propto \left[\frac{\beta^\alpha}{\Gamma(\alpha)}\right]^p \prod_{i=1}^p \frac{\Gamma(x_i+\alpha)}{(\beta+1)^{\,x_i+\alpha}}\, \pi(\alpha, \beta) \]

\[ = \frac{\beta^{\alpha p}}{(\beta+1)^{\alpha p + s}} \left[\prod_{i=1}^p (\alpha + x_i - 1)\cdots(\alpha+1)\alpha\right] \pi(\alpha, \beta) \]

\[ = \frac{1}{(\beta+1)^{s}} \left[\sum_{i=k}^{s} c(i)\, \alpha^i\right] e^{-p\alpha \log(1+1/\beta)}\, \pi(\alpha, \beta), \]

where s = Σ_{i=1}^p x_i, k is the number of nonzero entries in the data vector (x_1, …, x_p), and Σ_{i=k}^s c(i) α^i := ∏_{i=1}^p (α + x_i − 1)⋯(α + 1)α, with each factor of the product taken to be 1 when x_i = 0. We use the convention that the above polynomial is identically 1 when k = s = 0. Hence we have k ≤ p, k ≤ s, and c(i) > 0, i = k, …, s.

Our aim is to find conditions on the hyperparameter prior π(α, β) such that the posterior is proper, which is equivalent to

\[ m(x_1, \ldots, x_p) = \int_0^\infty\!\!\int_0^\infty \pi(\alpha, \beta \mid x_1, \ldots, x_p)\, d\alpha\, d\beta < \infty \]

for any nonnegative integers x_1, …, x_p.
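The product defining the coefficients c(i) is easy to examine directly. A small sketch, not from the paper, with made-up counts: expanding ∏_{i=1}^p (α + x_i − 1)⋯(α + 1)α as a polynomial in α gives zero coefficients below degree k and strictly positive coefficients from degree k through s.

```python
def rising_factors(x):
    # coefficient list of (a + x - 1)...(a + 1) a as a polynomial in a,
    # lowest degree first; the empty product (x = 0) is the constant 1
    poly = [1.0]
    for j in range(x):  # multiply by (a + j)
        shifted = [0.0] + poly                   # a * poly
        scaled = [j * c for c in poly] + [0.0]   # j * poly
        poly = [u + v for u, v in zip(shifted, scaled)]
    return poly

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

data = [3, 0, 2, 0, 1]                # hypothetical counts x_1, ..., x_p
s, k = sum(data), sum(x > 0 for x in data)

prod = [1.0]
for x in data:
    prod = poly_mul(prod, rising_factors(x))

# degree s polynomial whose lowest nonzero coefficient sits at degree k
assert len(prod) == s + 1
assert all(c == 0.0 for c in prod[:k])
assert all(c > 0.0 for c in prod[k:])
print("c(i) > 0 for i =", k, "...", s)
```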

When π(α, β) is proper, the posterior is always proper. Hence we concentrate on improper priors. We impose the following boundary conditions at the four corners of the quadrant:

(C1) π(α, β) = O(α^{−a_1} β^{−b_1}) as α → 0, β → 0;

(C2) π(α, β) = O(α^{−a_2} β^{−b_2}) as α → ∞, β → 0;

(C3) π(α, β) = O(α^{−a_3} β^{−b_3}) as α → 0, β → ∞;

(C4) π(α, β) = O(α^{−a_4} β^{−b_4}) as α → ∞, β → ∞.

Our problem is now to determine the values of a_1, b_1, …, a_4, b_4 such that the posterior is proper. Let us state our results and then carry out the proofs.

Theorem 2.1 Let the prior π(α, β) satisfy the conditions (C1)-(C4) and let the data x_1, …, x_p be fixed. Then the posterior π(θ_1, …, θ_p, α, β | x_1, …, x_p) is proper if and only if one of the following two groups of conditions holds:

(I) a_1 < k + 1, b_1 < 1, b_2 < 1, a_3 < k + 1, b_3 + s > 1, a_4 < k + 1, a_4 + b_4 > 2;

(II) a_1 < k + 1, b_1 < 1, b_2 < 1, a_3 < k + 1, b_3 + s > 1, a_4 ≥ k + 1, b_4 + s > 1.

Corollary 2.2 The posterior π(θ_1, …, θ_p, α, β | x_1, …, x_p) is proper for all values of x_1, …, x_p if and only if a_1 < 1, a_3 < 1, b_1 < 1, b_2 < 1, b_3 > 1, b_4 > 1, and a_4 + b_4 > 2.

Corollary 2.3 Let π_1(α) = π_2(β) = 1 be the flat priors. Then the posterior π(θ_1, …, θ_p, α, β | x_1, …, x_p) is improper for all values of x_1, …, x_p.

Proof: First we partition the integral over (0, ∞) × (0, ∞) into four parts,

\[ \int_0^\infty\!\!\int_0^\infty \pi(\alpha, \beta \mid x_1, \ldots, x_p)\, d\alpha\, d\beta = A_1 + A_2 + A_3 + A_4, \]

where A_1, A_2, A_3, A_4 are the integrals over (0,1] × (0,1], (1,∞) × (0,1], (0,1] × [1,∞) and (1,∞) × [1,∞), respectively. We proceed to derive necessary and sufficient conditions for each of A_1, A_2, A_3 and A_4 to be finite. First,

\[ A_1 \asymp \int_0^1\!\!\int_0^1 \frac{\beta^{-b_1}}{(\beta+1)^{s}} \sum_{i=k}^{s} c(i)\, \alpha^{\,i-a_1}\, e^{-p\alpha \log(1+1/\beta)}\, d\alpha\, d\beta, \]

so that A_1 < ∞ if and only if b_1 < 1 and a_1 < k + 1.
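Behind each A_j is nothing more than the integrability of a power tail at 0 or ∞. A minimal numerical sketch, not from the paper, of the mechanism that makes b_1 < 1 the boundary near β = 0:

```python
def tail_integral(b, eps, h=1e-5):
    # midpoint-rule approximation of the integral of beta**(-b) over (eps, 1)
    n = int((1.0 - eps) / h)
    return sum((eps + (i + 0.5) * h) ** (-b) for i in range(n)) * h

# b_1 < 1: the integral converges as eps -> 0; here the limit is 1/(1 - 0.5) = 2
conv = [tail_integral(0.5, eps) for eps in (1e-2, 1e-3, 1e-4)]
assert abs(conv[-1] - 2.0) < 0.05

# b_1 > 1: the same integral blows up like eps**(1 - b)
div = [tail_integral(1.5, eps) for eps in (1e-2, 1e-3, 1e-4)]
assert div[0] < div[1] < div[2] and div[2] > 100
```

The same one-dimensional comparison governs the β → ∞ conditions, with the power of β adjusted by s as in A_3 and A_4.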

Next, substituting u = pα log(1 + 1/β),

\[ A_2 \asymp \int_0^1 \frac{\beta^{-b_2}}{(\beta+1)^{s}} \sum_{i=k}^{s} \frac{c(i)}{[\,p\log(1+1/\beta)\,]^{\,i-a_2+1}} \int_{p\log(1+1/\beta)}^{\infty} u^{\,i-a_2}\, e^{-u}\, du\, d\beta. \]

Since log(1 + 1/β) > log 2 for β ∈ (0, 1), the inner integrals are finite and the logarithmic factors do not affect integrability, so that A_2 < ∞ if and only if b_2 < 1. Next, since 0 ≤ α log(1 + 1/β) ≤ log 2 on (0,1] × [1,∞), the exponential factor is bounded above and below, and

\[ A_3 \asymp \int_1^\infty \frac{\beta^{-b_3}}{(\beta+1)^{s}}\, d\beta \int_0^1 \sum_{i=k}^{s} c(i)\, \alpha^{\,i-a_3}\, d\alpha, \]

so that A_3 < ∞ if and only if b_3 + s > 1 and a_3 < k + 1. Finally, the same substitution u = pα log(1 + 1/β), together with log(1 + 1/β) ≍ 1/β for β ≥ 1, gives

\[ A_4 \asymp \int_1^\infty \sum_{i=k}^{s} \frac{c(i)}{\beta^{\,s+b_4+a_4-i-1}} \int_{p\log(1+1/\beta)}^{\infty} u^{\,i-a_4}\, e^{-u}\, du\, d\beta. \]

To establish conditions for A_4 to be finite, we consider three subcases.

Subcase 1: When a_4 < k + 1, each inner integral converges to Γ(i − a_4 + 1) as β → ∞, so

\[ A_4 \asymp \int_1^\infty \sum_{i=k}^{s} \frac{c(i)\, \Gamma(i-a_4+1)}{\beta^{\,s+b_4+a_4-i-1}}\, d\beta < \infty \]

if and only if the dominant term i = s is integrable, that is, if and only if a_4 + b_4 > 2.

Subcase 2: When a_4 = k + 1, the i = k term satisfies ∫_{p log(1+1/β)}^∞ u^{−1} e^{−u} du ≍ log β, so

\[ A_4 \asymp \int_1^\infty \frac{p\, c(k)\, \log \beta}{\beta^{\,s+b_4}}\, d\beta < \infty \]

if and only if s + b_4 > 1.

Subcase 3: When a_4 > k + 1, the i = k term satisfies ∫_{p log(1+1/β)}^∞ u^{\,k-a_4} e^{-u} du ≍ β^{\,a_4-k-1}, so

\[ A_4 \asymp \int_1^\infty \frac{c(k)}{\beta^{\,s+b_4}}\, d\beta < \infty \]

if and only if s + b_4 > 1.

Combining the conditions of all the cases above shows that m(x_1, …, x_p) = A_1 + A_2 + A_3 + A_4 < ∞ if and only if either of the following two groups of conditions holds:

(I) a_1 < k + 1, b_1 < 1, b_2 < 1, a_3 < k + 1, b_3 + s > 1, a_4 < k + 1, a_4 + b_4 > 2;

(II) a_1 < k + 1, b_1 < 1, b_2 < 1, a_3 < k + 1, b_3 + s > 1, a_4 ≥ k + 1, b_4 + s > 1,

which are exactly the conditions of the theorem. Combining them and considering all possible values of k and s from the data yields the two corollaries. □

3 THE BINOMIAL-BETA MODEL

Next, we turn to the Binomial-Beta model, where the data have Binomial distributions, and the conjugate prior on the parameters is the Beta distribution. The three stages for the conditionally independent hierarchical model are then the following. For i = 1, …, p,

\[ f(x_i \mid \theta_i) = \binom{n_i}{x_i}\, \theta_i^{x_i} (1-\theta_i)^{n_i - x_i}, \quad x_i = 0, 1, \ldots, n_i, \]

\[ \pi(\theta_i \mid \alpha, \beta) = \frac{1}{B(\alpha, \beta)}\, \theta_i^{\alpha-1} (1-\theta_i)^{\beta-1}, \]

\[ (\alpha, \beta) \sim \pi(\alpha, \beta), \quad \alpha > 0,\ \beta > 0, \]

where π(α, β) is a (possibly improper) prior on the hyperparameters α and β. The joint posterior for all the parameters is

\[ \pi(\theta_1, \ldots, \theta_p, \alpha, \beta \mid x_1, \ldots, x_p) \propto \prod_{i=1}^p \binom{n_i}{x_i} \frac{1}{B(\alpha, \beta)^p} \prod_{i=1}^p \theta_i^{\,\alpha+x_i-1} (1-\theta_i)^{\,\beta+n_i-x_i-1}\, \pi(\alpha, \beta). \]

Integrating out θ_1, …, θ_p yields

\[ \pi(\alpha, \beta \mid x_1, \ldots, x_p) \propto \left[\prod_{i=1}^p \frac{B(\alpha+x_i,\, \beta+n_i-x_i)}{B(\alpha, \beta)}\right] \pi(\alpha, \beta) \]

\[ = \left[\prod_{i=1}^p \frac{\Gamma(\alpha+x_i)\,\Gamma(\beta+n_i-x_i)\,\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)\,\Gamma(n_i+\alpha+\beta)}\right] \pi(\alpha, \beta) \]

\[ = \left[\prod_{i=1}^p \frac{(\alpha+x_i-1)\cdots(\alpha+1)\alpha \,\cdot\, (\beta+n_i-x_i-1)\cdots(\beta+1)\beta}{(n_i+\alpha+\beta-1)\cdots(\alpha+\beta+1)(\alpha+\beta)}\right] \pi(\alpha, \beta) := g(\alpha, \beta)\, \pi(\alpha, \beta). \]

Our aim is to find conditions on the hyperparameter prior π(α, β) such that the posterior π(θ_1, …, θ_p, α, β | x_1, …, x_p) is a proper distribution. For any number c such that 0 < c < 1, let

\[ S_c = \{(\alpha, \beta) : 0 < c < \alpha/\beta < 1/c\} \]

be a cone-like region in the first quadrant of the (α, β) plane. Let us state our results and then carry out the proofs.

Theorem 3.1 (A) If the posterior π(θ_1, …, θ_p, α, β | x_1, …, x_p) is proper, then

\[ \int_{S_c} \pi(\alpha, \beta)\, d\alpha\, d\beta < \infty \]

for any S_c with 0 < c < 1.

(B) For any improper prior π(α, β), there exist values of x_1, …, x_p for which the posterior π(θ_1, …, θ_p, α, β | x_1, …, x_p) is improper.

Corollary 3.2 Let π(α, β) = 1 be the flat prior. Then the posterior π(θ_1, …, θ_p, α, β | x_1, …, x_p) is improper for all values of x_1, …, x_p.

Proof: For any set S_c, it can easily be shown that there exist constants 0 < a < b such that a ≤ g(α, β) ≤ b for (α, β) ∈ S_c. Hence a proper posterior implies ∫_{S_c} π(α, β) dα dβ < ∞, the conclusion of (A).

Next we prove (B). First we pick x_1 = ⋯ = x_p = 0. Let R_1 = {(α, β) : 0 < α ≤ β} be the upper half of the first quadrant of the (α, β) plane. The function g(α, β) can then be expressed as

\[ g(\alpha, \beta) = \prod_{i=1}^p \frac{(\beta + n_i - 1)\cdots(\beta+1)\beta}{(\alpha + \beta + n_i - 1)\cdots(\alpha+\beta+1)(\alpha+\beta)}, \]

from which we can deduce that 1/2^{\bar s} ≤ g(α, β) ≤ 1 for (α, β) ∈ R_1, where \bar s = Σ_{i=1}^p n_i. It follows that

\[ \int_{R_1} \pi(\alpha, \beta \mid x_1, \ldots, x_p)\, d\alpha\, d\beta < \infty \]

if and only if ∫_{R_1} π(α, β) dα dβ < ∞. Similarly, we may pick x_1 = n_1, …, x_p = n_p and R_2 = {(α, β) : α ≥ β > 0}, in which case

\[ \int_{R_2} \pi(\alpha, \beta \mid x_1, \ldots, x_p)\, d\alpha\, d\beta < \infty \]

if and only if ∫_{R_2} π(α, β) dα dβ < ∞. Because an improper prior cannot yield finite integrals on both R_1 and R_2, it follows that we can always find values of x_1, …, x_p such that the posterior is improper. □

4 THE MULTINOMIAL-DIRICHLET MODEL

Results similar to those of the last section hold for its natural generalization, the Multinomial-Dirichlet model (Good 1983, Leonard 1977). Here, the data have Multinomial distributions, and the conjugate prior on the parameters is a Dirichlet distribution. The three stages for the conditionally independent hierarchical model are then the following. For i = 1, …, p,

\[ f(x^{(i)} \mid \theta^{(i)}) = \frac{n_i!}{\prod_{j=1}^t x_j^{(i)}!} \prod_{j=1}^t \bigl(\theta_j^{(i)}\bigr)^{x_j^{(i)}}, \]

\[ \pi(\theta^{(i)} \mid \beta) = \frac{\Gamma\bigl(\sum_{j=1}^t \beta_j\bigr)}{\prod_{j=1}^t \Gamma(\beta_j)} \prod_{j=1}^t \bigl(\theta_j^{(i)}\bigr)^{\beta_j - 1}, \]

\[ \beta \sim \pi(\beta) = \pi(\beta_1, \ldots, \beta_t), \quad \beta_j > 0,\ j = 1, \ldots, t. \]
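Integrating each θ^{(i)} out against a Dirichlet prior of this form produces a Dirichlet-Multinomial marginal, exactly parallel to the Beta integrals of the previous section. As a sanity check (a sketch, not from the paper; the hyperparameters and the total below are made up), the marginal probabilities sum to one over all count vectors:

```python
import math
from itertools import product as iproduct

def log_dir_mult(x, beta):
    # log marginal P(x) = n! / prod_j x_j! * D(beta + x) / D(beta),
    # where D(b) = prod_j Gamma(b_j) / Gamma(sum_j b_j)
    n = sum(x)
    def log_D(b):
        return sum(math.lgamma(v) for v in b) - math.lgamma(sum(b))
    return (math.lgamma(n + 1) - sum(math.lgamma(v + 1) for v in x)
            + log_D([b + v for b, v in zip(beta, x)]) - log_D(beta))

beta = [0.5, 1.2, 2.0]     # hypothetical Dirichlet hyperparameters (t = 3)
n = 4                      # hypothetical Multinomial total n_i
total = sum(math.exp(log_dir_mult(x, beta))
            for x in iproduct(range(n + 1), repeat=3) if sum(x) == n)
assert abs(total - 1.0) < 1e-10
print("marginal probabilities sum to", round(total, 10))
```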

Here the x_j^{(i)} are non-negative integers with Σ_{j=1}^t x_j^{(i)} = n_i, 0 < θ_j^{(i)} < 1 with Σ_{j=1}^t θ_j^{(i)} = 1, x^{(i)} = (x_1^{(i)}, …, x_t^{(i)}), θ^{(i)} = (θ_1^{(i)}, …, θ_t^{(i)}), and β = (β_1, …, β_t). Calculating the posterior as

\[ \pi(\theta^{(1)}, \ldots, \theta^{(p)}, \beta \mid x^{(1)}, \ldots, x^{(p)}) \propto \Bigl[\prod_{i=1}^p f(x^{(i)} \mid \theta^{(i)})\, \pi(\theta^{(i)} \mid \beta)\Bigr]\, \pi(\beta), \]

and defining

\[ S_c = \{(\beta_1, \ldots, \beta_t) : 0 < c < \beta_j / \beta_k < 1/c \ \text{for all}\ j, k = 1, \ldots, t\} \]

for any c such that 0 < c < 1, the proofs of the following results are very similar to those in the previous section.

Theorem 4.1 (A) If the posterior π(θ^{(1)}, …, θ^{(p)}, β | x^{(1)}, …, x^{(p)}) is proper, then

\[ \int_{S_c} \pi(\beta_1, \ldots, \beta_t)\, d\beta_1 \cdots d\beta_t < \infty \]

for any S_c with 0 < c < 1.

(B) For any improper prior π(β_1, …, β_t), there exist values of x^{(1)}, …, x^{(p)} for which the posterior is improper.

Corollary 4.2 Let π(β) = 1 be the flat prior. Then the posterior π(θ^{(1)}, …, θ^{(p)}, β | x^{(1)}, …, x^{(p)}) is improper for all values of x^{(1)}, …, x^{(p)}.

ACKNOWLEDGEMENT

The authors are grateful to George Casella and James Hobert for helpful suggestions. This work was supported by NSF grants DMS and Texas ARP grants.

REFERENCES

Berger, J.O. and Strawderman, W.E. (1996). Choice of hierarchical priors: admissibility in estimation of normal means. Ann. Statist. 24.

Brown, L.D. (1986). Fundamentals of Statistical Exponential Families. IMS Lecture Notes Series (ed. S. Gupta). Hayward, California.

Casella, G. (1996). Statistical inference and Monte Carlo algorithms (with discussion). Test 5.

Casella, G. and George, E.I. (1992). Explaining the Gibbs sampler. The American Statistician 46.

Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. Ann. Statist. 7.

George, E.I., Makov, U.E., and Smith, A.F.M. (1993). Conjugate likelihood distributions. Scand. J. Statist. 2.

George, E.I., Makov, U.E., and Smith, A.F.M. (1994). Fully Bayesian hierarchical analysis for exponential families via Monte Carlo computation. In Aspects of Uncertainty: A Tribute to D.V. Lindley (P.R. Freeman and A.F.M. Smith, eds.). Chichester: John Wiley and Sons.

Good, I.J. (1983). The robustness of a hierarchical model for Multinomials and contingency tables. In Statistical Inference, Data Analysis, and Robustness (G. Box, T. Leonard, C.-F. Wu, eds.). New York: Academic Press.

Hobert, J.P. and Casella, G. (1996). The effect of improper priors on Gibbs sampling in hierarchical linear mixed models. J. Amer. Statist. Assoc. 9.

Hobert, J.P. and Casella, G. (1998). Functional compatibility, Markov chains and Gibbs sampling with improper posteriors. J. Comp. Graph. Statist. 7.

Leonard, T. (1977). A Bayesian approach to some Multinomial estimation and pretesting problems. J. Amer. Statist. Assoc. 72.

Lindley, D.V. and Smith, A.F.M. (1972). Bayes estimates for the linear model (with discussion). J. R. Statist. Soc. B 34, 1-41.

Kass, R.E. and Steffey, D. (1989). Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes). J. Amer. Statist. Assoc. 84.

Natarajan, R. and McCulloch, C.E. (1995). A note on the existence of the posterior distribution for a class of mixed models for Binomial responses. Biometrika 82.


More information

A nonparametric Bayesian approach to inference for non-homogeneous. Poisson processes. Athanasios Kottas 1. (REVISED VERSION August 23, 2006)

A nonparametric Bayesian approach to inference for non-homogeneous. Poisson processes. Athanasios Kottas 1. (REVISED VERSION August 23, 2006) A nonparametric Bayesian approach to inference for non-homogeneous Poisson processes Athanasios Kottas 1 Department of Applied Mathematics and Statistics, Baskin School of Engineering, University of California,

More information

Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions

Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions The Korean Communications in Statistics Vol. 14 No. 1, 2007, pp. 71 80 Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions R. Rezaeian 1)

More information

November 2002 STA Random Effects Selection in Linear Mixed Models

November 2002 STA Random Effects Selection in Linear Mixed Models November 2002 STA216 1 Random Effects Selection in Linear Mixed Models November 2002 STA216 2 Introduction It is common practice in many applications to collect multiple measurements on a subject. Linear

More information

PMR Learning as Inference

PMR Learning as Inference Outline PMR Learning as Inference Probabilistic Modelling and Reasoning Amos Storkey Modelling 2 The Exponential Family 3 Bayesian Sets School of Informatics, University of Edinburgh Amos Storkey PMR Learning

More information

Bayesian Inference for Dirichlet-Multinomials

Bayesian Inference for Dirichlet-Multinomials Bayesian Inference for Dirichlet-Multinomials Mark Johnson Macquarie University Sydney, Australia MLSS Summer School 1 / 50 Random variables and distributed according to notation A probability distribution

More information

Markov Chain Monte Carlo (MCMC)

Markov Chain Monte Carlo (MCMC) Markov Chain Monte Carlo (MCMC Dependent Sampling Suppose we wish to sample from a density π, and we can evaluate π as a function but have no means to directly generate a sample. Rejection sampling can

More information

Introduction to Bayesian Statistics 1

Introduction to Bayesian Statistics 1 Introduction to Bayesian Statistics 1 STA 442/2101 Fall 2018 1 This slide show is an open-source document. See last slide for copyright information. 1 / 42 Thomas Bayes (1701-1761) Image from the Wikipedia

More information

Bayesian Graphical Models

Bayesian Graphical Models Graphical Models and Inference, Lecture 16, Michaelmas Term 2009 December 4, 2009 Parameter θ, data X = x, likelihood L(θ x) p(x θ). Express knowledge about θ through prior distribution π on θ. Inference

More information

Bayesian Learning. HT2015: SC4 Statistical Data Mining and Machine Learning. Maximum Likelihood Principle. The Bayesian Learning Framework

Bayesian Learning. HT2015: SC4 Statistical Data Mining and Machine Learning. Maximum Likelihood Principle. The Bayesian Learning Framework HT5: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford http://www.stats.ox.ac.uk/~sejdinov/sdmml.html Maximum Likelihood Principle A generative model for

More information

Monte Carlo conditioning on a sufficient statistic

Monte Carlo conditioning on a sufficient statistic Seminar, UC Davis, 24 April 2008 p. 1/22 Monte Carlo conditioning on a sufficient statistic Bo Henry Lindqvist Norwegian University of Science and Technology, Trondheim Joint work with Gunnar Taraldsen,

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Parameter Estimation December 14, 2015 Overview 1 Motivation 2 3 4 What did we have so far? 1 Representations: how do we model the problem? (directed/undirected). 2 Inference: given a model and partially

More information

Bayesian estimation of the discrepancy with misspecified parametric models

Bayesian estimation of the discrepancy with misspecified parametric models Bayesian estimation of the discrepancy with misspecified parametric models Pierpaolo De Blasi University of Torino & Collegio Carlo Alberto Bayesian Nonparametrics workshop ICERM, 17-21 September 2012

More information

Bayesian GLMs and Metropolis-Hastings Algorithm

Bayesian GLMs and Metropolis-Hastings Algorithm Bayesian GLMs and Metropolis-Hastings Algorithm We have seen that with conjugate or semi-conjugate prior distributions the Gibbs sampler can be used to sample from the posterior distribution. In situations,

More information

Beta statistics. Keywords. Bayes theorem. Bayes rule

Beta statistics. Keywords. Bayes theorem. Bayes rule Keywords Beta statistics Tommy Norberg tommy@chalmers.se Mathematical Sciences Chalmers University of Technology Gothenburg, SWEDEN Bayes s formula Prior density Likelihood Posterior density Conjugate

More information

Module 22: Bayesian Methods Lecture 9 A: Default prior selection

Module 22: Bayesian Methods Lecture 9 A: Default prior selection Module 22: Bayesian Methods Lecture 9 A: Default prior selection Peter Hoff Departments of Statistics and Biostatistics University of Washington Outline Jeffreys prior Unit information priors Empirical

More information

Nonparametric Bayes Estimator of Survival Function for Right-Censoring and Left-Truncation Data

Nonparametric Bayes Estimator of Survival Function for Right-Censoring and Left-Truncation Data Nonparametric Bayes Estimator of Survival Function for Right-Censoring and Left-Truncation Data Mai Zhou and Julia Luan Department of Statistics University of Kentucky Lexington, KY 40506-0027, U.S.A.

More information

Likelihood-free MCMC

Likelihood-free MCMC Bayesian inference for stable distributions with applications in finance Department of Mathematics University of Leicester September 2, 2011 MSc project final presentation Outline 1 2 3 4 Classical Monte

More information

Checking for Prior-Data Conflict

Checking for Prior-Data Conflict Bayesian Analysis (2006) 1, Number 4, pp. 893 914 Checking for Prior-Data Conflict Michael Evans and Hadas Moshonov Abstract. Inference proceeds from ingredients chosen by the analyst and data. To validate

More information

Statistical Theory MT 2006 Problems 4: Solution sketches

Statistical Theory MT 2006 Problems 4: Solution sketches Statistical Theory MT 006 Problems 4: Solution sketches 1. Suppose that X has a Poisson distribution with unknown mean θ. Determine the conjugate prior, and associate posterior distribution, for θ. Determine

More information

Weakness of Beta priors (or conjugate priors in general) They can only represent a limited range of prior beliefs. For example... There are no bimodal beta distributions (except when the modes are at 0

More information

Lecture 2: Priors and Conjugacy

Lecture 2: Priors and Conjugacy Lecture 2: Priors and Conjugacy Melih Kandemir melih.kandemir@iwr.uni-heidelberg.de May 6, 2014 Some nice courses Fred A. Hamprecht (Heidelberg U.) https://www.youtube.com/watch?v=j66rrnzzkow Michael I.

More information

INTRODUCTION TO BAYESIAN STATISTICS

INTRODUCTION TO BAYESIAN STATISTICS INTRODUCTION TO BAYESIAN STATISTICS Sarat C. Dass Department of Statistics & Probability Department of Computer Science & Engineering Michigan State University TOPICS The Bayesian Framework Different Types

More information

Contents. Part I: Fundamentals of Bayesian Inference 1

Contents. Part I: Fundamentals of Bayesian Inference 1 Contents Preface xiii Part I: Fundamentals of Bayesian Inference 1 1 Probability and inference 3 1.1 The three steps of Bayesian data analysis 3 1.2 General notation for statistical inference 4 1.3 Bayesian

More information

Default priors and model parametrization

Default priors and model parametrization 1 / 16 Default priors and model parametrization Nancy Reid O-Bayes09, June 6, 2009 Don Fraser, Elisabeta Marras, Grace Yun-Yi 2 / 16 Well-calibrated priors model f (y; θ), F(y; θ); log-likelihood l(θ)

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

13: Variational inference II

13: Variational inference II 10-708: Probabilistic Graphical Models, Spring 2015 13: Variational inference II Lecturer: Eric P. Xing Scribes: Ronghuo Zheng, Zhiting Hu, Yuntian Deng 1 Introduction We started to talk about variational

More information

MIT Spring 2016

MIT Spring 2016 MIT 18.655 Dr. Kempthorne Spring 2016 1 MIT 18.655 Outline 1 2 MIT 18.655 Decision Problem: Basic Components P = {P θ : θ Θ} : parametric model. Θ = {θ}: Parameter space. A{a} : Action space. L(θ, a) :

More information

Bayesian philosophy Bayesian computation Bayesian software. Bayesian Statistics. Petter Mostad. Chalmers. April 6, 2017

Bayesian philosophy Bayesian computation Bayesian software. Bayesian Statistics. Petter Mostad. Chalmers. April 6, 2017 Chalmers April 6, 2017 Bayesian philosophy Bayesian philosophy Bayesian statistics versus classical statistics: War or co-existence? Classical statistics: Models have variables and parameters; these are

More information

Introduction to Markov Chain Monte Carlo & Gibbs Sampling

Introduction to Markov Chain Monte Carlo & Gibbs Sampling Introduction to Markov Chain Monte Carlo & Gibbs Sampling Prof. Nicholas Zabaras Sibley School of Mechanical and Aerospace Engineering 101 Frank H. T. Rhodes Hall Ithaca, NY 14853-3801 Email: zabaras@cornell.edu

More information

PROBABILITY DISTRIBUTIONS. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception

PROBABILITY DISTRIBUTIONS. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception PROBABILITY DISTRIBUTIONS Credits 2 These slides were sourced and/or modified from: Christopher Bishop, Microsoft UK Parametric Distributions 3 Basic building blocks: Need to determine given Representation:

More information

Introduction to Bayesian Methods. Introduction to Bayesian Methods p.1/??

Introduction to Bayesian Methods. Introduction to Bayesian Methods p.1/?? to Bayesian Methods Introduction to Bayesian Methods p.1/?? We develop the Bayesian paradigm for parametric inference. To this end, suppose we conduct (or wish to design) a study, in which the parameter

More information

Hierarchical Models & Bayesian Model Selection

Hierarchical Models & Bayesian Model Selection Hierarchical Models & Bayesian Model Selection Geoffrey Roeder Departments of Computer Science and Statistics University of British Columbia Jan. 20, 2016 Contact information Please report any typos or

More information

Carl N. Morris. University of Texas

Carl N. Morris. University of Texas EMPIRICAL BAYES: A FREQUENCY-BAYES COMPROMISE Carl N. Morris University of Texas Empirical Bayes research has expanded significantly since the ground-breaking paper (1956) of Herbert Robbins, and its province

More information

ST 740: Multiparameter Inference

ST 740: Multiparameter Inference ST 740: Multiparameter Inference Alyson Wilson Department of Statistics North Carolina State University September 23, 2013 A. Wilson (NCSU Statistics) Multiparameter Inference September 23, 2013 1 / 21

More information

Bayesian Prediction of Code Output. ASA Albuquerque Chapter Short Course October 2014

Bayesian Prediction of Code Output. ASA Albuquerque Chapter Short Course October 2014 Bayesian Prediction of Code Output ASA Albuquerque Chapter Short Course October 2014 Abstract This presentation summarizes Bayesian prediction methodology for the Gaussian process (GP) surrogate representation

More information

Bayesian analysis of the Hardy-Weinberg equilibrium model

Bayesian analysis of the Hardy-Weinberg equilibrium model Bayesian analysis of the Hardy-Weinberg equilibrium model Eduardo Gutiérrez Peña Department of Probability and Statistics IIMAS, UNAM 6 April, 2010 Outline Statistical Inference 1 Statistical Inference

More information

Pattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions

Pattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions Pattern Recognition and Machine Learning Chapter 2: Probability Distributions Cécile Amblard Alex Kläser Jakob Verbeek October 11, 27 Probability Distributions: General Density Estimation: given a finite

More information

COS513 LECTURE 8 STATISTICAL CONCEPTS

COS513 LECTURE 8 STATISTICAL CONCEPTS COS513 LECTURE 8 STATISTICAL CONCEPTS NIKOLAI SLAVOV AND ANKUR PARIKH 1. MAKING MEANINGFUL STATEMENTS FROM JOINT PROBABILITY DISTRIBUTIONS. A graphical model (GM) represents a family of probability distributions

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Lecture 11 CRFs, Exponential Family CS/CNS/EE 155 Andreas Krause Announcements Homework 2 due today Project milestones due next Monday (Nov 9) About half the work should

More information

Introduc)on to Bayesian Methods

Introduc)on to Bayesian Methods Introduc)on to Bayesian Methods Bayes Rule py x)px) = px! y) = px y)py) py x) = px y)py) px) px) =! px! y) = px y)py) y py x) = py x) =! y "! y px y)py) px y)py) px y)py) px y)py)dy Bayes Rule py x) =

More information

Bayesian inference. Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark. April 10, 2017

Bayesian inference. Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark. April 10, 2017 Bayesian inference Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark April 10, 2017 1 / 22 Outline for today A genetic example Bayes theorem Examples Priors Posterior summaries

More information

Introduction to Bayesian Statistics. James Swain University of Alabama in Huntsville ISEEM Department

Introduction to Bayesian Statistics. James Swain University of Alabama in Huntsville ISEEM Department Introduction to Bayesian Statistics James Swain University of Alabama in Huntsville ISEEM Department Author Introduction James J. Swain is Professor of Industrial and Systems Engineering Management at

More information

Comparison of Three Calculation Methods for a Bayesian Inference of Two Poisson Parameters

Comparison of Three Calculation Methods for a Bayesian Inference of Two Poisson Parameters Journal of Modern Applied Statistical Methods Volume 13 Issue 1 Article 26 5-1-2014 Comparison of Three Calculation Methods for a Bayesian Inference of Two Poisson Parameters Yohei Kawasaki Tokyo University

More information

Remarks on Improper Ignorance Priors

Remarks on Improper Ignorance Priors As a limit of proper priors Remarks on Improper Ignorance Priors Two caveats relating to computations with improper priors, based on their relationship with finitely-additive, but not countably-additive

More information