Introduction to Bayesian Methods


We develop the Bayesian paradigm for parametric inference. To this end, suppose we conduct (or wish to design) a study in which the parameter θ is of inferential interest. Here θ may be vector valued. For example,

1. θ = difference in treatment means
2. θ = hazard ratio
3. θ = vector of regression coefficients
4. θ = probability a treatment is effective

In parametric inference, we specify a parametric model for the data, indexed by the parameter θ. Letting x denote the data, we denote this model (density) by p(x|θ). The likelihood function of θ is any function proportional to p(x|θ), i.e., L(θ) ∝ p(x|θ).

Example. Suppose x|θ ~ Binomial(N, θ). Then

p(x|θ) = (N choose x) θ^x (1−θ)^(N−x),  x = 0, 1, ..., N.

We can take L(θ) = θ^x (1−θ)^(N−x). The parameter θ is unknown. In the Bayesian mind-set, we express our uncertainty about quantities by specifying distributions for them. Thus, we express our uncertainty about θ by specifying a prior distribution for it. We denote the prior density of θ by π(θ). The word "prior" indicates that it is the density of θ before the data x are observed. By Bayes' theorem, we can construct the distribution of θ|x, which is called the posterior distribution of θ. We denote the posterior density of θ by p(θ|x).

By Bayes' theorem,

p(θ|x) = p(x|θ)π(θ) / ∫_Θ p(x|θ)π(θ) dθ,

where Θ denotes the parameter space of θ. The quantity

p(x) = ∫_Θ p(x|θ)π(θ) dθ

is the normalizing constant of the posterior distribution. For most inference problems, p(x) does not have a closed form. Bayesian inference about θ is primarily based on the posterior distribution of θ, p(θ|x).
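When p(x) has no closed form, the posterior can still be normalized numerically in low dimensions. A minimal grid-approximation sketch (my own illustration, not from the slides; the Binomial(10, θ) likelihood and Beta(2, 2) prior are arbitrary choices):

import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

x, N = 7, 10                                     # observed data (illustrative)
theta = np.linspace(1e-6, 1 - 1e-6, 10_000)      # grid over the parameter space
unnorm = stats.binom.pmf(x, N, theta) * stats.beta.pdf(theta, 2, 2)
p_x = trapezoid(unnorm, theta)                   # numerical normalizing constant p(x)
posterior = unnorm / p_x                         # posterior density values on the grid
post_mean = trapezoid(theta * posterior, theta)  # E(theta | x) by quadrature
print(p_x, post_mean)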

For example, one can compute various posterior summaries, such as the mean, median, mode, variance, and quantiles. For example, the posterior mean of θ is given by

E(θ|x) = ∫_Θ θ p(θ|x) dθ.

Example 1. Given θ, suppose x_1, x_2, ..., x_n are i.i.d. Binomial(1, θ), and θ ~ Beta(α, λ). The parameters of the prior distribution are often called the hyperparameters. Let us derive the posterior distribution of θ. Let x = (x_1, x_2, ..., x_n), and thus,

p(x|θ) = ∏_{i=1}^n p(x_i|θ) = ∏_{i=1}^n θ^{x_i} (1−θ)^{1−x_i} = θ^{Σx_i} (1−θ)^{n−Σx_i},

where Σx_i = Σ_{i=1}^n x_i. Also,

π(θ) = [Γ(α+λ) / (Γ(α)Γ(λ))] θ^{α−1} (1−θ)^{λ−1}.

Now, we can write the kernel of the posterior density as

p(θ|x) ∝ θ^{Σx_i} θ^{α−1} (1−θ)^{n−Σx_i} (1−θ)^{λ−1} = θ^{Σx_i+α−1} (1−θ)^{n−Σx_i+λ−1}.

Thus p(θ|x) ∝ θ^{Σx_i+α−1} (1−θ)^{n−Σx_i+λ−1}. We can recognize this kernel as a beta kernel with parameters (Σx_i + α, n − Σx_i + λ). Thus,

θ|x ~ Beta(Σx_i + α, n − Σx_i + λ),

and therefore

p(θ|x) = [Γ(α+n+λ) / (Γ(Σx_i+α) Γ(n−Σx_i+λ))] θ^{Σx_i+α−1} (1−θ)^{n−Σx_i+λ−1}.
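The conjugate update amounts to two additions, so posterior summaries come directly from the Beta(Σx_i + α, n − Σx_i + λ) distribution. A short sketch with illustrative hyperparameters and simulated Bernoulli data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, lam = 2.0, 3.0                            # hyperparameters (illustrative values)
x = rng.binomial(1, 0.6, size=50)                # simulated Binomial(1, theta) data
n, s = x.size, x.sum()
posterior = stats.beta(s + alpha, n - s + lam)   # theta | x
print(posterior.mean())                          # E(theta | x) = (s+alpha)/(n+alpha+lam)
print(posterior.interval(0.95))                  # central 95% posterior interval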

Remark. In deriving posterior densities, an often-used technique is to try to recognize the kernel of the posterior density of θ. This avoids direct computation of p(x) and saves a lot of time in derivations. If the kernel cannot be recognized, then p(x) must be computed directly. In this example we have

p(x) = p(x_1, ..., x_n) ∝ ∫_0^1 θ^{Σx_i+α−1} (1−θ)^{n−Σx_i+λ−1} dθ = Γ(Σx_i+α) Γ(n−Σx_i+λ) / Γ(α+n+λ).

Thus

p(x_1, ..., x_n) = [Γ(α+λ) / (Γ(α)Γ(λ))] [Γ(Σx_i+α) Γ(n−Σx_i+λ) / Γ(α+n+λ)]

for x_i = 0, 1 and i = 1, ..., n.

Suppose A_1, A_2, ... are events such that A_i ∩ A_j = ∅ for i ≠ j and ∪_{i=1}^∞ A_i = Ω, where Ω denotes the sample space. Let B denote an event in Ω. Then Bayes' theorem for events can be written as

P(A_i|B) = P(B|A_i)P(A_i) / Σ_{j=1}^∞ P(B|A_j)P(A_j).

P(A_i) is the prior probability of A_i, and P(A_i|B) is the posterior probability of A_i given that B has occurred.

Example 2. Bayes' theorem is often used in diagnostic tests for cancer. A young person was diagnosed as having a type of cancer that occurs extremely rarely in young people. Naturally, he was very upset. A friend told him that it was probably a mistake. His friend reasoned as follows. No medical test is perfect: there are always incidences of false positives and false negatives.

Let C stand for the event that he has cancer and let + stand for the event that an individual responds positively to the test. Assume P(C) = 1/1,000,000 = 10^{−6}, P(+|C) = .99, and P(+|C^c) = .01. (So only one per million people his age have the disease, and the test is extremely good relative to most medical tests, giving only 1% false positives and 1% false negatives.) Find the probability that he has cancer given that he has a positive response. (After you make this calculation you will not be surprised to learn that he did not have cancer.)

P(C|+) = P(+|C)P(C) / [P(+|C)P(C) + P(+|C^c)P(C^c)] = (.99)(10^{−6}) / [(.99)(10^{−6}) + (.01)(1 − 10^{−6})]
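The arithmetic is easy to check in a few lines (a quick sketch using the numbers on the slide):

p_C = 1e-6                 # prevalence P(C)
p_pos_given_C = 0.99       # sensitivity: 1% false negatives
p_pos_given_notC = 0.01    # 1% false positives
num = p_pos_given_C * p_C
print(num / (num + p_pos_given_notC * (1 - p_C)))   # about 9.9e-05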

P(C|+) = 9.9 × 10^{−7} / (9.9 × 10^{−7} + 9.99999 × 10^{−3}) ≈ 9.9 × 10^{−5}.

Example 3. Suppose x_1, ..., x_n is a random sample from N(µ, σ²).

i) Suppose σ² is known and µ ~ N(µ_0, σ_0²). The posterior density of µ is given by

p(µ|x) ∝ [∏_{i=1}^n p(x_i|µ, σ²)] π(µ) ∝ exp{−(1/(2σ²)) Σ(x_i − µ)²} exp{−(1/(2σ_0²)) (µ − µ_0)²}

∝ exp{−(1/2)[((nσ_0² + σ²)/(σ_0²σ²)) µ² − 2((σ_0² Σx_i + µ_0 σ²)/(σ_0²σ²)) µ]}
= exp{−(1/2) ((nσ_0² + σ²)/(σ_0²σ²)) [µ² − 2µ (σ_0² Σx_i + µ_0 σ²)/(nσ_0² + σ²)]}
∝ exp{−(1/2) ((nσ_0² + σ²)/(σ_0²σ²)) [µ − (σ_0² Σx_i + µ_0 σ²)/(nσ_0² + σ²)]²}.

We can recognize this as a normal kernel with mean

µ_post = (σ_0² Σx_i + µ_0 σ²)/(nσ_0² + σ²)

and variance

σ_post² = ((nσ_0² + σ²)/(σ_0²σ²))^{−1} = σ_0²σ²/(nσ_0² + σ²).

Thus

µ|x ~ N( (σ_0² Σx_i + µ_0 σ²)/(nσ_0² + σ²), σ_0²σ²/(nσ_0² + σ²) ).
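These two update formulas translate directly into code. A small sketch (all numbers illustrative):

import numpy as np

def normal_mean_posterior(x, sigma2, mu0, sigma02):
    # mu | x ~ N(mean_post, var_post) for known sigma^2 and prior mu ~ N(mu0, sigma0^2)
    n = len(x)
    var_post = sigma02 * sigma2 / (n * sigma02 + sigma2)
    mean_post = (sigma02 * np.sum(x) + mu0 * sigma2) / (n * sigma02 + sigma2)
    return mean_post, var_post

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=20)                # data with known sigma^2 = 1
print(normal_mean_posterior(x, 1.0, 0.0, 10.0))  # posterior shrinks xbar toward mu0 = 0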

ii) Suppose µ is known and σ² is unknown. Let τ = 1/σ²; τ is often called the precision parameter. Suppose τ ~ gamma(δ_0/2, γ_0/2), so that

π(τ) ∝ τ^{δ_0/2 − 1} exp(−τγ_0/2).

Let us derive the posterior distribution of τ:

p(τ|x) ∝ τ^{n/2} exp{−(τ/2) Σ(x_i − µ)²} τ^{δ_0/2 − 1} exp{−τγ_0/2}
∝ τ^{(n+δ_0)/2 − 1} exp{−(τ/2)(γ_0 + Σ(x_i − µ)²)}.

Thus

τ|x ~ gamma( (n + δ_0)/2, (γ_0 + Σ(x_i − µ)²)/2 ).

iii) Now suppose µ and σ² are both unknown. Suppose we specify the joint prior

π(µ, τ) = π(µ|τ) π(τ),

where

µ|τ ~ N(µ_0, τ^{−1} σ_0²),
τ ~ gamma(δ_0/2, γ_0/2).

The joint posterior density of (µ, τ) is given by

p(µ, τ|x) ∝ (τ^{n/2} exp{−(τ/2) Σ(x_i − µ)²}) (τ^{1/2} exp{−(τ/(2σ_0²)) (µ − µ_0)²}) (τ^{δ_0/2 − 1} exp{−τγ_0/2})
= τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)(γ_0 + (µ − µ_0)²/σ_0² + Σ(x_i − µ)²)}.

The joint posterior does not have a clearly recognizable form. Thus, we need to compute p(x) by brute force.

p(x) = ∫_0^∞ ∫_{−∞}^∞ τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)(γ_0 + (µ − µ_0)²/σ_0² + Σ(x_i − µ)²)} dµ dτ
= ∫_0^∞ ∫_{−∞}^∞ τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)(γ_0 + µ_0²/σ_0² + Σx_i² + µ²(n + 1/σ_0²) − 2µ(Σx_i + µ_0/σ_0²))} dµ dτ
= ∫_0^∞ τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)(γ_0 + µ_0²/σ_0² + Σx_i²)} ( ∫_{−∞}^∞ exp{−(τ/2)(µ²(n + 1/σ_0²) − 2µ(Σx_i + µ_0/σ_0²))} dµ ) dτ.

The integral with respect to µ can be evaluated by completing the square:

∫_{−∞}^∞ exp{−(τ(n + 1/σ_0²)/2) [µ − (Σx_i + µ_0/σ_0²)/(n + 1/σ_0²)]²} dµ · exp{(τ/2) (Σx_i + µ_0/σ_0²)²/(n + 1/σ_0²)}
= exp{(τ/2) (Σx_i + µ_0/σ_0²)²/(n + 1/σ_0²)} (2π)^{1/2} τ^{−1/2} (n + 1/σ_0²)^{−1/2}.

Now we need to evaluate

(2π)^{1/2} (n + 1/σ_0²)^{−1/2} ∫_0^∞ τ^{(n+δ_0)/2 − 1} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i²]} exp{(τ/2)[(Σx_i + µ_0/σ_0²)²/(n + 1/σ_0²)]} dτ
= (2π)^{1/2} (n + 1/σ_0²)^{−1/2} ∫_0^∞ τ^{(n+δ_0)/2 − 1} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i² − (Σx_i + µ_0/σ_0²)²/(n + 1/σ_0²)]} dτ.

= (2π)^{1/2} Γ((n+δ_0)/2) (n + 1/σ_0²)^{−1/2} [ (1/2)( γ_0 + µ_0²/σ_0² + Σx_i² − (Σx_i + µ_0/σ_0²)²/(n + 1/σ_0²) ) ]^{−(n+δ_0)/2}
≡ p*(x).

Thus,

p(x) = (2π)^{−(n+1)/2} σ_0^{−1} [ (γ_0/2)^{δ_0/2} / Γ(δ_0/2) ] p*(x).
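The closed form for p(x) is easy to get wrong, so a numerical check is reassuring. A verification sketch of my own (not from the slides), comparing the formula above against brute-force double integration for a small simulated data set; all hyperparameter values are arbitrary:

import numpy as np
from scipy import stats, integrate
from scipy.special import gammaln

rng = np.random.default_rng(2)
x = rng.normal(0.5, 1.0, size=5)            # small n keeps the quadrature stable
n, mu0, sigma02, delta0, gamma0 = x.size, 0.0, 4.0, 3.0, 3.0

def joint(mu, tau):
    # p(x | mu, tau) * pi(mu | tau) * pi(tau)
    lik = stats.norm.pdf(x, mu, 1/np.sqrt(tau)).prod()
    return (lik * stats.norm.pdf(mu, mu0, np.sqrt(sigma02/tau))
                * stats.gamma.pdf(tau, delta0/2, scale=2/gamma0))

p_num, _ = integrate.dblquad(joint, 0, np.inf, -np.inf, np.inf)   # brute force

c = n + 1/sigma02
S = gamma0 + mu0**2/sigma02 + (x**2).sum() - (x.sum() + mu0/sigma02)**2 / c
log_pstar = 0.5*np.log(2*np.pi) + gammaln((n + delta0)/2) - 0.5*np.log(c) \
            - (n + delta0)/2 * np.log(S/2)
log_p = -(n + 1)/2*np.log(2*np.pi) - 0.5*np.log(sigma02) \
        + delta0/2*np.log(gamma0/2) - gammaln(delta0/2) + log_pstar
print(p_num, np.exp(log_p))                 # the two numbers should agree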

The joint posterior density of (µ, τ)|x can also be obtained in this case by deriving p(µ, τ|x) = p(µ|τ, x) p(τ|x). Exercise: find p(µ|τ, x) and p(τ|x).

It is of great interest to find the marginal posterior distributions of µ and τ. First,

p(µ|x) = ∫_0^∞ p(µ, τ|x) dτ
∝ ∫_0^∞ τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i²]} exp{−(τ/2)[µ²(n + 1/σ_0²) − 2µ(Σx_i + µ_0/σ_0²)]} dτ

= ∫_0^∞ τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i²]}
exp{−(τ(n + 1/σ_0²)/2)[µ − (Σx_i + µ_0/σ_0²)/(n + 1/σ_0²)]²}
exp{(τ/2)[(Σx_i + µ_0/σ_0²)²/(n + 1/σ_0²)]} dτ.

Let a = (Σx_i + µ_0/σ_0²)/(n + 1/σ_0²). Then we can write the integral as

= ∫_0^∞ τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i² + (n + 1/σ_0²)(µ − a)² − (n + 1/σ_0²)a²]} dτ
= Γ((n+δ_0+1)/2) { (1/2)[γ_0 + µ_0²/σ_0² + Σx_i² + (n + 1/σ_0²)(µ − a)² − (n + 1/σ_0²)a²] }^{−(n+δ_0+1)/2}
∝ [1 + c(µ − a)²/(b − ca²)]^{−(n+δ_0+1)/2},

where c = n + 1/σ_0² and b = γ_0 + µ_0²/σ_0² + Σx_i². We recognize this kernel as that of a t distribution with location parameter a, dispersion parameter ( (n+δ_0)c/(b − ca²) )^{−1}, and n + δ_0 degrees of freedom.

Definition. Let y = (y_1, ..., y_p)′ be a p × 1 random vector. Then y is said to have a p-dimensional multivariate t distribution with d degrees of freedom, location parameter m, and p × p dispersion matrix Σ if y has density

p(y) = [ Γ((d+p)/2) / ( Γ(d/2) (πd)^{p/2} |Σ|^{1/2} ) ] [ 1 + (1/d)(y − m)′ Σ^{−1} (y − m) ]^{−(d+p)/2}.

We write this as y ~ S_p(d, m, Σ). In our problem, p = 1, d = n + δ_0, m = a, Σ^{−1} = (n+δ_0)c/(b − ca²), and Σ = ( (n+δ_0)c/(b − ca²) )^{−1}.

The marginal posterior distribution of τ is given by

p(τ|x) = ∫_{−∞}^∞ p(µ, τ|x) dµ
∝ τ^{(n+δ_0+1)/2 − 1} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i²]} exp{(τ/2)(n + 1/σ_0²)a²} ∫_{−∞}^∞ exp{−(τ(n + 1/σ_0²)/2)(µ − a)²} dµ
∝ τ^{(n+δ_0+1)/2 − 1} τ^{−1/2} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i² − (n + 1/σ_0²)a²]}
= τ^{(n+δ_0)/2 − 1} exp{−(τ/2)[γ_0 + µ_0²/σ_0² + Σx_i² − (n + 1/σ_0²)a²]}.

Thus,

τ|x ~ gamma( (n + δ_0)/2, (1/2)[γ_0 + µ_0²/σ_0² + Σx_i² − (n + 1/σ_0²)a²] ).
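Collecting the results, both marginal posteriors are available in closed form: µ|x is a Student t and τ|x is a gamma. A short sketch (illustrative data and hyperparameters):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(1.0, 2.0, size=30)
n, mu0, sigma02, delta0, gamma0 = x.size, 0.0, 4.0, 3.0, 3.0

c = n + 1/sigma02
b = gamma0 + mu0**2/sigma02 + (x**2).sum()
a = (x.sum() + mu0/sigma02) / c                  # location of the t marginal for mu
df = n + delta0                                  # degrees of freedom
scale = np.sqrt((b - c*a**2) / (df * c))         # square root of the dispersion
print(stats.t(df, loc=a, scale=scale).interval(0.95))   # 95% posterior interval for mu
tau_post = stats.gamma(df/2, scale=2/(b - c*a**2))      # tau | x
print(tau_post.mean())                           # posterior mean of the precision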

Remark. A t distribution can be obtained as a scale mixture of normals. That is, if x|τ ~ N_p(m, τ^{−1}Σ) and τ ~ gamma(δ_0/2, γ_0/2), then

p(x) = ∫_0^∞ p(x|τ) π(τ) dτ

is the S_p(δ_0, m, (γ_0/δ_0)Σ) density. That is, x ~ S_p(δ_0, m, (γ_0/δ_0)Σ).

Note:

p(x|τ) = (2π)^{−p/2} τ^{p/2} |Σ|^{−1/2} exp{−(τ/2)(x − m)′ Σ^{−1} (x − m)}.
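The scale-mixture representation is easy to confirm by Monte Carlo for p = 1. A sketch of my own (all constants arbitrary): draw τ from the gamma prior, then x|τ from the normal, and compare a tail probability with the claimed t distribution.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
m, Sigma, delta0, gamma0 = 0.0, 2.0, 5.0, 3.0
tau = rng.gamma(delta0/2, 2/gamma0, size=200_000)   # gamma(delta0/2, rate gamma0/2)
x = rng.normal(m, np.sqrt(Sigma/tau))               # one normal draw per tau
t_scale = np.sqrt(gamma0*Sigma/delta0)              # S_1(delta0, m, (gamma0/delta0)*Sigma)
print((x > 3).mean(), stats.t(delta0, loc=m, scale=t_scale).sf(3))   # should match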

Remark. Note that in Examples 1 and 3 i), ii), the posterior distribution is of the same family as the prior distribution. When the posterior distribution of a parameter is of the same family as the prior distribution, such prior distributions are called conjugate prior distributions. In Example 1, a beta prior on θ led to a beta posterior for θ. In Example 3 i), a normal prior for µ yielded a normal posterior for µ. In Example 3 ii), a gamma prior for τ yielded a gamma posterior for τ. More on conjugate priors later.

Advantages of Bayesian Methods

1. Interpretation. Having a distribution for your unknown parameter θ is easier to understand than a point estimate and a standard error. In addition, we consider the following example of a confidence interval. A 95% confidence interval for a population mean θ can be written as x̄ ± 1.96 s/√n. Thus P(a < θ < b) = 0.95, where a = x̄ − 1.96 s/√n and b = x̄ + 1.96 s/√n.

We have to rely on a repeated-sampling interpretation to make a probability statement like the one above. Thus, after observing the data, we cannot make a statement like "the true θ has a 95% chance of falling in x̄ ± 1.96 s/√n," although we are tempted to say this.
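A quick simulation (my own illustration) makes the repeated-sampling point concrete: the 95% figure describes the procedure across many hypothetical data sets, not any single computed interval.

import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 10.0, 50, 20_000      # true mean, sample size, replications
covered = 0
for _ in range(reps):
    x = rng.normal(theta, 3.0, size=n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)
    covered += (x.mean() - half < theta < x.mean() + half)
print(covered / reps)                  # close to 0.95 across repeated samples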

2. Bayesian Inference Obeys the Likelihood Principle. The likelihood principle: if two distinct sampling plans (designs) yield proportional likelihood functions for θ, then inference about θ should be identical under the two designs. Frequentist inference does not obey the likelihood principle, in general.

Example. Suppose in 12 independent tosses of a coin, 9 heads and 3 tails are observed. I wish to test the null hypothesis H_0: θ = 1/2 vs. H_1: θ > 1/2, where θ is the true probability of heads.

Consider the following choices for the likelihood function:

a) Binomial: n = 12 (fixed), x = number of heads. x ~ Binomial(12, θ) and the likelihood is

L_1(θ) = (n choose x) θ^x (1−θ)^{n−x} = (12 choose 9) θ^9 (1−θ)^3.

b) Negative binomial: n is not fixed; we flip until the third tail appears. Here x is the number of heads observed by the time the experiment ends, x ~ NegBinomial(r = 3, θ).

L_2(θ) = (r + x − 1 choose x) θ^x (1−θ)^r = (11 choose 9) θ^9 (1−θ)^3.

Note that L_1(θ) ∝ L_2(θ). From a Bayesian perspective, the posterior distribution of θ is the same under either design. That is,

p(θ|x) = L_1(θ)π(θ) / ∫ L_1(θ)π(θ) dθ = L_2(θ)π(θ) / ∫ L_2(θ)π(θ) dθ.

However, under the frequentist paradigm, inferences about θ are quite different under the two designs. The p-value based on the binomial likelihood is

P(x ≥ 9 | θ = 1/2) = Σ_{j=9}^{12} (12 choose j) θ^j (1−θ)^{12−j} = 0.073,

while for the negative binomial likelihood, the p-value is

P(x ≥ 9 | θ = 1/2) = Σ_{j=9}^∞ (2 + j choose j) θ^j (1−θ)^3 = 0.0327.

At level .05 the two designs lead to different decisions: we reject H_0 under design 2 but not under design 1.
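Both tail probabilities are one-liners with scipy (a quick check of the numbers above):

from scipy import stats

p_binom = stats.binom.sf(8, 12, 0.5)    # P(x >= 9), x ~ Binomial(12, 1/2)
p_negbin = stats.nbinom.sf(8, 3, 0.5)   # P(x >= 9), x ~ NegBinomial(r = 3, 1/2)
print(p_binom, p_negbin)                # approx 0.073 and 0.0327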

3. Bayesian Inference Does Not Lead to Absurd Results. Absurd results can be obtained when doing UMVUE estimation. Suppose x ~ Poisson(λ), and we want to estimate θ = e^{−2λ}, 0 < θ < 1. It can be shown that the UMVUE of θ is (−1)^x. Thus, if x is even the UMVUE of θ is 1, and if x is odd the UMVUE of θ is −1!!
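The estimator really is unbiased for e^{−2λ}, which is what makes the example striking. A quick Monte Carlo check (the λ value is arbitrary):

import numpy as np

rng = np.random.default_rng(6)
lam = 0.7
x = rng.poisson(lam, size=1_000_000)
print(((-1.0)**x).mean(), np.exp(-2*lam))   # E[(-1)^x] = exp(-2*lambda)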

4. Bayes' Theorem Is a Formula for Learning. Suppose you conduct an experiment and collect observations x_1, ..., x_n. Then

p(θ|x) = p(x|θ)π(θ) / ∫_Θ p(x|θ)π(θ) dθ,

where x = (x_1, ..., x_n). Suppose you collect an additional observation x_{n+1} in a new study. Then

p(θ|x, x_{n+1}) = p(x_{n+1}|θ)π(θ|x) / ∫_Θ p(x_{n+1}|θ)π(θ|x) dθ.

So your prior in the new study is the posterior from the previous one.
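For conjugate models the learning recursion is literally a running update of the hyperparameters. A sketch for Bernoulli data with a beta prior (prior and data invented for illustration):

from scipy import stats

alpha, lam = 1.0, 1.0              # Beta(1, 1) prior
data = [1, 0, 1, 1, 0, 1, 1, 1]
for xi in data:                    # each posterior becomes the next prior
    alpha, lam = alpha + xi, lam + (1 - xi)
print(stats.beta(alpha, lam).mean())   # same as the one-shot update with all the data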

5. Bayesian Inference Does Not Require Large-Sample Theory. With modern computing advances, exact posterior calculations can be carried out using Markov chain Monte Carlo (MCMC) methods. Bayesian methods do not require asymptotics for valid inference; thus small-sample Bayesian inference proceeds in the same way as if one had a large sample.

6. Bayesian Inference Often Has Frequentist Inference as a Special Case. Often one can obtain frequentist answers by choosing a uniform prior for the parameters, i.e., π(θ) ∝ 1, so that p(θ|x) ∝ L(θ). In such cases, frequentist answers can be obtained from the posterior distribution.
